Extended Kalman Filter

ExtendedKalmanFilter

Introduction and Overview

Implements an extended Kalman filter (EKF). For now, the best documentation is my free book, Kalman and Bayesian Filters in Python [1].

The test files in this directory also give you a basic idea of use, albeit without much description.

In brief, you will first construct this object, specifying the size of the state vector with dim_x and the size of the measurement vector with dim_z. These are mostly used to perform size checks when you assign values to the various matrices. For example, if you specify dim_z=2 and then try to assign a 3x3 matrix to R (the measurement noise matrix), you will get an assertion error because R should be 2x2. (If for whatever reason you need to alter the size of things midstream, assign directly to the underscore version of the matrix: your_filter._R = a_3x3_matrix.)

After construction the filter will have default matrices created for you, but you must specify the values for each. It is usually easiest to overwrite the whole matrix rather than assign to each element yourself. This will be clearer in the example below. All are of type numpy.array.
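
For example, here is a minimal construction sketch for a one-dimensional constant-velocity problem. The import path is an assumption based on the EKF.py location shown below, and the numbers are illustrative, not tuned values:

import numpy as np
from bayesian_filters.kalman import ExtendedKalmanFilter

# state is [position, velocity]; the sensor reports a single scalar
ekf = ExtendedKalmanFilter(dim_x=2, dim_z=1)

dt = 0.1
ekf.x = np.array([[0.0], [1.0]])            # initial state estimate
ekf.F = np.array([[1.0, dt],
                  [0.0, 1.0]])              # constant-velocity transition
ekf.P *= 50.0                               # inflate the default covariance
ekf.R = np.array([[5.0]])                   # 1x1, matching dim_z=1
ekf.Q = np.eye(2) * 0.1                     # process noise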

References

[1] Labbe, Roger. "Kalman and Bayesian Filters in Python".

github repo: https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python

read online: http://nbviewer.ipython.org/github/rlabbe/Kalman-and-Bayesian-Filters-in-Python/blob/master/table_of_contents.ipynb


API Reference

ExtendedKalmanFilter

Bases: object

Implements an extended Kalman filter (EKF). You are responsible for setting the various state variables to reasonable values; the defaults will not give you a functional filter.

You will have to set the following attributes after constructing this object for the filter to perform properly. Please note that there are various checks in place to ensure that you have made everything the 'correct' size. However, it is possible to provide incorrectly sized arrays such that the linear algebra cannot perform an operation. It can also fail silently: you can end up with matrices of a size that allows the linear algebra to work, but that are the wrong shape for the problem you are trying to solve.

Parameters:

dim_x : int, required
    Number of state variables for the Kalman filter. For example, if you are tracking the position and velocity of an object in two dimensions, dim_x would be 4.

    This is used to set the default size of P, Q, and u.

dim_z : int, required
    Number of measurement inputs. For example, if the sensor provides you with position in (x,y), dim_z would be 2.

Attributes:

x : numpy.array(dim_x, 1)
    State estimate vector

P : numpy.array(dim_x, dim_x)
    Covariance matrix

x_prior : numpy.array(dim_x, 1)
    Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read only.

P_prior : numpy.array(dim_x, dim_x)
    Prior (predicted) state covariance matrix. Read only.

x_post : numpy.array(dim_x, 1)
    Posterior (updated) state estimate. Read only.

P_post : numpy.array(dim_x, dim_x)
    Posterior (updated) state covariance matrix. Read only.

R : numpy.array(dim_z, dim_z)
    Measurement noise matrix

Q : numpy.array(dim_x, dim_x)
    Process noise matrix

F : numpy.array(dim_x, dim_x)
    State transition matrix

H : numpy.array(dim_z, dim_x)
    Measurement function

y : numpy.array
    Residual of the update step. Read only.

K : numpy.array(dim_x, dim_z)
    Kalman gain of the update step. Read only.

S : numpy.array(dim_z, dim_z)
    System uncertainty projected to measurement space. Read only.

z : numpy.ndarray
    Last measurement used in update(). Read only.

log_likelihood : float
    Log-likelihood of the last measurement. Read only.

likelihood : float
    Likelihood of the last measurement. Read only.

    Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms which multiply by this value, so by default we always return a number >= sys.float_info.min.

mahalanobis : float
    Mahalanobis distance of the innovation. E.g. 3 means the measurement was 3 standard deviations away from the predicted value. Read only.

Examples:

See my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
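
For a quick start, here is a hedged, self-contained sketch of a complete predict/update loop with a nonlinear range sensor. It is not taken from the book; the geometry (a target moving along a line, ranged by a sensor at height h) and all numbers are invented for illustration:

import numpy as np
from bayesian_filters.kalman import ExtendedKalmanFilter

h = 20.0  # sensor height above the target's line of motion (made up)

def HJacobian_at(x):
    # Jacobian of the slant-range measurement w.r.t. state [pos, vel]
    pos = x[0, 0]
    rng = np.sqrt(pos**2 + h**2)
    return np.array([[pos / rng, 0.0]])

def hx(x):
    # measurement that would correspond to state x: the slant range
    return np.array([[np.sqrt(x[0, 0]**2 + h**2)]])

ekf = ExtendedKalmanFilter(dim_x=2, dim_z=1)
dt = 1.0
ekf.x = np.array([[5.0], [1.0]])
ekf.F = np.array([[1.0, dt], [0.0, 1.0]])
ekf.P *= 10.0
ekf.R *= 10.0
ekf.Q *= 0.01

noise = np.random.default_rng(1)
for step in range(20):
    true_pos = 5.0 + step * dt
    z = np.sqrt(true_pos**2 + h**2) + noise.normal(0.0, 3.0)  # simulated range
    ekf.predict()
    ekf.update(np.array([[z]]), HJacobian_at, hx)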

Source code in bayesian_filters/kalman/EKF.py
class ExtendedKalmanFilter(object):
    """Implements an extended Kalman filter (EKF). You are responsible for
    setting the various state variables to reasonable values; the defaults
    will  not give you a functional filter.

    You will have to set the following attributes after constructing this
    object for the filter to perform properly. Please note that there are
    various checks in place to ensure that you have made everything the
    'correct' size. However, it is possible to provide incorrectly sized
    arrays such that the linear algebra can not perform an operation.
    It can also fail silently - you can end up with matrices of a size that
    allows the linear algebra to work, but are the wrong shape for the problem
    you are trying to solve.

    Parameters
    ----------

    dim_x : int
        Number of state variables for the Kalman filter. For example, if
        you are tracking the position and velocity of an object in two
        dimensions, dim_x would be 4.

        This is used to set the default size of P, Q, and u

    dim_z : int
        Number of measurement inputs. For example, if the sensor
        provides you with position in (x,y), dim_z would be 2.

    Attributes
    ----------
    x : numpy.array(dim_x, 1)
        State estimate vector

    P : numpy.array(dim_x, dim_x)
        Covariance matrix

    x_prior : numpy.array(dim_x, 1)
        Prior (predicted) state estimate. The *_prior and *_post attributes
        are for convenience; they store the prior and posterior of the
        current epoch. Read Only.

    P_prior : numpy.array(dim_x, dim_x)
        Prior (predicted) state covariance matrix. Read Only.

    x_post : numpy.array(dim_x, 1)
        Posterior (updated) state estimate. Read Only.

    P_post : numpy.array(dim_x, dim_x)
        Posterior (updated) state covariance matrix. Read Only.

    R : numpy.array(dim_z, dim_z)
        Measurement noise matrix

    Q : numpy.array(dim_x, dim_x)
        Process noise matrix

    F : numpy.array(dim_x, dim_x)
        State Transition matrix

    H : numpy.array(dim_z, dim_x)
        Measurement function

    y : numpy.array
        Residual of the update step. Read only.

    K : numpy.array(dim_x, dim_z)
        Kalman gain of the update step. Read only.

    S : numpy.array(dim_z, dim_z)
        System uncertainty projected to measurement space. Read only.

    z : ndarray
        Last measurement used in update(). Read only.

    log_likelihood : float
        log-likelihood of the last measurement. Read only.

    likelihood : float
        Likelihood of the last measurement. Read only.

        Computed from the log-likelihood. The log-likelihood can be very
        small, meaning a large negative value such as -28000. Taking the
        exp() of that results in 0.0, which can break typical algorithms
        which multiply by this value, so by default we always return a
        number >= sys.float_info.min.

    mahalanobis : float
        mahalanobis distance of the innovation. E.g. 3 means measurement
        was 3 standard deviations away from the predicted value.

        Read only.

    Examples
    --------

    See my book Kalman and Bayesian Filters in Python
    https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
    """

    def __init__(self, dim_x, dim_z, dim_u=0):
        self.dim_x = dim_x
        self.dim_z = dim_z
        self.dim_u = dim_u

        self.x = zeros((dim_x, 1))  # state
        self.P = eye(dim_x)  # uncertainty covariance
        self.B = 0  # control transition matrix
        self.F = np.eye(dim_x)  # state transition matrix
        self.R = eye(dim_z)  # state uncertainty
        self.Q = eye(dim_x)  # process uncertainty
        self.y = zeros((dim_z, 1))  # residual

        z = np.array([None] * self.dim_z)
        self.z = reshape_z(z, self.dim_z, self.x.ndim)

        # gain and residual are computed during the innovation step. We
        # save them so that in case you want to inspect them for various
        # purposes
        self.K = np.zeros(self.x.shape)  # kalman gain
        self.y = zeros((dim_z, 1))
        self.S = np.zeros((dim_z, dim_z))  # system uncertainty
        self.SI = np.zeros((dim_z, dim_z))  # inverse system uncertainty

        # identity matrix. Do not alter this.
        self._I = np.eye(dim_x)

        self._log_likelihood = log(sys.float_info.min)
        self._likelihood = sys.float_info.min
        self._mahalanobis = None

        # these will always be a copy of x,P after predict() is called
        self.x_prior = self.x.copy()
        self.P_prior = self.P.copy()

        # these will always be a copy of x,P after update() is called
        self.x_post = self.x.copy()
        self.P_post = self.P.copy()

        # Optional function attributes for state transition and measurement
        # These can be set after initialization if needed for specific applications
        self.fx = None  # state transition function
        self.hx = None  # measurement function
        self.H = None  # Jacobian function (can be set as attribute or passed to methods)

    def predict_update(self, z, HJacobian, Hx, args=(), hx_args=(), u=0):
        """Performs the predict/update innovation of the extended Kalman
        filter.

        Parameters
        ----------

        z : np.array
            measurement for this step.
            If `None`, only the predict step is performed.

        HJacobian : function
           function which computes the Jacobian of the H matrix (measurement
            function). Takes state variable (self.x) as input, along with the
            optional arguments in args, and returns H.

        Hx : function
            function which takes as input the state variable (self.x) along
            with the optional arguments in hx_args, and returns the measurement
            that would correspond to that state.

        args : tuple, optional, default ()
            arguments to be passed into HJacobian after the required state
            variable.

        hx_args : tuple, optional, default ()
            arguments to be passed into Hx after the required state
            variable.

        u : np.array or scalar
            optional control vector input to the filter.
        """
        # pylint: disable=too-many-locals

        if not isinstance(args, tuple):
            args = (args,)

        if not isinstance(hx_args, tuple):
            hx_args = (hx_args,)

        if np.isscalar(z) and self.dim_z == 1:
            z = np.asarray([z], float)

        F = self.F
        B = self.B
        P = self.P
        Q = self.Q
        R = self.R
        x = self.x

        H = HJacobian(x, *args)

        # predict step
        x = dot(F, x) + dot(B, u)
        P = dot(F, P).dot(F.T) + Q

        # save prior
        self.x_prior = np.copy(self.x)
        self.P_prior = np.copy(self.P)

        # update step
        PHT = dot(P, H.T)
        self.S = dot(H, PHT) + R
        self.SI = linalg.inv(self.S)
        self.K = dot(PHT, self.SI)

        self.y = z - Hx(x, *hx_args)
        self.x = x + dot(self.K, self.y)

        I_KH = self._I - dot(self.K, H)
        self.P = dot(I_KH, P).dot(I_KH.T) + dot(self.K, R).dot(self.K.T)

        # save measurement and posterior state
        self.z = deepcopy(z)
        self.x_post = self.x.copy()
        self.P_post = self.P.copy()

        # set to None to force recompute
        self._log_likelihood = None
        self._likelihood = None
        self._mahalanobis = None

    def update(self, z, HJacobian, Hx, R=None, args=(), hx_args=(), residual=np.subtract):
        """Performs the update innovation of the extended Kalman filter.

        Parameters
        ----------

        z : np.array
            measurement for this step.
            If `None`, posterior is not computed

        HJacobian : function
           function which computes the Jacobian of the H matrix (measurement
            function). Takes state variable (self.x) as input, returns H.

        Hx : function
            function which takes as input the state variable (self.x) along
            with the optional arguments in hx_args, and returns the measurement
            that would correspond to that state.

        R : np.array, scalar, or None
            Optionally provide R to override the measurement noise for this
            one call, otherwise  self.R will be used.

        args : tuple, optional, default ()
            arguments to be passed into HJacobian after the required state
            variable. For robot localization you might need to pass in
            information about the map and time of day, so you might have
            `args=(map_data, time)`, where the signature of HJacobian will
            be `def HJacobian(x, map, t)`

        hx_args : tuple, optional, default ()
            arguments to be passed into Hx function after the required state
            variable.

        residual : function (z, z2), optional
            Optional function that computes the residual (difference) between
            the two measurement vectors. If you do not provide this, then the
            built in minus operator will be used. You will normally want to use
            the built in unless your residual computation is nonlinear (for
            example, if they are angles)
        """

        if z is None:
            self.z = np.array([[None] * self.dim_z]).T
            self.x_post = self.x.copy()
            self.P_post = self.P.copy()
            return

        if not isinstance(args, tuple):
            args = (args,)

        if not isinstance(hx_args, tuple):
            hx_args = (hx_args,)

        if R is None:
            R = self.R
        elif np.isscalar(R):
            R = eye(self.dim_z) * R

        if np.isscalar(z) and self.dim_z == 1:
            z = np.asarray([z], float)

        H = HJacobian(self.x, *args)

        PHT = dot(self.P, H.T)
        self.S = dot(H, PHT) + R
        self.SI = linalg.inv(self.S)
        self.K = PHT.dot(self.SI)

        hx = Hx(self.x, *hx_args)
        self.y = residual(z, hx)
        self.x = self.x + dot(self.K, self.y)

        # P = (I-KH)P(I-KH)' + KRK' is more numerically stable
        # and works for non-optimal K vs the equation
        # P = (I-KH)P usually seen in the literature.
        I_KH = self._I - dot(self.K, H)
        self.P = dot(I_KH, self.P).dot(I_KH.T) + dot(self.K, R).dot(self.K.T)

        # set to None to force recompute
        self._log_likelihood = None
        self._likelihood = None
        self._mahalanobis = None

        # save measurement and posterior state
        self.z = deepcopy(z)
        self.x_post = self.x.copy()
        self.P_post = self.P.copy()

    def predict_x(self, u=0):
        """
        Predicts the next state of X. If you need to
        compute the next state yourself, override this function. You would
        need to do this, for example, if the usual Taylor expansion to
        generate F is not providing accurate results for you.
        """
        self.x = dot(self.F, self.x) + dot(self.B, u)

    def predict(self, u=0):
        """
        Predict next state (prior) using the Kalman filter state propagation
        equations.

        Parameters
        ----------

        u : np.array
            Optional control vector. If non-zero, it is multiplied by B
            to create the control input into the system.
        """

        self.predict_x(u)
        self.P = dot(self.F, self.P).dot(self.F.T) + self.Q

        # save prior
        self.x_prior = np.copy(self.x)
        self.P_prior = np.copy(self.P)

    @property
    def log_likelihood(self):
        """
        log-likelihood of the last measurement.
        """

        if self._log_likelihood is None:
            self._log_likelihood = logpdf(x=self.y, cov=self.S)
        return self._log_likelihood

    @property
    def likelihood(self):
        """
        Computed from the log-likelihood. The log-likelihood can be very
        small,  meaning a large negative value such as -28000. Taking the
        exp() of that results in 0.0, which can break typical algorithms
        which multiply by this value, so by default we always return a
        number >= sys.float_info.min.
        """
        if self._likelihood is None:
            self._likelihood = exp(self.log_likelihood)
            if self._likelihood == 0:
                self._likelihood = sys.float_info.min
        return self._likelihood

    @property
    def mahalanobis(self):
        """
        Mahalanobis distance of innovation. E.g. 3 means measurement
        was 3 standard deviations away from the predicted value.

        Returns
        -------
        mahalanobis : float
        """
        if self._mahalanobis is None:
            self._mahalanobis = sqrt(dot(dot(self.y.T, self.SI), self.y).item())
        return self._mahalanobis

    def __repr__(self):
        return "\n".join(
            [
                "KalmanFilter object",
                pretty_str("x", self.x),
                pretty_str("P", self.P),
                pretty_str("x_prior", self.x_prior),
                pretty_str("P_prior", self.P_prior),
                pretty_str("F", self.F),
                pretty_str("Q", self.Q),
                pretty_str("R", self.R),
                pretty_str("K", self.K),
                pretty_str("y", self.y),
                pretty_str("S", self.S),
                pretty_str("likelihood", self.likelihood),
                pretty_str("log-likelihood", self.log_likelihood),
                pretty_str("mahalanobis", self.mahalanobis),
            ]
        )

likelihood property

Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms which multiply by this value, so by default we always return a number >= sys.float_info.min.

log_likelihood property

log-likelihood of the last measurement.

mahalanobis property

Mahalanobis distance of innovation. E.g. 3 means measurement was 3 standard deviations away from the predicted value.

Returns:

mahalanobis : float
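
For instance, the distance can be checked after each update to flag suspect measurements. A short sketch, reusing the HJacobian_at and hx helpers from the example above (the 3-sigma threshold is a conventional choice, not something the library mandates):

ekf.update(z, HJacobian_at, hx)
if ekf.mahalanobis > 3.0:
    # innovation was more than 3 standard deviations from the prediction;
    # likely an outlier or a mistuned filter
    print("suspect measurement:", ekf.z.ravel())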

predict(u=0)

Predict next state (prior) using the Kalman filter state propagation equations.

Parameters:

u : np.array, default 0
    Optional control vector. If non-zero, it is multiplied by B to create the control input into the system.
Source code in bayesian_filters/kalman/EKF.py
def predict(self, u=0):
    """
    Predict next state (prior) using the Kalman filter state propagation
    equations.

    Parameters
    ----------

    u : np.array
        Optional control vector. If non-zero, it is multiplied by B
        to create the control input into the system.
    """

    self.predict_x(u)
    self.P = dot(self.F, self.P).dot(self.F.T) + self.Q

    # save prior
    self.x_prior = np.copy(self.x)
    self.P_prior = np.copy(self.P)
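
Because predict() reads self.F on every call, an EKF with a nonlinear transition can re-linearize before each step. A minimal sketch, where state_jacobian_at, measurements, and dt are hypothetical user-supplied pieces:

for z in measurements:
    ekf.F = state_jacobian_at(ekf.x, dt)  # hypothetical: Jacobian of the transition at the current estimate
    ekf.predict()
    ekf.update(z, HJacobian_at, hx)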

predict_update(z, HJacobian, Hx, args=(), hx_args=(), u=0)

Performs the predict/update innovation of the extended Kalman filter.

Parameters:

z : np.array, required
    measurement for this step. If None, only the predict step is performed.

HJacobian : function, required
    function which computes the Jacobian of the H matrix (measurement function). Takes the state variable (self.x) as input, along with the optional arguments in args, and returns H.

Hx : function, required
    function which takes as input the state variable (self.x) along with the optional arguments in hx_args, and returns the measurement that would correspond to that state.

args : tuple, optional, default ()
    arguments to be passed into HJacobian after the required state variable.

hx_args : tuple, optional, default ()
    arguments to be passed into Hx after the required state variable.

u : np.array or scalar, default 0
    optional control vector input to the filter.
Source code in bayesian_filters/kalman/EKF.py
def predict_update(self, z, HJacobian, Hx, args=(), hx_args=(), u=0):
    """Performs the predict/update innovation of the extended Kalman
    filter.

    Parameters
    ----------

    z : np.array
        measurement for this step.
        If `None`, only the predict step is performed.

    HJacobian : function
       function which computes the Jacobian of the H matrix (measurement
        function). Takes state variable (self.x) as input, along with the
        optional arguments in args, and returns H.

    Hx : function
        function which takes as input the state variable (self.x) along
        with the optional arguments in hx_args, and returns the measurement
        that would correspond to that state.

    args : tuple, optional, default ()
        arguments to be passed into HJacobian after the required state
        variable.

    hx_args : tuple, optional, default ()
        arguments to be passed into Hx after the required state
        variable.

    u : np.array or scalar
        optional control vector input to the filter.
    """
    # pylint: disable=too-many-locals

    if not isinstance(args, tuple):
        args = (args,)

    if not isinstance(hx_args, tuple):
        hx_args = (hx_args,)

    if np.isscalar(z) and self.dim_z == 1:
        z = np.asarray([z], float)

    F = self.F
    B = self.B
    P = self.P
    Q = self.Q
    R = self.R
    x = self.x

    H = HJacobian(x, *args)

    # predict step
    x = dot(F, x) + dot(B, u)
    P = dot(F, P).dot(F.T) + Q

    # save prior
    self.x_prior = np.copy(self.x)
    self.P_prior = np.copy(self.P)

    # update step
    PHT = dot(P, H.T)
    self.S = dot(H, PHT) + R
    self.SI = linalg.inv(self.S)
    self.K = dot(PHT, self.SI)

    self.y = z - Hx(x, *hx_args)
    self.x = x + dot(self.K, self.y)

    I_KH = self._I - dot(self.K, H)
    self.P = dot(I_KH, P).dot(I_KH.T) + dot(self.K, R).dot(self.K.T)

    # save measurement and posterior state
    self.z = deepcopy(z)
    self.x_post = self.x.copy()
    self.P_post = self.P.copy()

    # set to None to force recompute
    self._log_likelihood = None
    self._likelihood = None
    self._mahalanobis = None
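
The args and hx_args tuples are forwarded after the state, which lets one measurement model serve several conditions. A hedged sketch with a hypothetical time-dependent sensor gain:

# hypothetical measurement model whose gain drifts with time t
def HJacobian_at(x, t):
    return np.array([[1.0 + 0.1 * t, 0.0]])

def hx(x, t):
    return np.array([[(1.0 + 0.1 * t) * x[0, 0]]])

t = 0.5
ekf.predict_update(z, HJacobian_at, hx, args=(t,), hx_args=(t,))

A bare value such as args=t also works: as the source above shows, non-tuple arguments are wrapped into a tuple internally.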

predict_x(u=0)

Predicts the next state of X. If you need to compute the next state yourself, override this function. You would need to do this, for example, if the usual Taylor expansion to generate F is not providing accurate results for you.

Source code in bayesian_filters/kalman/EKF.py
def predict_x(self, u=0):
    """
    Predicts the next state of X. If you need to
    compute the next state yourself, override this function. You would
    need to do this, for example, if the usual Taylor expansion to
    generate F is not providing accurate results for you.
    """
    self.x = dot(self.F, self.x) + dot(self.B, u)
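
A sketch of such an override for a pendulum, propagating the state through the nonlinear dynamics directly. The dynamics and constants here are invented for illustration; note that F should still be set to the Jacobian of this transition so that predict() propagates the covariance consistently:

import numpy as np
from bayesian_filters.kalman import ExtendedKalmanFilter

class PendulumEKF(ExtendedKalmanFilter):
    def predict_x(self, u=0):
        # propagate the state through the nonlinear dynamics instead of F @ x
        theta, omega = self.x[0, 0], self.x[1, 0]
        dt, g, length = 0.01, 9.81, 1.0
        self.x = np.array([[theta + omega * dt],
                           [omega - (g / length) * np.sin(theta) * dt]])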

update(z, HJacobian, Hx, R=None, args=(), hx_args=(), residual=np.subtract)

Performs the update innovation of the extended Kalman filter.

Parameters:

z : np.array, required
    measurement for this step. If None, the posterior is not computed.

HJacobian : function, required
    function which computes the Jacobian of the H matrix (measurement function). Takes the state variable (self.x) as input, returns H.

Hx : function, required
    function which takes as input the state variable (self.x) along with the optional arguments in hx_args, and returns the measurement that would correspond to that state.

R : np.array, scalar, or None, default None
    Optionally provide R to override the measurement noise for this one call, otherwise self.R will be used.

args : tuple, optional, default ()
    arguments to be passed into HJacobian after the required state variable. For robot localization you might need to pass in information about the map and time of day, so you might have args=(map_data, time), where the signature of HJacobian will be def HJacobian(x, map, t).

hx_args : tuple, optional, default ()
    arguments to be passed into Hx function after the required state variable.

residual : function (z, z2), optional, default np.subtract
    Optional function that computes the residual (difference) between the two measurement vectors. If you do not provide this, then the built-in minus operator will be used. You will normally want to use the built-in unless your residual computation is nonlinear (for example, if they are angles).
Source code in bayesian_filters/kalman/EKF.py
def update(self, z, HJacobian, Hx, R=None, args=(), hx_args=(), residual=np.subtract):
    """Performs the update innovation of the extended Kalman filter.

    Parameters
    ----------

    z : np.array
        measurement for this step.
        If `None`, posterior is not computed

    HJacobian : function
       function which computes the Jacobian of the H matrix (measurement
        function). Takes state variable (self.x) as input, returns H.

    Hx : function
        function which takes as input the state variable (self.x) along
        with the optional arguments in hx_args, and returns the measurement
        that would correspond to that state.

    R : np.array, scalar, or None
        Optionally provide R to override the measurement noise for this
        one call, otherwise  self.R will be used.

    args : tuple, optional, default ()
        arguments to be passed into HJacobian after the required state
        variable. For robot localization you might need to pass in
        information about the map and time of day, so you might have
        `args=(map_data, time)`, where the signature of HJacobian will
        be `def HJacobian(x, map, t)`

    hx_args : tuple, optional, default ()
        arguments to be passed into Hx function after the required state
        variable.

    residual : function (z, z2), optional
        Optional function that computes the residual (difference) between
        the two measurement vectors. If you do not provide this, then the
        built in minus operator will be used. You will normally want to use
        the built in unless your residual computation is nonlinear (for
        example, if they are angles)
    """

    if z is None:
        self.z = np.array([[None] * self.dim_z]).T
        self.x_post = self.x.copy()
        self.P_post = self.P.copy()
        return

    if not isinstance(args, tuple):
        args = (args,)

    if not isinstance(hx_args, tuple):
        hx_args = (hx_args,)

    if R is None:
        R = self.R
    elif np.isscalar(R):
        R = eye(self.dim_z) * R

    if np.isscalar(z) and self.dim_z == 1:
        z = np.asarray([z], float)

    H = HJacobian(self.x, *args)

    PHT = dot(self.P, H.T)
    self.S = dot(H, PHT) + R
    self.SI = linalg.inv(self.S)
    self.K = PHT.dot(self.SI)

    hx = Hx(self.x, *hx_args)
    self.y = residual(z, hx)
    self.x = self.x + dot(self.K, self.y)

    # P = (I-KH)P(I-KH)' + KRK' is more numerically stable
    # and works for non-optimal K vs the equation
    # P = (I-KH)P usually seen in the literature.
    I_KH = self._I - dot(self.K, H)
    self.P = dot(I_KH, self.P).dot(I_KH.T) + dot(self.K, R).dot(self.K.T)

    # set to None to force recompute
    self._log_likelihood = None
    self._likelihood = None
    self._mahalanobis = None

    # save measurement and posterior state
    self.z = deepcopy(z)
    self.x_post = self.x.copy()
    self.P_post = self.P.copy()
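
As the residual parameter's description notes, plain subtraction is wrong for angular measurements. A hedged sketch of an angle-aware residual (residual_angle is defined here for illustration; it is not provided by the library):

import numpy as np

def residual_angle(z, hx):
    # subtract, then wrap the angular difference into [-pi, pi)
    y = z - hx
    return (y + np.pi) % (2 * np.pi) - np.pi

ekf.update(z, HJacobian_at, hx, residual=residual_angle)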