Kalman Filter Module¶
The kalman module contains various Kalman filter implementations.
KalmanFilter¶
KalmanFilter
¶
Bases: object
Implements a Kalman filter. You are responsible for setting the various state variables to reasonable values; the defaults will not give you a functional filter.
For now the best documentation is my free book Kalman and Bayesian Filters in Python [2]_. The test files in this directory also give you a basic idea of use, albeit without much description.
In brief, you will first construct this object, specifying the size of the state vector with dim_x and the size of the measurement vector with dim_z. These are mostly used to perform size checks when you assign values to the various matrices. For example, if you specified dim_z=2 and then try to assign a 3x3 matrix to R (the measurement noise matrix), you will get an assert exception because R should be 2x2. (If for whatever reason you need to alter the size of things midstream, just use the underscore version of the matrices to assign directly: your_filter._R = a_3x3_matrix.)
After construction the filter will have default matrices created for you, but you must specify the values for each. It’s usually easiest to just overwrite them rather than assign to each element yourself. This will be clearer in the example below. All are of type numpy.array.
Examples:
Here is a filter that tracks position and velocity using a sensor that only reads position.
First construct the object with the required dimensionality. Here the state
(dim_x) has 2 coefficients (position and velocity), and the measurement
(dim_z) has one. In FilterPy x is the state, z is the measurement.
.. code::
import numpy as np
from bayesian_filters.kalman import KalmanFilter
f = KalmanFilter(dim_x=2, dim_z=1)
Assign the initial value for the state (position and velocity). You can do this with a two dimensional array like so:
.. code::
f.x = np.array([[2.],    # position
                [0.]])   # velocity
or just use a one dimensional array, which I prefer doing.
.. code::
f.x = np.array([2., 0.])
Define the state transition matrix:
.. code::
f.F = np.array([[1., 1.],
                [0., 1.]])
Define the measurement function. Here we need to convert a position-velocity vector into just a position vector, so we use:
.. code::
f.H = np.array([[1., 0.]])
Define the state's covariance matrix P.
.. code::
f.P = np.array([[1000.,    0.],
                [   0., 1000.]])
Now assign the measurement noise. Here the dimension is 1x1, so I can use a scalar
.. code::
f.R = 5
I could have done this instead:
.. code::
f.R = np.array([[5.]])
Note that this must be a 2 dimensional array.
Finally, I will assign the process noise. Here I will take advantage of another FilterPy library function:
.. code::
from bayesian_filters.common import Q_discrete_white_noise
f.Q = Q_discrete_white_noise(dim=2, dt=0.1, var=0.13)
Now just perform the standard predict/update loop:
.. code::
while some_condition_is_true:
    z = get_sensor_reading()
    f.predict()
    f.update(z)
    do_something_with_estimate(f.x)
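Putting the steps together, here is a complete, runnable sketch of the example above. The sensor loop is simulated with NumPy noise, since get_sensor_reading(), some_condition_is_true, and do_something_with_estimate() are placeholders.
.. code::
import numpy as np
from bayesian_filters.kalman import KalmanFilter
from bayesian_filters.common import Q_discrete_white_noise

# build and configure the filter exactly as described above
f = KalmanFilter(dim_x=2, dim_z=1)
f.x = np.array([2., 0.])                   # position, velocity
f.F = np.array([[1., 1.],
                [0., 1.]])                 # constant-velocity model
f.H = np.array([[1., 0.]])                 # we only measure position
f.P *= 1000.                               # large initial uncertainty
f.R = 5                                    # 1x1 measurement noise, scalar is fine
f.Q = Q_discrete_white_noise(dim=2, dt=0.1, var=0.13)

rng = np.random.default_rng(1)
for t in range(20):
    z = t + rng.normal(scale=np.sqrt(5.))  # simulated position reading
    f.predict()
    f.update(z)
    print(f.x)                             # current [position, velocity] estimate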
Procedural Form
This module also contains stand alone functions to perform Kalman filtering. Use these if you are not a fan of objects.
Example
.. code::
while True:
    z, R = read_sensor()
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, z, R, H)
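Here is a minimal, runnable sketch of the procedural form. It assumes the stand-alone predict and update functions are importable from bayesian_filters.kalman, and simulates read_sensor() with NumPy noise.
.. code::
import numpy as np
from bayesian_filters.kalman import predict, update   # stand-alone functions (assumed import path)
from bayesian_filters.common import Q_discrete_white_noise

x = np.array([2., 0.])                # state: position, velocity
P = np.eye(2) * 1000.                 # state covariance
F = np.array([[1., 1.],
              [0., 1.]])              # state transition
H = np.array([[1., 0.]])              # measurement function
Q = Q_discrete_white_noise(dim=2, dt=0.1, var=0.13)
R = np.array([[5.]])                  # measurement noise

rng = np.random.default_rng(7)
for t in range(20):
    z = t + rng.normal(scale=np.sqrt(5.))   # simulated read_sensor()
    x, P = predict(x, P, F, Q)
    x, P = update(x, P, z, R, H)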
See my book Kalman and Bayesian Filters in Python [2]_.
You will have to set the following attributes after constructing this object for the filter to perform properly. Please note that there are various checks in place to ensure that you have made everything the 'correct' size. However, it is possible to provide incorrectly sized arrays such that the linear algebra can not perform an operation. It can also fail silently - you can end up with matrices of a size that allows the linear algebra to work, but are the wrong shape for the problem you are trying to solve.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dim_x | int | Number of state variables for the Kalman filter. For example, if you are tracking the position and velocity of an object in two dimensions, dim_x would be 4. This is used to set the default size of P, Q, and u. | required |
| dim_z | int | Number of measurement inputs. For example, if the sensor provides you with position in (x,y), dim_z would be 2. | required |
| dim_u | int (optional) | Size of the control input, if it is being used. Default value of 0 indicates it is not used. | 0 |
Attributes:
| Name | Type | Description |
|---|---|---|
| x | array(dim_x, 1) | Current state estimate. Any call to update() or predict() updates this variable. |
| P | array(dim_x, dim_x) | Current state covariance matrix. Any call to update() or predict() updates this variable. |
| x_prior | array(dim_x, 1) | Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read Only. |
| P_prior | array(dim_x, dim_x) | Prior (predicted) state covariance matrix. Read Only. |
| x_post | array(dim_x, 1) | Posterior (updated) state estimate. Read Only. |
| P_post | array(dim_x, dim_x) | Posterior (updated) state covariance matrix. Read Only. |
| z | array | Last measurement used in update(). Read only. |
| R | array(dim_z, dim_z) | Measurement noise covariance matrix. Also known as the observation covariance. |
| Q | array(dim_x, dim_x) | Process noise covariance matrix. Also known as the transition covariance. |
| F | array | State transition matrix. Also known as A in some formulations. |
| H | array(dim_z, dim_x) | Measurement function. Also known as the observation matrix, or as C. |
| y | array | Residual of the update step. Read only. |
| K | array(dim_x, dim_z) | Kalman gain of the update step. Read only. |
| S | array | System uncertainty (P projected into measurement space). Read only. |
| SI | array | Inverse system uncertainty. Read only. |
| log_likelihood | float | Log-likelihood of the last measurement. Read only. |
| likelihood | float | Likelihood of the last measurement. Read only. Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms that multiply by this value, so by default we always return a number >= sys.float_info.min. |
| mahalanobis | float | Mahalanobis distance of the innovation. Read only. |
| inv | function, default numpy.linalg.inv | If you prefer another inverse function, such as the Moore-Penrose pseudo-inverse, set it to that instead: kf.inv = np.linalg.pinv. This is only used to invert self.S. If you know it is diagonal, you might choose to set it to bayesian_filters.common.inv_diagonal, which is several times faster than numpy.linalg.inv for diagonal matrices. |
| alpha | float | Fading memory setting. 1.0 gives the normal Kalman filter, and values slightly larger than 1.0 (such as 1.02) give a fading memory effect: previous measurements have less influence on the filter's estimates. This formulation of the fading memory filter (there are many) is due to Dan Simon [1]_. |
References
.. [1] Dan Simon. "Optimal State Estimation." John Wiley & Sons. p. 208-212. (2006)
.. [2] Roger Labbe. "Kalman and Bayesian Filters in Python" https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
alpha
property
writable
¶
Fading memory setting. 1.0 gives the normal Kalman filter, and values slightly larger than 1.0 (such as 1.02) give a fading memory effect - previous measurements have less influence on the filter's estimates. This formulation of the Fading memory filter (there are many) is due to Dan Simon [1]_.
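For example, to give a standard filter a slight fading-memory behavior you might set (the exact value is problem dependent):
.. code::
f = KalmanFilter(dim_x=2, dim_z=1)
# ... assign F, H, R, Q as in the class example ...
f.alpha = 1.02   # values > 1.0 discount older measurements; 1.0 is the standard filter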
likelihood
property
¶
Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms which multiply by this value, so by default we always return a number >= sys.float_info.min.
log_likelihood
property
¶
log-likelihood of the last measurement.
mahalanobis
property
¶
" Mahalanobis distance of measurement. E.g. 3 means measurement was 3 standard deviations away from the predicted value.
Returns:
| Name | Type | Description |
|---|---|---|
| mahalanobis | float | |
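One common use is simple outlier flagging after an update (f is a configured filter, z a new measurement); the 3-sigma threshold below is an arbitrary illustrative choice, not something prescribed by the library.
.. code::
f.predict()
f.update(z)
if f.mahalanobis > 3.0:
    # the measurement fell more than 3 standard deviations from the
    # prediction; treat it as a possible outlier
    print('suspicious measurement:', f.z)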
batch_filter(zs, Fs=None, Qs=None, Hs=None, Rs=None, Bs=None, us=None, update_first=False, saver=None)
¶
Batch processes a sequence of measurements.
Parameters
zs : list-like
    list of measurements at each time step self.dt. Missing
    measurements must be represented by None.
Fs : None, list-like, default=None
    optional value or list of values to use for the state transition
    matrix F. If Fs is None then self.F is used for all epochs.
    Otherwise it must contain a list-like list of F's, one for each
    epoch. This allows you to have varying F per epoch.
Qs : None, np.array or list-like, default=None
    optional value or list of values to use for the process error
    covariance Q. If Qs is None then self.Q is used for all epochs.
    Otherwise it must contain a list-like list of Q's, one for each
    epoch. This allows you to have varying Q per epoch.
Hs : None, np.array or list-like, default=None
    optional list of values to use for the measurement matrix H.
    If Hs is None then self.H is used for all epochs. If Hs contains
    a single matrix, then it is used as H for all epochs. Otherwise
    it must contain a list-like list of H's, one for each epoch.
    This allows you to have varying H per epoch.
Rs : None, np.array or list-like, default=None
    optional list of values to use for the measurement error
    covariance R. If Rs is None then self.R is used for all epochs.
    Otherwise it must contain a list-like list of R's, one for each
    epoch. This allows you to have varying R per epoch.
Bs : None, np.array or list-like, default=None
    optional list of values to use for the control transition matrix B.
    If Bs is None then self.B is used for all epochs. Otherwise it
    must contain a list-like list of B's, one for each epoch. This
    allows you to have varying B per epoch.
us : None, np.array or list-like, default=None
    optional list of values to use for the control input vector.
    If us is None then None is used for all epochs (equivalent to 0,
    or no control input). Otherwise it must contain a list-like list
    of u's, one for each epoch.
update_first : bool, optional, default=False
    controls whether the order of operations is update followed by
    predict, or predict followed by update. Default is predict->update.
saver : bayesian_filters.common.Saver, optional
    bayesian_filters.common.Saver object. If provided, saver.save()
    will be called after every epoch.
Returns
means : np.array((n,dim_x,1))
array of the state for each time step after the update. Each entry
is an np.array. In other words means[k,:] is the state at step
k.
covariance : np.array((n,dim_x,dim_x))
array of the covariances for each time step after the update.
In other words covariance[k,:,:] is the covariance at step k.
means_predictions : np.array((n,dim_x,1))
    array of the state for each time step after the prediction. Each
    entry is an np.array. In other words means_predictions[k,:] is the
    state at step k.
covariance_predictions : np.array((n,dim_x,dim_x))
    array of the covariances for each time step after the prediction.
    In other words covariance_predictions[k,:,:] is the covariance at
    step k.
Examples
.. code-block:: Python
# this example demonstrates tracking a measurement where the time
# between measurements varies, as stored in dts. This requires
# that F be recomputed for each epoch. The output is then smoothed
# with an RTS smoother.
zs = [t + random.randn()*4 for t in range(40)]
Fs = [np.array([[1., dt], [0., 1.]]) for dt in dts]
(mu, cov, _, _) = kf.batch_filter(zs, Fs=Fs)
(xs, Ps, Ks, Pps) = kf.rts_smoother(mu, cov, Fs=Fs)
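For the simpler case of a fixed F, a short sketch with one dropped measurement (represented by None, as required above); kf is assumed to be a fully configured KalmanFilter and the measurements are simulated.
.. code-block:: Python
zs = [t + random.randn()*4 for t in range(40)]
zs[10] = None                        # missing measurement: the filter only predicts here
(mu, cov, _, _) = kf.batch_filter(zs)
(xs, Ps, Ks, Pps) = kf.rts_smoother(mu, cov)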
get_prediction(u=None, B=None, F=None, Q=None)
¶
Predict next state (prior) using the Kalman filter state propagation equations and returns it without modifying the object.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| u | array | Optional control vector. | 0 |
| B | np.array(dim_x, dim_u), or None | Optional control transition matrix; a value of None will cause the filter to use self.B. | None |
| F | np.array(dim_x, dim_x), or None | Optional state transition matrix; a value of None will cause the filter to use self.F. | None |
| Q | np.array(dim_x, dim_x), scalar, or None | Optional process noise matrix; a value of None will cause the filter to use self.Q. | None |

Returns:

| Type | Description |
|---|---|
| (x, P) : tuple | State vector and covariance array of the prediction. |
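For example, you can peek at the next prior without committing it; f is assumed to be a configured filter.
.. code::
x_pred, P_pred = f.get_prediction()
# f.x and f.P are unchanged; use x_pred/P_pred for gating, planning, etc.
f.predict()                      # commit the prediction when you are ready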
get_update(z=None)
¶
Computes the new estimate based on measurement z and returns it
without altering the state of the filter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | (dim_z, 1): array_like | Measurement for this update. z can be a scalar if dim_z is 1, otherwise it must be convertible to a column vector. | None |

Returns:

| Type | Description |
|---|---|
| (x, P) : tuple | State vector and covariance array of the update. |
log_likelihood_of(z)
¶
log likelihood of the measurement z. This should only be called
after a call to update(). Calling after predict() will yield an
incorrect result.
measurement_of_state(x)
¶
Helper function that converts a state into a measurement.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | array | Kalman state vector | required |

Returns:

| Name | Type | Description |
|---|---|---|
| z | (dim_z, 1): array_like | Measurement corresponding to the given state. z can be a scalar if dim_z is 1, otherwise it is a column vector. |
predict(u=None, B=None, F=None, Q=None)
¶
Predict next state (prior) using the Kalman filter state propagation equations.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| u | array | Optional control vector. | 0 |
| B | np.array(dim_x, dim_u), or None | Optional control transition matrix; a value of None will cause the filter to use self.B. | None |
| F | np.array(dim_x, dim_x), or None | Optional state transition matrix; a value of None will cause the filter to use self.F. | None |
| Q | np.array(dim_x, dim_x), scalar, or None | Optional process noise matrix; a value of None will cause the filter to use self.Q. | None |
predict_steadystate(u=0, B=None)
¶
Predict state (prior) using the Kalman filter state propagation equations. Only x is updated; P is left unchanged. See update_steadystate() for a longer explanation of when to use this method.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| u | array | Optional control vector. If non-zero, it is multiplied by B to create the control input into the system. | 0 |
| B | np.array(dim_x, dim_u), or None | Optional control transition matrix; a value of None will cause the filter to use self.B. | None |
residual_of(z)
¶
Returns the residual for the given measurement (z). Does not alter the state of the filter.
rts_smoother(Xs, Ps, Fs=None, Qs=None, inv=np.linalg.inv)
¶
Runs the Rauch-Tung-Striebel Kalman smoother on a set of
means and covariances computed by a Kalman filter. The usual input
would come from the output of KalmanFilter.batch_filter().
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| Xs | array | Array of the means (state variable x) of the output of a Kalman filter. | required |
| Ps | array | Array of the covariances of the output of a Kalman filter. | required |
| Fs | list-like collection of numpy.array | State transition matrix of the Kalman filter at each time step. Optional; if not provided the filter's self.F will be used. | None |
| Qs | list-like collection of numpy.array | Process noise of the Kalman filter at each time step. Optional; if not provided the filter's self.Q will be used. | None |
| inv | function | If you prefer another inverse function, such as the Moore-Penrose pseudo-inverse, set it to that instead: kf.inv = np.linalg.pinv | numpy.linalg.inv |
Returns:
| Name | Type | Description |
|---|---|---|
| x | ndarray | Smoothed means |
| P | ndarray | Smoothed state covariances |
| K | ndarray | Smoother gain at each step |
| Pp | ndarray | Predicted state covariances |
Examples:
.. code-block:: Python
zs = [t + random.randn()*4 for t in range(40)]
(mu, cov, _, _) = kf.batch_filter(zs)
(x, P, K, Pp) = kf.rts_smoother(mu, cov)
test_matrix_dimensions(z=None, H=None, R=None, F=None, Q=None)
¶
Performs a series of asserts to check that the size of everything is what it should be. This can help you debug problems in your design.
If you pass in H, R, F, Q those will be used instead of this object's value for those matrices.
Testing z (the measurement) is problematic. x is a vector, and can be
implemented as either a 1D array or as an nx1 column vector. Thus Hx
can be of different shapes. Then, if Hx is a single value, it can
be either a 1D array or a 2D vector. If either is true, z can reasonably
be a scalar (either 3 or np.array(3) counts as a scalar under this
definition), a 1D, 1 element array, or a 2D, 1 element array. You are
allowed to pass in any combination that works.
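A small sketch of using this while debugging a design; the mismatched R below is deliberate and should trigger one of the asserts.
.. code::
f = KalmanFilter(dim_x=2, dim_z=1)
# check a candidate R without assigning it; this raises an AssertionError
# because R must be 1x1 when dim_z is 1
f.test_matrix_dimensions(R=np.eye(3))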
update(z, R=None, H=None)
¶
Add a new measurement (z) to the Kalman filter.
If z is None, nothing is computed. However, x_post and P_post are updated with the prior (x_prior, P_prior), and self.z is set to None.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | (dim_z, 1): array_like | Measurement for this update. z can be a scalar if dim_z is 1, otherwise it must be convertible to a column vector. If you pass in a value of H, z must be a column vector of the correct size. | required |
| R | np.array, scalar, or None | Optionally provide R to override the measurement noise for this one call, otherwise self.R will be used. | None |
| H | np.array, or None | Optionally provide H to override the measurement function for this one call, otherwise self.H will be used. | None |
update_correlated(z, R=None, H=None)
¶
Add a new measurement (z) to the Kalman filter assuming that
process noise and measurement noise are correlated as defined in
the self.M matrix.
A partial derivation can be found in [1]
If z is None, nothing is changed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | (dim_z, 1): array_like | Measurement for this update. z can be a scalar if dim_z is 1, otherwise it must be convertible to a column vector. | required |
| R | np.array, scalar, or None | Optionally provide R to override the measurement noise for this one call, otherwise self.R will be used. | None |
| H | np.array, or None | Optionally provide H to override the measurement function for this one call, otherwise self.H will be used. | None |
References
.. [1] Bulut, Y. (2011). Applied Kalman filter theory (Doctoral dissertation, Northeastern University). http://people.duke.edu/~hpgavin/SystemID/References/Balut-KalmanFilter-PhD-NEU-2011.pdf
update_sequential(start, z_i, R_i=None, H_i=None)
¶
Add a single input measurement (z_i) to the Kalman filter. In sequential processing, inputs are processed one at a time.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| start | integer | Index of the first measurement input updated by this call. | required |
| z_i | array or scalar | Measurement of inputs for this partial update. | required |
| R_i | np.array, scalar, or None | Optionally provide R_i to override the measurement noise of these inputs for this one call, otherwise a slice of self.R will be used. | None |
| H_i | np.array, or None | Optionally provide H_i to override the partial measurement function for this one call, otherwise a slice of self.H will be used. | None |
update_steadystate(z)
¶
Add a new measurement (z) to the Kalman filter without recomputing the Kalman gain K, the state covariance P, or the system uncertainty S.
You can use this for LTI systems since the Kalman gain and covariance converge to a fixed value. Precompute these and assign them explicitly, or run the Kalman filter using the normal predict()/update() cycle until they converge.
The main advantage of this call is speed. We do significantly less computation, notably avoiding a costly matrix inversion.
Use in conjunction with predict_steadystate(), otherwise P will grow without bound.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | (dim_z, 1): array_like | Measurement for this update. z can be a scalar if dim_z is 1, otherwise it must be convertible to a column vector. | required |
Examples:
>>> cv = kinematic_kf(dim=3, order=2) # 3D const velocity filter
>>> # let filter converge on representative data, then save k and P
>>> for i in range(100):
>>>     cv.predict()
>>>     cv.update([i, i, i])
>>> saved_k = np.copy(cv.K)
>>> saved_P = np.copy(cv.P)
later on:
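A plausible continuation (a sketch, not verbatim library documentation) reuses the saved gain and covariance so the steady-state calls can skip recomputing them:
>>> cv = kinematic_kf(dim=3, order=2)   # a fresh filter for the new data
>>> cv.K = saved_k                      # reuse the converged Kalman gain
>>> cv.P = saved_P                      # and the converged covariance
>>> for i in range(100):
>>>     cv.predict_steadystate()
>>>     cv.update_steadystate([i, i, i])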
ExtendedKalmanFilter¶
ExtendedKalmanFilter
¶
Bases: object
Implements an extended Kalman filter (EKF). You are responsible for setting the various state variables to reasonable values; the defaults will not give you a functional filter.
You will have to set the following attributes after constructing this object for the filter to perform properly. Please note that there are various checks in place to ensure that you have made everything the 'correct' size. However, it is possible to provide incorrectly sized arrays such that the linear algebra can not perform an operation. It can also fail silently - you can end up with matrices of a size that allows the linear algebra to work, but are the wrong shape for the problem you are trying to solve.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dim_x | int | Number of state variables for the Kalman filter. For example, if you are tracking the position and velocity of an object in two dimensions, dim_x would be 4. This is used to set the default size of P, Q, and u. | required |
| dim_z | int | Number of measurement inputs. For example, if the sensor provides you with position in (x,y), dim_z would be 2. | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| x | array(dim_x, 1) | State estimate vector |
| P | array(dim_x, dim_x) | Covariance matrix |
| x_prior | array(dim_x, 1) | Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read Only. |
| P_prior | array(dim_x, dim_x) | Prior (predicted) state covariance matrix. Read Only. |
| x_post | array(dim_x, 1) | Posterior (updated) state estimate. Read Only. |
| P_post | array(dim_x, dim_x) | Posterior (updated) state covariance matrix. Read Only. |
| R | array(dim_z, dim_z) | Measurement noise matrix |
| Q | array(dim_x, dim_x) | Process noise matrix |
| F | array | State transition matrix |
| H | array(dim_z, dim_x) | Measurement function |
| y | array | Residual of the update step. Read only. |
| K | array(dim_x, dim_z) | Kalman gain of the update step. Read only. |
| S | array | System uncertainty projected into measurement space. Read only. |
| z | ndarray | Last measurement used in update(). Read only. |
| log_likelihood | float | Log-likelihood of the last measurement. Read only. |
| likelihood | float | Likelihood of the last measurement. Read only. Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms that multiply by this value, so by default we always return a number >= sys.float_info.min. |
| mahalanobis | float | Mahalanobis distance of the innovation. E.g. 3 means the measurement was 3 standard deviations away from the predicted value. Read only. |
Examples:
See my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
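A minimal sketch of the EKF API for a range-only sensor tracking 1D position and velocity. The motion model, noise values, and the hx/h_jacobian helpers are illustrative assumptions, not part of the library.
.. code-block:: Python
import numpy as np
from bayesian_filters.kalman import ExtendedKalmanFilter

def hx(x):
    # expected measurement: range from the origin to the target at x[0]
    px = np.ravel(x)[0]
    return np.array([abs(px)])

def h_jacobian(x):
    # Jacobian of hx with respect to the state [position, velocity]
    px = np.ravel(x)[0]
    return np.array([[1.0 if px >= 0 else -1.0, 0.0]])

ekf = ExtendedKalmanFilter(dim_x=2, dim_z=1)
ekf.x = np.array([10., 1.])              # position, velocity
ekf.F = np.array([[1., 1.],
                  [0., 1.]])             # linear constant-velocity motion model
ekf.P *= 50.
ekf.R = np.array([[4.]])
ekf.Q = np.eye(2) * 0.01

for z in [11.2, 12.1, 13.4, 14.0]:       # simulated range readings
    ekf.predict()
    ekf.update(np.array([z]), HJacobian=h_jacobian, Hx=hx)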
likelihood
property
¶
Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms which multiply by this value, so by default we always return a number >= sys.float_info.min.
log_likelihood
property
¶
log-likelihood of the last measurement.
mahalanobis
property
¶
Mahalanobis distance of innovation. E.g. 3 means measurement was 3 standard deviations away from the predicted value.
Returns:
| Name | Type | Description |
|---|---|---|
| mahalanobis | float | |
predict(u=0)
¶
Predict next state (prior) using the Kalman filter state propagation equations.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| u | array | Optional control vector. If non-zero, it is multiplied by B to create the control input into the system. | 0 |
predict_update(z, HJacobian, Hx, args=(), hx_args=(), u=0)
¶
Performs the predict/update innovation of the extended Kalman filter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | np.array | Measurement for this step. If None, only the predict step is performed. | required |
| HJacobian | function | Function which computes the Jacobian of the H matrix (measurement function). Takes the state variable (self.x) as input, along with the optional arguments in args, and returns H. | required |
| Hx | function | Function which takes as input the state variable (self.x) along with the optional arguments in hx_args, and returns the measurement that would correspond to that state. | required |
| args | tuple | Arguments to be passed into HJacobian after the required state variable. | (,) |
| hx_args | tuple | Arguments to be passed into Hx after the required state variable. | (,) |
| u | np.array or scalar | Optional control vector input to the filter. | 0 |
predict_x(u=0)
¶
Predicts the next state of X. If you need to compute the next state yourself, override this function. You would need to do this, for example, if the usual Taylor expansion to generate F is not providing accurate results for you.
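A hedged sketch of the kind of subclass this describes: the state is propagated by a nonlinear function instead of F @ x. The drag model below is purely illustrative; you would still assign self.F to the Jacobian of your dynamics so that predict() propagates the covariance correctly.
.. code-block:: Python
import numpy as np
from bayesian_filters.kalman import ExtendedKalmanFilter

class DragEKF(ExtendedKalmanFilter):
    """EKF whose state [position, velocity] is propagated nonlinearly."""

    def predict_x(self, u=0):
        dt, drag = 0.1, 0.05                       # illustrative constants
        pos, vel = np.ravel(self.x)[:2]
        vel = vel - drag * vel * abs(vel) * dt     # quadratic drag slows the target
        pos = pos + vel * dt
        # preserve whatever shape (1D or column) the caller chose for x
        self.x = np.array([pos, vel]).reshape(self.x.shape)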
update(z, HJacobian, Hx, R=None, args=(), hx_args=(), residual=np.subtract)
¶
Performs the update innovation of the extended Kalman filter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | np.array | Measurement for this step. If None, the update step is skipped. | required |
| HJacobian | function | Function which computes the Jacobian of the H matrix (measurement function). Takes the state variable (self.x) as input, returns H. | required |
| Hx | function | Function which takes as input the state variable (self.x) along with the optional arguments in hx_args, and returns the measurement that would correspond to that state. | required |
| R | np.array, scalar, or None | Optionally provide R to override the measurement noise for this one call, otherwise self.R will be used. | None |
| args | tuple | Arguments to be passed into HJacobian after the required state variable. For robot localization you might need to pass in information about the map and time of day, so you might have HJacobian(x, map, t). | (,) |
| hx_args | tuple | Arguments to be passed into the Hx function after the required state variable. | (,) |
| residual | function(z, z2) | Optional function that computes the residual (difference) between the two measurement vectors. If you do not provide this, the built-in minus operator will be used. You will normally want to use the built-in unless your residual computation is nonlinear (for example, if they are angles). | np.subtract |
UnscentedKalmanFilter¶
UnscentedKalmanFilter
¶
Bases: object
Implements the Scaled Unscented Kalman filter (UKF) as defined by Simon Julier in [1], using the formulation provided by Wan and Merwe in [2]. This filter scales the sigma points to avoid strong nonlinearities.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dim_x | int | Number of state variables for the filter. For example, if you are tracking the position and velocity of an object in two dimensions, dim_x would be 4. | required |
| dim_z | int | Number of measurement inputs. For example, if the sensor provides you with position in (x,y), dim_z would be 2. This is for convenience, so everything is sized correctly on creation. If you are using multiple sensors the size of z can change based on the sensor; just provide the appropriate hx function. | required |
| hx | function(x, **hx_args) | Measurement function. Converts state vector x into a measurement vector of shape (dim_z). | required |
| fx | function(x, dt, **fx_args) | Function that returns the state x transformed by the state transition function. dt is the time step in seconds. | required |
| points | class | Class which computes the sigma points and weights for a UKF algorithm. You can vary the UKF implementation by changing this class. For example, MerweScaledSigmaPoints implements the alpha, beta, kappa parameterization of Van der Merwe, and JulierSigmaPoints implements Julier's original kappa parameterization. See either of those for the required signature of this class if you want to implement your own. | required |
| sqrt_fn | callable(ndarray) | Defines how we compute the square root of a matrix, which has no unique answer. Cholesky is the default choice due to its speed. Typically your alternative choice will be scipy.linalg.sqrtm. Different choices affect how the sigma points are arranged relative to the eigenvectors of the covariance matrix. Usually this will not matter to you; if so the default cholesky() yields maximal performance. As of van der Merwe's dissertation of 2004 [6] this was not a well researched area so I have no advice to give you. If your method returns a triangular matrix it must be upper triangular. Do not use numpy.linalg.cholesky - for historical reasons it returns a lower triangular matrix. The SciPy version does the right thing as far as this class is concerned. | None (implies scipy.linalg.cholesky) |
| x_mean_fn | callable(sigma_points, weights) | Function that computes the mean of the provided sigma points and weights. Use this if your state variable contains nonlinear values such as angles which cannot be summed. | None |
| z_mean_fn | callable(sigma_points, weights) | Same as x_mean_fn, except it is called for the sigma points which form the measurements after being passed through hx(). | None |
| residual_x | callable(x, y) | Function that computes the residual (difference) between two state vectors. Supply this if your state variable cannot support subtraction, such as angles. See residual_z. | None |
| residual_z | callable(x, y) | Function that computes the residual (difference) between x and y. You will have to supply this if your state variable cannot support subtraction, such as angles (359-1 degrees is 2, not 358). x and y are state vectors, not scalars. One is for the state variable, the other is for the measurement state. | None |
| state_add | | Function that adds two state vectors, returning a new state vector. Used during update to compute the posterior state. | None |
Attributes:
| Name | Type | Description |
|---|---|---|
| x | array(dim_x) | State estimate vector |
| P | array(dim_x, dim_x) | Covariance estimate matrix |
| x_prior | array(dim_x) | Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read Only. |
| P_prior | array(dim_x, dim_x) | Prior (predicted) state covariance matrix. Read Only. |
| x_post | array(dim_x) | Posterior (updated) state estimate. Read Only. |
| P_post | array(dim_x, dim_x) | Posterior (updated) state covariance matrix. Read Only. |
| z | ndarray | Last measurement used in update(). Read only. |
| R | array(dim_z, dim_z) | Measurement noise matrix |
| Q | array(dim_x, dim_x) | Process noise matrix |
| K | array | Kalman gain |
| y | array | Innovation residual |
| log_likelihood | scalar | Log-likelihood of the last measurement update. |
| likelihood | float | Likelihood of the last measurement. Read only. Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms that multiply by this value, so by default we always return a number >= sys.float_info.min. |
| mahalanobis | float | Mahalanobis distance of the measurement. Read only. |
| inv | function, default numpy.linalg.inv | If you prefer another inverse function, such as the Moore-Penrose pseudo-inverse, set it to that instead: kf.inv = np.linalg.pinv |
Examples:
Simple example of a linear order 1 kinematic filter in 2D. There is no need to use a UKF for this example, but it is easy to read.
>>> def fx(x, dt):
>>>     # state transition function - predict next state based
>>>     # on constant velocity model x = vt + x_0
>>>     F = np.array([[1, dt, 0, 0],
>>>                   [0, 1, 0, 0],
>>>                   [0, 0, 1, dt],
>>>                   [0, 0, 0, 1]], dtype=float)
>>>     return np.dot(F, x)
>>>
>>> def hx(x):
>>>     # measurement function - convert state into a measurement
>>>     # where measurements are [x_pos, y_pos]
>>>     return np.array([x[0], x[2]])
>>>
>>> dt = 0.1
>>> # create sigma points to use in the filter. This is standard for Gaussian processes
>>> points = MerweScaledSigmaPoints(4, alpha=.1, beta=2., kappa=-1)
>>>
>>> kf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
>>> kf.x = np.array([-1., 1., -1., 1])  # initial state
>>> kf.P *= 0.2  # initial uncertainty
>>> z_std = 0.1
>>> kf.R = np.diag([z_std**2, z_std**2])  # 1 standard
>>> kf.Q = Q_discrete_white_noise(dim=2, dt=dt, var=0.01**2, block_size=2)
>>>
>>> zs = [[i+randn()*z_std, i+randn()*z_std] for i in range(50)]  # measurements
>>> for z in zs:
>>>     kf.predict()
>>>     kf.update(z)
>>>     print(kf.x, 'log-likelihood', kf.log_likelihood)
For in depth explanations see my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
Also see the bayesian_filters/kalman/tests subdirectory for test code that may be illuminating.
References
.. [1] Julier, Simon J. "The scaled unscented transformation," American Control Conference, 2002, pp 4555-4559, vol 6.
Online copy:
https://www.cs.unc.edu/~welch/kalman/media/pdf/ACC02-IEEE1357.PDF
.. [2] E. A. Wan and R. Van der Merwe, “The unscented Kalman filter for nonlinear estimation,” in Proc. Symp. Adaptive Syst. Signal Process., Commun. Contr., Lake Louise, AB, Canada, Oct. 2000.
Online Copy:
https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf
.. [3] S. Julier, J. Uhlmann, and H. Durrant-Whyte. "A new method for the nonlinear transformation of means and covariances in filters and estimators," IEEE Transactions on Automatic Control, 45(3), pp. 477-482 (March 2000).
.. [4] E. A. Wan and R. Van der Merwe, “The Unscented Kalman filter for Nonlinear Estimation,” in Proc. Symp. Adaptive Syst. Signal Process., Commun. Contr., Lake Louise, AB, Canada, Oct. 2000.
https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf
.. [5] Wan, Merwe "The Unscented Kalman Filter," chapter in Kalman Filtering and Neural Networks, John Wiley & Sons, Inc., 2001.
.. [6] R. Van der Merwe "Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models" (Doctoral dissertation)
likelihood
property
¶
Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms which multiply by this value, so by default we always return a number >= sys.float_info.min.
log_likelihood
property
¶
log-likelihood of the last measurement.
mahalanobis
property
¶
" Mahalanobis distance of measurement. E.g. 3 means measurement was 3 standard deviations away from the predicted value.
Returns:
| Name | Type | Description |
|---|---|---|
| mahalanobis | float | |
__init__(dim_x, dim_z, dt, hx, fx, points, sqrt_fn=None, x_mean_fn=None, z_mean_fn=None, residual_x=None, residual_z=None, state_add=None)
¶
Create a Kalman filter. You are responsible for setting the various state variables to reasonable values; the defaults below will not give you a functional filter.
batch_filter(zs, Rs=None, dts=None, UT=None, saver=None)
¶
Performs the UKF filter over the list of measurements in zs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| zs | list-like | List of measurements at each time step. | required |
| Rs | None, np.array or list-like | Optional list of values to use for the measurement error covariance R. If Rs is None then self.R is used for all epochs. If it is a list of matrices or a 3D array where len(Rs) == len(zs), then it is treated as a list of R values, one per epoch. This allows you to have varying R per epoch. | None |
| dts | None, scalar or list-like | Optional value or list of delta times to be passed into predict. If dts is None then self.dt is used for all epochs. If it is a list where len(dts) == len(zs), then it is treated as a list of dt values, one per epoch. This allows you to have varying epoch durations. | None |
| UT | function(sigmas, Wm, Wc, noise_cov) | Optional function to compute the unscented transform for the sigma points passed through hx. Typically the default function will work - you can use x_mean_fn and z_mean_fn to alter the behavior of the unscented transform. | None |
| saver | bayesian_filters.common.Saver, optional | bayesian_filters.common.Saver object. If provided, saver.save() will be called after every epoch. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| means | ndarray((n, dim_x, 1)) | Array of the state for each time step after the update. Each entry is an np.array. In other words means[k,:] is the state at step k. |
| covariance | ndarray((n, dim_x, dim_x)) | Array of the covariances for each time step after the update. In other words covariance[k,:,:] is the covariance at step k. |
Examples:
.. code-block:: Python
# this example demonstrates tracking a measurement where the time
# between measurements varies, as stored in dts. The output is then
# smoothed with an RTS smoother.
zs = [t + random.randn()*4 for t in range(40)]
(mu, cov) = ukf.batch_filter(zs, dts=dts)
(xs, Ps, Ks) = ukf.rts_smoother(mu, cov)
compute_process_sigmas(dt, fx=None, **fx_args)
¶
computes the values of sigmas_f. Normally a user would not call this, but it is useful if you need to call update more than once between calls to predict (to update for multiple simultaneous measurements), so the sigmas correctly reflect the updated state x, P.
cross_variance(x, z, sigmas_f, sigmas_h)
¶
Compute cross variance of the state x and measurement z.
predict(dt=None, UT=None, fx=None, **fx_args)
¶
Performs the predict step of the UKF. On return, self.x and self.P contain the predicted state (x) and covariance (P).
Important: this MUST be called before update() is called for the first time.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dt | double | If specified, the time step to be used for this prediction. self._dt is used if this is not provided. | None |
| fx | callable f(x, dt, **fx_args) | State transition function. If not provided, the default function passed in during construction will be used. | None |
| UT | function(sigmas, Wm, Wc, noise_cov) | Optional function to compute the unscented transform for the sigma points. Typically the default function will work - you can use x_mean_fn and z_mean_fn to alter the behavior of the unscented transform. | None |
| **fx_args | keyword arguments | Optional keyword arguments to be passed into f(x). | {} |
rts_smoother(Xs, Ps, Qs=None, dts=None, UT=None)
¶
Runs the Rauch-Tung-Striebel Kalman smoother on a set of
means and covariances computed by the UKF. The usual input
would come from the output of batch_filter().
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| Xs | array | Array of the means (state variable x) of the output of a Kalman filter. | required |
| Ps | array | Array of the covariances of the output of a Kalman filter. | required |
| Qs | list-like collection of numpy.array | Process noise of the Kalman filter at each time step. Optional; if not provided the filter's self.Q will be used. | None |
| dts | optional, float or array-like of float | If provided, specifies the time step of each step of the filter. If float, then the same time step is used for all steps. If an array, then each element k contains the time at step k. Units are seconds. | None |
| UT | function(sigmas, Wm, Wc, noise_cov) | Optional function to compute the unscented transform for the sigma points. Typically the default function will work - you can use x_mean_fn and z_mean_fn to alter the behavior of the unscented transform. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| x | ndarray | Smoothed means |
| P | ndarray | Smoothed state covariances |
| K | ndarray | Smoother gain at each step |
Examples:
.. code-block:: Python
zs = [t + random.randn()*4 for t in range (40)]
(mu, cov) = ukf.batch_filter(zs)
(x, P, K) = ukf.rts_smoother(mu, cov)
update(z, R=None, UT=None, hx=None, **hx_args)
¶
Update the UKF with the given measurements. On return, self.x and self.P contain the new mean and covariance of the filter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | numpy.array of shape (dim_z) | Measurement vector | required |
| R | array((dim_z, dim_z)) | Measurement noise. If provided, overrides self.R for this function call. | None |
| UT | function(sigmas, Wm, Wc, noise_cov) | Optional function to compute the unscented transform for the sigma points passed through hx. Typically the default function will work - you can use x_mean_fn and z_mean_fn to alter the behavior of the unscented transform. | None |
| hx | callable h(x, **hx_args) | Measurement function. If not provided, the default function passed in during construction will be used. | None |
| **hx_args | keyword arguments | Arguments to be passed into h(x) after x -> h(x, **hx_args). | {} |
EnsembleKalmanFilter¶
EnsembleKalmanFilter
¶
Bases: object
This implements the ensemble Kalman filter (EnKF). The EnKF uses an ensemble of hundreds to thousands of state vectors that are randomly sampled around the estimate, and adds perturbations at each update and predict step. It is useful for extremely large systems such as found in hydrophysics. As such, this class is admittedly a toy as it is far too slow with large N.
There are many versions of this sort of filter. This formulation is due to Crassidis and Junkins [1]. It works with both linear and nonlinear systems.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | np.array(dim_x) | state mean | required |
| P | np.array((dim_x, dim_x)) | covariance of the state | required |
| dim_z | int | Number of measurement inputs. For example, if the sensor provides you with position in (x,y), dim_z would be 2. | required |
| dt | float | time step in seconds | required |
| N | int | number of sigma points (ensembles). Must be greater than 1. | required |
| hx | function hx(x) | Measurement function. May be linear or nonlinear - converts state x into a measurement. Return must be an np.array of the same dimensionality as the measurement vector. | required |
| fx | function fx(x, dt) | State transition function. May be linear or nonlinear. Projects state x into the next time period. Returns the projected state x. | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| x | array(dim_x, 1) | State estimate |
| P | array(dim_x, dim_x) | State covariance matrix |
| x_prior | array(dim_x, 1) | Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read Only. |
| P_prior | array(dim_x, dim_x) | Prior (predicted) state covariance matrix. Read Only. |
| x_post | array(dim_x, 1) | Posterior (updated) state estimate. Read Only. |
| P_post | array(dim_x, dim_x) | Posterior (updated) state covariance matrix. Read Only. |
| z | array | Last measurement used in update(). Read only. |
| R | array(dim_z, dim_z) | Measurement noise matrix |
| Q | array(dim_x, dim_x) | Process noise matrix |
| fx | callable(x, dt) | State transition function |
| hx | callable(x) | Measurement function. Converts state x into a measurement. |
| inv | function, default numpy.linalg.inv | If you prefer another inverse function, such as the Moore-Penrose pseudo-inverse, set it to that instead: kf.inv = np.linalg.pinv |
Examples:
.. code-block:: Python
def hx(x):
    return np.array([x[0]])

F = np.array([[1., 1.],
              [0., 1.]])

def fx(x, dt):
    return np.dot(F, x)

x = np.array([0., 1.])
P = np.eye(2) * 100.
dt = 0.1
f = EnsembleKalmanFilter(x=x, P=P, dim_z=1, dt=dt,
                         N=8, hx=hx, fx=fx)

std_noise = 3.
f.R *= std_noise**2
f.Q = Q_discrete_white_noise(2, dt, .01)

while True:
    z = read_sensor()
    f.predict()
    f.update(np.asarray([z]))
See my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
References
- [1] John L. Crassidis and John L. Junkins. "Optimal Estimation of Dynamic Systems." CRC Press, second edition. 2012. pp. 257-9.
initialize(x, P)
¶
Initializes the filter with the specified mean and covariance. You only need to call this if you are using the filter to filter more than one set of data; this is called by __init__.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | np.array(dim_x) | state mean | required |
| P | np.array((dim_x, dim_x)) | covariance of the state | required |
predict()
¶
Predict next position.
update(z, R=None)
¶
Add a new measurement (z) to the kalman filter. If z is None, nothing is changed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | np.array | measurement for this update. | required |
| R | np.array, scalar, or None | Optionally provide R to override the measurement noise for this one call, otherwise self.R will be used. | None |
InformationFilter¶
InformationFilter
¶
Bases: object
Create a linear Information filter. The information filter maintains the inverse of the state covariance (the information matrix), which makes it easy to represent having no information at initialization.
You are responsible for setting the various state variables to reasonable values; the defaults below will not give you a functional filter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dim_x | int | Number of state variables for the filter. For example, if you are tracking the position and velocity of an object in two dimensions, dim_x would be 4. This is used to set the default size of P, Q, and u. | required |
| dim_z | int | Number of measurement inputs. For example, if the sensor provides you with position in (x,y), dim_z would be 2. | required |
| dim_u | int (optional) | Size of the control input, if it is being used. Default value of 0 indicates it is not used. | 0 |
Attributes:
| Name | Type | Description |
|---|---|---|
| x | array(dim_x, 1) | State estimate vector |
| P_inv | array(dim_x, dim_x) | Inverse state covariance matrix |
| x_prior | array(dim_x, 1) | Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read Only. |
| P_inv_prior | array(dim_x, dim_x) | Inverse prior (predicted) state covariance matrix. Read Only. |
| x_post | array(dim_x, 1) | Posterior (updated) state estimate. Read Only. |
| P_inv_post | array(dim_x, dim_x) | Inverse posterior (updated) state covariance matrix. Read Only. |
| z | ndarray | Last measurement used in update(). Read only. |
| R_inv | array(dim_z, dim_z) | Inverse of the measurement noise matrix |
| Q | array(dim_x, dim_x) | Process noise matrix |
| H | array(dim_z, dim_x) | Measurement function |
| y | array | Residual of the update step. Read only. |
| K | array(dim_x, dim_z) | Kalman gain of the update step. Read only. |
| S | array | System uncertainty projected into measurement space. Read only. |
| log_likelihood | float | Log-likelihood of the last measurement. Read only. |
| likelihood | float | Likelihood of the last measurement. Read only. Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms that multiply by this value, so by default we always return a number >= sys.float_info.min. |
| inv | function, default numpy.linalg.inv | If you prefer another inverse function, such as the Moore-Penrose pseudo-inverse, set it to that instead: kf.inv = np.linalg.pinv |
Examples:
See my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
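A minimal sketch based on the attributes listed above: the position/velocity tracker from the KalmanFilter example, expressed in information form so "no prior information" is just a tiny P_inv. Treat it as illustrative rather than canonical usage.
.. code-block:: Python
import numpy as np
from bayesian_filters.kalman import InformationFilter

inf_f = InformationFilter(dim_x=2, dim_z=1)
inf_f.x = np.array([[0.], [0.]])
inf_f.F = np.array([[1., 1.],
                    [0., 1.]])
inf_f.H = np.array([[1., 0.]])
inf_f.Q = np.eye(2) * 0.01
inf_f.R_inv = np.array([[1. / 5.]])     # inverse of a 5-unit measurement variance
inf_f.P_inv = np.eye(2) * 1e-6          # essentially no prior information

for z in [1.1, 2.0, 2.9, 4.2]:          # simulated position readings
    inf_f.predict()
    inf_f.update(z)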
F
property
writable
¶
State Transition matrix
P
property
¶
State covariance matrix
likelihood
property
¶
Likelihood of last measurement. Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms which multiply by this value, so by default we always return a number >= sys.float_info.min.
log_likelihood
property
¶
log-likelihood of the last measurement.
mahalanobis
property
¶
Mahalanobis distance of measurement. E.g. 3 means measurement was 3 standard deviations away from the predicted value.
Returns:
| Name | Type | Description |
|---|---|---|
| mahalanobis | float | |
batch_filter(zs, Rs=None, update_first=False, saver=None)
¶
Batch processes a sequence of measurements.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| zs | list-like | list of measurements at each time step. | required |
| Rs | list-like | optional list of values to use for the measurement error covariance; a value of None in any position will cause the filter to use self.R for that time step. | None |
| update_first | bool, optional | controls whether the order of operations is update followed by predict, or predict followed by update. Default is predict->update. | False |
| saver | bayesian_filters.common.Saver, optional | bayesian_filters.common.Saver object. If provided, saver.save() will be called after every epoch. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| means | array((n, dim_x, 1)) | array of the state for each time step. Each entry is an np.array. In other words means[k,:] is the state at step k. |
| covariance | array((n, dim_x, dim_x)) | array of the covariances for each time step. In other words covariance[k,:,:] is the covariance at step k. |
predict(u=0)
¶
Predict next position.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| u | ndarray | Optional control vector. If non-zero, it is multiplied by B to create the control input into the system. | 0 |
update(z, R_inv=None)
¶
Add a new measurement (z) to the kalman filter. If z is None, nothing is changed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | np.array | measurement for this update. | required |
| R_inv | np.array, scalar, or None | Optionally provide R_inv to override the inverse measurement noise for this one call, otherwise self.R_inv will be used. | None |
IMMEstimator¶
IMMEstimator
¶
Bases: object
Implements an Interacting Multiple-Model (IMM) estimator.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| filters | (N,) array_like of KalmanFilter objects | List of N filters. filters[i] is the ith Kalman filter in the IMM estimator. Each filter must have the same dimension for the state x and P, otherwise the states of each filter cannot be mixed with each other. | required |
| mu | (N,) array_like of float | mode probability: mu[i] is the probability that filter i is the correct one. | required |
| M | (N, N) ndarray of float | Markov chain transition matrix. M[i,j] is the probability of switching from filter j to filter i. | required |
Attributes:
| Name | Type | Description |
|---|---|---|
| x | array(dim_x, 1) | Current state estimate. Any call to update() or predict() updates this variable. |
| P | array(dim_x, dim_x) | Current state covariance matrix. Any call to update() or predict() updates this variable. |
| x_prior | array(dim_x, 1) | Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read Only. |
| P_prior | array(dim_x, dim_x) | Prior (predicted) state covariance matrix. Read Only. |
| x_post | array(dim_x, 1) | Posterior (updated) state estimate. Read Only. |
| P_post | array(dim_x, dim_x) | Posterior (updated) state covariance matrix. Read Only. |
| N | int | number of filters in the filter bank |
| mu | (N,) ndarray of float | mode probability: mu[i] is the probability that filter i is the correct one. |
| M | (N, N) ndarray of float | Markov chain transition matrix. M[i,j] is the probability of switching from filter j to filter i. |
| cbar | (N,) ndarray of float | Total probability, after interaction, that the target is in state j. Used as the normalization constant. |
| likelihood | (N,) ndarray of float | Likelihood of each individual filter's last measurement. |
| omega | (N, N) ndarray of float | Mixing probability - omega[i, j] is the probability of mixing the state of filter i into filter j. Perhaps more understandably, it weights the states of each filter by x_j = sum(omega[i,j] * x_i), with a similar weighting for P_j. |
Examples:
>>> import numpy as np
>>> from bayesian_filters.common import kinematic_kf
>>> from bayesian_filters.kalman import IMMEstimator
>>> kf1 = kinematic_kf(2, 2)
>>> kf2 = kinematic_kf(2, 2)
>>> # do some settings of x, R, P etc. here, I'll just use the defaults
>>> kf2.Q *= 0 # no prediction error in second filter
>>>
>>> filters = [kf1, kf2]
>>> mu = [0.5, 0.5] # each filter is equally likely at the start
>>> trans = np.array([[0.97, 0.03], [0.03, 0.97]])
>>> imm = IMMEstimator(filters, mu, trans)
>>>
>>> for i in range(100):
>>>     # make some noisy data
>>>     x = i + np.random.randn()*np.sqrt(kf1.R[0, 0])
>>>     y = i + np.random.randn()*np.sqrt(kf1.R[1, 1])
>>>     z = np.array([[x], [y]])
>>>
>>>     # perform predict/update cycle
>>>     imm.predict()
>>>     imm.update(z)
>>>     print(imm.x.T)
For a full explanation and more examples see my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
References
Bar-Shalom, Y., Li, X-R., and Kirubarajan, T. "Estimation with Application to Tracking and Navigation". Wiley-Interscience, 2001.
Crassidis, J and Junkins, J. "Optimal Estimation of Dynamic Systems". CRC Press, second edition. 2012.
Labbe, R. "Kalman and Bayesian Filters in Python". https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
predict(u=None)
¶
Predict next state (prior) using the IMM state propagation equations.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| u | np.array, optional | Control vector. If not None, it is multiplied by B to create the control input into the system. | None |
update(z)
¶
Add a new measurement (z) to the Kalman filter. If z is None, nothing is changed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | np.array | measurement for this update. | required |
MMAEFilterBank¶
MMAEFilterBank
¶
Bases: object
Implements the fixed Multiple Model Adaptive Estimator (MMAE). This is a bank of independent Kalman filters. This estimator computes the likelihood that each filter is the correct one, and blends their state estimates weighted by their likelihood to produce the state estimate.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| filters | list of Kalman filters | List of Kalman filters. | required |
| p | list-like of floats | Initial probability that each filter is the correct one. In general you'd probably set each element to 1./len(p). | required |
| dim_x | float | number of random variables in the state X | required |
| H | Measurement matrix | | None |
Attributes:
| Name | Type | Description |
|---|---|---|
| x | array(dim_x, 1) | Current state estimate. Any call to update() or predict() updates this variable. |
| P | array(dim_x, dim_x) | Current state covariance matrix. Any call to update() or predict() updates this variable. |
| x_prior | array(dim_x, 1) | Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read Only. |
| P_prior | array(dim_x, dim_x) | Prior (predicted) state covariance matrix. Read Only. |
| x_post | array(dim_x, 1) | Posterior (updated) state estimate. Read Only. |
| P_post | array(dim_x, dim_x) | Posterior (updated) state covariance matrix. Read Only. |
| z | ndarray | Last measurement used in update(). Read only. |
| filters | list of Kalman filters | List of Kalman filters. |
Examples:
.. code-block:: Python
ca = make_ca_filter(dt, noise_factor=0.6)
cv = make_ca_filter(dt, noise_factor=0.6)
cv.F[:,2] = 0   # remove acceleration term
cv.P[2,2] = 0
cv.Q[2,2] = 0

filters = [cv, ca]
bank = MMAEFilterBank(filters, p=(0.5, 0.5), dim_x=3)

for z in zs:
    bank.predict()
    bank.update(z)
Also, see my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
References
Zarchan and Musoff. "Fundamentals of Kalman filtering: A Practical Approach." AIAA, third edition.
predict(u=0)
¶
Predict next position using the Kalman filter state propagation equations for each filter in the bank.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| u | array | Optional control vector. If non-zero, it is multiplied by B to create the control input into the system. | 0 |
update(z, R=None, H=None)
¶
Add a new measurement (z) to the Kalman filter. If z is None, nothing is changed.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | np.array | measurement for this update. | required |
| R | np.array, scalar, or None | Optionally provide R to override the measurement noise for this one call, otherwise self.R will be used. | None |
| H | np.array, or None | Optionally provide H to override the measurement function for this one call, otherwise self.H will be used. | None |