Common
A collection of functions used throughout FilterPy, along with functions you will find useful when building your own filters.
Saver
Saver
Bases: object
Helper class to save the states of any filter object. Each time you call save(), all of the attributes (state, covariances, etc.) are appended to lists.
Generally you would do this once per epoch - predict/update.
Then, you can access any of the states by using the [] syntax or by using the . operator.
.. code-block:: Python
    my_saver = Saver(kf)
    # ... do some filtering
    x = my_saver['x']
    x = my_saver.x
Either form returns a list of all of the saved x values for the entire filtering process.
If you want to convert all saved lists into numpy arrays, call to_array().
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| kf | object | any object with a __dict__ attribute, but intended to be one of the filtering classes | required |
| save_current | bool | save the current state of kf when the Saver is constructed | False |
| skip_private | bool | Control skipping any private attribute (anything starting with '_'). Turning this on saves memory, but slows down execution a bit. | False |
| skip_callable | bool | Control skipping any attribute which is a method. Turning this on saves memory, but slows down execution a bit. | False |
| ignore | | list of keys to ignore. | () |
Examples:
.. code-block:: Python
    kf = KalmanFilter(...whatever)
    # initialize kf here

    saver = Saver(kf)  # save data for kf filter
    for z in zs:
        kf.predict()
        kf.update(z)
        saver.save()

    x = np.array(saver.x)  # get the kf.x state in an np.array
    plt.plot(x[:, 0], x[:, 2])

    # ... or ...
    saver.to_array()
    plt.plot(saver.x[:, 0], saver.x[:, 2])
Source code in bayesian_filters/common/helpers.py
keys
property
list of all keys
__init__(kf, save_current=False, skip_private=False, skip_callable=False, ignore=())
Construct the save object, optionally saving the current state of the filter
Source code in bayesian_filters/common/helpers.py
flatten()
Flattens any np.array of column vectors into 1D arrays. Basically, this makes the data readable for humans if you are just inspecting via the REPL. For example, if you have saved a KalmanFilter object over 89 epochs, self.x might have shape (89, 9, 1). After flatten is run, self.x.shape == (89, 9), which displays nicely in the REPL.
There is no way to unflatten, so it's a one-way trip.
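A minimal sketch of the effect (the shapes here are purely illustrative):
.. code-block:: Python

    saver.to_array()
    print(saver.x.shape)   # e.g. (89, 9, 1) -- saved column vectors
    saver.flatten()
    print(saver.x.shape)   # (89, 9) -- easier to inspect and plot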
Source code in bayesian_filters/common/helpers.py
save()
save the current state of the Kalman filter
Source code in bayesian_filters/common/helpers.py
to_array(flatten=False)
Convert all saved attributes from a list to np.array.
This may or may not work - every saved attribute must have the same shape for every instance. I.e., if K changes shape due to z changing shape, then the call will raise an exception.
This can also happen if the default initialization in __init__ gives the variable a different shape than it takes on after a predict/update cycle.
Source code in bayesian_filters/common/helpers.py
Discrete White Noise
Q_discrete_white_noise(dim, dt=1.0, var=1.0, block_size=1, order_by_dim=True)
Returns the Q matrix for the Discrete Constant White Noise Model. dim may be either 2, 3, or 4, dt is the time step, and var is the variance in the noise.
Q is computed as G * G^T * variance, where G is the process noise per time step. In other words, G = [0.5*dt^2, dt]^T for the constant velocity model.
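As a sanity check, the constant velocity Q can be built by hand from G and compared with the function's output. This is a minimal sketch; the import path is assumed from the source file paths shown on this page:
.. code-block:: Python

    import numpy as np
    from bayesian_filters.common import Q_discrete_white_noise

    dt, var = 0.1, 1.0
    G = np.array([[0.5 * dt**2], [dt]])   # process noise per time step
    Q_manual = G @ G.T * var              # [[.25*dt^4, .5*dt^3], [.5*dt^3, dt^2]] * var
    assert np.allclose(Q_manual, Q_discrete_white_noise(2, dt=dt, var=var))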
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dim | int (2, 3, or 4) | dimension for Q, where the final dimension is (dim x dim) | required |
| dt | float | time step in whatever units your filter is using for time, i.e. the amount of time between innovations | 1.0 |
| var | float | variance in the noise | 1.0 |
| block_size | int >= 1 | If your state variable contains more than one dimension, such as a 3d constant velocity model [x x' y y' z z']^T, then Q must be a block diagonal matrix. | 1 |
| order_by_dim | bool | Defines ordering of variables in the state vector. True orders by dimension, e.g. [x x' x'' y y' y'']; False orders by derivative, e.g. [x y z x' y' z' x'' y'' z'']. | True |
Examples:
>>> # constant velocity model in a 3D world with a 10 Hz update rate
>>> Q_discrete_white_noise(2, dt=0.1, var=1., block_size=3)
array([[0.000025, 0.0005 , 0. , 0. , 0. , 0. ],
[0.0005 , 0.01 , 0. , 0. , 0. , 0. ],
[0. , 0. , 0.000025, 0.0005 , 0. , 0. ],
[0. , 0. , 0.0005 , 0.01 , 0. , 0. ],
[0. , 0. , 0. , 0. , 0.000025, 0.0005 ],
[0. , 0. , 0. , 0. , 0.0005 , 0.01 ]])
References
Bar-Shalom. "Estimation with Applications To Tracking and Navigation". John Wiley & Sons, 2001. Page 274.
Source code in bayesian_filters/common/discretization.py
Continuous White Noise
Q_continuous_white_noise(dim, dt=1.0, spectral_density=1.0, block_size=1, order_by_dim=True)
Returns the Q matrix for the Discretized Continuous White Noise Model. dim may be either 2, 3, or 4, dt is the time step, and spectral_density is the spectral density of the continuous white noise.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dim | int (2, 3, or 4) | dimension for Q, where the final dimension is (dim x dim). 2 is constant velocity, 3 is constant acceleration, 4 is constant jerk | required |
| dt | float | time step in whatever units your filter is using for time, i.e. the amount of time between innovations | 1.0 |
| spectral_density | float | spectral density for the continuous process | 1.0 |
| block_size | int >= 1 | If your state variable contains more than one dimension, such as a 3d constant velocity model [x x' y y' z z']^T, then Q must be a block diagonal matrix. | 1 |
| order_by_dim | bool | Defines ordering of variables in the state vector. True orders by dimension, e.g. [x x' x'' y y' y'']; False orders by derivative, e.g. [x y z x' y' z' x'' y'' z'']. | True |
Examples:
>>> # constant velocity model in a 3D world with a 10 Hz update rate
>>> Q_continuous_white_noise(2, dt=0.1, block_size=3)
array([[0.00033333, 0.005 , 0. , 0. , 0. , 0. ],
[0.005 , 0.1 , 0. , 0. , 0. , 0. ],
[0. , 0. , 0.00033333, 0.005 , 0. , 0. ],
[0. , 0. , 0.005 , 0.1 , 0. , 0. ],
[0. , 0. , 0. , 0. , 0.00033333, 0.005 ],
[0. , 0. , 0. , 0. , 0.005 , 0.1 ]])
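For the second-order (constant velocity) case this Q has the well-known closed form [[dt^3/3, dt^2/2], [dt^2/2, dt]] * spectral_density, which matches the values above. A minimal sketch, with the import path assumed as elsewhere on this page:
.. code-block:: Python

    import numpy as np
    from bayesian_filters.common import Q_continuous_white_noise

    dt, q = 0.1, 1.0
    Q_manual = np.array([[dt**3 / 3, dt**2 / 2],
                         [dt**2 / 2, dt       ]]) * q
    assert np.allclose(Q_manual, Q_continuous_white_noise(2, dt=dt, spectral_density=q))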
Source code in bayesian_filters/common/discretization.py
Van Loan Discretization
van_loan_discretization(F, G, dt)
Discretizes a linear differential equation which includes white noise according to the method of C. F. van Loan [1]. Given the continuous model
x' = Fx + Gu
where u is unit white noise, we compute and return the discrete state transition matrix phi and the noise covariance Q_k that discretize that equation.
Examples:
Given y'' + y = 2u(t), we create the continuous state model of
x' = [[0, 1], [-1, 0]] * x + [[0], [2]] * u(t)
and a time step of 0.1:
>>> F = np.array([[0,1],[-1,0]], dtype=float)
>>> G = np.array([[0.],[2.]])
>>> phi, Q = van_loan_discretization(F, G, 0.1)
(example taken from Brown[2])
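A sketch of how the returned matrices are typically used; the predict equations below are the standard linear Kalman filter ones and are not performed by this function itself:
.. code-block:: Python

    import numpy as np
    from bayesian_filters.common import van_loan_discretization

    F = np.array([[0., 1.], [-1., 0.]])
    G = np.array([[0.], [2.]])
    phi, Q = van_loan_discretization(F, G, 0.1)

    # phi is the discrete state transition matrix, Q the discrete process noise;
    # a linear predict step would then be:
    x = np.array([[1.], [0.]])   # example state (position, velocity)
    P = np.eye(2)                # example covariance
    x = phi @ x
    P = phi @ P @ phi.T + Q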
References
[1] C. F. van Loan. "Computing Integrals Involving the Matrix Exponential." IEEE Trans. Automatic Control, AC-23 (3): 395-404 (June 1978)
[2] Robert Grover Brown. "Introduction to Random Signals and Applied Kalman Filtering." Fourth edition. John Wiley & Sons. p. 126-7. (2012)
Source code in bayesian_filters/common/discretization.py
Linear ODE Discretization
linear_ode_discretation(F, L=None, Q=None, dt=1.0)
Not sure what this does, probably should be removed
Source code in bayesian_filters/common/discretization.py
Kinematic Kalman Filter
kinematic_kf(dim, order, dt=1.0, dim_z=1, order_by_dim=True, kf=None)
Returns a KalmanFilter using Newtonian kinematics of arbitrary order for any number of dimensions. For example, a constant velocity filter in 3D space would have order 1 and dimension 3.
Examples:
A constant velocity filter in 3D space with delta time = .2 seconds would be created with
>>> kf = kinematic_kf(dim=3, order=1, dt=.2)
>>> kf.F
array([[1. , 0.2, 0. , 0. , 0. , 0. ],
[0. , 1. , 0. , 0. , 0. , 0. ],
[0. , 0. , 1. , 0.2, 0. , 0. ],
[0. , 0. , 0. , 1. , 0. , 0. ],
[0. , 0. , 0. , 0. , 1. , 0.2],
[0. , 0. , 0. , 0. , 0. , 1. ]])
which will set the state x to be interpreted as
[x, x', y, y', z, z'].T
If you set order_by_dim to False, then x is ordered as
[x y z x' y' z'].T
As another example, a 2D constant jerk filter is created with kinematic_kf(2, 3).
Assumes that the measurement z is position in each dimension. If this is not true you will have to alter the H matrix by hand.
P, Q, R are all set to the Identity matrix.
H is assigned assuming the measurement is position, one per dimension dim.
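For example, with dim=3, order=1 and the default ordering, the measurement matrix should pick out the position entries of [x, x', y, y', z, z']. This is a sketch of the expected result under those defaults, with the import path assumed as elsewhere on this page:
.. code-block:: Python

    from bayesian_filters.common import kinematic_kf

    kf = kinematic_kf(dim=3, order=1)
    # kf.H is expected to select x, y, z from the state:
    # [[1. 0. 0. 0. 0. 0.]
    #  [0. 0. 1. 0. 0. 0.]
    #  [0. 0. 0. 0. 1. 0.]]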
>>> kf = kinematic_kf(2, 1, dt=3.0)
>>> kf.F
array([[1., 3., 0., 0.],
[0., 1., 0., 0.],
[0., 0., 1., 3.],
[0., 0., 0., 1.]])
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| dim | int, >= 1 | number of dimensions (2D space would be dim=2) | required |
| order | int, >= 0 | order of the filter. 2 would be a constant acceleration model with a per-dimension state of (x, x', x'') | required |
| dim_z | int | size of z vector per dimension | 1 |
| dt | float | Time step. Used to create the state transition matrix | 1.0 |
| order_by_dim | bool | Defines ordering of variables in the state vector. True orders by dimension, e.g. [x x' x'' y y' y'']; False orders by derivative, e.g. [x y z x' y' z' x'' y'' z'']. | True |
| kf | Kalman filter-like object | Provide your own pre-created filter. This lets you use classes other than KalmanFilter. | None |
Source code in bayesian_filters/common/kinematic.py
Kinematic State Transition
kinematic_state_transition(order, dt)
Create a state transition matrix of a given order for a given time step dt.
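A minimal sketch of the expected output, assuming the matrix follows the standard Newtonian Taylor expansion (entry (i, j) = dt^(j-i) / (j-i)!) and that the function is importable from bayesian_filters.common:
.. code-block:: Python

    from bayesian_filters.common import kinematic_state_transition

    F = kinematic_state_transition(2, 0.1)   # constant acceleration, dt = 0.1
    # expected:
    # [[1.    0.1   0.005]
    #  [0.    1.    0.1  ]
    #  [0.    0.    1.   ]]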
Source code in bayesian_filters/common/kinematic.py
Runge-Kutta 4th Order
runge_kutta4(y, x, dx, f)
Computes a single 4th-order Runge-Kutta step for dy/dx, returning the new value of y after stepping x by dx.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| y | scalar | Initial/current value for y | required |
| x | scalar | Initial/current value for x | required |
| dx | scalar | difference in x (e.g. the time step) | required |
| f | ufunc(y, x) | Callable function (y, x) that you supply to compute dy/dx for the specified values. | required |
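A short usage sketch, assuming the function performs one RK4 step and returns the new y, with the import path assumed as elsewhere on this page:
.. code-block:: Python

    from bayesian_filters.common import runge_kutta4

    def f(y, x):
        return -y           # dy/dx = -y, exact solution y = exp(-x)

    y, x, dx = 1.0, 0.0, 0.01
    for _ in range(100):
        y = runge_kutta4(y, x, dx, f)
        x += dx
    # y is now close to exp(-1) ~= 0.3679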
Source code in bayesian_filters/common/helpers.py
Inverse Diagonal
inv_diagonal(S)
Computes the inverse of a diagonal NxN np.array S. In general this will be much faster than calling np.linalg.inv().
However, it does NOT check whether the off-diagonal elements are non-zero. So long as S is truly diagonal, the output is identical to np.linalg.inv().
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| S | array | diagonal NxN array to take inverse of | required |
Returns:

| Name | Type | Description |
|---|---|---|
| S_inv | array | inverse of S |
Examples:
This is meant to be used as a replacement inverse function for the KalmanFilter class when you know the system covariance S is diagonal. It just makes the filter run faster; there is no change to the filter's output.
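A minimal sketch; the kf.inv assignment is the assumed way of swapping in a custom inverse function on a filter object, not something this function configures for you:
.. code-block:: Python

    import numpy as np
    from bayesian_filters.common import inv_diagonal

    S = np.diag([4., 25., 0.01])
    assert np.allclose(inv_diagonal(S), np.linalg.inv(S))

    # assumed usage with a filter whose innovation covariance S stays diagonal:
    # kf.inv = inv_diagonal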
Source code in bayesian_filters/common/helpers.py
Outer Product Sum
outer_product_sum(A, B=None)
Computes the sum of the outer products of the rows in A and B:
P = Σ A[i] B[i]^T, for i in 0..M-1
Notionally:

    P = 0
    for y in A:
        P += np.outer(y, y)
This is a standard computation for sigma points used in the UKF, ensemble Kalman filter, etc., where A would be the residual of the sigma points and the filter's state or measurement.
The computation is vectorized, so it is much faster than the for loop for large A.
Parameters:

| Name | Type | Description | Default |
|---|---|---|---|
| A | array, shape (M, N) | rows of N-vectors to have the outer product summed | required |
| B | array, shape (M, N) | rows of N-vectors to have the outer product summed. If it is None, it is set to A. | None |
Returns:

| Name | Type | Description |
|---|---|---|
| P | array, shape (N, N) | sum of the outer product of the rows of A and B |
Examples:
Here sigmas is of shape (M, N), and x is of shape (N). The two sets of code compute the same thing.
>>> P = outer_product_sum(sigmas - x)

>>> P = 0
>>> for s in sigmas:
...     y = s - x
...     P += np.outer(y, y)
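A self-contained sketch of the same equivalence with concrete data, with the import path assumed as elsewhere on this page:
.. code-block:: Python

    import numpy as np
    from bayesian_filters.common import outer_product_sum

    rng = np.random.default_rng(1)
    sigmas = rng.normal(size=(10, 3))
    x = sigmas.mean(axis=0)

    P_loop = sum(np.outer(s - x, s - x) for s in sigmas)
    assert np.allclose(P_loop, outer_product_sum(sigmas - x))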