Unscented Kalman Filter
UnscentedKalmanFilter¶
Introduction and Overview¶
This implements the unscented Kalman filter.
API Reference¶
UnscentedKalmanFilter¶
UnscentedKalmanFilter
¶
Bases: object
Implements the scaled unscented Kalman filter (UKF) as defined by Simon Julier in [1], using the formulation provided by Wan and Van der Merwe in [2]. This filter scales the sigma points to avoid strong nonlinearities.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dim_x | int | Number of state variables for the filter. For example, if you are tracking the position and velocity of an object in two dimensions, dim_x would be 4. | required |
| dim_z | int | Number of measurement inputs. For example, if the sensor provides you with position in (x, y), dim_z would be 2. This is for convenience, so everything is sized correctly on creation. If you are using multiple sensors the size of z may vary between epochs. | required |
| hx | function(x, **hx_args) | Measurement function. Converts state vector x into a measurement vector of shape (dim_z). | required |
| fx | function(x, dt, **fx_args) | State transition function. Returns the state x transformed by the state transition function. dt is the time step in seconds. | required |
| points | class | Class which computes the sigma points and weights for a UKF algorithm. You can vary the UKF implementation by changing this class. For example, MerweScaledSigmaPoints implements the alpha, beta, kappa parameterization of Van der Merwe, and JulierSigmaPoints implements Julier's original kappa parameterization. See either of those for the required signature of this class if you want to implement your own. | required |
| sqrt_fn | callable(ndarray) | Defines how we compute the square root of a matrix, which has no unique answer. Cholesky is the default choice due to its speed. Typically your alternative choice will be scipy.linalg.sqrtm. Different choices affect how the sigma points are arranged relative to the eigenvectors of the covariance matrix. Usually this will not matter to you; if so, the default cholesky() yields maximal performance. As of Van der Merwe's dissertation of 2004 [6] this was not a well researched area, so I have no advice to give you. If your method returns a triangular matrix it must be upper triangular. Do not use numpy.linalg.cholesky - for historical reasons it returns a lower triangular matrix. The SciPy version does the right thing as far as this class is concerned. | None (implies scipy.linalg.cholesky) |
| x_mean_fn | callable(sigma_points, weights) | Function that computes the mean of the provided sigma points and weights. Use this if your state variable contains nonlinear values such as angles which cannot be summed. | None |
| z_mean_fn | callable(sigma_points, weights) | Same as x_mean_fn, except it is called for sigma points which form the measurements after being passed through hx(). | None |
| residual_x | callable(x, y) | Function that computes the residual (difference) between two state vectors. See residual_z. | None |
| residual_z | callable(x, y) | Function that computes the residual (difference) between x and y. You will have to supply this if your state variable cannot support subtraction, such as angles (359-1 degrees is 2, not 358). x and y are state vectors, not scalars. One is for the state variable, the other is for the measurement state. | None |
| state_add | callable(x, y) | Function that adds two state vectors, returning a new state vector. Used during update to combine the state with the weighted innovation when the state cannot be added with the + operator, e.g. when it contains angles. | None |
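The x_mean_fn and residual_x entries above originally carried code examples that were stripped during conversion. As a hedged sketch, assuming a hypothetical state layout [x, y, heading] whose third component is an angle in radians, such functions might look like:

```python
import numpy as np

def residual_x(a, b):
    # difference of two state vectors whose third component is an angle;
    # the angular residual is normalized into [-pi, pi)
    y = a - b
    y[2] = (y[2] + np.pi) % (2 * np.pi) - np.pi
    return y

def state_mean(sigmas, Wm):
    # weighted mean of sigma points for a state [x, y, heading]; the heading
    # is averaged through its sine and cosine so that values near +/- pi
    # do not cancel incorrectly when summed directly
    x = np.zeros(3)
    x[0] = np.sum(sigmas[:, 0] * Wm)
    x[1] = np.sum(sigmas[:, 1] * Wm)
    sum_sin = np.sum(np.sin(sigmas[:, 2]) * Wm)
    sum_cos = np.sum(np.cos(sigmas[:, 2]) * Wm)
    x[2] = np.arctan2(sum_sin, sum_cos)
    return x
```

These would be passed as x_mean_fn=state_mean and residual_x=residual_x when constructing the filter.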
Attributes:
| Name | Type | Description |
|---|---|---|
| x | array(dim_x) | state estimate vector |
| P | array(dim_x, dim_x) | covariance estimate matrix |
| x_prior | array(dim_x) | Prior (predicted) state estimate. The _prior and _post attributes are for convenience; they store the prior and posterior of the current epoch. Read Only. |
| P_prior | array(dim_x, dim_x) | Prior (predicted) state covariance matrix. Read Only. |
| x_post | array(dim_x) | Posterior (updated) state estimate. Read Only. |
| P_post | array(dim_x, dim_x) | Posterior (updated) state covariance matrix. Read Only. |
| z | ndarray | Last measurement used in update(). Read Only. |
| R | array(dim_z, dim_z) | measurement noise matrix |
| Q | array(dim_x, dim_x) | process noise matrix |
| K | array | Kalman gain |
| y | array | innovation residual |
| log_likelihood | scalar | Log-likelihood of the last measurement update. |
| likelihood | float | Likelihood of the last measurement. Read Only. Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms which multiply by this value, so by default we always return a number >= sys.float_info.min. |
| mahalanobis | float | Mahalanobis distance of the measurement. Read Only. |
| inv | function, default numpy.linalg.inv | If you prefer another inverse function, such as the Moore-Penrose pseudo-inverse, set it to that instead, e.g. kf.inv = np.linalg.pinv. |
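One reason to swap in the pseudo-inverse: it remains defined for a singular innovation covariance where numpy.linalg.inv raises. A small illustration with a made-up matrix:

```python
import numpy as np

# a singular matrix: np.linalg.inv(S) raises LinAlgError here,
# while the Moore-Penrose pseudo-inverse is still well defined
S = np.ones((2, 2))
S_pinv = np.linalg.pinv(S)  # for this particular matrix, equals S / 4
```

One would then set kf.inv = np.linalg.pinv on the filter instance.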
Examples:
Simple example of a linear order 1 kinematic filter in 2D. There is no need to use a UKF for this example, but it is easy to read.
>>> def fx(x, dt):
>>> # state transition function - predict next state based
>>> # on constant velocity model x = vt + x_0
>>> F = np.array([[1, dt, 0, 0],
>>> [0, 1, 0, 0],
>>> [0, 0, 1, dt],
>>> [0, 0, 0, 1]], dtype=float)
>>> return np.dot(F, x)
>>>
>>> def hx(x):
>>> # measurement function - convert state into a measurement
>>> # where measurements are [x_pos, y_pos]
>>> return np.array([x[0], x[2]])
>>>
>>> dt = 0.1
>>> # create sigma points to use in the filter. This is standard for Gaussian processes
>>> points = MerweScaledSigmaPoints(4, alpha=.1, beta=2., kappa=-1)
>>>
>>> kf = UnscentedKalmanFilter(dim_x=4, dim_z=2, dt=dt, fx=fx, hx=hx, points=points)
>>> kf.x = np.array([-1., 1., -1., 1]) # initial state
>>> kf.P *= 0.2 # initial uncertainty
>>> z_std = 0.1
>>> kf.R = np.diag([z_std**2, z_std**2]) # 1 standard deviation
>>> kf.Q = Q_discrete_white_noise(dim=2, dt=dt, var=0.01**2, block_size=2)
>>>
>>> zs = [[i+randn()*z_std, i+randn()*z_std] for i in range(50)] # measurements
>>> for z in zs:
>>> kf.predict()
>>> kf.update(z)
>>> print(kf.x, 'log-likelihood', kf.log_likelihood)
For in-depth explanations see my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
Also see the bayesian_filters/kalman/tests subdirectory for test code that may be illuminating.
References
.. [1] Julier, Simon J. "The scaled unscented transformation," American Control Conference, 2002, pp 4555-4559, vol 6.
Online copy:
https://www.cs.unc.edu/~welch/kalman/media/pdf/ACC02-IEEE1357.PDF
.. [2] E. A. Wan and R. Van der Merwe, “The unscented Kalman filter for nonlinear estimation,” in Proc. Symp. Adaptive Syst. Signal Process., Commun. Contr., Lake Louise, AB, Canada, Oct. 2000.
Online Copy:
https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf
.. [3] S. Julier, J. Uhlmann, and H. Durrant-Whyte. "A new method for the nonlinear transformation of means and covariances in filters and estimators," IEEE Transactions on Automatic Control, 45(3), pp. 477-482 (March 2000).
.. [4] E. A. Wan and R. Van der Merwe, “The Unscented Kalman filter for Nonlinear Estimation,” in Proc. Symp. Adaptive Syst. Signal Process., Commun. Contr., Lake Louise, AB, Canada, Oct. 2000.
https://www.seas.harvard.edu/courses/cs281/papers/unscented.pdf
.. [5] E. A. Wan and R. Van der Merwe, "The Unscented Kalman Filter," chapter in Kalman Filtering and Neural Networks, John Wiley & Sons, Inc., 2001.
.. [6] R. Van der Merwe "Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models" (Doctoral dissertation)
Source code in bayesian_filters/kalman/UKF.py
likelihood
property
¶
Computed from the log-likelihood. The log-likelihood can be very small, meaning a large negative value such as -28000. Taking the exp() of that results in 0.0, which can break typical algorithms which multiply by this value, so by default we always return a number >= sys.float_info.min.
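A sketch of that clamping behavior (not the library's actual implementation):

```python
import sys
import numpy as np

def clamped_likelihood(log_likelihood):
    # exp() of a very negative log-likelihood underflows to 0.0; clamping to
    # the smallest positive float keeps multiplicative algorithms alive
    return max(np.exp(log_likelihood), sys.float_info.min)
```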
log_likelihood
property
¶
log-likelihood of the last measurement.
mahalanobis
property
¶
" Mahalanobis distance of measurement. E.g. 3 means measurement was 3 standard deviations away from the predicted value.
Returns:
| Name | Type | Description |
|---|---|---|
| mahalanobis | float | Mahalanobis distance of the measurement. |
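The distance itself is the standard quadratic form sqrt(y' S^-1 y) over the innovation residual y and innovation covariance S; a minimal illustration with made-up numbers:

```python
import numpy as np

y = np.array([1.0, 2.0])                # innovation residual (illustrative)
S = np.array([[4.0, 0.0], [0.0, 1.0]])  # innovation covariance (illustrative)
# mahalanobis distance: sqrt(y' S^-1 y) = sqrt(1/4 + 4) here
maha = float(np.sqrt(y @ np.linalg.inv(S) @ y))
```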
__init__(dim_x, dim_z, dt, hx, fx, points, sqrt_fn=None, x_mean_fn=None, z_mean_fn=None, residual_x=None, residual_z=None, state_add=None)
¶
Create a Kalman filter. You are responsible for setting the various state variables to reasonable values; the defaults below will not give you a functional filter.
Source code in bayesian_filters/kalman/UKF.py
batch_filter(zs, Rs=None, dts=None, UT=None, saver=None)
¶
Performs the UKF filter over the list of measurements in zs.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| zs | list-like | list of measurements at each time step | required |
| Rs | None, array or list-like | optional list of values to use for the measurement error covariance R. If Rs is None then self.R is used for all epochs. If it is a list of matrices or a 3D array where len(Rs) == len(zs), then it is treated as a list of R values, one per epoch. This allows you to have varying R per epoch. | None |
| dts | None, scalar or list-like | optional value or list of delta times to be passed into predict. If dts is None then self.dt is used for all epochs. If it is a list where len(dts) == len(zs), then it is treated as a list of dt values, one per epoch. This allows you to have varying epoch durations. | None |
| UT | function(sigmas, Wm, Wc, noise_cov) | Optional function to compute the unscented transform for the sigma points passed through hx. Typically the default function will work - you can use x_mean_fn and z_mean_fn to alter the behavior of the unscented transform. | None |
| saver | Saver | bayesian_filters.common.Saver object. If provided, saver.save() will be called after every epoch. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| means | ndarray((n, dim_x, 1)) | array of the state for each time step after the update. Each entry is an np.array. In other words means[k,:] is the state at step k. |
| covariance | ndarray((n, dim_x, dim_x)) | array of the covariances for each time step after the update. In other words covariance[k,:,:] is the covariance at step k. |
Examples:
.. code-block:: Python
# this example demonstrates tracking a measurement where the time
# between measurements varies, as stored in dts. The output is then smoothed
# with an RTS smoother.
zs = [t + random.randn()*4 for t in range (40)]
(mu, cov, _, _) = ukf.batch_filter(zs, dts=dts)
(xs, Ps, Ks) = ukf.rts_smoother(mu, cov)
Source code in bayesian_filters/kalman/UKF.py
compute_process_sigmas(dt, fx=None, **fx_args)
¶
computes the values of sigmas_f. Normally a user would not call this, but it is useful if you need to call update more than once between calls to predict (to update for multiple simultaneous measurements), so the sigmas correctly reflect the updated state x, P.
Source code in bayesian_filters/kalman/UKF.py
cross_variance(x, z, sigmas_f, sigmas_h)
¶
Compute cross variance of the state x and measurement z.
Source code in bayesian_filters/kalman/UKF.py
predict(dt=None, UT=None, fx=None, **fx_args)
¶
Performs the predict step of the UKF. On return, self.x and self.P contain the predicted state (x) and covariance (P).
Important: this MUST be called before update() is called for the first time.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| dt | double | If specified, the time step to be used for this prediction. self._dt is used if this is not provided. | None |
| fx | callable f(x, dt, **fx_args) | State transition function. If not provided, the default function passed in during construction will be used. | None |
| UT | function(sigmas, Wm, Wc, noise_cov) | Optional function to compute the unscented transform for the sigma points. Typically the default function will work - you can use x_mean_fn and z_mean_fn to alter the behavior of the unscented transform. | None |
| **fx_args | keyword arguments | optional keyword arguments to be passed into fx(). | {} |
Source code in bayesian_filters/kalman/UKF.py
rts_smoother(Xs, Ps, Qs=None, dts=None, UT=None)
¶
Runs the Rauch-Tung-Striebel Kalman smoother on a set of
means and covariances computed by the UKF. The usual input
would come from the output of batch_filter().
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| Xs | array | array of the means (state variable x) of the output of a Kalman filter. | required |
| Ps | array | array of the covariances of the output of a Kalman filter. | required |
| Qs | list-like | Process noise of the Kalman filter at each time step. Optional, if not provided the filter's self.Q will be used. | None |
| dts | optional, float or array-like of float | If provided, specifies the time step of each step of the filter. If float, then the same time step is used for all steps. If an array, then each element k contains the time at step k. Units are seconds. | None |
| UT | function(sigmas, Wm, Wc, noise_cov) | Optional function to compute the unscented transform for the sigma points passed through hx. Typically the default function will work - you can use x_mean_fn and z_mean_fn to alter the behavior of the unscented transform. | None |
Returns:
| Name | Type | Description |
|---|---|---|
| x | ndarray | smoothed means |
| P | ndarray | smoothed state covariances |
| K | ndarray | smoother gain at each step |
Examples:
.. code-block:: Python
zs = [t + random.randn()*4 for t in range(40)]
(mu, cov, _, _) = ukf.batch_filter(zs)
(xs, Ps, Ks) = ukf.rts_smoother(mu, cov)
Source code in bayesian_filters/kalman/UKF.py
update(z, R=None, UT=None, hx=None, **hx_args)
¶
Update the UKF with the given measurements. On return, self.x and self.P contain the new mean and covariance of the filter.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| z | numpy.array of shape (dim_z) | measurement vector | required |
| R | array((dim_z, dim_z)) | Measurement noise. If provided, overrides self.R for this function call. | None |
| UT | function(sigmas, Wm, Wc, noise_cov) | Optional function to compute the unscented transform for the sigma points passed through hx. Typically the default function will work - you can use x_mean_fn and z_mean_fn to alter the behavior of the unscented transform. | None |
| hx | callable h(x, **hx_args) | Measurement function. If not provided, the default function passed in during construction will be used. | None |
| **hx_args | keyword arguments | arguments to be passed into hx after x -> hx(x, **hx_args) | {} |
Source code in bayesian_filters/kalman/UKF.py
Merwe Scaled Sigma Points¶
MerweScaledSigmaPoints
¶
Bases: object
Generates sigma points and weights according to Van der Merwe's 2004 dissertation [1] for the UnscentedKalmanFilter class. It parameterizes the sigma points using alpha, beta, kappa terms, and is the version seen in most publications.
Unless you know better, this should be your default choice.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| n | int | Dimensionality of the state. 2n+1 weights will be generated. | required |
| alpha | float | Determines the spread of the sigma points around the mean. Usually a small positive value (1e-3) according to [3]. | required |
| beta | float | Incorporates prior knowledge of the distribution of the mean. For Gaussian x, beta=2 is optimal, according to [3]. | required |
| sqrt_method | function(ndarray) | Defines how we compute the square root of a matrix, which has no unique answer. Cholesky is the default choice due to its speed. Typically your alternative choice will be scipy.linalg.sqrtm. Different choices affect how the sigma points are arranged relative to the eigenvectors of the covariance matrix. Usually this will not matter to you; if so, the default cholesky() yields maximal performance. As of Van der Merwe's dissertation of 2004 [6] this was not a well researched area, so I have no advice to give you. If your method returns a triangular matrix it must be upper triangular. Do not use numpy.linalg.cholesky - for historical reasons it returns a lower triangular matrix. The SciPy version does the right thing. | scipy.linalg.cholesky |
| subtract | callable(x, y) | Function that computes the difference between x and y. You will have to supply this if your state variable cannot support subtraction, such as angles (359-1 degrees is 2, not 358). x and y are state vectors, not scalars. | None |
Attributes:
| Name | Type | Description |
|---|---|---|
| Wm | array | weight for each sigma point for the mean |
| Wc | array | weight for each sigma point for the covariance |
Examples:
See my book Kalman and Bayesian Filters in Python https://github.com/rlabbe/Kalman-and-Bayesian-Filters-in-Python
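A from-scratch sketch of the construction, assuming the standard scaled formulation lambda = alpha^2 (n + kappa) - n (the class itself may differ in details):

```python
import numpy as np

def merwe_sigma_points(x, P, alpha=0.1, beta=2.0, kappa=-1.0):
    # scaled sigma points: one center point plus a symmetric pair per state
    # dimension, spread by the rows of a square root of (n + lambda) * P
    x = np.asarray(x, dtype=float)
    n = len(x)
    lam = alpha**2 * (n + kappa) - n
    # transpose so the square root is upper triangular, matching the
    # sqrt_method convention described above
    U = np.linalg.cholesky((n + lam) * P).T
    sigmas = np.empty((2 * n + 1, n))
    sigmas[0] = x
    for i in range(n):
        sigmas[i + 1] = x + U[i]
        sigmas[n + i + 1] = x - U[i]
    Wm = np.full(2 * n + 1, 0.5 / (n + lam))
    Wc = Wm.copy()
    Wm[0] = lam / (n + lam)
    Wc[0] = lam / (n + lam) + 1.0 - alpha**2 + beta
    return sigmas, Wm, Wc
```

Note that the mean weights sum to 1, so the weighted mean of the sigma points recovers x exactly.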
References
.. [1] R. Van der Merwe "Sigma-Point Kalman Filters for Probabilistic Inference in Dynamic State-Space Models" (Doctoral dissertation)
Source code in bayesian_filters/kalman/sigma_points.py
num_sigmas()
¶
sigma_points(x, P)
¶
Computes the sigma points for an unscented Kalman filter given the mean (x) and covariance (P) of the filter. Returns a tuple of the sigma points and weights.
Works with both scalar and array inputs: sigma_points (5, 9, 2) # mean 5, covariance 9 sigma_points ([5, 2], 9*eye(2), 2) # means 5 and 2, covariance 9I
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | array-like object of the means of length n | Can be a scalar if 1D. examples: 1, [1,2], np.array([1,2]) | required |
| P | scalar, or np.array | Covariance of the filter. If scalar, is treated as eye(n)*P. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| sigmas | np.array, of size (2n+1, n) | Two dimensional array of sigma points. Each column contains all of the sigmas for one dimension in the problem space. Ordered by Xi_0, Xi_{1..n}, Xi_{n+1..2n}. |
Source code in bayesian_filters/kalman/sigma_points.py
Julier Sigma Points¶
JulierSigmaPoints
¶
Bases: object
Generates sigma points and weights according to Simon J. Julier and Jeffery K. Uhlmann's original paper [1]. It parameterizes the sigma points using kappa.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| n | int | Dimensionality of the state. 2n+1 weights will be generated. | required |
| sqrt_method | function(ndarray) | Defines how we compute the square root of a matrix, which has no unique answer. Cholesky is the default choice due to its speed. Typically your alternative choice will be scipy.linalg.sqrtm. Different choices affect how the sigma points are arranged relative to the eigenvectors of the covariance matrix. Usually this will not matter to you; if so, the default cholesky() yields maximal performance. As of Van der Merwe's dissertation of 2004 [6] this was not a well researched area, so I have no advice to give you. If your method returns a triangular matrix it must be upper triangular. Do not use numpy.linalg.cholesky - for historical reasons it returns a lower triangular matrix. The SciPy version does the right thing. | scipy.linalg.cholesky |
| subtract | callable(x, y) | Function that computes the difference between x and y. You will have to supply this if your state variable cannot support subtraction, such as angles (359-1 degrees is 2, not 358). x and y are state vectors, not scalars. | None |
Attributes:
| Name | Type | Description |
|---|---|---|
| Wm | array | weight for each sigma point for the mean |
| Wc | array | weight for each sigma point for the covariance |
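A sketch of the weight computation under Julier's kappa parameterization, assuming the standard formulation in which mean and covariance weights coincide (kappa = 3 - n is a common heuristic for Gaussian priors):

```python
import numpy as np

def julier_weights(n, kappa):
    # the center point gets kappa/(n+kappa); the 2n symmetric
    # points split the remaining weight equally
    W = np.full(2 * n + 1, 0.5 / (n + kappa))
    W[0] = kappa / (n + kappa)
    return W
```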
References
.. [1] Julier, Simon J.; Uhlmann, Jeffrey "A New Extension of the Kalman Filter to Nonlinear Systems". Proc. SPIE 3068, Signal Processing, Sensor Fusion, and Target Recognition VI, 182 (July 28, 1997)
Source code in bayesian_filters/kalman/sigma_points.py
num_sigmas()
¶
sigma_points(x, P)
¶
Computes the sigma points for an unscented Kalman filter given the mean (x) and covariance (P) of the filter. kappa is an arbitrary constant. Returns sigma points.
Works with both scalar and array inputs: sigma_points (5, 9, 2) # mean 5, covariance 9 sigma_points ([5, 2], 9*eye(2), 2) # means 5 and 2, covariance 9I
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | array-like object of the means of length n | Can be a scalar if 1D. examples: 1, [1,2], np.array([1,2]) | required |
| P | scalar, or np.array | Covariance of the filter. If scalar, is treated as eye(n)*P. | required |
| kappa | float | Scaling factor. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| sigmas | np.array, of size (2n+1, n) | Two dimensional array of sigma points. |
Source code in bayesian_filters/kalman/sigma_points.py
Simplex Sigma Points¶
SimplexSigmaPoints
¶
Bases: object
Generates sigma points and weights according to the simplex method presented in [1].
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| n | int | Dimensionality of the state. n+1 weights will be generated. | required |
| sqrt_method | function(ndarray) | Defines how we compute the square root of a matrix, which has no unique answer. Cholesky is the default choice due to its speed. Typically your alternative choice will be scipy.linalg.sqrtm. If your method returns a triangular matrix it must be upper triangular. Do not use numpy.linalg.cholesky - for historical reasons it returns a lower triangular matrix. The SciPy version does the right thing. | scipy.linalg.cholesky |
| subtract | callable(x, y) | Function that computes the difference between x and y. You will have to supply this if your state variable cannot support subtraction, such as angles (359-1 degrees is 2, not 358). x and y are state vectors, not scalars. | None |
Attributes:
| Name | Type | Description |
|---|---|---|
| Wm | array | weight for each sigma point for the mean |
| Wc | array | weight for each sigma point for the covariance |
References
.. [1] Phillippe Moireau and Dominique Chapelle "Reduced-Order Unscented Kalman Filtering with Application to Parameter Identification in Large-Dimensional Systems" DOI: 10.1051/cocv/2010006
Source code in bayesian_filters/kalman/sigma_points.py
num_sigmas()
¶
sigma_points(x, P)
¶
Computes the simplex sigma points for an unscented Kalman filter given the mean (x) and covariance (P) of the filter. Returns a tuple of the sigma points and weights.
Works with both scalar and array inputs: sigma_points (5, 9, 2) # mean 5, covariance 9 sigma_points ([5, 2], 9*eye(2), 2) # means 5 and 2, covariance 9I
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| x | array-like object of the means of length n | Can be a scalar if 1D. examples: 1, [1,2], np.array([1,2]) | required |
| P | scalar, or np.array | Covariance of the filter. If scalar, is treated as eye(n)*P. | required |
Returns:
| Name | Type | Description |
|---|---|---|
| sigmas | np.array, of size (n+1, n) | Two dimensional array of sigma points. Each column contains all of the sigmas for one dimension in the problem space. Ordered by Xi_0, Xi_{1..n}. |