Tracking and Filtering. Petteri Nurmi

Tracking and Filtering Petteri Nurmi 19.11.2016 1

Questions. What are state space models? Why are they relevant in position tracking? What is a Bayesian optimal filter? Which two steps form the filter? What is the Kalman filter? What are its main advantages / disadvantages? What are particle filters? What are their main advantages / disadvantages? 19.11.2016 2

Tracking. Tracking refers to monitoring location-related information over time. Position tracking: estimating the position of an entity. Trajectory tracking: estimating the path that an entity is taking. Buddy tracking: estimating when a friend enters/leaves close proximity. The focus of this lecture is on so-called filtering techniques for position tracking. 19.11.2016 3

Tracking and Point Estimation. Point estimation refers to deriving the location of an object without knowledge about previous locations; any technique covered in previous lectures can be used for this purpose. Tracking refers to estimating the location of an object given information about previous locations. The general idea is to average point estimates to reduce errors in individual estimates, which can reduce estimation errors and result in smoother movement trajectories / paths. Trackers are algorithms that take position estimates as input and return corrected positions as a function of time. 19.11.2016 4

Preliminaries Bayes' Theorem. A theorem that expresses how subjective belief should rationally change to account for evidence: P(Θ | D) = P(D | Θ) P(Θ) / P(D), where P(D | Θ) is the likelihood, P(Θ) the prior, P(Θ | D) the posterior, and P(D) the normalizer. Common use in parameter estimation: Θ corresponds to the parameters of interest and D corresponds to the measurements. 19.11.2016 5

Preliminaries Bayes' Theorem. Example: suppose a drug test is 99.2% accurate (for both positive and negative cases) and that 0.75% of people use the drug. What is the probability that an individual testing positive is a user? 19.11.2016 6
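The same calculation as a short Python sketch, assuming (as the slide suggests) that the 99.2% accuracy applies equally to positive and negative cases:

sensitivity = 0.992   # P(test positive | user)
specificity = 0.992   # P(test negative | non-user)
prior = 0.0075        # P(user)

evidence = sensitivity * prior + (1 - specificity) * (1 - prior)  # P(test positive)
posterior = sensitivity * prior / evidence                        # Bayes' theorem
print(posterior)  # ~0.48, i.e. fewer than half of the positive tests come from users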

Preliminaries Monte Carlo Integration. A method for evaluating integrals using repeated random sampling; it can be used to approximate the expected value of a function of random variables. Estimating expected values: let X be a random variable with probability density f(x) and let g( ) denote an arbitrary function. The expectation of g with respect to f(x) is given by E(g(x)) = ∫ g(x) f(x) dx, and the Monte Carlo estimate is E(g(x)) ≈ (1/n) Σ_i g(x_i), where the x_i are samples drawn from f(x). 19.11.2016 7

Monte Carlo Integration Example. Assume we know that the user's orientation is given by θ ~ N(π/6, π/12). What is E(sin(θ))? Generate samples x_i ~ N(π/6, π/12) and calculate (1/n) Σ_i sin(x_i). Example (Matlab): s = normrnd(pi/6, pi/12, 1000, 1); sum(sin(s))/1000 = 0.477558, whereas sin(pi/6) = 0.50. 19.11.2016 8

Preliminaries Importance Sampling. A statistical technique for estimating properties of a distribution using samples from another distribution; it can be used in the context of Monte Carlo integration when the desired distribution p(x) is too complex to sample from directly. Basic idea: choose a proposal distribution g, generate samples x_i from g, compute the importance weights w(x_i) = p(x_i) / g(x_i), and return μ = (1/n) Σ_i w(x_i) f(x_i) as the estimate of the expectation of f under p. Note: g(x) must have the same support as p(x) (i.e., it must assign non-zero probability to the same elements). 19.11.2016 9
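A small NumPy sketch of this procedure; the target, proposal and integrand below are illustrative choices, not taken from the slides:

import numpy as np

rng = np.random.default_rng(0)

def p(x):  # target density: N(1, 0.5^2)
    return np.exp(-(x - 1.0) ** 2 / (2 * 0.5 ** 2)) / np.sqrt(2 * np.pi * 0.5 ** 2)

def g(x):  # proposal density: N(0, 2^2)
    return np.exp(-x ** 2 / (2 * 2.0 ** 2)) / np.sqrt(2 * np.pi * 2.0 ** 2)

def f(x):  # function whose expectation under p we want
    return x ** 2

n = 100_000
x = rng.normal(0.0, 2.0, size=n)   # samples from the proposal g
w = p(x) / g(x)                    # importance weights w(x) = p(x) / g(x)
mu = np.mean(w * f(x))             # (1/n) sum_i w(x_i) f(x_i)
print(mu)                          # exact value: E_p[x^2] = 1^2 + 0.5^2 = 1.25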

State Space Models. A mathematical model of a physical system as a set of input, output and state variables. Relationships between the variables follow differential and/or algebraic equations. State variables: the smallest set of system variables that can be used to represent the entire state of the system at a given time. Common form: x_k = A x_k-1 + v (state equation), y_k = U x_k + w (measurement equation). 19.11.2016 10

Hidden Markov Models. A probabilistic state space model. Measurements y_k ~ p(y_k | x_k) (e.g., GPS coordinates); system state x_k ~ p(x_k | x_k-1) (the true location). The measurements y_k are observed by the system, while the true state x_k is unknown. Example: Gaussian random walk. The state evolves according to a Gaussian model with noise, x_k ~ x_k-1 + N(0, p), and the measurements correspond to the system state with added noise, y_k ~ x_k + N(0, q). 19.11.2016 11
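A short simulation sketch of the Gaussian random walk model; the noise variances and the sequence length are illustrative values:

import numpy as np

rng = np.random.default_rng(1)

p, q, T = 0.1, 0.5, 100
x = np.zeros(T)   # hidden states
y = np.zeros(T)   # observed measurements
y[0] = x[0] + rng.normal(0.0, np.sqrt(q))
for k in range(1, T):
    x[k] = x[k - 1] + rng.normal(0.0, np.sqrt(p))  # x_k ~ x_{k-1} + N(0, p)
    y[k] = x[k] + rng.normal(0.0, np.sqrt(q))      # y_k ~ x_k + N(0, q)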

Hidden Markov Models - Example 19.11.2016 12

Hidden Markov Models - Assumptions. The dynamic model p(x_k | x_k-1) governs the evolution of the system state. Assumption I: the future x_k is independent of the past given the present x_k-1: p(x_k | x_1:k-1, y_1:k-1) = p(x_k | x_k-1). Assumption II: the past x_k-1 is independent of the future given the present x_k: p(x_k-1 | x_k:T, y_k:T) = p(x_k-1 | x_k). The measurements y_k are conditionally independent given the current system state x_k: p(y_k | x_1:k, y_1:k-1) = p(y_k | x_k). 19.11.2016 13

Filtering, Prediction, Smoothing. Assume that at time k we have the measurement sequence y_1:k-1 available. Prediction: estimating future values of the system state. Filtering: estimating the current value of the system state in a way that is consistent with the measurements. Smoothing: estimating past values of the system state based on a measurement sequence. 19.11.2016 14

Bayesian Optimal Filter - Principle. Given a sequence of measurements y_1:k, determine the current state of the system x_k. The Bayesian optimal filter returns a distribution over the current state given the measurements, p(x_k | y_1:k). It requires a prior distribution p(x_0), a state space model x_k ~ p(x_k | x_k-1), y_k ~ p(y_k | x_k), and a measurement sequence y_1:k = y_1, ..., y_k. 19.11.2016 15

Bayesian Optimal Filter Prediction and Update Steps. We focus on sequential filtering: observed values change over time. A sequential Bayesian filter consists of two steps. Prediction: estimating the most likely next value of the system state given the measurements up to that point, i.e., given y_1:k-1, estimate the probability distribution p(x_k | y_1:k-1). Updating: given a new sensor measurement y_k, refine the distribution of x_k to be consistent with y_k; this corresponds to determining the distribution p(x_k | y_1:k). 19.11.2016 16

Bayesian Optimal Filter Prediction Step. Assume the posterior of the previous time step is known, p(x_k-1 | y_1:k-1). The joint distribution of the current and previous states given the measurements is then p(x_k, x_k-1 | y_1:k-1) = p(x_k | x_k-1, y_1:k-1) p(x_k-1 | y_1:k-1) = p(x_k | x_k-1) p(x_k-1 | y_1:k-1). The probability distribution of the current state can be determined using marginalization: p(x_k | y_1:k-1) = ∫ p(x_k, x_k-1 | y_1:k-1) dx_k-1 = ∫ p(x_k | x_k-1) p(x_k-1 | y_1:k-1) dx_k-1. This is known as the Chapman-Kolmogorov equation. 19.11.2016 17

Bayesian Optimal Filter Update Step. Given a new sensor measurement y_k, update the distribution of x_k to be consistent with y_k: p(x_k | y_1:k) = Z_k^-1 p(y_k | x_k, y_1:k-1) p(x_k | y_1:k-1) = Z_k^-1 p(y_k | x_k) p(x_k | y_1:k-1), where Z_k is the normalizer, p(y_k | x_k) the measurement likelihood, and p(x_k | y_1:k-1) the prior, which follows from the prediction step (Chapman-Kolmogorov equation). 19.11.2016 18

Bayesian Optimal Filter Summary. The prediction step determines a new prior for the state x_k; the update step combines this prior with the likelihood of the sensor measurement. Initialization: the initial state is given by the prior distribution p(x_0). Prediction: given past measurements, estimate the probability distribution over the current state, p(x_k | y_1:k-1) = ∫ p(x_k | x_k-1) p(x_k-1 | y_1:k-1) dx_k-1. Update: given a new measurement, update the probability of the current state to be consistent with the measurement, p(x_k | y_1:k) = Z_k^-1 p(y_k | x_k) p(x_k | y_1:k-1), with normalization constant Z_k = ∫ p(y_k | x_k) p(x_k | y_1:k-1) dx_k. 19.11.2016 19
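A minimal sketch of one prediction + update cycle for a discrete (grid) state space, assuming the transition matrix and measurement likelihoods are given; this finite-state representation is only for illustration and is not prescribed by the slides:

import numpy as np

def bayes_filter_step(belief, transition, likelihood):
    # belief:     p(x_{k-1} | y_{1:k-1}), shape (S,)
    # transition: transition[i, j] = p(x_k = j | x_{k-1} = i), shape (S, S)
    # likelihood: likelihood[j] = p(y_k | x_k = j), shape (S,)
    predicted = belief @ transition            # prediction step (Chapman-Kolmogorov sum)
    unnormalized = likelihood * predicted      # update step: multiply by the likelihood
    return unnormalized / unnormalized.sum()   # divide by the normalization constant Z_k

belief = np.array([0.5, 0.3, 0.2])
transition = np.array([[0.8, 0.2, 0.0], [0.1, 0.8, 0.1], [0.0, 0.2, 0.8]])
likelihood = np.array([0.1, 0.7, 0.2])
print(bayes_filter_step(belief, transition, likelihood))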

Kalman Filter. A closed form solution of the Bayesian optimal filter for linear and Gaussian equations. Assumes the system can be modeled using the linear Gauss-Markov model: x_k = A_k-1 x_k-1 + q_k and y_k = H_k x_k + r_k, where A_k-1 is the transition matrix of the system dynamics, q_k ~ N(0, Q) is white process noise, H_k is the measurement model matrix, and r_k ~ N(0, R) is white measurement noise. Equivalently, p(x_k | x_k-1) = N(x_k | A_k-1 x_k-1, Q) and p(y_k | x_k) = N(y_k | H_k x_k, R). 19.11.2016 20

Kalman Filter Prediction Step. Assume the posterior of the previous step is Gaussian, p(x_k-1 | y_1:k-1) = N(x_k-1 | m_k-1, P_k-1). From the Chapman-Kolmogorov equation we get the prediction equation: p(x_k | y_1:k-1) = ∫ p(x_k | x_k-1) p(x_k-1 | y_1:k-1) dx_k-1 = ∫ N(x_k | A_k-1 x_k-1, Q) N(x_k-1 | m_k-1, P_k-1) dx_k-1 = N(x_k | A_k-1 m_k-1, A_k-1 P_k-1 A_k-1^T + Q). 19.11.2016 21

Kalman Filter Prediction Step. The probability distribution p(x_k | y_1:k-1) is represented by two variables: the estimated mean of the state variable and the estimated covariance matrix of the state variable. The prediction step calculates a priori estimates of these variables. A priori estimate of the mean of the state: m*_k = A_k-1 m_k-1. A priori estimate of the covariance matrix of the current state: P*_k = A_k-1 P_k-1 A_k-1^T + Q. 19.11.2016 22

Kalman Filter Update Step. The joint distribution of x_k and y_k is given by p(x_k, y_k | y_1:k-1) = p(y_k | x_k) p(x_k | y_1:k-1) = N(y_k | H_k x_k, R) N(x_k | A_k-1 m_k-1, A_k-1 P_k-1 A_k-1^T + Q). 19.11.2016 23

Kalman Filter Update Step. We have p(x_k | y_1:k) = p(x_k | y_k, y_1:k-1). Following the identities given in the Appendix, we can derive the following estimates: 19.11.2016 24

Kalman Filter Update Step. Measurement residual: v_k = y_k - H_k m*_k. Covariance (innovation) residual: S_k = H_k P*_k H_k^T + R. Kalman gain: K_k = P*_k H_k^T S_k^-1. Updated mean: m_k = m*_k + K_k v_k. Updated covariance: P_k = P*_k - K_k S_k K_k^T. 19.11.2016 25

Kalman Filter Summary. Prediction step: the a priori mean and covariance are estimated by propagating the values of the previous state using the system dynamics. Update step: the mean is given by the combination of the a priori state estimate, the Kalman gain and the measurement residual; the covariance is given by the combination of the a priori covariance estimate, the Kalman gain and a term that corresponds to the Kalman gain multiplied by the covariance residual. 19.11.2016 26
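The two steps as a short NumPy sketch in the slide notation (A, H, Q, R, m, P); the helper names and the one-dimensional usage lines are assumptions made for illustration:

import numpy as np

def kf_predict(m, P, A, Q):
    m_prior = A @ m                        # m*_k = A_{k-1} m_{k-1}
    P_prior = A @ P @ A.T + Q              # P*_k = A_{k-1} P_{k-1} A_{k-1}^T + Q
    return m_prior, P_prior

def kf_update(m_prior, P_prior, y, H, R):
    v = y - H @ m_prior                    # measurement residual
    S = H @ P_prior @ H.T + R              # covariance (innovation) residual
    K = P_prior @ H.T @ np.linalg.inv(S)   # Kalman gain
    m = m_prior + K @ v                    # updated mean
    P = P_prior - K @ S @ K.T              # updated covariance
    return m, P

# One step for a 1D random walk observed directly:
m, P = kf_predict(np.array([0.0]), np.eye(1), np.eye(1), 0.1 * np.eye(1))
m, P = kf_update(m, P, np.array([1.2]), np.eye(1), 0.5 * np.eye(1))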

Kalman Filter Example. Assume we are tracking the 2D position of an object (x, y), its velocity v and its orientation of motion α. State model: x_k = A_k-1 x_k-1 + q_k, with process noise q_k ~ N(0, Q). 19.11.2016 27

Kalman Filter Example. Measurement model: y_k = H_k x_k + r_k, with measurement noise r_k ~ N(0, R). 19.11.2016 28

Kalman Filter Example 19.11.2016 29

Kalman Filter Properties. Only applicable to linear Gaussian models; for nonlinear cases other techniques are needed, e.g., the extended Kalman filter or particle filtering. If the model is time-invariant, the Kalman gain converges to a constant and the filter becomes stationary. Since measurements affect only the mean estimates, the sequence of Kalman gains can be calculated offline and stored for later use. 19.11.2016 30

Particle Filtering. In real-world applications the noise is often non-Gaussian and the state space model is non-linear: x_k = z(x_k-1) + q_k, y_k = h(x_k) + r_k. Particle filtering uses Monte Carlo integration recursively to approximate the filtering distribution. As shown earlier, for a function g( ) and distribution f(x) we have E(g(x)) = ∫ g(x) f(x) dx. When f(x) equals the filtering density, we get E(g(x_k)) = ∫ g(x_k) p(x_k | y_1:k) dx_k ≈ (1/K) Σ_j g(x_k^j). 19.11.2016 31

Particle Filtering Sequential Importance Sampling. Most particle filters are variants of the sequential importance sampling algorithm, a recursive Monte Carlo implementation of importance sampling. It approximates the filtering distribution by a weighted set of particles: ∫ g(x_k) p(x_k | y_1:k) dx_k ≈ Σ_j w^j g(x_k^j), where w^j is the importance weight of particle j and we require that Σ_j w^j = 1. 19.11.2016 32

Sequential Importance Sampling Overview. Initialization: draw K particles according to the prior distribution and set w^j = 1/K for all particles. Estimation step: draw K samples from the proposal distribution, x_k^j ~ π(x_k | x_1:k-1^j, y_1:k); update the importance weight of particle j, w_k^j = w_k-1^j p(y_k | x_k^j) p(x_k^j | x_k-1^j) / π(x_k^j | x_1:k-1^j, y_1:k); normalize the weights so that Σ_j w_k^j = 1. The state of the filter can be estimated using Σ_j w_k^j x_k^j. 19.11.2016 33

Particle Filtering Sequential Importance Sampling. Performance depends on the choice of the proposal distribution π(x_k | x_1:k-1, y_1:k). Optimal proposal distribution: π(x_k | x_1:k-1, y_1:k) = p(x_k | x_k-1, y_k). Transition prior: π(x_k | x_1:k-1, y_1:k) = p(x_k | x_k-1), in which case the weight update simplifies to w_k^j = w_k-1^j p(y_k | x_k^j). 19.11.2016 34

Particle Filtering Resampling. The filter is degenerate when all but one importance weight are close to zero. Resampling is used to overcome degeneracy: replicate particles in proportion to their weights, performed using random sampling, e.g., bootstrapping. The need for resampling can be determined by estimating the effective number of particles, N_EFF = 1 / Σ_j (w_k^j)^2. 19.11.2016 35
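A small sketch of the degeneracy check and of multinomial resampling; the function names are illustrative:

import numpy as np

def effective_particles(weights):
    return 1.0 / np.sum(weights ** 2)           # N_EFF = 1 / sum_j (w_k^j)^2

def resample(particles, weights, rng):
    K = len(weights)
    idx = rng.choice(K, size=K, p=weights)      # replicate particles in proportion to weights
    return particles[idx], np.full(K, 1.0 / K)  # equal weights after resampling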

Resampling Example (N_EFF = 1.7 before resampling):
Particle index:                1      2      3      4      5      6      7      8      9      10
Weight before resampling:      0.0016 0.7507 0.0000 0.1028 0.0632 0.0000 0.0701 0.0000 0.0116 0.0000
Index selected by resampling:  2      2      2      7      2      7      2      2      5      2
Weight after resampling:       0.1    0.1    0.1    0.1    0.1    0.1    0.1    0.1    0.1    0.1
19.11.2016 36

Particle Filtering Example. Estimate the position of an object from a known starting point given estimates of velocity and orientation. For simplicity, we assume Gaussian noise. Particles maintain estimates of velocity and angle; velocity and orientation are assumed to be independent. 19.11.2016 37

Particle Filtering Example. (Figure: SIS and SIR estimates compared against the true value.) 19.11.2016 38
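To make the SIS/SIR machinery concrete, here is a compact bootstrap (SIR) filter sketch for the Gaussian random-walk model used earlier; the particle count, noise variances and resampling threshold are illustrative assumptions:

import numpy as np

def bootstrap_pf(y, K=500, p=0.1, q=0.5, seed=2):
    rng = np.random.default_rng(seed)
    particles = np.zeros(K)                   # prior: all particles start at x_0 = 0
    weights = np.full(K, 1.0 / K)
    estimates = []
    for yk in y:
        particles = particles + rng.normal(0.0, np.sqrt(p), K)  # proposal = transition prior
        lik = np.exp(-(yk - particles) ** 2 / (2 * q))          # p(y_k | x_k), up to a constant
        weights = weights * lik                                  # w_k^j = w_{k-1}^j p(y_k | x_k^j)
        weights = weights / weights.sum()
        if 1.0 / np.sum(weights ** 2) < K / 2:                   # resample when N_EFF is small
            idx = rng.choice(K, size=K, p=weights)
            particles, weights = particles[idx], np.full(K, 1.0 / K)
        estimates.append(np.sum(weights * particles))            # weighted mean as the state estimate
    return np.array(estimates)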

Case Study Particle Filtering for indoor WiFi positioning. Goal: track the position of an object over time using noisy point estimates. Particles correspond to 3-tuples (x, y, w), where x, y denote the (2D) coordinates of the particle and w is the weight of the particle. Measurements are vectors r containing RSS values from different access points; we assume global indexing for access points so that r_i always refers to the same access point. Model assumptions: probabilistic positioning, i.e., we determine locations based on the model P(x, y | r); a Gaussian model, i.e., P(x, y | r) = N(x, y | r, Θ), where Θ contains the parameters of the model (mean, variance); and a grid model where x, y is mapped to the closest grid cell and a mean and variance are estimated for each access point, grid cell pair. 19.11.2016 39

Case Study Example of a Grid. (Figure: example of a grid laid over the aisles in a section of a supermarket.) 19.11.2016 40

Particle Filter - Initialization. The initialization phase creates the first set of particles. It can use the prior distribution of locations, a uniform distribution, or another method. The weights of all particles are initialized to 1/N. (Figure: initial particles spread over the grid.) 19.11.2016 41

Particle Filter Initialization Pseudocode.
for i = 1:N
    randomlocation ~ P0
    particles(i) = {randomlocation, 1 / N}
end
How to sample integers from a non-uniform distribution? Let p = (0.1, 0.2, 0.7) and define z = (0.1, 0.3, 1.0), i.e., the cumulative sum of p. Draw u = rand() and return i = min { i : u <= z(i) }, i.e., the integer with the smallest index for which u <= z(i). 19.11.2016 42
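The sampling step as a minimal NumPy snippet; the numbers mirror the pseudocode above:

import numpy as np

p = np.array([0.1, 0.2, 0.7])
z = np.cumsum(p)                  # (0.1, 0.3, 1.0)
u = np.random.rand()
i = int(np.searchsorted(z, u))    # smallest (0-based) index i with u <= z[i]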

Particle Filter Update and Filtering. The system dynamics are used to predict new locations for the particles. Let x_t denote the new location of a particle and define d = ||x_t - x_t-1|| as the distance (in grid cells) between the new and old locations of the particle. We scatter the particles using N(x_t | r) exp(-(d+1)), where N(x_t | r) is the likelihood of the new state given the measurement r and exp(-(d+1)) is a movement model (i.e., p(x_t | x_t-1)). The normal distribution keeps the particles consistent with the measurements (filtering), while the exponential distribution keeps the movements realistic (update). Once the new states are drawn, the weights of the particles are set according to the likelihood model, i.e., w = p(x_t | r). Recall that we also need to calculate the effective number of particles and resample the particles if it is too low (i.e., the filter is degenerate). 19.11.2016 43
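A rough Python sketch of this update step, assuming the grid likelihood N(x | r) is available as a function likelihood(positions) that returns one value per row of an (N, 2) array of grid coordinates; the candidate-scatter scheme and all helper names are illustrative, not taken from the slides:

import numpy as np

def update_particles(pos, likelihood, rng, n_candidates=20):
    N = len(pos)
    new_pos = np.empty_like(pos)
    for j in range(N):
        # Propose nearby grid cells and score them with N(x_t | r) * exp(-(d + 1)),
        # where d is the distance to the particle's old location.
        candidates = pos[j] + rng.integers(-3, 4, size=(n_candidates, 2))
        d = np.linalg.norm(candidates - pos[j], axis=1)
        scores = likelihood(candidates) * np.exp(-(d + 1))
        new_pos[j] = candidates[rng.choice(n_candidates, p=scores / scores.sum())]
    weights = likelihood(new_pos)              # w = p(x_t | r)
    weights = weights / weights.sum()
    n_eff = 1.0 / np.sum(weights ** 2)         # resample elsewhere if n_eff is too low
    return new_pos, weights, n_eff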

Example: Before Resampling. (Figure: particles on the grid; markers show the new locations of the particles.) 19.11.2016 44

Example: Before Resampling. Assume the likelihood is given by the heatmap below; particles on the right will then have small weights after the update step. (Figure: likelihood heatmap with the new particle locations.) 19.11.2016 45

Example: After Resampling. Resampling focuses the particles in the appropriate region. (Figure: new particle locations after resampling.) 19.11.2016 46

Particle Filtering Extensions. Auxiliary particle filters: sampling utilizes an auxiliary variable, which can be used to reduce the variance of the estimated states. Rao-Blackwellization: estimate components with linear dynamics analytically (Kalman filter) and apply particle filters only to the components with non-linear parts. KLD sampling: a particle filter that adapts the number of particles based on the uncertainty in the current state. 19.11.2016 47

Particle Filtering Summary. A technique that uses Monte Carlo simulations to implement sequential Bayesian optimal filtering. SIS (Sequential Importance Sampling): represent the filtering distribution using a weighted set of particles; a proposal distribution is used to sample the states of the particles. SIR (Sequential Importance Resampling): overcome particle degeneracy by resampling particles when the effective number of particles is small. There is a trade-off between efficiency and the number of particles: a higher number of particles provides a more accurate approximation but requires more computational resources. 19.11.2016 48

Summary. State space: a mathematical model of a physical system consisting of input, output and state variables. Probabilistic state space model: a state space model where measurements are random variables and estimates are probabilistic. Bayesian optimal filter: a probabilistic recursive estimator that provides a probability distribution over the state of a dynamical system. 19.11.2016 49

Summary. Kalman filter: a closed form solution of the Bayesian optimal filter for linear and Gaussian state space models. Particle filters: sequential Monte Carlo techniques that approximate the Bayesian optimal filter; they can handle non-linear relationships and non-Gaussian noise, and resampling is essential to avoid degeneracy. 19.11.2016 50

Literature Hightower, J. & Borriello, G., Particle Filters for Location Estimation in Ubiquitous Computing: A Case Study, Proceedings of the 6th International Conference on Ubiquitous Computing (Ubicomp), Springer, 2006, 88-106 Wang, H.; Lenz, H.; Szabo, A.; Bamberger, J. & Hanebeck, U. D. WLAN-Based Pedestrian Tracking Using Particle Filters and Low-Cost MEMS Sensors, Proceedings of the 4th Workshop on Positioning, Navigation and Communication (WPNC), IEEE, 2007, 1-7 Widyawan; Klepal, M. & Beauregard, S., A Backtracking Particle Filter for fusing building plans with PDR displacement estimates, Proceedings of the 5th Workshop on Positioning, Navigation and Communication (WPNC), IEEE, 2008, 207-212 Fox, D.; Burgard, W.; Dellaert, F. & Thrun, S., Monte Carlo Localization: Efficient Position Estimation for Mobile Robots., Proceedings of the Sixteenth National Conference on Artificial Intelligence and Eleventh Conference on Innovative Applications of Artificial Intelligence (AAAI/IAAI), MIT Press, 1999, 343-349 Nurmi, P.; Bhattacharya, S. & Kukkonen, J., A grid-based algorithm for on-device GSM positioning, Proceedings of the 12th International Conference on Ubiquitous Computing (UbiComp), 2010, 227-236 19.11.2016 51

Literature Doucet, A.; de Freitas, N. & Gordon, N. (Eds.), Sequential Monte Carlo Methods in Practice, Springer, 2001 Fox, D., Adapting the sample size in particle filters through KLD-sampling International journal of robotics research, 2003, 22, 985-1003 Gilks, W.; Spiegelhalter, D. & Richardson, S., Markov Chain Monte Carlo in Practice Chapman & Hall/CRC, 1996 Thrun, S.; Fox, D.; Burgard, W. & Dellaert, F., Robust Monte Carlo localization for Mobile Robots, Artificial Intelligence, 2001, 128, 99-141 19.11.2016 52

Appendix Multivariate Gaussian Distribution. Multivariate Gaussian probability density: N(x | m, P) = (2π)^(-d/2) |P|^(-1/2) exp( -(1/2) (x - m)^T P^-1 (x - m) ). Let x and y have Gaussian distributions, x ~ N(m, P) and y | x ~ N(Hx, R). Then the joint and marginal distributions are: (x, y) is jointly Gaussian with mean (m, Hm) and covariance [[P, P H^T], [H P, H P H^T + R]], and the marginal is y ~ N(Hm, H P H^T + R). 19.11.2016 53

Appendix Multivariate Gaussian Distribution. If the random variables x and y have the joint Gaussian probability density (x, y) ~ N( (a, b), [[A, C], [C^T, B]] ), then the marginal and conditional densities of x and y are given as follows: x ~ N(a, A), y ~ N(b, B), x | y ~ N(a + C B^-1 (y - b), A - C B^-1 C^T), and y | x ~ N(b + C^T A^-1 (x - a), B - C^T A^-1 C). 19.11.2016 54