
Page 1: NCAF Manchester 2000, 6 July 2000

Graham Hesketh

Information Engineering Group

Rolls-Royce Strategic Research Centre

Page 2: Basic Kalman Filter Equations

Prediction:

$\hat{x}(k|k-1) = A\,\hat{x}(k-1|k-1)$

$P(k|k-1) = A\,P(k-1|k-1)\,A^T + Q$

Updating (correction):

$K(k) = P(k|k-1)\,C^T\,[C\,P(k|k-1)\,C^T + R(k)]^{-1}$

$\hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\,[y(k) - C\,\hat{x}(k|k-1)]$

$P(k|k) = P(k|k-1) - K(k)\,C\,P(k|k-1)$
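
For readers who want to see the recursion in code, here is a minimal sketch of the two phases above in Python/NumPy. It is illustrative only and not from the talk; the matrices A, C, Q, R and the function names are placeholders for whatever model is in use.

```python
import numpy as np

def kf_predict(x, P, A, Q):
    """Prediction: propagate the state mean and covariance through the model."""
    x_pred = A @ x
    P_pred = A @ P @ A.T + Q
    return x_pred, P_pred

def kf_update(x_pred, P_pred, y, C, R):
    """Updating (correction): fold in the measurement y via the Kalman gain."""
    S = C @ P_pred @ C.T + R                 # innovation covariance, C P C' + R
    K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
    x = x_pred + K @ (y - C @ x_pred)        # corrected mean
    P = P_pred - K @ C @ P_pred              # corrected covariance
    return x, P
```

Calling kf_predict then kf_update once per sample gives the recursion on this slide.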

Page 3: Definitions

System state: mean estimate

x - vector of size q

e.g. for a single-channel, constant-acceleration model:
x1 = position
x2 = velocity
x3 = acceleration

Page 4: Definitions

System state: covariance (uncertainty)

P - matrix of size q x q

e.g.
P11 = uncertainty (variance) in position
P22 = uncertainty in velocity
P12 = covariance in position/velocity
. . .

$P = \begin{bmatrix} P_{11} & P_{12} & P_{13} \\ P_{21} & P_{22} & P_{23} \\ P_{31} & P_{32} & P_{33} \end{bmatrix}$

Page 5: Definitions

State dynamics: state transition matrix

A - matrix of size q x q

$\hat{x}(k|k-1) = A\,\hat{x}(k-1)$

e.g. for the constant acceleration example:

$\hat{x}_1(k|k-1) = \hat{x}_1(k-1) + T\,\hat{x}_2(k-1) + \tfrac{T^2}{2}\,\hat{x}_3(k-1)$

...

$A = \begin{bmatrix} 1 & T & T^2/2 \\ 0 & 1 & T \\ 0 & 0 & 1 \end{bmatrix}$  (T = sampling interval)

Page 6: Definitions

State dynamics: process noise (uncertainty)

Q - matrix of size q x q

e.g. (modelling acceleration as constant with a long duration; 3rd derivative is random and instantaneous)

$Q = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & \sigma_a^2 \end{bmatrix}$

where $\sigma_a^2$ = variance of random acceleration changes w.r.t. the sampling interval T
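
As a concrete illustration (not part of the slides), the A and Q just defined for the constant-acceleration model might be built like this in NumPy; the sampling interval T and the variance sigma_a2 are made-up example values.

```python
import numpy as np

T = 0.1          # sampling interval (example value)
sigma_a2 = 0.5   # variance of random acceleration changes (example value)

# State transition matrix for [position, velocity, acceleration]
A = np.array([[1.0, T,   T**2 / 2],
              [0.0, 1.0, T       ],
              [0.0, 0.0, 1.0     ]])

# Process noise: only the acceleration component receives random changes
Q = np.diag([0.0, 0.0, sigma_a2])
```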

Page 7: Definitions

Measurement space: observation matrix

C - matrix of size r x q

$\hat{y}(k|k-1) = C\,\hat{x}(k|k-1)$

e.g. for the simple single-measurement constant acceleration model:
C = [1 0 0]

$\hat{y}(k|k-1) = \hat{x}_1(k|k-1)$

Page 8: Definitions

Measurement space: innovation sequence

• e - error between observation and prediction
• a.k.a. the innovation sequence
• should be zero-mean, random white noise
  if not, there is a problem with either
  » the model, or
  » a sensor

$e(k) = y(k) - C\,\hat{x}(k|k-1)$
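
One way to act on the "zero-mean, random white noise" point above is to test a logged innovation sequence for zero mean and whiteness. The sketch below is an illustrative rough check, not a method from the talk; it assumes a one-dimensional innovation array e.

```python
import numpy as np

def innovation_check(e, max_lag=10):
    """Rough zero-mean and whiteness check for a 1-D innovation sequence e."""
    e = np.asarray(e, dtype=float)
    n = len(e)
    # Mean should be close to zero relative to its standard error
    mean_ok = abs(e.mean()) < 2.0 * e.std(ddof=1) / np.sqrt(n)
    # Autocorrelations at lags 1..max_lag should sit inside a ~95% band for white noise
    e0 = e - e.mean()
    denom = np.dot(e0, e0)
    rho = np.array([np.dot(e0[:-k], e0[k:]) / denom for k in range(1, max_lag + 1)])
    white_ok = bool(np.all(np.abs(rho) < 2.0 / np.sqrt(n)))
    return mean_ok, white_ok, rho
```

If either check fails, suspect the model or a sensor, as the slide says.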

Page 9: Definitions

Measurement space: measurement noise matrix

R - matrix of size r x r

e.g. for the single measurement case, R = $\sigma^2$

R is the covariance matrix of the measurement noise. It is usually diagonal (i.e. uncorrelated sensor noise).

$R = \begin{bmatrix} \sigma_1^2 & 0 & 0 \\ 0 & \sigma_2^2 & 0 \\ 0 & 0 & \sigma_3^2 \end{bmatrix}$

Page 10: Definitions

Correction phase: Kalman gain matrix

K - matrix of size q x r

K is the optimal correction factor to obtain the Minimum Mean Squared Error estimate of the system state mean and covariance

$K(k) = P(k|k-1)\,C^T\,[C\,P(k|k-1)\,C^T + R(k)]^{-1}$

Page 11: Definitions

Confidence: Relative contributions to uncertainty

Predicted measurement - uncertainty due to model: $C\,P(k|k-1)\,C^T$

Actual measurement - uncertainty due to sensor noise: $R(k)$

Relative weighting for the innovation error: $C\,P(k|k-1)\,C^T\,[C\,P(k|k-1)\,C^T + R(k)]^{-1}$

Page 12: Definitions

Correction phase: system stiffness

Model uncertainty >> Measurement noise ⇒ Believe Sensor (e.g. Gain is high)
Low system stiffness, low lag

Model uncertainty << Measurement noise ⇒ Believe Model (e.g. Gain is low)
High system stiffness, high lag

Relative weighting (Gain) for the innovation error

$C\,P(k|k-1)\,C^T\,[C\,P(k|k-1)\,C^T + R(k)]^{-1}$ is of the form $\dfrac{\sigma_1^2}{\sigma_1^2 + \sigma_2^2}$
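
As a made-up numeric illustration of this scalar form: with model uncertainty $\sigma_1^2 = 100$ and measurement noise $\sigma_2^2 = 1$ the gain is close to one (believe the sensor), while swapping the two gives a gain close to zero (believe the model):

$\frac{100}{100 + 1} \approx 0.99, \qquad \frac{1}{1 + 100} \approx 0.01$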

Page 13: Definitions

Correction phase: updating state estimates

“Best” estimates of the system state mean and covariance, at time k, conditioned on all measurements up to and including time k

$K(k) = P(k|k-1)\,C^T\,[C\,P(k|k-1)\,C^T + R(k)]^{-1}$

$\hat{x}(k|k) = \hat{x}(k|k-1) + K(k)\,[y(k) - C\,\hat{x}(k|k-1)]$

$P(k|k) = P(k|k-1) - K(k)\,C\,P(k|k-1)$

Page 14: Comparison with Fixed-Coefficient Filters

If the A, C, Q and R matrices do not vary with k, then the single-channel CA Kalman filter settles down to have steady-state Kalman gains which are equivalent to a standard αβγ-filter, but with optimum coefficients selected automatically.

This approach is known as Wiener filtering. It is computationally much simpler than a Kalman filter, but the steady-state assumptions may not be valid in many practical cases.

Page 15: Advantages of Kalman Filters

• Copes with variable A, C, Q and R matrices
• Copes with the large uncertainty of the initialisation phase
• Copes with missing data
• Provides a convenient measure of estimation accuracy (via the covariance matrix P)
• Fuses information from multiple sensors

Page 16: Disadvantages of Kalman Filters

• Computationally complex (especially for large numbers of sensor channels r)
• Requires conditional independence of the measurement errors, sample-to-sample
• Requires linear models for state dynamics and observation processes (A and C)
• Getting a suitable value for Q (a.k.a. tuning) can be something of a black art

Page 17: Dealing with the disadvantages [1]

• Computationally complex (especially for large numbers of sensor channels r)

Computers are getting faster all the time
New algorithms for fast matrix inversion

Page 18: Dealing with the disadvantages [2]

• Requires conditional independence of the measurement errors, sample-to-sample

The independence criterion can be removed by using Covariance Intersection updating

[Figure: covariance ellipses for two estimates (A, a) and (B, b) and their fused estimate (C, c)]

Page 19: Covariance Intersection vs Kalman updates

Standard Kalman filter update:

$C^{-1} = A^{-1} + B^{-1}$

$C^{-1}c = A^{-1}a + B^{-1}b$

Covariance Intersection update:

$C^{-1} = \omega A^{-1} + (1-\omega) B^{-1}$

$C^{-1}c = \omega A^{-1}a + (1-\omega) B^{-1}b$

Kalman filter covariance (alternative form):

$C = (A^{-1} + B^{-1})^{-1}$
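
A compact NumPy sketch of these two fusion rules, purely illustrative and not from the talk; covariance_intersection here picks $\omega$ by a simple grid search minimising the trace of the fused covariance, which is one common heuristic rather than the only choice.

```python
import numpy as np

def kalman_fuse(a, A, b, B):
    """Fuse estimates (a, A) and (b, B) assuming independent errors (information form)."""
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
    C = np.linalg.inv(Ai + Bi)
    c = C @ (Ai @ a + Bi @ b)
    return c, C

def covariance_intersection(a, A, b, B, n_grid=101):
    """CI fusion: remains consistent even when the two errors are correlated."""
    Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
    best = None
    for w in np.linspace(0.0, 1.0, n_grid):
        C = np.linalg.inv(w * Ai + (1.0 - w) * Bi)
        if best is None or np.trace(C) < best[0]:
            best = (np.trace(C), w, C)
    _, w, C = best
    c = C @ (w * Ai @ a + (1.0 - w) * Bi @ b)
    return c, C
```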

Page 20: Dealing with the disadvantages [3]

• Requires linear models for state dynamics and observation processes (A and C)

This can be overcome by using the EKF (Extended Kalman Filter), but only at the expense of your sanity and all your free time

A superior alternative to EKF is the Unscented Filter

Page 21: Unscented Filter - General Problem

n-dimensional vector random variable x with mean $\bar{x}$ and covariance $P_{xx}$

We want to predict the distribution ($\bar{y}$, $P_{yy}$) of an m-dimensional vector random variable y, where y is related to x by the non-linear transformation $y = g[x]$

In filtering there are two such transformations:

» the state transition: $\hat{x}(k|k-1) = g_1[\hat{x}(k-1)]$

» the observation: $\hat{y}(k|k-1) = g_2[\hat{x}(k|k-1)]$

Page 22: Unscented Filter - Solution

Compute the set of 2n points $x_i$ from the rows or columns of the matrices $\pm\sqrt{nP}$. This set is zero mean with covariance P. Compute a set of points with the same covariance, but with mean $\bar{x}$, as $\sigma_i = \bar{x} + x_i$

Transform each point as $Y_i = g[\sigma_i]$

Compute $\bar{y}$ and $P_{yy}$ by computing the mean and covariance of the 2n points in the set $\{Y_i\}$

This is computationally much simpler and significantly more accurate (lower MSE) than the corresponding EKF
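
The recipe above might look like the following in NumPy. This is an illustrative sketch of the basic, equal-weight form with 2n sigma points (using a Cholesky factor as the matrix square root); the example function g and the numbers are invented.

```python
import numpy as np

def unscented_transform(g, x_mean, P):
    """Propagate (x_mean, P) through a non-linear function g using 2n sigma points."""
    n = len(x_mean)
    S = np.linalg.cholesky(n * P)            # matrix square root of n*P
    X = np.concatenate([S.T, -S.T])          # 2n zero-mean points from +/- columns
    sigmas = x_mean + X                      # shift each point to have mean x_mean
    Y = np.array([g(s) for s in sigmas])     # transform each sigma point
    y_mean = Y.mean(axis=0)                  # equal weights 1/(2n)
    dY = Y - y_mean
    P_yy = dY.T @ dY / (2 * n)               # covariance of the transformed points
    return y_mean, P_yy

# Example: a mildly non-linear observation of a 2-D state (illustrative values)
g = lambda x: np.array([np.sqrt(x[0]**2 + x[1]**2)])
y_mean, P_yy = unscented_transform(g, np.array([1.0, 2.0]), np.eye(2) * 0.1)
```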

Page 23: Dealing with the disadvantages [4]

• Getting a suitable value for Q (a.k.a. tuning) can be something of a black art

There are many heuristics for dealing with process noise (mainly to keep the filter stable); one method is to switch between alternative models

We have Matlab toolboxes for learning Q (and all the other matrices) from empirical data, using a technique called Expectation Maximisation (EM)

Page 24: Summary

• The Kalman Filter is the way to fuse multiple data sources, providing that their errors are conditionally independent
• Covariance Intersection is the conservative way to deal with inter-dependence in distributed systems
• The Unscented Filter is universally superior to the EKF for all cases involving non-linear transformations
• Many heuristics exist to extend and improve the basic Kalman filter for tracking and sensor bias detection

Page 25: End of Talk

Unless there are any questions – make ‘em brief