BAYESIAN STATISTICS 7, pp. 000–000
J. M. Bernardo, M. J. Bayarri, J. O. Berger, A. P. Dawid, D. Heckerman, A. F. M. Smith and M. West (Eds.)
© Oxford University Press, 2003

Markov Random Field Extensions using State Space Models

CLAUS DETHLEFSEN
Department of Mathematical Sciences, Aalborg University, Aalborg, Denmark

[email protected]

SUMMARY

We elaborate on the link between state space models and (Gaussian) Markov random fields. We extend the Markov random field models by generalising the corresponding state space model. It turns out that several non-Gaussian spatial models can be analysed by combining approximate Kalman filter techniques with importance sampling. We illustrate the ideas by formulating a model for edge detection in digital images, which then forms the basis of a simulation study.

Keywords: EXTENDED KALMAN SMOOTHING; IMAGE RESTORATION; EDGE DETECTION; GAUSSIAN MIXTURES; LATTICE DATA.

1. INTRODUCTION

The class of state space models is very broad and comprises structural time series models, ARIMA models, cubic spline models and, as demonstrated by Lavine (1999), also Markov random field models. The Kalman filter techniques are powerful tools for inference in such sequential models. Basic references on state space model methodology are Harvey (1989), West and Harrison (1997) and Durbin and Koopman (2001). In the past decade, interest has been in developing Markov chain Monte Carlo (MCMC) methods for the analysis of complex state space models, see Carlin et al. (1992), Carter and Kohn (1994), Frühwirth-Schnatter (1994) and de Jong and Shephard (1995). Our approach is not based on MCMC, but on iterated extended Kalman smoothing, which may be combined with importance sampling for exact simulation, see Durbin and Koopman (2001). Using this method, we avoid the MCMC problems of ensuring that the Markov chain is mixing well and of assessing whether the chain has converged.

Writing Markov random field models as state space models following Lavine (1999) makes it possible to use Kalman filter techniques to extend and analyse more complex Markov random field models. We show how to analyse such extensions and illustrate by formulating a model for restoring digital images with focus on finding edges in the image. However, the new class of models also has applications within agricultural experiments, see e.g. Besag and Higdon (1999), and within disease mapping, see e.g. Knorr-Held and Rue (2002).

Page 2: Markov Random Field Extensions using State Space Models

8/7/2019 Markov Random Field Extensions using State Space Models

http://slidepdf.com/reader/full/markov-random-field-extensions-using-state-space-models 2/9

2 Claus Dethlefsen 

2. GAUSSIAN STATE SPACE MODELS

The Gaussian state space model involves two processes, namely the latent state process, $\theta_t = G_t\theta_{t-1} + \omega_t$ with $\omega_t \sim N_p(0, W_t)$, and the observation process, $y_t = F_t^T\theta_t + \nu_t$ with $\nu_t \sim N_d(0, V_t)$. The latent process is initialised with $\theta_0 \sim N_p(m_0, C_0)$. We assume that the disturbances $\{\nu_t\}$ and $\{\omega_t\}$ are both serially independent and mutually independent. The possibly time-dependent system matrices $F_t$, $G_t$, $V_t$ and $W_t$ are all considered known for every $t$. They may also depend on a parameter vector $\psi$, but this is suppressed in the notation.

The Kalman filter recursively yields $p(\theta_t \mid D_t)$, the conditional distribution of $\theta_t$ given all information available, $D_t$, at current time $t$:

$$\theta_t \mid D_{t-1} \sim N_p\bigl(\underbrace{G_t m_{t-1}}_{a_t},\ \underbrace{G_t C_{t-1} G_t^T + W_t}_{R_t}\bigr)$$
$$y_t \mid D_{t-1} \sim N_d\bigl(\underbrace{F_t^T a_t}_{f_t},\ \underbrace{F_t^T R_t F_t + V_t}_{Q_t}\bigr)$$
$$\theta_t \mid D_t \sim N_p\bigl(\underbrace{a_t + \overbrace{R_t F_t Q_t^{-1}}^{A_t}\,\overbrace{(y_t - f_t)}^{e_t}}_{m_t},\ \underbrace{R_t - A_t Q_t A_t^T}_{C_t}\bigr).$$

Assessment of the state vector, $\theta_t$, using all available information, $D_n$, is called Kalman smoothing, and we write $(\theta_t \mid D_n) \sim N_p(\tilde m_t, \tilde C_t)$. Starting with $\tilde m_n = m_n$ and $\tilde C_n = C_n$, the Kalman smoother is a backwards recursion in time, $t = n-1, \dots, 1$, with $\tilde m_t = m_t + B_t(\tilde m_{t+1} - a_{t+1})$ and $\tilde C_t = C_t + B_t(\tilde C_{t+1} - R_{t+1})B_t^T$, where $B_t = C_t G_{t+1}^T R_{t+1}^{-1}$. When $p$ is large, it is often computationally faster to use the mathematically equivalent disturbance smoother, see Koopman (1993).
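The recursions translate almost line for line into code. The following minimal sketch in R (the language used for the implementation in the simulation example of Section 5.1) assumes time-invariant system matrices for brevity; the function name and interface are our own illustration, not the paper's programs. The argument names mirror the notation above.

```r
# Minimal Kalman filter and smoother, assuming time-invariant F, G, V, W.
# 'y' is a d x n matrix of observations; returns filtered and smoothed
# means (columns) and variances (lists).
kalman <- function(y, F, G, V, W, m0, C0) {
  n <- ncol(y); p <- nrow(G)
  a <- m <- matrix(0, p, n); R <- C <- vector("list", n)
  mp <- m0; Cp <- C0
  for (t in 1:n) {                              # forward filter
    a[, t] <- G %*% mp; R[[t]] <- G %*% Cp %*% t(G) + W
    f <- t(F) %*% a[, t]; Q <- t(F) %*% R[[t]] %*% F + V
    A <- R[[t]] %*% F %*% solve(Q)              # gain A_t
    m[, t] <- a[, t] + A %*% (y[, t] - f)
    C[[t]] <- R[[t]] - A %*% Q %*% t(A)
    mp <- m[, t]; Cp <- C[[t]]
  }
  ms <- m; Cs <- C                              # backward smoother
  for (t in (n - 1):1) {
    B <- C[[t]] %*% t(G) %*% solve(R[[t + 1]])
    ms[, t] <- m[, t] + B %*% (ms[, t + 1] - a[, t + 1])
    Cs[[t]] <- C[[t]] + B %*% (Cs[[t + 1]] - R[[t + 1]]) %*% t(B)
  }
  list(m = m, C = C, ms = ms, Cs = Cs)
}
```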

The posterior mode of $p(\theta \mid y)$ is $\tilde m^T = (\tilde m_1^T, \dots, \tilde m_n^T)$. From the definition of conditional densities, $\tilde m$ maximises $p(\theta, y)$ and thus also

$$\log p(\theta, y) = \sum_{t=1}^n \log p(y_t \mid \theta_t) + \sum_{t=1}^n \log p(\theta_t \mid \theta_{t-1}) + \log p(\theta_0). \qquad (1)$$

The derivative with respect to $\theta$ is zero at the maximum, so $\tilde m$ solves the following equations:

$$\frac{\partial \log p(y_t \mid \theta_t)}{\partial \theta_t} + \frac{\partial \log p(\theta_t \mid \theta_{t-1})}{\partial \theta_t} + \frac{\partial \log p(\theta_{t+1} \mid \theta_t)}{\partial \theta_t} \cdot I_{[t \neq n]} = 0,$$

where $I_{[t \neq n]}$ is an indicator function, which is 1 when $t \neq n$ and zero otherwise, so that the third term is absent at the final time point. From the definition of the state space model this gives

$$F_t V_t^{-1}(y_t - F_t^T\theta_t) - W_t^{-1}(\theta_t - G_t\theta_{t-1}) + G_{t+1}^T W_{t+1}^{-1}(\theta_{t+1} - G_{t+1}\theta_t) \cdot I_{[t \neq n]} = 0. \qquad (2)$$

We may now interpret the Kalman smoother as an algorithm that solves (2) recursively.

The log likelihood function for a vector of hyperparameters $\psi$ is given by

$$\ell(\psi) = \sum_{t=1}^n \log p(y_t \mid y_1, \dots, y_{t-1}, \psi) = c - \frac{1}{2}\sum_{t=1}^n \Bigl(\log|Q_t| + \|y_t - f_t\|^2_{Q_t^{-1}}\Bigr), \qquad (3)$$

where $\|x\|^2_\Sigma = x^T\Sigma x$ and $c$ is a constant. The log likelihood for a given value of $\psi$ can thus be obtained directly from the Kalman filter. The expression can then be maximised numerically, yielding the maximum likelihood estimate. Approximate standard errors can be extracted from numerical second derivatives.
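In code, (3) is accumulated inside the filter via the prediction error decomposition and handed to a numerical optimiser. A minimal R sketch under the same assumptions as above; the mapping from $\psi$ to the variances is purely illustrative:

```r
# Evaluate the log likelihood (3) by the prediction error decomposition.
# psi parametrises the variances on the log scale (an assumed choice).
loglik <- function(psi, y, F, G, m0, C0) {
  V <- exp(psi[1]) * diag(nrow(y)); W <- exp(psi[2]) * diag(nrow(G))
  n <- ncol(y); mp <- m0; Cp <- C0; ll <- 0
  for (t in 1:n) {
    a <- G %*% mp; R <- G %*% Cp %*% t(G) + W
    f <- t(F) %*% a; Q <- t(F) %*% R %*% F + V
    e <- y[, t] - f
    ll <- ll - 0.5 * (determinant(Q)$modulus + t(e) %*% solve(Q, e))
    A <- R %*% F %*% solve(Q)
    mp <- a + A %*% e; Cp <- R - A %*% Q %*% t(A)
  }
  as.numeric(ll)
}
# fit <- optim(c(0, 0), function(p) -loglik(p, y, F, G, m0, C0))
```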

3. NON-GAUSSIAN STATE SPACE MODELS

Consider a state space model with observation density $p(y_t \mid F_t^T\theta_t)$, which may be non-Gaussian. The latent process is specified as $\theta_t = G_t\theta_{t-1} + \omega_t$, where $\omega_t \sim p(\omega_t)$ may be non-Gaussian. Conditional on the latent process, the observations are assumed serially independent. For notational convenience, we define $\lambda_t = F_t^T\theta_t$.

Kitagawa (1987) and Carlin et al. (1992) analysed similar models using numerical approximation and MCMC techniques, respectively. Another approach to analysing these types of state space models is particle filtering, see for example Doucet et al. (2000).

The exposition here is due to Durbin and Koopman (2001) and is based on maximising the posterior $p(\theta \mid y)$ with respect to $\theta$. This is equivalent to maximising $\log p(\theta, y)$ given by (1). Differentiating with respect to $\theta_t$ and equating to zero yields

$$\frac{\partial \log p(\theta, y)}{\partial \theta_t} = F_t\frac{\partial \log p(y_t \mid \lambda_t)}{\partial \lambda_t} + \frac{\partial \log p(\omega_t)}{\partial \omega_t} - G_{t+1}^T\frac{\partial \log p(\omega_{t+1})}{\partial \omega_{t+1}} \cdot I_{[t \neq n]} = 0,$$

with $\omega_t = \theta_t - G_t\theta_{t-1}$. We assume that the densities are sufficiently well-behaved that a unique maximum exists and that it solves the above equations. For a discussion of this point, see Durbin and Koopman (2001).

The strategy employed to find the maximum is to obtain a linear Gaussian approximation to the state space model, identifying $\tilde y_t$, $\tilde V_t$, $\tilde W_t$ by comparison with (2). The approximation requires an initial value $\tilde\theta$, which is improved by iteration, the new value being the output from the Kalman smoother, $\tilde m$, in the approximating linear state space model. The procedure is called iterated extended Kalman smoothing.
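Schematically, the whole procedure is a loop around the Kalman smoother of Section 2. A minimal sketch, assuming a user-supplied `linearise` step that produces the approximating quantities by one of the two methods described next; both names and the interface are our own illustration:

```r
# Iterated extended Kalman smoothing: relinearise about the current
# trajectory, smooth in the approximating model, and repeat.
# 'linearise(y, theta)' must return a list with the quantities that
# kalman() above expects: y (the pseudo-observations ytilde), F, G,
# V (Vtilde), W (Wtilde), m0 and C0.
ieks <- function(y, theta_init, linearise, n_iter = 50) {
  theta <- theta_init
  for (k in 1:n_iter) {
    a <- linearise(y, theta)
    theta <- kalman(a$y, a$F, a$G, a$V, a$W, a$m0, a$C0)$ms
  }
  theta
}
```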

We will consider two methods of approximation, depending on the form of the densities. Both methods ensure that the approximating model and the original model have a common mode. The first method also ensures common curvature at the mode, but it is not always applicable, in which case the second method must be used.

Method 1: Letting $\tilde\lambda_t = F_t^T\tilde\theta_t$, the observation part is linearised as

$$\frac{\partial \log p(y_t \mid \lambda_t)}{\partial \lambda_t} \approx \left.\frac{\partial \log p(y_t \mid \lambda_t)}{\partial \lambda_t}\right|_{\lambda_t = \tilde\lambda_t} + \left.\frac{\partial^2 \log p(y_t \mid \lambda_t)}{\partial \lambda_t \partial \lambda_t^T}\right|_{\lambda_t = \tilde\lambda_t}(\lambda_t - \tilde\lambda_t).$$

Comparing this with the first term in (2), we recognise it as a state space model with (linear) observation part specified by

$$\tilde V_t^{-1} = -\left.\frac{\partial^2 \log p(y_t \mid \lambda_t)}{\partial \lambda_t \partial \lambda_t^T}\right|_{\lambda_t = \tilde\lambda_t}, \qquad \tilde y_t = \tilde\lambda_t + \tilde V_t \left.\frac{\partial \log p(y_t \mid \lambda_t)}{\partial \lambda_t}\right|_{\lambda_t = \tilde\lambda_t}.$$

For example, exponential family distributions can be linearised using Method 1.
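As a concrete instance, suppose $y_t$ is Poisson with mean $\exp(\lambda_t)$ (a log link, chosen here for illustration). Then the score is $y_t - \exp(\lambda_t)$ and the Hessian is $-\exp(\lambda_t)$, so Method 1 reduces to the short helper below; the function name is ours:

```r
# Method 1 for a Poisson observation with log link,
# log p(y | lambda) = y * lambda - exp(lambda) - log(y!):
# Vtilde = (-Hessian)^{-1} and ytilde = lambda + Vtilde * score.
linearise_poisson <- function(y, lambda_tilde) {
  V <- exp(-lambda_tilde)
  list(V = V, y_tilde = lambda_tilde + V * (y - exp(lambda_tilde)))
}
```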

Method 2: In the second approach, it is assumed that the log densities are functions of $(y_t - \lambda_t)^2$ or $\omega_t^2$, respectively. The method applies to either or both of the derivatives $\partial \log p(y_t \mid \lambda_t)/\partial \lambda_t$ and $\partial \log p(\omega_t)/\partial \omega_t$, evaluated at $\tilde\lambda_t = F_t^T\tilde\theta_t$ and $\tilde\omega_t = \tilde\theta_t - G_t\tilde\theta_{t-1}$, respectively. The latter term, at time $t+1$ and evaluated at $\tilde\omega_{t+1}$, is also needed for insertion in (2).

Since

$$\frac{\partial \log p(y_t \mid \lambda_t)}{\partial \lambda_t} = -2\,\frac{\partial \log p(y_t \mid \lambda_t)}{\partial (y_t - \lambda_t)^2}\,(y_t - \lambda_t)$$

and

$$\frac{\partial \log p(\omega_t)}{\partial \omega_t} = 2\,\frac{\partial \log p(\omega_t)}{\partial \omega_t^2}\,(\theta_t - G_t\theta_{t-1}),$$

we see by comparison with (2) that the approximating model is given by

$$\tilde V_t = -\frac{1}{2}\left(\left.\frac{\partial \log p(y_t \mid \lambda_t)}{\partial (y_t - \lambda_t)^2}\right|_{\lambda_t = \tilde\lambda_t}\right)^{-1}, \qquad \tilde W_t = -\frac{1}{2}\left(\left.\frac{\partial \log p(\omega_t)}{\partial \omega_t^2}\right|_{\omega_t = \tilde\omega_t}\right)^{-1}.$$

For example, t-distributions or Gaussian mixtures can be approximated by Method 2.
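For instance, a $t$-distributed term with $\nu$ degrees of freedom and scale $s$ has $\log p(x) = \mathrm{const} - \frac{\nu+1}{2}\log\bigl(1 + x^2/(\nu s^2)\bigr)$, so $\partial \log p/\partial(x^2) = -(\nu+1)/\bigl(2(\nu s^2 + x^2)\bigr)$ and Method 2 yields a closed-form approximating variance. The helper below is our illustration:

```r
# Method 2 approximating variance for a t-distributed disturbance,
# evaluated at the current iterate xi: -0.5 / (d log p / d(x^2)).
vtilde_t <- function(xi, nu, s2) (nu * s2 + xi^2) / (nu + 1)
```

Note that the variance grows with $|\xi|$: large discrepancies are downweighted less severely than under a Gaussian, which is exactly what accommodates outliers and jumps.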

4. MARKOV RANDOM FIELDS

Let $S$ denote an $I \times J$ regular, finite lattice. Elements of $S$ are called sites or pixels, and a typical element is denoted $s$ or $ij$ or $(i,j)$, depending on the context. A site $s$ is associated with two random variables, $y_s$ and $\theta_s$, denoting, respectively, the observed and the latent value at $s$. The vector of random variables at a set of sites $A \subseteq S$ is denoted $y_A$, and at the remaining sites $y_{-A} = y_{S\setminus A}$. The shorthand notation for $y_S$ is $y$. The vector $y_i = \{y_{ij}\}_{j=1,\dots,J}$ collects the variables in the $i$th row, and $y_{-i}$ is the vector at sites outside the $i$th row. Similar notation is used for $\theta$ and other derived variables.

Let

$$T_l = \begin{pmatrix} 1 & -1 & 0 & \cdots & 0 \\ -1 & 2 & \ddots & \ddots & \vdots \\ 0 & \ddots & \ddots & \ddots & 0 \\ \vdots & \ddots & \ddots & 2 & -1 \\ 0 & \cdots & 0 & -1 & 1 \end{pmatrix}_{l \times l}.$$

Then the prior density is given by Besag (1974),

$$p(\theta) \propto \exp\Bigl(-\frac{1}{2}\theta^T P\theta\Bigr), \qquad (4)$$

where $P$ is the $IJ \times IJ$ precision matrix

$$P = \tau_1^{-2}\, T_I \otimes I_J + \tau_2^{-2}\, I_I \otimes T_J,$$

and $\tau_1^2$ and $\tau_2^2$ are hyperparameters, which we assume are known. The prior is improper, since the precision matrix is singular. The parameters $\tau_1^2$ and $\tau_2^2$ measure the degree of dependency in the row and column directions, respectively.

The observations $y$ are assumed to be normally distributed given $\theta$,

$$y \mid \theta \sim N_{IJ}(F^T\theta, \Sigma).$$
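The precision matrix is straightforward to build with Kronecker products. A sketch in R, with the helper names ours:

```r
# Build the IJ x IJ precision matrix P of the prior (4).
tridiag <- function(l) {                 # the l x l matrix T_l above
  Tl <- diag(c(1, rep(2, l - 2), 1))
  Tl[cbind(1:(l - 1), 2:l)] <- -1
  Tl[cbind(2:l, 1:(l - 1))] <- -1
  Tl
}
precision <- function(I, J, tau1sq, tau2sq)
  kronecker(tridiag(I), diag(J)) / tau1sq +
    kronecker(diag(I), tridiag(J)) / tau2sq
```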


We will assume that $\Sigma$ and $F$ are given, so that in the following the posterior is proper. The posterior distribution of $\theta$ is given by Bayes' theorem,

$$\theta \mid y \sim N_{IJ}(m, C), \qquad (5)$$

where $C = (P + F\Sigma^{-1}F^T)^{-1}$ and $m = CF\Sigma^{-1}y$. The aim is to assess the posterior distribution, $p(\theta \mid y)$, in a computationally attractive way. Note that the matrices to be inverted in the above expression are of size $IJ \times IJ$.

The Markov random field model is equivalent to the following state space model in the sense that their posterior distributions are identical. The state space model evolves along the rows instead of along time:

$$\begin{pmatrix} y_i \\ x_i \end{pmatrix} \Bigm|\, \theta_i \sim N\left(\begin{pmatrix} F_i^T\theta_i \\ H\theta_i \end{pmatrix}, \begin{pmatrix} \Sigma_i & 0 \\ 0 & \tau_1^2 I_{J-1} \end{pmatrix}\right) \qquad (6)$$

$$\theta_i \mid \theta_{i-1} \sim N(\theta_{i-1}, \tau_2^2 I_J) \qquad (7)$$

$$p(\theta_1) \propto 1, \qquad (8)$$

where $H$ is the $(J-1) \times J$ matrix

$$H = \begin{pmatrix} 1 & -1 & 0 & \cdots & 0 \\ 0 & 1 & \ddots & \ddots & \vdots \\ \vdots & \ddots & \ddots & -1 & 0 \\ 0 & \cdots & 0 & 1 & -1 \end{pmatrix}.$$

Thus $y_i$ are the observed rows, $\theta_i$ are the corresponding latent variables and $x_i$ are so-called pseudo observations. The analysis of the model is carried out conditional on the pseudo observations being observed to be zero, as this ensures the equivalence of the state space model with the Markov random field model. In other words, $\theta \mid x = 0$ corresponds to the Markov random field prior (4) and $\theta \mid x = 0, y$ corresponds to the posterior (5). This equivalence was established by Lavine (1999).
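The matrix $H$ simply forms neighbour differences within a row; in R (helper name ours):

```r
# The (J-1) x J differencing matrix H of (6):
# (diffmat(J) %*% theta_i)[j] equals theta_{ij} - theta_{i,j+1}.
diffmat <- function(J) {
  H <- matrix(0, J - 1, J)
  H[cbind(1:(J - 1), 1:(J - 1))] <- 1
  H[cbind(1:(J - 1), 2:J)] <- -1
  H
}
```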

5. EXTENSIONS TO MARKOV RANDOM FIELD MODELS

We now extend the Markov random field model (6)–(8). For notational convenience, we assume $F_i = I$ and $\Sigma_i = \sigma^2 I$. The model can then be written in coordinate form as

$$y_{ij} \mid \theta_{ij} \sim N(\theta_{ij}, \sigma^2)$$
$$x_{ij} \mid (\theta_{ij}, \theta_{i,j+1}) \sim N(\theta_{ij} - \theta_{i,j+1}, \tau_1^2) \qquad (9)$$
$$\theta_{ij} \mid \theta_{i-1,j} \sim N(\theta_{i-1,j}, \tau_2^2). \qquad (10)$$

The idea is to substitute any or all of the Gaussian distributions with more general distributions. These may be mixed in any way, which opens up a large class of models. For example, the observations may be Poisson distributed conditional on a Markov random field, similar to the model analysed by Christensen and Waagepetersen (2002). Another example is the fertility model considered by Besag and Higdon (1999), where t-distributions are used to allow for observational outliers and for jumps in the underlying fertility. In both examples, MCMC methods were used to analyse the models. They were reanalysed in Dethlefsen (2002) using our methodology, which does not make use of MCMC. Our approach is approximative, but using importance sampling as described in Durbin and Koopman (2001), it is possible to assess various functions of interest using exact simulation.

We illustrate our approach by substituting the distributions in (9) and (10) by a mixture of Gaussian distributions, arriving at an image restoration model. Further illustrations are given in Dethlefsen (2002).

5.1. Application in Digital Image Analysis

Let the observed digital image $y$ be represented by the grey scale values $y_{ij}$ in an $I \times J$ lattice made up of pixels $ij$, for $i = 1, \dots, I$ and $j = 1, \dots, J$. We assume that $y_{ij}$ is an indirect observation of the noise-free pixel value $\theta_{ij}$, so that the noise-free image is $\theta$.

Let $M(\mu, k, v_1^2, v_2^2) = kN(\mu, v_1^2) + (1-k)N(\mu, v_2^2)$ be a mixture of two univariate Gaussian distributions. Let $X \sim M(0, k, v_1^2, v_2^2)$ and denote the density of $X$ by $p(x)$. The log density is a function of $x^2$, so we use Method 2 to approximate $p(x)$ at the point $\xi$. Thus, $p(x)$ is approximated by a Gaussian distribution with zero mean and variance $v^2$, where

$$v^2 = \frac{k\exp[-\xi^2/(2v_1^2)]/v_1 + (1-k)\exp[-\xi^2/(2v_2^2)]/v_2}{k\exp[-\xi^2/(2v_1^2)]/v_1^3 + (1-k)\exp[-\xi^2/(2v_2^2)]/v_2^3}. \qquad (11)$$
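Translated into code, (11) is a ratio of two mixture weights; a direct sketch in R (function name ours):

```r
# Method 2 approximating variance (11) of the Gaussian mixture
# M(0, k, v1sq, v2sq), evaluated at the current iterate xi.
mix_var <- function(xi, k, v1sq, v2sq) {
  w1 <- k * exp(-xi^2 / (2 * v1sq)) / sqrt(v1sq)
  w2 <- (1 - k) * exp(-xi^2 / (2 * v2sq)) / sqrt(v2sq)
  (w1 + w2) / (w1 / v1sq + w2 / v2sq)
}
```

As $|\xi|$ grows, the weight shifts toward the wide component and $v^2$ tends to the larger variance, which is what allows jumps to survive the smoothing.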

We now formulate a model for image restoration by replacing (9)–(10) with

$$x_{ij} \mid (\theta_{ij}, \theta_{i,j+1}) \sim M(\theta_{ij} - \theta_{i,j+1},\, k,\, \tau^2,\, c\tau^2)$$
$$\theta_{ij} \mid \theta_{i-1,j} \sim M(\theta_{i-1,j},\, k,\, \tau^2,\, c\tau^2),$$

where $k$ is the probability of having variance $\tau^2$, and $c > 1$ is the scaling factor for the variance of a jump. When $c$ approaches 1, the model approaches the Gaussian Markov random field model, where edges tend to be blurred due to the smoothing nature of this model. For larger values of $c$, the model implies that a larger difference in neighbouring values is needed in order to classify the jump as an edge.

In the approximating state space model, each pixel is associated with two variances. These are calculated using (11) with $\xi$ substituted by $E[\theta_{ij} - \theta_{i-1,j} \mid x = 0, y]$ and $E[\theta_{ij} - \theta_{i,j+1} \mid x = 0, y]$, respectively, and improved by iteration. If these variances are both small, the pixel is in a smooth part of the image. If one or both are large, this indicates an edge in the left-right and/or up-down direction.

Simulation Example. To illustrate the methodology, we simulate an image and denoise it using the image restoration model. Implementation was done using R, see Ihaka and Gentleman (1996). The programs are available from the web page www.math.auc.dk/∼dethlef.

The simulated image is built up from four additive parts: (a) an "egg box" function, (b) a solid disc, (c) a solid rectangle, and (d) independent Gaussian noise. The sum of these four contributions determines the grey scale value in each pixel.
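The paper does not give the exact amplitudes and positions, so the R sketch below only shows one way such a test image could be put together; every number in it is an assumption of ours:

```r
# Simulate a 128 x 128 test image: egg box + disc + rectangle + noise.
# Amplitudes, positions and the grey level baseline are all assumed.
I <- J <- 128
g <- expand.grid(i = 1:I, j = 1:J)
egg  <- 20 * sin(g$i / 8) * sin(g$j / 8)                   # "egg box"
disc <- 60 * ((g$i - 40)^2 + (g$j - 40)^2 < 20^2)          # solid disc
rect <- 60 * (g$i > 70 & g$i < 110 & g$j > 60 & g$j < 100) # rectangle
y <- matrix(128 + egg + disc + rect + rnorm(I * J, sd = 10), I, J)
```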

Figure 1 shows a simulated image (left) of size $128 \times 128$, together with the posterior mode (middle) and residuals (right) of the smoothed image obtained by the Kalman smoother using the Gaussian Markov random field model, with parameters obtained from maximum likelihood estimation.

The log likelihood is obtained from (3), but without the terms resulting from the pseudo observations. It turned out to be useful to concentrate $\sigma^2$ out of the likelihood, leaving only the fraction $\tau^2/\sigma^2$ to be estimated by the numerical maximisation algorithm. For convenience, we worked with the transformed parameter $\log \tau/\sigma$, allowing the maximiser to suggest any real number as input. In all runs, we have chosen $m_{0j} = 128$ and $C_0 = 1000 \cdot I$.

Figure 1. To the left is shown a simulated image. The middle image shows the posterior mode found by Kalman smoothing using the Gaussian Markov random field model. The right image shows the residual image.

The resulting maximum likelihood estimates were $\hat\tau^2 = 338$ and $\hat\sigma^2 = 117$, and the posterior mode of $\theta$ is displayed in Figure 1 (middle). The result is very smooth and edges are blurred, as expected from the model. The residual image in Figure 1 (right) also suggests that the edges are over-smoothed.

Given a value of $\log \tau/\sigma$, the Kalman smoother took approximately 3 minutes in R on a SUN Enterprise 220R machine. Maximum likelihood estimation of this parameter took approximately 90 minutes on the same machine.

Figure 2. The posterior mode (left) after 50 iterations using the iterated extended Kalman smoother. The residual image is shown in the middle, and to the right is shown the average of the two variances for each pixel, calculated via (11).

The restored image after 50 iterations of the iterated extended Kalman smoother under the image restoration model is shown in Figure 2 (left), along with the residual image (middle). The parameters chosen were $\sigma^2 = 60$, $\tau^2 = 50$, $k = 0.95$ and $c = 25$. These were chosen by tuning, since all attempts to perform maximum likelihood estimation failed due to numerical instabilities. One run with 50 iterations of the model took approximately 5 hours in R.


The image to the right in Figure 2 shows the average of the up-down and left-right variances calculated in each iteration by (11). As seen, the edges are now found, although the "egg box" function causes slight confusion. The edges in the posterior mode of $\theta$ are clearer than in the result from the Gaussian Markov random field model. The residual image resembles white noise, indicating a good fit.

6. DISCUSSION

We provide an alternative to MCMC analysis of spatial models. For non-Gaussian state space models, the iterated extended Kalman smoother is capable of finding an approximating Gaussian state space model with the same posterior mode. This allows us to construct Markov random field models with non-Gaussian increments. The approximating state space model can then be used as an importance density, as described in Durbin and Koopman (2001), to provide exact sampling of quantities of interest.

In the image restoration example, we experience a weakness of our method: when the lattice is high-dimensional, the iterated extended Kalman smoother is slow. For this reason, we have not employed importance sampling in the example, but the result from the approximating state space model seems very satisfactory.

We find that the methodology has great potential and a wide range of applications. This is illustrated in Dethlefsen (2002) with examples from agricultural experiments.

ACKNOWLEDGEMENTS

I am indebted to my Ph.D. supervisor Søren Lundbye-Christensen for inspiring discussions.

REFERENCES

Besag, J. (1974). Spatial interaction and the statistical analysis of lattice systems (with discussion). J. Roy. Statist. Soc. B 36, 192–236.

Besag, J. and Higdon, D. (1999). Bayesian analysis of agricultural field experiments (with discussion). J. Roy. Statist. Soc. B 61, 691–746.

Carlin, B. P., Polson, N. G. and Stoffer, D. S. (1992). A Monte Carlo approach to nonnormal and nonlinear state-space modeling. J. Amer. Statist. Assoc. 87, 493–500.

Carter, C. K. and Kohn, R. (1994). On Gibbs sampling for state space models. Biometrika 81, 541–553.

Christensen, O. F. and Waagepetersen, R. (2002). Bayesian prediction of spatial count data using generalised linear mixed models. Biometrics 58 (to appear).

de Jong, P. and Shephard, N. (1995). The simulation smoother for time series models. Biometrika 82, 339–350.

Dethlefsen, C. (2002). Space Time Problems and Applications. Ph.D. Thesis, Aalborg University.

Doucet, A., Godsill, S. J. and West, M. (2000). Monte Carlo filtering and smoothing with application to time-varying spectral estimation. Proc. of the IEEE International Conference on Acoustics, Speech and Signal Processing, volume II, 701–704.

Durbin, J. and Koopman, S. J. (2001). Time Series Analysis by State Space Methods. Oxford: Oxford University Press.

Frühwirth-Schnatter, S. (1994). Data augmentation and dynamic linear models. J. Time Series Analysis 15, 183–202.

Harvey, A. C. (1989). Forecasting, Structural Time Series Models and the Kalman Filter. Cambridge: Cambridge University Press.

Ihaka, R. and Gentleman, R. (1996). R: A language for data analysis and graphics. J. Comp. Graph. Statist. 5, 299–314.

Kitagawa, G. (1987). Non-Gaussian state-space modeling of nonstationary time series (with discussion). J. Amer. Statist. Assoc. 82, 1032–1063.

Knorr-Held, L. and Rue, H. (2002). On block updating in Markov random field models for disease mapping. Scandinavian J. Statist. (to appear).

Koopman, S. J. (1993). Disturbance smoother for state space models. Biometrika 80, 117–126.

Lavine, M. (1999). Another look at conditionally Gaussian Markov random fields. Bayesian Statistics 6 (J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith, eds.). Oxford: Oxford University Press.

West, M. and Harrison, J. (1997). Bayesian Forecasting and Dynamic Models. Berlin: Springer.