EECE 522 Notes 28, Ch. 12b


Slide 1: 12.6 Sequential LMMSE Estimation

Same kind of setting as for Sequential LS:

    Fixed number of parameters (but here they are modeled as random)

    Increasing number of data samples

Data Model: $\mathbf{x}[n] = \mathbf{H}[n]\boldsymbol{\theta} + \mathbf{w}[n]$

$\mathbf{x}[n] = [x[0] \;\cdots\; x[n]]^T$: $(n+1) \times 1$ data vector

$\boldsymbol{\theta}$: $p \times 1$, unknown PDF, known mean & covariance

$\mathbf{w}[n] = [w[0] \;\cdots\; w[n]]^T$: $(n+1) \times 1$, unknown PDF, known mean & covariance

$\mathbf{C}_w$ must be diagonal with elements $\sigma_n^2$, and $\boldsymbol{\theta}$ & $\mathbf{w}$ are uncorrelated

$\mathbf{H}[n] = \begin{bmatrix} \mathbf{H}[n-1] \\ \mathbf{h}^T[n] \end{bmatrix}$: $(n+1) \times p$, known

Goal: Given an estimate $\hat{\boldsymbol{\theta}}[n-1]$ based on $\mathbf{x}[n-1]$, when a new data sample $x[n]$ arrives, update the estimate to $\hat{\boldsymbol{\theta}}[n]$.

Slide 2: Development of Sequential LMMSE Estimate

Our approach here: use vector space ideas to derive the solution for a DC Level in White Noise, then write down the general solution.

Model: $x[n] = A + w[n]$

For convenience, assume both $A$ and $w[n]$ have zero mean.

Given $x[0]$ we can find the LMMSE estimate:

$$\hat{A}_0 = \frac{E\{A\,x[0]\}}{E\{x^2[0]\}}\;x[0] = \frac{E\{A\,(A + w[0])\}}{E\{(A + w[0])^2\}}\;x[0] = \frac{\sigma_A^2}{\sigma_A^2 + \sigma^2}\;x[0]$$

Now we seek to sequentially update this estimate with the information from $x[1]$.
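As a quick numeric sketch of this single-sample estimate (the values $\sigma_A^2 = 4$, $\sigma^2 = 1$, and $x[0] = 1.5$ are illustrative, not from the notes):

```python
# Single-sample LMMSE estimate of the DC level A: the weight
# sigma_A^2 / (sigma_A^2 + sigma^2) shrinks x[0] toward the prior mean (0).
sigma_A2, sigma2 = 4.0, 1.0   # assumed prior and noise variances
x0 = 1.5                      # assumed observed sample
A_hat_0 = sigma_A2 / (sigma_A2 + sigma2) * x0   # = 0.8 * 1.5 = 1.2
```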

Slide 3: From the Vector Space View

[Figure: vector-space diagram with $A$, $x[0]$, $x[1]$, $\hat{A}_0$, $\hat{A}_1$]

First project $x[1]$ onto $x[0]$ to get $\hat{x}[1|0]$: an estimate of the new data given the old data. Prediction!

Notation: $\hat{x}[1|0]$ is the estimate at time 1 based on the data through time 0. Use the Orthogonality Principle:

$$\tilde{x}[1] = x[1] - \hat{x}[1|0] \;\text{ is } \perp \text{ to } x[0]$$

This is the new, non-redundant information provided by the data $x[1]$. It is called the innovation.

[Figure: $\tilde{x}[1]$ drawn orthogonal to $x[0]$, with $x[1]$ and $\hat{A}_0$]

Slide 4: Find the Estimation Update by Projecting $A$ onto the Innovation

$$\Delta\hat{A}_1 = \frac{\langle A,\, \tilde{x}[1] \rangle}{\langle \tilde{x}[1],\, \tilde{x}[1] \rangle}\,\tilde{x}[1] = \frac{E\{A\,\tilde{x}[1]\}}{E\{\tilde{x}^2[1]\}}\,\tilde{x}[1]$$

Gain: $k_1 = \dfrac{E\{A\,\tilde{x}[1]\}}{E\{\tilde{x}^2[1]\}}$

Recall property: two estimates from $\perp$ data just add:

$$\hat{A}_1 = \hat{A}_0 + \Delta\hat{A}_1 = \hat{A}_0 + k_1\,\tilde{x}[1]$$

$$\hat{A}_1 = \hat{A}_0 + k_1\left[\,x[1] - \hat{x}[1|0]\,\right]$$

New Estimate = Old Estimate + Gain $\times$ [New Data $-$ Predicted New Data]

[Figure: updated vector-space diagram showing $\hat{A}_1$ as the sum of $\hat{A}_0$ and the projection onto $\tilde{x}[1]$]

The innovation is $\perp$ to the old data.
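Continuing the numeric sketch from Slide 2 (same illustrative values, plus an assumed $x[1] = 2.0$), the whole update can be traced step by step; all the expectations follow from the model $x[n] = A + w[n]$:

```python
# One sequential LMMSE update for the DC level, done via the innovation.
# All numeric values are illustrative.
sigma_A2, sigma2 = 4.0, 1.0
x0, x1 = 1.5, 2.0
rho = sigma_A2 / (sigma_A2 + sigma2)   # E{x[1]x[0]} / E{x[0]^2} = 0.8
x1_tilde = x1 - rho * x0               # innovation x[1] - x_hat[1|0]
num = sigma_A2 - rho * sigma_A2              # E{A x~[1]}  = 0.8
den = (sigma_A2 + sigma2) - rho * sigma_A2   # E{x~[1]^2}  = 1.8
k1 = num / den                               # gain = 4/9
A_hat_1 = rho * x0 + k1 * x1_tilde     # old estimate + gain * innovation
# Check: A_hat_1 = 14/9, which equals the batch LMMSE answer
# (8/9) * (x0 + x1)/2 for these values.
```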

Slide 5: The Innovations Sequence

    The Innovations Sequence is

    Key to the derivation & implementation of Seq. LMMSE

    A sequence of orthogonal (i.e., uncorrelated) RVs

    Broadly significant in Signal Processing and Controls

$$\left\{\, \tilde{x}[0],\; \tilde{x}[1],\; \tilde{x}[2],\; \ldots \,\right\}$$

$$\tilde{x}[0] = x[0], \qquad \tilde{x}[1] = x[1] - \hat{x}[1|0], \qquad \tilde{x}[2] = x[2] - \hat{x}[2|1]$$

Here $\hat{x}[2|1]$ means: based on ALL data up to $n = 1$ (inclusive).

Slide 6: General Sequential LMMSE Estimation

Initialization (no data yet, so use prior information):

$$\hat{\boldsymbol{\theta}}_{-1} = E\{\boldsymbol{\theta}\} \quad \text{(estimate)}, \qquad \mathbf{M}_{-1} = \mathbf{C}_{\theta} \quad \text{(MMSE matrix)}$$

Update loop, for $n = 0, 1, 2, \ldots$:

Gain Vector Calculation:
$$\mathbf{k}_n = \frac{\mathbf{M}_{n-1}\,\mathbf{h}_n}{\sigma_n^2 + \mathbf{h}_n^T \mathbf{M}_{n-1}\,\mathbf{h}_n}$$

Estimate Update:
$$\hat{\boldsymbol{\theta}}_n = \hat{\boldsymbol{\theta}}_{n-1} + \mathbf{k}_n\left[\,x[n] - \mathbf{h}_n^T\hat{\boldsymbol{\theta}}_{n-1}\,\right]$$

MMSE Matrix Update:
$$\mathbf{M}_n = \left(\mathbf{I} - \mathbf{k}_n\mathbf{h}_n^T\right)\mathbf{M}_{n-1}$$
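A minimal sketch of this loop in Python, assuming the data model of Slide 1 (the function name and argument layout are my own, not from the notes):

```python
import numpy as np

def sequential_lmmse(x, H, sigma2, theta_mean, C_theta):
    """Sequential LMMSE. x: (N,) samples; H: (N, p) with rows h_n^T;
    sigma2: (N,) noise variances; theta_mean, C_theta: prior mean/cov."""
    theta_hat = np.array(theta_mean, dtype=float)  # theta_hat_{-1} = E{theta}
    M = np.array(C_theta, dtype=float)             # M_{-1} = C_theta
    for n in range(len(x)):
        h = H[n]
        # Gain: k_n = M_{n-1} h_n / (sigma_n^2 + h_n^T M_{n-1} h_n)
        k = M @ h / (sigma2[n] + h @ M @ h)
        # Estimate update, driven by the innovation x[n] - h_n^T theta_hat
        theta_hat = theta_hat + k * (x[n] - h @ theta_hat)
        # MMSE matrix update: M_n = (I - k_n h_n^T) M_{n-1}
        M = (np.eye(len(h)) - np.outer(k, h)) @ M
    return theta_hat, M
```

For the DC-level example, `H` would be a column of ones and `C_theta` the $1 \times 1$ matrix $[\sigma_A^2]$.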

Slide 7: Sequential LMMSE Block Diagram

Data Model: $\mathbf{x}[n] = \mathbf{H}[n]\boldsymbol{\theta} + \mathbf{w}[n]$, with $\mathbf{H}[n] = \begin{bmatrix} \mathbf{H}[n-1] \\ \mathbf{h}^T[n] \end{bmatrix}$

[Block diagram: the observation $x[n]$ has the predicted observation $\hat{x}[n|n-1] = \mathbf{h}_n^T\hat{\boldsymbol{\theta}}_{n-1}$ subtracted from it to form the innovation $\tilde{x}[n]$; the innovation is scaled by the gain $\mathbf{k}_n$ (computed from $\mathbf{M}_{n-1}$, $\mathbf{h}_n$, and $\sigma_n^2$) and added to the previous estimate $\hat{\boldsymbol{\theta}}_{n-1}$ (held in a $z^{-1}$ delay) to give the updated estimate $\hat{\boldsymbol{\theta}}_n$]

Exact same structure as for Sequential Linear LS!!

Slide 8: Comments on Sequential LMMSE Estimation

1. Same structure as for sequential linear LS, BUT they solve the estimation problem under very different assumptions.

2. No matrix inversion required, so it is computationally efficient.

3. The gain vector $\mathbf{k}_n$ weighs confidence in the new data ($\sigma_n^2$) against all the previous data ($\mathbf{M}_{n-1}$): when the previous data is better, the gain is small and the new data is used little; when the new data is better, the gain is large and the new data is heavily used.

4. If you know the noise statistics $\sigma_n^2$ and the observation rows $\mathbf{h}_n^T$ over the desired range of $n$, you can run the MMSE matrix recursion without data measurements!!! This provides a predictive performance analysis, as sketched below.
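For example, a sketch of comment 4 for the DC-level model (the noise and prior variances are assumed values): the recursion below touches no measurements at all, yet predicts the achievable MMSE at every $n$.

```python
# Predictive performance analysis: run only the gain / MMSE-matrix
# recursion, which needs sigma_n^2 and h_n but no data.
import numpy as np

sigma2 = 0.5            # assumed noise variance (constant over n)
M = np.array([[1.0]])   # assumed prior variance of A
h = np.array([1.0])     # observation row for x[n] = A + w[n]
for n in range(5):
    k = M @ h / (sigma2 + h @ M @ h)
    M = (np.eye(1) - np.outer(k, h)) @ M
    print(f"n={n}: predicted MMSE = {M[0, 0]:.4f}")
```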

Slide 9: 12.7 Examples: Wiener Filtering

During WWII, Norbert Wiener developed the mathematical ideas

that led to the Wiener filter when he was working on ways to improve anti-aircraft guns.

    He posed the problem in C-T form and sought the best linear

    filter that would reduce the effect of noise in the observed A/C

    trajectory.

He modeled the aircraft motion as a wide-sense stationary random process and used the MMSE as the criterion for

    optimality. The solutions were not simple and there were many

    different ways of interpreting and casting the results.

    The results were difficult for engineers of the time to understand.

    Others (Kolmogorov, Hopf, Levinson, etc.) developed these ideas

    for the D-T case and various special cases.

Slide 10: Wiener Filter: Model and Problem Statement

Signal Model: $x[n] = s[n] + w[n]$

Observed noisy signal: model as WSS, zero-mean, so $\mathbf{C}_{xx} = \mathbf{R}_{xx}$

(Covariance matrix $\mathbf{C}_{xx} = E\{(\mathbf{x} - E\{\mathbf{x}\})(\mathbf{x} - E\{\mathbf{x}\})^T\}$ and correlation matrix $\mathbf{R}_{xx} = E\{\mathbf{x}\mathbf{x}^T\}$ are the same if zero-mean.)

Desired signal: model as WSS, zero-mean, so $\mathbf{C}_{ss} = \mathbf{R}_{ss}$

Noise: model as WSS, zero-mean, so $\mathbf{C}_{ww} = \mathbf{R}_{ww}$

Problem Statement: process $x[n]$ using a linear filter to provide a de-noised version of the signal that has minimum MSE relative to the desired signal. An LMMSE problem!

Slide 11: Filtering, Smoothing, Prediction

    Terminology for three different ways to cast the Wiener filter problem

Filtering: Given $x[0], x[1], \ldots, x[n]$; find $\hat{s}[n]$.

Smoothing: Given $x[0], x[1], \ldots, x[N-1]$; find $\hat{s}[0], \hat{s}[1], \ldots, \hat{s}[N-1]$.

Prediction: Given $x[0], x[1], \ldots, x[N-1]$; find $\hat{x}[N-1+l]$ for $l > 0$.

[Figure: three sample plots of $x[n]$ vs. $n$ illustrating filtering (estimate $\hat{s}[n]$ at the current time), smoothing (estimates $\hat{s}[0], \ldots, \hat{s}[3]$ across the whole block), and prediction (estimate $\hat{x}[5]$ beyond the observed data)]

All three are solved using the general LMMSE estimator $\hat{\boldsymbol{\theta}} = \mathbf{C}_{\theta x}\mathbf{C}_{xx}^{-1}\mathbf{x}$.


Slide 13: Comments on Filtering: FIR Wiener

$$\hat{s}[n] = \tilde{\mathbf{r}}_{ss}^T\left(\mathbf{R}_{ss} + \mathbf{R}_{ww}\right)^{-1}\mathbf{x} \;=\; \mathbf{a}^T\mathbf{x}$$

$$\mathbf{h}^{(n)} = \left[\, h^{(n)}[0] \;\; h^{(n)}[1] \;\cdots\; h^{(n)}[n] \,\right]^T = \left[\, a_n \;\; a_{n-1} \;\cdots\; a_0 \,\right]^T$$

$$\hat{s}[n] = \sum_{k=0}^{n} h^{(n)}[k]\, x[n-k]$$

The Wiener filter acts as a time-varying FIR filter. Causal! Length grows!

Wiener-Hopf Filtering Equations:

$$\underbrace{\left(\mathbf{R}_{ss} + \mathbf{R}_{ww}\right)}_{=\,\mathbf{R}_{xx}} \mathbf{h}^{(n)} = \mathbf{r}_{ss}, \qquad \mathbf{r}_{ss} = \left[\, r_{ss}[0] \;\; r_{ss}[1] \;\cdots\; r_{ss}[n] \,\right]^T$$

$$\underbrace{\begin{bmatrix} r_{xx}[0] & r_{xx}[1] & \cdots & r_{xx}[n] \\ r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[n-1] \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}[n] & r_{xx}[n-1] & \cdots & r_{xx}[0] \end{bmatrix}}_{\text{Toeplitz \& symmetric}} \begin{bmatrix} h^{(n)}[0] \\ h^{(n)}[1] \\ \vdots \\ h^{(n)}[n] \end{bmatrix} = \begin{bmatrix} r_{ss}[0] \\ r_{ss}[1] \\ \vdots \\ r_{ss}[n] \end{bmatrix}$$

In principle: solve the W-H filtering equations for the filter $\mathbf{h}^{(n)}$ at each $n$. In practice: use the Levinson recursion to solve them recursively.
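A sketch of one such solve in Python: scipy.linalg.solve_toeplitz implements a Levinson-style recursion, so only the first column of $\mathbf{R}_{xx}$ is needed. The exponential signal ACF and unit-variance white noise are illustrative assumptions.

```python
# Solve the Wiener-Hopf filtering equations (R_ss + R_ww) h = r_ss
# at a single n, exploiting the Toeplitz structure.
import numpy as np
from scipy.linalg import solve_toeplitz

n = 4
r_ss = 0.9 ** np.arange(n + 1)      # assumed signal ACF r_ss[0..n]
r_ww = np.r_[1.0, np.zeros(n)]      # white noise: r_ww[k] = delta[k]
r_xx = r_ss + r_ww                  # first column of the Toeplitz R_xx
h_n = solve_toeplitz(r_xx, r_ss)    # h^(n)[0..n]
```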

Slide 14: Comments on Filtering: IIR Wiener

Can show: as $n \to \infty$, the Wiener filter becomes time-invariant. Thus $h^{(n)}[k] \to h[k]$.

    Then the Wiener-Hopf Equations become:

$$\sum_{k=0}^{\infty} h[k]\, r_{xx}[l-k] = r_{ss}[l], \qquad l = 0, 1, \ldots$$

    and these are solved using so-called Spectral Factorization

    And the Wiener Filter becomes IIR Time-Invariant:

$$\hat{s}[n] = \sum_{k=0}^{\infty} h[k]\, x[n-k]$$

Slide 15: Revisit the FIR Wiener: Fixed Length L

$$\hat{s}[n] = \sum_{k=0}^{L-1} h[k]\, x[n-k]$$

The way the Wiener filter was formulated above, the length of the filter grew so that the current estimate was based on all of the past data. Reformulate so that the current estimate is based on only the $L$ most recent data.

[Figure: a length-$L$ window sliding over $x[3]\;x[4]\;x[5]\;x[6]\;x[7]\;x[8]\;x[9]$ to produce $\hat{s}[6]$, $\hat{s}[7]$, $\hat{s}[8]$]

Wiener-Hopf Filtering Equations for a WSS Process with a Fixed FIR:

$$\underbrace{\left(\mathbf{R}_{ss} + \mathbf{R}_{ww}\right)}_{=\,\mathbf{R}_{xx}} \mathbf{h} = \mathbf{r}_{ss}, \qquad \mathbf{r}_{ss} = \left[\, r_{ss}[0] \;\; r_{ss}[1] \;\cdots\; r_{ss}[L-1] \,\right]^T$$

$$\underbrace{\begin{bmatrix} r_{xx}[0] & r_{xx}[1] & \cdots & r_{xx}[L-1] \\ r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[L-2] \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}[L-1] & r_{xx}[L-2] & \cdots & r_{xx}[0] \end{bmatrix}}_{\text{Toeplitz \& symmetric}} \begin{bmatrix} h[0] \\ h[1] \\ \vdots \\ h[L-1] \end{bmatrix} = \begin{bmatrix} r_{ss}[0] \\ r_{ss}[1] \\ \vdots \\ r_{ss}[L-1] \end{bmatrix}$$

Solve the W-H filtering equations ONCE for the filter $\mathbf{h}$.
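A sketch of the fixed-length case under the same illustrative ACF assumptions: the $L \times L$ system is solved once, and the resulting filter then slides over the data as an ordinary convolution.

```python
# Fixed-length-L FIR Wiener filter: one Toeplitz solve, then filtering.
import numpy as np
from scipy.linalg import solve_toeplitz

L = 8
r_ss = 0.9 ** np.arange(L)                 # assumed signal ACF r_ss[0..L-1]
r_xx = r_ss + np.r_[1.0, np.zeros(L - 1)]  # add the white-noise ACF
h = solve_toeplitz(r_xx, r_ss)             # solved ONCE

x = np.random.randn(200)                   # stand-in for the observed x[n]
s_hat = np.convolve(x, h)[:len(x)]         # s_hat[n] = sum_k h[k] x[n-k]
```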

Slide 16: Comments on Smoothing: FIR Smoother

$$\hat{\mathbf{s}} = \underbrace{\mathbf{R}_{ss}\left(\mathbf{R}_{ss} + \mathbf{R}_{ww}\right)^{-1}}_{=\,\mathbf{W}}\,\mathbf{x} = \mathbf{W}\mathbf{x}$$

Each row of $\mathbf{W}$ is like a FIR filter: time-varying and non-causal! Block-based.

To interpret this, consider the $N = 1$ case:

$$\hat{s}[0] = \frac{r_{ss}[0]}{r_{ss}[0] + r_{ww}[0]}\;x[0] = \underbrace{\frac{\mathrm{SNR}}{\mathrm{SNR} + 1}}_{\substack{\approx\, 1,\ \text{high SNR} \\ \approx\, 0,\ \text{low SNR}}}\;x[0]$$
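A sketch of the block-based smoother (the ACF model is illustrative; a real use would plug in the modeled $\mathbf{R}_{ss}$ and $\mathbf{R}_{ww}$):

```python
# Block-based FIR smoother: s_hat = R_ss (R_ss + R_ww)^{-1} x.
# Each row of W acts as a time-varying, non-causal FIR filter.
import numpy as np
from scipy.linalg import toeplitz

N = 50
R_ss = toeplitz(0.9 ** np.arange(N))    # assumed signal correlation matrix
R_ww = np.eye(N)                        # unit-variance white noise
W = R_ss @ np.linalg.inv(R_ss + R_ww)   # the smoothing matrix

x = np.random.randn(N)                  # stand-in for the observed block
s_hat = W @ x
```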

Slide 17: Comments on Smoothing: IIR Smoother

Estimate $s[n]$ based on $\{\ldots, x[-1], x[0], x[1], \ldots\}$:

$$\hat{s}[n] = \sum_{k=-\infty}^{\infty} h[k]\, x[n-k] \qquad \text{(a time-invariant, non-causal IIR filter)}$$

The Wiener-Hopf equations then give, in the frequency domain,

$$H(f) = \frac{P_{ss}(f)}{P_{ss}(f) + P_{ww}(f)}$$

so $H(f) \approx 1$ where $P_{ss}(f) \gg P_{ww}(f)$ and $H(f) \approx 0$ where $P_{ss}(f) \ll P_{ww}(f)$.
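A sketch of this frequency response with assumed PSDs (an AR(1)-shaped signal spectrum and white noise), showing the pass/reject behavior:

```python
# IIR smoother response H(f) = Pss(f) / (Pss(f) + Pww(f)): near 1 where
# the signal PSD dominates, near 0 where the noise PSD dominates.
import numpy as np

f = np.linspace(-0.5, 0.5, 512)
Pss = 1.0 / np.abs(1 - 0.9 * np.exp(-2j * np.pi * f)) ** 2  # assumed signal PSD
Pww = np.ones_like(f)                                       # white-noise PSD
H = Pss / (Pss + Pww)
```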

Slide 18: Relationship of Prediction to AR Est. & Yule-Walker

Wiener-Hopf Prediction Equations:

$$\mathbf{R}_{xx}\,\mathbf{h} = \mathbf{r}_{xx}, \qquad \mathbf{r}_{xx} = \left[\, r_{xx}[l] \;\; r_{xx}[l+1] \;\cdots\; r_{xx}[l+N-1] \,\right]^T$$

$$\underbrace{\begin{bmatrix} r_{xx}[0] & r_{xx}[1] & \cdots & r_{xx}[N-1] \\ r_{xx}[1] & r_{xx}[0] & \cdots & r_{xx}[N-2] \\ \vdots & \vdots & \ddots & \vdots \\ r_{xx}[N-1] & r_{xx}[N-2] & \cdots & r_{xx}[0] \end{bmatrix}}_{\text{Toeplitz \& symmetric}} \begin{bmatrix} h[0] \\ h[1] \\ \vdots \\ h[N-1] \end{bmatrix} = \begin{bmatrix} r_{xx}[l] \\ r_{xx}[l+1] \\ \vdots \\ r_{xx}[l+N-1] \end{bmatrix}$$

For $l = 1$ we get EXACTLY the Yule-Walker equations used in Ex. 7.18 to solve for the ML estimates of the AR parameters!! The FIR prediction coefficients are the estimated AR parameters.

Recall: we first estimated the ACF lags $r_{xx}[k]$ using the data, then used those estimates to find estimates of the AR parameters:

$$\hat{\mathbf{R}}_{xx}\,\hat{\mathbf{h}} = \hat{\mathbf{r}}_{xx}$$
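A sketch of that two-step recipe for $l = 1$ (the AR(3) coefficients are illustrative, the same ones used for the sketches on the next slides):

```python
# Simulate an AR(3) process, estimate its ACF lags, then solve the
# l = 1 Wiener-Hopf (Yule-Walker) prediction equations R_xx h = r_xx.
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

rng = np.random.default_rng(0)
a = [1.0, -2.3, 2.0, -0.666]    # assumed a(z), a stable AR(3)
x = lfilter([1.0], a, rng.standard_normal(2000))   # simulated data
p = 3
# Biased ACF estimates r_xx[0..p] from the data
r = np.array([x[:len(x) - k] @ x[k:] for k in range(p + 1)]) / len(x)
h = solve_toeplitz(r[:p], r[1:])   # 1-step prediction coefficients
# h should come out near [2.3, -2.0, 0.666]: the AR parameters
# (up to the sign convention used for a(z)).
```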

Slide 19: Relationship of Prediction to Inverse/Whitening Filter

[Block diagram: on the "imagination & modeling" side, white noise $u[k]$ drives the AR model $1/a(z)$ to produce the signal $x[k]$; on the "physical reality" side, the observed $x[k]$ is passed through the inverse filter $a(z)$, built from the FIR 1-step predictor as $u[k] = x[k] - \hat{x}[k]$, which returns the white noise $u[k]$. The inverse filter is a whitening filter.]
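A sketch of both paths of the diagram (the same illustrative AR(3) coefficients as above):

```python
# Whitening relationship: generate x[k] by driving 1/a(z) with white
# noise, then recover that white noise by filtering x[k] with a(z).
import numpy as np
from scipy.signal import lfilter

a = [1.0, -2.3, 2.0, -0.666]   # assumed a(z) for a stable AR(3) model
u = np.random.randn(1000)      # white noise input ("imagination")
x = lfilter([1.0], a, u)       # AR model 1/a(z): the observed signal
u_hat = lfilter(a, [1.0], x)   # inverse filter a(z): whitened output
assert np.allclose(u, u_hat)   # the whitening filter recovers u[k]
```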


Slide 20: Results for 1-Step Prediction: For AR(3)

[Figure: signal value vs. sample index $k$ (0 to 100), showing the signal, its 1-step prediction, and the prediction error; the error trace has a visibly smaller range than the signal]

At each $k$ we predict $x[k]$ using the past 3 samples.

Application to Data Compression: the smaller dynamic range of the error gives more efficient binary coding (e.g., DPCM, Differential Pulse Code Modulation).
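A sketch of the compression payoff using the same illustrative AR(3) model: the prediction error is just the unit-variance driving noise, with a much smaller dynamic range than the signal itself, so it can be coded with fewer bits.

```python
# DPCM idea: code the 1-step prediction error instead of the signal.
import numpy as np
from scipy.signal import lfilter

a = [1.0, -2.3, 2.0, -0.666]                  # same illustrative AR(3)
x = lfilter([1.0], a, np.random.randn(1000))  # the signal to compress
err = lfilter(a, [1.0], x)                    # prediction error x - x_hat
print(np.ptp(x), np.ptp(err))                 # error range << signal range
```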