Topics in MMSE Estimation for Sparse Approximation
Michael Elad, The Computer Science Department, The Technion – Israel Institute of Technology, Haifa 32000, Israel
Workshop: Sparsity and Computation, Hausdorff Center for Mathematics, University of Bonn, June 7-11, 2010
Joint work with
Irad Yavneh, Matan Protter, Javier Turek
The CS Department, The Technion
Part I - Motivation: Denoising by Averaging Several Sparse Representations
Sparse Representation Denoising
[Figure: the signal model $x = D\alpha$, with $D$ a fixed $N \times K$ dictionary and $\alpha$ sparse.]

Sparse representation modeling: assume that we get a noisy measurement vector

$$y = x + v, \qquad \text{where } v \text{ is AWGN.}$$

Our goal is the recovery of $x$ (or $\alpha$). The common practice is to approximate the solution of

$$\hat{\alpha} = \arg\min_{\alpha}\|\alpha\|_0 \quad \text{s.t.} \quad \|D\alpha - y\|_2^2 \le \varepsilon^2.$$
Orthogonal Matching Pursuit
OMP finds one atom at a time, approximating the solution of

$$\min_{\alpha}\|\alpha\|_0 \quad \text{s.t.} \quad \|D\alpha - y\|_2^2 \le \varepsilon^2.$$

Initialization: set $n=0$, $\hat{\alpha}^0 = 0$, $S^0 = \emptyset$, and $r^0 = y - D\hat{\alpha}^0 = y$.

Main iteration (increment $n \leftarrow n+1$):
1. Compute $E(i) = \min_{z}\|z\,d_i - r^{n-1}\|_2^2$ for $1 \le i \le K$.
2. Choose $i_0$ s.t. $\forall\, 1 \le i \le K,\ E(i_0) \le E(i)$.
3. Update support: $S^n = S^{n-1} \cup \{i_0\}$.
4. LS: $\hat{\alpha}^n = \arg\min_{\alpha}\|D\alpha - y\|_2^2$ s.t. $\mathrm{supp}\{\alpha\} = S^n$.
5. Update residual: $r^n = y - D\hat{\alpha}^n$.

Stop if $\|r^n\|_2 \le \varepsilon$; otherwise, iterate again.
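A minimal sketch of this loop in Python/NumPy (an illustration, not the talk's code; it assumes unit-norm atoms, for which minimizing $E(i)$ is the same as maximizing $|d_i^T r|$):

```python
import numpy as np

def omp(D, y, eps):
    """Greedy OMP: add one atom at a time until ||r||_2 <= eps.

    Assumes the columns of D (the atoms) have unit L2 norm.
    """
    N, K = D.shape
    S = []                        # current support S^n
    r = y.copy()                  # residual r^0 = y
    alpha = np.zeros(K)
    while np.linalg.norm(r) > eps and len(S) < N:
        # For unit-norm atoms, E(i) = ||r||^2 - (d_i^T r)^2, so the
        # minimizer of E(i) is the atom most correlated with r.
        i0 = int(np.argmax(np.abs(D.T @ r)))
        S.append(i0)              # S^n = S^{n-1} U {i0}
        # LS step: min ||D alpha - y||^2 over the current support
        a_s, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)
        alpha = np.zeros(K)
        alpha[S] = a_s
        r = y - D @ alpha         # update the residual
    return alpha, S
```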
Using Several Representations

Consider the denoising problem

$$\min_{\alpha}\|\alpha\|_0 \quad \text{s.t.} \quad \|D\alpha - y\|_2^2 \le \varepsilon^2,$$

and suppose that we can find a group of $J$ candidate solutions $\{\hat{\alpha}_j\}_{j=1}^{J}$ such that

$$\forall j: \quad \|D\hat{\alpha}_j - y\|_2^2 \le \varepsilon^2 \quad \text{and} \quad \|\hat{\alpha}_j\|_0 \ll N.$$

Basic questions:
• What could we do with such a set of competing solutions in order to better denoise y?
• Why should this help?
• How shall we practically find such a set of solutions?
Relevant work: [Leung & Barron ('06)], [Larsson & Selen ('07)], [Schniter et al. ('08)], [Elad & Yavneh ('08)], [Giraud ('08)], [Protter et al. ('10)] …
Generating Many Representations
Our Answer: Randomizing the OMP
The algorithm is the same as the OMP above, except for the random choice of the next atom. Initialization: set $n=0$, $\hat{\alpha}^0 = 0$, $S^0 = \emptyset$, and $r^0 = y - D\hat{\alpha}^0 = y$.

Main iteration (increment $n \leftarrow n+1$):
1. Compute $E(i) = \min_{z}\|z\,d_i - r^{n-1}\|_2^2$ for $1 \le i \le K$.
2. Choose $i_0$ at random, with probability proportional to $\exp\{-c \cdot E(i_0)\}$.*
3. Update support: $S^n = S^{n-1} \cup \{i_0\}$.
4. LS: $\hat{\alpha}^n = \arg\min_{\alpha}\|D\alpha - y\|_2^2$ s.t. $\mathrm{supp}\{\alpha\} = S^n$.
5. Update residual: $r^n = y - D\hat{\alpha}^n$.

Stop if $\|r^n\|_2 \le \varepsilon$; otherwise, iterate again.

* Larsson and Schniter propose a more complicated and deterministic tree-pruning method.

For now, let's set the parameter c manually for best performance. Later we shall define a way to set it automatically.
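A sketch of this randomized variant, reusing the structure of the OMP sketch above (again an illustration; c is set by hand at this point in the talk):

```python
import numpy as np

def randomp(D, y, eps, c, rng=None):
    """RandOMP: like OMP, but the next atom is drawn at random with
    probability proportional to exp(-c * E(i)).

    For unit-norm atoms E(i) = ||r||^2 - (d_i^T r)^2, so this is the
    same as drawing with probability proportional to exp(c*(d_i^T r)^2).
    """
    rng = np.random.default_rng() if rng is None else rng
    N, K = D.shape
    S, r, alpha = [], y.copy(), np.zeros(K)
    while np.linalg.norm(r) > eps and len(S) < N:
        corr2 = (D.T @ r) ** 2
        w = np.exp(c * (corr2 - corr2.max()))    # shifted for stability
        i0 = int(rng.choice(K, p=w / w.sum()))   # random draw, not argmax
        S.append(i0)
        a_s, *_ = np.linalg.lstsq(D[:, S], y, rcond=None)
        alpha = np.zeros(K)
        alpha[S] = a_s
        r = y - D @ alpha
    return alpha
```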
Let's Try

Proposed experiment:
• Form a random $D$ of size $100 \times 200$.
• Multiply by a sparse vector $\alpha_0$ with $\|\alpha_0\|_0 = 10$.
• Add Gaussian iid noise ($\sigma = 1$) and obtain $y = D\alpha_0 + v$.
• Solve
$$\min_{\alpha}\|\alpha\|_0 \quad \text{s.t.} \quad \|D\alpha - y\|_2^2 \le 100$$
using OMP, and obtain $\hat{\alpha}^{OMP}$.
• Use RandOMP and obtain $\{\hat{\alpha}_j^{RandOMP}\}_{j=1}^{1000}$.

Let's look at the obtained representations …
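A sketch of this experiment using the two routines above (the Gaussian coefficients on the support and the value of c are assumptions made for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, k0, sigma = 100, 200, 10, 1.0

# Random dictionary with unit-norm atoms
D = rng.standard_normal((N, K))
D /= np.linalg.norm(D, axis=0)

# Sparse alpha0 with ||alpha0||_0 = 10, and y = D alpha0 + v
alpha0 = np.zeros(K)
support = rng.choice(K, size=k0, replace=False)
alpha0[support] = rng.standard_normal(k0)
y = D @ alpha0 + sigma * rng.standard_normal(N)

eps = 10.0   # so that ||D alpha - y||_2^2 <= 100
alpha_omp, _ = omp(D, y, eps)
reps = [randomp(D, y, eps, c=0.25, rng=rng) for _ in range(1000)]
```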
Some Observations
[Figure: four panels comparing the 1000 Random-OMP runs against the single OMP run: (a) histogram of the cardinalities; (b) histogram of the representation errors; (c) histogram of the noise attenuation (denoising) factors; (d) noise attenuation versus cardinality.]

The denoising (noise-attenuation) factor of a representation $\hat{\alpha}$ is measured by

$$\frac{\|D\hat{\alpha} - D\alpha_0\|_2^2}{\|y - D\alpha_0\|_2^2}.$$
We see that:
• The OMP gives the sparsest solution.
• Nevertheless, it is not the most effective for denoising.
• The cardinality of a representation does not reveal its efficiency.
The Surprise … (to some of us)
[Figure: the original representation, the OMP representation, and the averaged representation, plotted entry by entry over the index range 1–200.]

Let's propose the average

$$\hat{\alpha} = \frac{1}{1000}\sum_{j=1}^{1000}\hat{\alpha}_j^{RandOMP}$$

as our representation. This representation IS NOT SPARSE AT ALL, but its noise attenuation is 0.06 (OMP gives 0.16).
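Continuing the experiment sketch above, the averaged representation and its noise attenuation would be computed along these lines:

```python
import numpy as np

alpha_avg = np.mean(reps, axis=0)   # average of the 1000 RandOMP solutions

def denoising_factor(D, alpha_hat, alpha0, y):
    """Noise attenuation: ||D(alpha_hat - alpha0)||^2 / ||y - D alpha0||^2."""
    return (np.linalg.norm(D @ (alpha_hat - alpha0)) ** 2
            / np.linalg.norm(y - D @ alpha0) ** 2)

print(denoising_factor(D, alpha_avg, alpha0, y))   # the dense average
print(denoising_factor(D, alpha_omp, alpha0, y))   # the sparse OMP solution
```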
Repeat this experiment many times:
• Dictionary (random) of size N=100, K=200.
• True support of $\alpha$ is 10.
• $\sigma_x = 1$ and $\varepsilon = 10$.
• We run OMP for denoising.
• We run RandOMP J=1000 times and average:
$$\hat{\alpha} = \frac{1}{J}\sum_{j=1}^{J}\hat{\alpha}_j^{RandOMP}.$$
• Denoising is assessed by
$$\frac{\|D\hat{\alpha} - D\alpha_0\|_2^2}{\|y - D\alpha_0\|_2^2}.$$

[Figure: scatter of the RandOMP denoising factor versus the OMP denoising factor over these experiments, with the mean point marked, along with the cases of a zero solution, where $\|y\|_2^2 \le 100$.]
Part II - Explanation: It is Time to be More Precise
Our Signal Model
[Figure: the signal model $x = D\alpha$, with $D$ a fixed $N \times K$ dictionary.]

D is fixed and known. Assume that $\alpha$ is built by:
• Choosing the support $s$ with probability $P(s)$ from all the $2^K$ possibilities $\Omega$. Let's assume that the events $i \in s$ occur independently, with $P(i \in s) = P_i$.
• Choosing the $\alpha_s$ coefficients as iid Gaussian entries $\sim \mathcal{N}(0, \sigma_x^2)$.

The ideal signal is $x = D\alpha = D_s\alpha_s$. The p.d.f.'s $P(\alpha)$ and $P(x)$ are therefore clear and known.
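A sketch of drawing a signal from this generative model (assuming the $P_i$ are given as a length-K vector):

```python
import numpy as np

def draw_signal(D, P, sigma_x, rng):
    """Draw alpha: i is in s with probability P[i]; alpha_s ~ N(0, sigma_x^2)."""
    K = D.shape[1]
    s = rng.random(K) < P                       # Bernoulli support
    alpha = np.zeros(K)
    alpha[s] = sigma_x * rng.standard_normal(int(s.sum()))
    return D @ alpha, alpha                     # x = D alpha, and alpha itself
```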
Adding Noise
[Figure: the noisy measurement $y = x + v = D\alpha + v$.]

Noise assumed: the noise $v$ is an additive white Gaussian vector with probability $P_v(v)$, so that

$$P(y|x) = C \cdot \exp\left\{-\frac{\|y - x\|_2^2}{2\sigma^2}\right\}.$$

The conditional p.d.f.'s $P(y|\alpha)$, $P(\alpha|y)$, and even $P(y|s)$, $P(s|y)$, are all clear and well-defined (although they may appear nasty).
The Key – The Posterior P(α|y)

We have access to $P(\alpha|y)$, and with it two estimators*:

$$\hat{\alpha}^{MAP} = \operatorname{ArgMax}_{\alpha} P(\alpha|y) \qquad\qquad \hat{\alpha}^{MMSE} = E\{\alpha\,|\,y\}$$

along with the oracle estimator $\hat{\alpha}^{oracle}$, for which the support $s$ is known.

The estimation of $x$ is done by estimating $\alpha$ and multiplying by $D$. These two estimators are impossible to compute, as we show next.

* Actually, there is a delicate problem with this definition, due to the unavoidable mixture of continuous and discrete PDFs. The solution is to estimate the MAP's support $S$.
Let's Start with the Oracle*

$$P(\alpha_s \,|\, y, s) = \frac{P(y|\alpha_s)\,P(\alpha_s)}{P(y)}$$

$$P(\alpha_s) \propto \exp\left\{-\frac{\|\alpha_s\|_2^2}{2\sigma_x^2}\right\}, \qquad P(y|\alpha_s) \propto \exp\left\{-\frac{\|D_s\alpha_s - y\|_2^2}{2\sigma^2}\right\}$$

$$P(\alpha_s \,|\, y, s) \propto \exp\left\{-\frac{\|D_s\alpha_s - y\|_2^2}{2\sigma^2} - \frac{\|\alpha_s\|_2^2}{2\sigma_x^2}\right\}$$

$$\hat{\alpha}_s = \left(\frac{1}{\sigma^2}D_s^TD_s + \frac{1}{\sigma_x^2}I\right)^{-1}\frac{1}{\sigma^2}D_s^Ty = Q_s^{-1}h_s$$

* When $s$ is known.

Comments:
• This estimate is both the MAP and the MMSE.
• The oracle estimate of $x$ is obtained by multiplication by $D_s$.

Marginalizing over the coefficients gives

$$P(y|s) = \int P(y|s,\alpha_s)\,P(\alpha_s)\,d\alpha_s = \dots \propto \exp\left\{\frac{h_s^TQ_s^{-1}h_s}{2} + \frac{\log\left(\det(Q_s^{-1})\right)}{2}\right\},$$

and, based on our prior for generating the support,

$$P(s) = \prod_{i \in s}P_i \prod_{j \notin s}(1 - P_j).$$
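A sketch of the oracle estimate, directly from the formula above:

```python
import numpy as np

def oracle(D, y, s, sigma, sigma_x):
    """Oracle estimate alpha_s = Q_s^{-1} h_s for a known support s."""
    Ds = D[:, s]
    Q = Ds.T @ Ds / sigma**2 + np.eye(len(s)) / sigma_x**2
    h = Ds.T @ y / sigma**2
    return np.linalg.solve(Q, h)
```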
The MAP Estimation
$$\hat{s}^{MAP} = \operatorname{ArgMax}_s P(s|y) = \operatorname{ArgMax}_s \frac{P(y|s)\,P(s)}{P(y)}$$

$$\hat{s}^{MAP} = \operatorname{ArgMax}_s\; \exp\left\{\frac{h_s^TQ_s^{-1}h_s}{2} + \frac{\log\left(\det(Q_s^{-1})\right)}{2}\right\}\prod_{i \in s}P_i\prod_{j \notin s}(1 - P_j)$$

or, in log form,

$$\hat{s}^{MAP} = \operatorname{ArgMax}_s\left\{\frac{h_s^TQ_s^{-1}h_s}{2} + \frac{\log\left(\det(Q_s^{-1})\right)}{2} + \sum_{i \in s}\log P_i + \sum_{j \notin s}\log(1 - P_j)\right\}.$$

Implications:
• The MAP estimator requires testing all the possible supports for the maximization. For the found support, the oracle formula is used.
• In typical problems this process is impossible, as there is a combinatorial set of possibilities.
• This is why we rarely use the exact MAP, and typically replace it with approximation algorithms (e.g., OMP).
The MMSE Estimation

$$P(s|y) \propto P(s)\,P(y|s) = \dots \propto \prod_{i \in s}P_i\prod_{j \notin s}(1 - P_j)\cdot\exp\left\{\frac{h_s^TQ_s^{-1}h_s}{2} + \frac{\log\left(\det(Q_s^{-1})\right)}{2}\right\}$$

$$E\{\alpha \,|\, y, s\} = \hat{\alpha}_s = Q_s^{-1}h_s$$

This is the oracle for $s$, as we have seen before. Thus,

$$\hat{\alpha}^{MMSE} = E\{\alpha\,|\,y\} = \sum_s P(s|y)\,E\{\alpha\,|\,y,s\} = \sum_s P(s|y)\,\hat{\alpha}_s.$$
Implications:
• The best estimator (in terms of L2 error) is a weighted average of many sparse representations!!!
• As in the MAP case, in typical problems one cannot compute this expression, as the summation is over a combinatorial set of possibilities.
• We should propose approximations here as well. For a small dictionary, though, the sum can be evaluated exactly, as sketched below.
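For a small dictionary, the weighted average can be computed by brute force over all $2^K$ supports — a sketch, reusing the `oracle` routine above and evaluating $P(y|s)$ through the equivalent Gaussian marginal $y|s \sim \mathcal{N}(0, \sigma_x^2 D_sD_s^T + \sigma^2 I)$, which sidesteps the normalization constants:

```python
import numpy as np
from itertools import combinations

def log_marginal(D, y, s, sigma, sigma_x):
    """log P(y|s) up to an s-independent constant."""
    C = sigma**2 * np.eye(D.shape[0])
    if s:
        C += sigma_x**2 * (D[:, s] @ D[:, s].T)
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y))

def exact_mmse(D, y, P, sigma, sigma_x):
    """MMSE estimate: sum_s P(s|y) * oracle(s), over all 2^K supports."""
    K = D.shape[1]
    log_w, est = [], []
    for size in range(K + 1):
        for s in map(list, combinations(range(K), size)):
            mask = np.zeros(K, dtype=bool)
            mask[s] = True
            log_prior = np.log(P[mask]).sum() + np.log(1 - P[~mask]).sum()
            a = np.zeros(K)
            if s:
                a[s] = oracle(D, y, s, sigma, sigma_x)
            log_w.append(log_prior + log_marginal(D, y, s, sigma, sigma_x))
            est.append(a)
    w = np.exp(np.array(log_w) - max(log_w))
    return (w / w.sum()) @ np.array(est)    # the weighted average
```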
The Case of |s|=1 and P_i=P

For a single-atom support $s = \{i\}$,

$$P(s|y) \propto \frac{P}{1-P}\,\exp\left\{\frac{h_s^TQ_s^{-1}h_s}{2} + \frac{\log\left(\det(Q_s^{-1})\right)}{2}\right\} \propto \exp\left\{\frac{\sigma_x^2}{2\sigma^2(\sigma^2+\sigma_x^2)}\,(y^Td_i)^2\right\},$$

where $d_i$ is the i-th atom in $D$. The constant multiplying $(y^Td_i)^2$ is our $c$ in the Random-OMP.

Based on this we can propose a greedy algorithm for both MAP and MMSE:
• MAP – choose the atom with the largest inner product (out of K), and do so one at a time, while freezing the previous ones (almost OMP).
• MMSE – draw an atom at random in a greedy fashion, based on the above probability set, getting close to $P(s|y)$ in the overall draw (almost RandOMP).
Comparative Results

The following results correspond to a small dictionary (10×16), for which the combinatorial formulas can be evaluated exactly as well. Parameters:
• N,K: 10×16
• P=0.1 (varying cardinality)
• $\sigma_x = 1$
• J=50 (RandOMP)
• Averaged over 1000 experiments
[Figure: relative representation mean-squared error as a function of the noise level, comparing the Oracle, the exact MMSE and MAP, OMP, and Rand-OMP.]
Part III – Diving In: A Closer Look at the Unitary Case $D^TD = DD^T = I$
A Few Basic Observations

Let us denote $\beta = D^Ty$. Then:

$$Q_s = \frac{1}{\sigma^2}D_s^TD_s + \frac{1}{\sigma_x^2}I = \frac{\sigma^2 + \sigma_x^2}{\sigma^2\sigma_x^2}\,I$$

$$h_s = \frac{1}{\sigma^2}D_s^Ty = \frac{1}{\sigma^2}\beta_s$$

$$\hat{\alpha}_s = Q_s^{-1}h_s = \frac{\sigma_x^2}{\sigma^2+\sigma_x^2}\,\beta_s = c^2\beta_s \quad \text{(the Oracle)}, \qquad \text{where } c^2 = \frac{\sigma_x^2}{\sigma^2+\sigma_x^2}$$

$$\frac{h_s^TQ_s^{-1}h_s}{2} = \frac{c^2\|\beta_s\|_2^2}{2\sigma^2}, \qquad \frac{\log\left(\det(Q_s^{-1})\right)}{2} = \frac{|S|}{2}\log\left(\sigma_x^2(1-c^2)\right)$$
Back to the MAP Estimation
In the unitary case, the posterior of the support factorizes:

$$P(S|y) \propto \exp\left\{\frac{h_s^TQ_s^{-1}h_s}{2} + \frac{\log\left(\det(Q_s^{-1})\right)}{2}\right\}\prod_{i \in s}P_i\prod_{j \notin s}(1 - P_j) \;\propto\; \prod_{i \in s} q_i,$$

where

$$q_i = \frac{P_i}{1-P_i}\,\sqrt{1-c^2}\,\exp\left\{\frac{c^2\beta_i^2}{2\sigma^2}\right\}.$$
The MAP Estimator
$\hat{S}^{MAP}$ is obtained by maximizing the expression

$$P(S|y) \propto \prod_{i \in S} q_i, \qquad q_i = \frac{P_i}{1-P_i}\,\sqrt{1-c^2}\,\exp\left\{\frac{c^2\beta_i^2}{2\sigma^2}\right\}.$$

Thus, every $i$ such that $q_i > 1$ should be in the support, which leads to

$$\hat{\alpha}_i^{MAP} = \begin{cases} c^2\beta_i, & c^2\beta_i^2 > 2\sigma^2\log\left(\frac{1-P_i}{P_i\sqrt{1-c^2}}\right) \\ 0, & \text{otherwise.} \end{cases}$$

[Figure: $\hat{\alpha}_i^{MAP}$ as a function of $\beta_i$ for P=0.1, σ=0.3, $\sigma_x$=1 — a hard-thresholding curve.]
The MMSE Estimation
Some algebra and we get that

$$\hat{\alpha}_i^{MMSE} = \frac{q_i}{1+q_i}\,c^2\beta_i, \qquad q_i = \frac{P_i}{1-P_i}\,\sqrt{1-c^2}\,\exp\left\{\frac{c^2\beta_i^2}{2\sigma^2}\right\}.$$

[Figure: $\hat{\alpha}_i^{MMSE}$ as a function of $\beta_i$ for P=0.1, σ=0.3, $\sigma_x$=1.]

This result leads to a dense representation vector. The curve is a smoothed version of the MAP one.
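A sketch of both closed-form estimators for a unitary dictionary (an O(K) computation, in contrast to the combinatorial general case):

```python
import numpy as np

def unitary_estimates(D, y, P, sigma, sigma_x):
    """Closed-form MAP and MMSE estimates for unitary D (D^T D = I)."""
    beta = D.T @ y
    c2 = sigma_x**2 / (sigma_x**2 + sigma**2)     # this is c^2
    q = P / (1 - P) * np.sqrt(1 - c2) * np.exp(c2 * beta**2 / (2 * sigma**2))
    alpha_map = np.where(q > 1, c2 * beta, 0.0)   # hard threshold
    alpha_mmse = q / (1 + q) * c2 * beta          # smooth shrinkage
    return alpha_map, alpha_mmse
```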
What About the Error?

Denote $g_i = \frac{q_i}{1+q_i}$. Then:

$$E\left\{\|\hat{\alpha}^{oracle} - \alpha\|_2^2\right\} = \operatorname{trace}\left(Q_s^{-1}\right) = \dots = \sigma^2c^2\sum_{i=1}^{n} g_i$$

$$E\left\{\|\hat{\alpha}^{MMSE} - \alpha\|_2^2\right\} = \sum_s P(s|y)\left[\operatorname{trace}\left(Q_s^{-1}\right) + \|\hat{\alpha}_s^{oracle} - \hat{\alpha}^{MMSE}\|_2^2\right] = \dots = \sigma^2c^2\sum_{i=1}^{n} g_i + c^4\sum_{i=1}^{n}\beta_i^2\,g_i(1-g_i)$$

$$E\left\{\|\hat{\alpha}^{MAP} - \alpha\|_2^2\right\} = E\left\{\|\hat{\alpha}^{MMSE} - \alpha\|_2^2\right\} + \|\hat{\alpha}^{MAP} - \hat{\alpha}^{MMSE}\|_2^2 = \dots = \sigma^2c^2\sum_{i=1}^{n} g_i + c^4\sum_{i=1}^{n}\beta_i^2\left[g_i + I_i^{MAP}(1-2g_i)\right]$$
A Synthetic Experiment
The following results correspond to a dictionary of size 100×100. Parameters:
• n,K: 100×100
• P=0.1
• $\sigma_x = 1$
• Averaged over 1000 experiments
The average errors are shown relative to $n\sigma^2$.

[Figure: relative mean-squared error as a function of σ ∈ [0, 2], showing the empirical and theoretical curves for the Oracle, the MMSE, and the MAP.]
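A sketch evaluating the reconstructed error expressions above for a given measurement; averaging these over many random draws, next to the empirical errors of the closed-form estimators, is the kind of comparison shown in this experiment:

```python
import numpy as np

def theoretical_errors(beta, P, sigma, sigma_x):
    """The three conditional-on-y error expressions (unitary case)."""
    c2 = sigma_x**2 / (sigma_x**2 + sigma**2)
    q = P / (1 - P) * np.sqrt(1 - c2) * np.exp(c2 * beta**2 / (2 * sigma**2))
    g = q / (1 + q)
    err_oracle = sigma**2 * c2 * g.sum()
    err_mmse = err_oracle + c2**2 * (beta**2 * g * (1 - g)).sum()
    I_map = q > 1                                  # the MAP support indicator
    err_map = err_oracle + c2**2 * (beta**2 * (g + I_map * (1 - 2 * g))).sum()
    return err_oracle, err_mmse, err_map
```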
Part IV - Theory: Estimation Errors
Useful Lemma
Let $(a_k, b_k)$, $k = 1, 2, \dots, n$, be pairs of positive real numbers. Let $m$ be the index of a pair such that

$$\forall k, \quad \frac{a_k}{b_k} \le \frac{a_m}{b_m}.$$

Then

$$\frac{\sum_{k=1}^{n} a_k}{\sum_{k=1}^{n} b_k} \le \frac{a_m}{b_m}.$$

Equality is obtained only if all the ratios $a_k/b_k$ are equal.

We are interested in this result because of the error expressions derived above:

$$E\left\{\|\hat{\alpha}^{oracle} - \alpha\|_2^2\right\} = \sigma^2c^2\sum_{i=1}^{n} g_i$$

$$E\left\{\|\hat{\alpha}^{MMSE} - \alpha\|_2^2\right\} = \sigma^2c^2\sum_{i=1}^{n} g_i + c^4\sum_{i=1}^{n}\beta_i^2\,g_i(1-g_i)$$

$$E\left\{\|\hat{\alpha}^{MAP} - \alpha\|_2^2\right\} = \sigma^2c^2\sum_{i=1}^{n} g_i + c^4\sum_{i=1}^{n}\beta_i^2\left[g_i + I_i^{MAP}(1-2g_i)\right]$$

This leads to …
Theorem 1 – MMSE Error
Define $G_k = \frac{P_k}{1-P_k}\sqrt{1-c^2}$, and choose $m$ such that $\forall k,\ G_m \ge G_k$. Then

$$\frac{E\left\{\|\hat{\alpha}^{MMSE} - \alpha\|_2^2\right\}}{E\left\{\|\hat{\alpha}^{oracle} - \alpha\|_2^2\right\}} \le \begin{cases} 1 + \frac{1}{4}\left(1 + 2\log\frac{1}{G_m}\right), & G_m \le e^{-1} \\ 1 + \frac{4}{G_m\,e}, & \text{otherwise.} \end{cases}$$

For $P_k = P \ll \frac{1}{K}$, this error-ratio bound becomes

$$\frac{E\left\{\|\hat{\alpha}^{MMSE} - \alpha\|_2^2\right\}}{E\left\{\|\hat{\alpha}^{oracle} - \alpha\|_2^2\right\}} \le \mathrm{Const} \cdot \log K.$$
Theorem 2 – MAP Error
Define $G_k = \frac{P_k}{1-P_k}\sqrt{1-c^2}$, and choose $m$ such that $\forall k,\ G_m \ge G_k$. Then

$$\frac{E\left\{\|\hat{\alpha}^{MAP} - \alpha\|_2^2\right\}}{E\left\{\|\hat{\alpha}^{oracle} - \alpha\|_2^2\right\}} \le \begin{cases} 1 + 2\log\frac{1}{G_m}, & G_m \le e^{-1} \\ 1 + \frac{2}{G_m\,e}, & \text{otherwise.} \end{cases}$$

For $P_k = P \ll \frac{1}{K}$, this error-ratio bound becomes

$$\frac{E\left\{\|\hat{\alpha}^{MAP} - \alpha\|_2^2\right\}}{E\left\{\|\hat{\alpha}^{oracle} - \alpha\|_2^2\right\}} \le \mathrm{Const} \cdot \log K.$$
The Bounds’ Factors vs. P
[Figure: the MMSE and MAP bound factors of Theorems 1 and 2, plotted as functions of P on a logarithmic axis. Parameters: P ∈ [10⁻⁴, 1], $\sigma_x$ = 1, σ = 0.3.]

Notice that the tendency of the two estimators to align for P→0 is not reflected in these bounds.
Part V – We Are Done: Summary and Conclusions
Today We Have Seen that …
How? Sparsity and redundancy are used for denoising of signals/images, by finding the sparsest representation and using it to recover the clean signal.

Can we do better? Yes! Averaging several representations leads to better denoising, as it approximates the MMSE.

What about the unitary case? MAP and MMSE enjoy closed-form, exact and cheap formulae. Their error is bounded and tightly related to the oracle's error.

More on these topics (including the slides and the relevant papers) can be found at http://www.cs.technion.ac.il/~elad