
Approximation Algorithms for Model-Based Compressive Sensing

Jaron Chen

Agenda

• Motivation

• Introduction to Approximate Model Algorithms

• Mathematical Background

• Approximate Model Iterative Hard Thresholding

• Approximate Model Compressive Sampling Matching Pursuit

• Improved Recovery via Boosting

• Summary

Motivation

Compressive sensing allows a sparse signal to be recovered from a limited number of measurements.

Model-based compressive sensing exploits the structure of the signal to recover it from even fewer measurements.

Motivation: Robust Sparse Recovery

$A$ is an $m \times n$ measurement matrix.

$x$ is the original $n$-dimensional $k$-sparse signal.

$e$ is the noise vector, and the measurements are $y = Ax + e$.

A robust sparse recovery algorithm returns an estimate $\hat{x}$ satisfying

$$\|x - \hat{x}\|_2 \le C\,\|e\|_2,$$

where $C$ is the constant approximation factor.

The number of measurements needed is $m = O\!\left(k \log \frac{n}{k}\right)$.

For large $n$, the $\log \frac{n}{k}$ factor can make $m$ very large.

Introduction to Approximate Model Algorithms

Approximation-Tolerant Model-Based Compressive Sensing

Careful signal modeling can overcome this limitation. One approach is approximation-tolerant model-based compressive sensing, a framework whose sparse-recovery algorithms only require approximate solutions to the model-projection problem.

This can reduce the number of measurements needed to recover the signal.

Mathematical Background

Mathematical Definitions

Let $[n]$ denote the set $\{1, 2, \ldots, n\}$ and let $\Omega \subseteq [n]$.

$$(x_\Omega)_i = \begin{cases} x_i, & i \in \Omega \\ 0, & \text{otherwise} \end{cases}$$

For a matrix $A \in \mathbb{R}^{m \times n}$,

$A_\Omega$ is the submatrix containing the columns corresponding to $\Omega$ ($A_\Omega \in \mathbb{R}^{m \times |\Omega|}$).
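To make the notation concrete, here is a minimal sketch in Python (NumPy assumed; the function names `restrict` and `submatrix` are illustrative, not from the source):

```python
import numpy as np

def restrict(x, omega):
    """Return x_Omega: keep the entries of x indexed by omega, zero out the rest."""
    out = np.zeros_like(x)
    out[list(omega)] = x[list(omega)]
    return out

def submatrix(A, omega):
    """Return A_Omega: the columns of A indexed by omega (an m x |Omega| matrix)."""
    return A[:, sorted(omega)]
```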

Structured Sparsity Model

A structured sparsity model $\mathcal{M} \subseteq \mathbb{R}^n$ is the set of vectors $\mathcal{M} = \{x \in \mathbb{R}^n \mid \operatorname{supp}(x) \subseteq S \text{ for some } S \in \mathbb{M}\}$, where $\mathbb{M} = \{\Omega_1, \ldots, \Omega_l\}$ and $l = |\mathbb{M}|$ is the size of the model.

Let $\mathbb{M}^+ = \{\Omega \subseteq [n] \mid \Omega \subseteq S \text{ for some } S \in \mathbb{M}\}$.

Equivalently, $\mathcal{M} = \{x \in \mathbb{R}^n \mid \operatorname{supp}(x) \in \mathbb{M}^+\}$.

Model Restricted Isometry Property

The matrix $A \in \mathbb{R}^{m \times n}$ satisfies the $(\delta, \mathbb{M})$-model-RIP if the following inequalities hold for all $x$ with $\operatorname{supp}(x) \in \mathbb{M}^+$:

$$(1 - \delta)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1 + \delta)\,\|x\|_2^2.$$

Let $A \in \mathbb{R}^{m \times n}$ satisfy the $(\delta, \mathbb{M})$-model-RIP. Let $\Omega \in \mathbb{M}^+$, $x \in \mathbb{R}^n$, and $y \in \mathbb{R}^m$. The following properties hold:

$$\|A_\Omega^T y\|_2 \le \sqrt{1 + \delta}\,\|y\|_2$$

$$\|A_\Omega^T A_\Omega x\|_2 \le (1 + \delta)\,\|x\|_2$$

$$\|(I - A_\Omega^T A_\Omega)\,x\|_2 \le \delta\,\|x\|_2$$

Model-projection oracle

A model-projection oracle is a function $M: \mathbb{R}^n \to \mathcal{P}([n])$ (where $\mathcal{P}([n])$ denotes the power set of $[n]$) that satisfies the output model sparsity and optimal model projection properties.

Output model sparsity: $M(x) \in \mathbb{M}^+$.

Optimal model projection:

Let $\Omega' = M(x)$. Then $\|x - x_{\Omega'}\|_2 = \min_{\Omega \in \mathbb{M}} \|x - x_\Omega\|_2$.

Model Iterative Hard Thresholding

One of the most popular algorithms for sparse recovery is iterative hard thresholding (IHT). It can be modified to project onto the model $\mathcal{M}$:

$$x^{i+1} \leftarrow M\!\left(x^i + A^T(y - Ax^i)\right),$$

where $x^1$, the initial signal estimate, is $0$. The iteration is executed until convergence.
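A minimal sketch of this iteration, assuming a user-supplied model-projection oracle `model_project` that returns a support in $\mathbb{M}^+$ (the oracle itself is model-specific and not specified here):

```python
import numpy as np

def model_iht(y, A, model_project, num_iters=100):
    """Model-IHT: x^{i+1} <- projection of (x^i + A^T(y - A x^i)) onto the model."""
    n = A.shape[1]
    x = np.zeros(n)                        # initial estimate is 0
    for _ in range(num_iters):
        b = x + A.T @ (y - A @ x)          # gradient step
        omega = list(model_project(b))     # support in M^+ chosen by the oracle
        x = np.zeros(n)
        x[omega] = b[omega]                # keep only the entries on the model support
    return x
```

For the standard $k$-sparse model, `model_project` would simply be top-$k$ selection, as revisited in the "Incorrect Approach" slides below.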

Measurement Bound

Let $k = \max_{\Omega \in \mathbb{M}} |\Omega|$ and $0 < \delta < 1$.

For a random measurement matrix to satisfy the $(\delta, \mathbb{M})$-model-RIP with high probability, it suffices that there is a constant $c$ such that, for any $t > 0$,

$$m \ge \frac{c}{\delta^2}\left(k \log\frac{1}{\delta} + \log|\mathbb{M}| + t\right).$$

Hence $m = O(k + \log|\mathbb{M}|)$, which is $O(k)$ for structured models with $\log|\mathbb{M}| = O(k)$, independent of $n$.

Incorrect Approach

For structured sparsity models, computing the optimal model projection can be difficult. The problem can be simplified by using approximate model-projection oracles.

In the standard compressive sensing setting, the model consists of all $k$-sparse signals, so the exact oracle $T_k(\cdot)$ returns the best $k$-term approximation of $x$ (the $k$ largest-magnitude coefficients).

Let $c$ be an arbitrary constant and let $T_k'$ be an approximate projection oracle such that, for any $a \in \mathbb{R}^n$,

$$\|a - T_k'(a)\|_2 \le c\,\|a - T_k(a)\|_2.$$

Incorrect Approach

Adapting this to Model-IHT gives

$$x^{i+1} \leftarrow T_k'\!\left(x^i + A^T(y - Ax^i)\right),$$

with $x^0 = 0$, so the first iterate is $x^1 \leftarrow T_k'(A^T y)$.

Why is this approach incorrect?

Consider a $1$-sparse signal $x$ with $x_1 = 1$ and $x_i = 0$ for $i \ne 1$, and a matrix $A$ with $(\delta, O(1))$-RIP for small $\delta$. The guarantee on $T_k'$ is only multiplicative in the tail error, so a valid (but adversarial) choice of $T_k'$ can return the empty support at every iteration of

$$x^{i+1} \leftarrow T_k'\!\left(x^i + A^T(y - Ax^i)\right),$$

giving $x^1 = x^0 = 0$: the iterates never move, and the signal is never recovered.

Algorithm Assumptions

The algorithms use two projection oracles. Given $x \in \mathbb{R}^n$, a tail-approximation oracle returns a support $\Omega_t$ in the model such that the norm of the tail, $\|x - x_{\Omega_t}\|_2$, is approximately minimized. A head-approximation oracle returns a support $\Omega_h$ in the model such that the norm of the head, $\|x_{\Omega_h}\|_2$, is approximately maximized.

Approximate Oracles

Head Approximation Oracle

Let $\mathbb{M}, \mathbb{M}_H \subseteq \mathcal{P}([n])$, $p \ge 1$, and $c_H \in \mathbb{R}$.

$H: \mathbb{R}^n \to \mathcal{P}([n])$ is a $(c_H, \mathbb{M}, \mathbb{M}_H, p)$-head-approximation oracle if the output model sparsity and head approximation properties hold.

Output model sparsity: $H(x) \in \mathbb{M}_H^+$.

Head approximation:

Let $\Omega' = H(x)$. Then $\|x_{\Omega'}\|_p \ge c_H\,\|x_\Omega\|_p$ for all $\Omega \in \mathbb{M}$.

Approximation Oracles

Tail Approximation Oracle

Let $\mathbb{M}, \mathbb{M}_T \subseteq \mathcal{P}([n])$, $p \ge 1$, and $c_T \in \mathbb{R}$.

$T: \mathbb{R}^n \to \mathcal{P}([n])$ is a $(c_T, \mathbb{M}, \mathbb{M}_T, p)$-tail-approximation oracle if the output model sparsity and tail approximation properties hold.

Output model sparsity: $T(x) \in \mathbb{M}_T^+$.

Tail approximation:

Let $\Omega' = T(x)$. Then $\|x - x_{\Omega'}\|_p \le c_T\,\|x - x_\Omega\|_p$ for all $\Omega \in \mathbb{M}$.
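For intuition, in the standard $k$-sparse model exact top-$k$ selection serves as both a head- and a tail-approximation oracle with $c_H = c_T = 1$ and $p = 2$. A minimal sketch (the name `top_k_support` is illustrative, not from the source):

```python
import numpy as np

def top_k_support(x, k):
    """Support of the k largest-magnitude entries of x. For the standard k-sparse
    model this is an exact head- and tail-approximation oracle (c_H = c_T = 1)."""
    return np.argsort(np.abs(x))[-k:]
```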

Approximate Algorithms

Approximate Model Iterative Hard Thresholding

Approximate Model-IHT Algorithm

Assumptions on Algorithm

1. $x \in \mathbb{R}^n$ and $x \in \mathcal{M}$

2. $y = Ax + e$ for $e \in \mathbb{R}^m$

3. $T$ is a $(c_T, \mathbb{M}, \mathbb{M}_T, 2)$-tail-approximation oracle.

4. $H$ is a $(c_H, \mathbb{M}_T \oplus \mathbb{M}, \mathbb{M}_H, 2)$-head-approximation oracle.

5. $A$ has the $(\delta, \mathbb{M} \oplus \mathbb{M}_T \oplus \mathbb{M}_H)$-model-RIP.

Here the sum of two models is defined as $\mathbb{C} = \mathbb{A} \oplus \mathbb{B} = \{\Omega \cup \Gamma \mid \Omega \in \mathbb{A} \text{ and } \Gamma \in \mathbb{B}\}$.
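Putting the assumptions together, AM-IHT computes the proxy $b^i = A^T(y - Ax^i)$, forms $a = x^i + H(b^i)$, and sets $x^{i+1} = T(a)$, where $H(\cdot)$ and $T(\cdot)$ here denote projections onto the supports returned by the oracles (these are exactly the steps used in the proof below). A minimal sketch, with user-supplied `head_oracle` and `tail_oracle` returning index sets (all names are illustrative):

```python
import numpy as np

def project(v, support):
    """Keep the entries of v on `support`, zero out the rest."""
    out = np.zeros_like(v)
    out[list(support)] = v[list(support)]
    return out

def am_iht(y, A, head_oracle, tail_oracle, num_iters=50):
    """Approximate Model-IHT: x^{i+1} <- T(x^i + H(A^T(y - A x^i)))."""
    n = A.shape[1]
    x = np.zeros(n)                            # x^0 = 0
    for _ in range(num_iters):
        b = A.T @ (y - A @ x)                  # proxy b^i
        a = x + project(b, head_oracle(b))     # a = x^i + H(b^i)
        x = project(a, tail_oracle(a))         # x^{i+1} = T(a)
    return x
```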

Model-RIP on relevant vectors

Let $r^i = x - x^i$, $\Omega = \operatorname{supp}(r^i)$, and $\Gamma = \operatorname{supp}(H(b^i))$. For all $x' \in \mathbb{R}^n$ with $\operatorname{supp}(x') \subseteq \Omega \cup \Gamma$,

$$(1 - \delta)\,\|x'\|_2^2 \le \|Ax'\|_2^2 \le (1 + \delta)\,\|x'\|_2^2.$$

Proof: Because $\operatorname{supp}(x^i) \in \mathbb{M}_T^+$ and $\operatorname{supp}(x) \in \mathbb{M}^+$, we have $\operatorname{supp}(x - x^i) \in (\mathbb{M}_T \oplus \mathbb{M})^+$, and therefore $\Omega \in (\mathbb{M}_T \oplus \mathbb{M})^+$. Since $\operatorname{supp}(H(b^i)) \in \mathbb{M}_H^+$, it follows that $\Omega \cup \Gamma \in (\mathbb{M} \oplus \mathbb{M}_T \oplus \mathbb{M}_H)^+$, so the model-RIP can be applied.

Geometric Convergence

Let $r^i = x - x^i$, where $x^i$ is the signal estimate at iteration $i$. Then

$$\|r^{i+1}\|_2 \le \alpha\,\|r^i\|_2 + \beta\,\|e\|_2,$$

where

$$\alpha = (1 + c_T)\left(\delta + \sqrt{1 - \alpha_0^2}\right), \qquad \beta = (1 + c_T)\left(\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1 - \alpha_0^2}} + \sqrt{1 + \delta}\right),$$

$$\alpha_0 = c_H(1 - \delta) - \delta, \qquad \beta_0 = (1 + c_H)\sqrt{1 + \delta}.$$

Proof of Geometric Convergence

Let $b^i = A^T(y - Ax^i)$ and $a = x^i + H(b^i)$.

Using the triangle inequality and the tail-approximation property (since $x \in \mathcal{M}$, the tail oracle gives $\|a - T(a)\|_2 \le c_T\,\|x - a\|_2$),

$$\begin{aligned}
\|x - x^{i+1}\|_2 &= \|x - T(a)\|_2 \\
&\le \|x - a\|_2 + \|a - T(a)\|_2 \\
&\le (1 + c_T)\,\|x - a\|_2 \\
&= (1 + c_T)\,\|x - x^i - H(b^i)\|_2 \\
&= (1 + c_T)\,\|r^i - H(A^T A r^i + A^T e)\|_2.
\end{aligned}$$

Lemma

Let $\Omega = \operatorname{supp}(r^i)$ and $\Gamma = \operatorname{supp}(H(b^i))$. Then

$$\|r^i_{\Gamma^c}\|_2 \le \sqrt{1 - \alpha_0^2}\,\|r^i\|_2 + \left(\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1 - \alpha_0^2}}\right)\|e\|_2,$$

where $\alpha_0 = c_H(1 - \delta) - \delta$ and $\beta_0 = (1 + c_H)\sqrt{1 + \delta}$.

Lemma Proof: Lower Bound on $\|b^i_\Gamma\|_2$

$$b^i = A^T(y - Ax^i) = A^T A r^i + A^T e$$

Using the head-approximation property and the triangle inequality,

$$\begin{aligned}
\|b^i_\Gamma\|_2 &= \|A_\Gamma^T A r^i + A_\Gamma^T e\|_2 \\
&\ge c_H\,\|A_\Omega^T A r^i + A_\Omega^T e\|_2 \\
&\ge c_H\,\|A_\Omega^T A_\Omega r^i\|_2 - c_H\,\|A_\Omega^T e\|_2 \\
&\ge c_H(1 - \delta)\,\|r^i\|_2 - c_H\sqrt{1 + \delta}\,\|e\|_2.
\end{aligned}$$

Lemma Proof: Upper Bound on $\|b^i_\Gamma\|_2$

$$b^i = A^T(y - Ax^i) = A^T A r^i + A^T e$$

Using the triangle inequality and the restricted isometry property,

$$\begin{aligned}
\|b^i_\Gamma\|_2 &= \|A_\Gamma^T A r^i + A_\Gamma^T e\|_2 \\
&= \|A_\Gamma^T A r^i - r^i_\Gamma + r^i_\Gamma + A_\Gamma^T e\|_2 \\
&\le \|A_\Gamma^T A r^i - r^i_\Gamma\|_2 + \|r^i_\Gamma\|_2 + \|A_\Gamma^T e\|_2 \\
&\le \|A_{\Gamma \cup \Omega}^T A r^i - r^i_{\Gamma \cup \Omega}\|_2 + \|r^i_\Gamma\|_2 + \sqrt{1 + \delta}\,\|e\|_2 \\
&\le \delta\,\|r^i\|_2 + \|r^i_\Gamma\|_2 + \sqrt{1 + \delta}\,\|e\|_2.
\end{aligned}$$

Lemma Proof

Combining the lower and upper bounds on $\|b^i_\Gamma\|_2$,

$$\|r^i_\Gamma\|_2 \ge \alpha_0\,\|r^i\|_2 - \beta_0\,\|e\|_2,$$

where $\alpha_0 = c_H(1 - \delta) - \delta$ and $\beta_0 = (1 + c_H)\sqrt{1 + \delta}$.

Lemma Proof: Case 1

If $\alpha_0\,\|r^i\|_2 \le \beta_0\,\|e\|_2$, then

$$\|r^i_{\Gamma^c}\|_2 \le \frac{\beta_0}{\alpha_0}\,\|e\|_2$$

because $\|r^i\|_2 \ge \|r^i_{\Gamma^c}\|_2$.

Lemma Proof: Case 2

If $\alpha_0\,\|r^i\|_2 \ge \beta_0\,\|e\|_2$, then

$$\|r^i_\Gamma\|_2 \ge \|r^i\|_2\left(\alpha_0 - \beta_0\,\frac{\|e\|_2}{\|r^i\|_2}\right).$$

Knowing $\|r^i\|_2^2 = \|r^i_\Gamma\|_2^2 + \|r^i_{\Gamma^c}\|_2^2$,

$$\|r^i_{\Gamma^c}\|_2 \le \|r^i\|_2\,\sqrt{1 - \left(\alpha_0 - \beta_0\,\frac{\|e\|_2}{\|r^i\|_2}\right)^2}.$$

Lemma Proof: Case 2 (continued)

The $\sqrt{1 - \left(\alpha_0 - \beta_0\,\frac{\|e\|_2}{\|r^i\|_2}\right)^2}$ term can be simplified. Let

$$\omega_0 = \alpha_0 - \beta_0\,\frac{\|e\|_2}{\|r^i\|_2}.$$

Then $0 \le \omega_0 \le \alpha_0 < 1$, since $\alpha_0\,\|r^i\|_2 \ge \beta_0\,\|e\|_2$ and $\alpha_0 < 1$.

For any $0 < \omega < 1$,

$$\sqrt{1 - \omega_0^2} \le \frac{1}{\sqrt{1 - \omega^2}} - \frac{\omega}{\sqrt{1 - \omega^2}}\,\omega_0.$$
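For completeness (a step the slides use without proof), this is the tangent-line bound for the concave function $f(t) = \sqrt{1 - t^2}$ at $t = \omega$:

$$\sqrt{1 - \omega_0^2} \le \sqrt{1 - \omega^2} + f'(\omega)\,(\omega_0 - \omega) = \sqrt{1 - \omega^2} - \frac{\omega\,(\omega_0 - \omega)}{\sqrt{1 - \omega^2}} = \frac{1}{\sqrt{1 - \omega^2}} - \frac{\omega}{\sqrt{1 - \omega^2}}\,\omega_0.$$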

Lemma Proof: Case 2 (continued)

$$\begin{aligned}
\|r^i_{\Gamma^c}\|_2 &\le \|r^i\|_2\left[\frac{1}{\sqrt{1 - \omega^2}} - \frac{\omega}{\sqrt{1 - \omega^2}}\left(\alpha_0 - \beta_0\,\frac{\|e\|_2}{\|r^i\|_2}\right)\right] \\
&= \frac{1 - \omega\alpha_0}{\sqrt{1 - \omega^2}}\,\|r^i\|_2 + \frac{\omega\beta_0}{\sqrt{1 - \omega^2}}\,\|e\|_2.
\end{aligned}$$

The factor $\frac{1 - \omega\alpha_0}{\sqrt{1 - \omega^2}}$ determines the convergence rate, and it is minimized at $\omega = \alpha_0$. Substituting gives

$$\|r^i_{\Gamma^c}\|_2 \le \sqrt{1 - \alpha_0^2}\,\|r^i\|_2 + \frac{\alpha_0\beta_0}{\sqrt{1 - \alpha_0^2}}\,\|e\|_2.$$

Lemma Proof (continued)

Combining the results from Cases 1 and 2,

$$\|r^i_{\Gamma^c}\|_2 \le \sqrt{1 - \alpha_0^2}\,\|r^i\|_2 + \left(\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1 - \alpha_0^2}}\right)\|e\|_2.$$

Proof of Geometric Convergence (continued)

The term $\|r^i - H(A^T A r^i + A^T e)\|_2$ can now be bounded.

Let $\Omega = \operatorname{supp}(r^i)$ and $\Gamma = \operatorname{supp}(H(A^T A r^i + A^T e))$.

$$\begin{aligned}
\|r^i - H(A^T A r^i + A^T e)\|_2 &= \|r^i_\Gamma + r^i_{\Gamma^c} - (A_\Gamma^T A r^i + A_\Gamma^T e)\|_2 \\
&\le \|A_\Gamma^T A r^i - r^i_\Gamma\|_2 + \|r^i_{\Gamma^c}\|_2 + \|A_\Gamma^T e\|_2 \\
&\le \|A_{\Gamma \cup \Omega}^T A r^i - r^i_{\Gamma \cup \Omega}\|_2 + \|r^i_{\Gamma^c}\|_2 + \|A_\Gamma^T e\|_2 \\
&\le \delta\,\|r^i\|_2 + \sqrt{1 - \alpha_0^2}\,\|r^i\|_2 + \left(\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1 - \alpha_0^2}} + \sqrt{1 + \delta}\right)\|e\|_2.
\end{aligned}$$

Proof of Geometric Convergence (continued)

$$\|x - x^{i+1}\|_2 \le (1 + c_T)\,\|r^i - H(A^T A r^i + A^T e)\|_2$$

Combining the model-RIP, the lemma, and the bound on $\|r^i - H(A^T A r^i + A^T e)\|_2$,

$$\|x - x^{i+1}\|_2 \le \alpha\,\|x - x^i\|_2 + \beta\,\|e\|_2,$$

where $\alpha = (1 + c_T)\left(\delta + \sqrt{1 - \alpha_0^2}\right)$ and $\beta = (1 + c_T)\left(\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1 - \alpha_0^2}} + \sqrt{1 + \delta}\right)$.

This means AM-IHT exhibits robust signal recovery.

Geometric Convergence in the Noiseless Case

In the noiseless case ($e = 0$),

$$\alpha = (1 + c_T)\left(\delta + \sqrt{1 - \left(c_H(1 - \delta) - \delta\right)^2}\right).$$

For convergence we need $\alpha < 1$. Assuming $\delta$ is very small, AM-IHT converges when

$$\alpha \approx (1 + c_T)\,\sqrt{1 - c_H^2} < 1,$$

that is,

$$c_H^2 > 1 - \frac{1}{(1 + c_T)^2}.$$
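As a quick check of this condition (a worked example, not from the slides): for the smallest admissible tail constant $c_T = 1$, the requirement becomes $c_H^2 > 1 - \frac{1}{4} = \frac{3}{4}$, i.e. $c_H > \sqrt{3}/2 \approx 0.87$, which matches the bound on $c_H$ quoted in the boosting discussion below.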

AM-IHT achieves geometric convergence comparable to other model-based compressive sensing methods.


Approximate Model Compressive Sampling Matching Pursuit

Approximate Model-CoSaMP

AM-CoSaMP adapts compressive sampling matching pursuit (CoSaMP) to model-based compressive sensing with approximate projection oracles, recovering signals from structured sparsity models.


Assumptions on Algorithm

1. $x \in \mathbb{R}^n$ and $x \in \mathcal{M}$

2. $y = Ax + e$ for $e \in \mathbb{R}^m$

3. $T$ is a $(c_T, \mathbb{M}, \mathbb{M}_T, 2)$-tail-approximation oracle.

4. $H$ is a $(c_H, \mathbb{M}_T \oplus \mathbb{M}, \mathbb{M}_H, 2)$-head-approximation oracle.

5. $A$ has the $(\delta, \mathbb{M} \oplus \mathbb{M}_T \oplus \mathbb{M}_H)$-model-RIP.
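A minimal AM-CoSaMP sketch, reconstructed from the steps used in the convergence proof below ($b^i = A^T(y - Ax^i)$, $\Gamma = \operatorname{supp}(H(b^i))$, $S = \Gamma \cup \operatorname{supp}(x^i)$, a least-squares solve on $S$, then the tail projection). The oracles `head_oracle` and `tail_oracle` are user-supplied, and the exact bookkeeping may differ from the original pseudocode:

```python
import numpy as np

def project(v, support):
    """Keep the entries of v on `support`, zero out the rest."""
    out = np.zeros_like(v)
    out[list(support)] = v[list(support)]
    return out

def am_cosamp(y, A, head_oracle, tail_oracle, num_iters=50):
    """Approximate Model-CoSaMP, following the steps used in the proof below."""
    n = A.shape[1]
    x = np.zeros(n)
    for _ in range(num_iters):
        b = A.T @ (y - A @ x)                        # proxy b^i
        gamma = set(int(i) for i in head_oracle(b))  # Gamma = supp(H(b^i))
        S = sorted(gamma | set(np.flatnonzero(x)))   # S = Gamma U supp(x^i)
        sol, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)  # z_S = argmin ||y - A_S z_S||_2
        z = np.zeros(n)
        z[S] = sol
        x = project(z, tail_oracle(z))               # x^{i+1} = T(z)
    return x
```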

Geometric convergence of AM-CoSaMP

Let $r^i = x - x^i$, where $x^i$ is the signal estimate at iteration $i$. Then

$$\|r^{i+1}\|_2 \le \alpha\,\|r^i\|_2 + \beta\,\|e\|_2,$$

where

$$\alpha = (1 + c_T)\sqrt{\frac{1+\delta}{1-\delta}}\,\sqrt{1 - \alpha_0^2}, \qquad \beta = (1 + c_T)\left[\sqrt{\frac{1+\delta}{1-\delta}}\left(\frac{\beta_0}{\alpha_0} + \frac{\alpha_0\beta_0}{\sqrt{1 - \alpha_0^2}}\right) + \frac{2}{\sqrt{1-\delta}}\right],$$

$$\alpha_0 = c_H(1 - \delta) - \delta, \qquad \beta_0 = (1 + c_H)\sqrt{1 + \delta}.$$

Geometric Convergence Proof

Using the triangle inequality, the tail-approximation property, and the model-RIP,

$$\begin{aligned}
\|r^{i+1}\|_2 &= \|x - x^{i+1}\|_2 \\
&\le \|x^{i+1} - z\|_2 + \|x - z\|_2 \\
&\le c_T\,\|x - z\|_2 + \|x - z\|_2 \\
&= (1 + c_T)\,\|x - z\|_2 \\
&\le (1 + c_T)\,\frac{\|A(x - z)\|_2}{\sqrt{1 - \delta}} \\
&= (1 + c_T)\,\frac{\|Ax - Az\|_2}{\sqrt{1 - \delta}}.
\end{aligned}$$

Geometric Convergence Proof (continued)

Because $Ax = y - e$ and $Az = A_S z_S$,

$$\begin{aligned}
\|r^{i+1}\|_2 &\le (1 + c_T)\left(\frac{\|y - A_S z_S\|_2}{\sqrt{1 - \delta}} + \frac{\|e\|_2}{\sqrt{1 - \delta}}\right) \\
&\le (1 + c_T)\left(\frac{\|y - A_S x_S\|_2}{\sqrt{1 - \delta}} + \frac{\|e\|_2}{\sqrt{1 - \delta}}\right),
\end{aligned}$$

where the last step uses the least-squares optimality of $z_S$.

Geometric Convergence Proof (continued)

Since $y = Ax + e = A_S x_S + A_{S^c} x_{S^c} + e$,

$$\begin{aligned}
\|r^{i+1}\|_2 &\le (1 + c_T)\,\frac{\|A_{S^c} x_{S^c}\|_2}{\sqrt{1 - \delta}} + (1 + c_T)\,\frac{2\,\|e\|_2}{\sqrt{1 - \delta}} \\
&\le (1 + c_T)\sqrt{\frac{1+\delta}{1-\delta}}\,\|x_{S^c}\|_2 + (1 + c_T)\,\frac{2\,\|e\|_2}{\sqrt{1 - \delta}} \\
&= (1 + c_T)\sqrt{\frac{1+\delta}{1-\delta}}\,\|(x - x^i)_{S^c}\|_2 + (1 + c_T)\,\frac{2\,\|e\|_2}{\sqrt{1 - \delta}} \\
&\le (1 + c_T)\sqrt{\frac{1+\delta}{1-\delta}}\,\|r^i_{\Gamma^c}\|_2 + (1 + c_T)\,\frac{2\,\|e\|_2}{\sqrt{1 - \delta}},
\end{aligned}$$

using $\operatorname{supp}(x^i) \subseteq S$ in the third step and $S^c \subseteq \Gamma^c$ in the last step.

Geometric Convergence Proof (continued)

Applying the lemma bound on $\|r^i_{\Gamma^c}\|_2$ then gives

$$\|r^{i+1}\|_2 \le \alpha\,\|r^i\|_2 + \beta\,\|e\|_2,$$

with $\alpha$, $\beta$, $\alpha_0$, and $\beta_0$ as stated above.

This means AM-CoSaMP exhibits robust signal recovery.

Geometric Convergence in the Noiseless Case

Assume $e = 0$ and that $\delta$ is very small. For convergence we need $\alpha < 1$. In order for AM-CoSaMP to converge,

$$\alpha \approx (1 + c_T)\,\sqrt{1 - c_H^2} < 1,$$

that is,

$$c_H^2 > 1 - \frac{1}{(1 + c_T)^2}.$$

This is identical to the convergence condition for AM-IHT.


Improved Recovery via Boosting

Why do these algorithms need to be boosted?

By definition, $c_T \ge 1$, and thus $c_H \ge \sqrt{3}/2$. If $c_T$ is very large, the tail-approximation oracle only gives a crude approximation, which forces $c_H$ to be very close to $1$ (a very accurate head oracle). This can constrain the choice of approximation algorithms.

Boosting Algorithm

Theorem

Let $H$ be a $(c_H, \mathbb{M}, \mathbb{M}_H, p)$-head-approximation algorithm with $0 < c_H \le 1$ and $p \ge 1$. Then BoostHead$(x, H, t)$ is a

$$\left(\left(1 - (1 - c_H^p)^t\right)^{1/p},\ \mathbb{M},\ \mathbb{M}_H,\ p\right)$$

head-approximation algorithm. BoostHead runs in time $O(t\,T_H)$, where $T_H$ is the time complexity of $H$.
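The theorem suggests the natural boosting loop: run $H$ repeatedly on the part of the signal not yet covered and take the union of the returned supports, so each pass captures a $c_H^p$ fraction of the remaining head mass. A minimal sketch under that reading (the bookkeeping of the original BoostHead pseudocode may differ; `head_oracle` is the user-supplied $H$):

```python
import numpy as np

def boost_head(x, head_oracle, t):
    """BoostHead sketch: union of supports from t runs of the head oracle H,
    each applied to the part of the signal not yet covered."""
    residual = np.array(x, dtype=float)
    support = set()
    for _ in range(t):
        omega = head_oracle(residual)              # one call to H
        support |= set(int(i) for i in omega)      # take the union of supports
        residual[list(omega)] = 0.0                # remove the covered entries
    return sorted(support)
```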

Main Result

Let $T$ and $H$ be approximate projection oracles with $c_T \ge 1$ and $0 < c_H < 1$. Define

$$\gamma = \frac{\sqrt{1 - \left(\frac{1}{1 + c_T} - \delta\right)^2} + \delta}{1 - \delta}, \qquad t = \frac{\log(1 - \gamma^2)}{\log(1 - c_H^2)} + 1.$$

If AM-IHT is run with $T$ and BoostHead$(x, H, t)$ as its projection oracles, the returned signal estimate $\hat{x}$ satisfies

$$\|x - \hat{x}\|_2 \le C\,\|e\|_2.$$
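To connect this with the earlier sketches, the boosted head oracle simply replaces `head_oracle` in `am_iht` (all names below refer to the illustrative sketches above, not to the source):

```python
# Hypothetical wiring of the earlier sketches: AM-IHT with a boosted head oracle.
x_hat = am_iht(
    y, A,
    head_oracle=lambda v: boost_head(v, head_oracle, t),  # BoostHead(., H, t)
    tail_oracle=tail_oracle,
)
```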

Summary

Approximate Model-Based Algorithms

- AM-IHT

- AM-CoSaMP

Relaxation of Requirements via Boosting

Other Research

Using model-based CoSaMP, the number of LIDAR sensors needed can be reduced by as much as 85% [2].

References

[1] C. Hegde, P. Indyk, and L. Schmidt. Approximation algorithms for model-based compressive sensing. IEEE Transactions on Information Theory, 61(9):5129–5147, 2015.

[2] A. Kadambi and P. T. Boufounos. Coded aperture compressive 3-D LIDAR. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 1166–1170, April 2015.
