
Exponential decay of reconstruction error from binary measurements of sparse signals

Deanna Needell

Joint work with R. Baraniuk, S. Foucart, Y. Plan, and M. Wootters


Outline

- Introduction

- Mathematical Formulation & Methods

- Practical CS

- Other notions of sparsity

- Heavy quantization

- Adaptive sampling


The mathematical problem

1. Signal of interest f ∈ C^d (= C^{N×N})

2. Measurement operator A : C^d → C^m (m ≪ d)

3. Measurements y = Af + ξ

4. Problem: Reconstruct signal f from measurements y
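For concreteness, a minimal numpy sketch of this setup, working over the reals for simplicity (dimensions and noise level are arbitrary choices, not from the talk):

    import numpy as np

    rng = np.random.default_rng(0)
    d, m, s = 1000, 200, 10                 # ambient dimension, measurements, sparsity

    f = np.zeros(d)                         # s-sparse signal of interest
    f[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)

    A = rng.standard_normal((m, d)) / np.sqrt(m)   # Gaussian measurement operator
    xi = 0.01 * rng.standard_normal(m)             # measurement noise
    y = A @ f + xi                                  # measurements y = A f + ξ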


Sparsity

Measurements y = Af + ξ.

Assume f is sparse:

- In the coordinate basis: ‖f‖_0 := |supp(f)| ≤ s ≪ d

- In an orthonormal basis: f = Bx where ‖x‖_0 ≤ s ≪ d

In practice, we encounter compressible signals; f_s denotes the best s-sparse approximation to f.


Many applications...

- Radar, error correction

- Computational biology, geophysical data analysis

- Data mining, classification

- Neuroscience

- Imaging

- Sparse channel estimation, sparse initial state estimation

- Topology identification of interconnected systems

- ...


Sparsity...

Sparsity in the coordinate basis: f = x


Reconstructing the signal f from measurements y

- ℓ1-minimization [Candès-Romberg-Tao]

Let A satisfy the Restricted Isometry Property and set

  f̂ = argmin_g ‖g‖_1 such that ‖Ag − y‖_2 ≤ ε,

where ‖ξ‖_2 ≤ ε. Then we can stably recover the signal f:

  ‖f̂ − f‖_2 ≲ ε + ‖x − x_s‖_1/√s.

This error bound is optimal.
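A minimal sketch of this program, assuming the cvxpy package (an assumption; the talk names no software):

    import cvxpy as cp

    def l1_recover(A, y, eps):
        # Minimize ||g||_1 subject to ||A g - y||_2 <= eps
        g = cp.Variable(A.shape[1])
        cp.Problem(cp.Minimize(cp.norm1(g)),
                   [cp.norm2(A @ g - y) <= eps]).solve()
        return g.value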


Restricted Isometry Property

- A satisfies the Restricted Isometry Property (RIP) when there is δ < c such that

  (1 − δ)‖f‖_2 ≤ ‖Af‖_2 ≤ (1 + δ)‖f‖_2 whenever ‖f‖_0 ≤ s.

- m × d Gaussian or Bernoulli measurement matrices satisfy the RIP with high probability when m ≳ s log d (an empirical check follows this list).

- Random Fourier and others with a fast multiply have a similar property: m ≳ s log^4 d.
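As a sanity check (not a proof; a true RIP verification is combinatorial), one can probe the near-isometry of a normalized Gaussian matrix on random sparse vectors:

    import numpy as np

    rng = np.random.default_rng(1)
    d, m, s = 500, 120, 8
    A = rng.standard_normal((m, d)) / np.sqrt(m)   # normalized Gaussian matrix

    ratios = []
    for _ in range(1000):
        x = np.zeros(d)
        x[rng.choice(d, size=s, replace=False)] = rng.standard_normal(s)
        x /= np.linalg.norm(x)
        ratios.append(np.linalg.norm(A @ x))       # should stay close to 1
    print(min(ratios), max(ratios))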


Other recovery methods

Greedy Algorithms

- If A satisfies the RIP, then A*A is “close” to the identity on sparse vectors

- Use proxy p = A*y = A*Ax ≈ x

- Threshold to maintain sparsity: x̂ = H_s(p)

- Repeat

- (Iterative Hard Thresholding; a sketch follows)
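A minimal numpy sketch of iterative hard thresholding along these lines (unit step size and the iteration count are arbitrary choices):

    import numpy as np

    def hard_threshold(v, s):
        # H_s: keep the s largest-magnitude entries, zero out the rest
        out = np.zeros_like(v)
        keep = np.argsort(np.abs(v))[-s:]
        out[keep] = v[keep]
        return out

    def iht(A, y, s, iters=100):
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = hard_threshold(x + A.T @ (y - A @ x), s)  # proxy step, then H_s
        return x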


The One-Bit Compressive Sensing Problem

- Standard CS: vectors x ∈ R^n with ‖x‖_0 ≤ s acquired via nonadaptive linear measurements ⟨a_i, x⟩, i = 1, ..., m.

- In practice, measurements need to be quantized.

- One-Bit CS: extreme quantization as y = sign(Ax), i.e.,

  y_i = sign⟨a_i, x⟩, i = 1, ..., m.

- Goal: find reconstruction maps ∆ : {±1}^m → R^n such that, assuming the ℓ2-normalization of x (why? the signs are invariant under positive scaling of x, so the magnitude is unrecoverable),

  ‖x − ∆(y)‖ ≤ γ

  provided the oversampling factor satisfies

  λ := m/(s ln(n/s)) ≥ f(γ)

  for f slowly increasing as γ decreases to zero; equivalently,

  ‖x − ∆(y)‖ ≤ g(λ)

  for g rapidly decreasing to zero as λ increases.
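A minimal sketch of generating one-bit measurements and the oversampling factor λ (dimensions arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)
    n, m, s = 200, 4000, 5
    x = np.zeros(n)
    x[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
    x /= np.linalg.norm(x)          # l2-normalize: the signs are scale-invariant

    A = rng.standard_normal((m, n))
    y = np.sign(A @ x)              # one-bit measurements y = sign(Ax)
    lam = m / (s * np.log(n / s))   # oversampling factor λ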



A visual



Existing Theoretical Results

- Convex optimization algorithms [Plan–Vershynin 13a, 13b].

- Uniform, nonadaptive, no quantization error: if A ∈ R^{m×n} is a Gaussian matrix, then w/hp

  ‖x − ∆_LP(y)/‖∆_LP(y)‖_2‖_2 ≲ λ^{−1/5} whenever ‖x‖_0 ≤ s, ‖x‖_2 = 1.

- Nonuniform, nonadaptive, random quantization error: fix x ∈ R^n with ‖x‖_0 ≤ s, ‖x‖_2 = 1. If A ∈ R^{m×n} is a Gaussian matrix, then w/hp

  ‖x − ∆_SOCP(y)‖_2 ≲ λ^{−1/4}.

- Uniform, nonadaptive, adversarial quantization error: if A ∈ R^{m×n} is a Gaussian matrix, then w/hp

  ‖x − ∆_SOCP(y)‖_2 ≲ λ^{−1/12} whenever ‖x‖_0 ≤ s, ‖x‖_2 = 1.



Limitations of the Framework

- Power decay is optimal, since

  ‖x − ∆_opt(y)‖_2 ≳ λ^{−1}

  even if supp(x) is known in advance [Goyal–Vetterli–Thao 98].

- Geometric intuition: [Figure: hyperplane tessellation of the sphere S^{n−1} around x; animation at http://dsp.rice.edu/1bitCS/choppyanimated.gif]

- Remedy: adaptive choice of dithers τ_1, ..., τ_m in

  y_i = sign(⟨a_i, x⟩ − τ_i), i = 1, ..., m.
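A minimal sketch of dithered one-bit measurements (here the dithers are drawn at random as placeholders; the schemes described below choose them adaptively):

    import numpy as np

    rng = np.random.default_rng(3)
    n, m = 200, 1000
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    A = rng.standard_normal((m, n))
    tau = rng.standard_normal(m)     # dithers tau_1, ..., tau_m
    y = np.sign(A @ x - tau)         # y_i = sign(<a_i, x> - tau_i)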



Exponential Decay: General Strategy

- Rely on an order-one quantization/recovery scheme: for any x ∈ R^n with ‖x‖_0 ≤ s, ‖x‖_2 ≤ R, take q ≍ s ln(n/s) one-bit measurements and estimate both the direction and the magnitude of x by producing x̂ such that

  ‖x − x̂‖_2 ≤ R/4.

- Let x ∈ R^n with ‖x‖_0 ≤ s, ‖x‖_2 ≤ R. Start with x_0 = 0.

- For t = 0, 1, ..., estimate x − x_t by an order-one estimate ê_t, then set

  x_{t+1} = H_s(x_t + ê_t), so that ‖x − x_{t+1}‖_2 ≤ R/2^{t+1}

  (hard thresholding at most doubles the estimation error R·2^{−t}/4; a sketch of this loop follows).

- After T iterations, the number of measurements is m = qT, and

  ‖x − x_T‖_2 ≤ R·2^{−T} = R·2^{−m/q} = R·exp(−cλ).

- A software step is needed to compute the thresholds τ_i = ⟨a_i, x_t⟩.
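A sketch of this outer loop, with the order-one estimator left abstract (order_one_estimate is a placeholder for either scheme described next; it is assumed to spend q fresh one-bit measurements and return an estimate of x − x_t with error at most a quarter of the current radius):

    import numpy as np

    def exp_decay_recovery(order_one_estimate, s, R, T, n):
        x_t = np.zeros(n)
        radius = R
        for _ in range(T):
            e_hat = order_one_estimate(x_t, radius)  # ||(x - x_t) - e_hat||_2 <= radius/4
            x_t = hard_threshold(x_t + e_hat, s)     # H_s from the IHT sketch above
            radius /= 2                              # error halves each round
        return x_t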



Order-One Scheme Based on Convex Optimization

- Measurement vectors a_1, ..., a_q: independent N(0, I_n).

- Dithers τ_1, ..., τ_q: independent N(0, R^2).

- x̂ = argmin ‖z‖_1 subject to ‖z‖_2 ≤ R, y_i(⟨a_i, z⟩ − τ_i) ≥ 0 (sketched below).

- If q ≥ c·δ^{−4}·s ln(n/s), then w/hp

  ‖x − x̂‖ ≤ δR whenever ‖x‖_0 ≤ s, ‖x‖_2 ≤ R.

- Pros: dithers are nonadaptive.

- Cons: slow, post-quantization error not handled.
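A minimal cvxpy sketch of this order-one program (cvxpy is an assumption; y here collects the q dithered one-bit measurements):

    import cvxpy as cp

    def order_one_convex(A, tau, y, R):
        # minimize ||z||_1  s.t.  ||z||_2 <= R  and  y_i(<a_i, z> - tau_i) >= 0
        z = cp.Variable(A.shape[1])
        cp.Problem(cp.Minimize(cp.norm1(z)),
                   [cp.norm2(z) <= R,
                    cp.multiply(y, A @ z - tau) >= 0]).solve()
        return z.value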



Order-One Scheme Based on Hard Thresholding

- Measurement vectors a_1, ..., a_q: independent N(0, I_n).

- Use half of them to estimate the direction of x as

  u = H′_s(A* sign(Ax))

  (a sketch follows this list).

- Construct sparse vectors v, w (supp(v) ⊂ supp(u)) according to

  [Figure: plane geometry relating x, 2Ru, 2Rv, and w]

- Use the other half to estimate the direction of x − w, applying hard thresholding again.

- Plane geometry to estimate the direction and magnitude of x.

- Cons: dithers ⟨a_i, w⟩ are adaptive.

- Pros: deterministic, fast, handles pre/post-quantization errors.
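A minimal numpy sketch of the direction estimate u = H′_s(A* sign(Ax)), reading H′_s as hard thresholding followed by ℓ2-normalization (that reading of the notation is an assumption):

    import numpy as np

    def direction_estimate(A, y_bits, s):
        p = A.T @ y_bits                    # back-project the signs: A* sign(Ax)
        u = np.zeros_like(p)
        keep = np.argsort(np.abs(p))[-s:]   # keep the s largest-magnitude entries
        u[keep] = p[keep]
        return u / np.linalg.norm(u)        # unit-norm direction estimate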



Measurement Errors

- Pre-quantization error e ∈ R^m in

  y_i = sign(⟨a_i, x⟩ − τ_i + e_i).

- If ‖e‖_∞ ≤ εR·2^{−T} (or ‖e_t‖_2 ≤ ε√q·‖x − x_t‖_2 throughout), then

  ‖x − x_T‖_2 ≤ R·2^{−T} = R·exp(−cλ)

  for the convex-optimization and hard-thresholding schemes.

- Post-quantization error f ∈ {±1}^m in

  y_i = f_i·sign(⟨a_i, x⟩ − τ_i).

- If card({i : f_i^t = −1}) ≤ ηq throughout, then

  ‖x − x_T‖_2 ≤ R·2^{−T} = R·exp(−cλ)

  for the hard-thresholding scheme (both error types are simulated in the sketch below).
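A minimal sketch of generating both error types (the noise level and flip rate mirror the numerical illustration that follows):

    import numpy as np

    rng = np.random.default_rng(4)
    n, m = 100, 1000
    x = rng.standard_normal(n)
    x /= np.linalg.norm(x)
    A = rng.standard_normal((m, n))
    tau = rng.standard_normal(m)

    e = 0.1 * rng.standard_normal(m)               # pre-quantization noise
    flips = np.where(rng.random(m) < 0.05, -1, 1)  # 5% post-quantization bit flips
    y = flips * np.sign(A @ x - tau + e)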



Numerical Illustration

[Figure: relative error ‖x − x*‖/‖x‖ versus iteration t.
Left panel: 1000 hard-thresholding-based tests, n = 100, s = 15, m = 100000, T = 10 (4 minutes); curves for perfect one-bit measurements, prequantization noise (st. dev. = 0.1), and bit flips (5%).
Right panel: 1000 second-order-cone-programming-based tests, n = 100, s = 10, m = 20000, T = 5 (11 hours); curves for perfect one-bit measurements and prequantization noise (st. dev. = 0.005).]


Numerical Illustration, ctd

[Figure: relative error ‖x − x*‖/‖x‖ (log scale) versus m/(s log(n/s)), with curves for s = 10, 15, 20, 25.
Left panel: 500 hard-thresholding-based tests, n = 100, m/s = 2000:2000:20000, T = 6 (9 hours).
Right panel: 100 second-order-cone-programming-based tests, n = 100, m/s = 20:20:200, T = 5 (11 hours).]


Ingredients for the Proofs

- Let A ∈ R^{q×n} with independent N(0, 1) entries.

- Sign Product Embedding Property: if q ≥ C·δ^{−6}·s ln(n/s), then w/hp

  |(√(π/2)/q)·⟨Aw, sign(Ax)⟩ − ⟨w, x⟩| ≤ δ

  for all w, x ∈ R^n with ‖w‖_0, ‖x‖_0 ≤ s and ‖w‖_2 = ‖x‖_2 = 1 (a Monte Carlo check follows this list).

- Simultaneous (ℓ2, ℓ1)-Quotient Property: w/hp, every e ∈ R^q can be written as e = Au with

  ‖u‖_2 ≤ d·‖e‖_2/√q and ‖u‖_1 ≤ d′·√(s*)·‖e‖_2/√q,

  where s* = q/ln(n/q).

- Restricted Isometry Property: if q ≥ C·δ^{−2}·s ln(n/s), then w/hp

  |(1/q)·‖Ax‖_2^2 − ‖x‖_2^2| ≤ δ‖x‖_2^2

  for all x ∈ R^n with ‖x‖_0 ≤ s.
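A quick Monte Carlo sanity check of the sign product embedding (in expectation, (√(π/2)/q)·⟨Aw, sign(Ax)⟩ equals ⟨w, x⟩ for unit vectors; dimensions arbitrary):

    import numpy as np

    rng = np.random.default_rng(5)
    n, q, s = 100, 20000, 5

    def sparse_unit(n, s):
        v = np.zeros(n)
        v[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
        return v / np.linalg.norm(v)

    w, x = sparse_unit(n, s), sparse_unit(n, s)
    A = rng.standard_normal((q, n))
    lhs = np.sqrt(np.pi / 2) / q * (A @ w) @ np.sign(A @ x)
    print(lhs, w @ x)    # the two values should be close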



Ingredients for the Proofs, ctd

- Random hyperplane tessellations of √s·B_1^n ∩ S^{n−1}:

  - a_1, ..., a_q ∈ R^n independent N(0, I_n).
  - If q ≥ C·δ^{−4}·s ln(n/s), then w/hp all x, x′ ∈ √s·B_1^n ∩ S^{n−1} with sign⟨a_i, x⟩ = sign⟨a_i, x′⟩, i = 1, ..., q, satisfy ‖x − x′‖_2 ≤ δ.

- Random hyperplane tessellations of √s·B_1^n ∩ B_2^n:

  - a_1, ..., a_q ∈ R^n independent N(0, I_n),
  - τ_1, ..., τ_q ∈ R independent N(0, 1),
  - apply the previous result to [a_i, −τ_i], [x, 1], [x′, 1].
  - If q ≥ C·δ^{−4}·s ln(n/s), then w/hp all x, x′ ∈ √s·B_1^n ∩ B_2^n with sign(⟨a_i, x⟩ − τ_i) = sign(⟨a_i, x′⟩ − τ_i), i = 1, ..., q, satisfy ‖x − x′‖_2 ≤ δ.



Thank you!

E-mail: [email protected]

Web: www.cmc.edu/pages/faculty/DNeedell

References:

- R. Baraniuk, S. Foucart, D. Needell, Y. Plan, and M. Wootters. Exponential decay of reconstruction error from binary measurements of sparse signals, submitted.

- E. J. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.

- E. J. Candès, Y. C. Eldar, D. Needell, and P. Randall. Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis, 31(1):59–73, 2010.

- M. A. Davenport, D. Needell, and M. B. Wakin. Signal space CoSaMP for sparse recovery with redundant dictionaries, submitted.

- D. Needell and R. Ward. Stable image reconstruction using total variation minimization. J. Fourier Analysis and Applications, to appear.

- Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming. Comm. Pure Appl. Math., to appear.