

Blind Deconvolution from Multiple Sparse Inputs

Liming Wang, Member, IEEE, and Yuejie Chi, Member, IEEE

Abstract—Blind deconvolution is an inverse problem in which both the input signal and the convolution kernel are unknown. We propose a convex algorithm based on $\ell_1$-minimization to solve the blind deconvolution problem, given multiple observations from sparse input signals. The proposed method is related to other problems such as blind calibration and finding sparse vectors in a subspace. Sufficient conditions for exact and stable recovery using the proposed method are developed, which shed light on the sample complexity. Finally, numerical examples are provided to showcase the performance of the proposed method.

Index Terms—blind deconvolution, blind calibration, convex programming, dictionary learning, sparsity

I. INTRODUCTION

Blind deconvolution is a classical inverse problem that ubiquitously appears in various areas of signal processing [1], communications [2] and array processing [3]. For such a problem, the observation is often posed as a convolution of the signal with some kernel, or filter. In addition, neither the convolution kernel nor the signal is known explicitly a priori. The goal of blind deconvolution is to recover both the signal and the kernel from the observation.

Motivated by multi-channel blind deconvolution [4] and the joint recovery problem of signals and sensor parameters in array signal processing [3], the observation can be formulated as $\mathbf{y} = \mathbf{g} \circledast \mathbf{x} \in \mathbb{R}^n$, where $\circledast$ denotes the circular convolution, $\mathbf{g} \in \mathbb{R}^n$ denotes the kernel and $\mathbf{x} \in \mathbb{R}^n$ denotes the input signal. Recently, this problem has drawn much research attention, and algorithms with provable performance guarantees have been developed by assuming both $\mathbf{g}$ and $\mathbf{x}$ satisfy certain sparsity or subspace constraints [5]–[8]. However, these assumptions may be difficult to verify in practice, particularly for the kernel.

When multiple input signals are present, the observations are given as $\mathbf{y}_i = \mathbf{g} \circledast \mathbf{x}_i$, for $1 \le i \le p$, where $p$ is the number of observations. It is known [9] that, as long as $p$ is sufficiently large, the problem is identifiable and can be solved via a least-squares approach by exploiting the cross-correlations of the observations under some mild conditions on the kernel. In this letter, we reconsider this problem by assuming the input signals $\mathbf{x}_i$'s are sparse, which is motivated by applications of compressed sensing [10]. Identifiability under this setup was recently studied in [11]. It is natural to seek a kernel $\mathbf{g}$ such that the inputs are made as sparse as possible; however, such a direct formulation is not computationally feasible.

L. Wang is with the Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210, USA. Email: [email protected].

Y. Chi is with the Department of Electrical and Computer Engineering and the Department of Biomedical Informatics, The Ohio State University, Columbus, OH 43210, USA. Email: [email protected].

This work is supported in part by AFOSR under the grant FA9550-15-1-0205 and by ONR under the grant N00014-15-1-2387.

Alternatively, under the mild assumption that the kernel is invertible, we propose a convex optimization algorithm based on $\ell_1$-minimization, which can be solved efficiently. Sufficient conditions for exact and stable recovery using the proposed method are developed under a Bernoulli-Subgaussian model of the sparse inputs, which shed light on the sample complexity. In contrast, the alternating minimization algorithm in [12] is heuristic and lacks performance analysis.

Our approach is mostly inspired by [13], [14] on exact dictionary learning with sparse input signals, where our problem can be regarded as a special case of learning an invertible circulant dictionary. Furthermore, the problem is also related to finding sparse vectors in a subspace [15], [16] and blind calibration [17], [18], which will be detailed later.

II. PROBLEM FORMULATION

Let $\mathbf{x}_i \in \mathbb{R}^n$ denote the $i$-th sparse input signal; the $i$-th observation $\mathbf{y}_i \in \mathbb{R}^n$ can then be expressed as
$$\mathbf{y}_i = \mathbf{g} \circledast \mathbf{x}_i = C(\mathbf{g})\mathbf{x}_i, \quad i = 1, \ldots, p. \tag{1}$$
The common kernel or filter $C(\mathbf{g}) \in \mathbb{R}^{n \times n}$ is the circulant matrix spanned by $\mathbf{g} = [g_1, \ldots, g_n]^T$, given as
$$C(\mathbf{g}) = \begin{bmatrix} g_1 & g_n & \cdots & g_2 \\ g_2 & g_1 & \cdots & g_3 \\ \vdots & \vdots & \ddots & \vdots \\ g_n & g_{n-1} & \cdots & g_1 \end{bmatrix}. \tag{2}$$
The filter $\mathbf{g}$ is called invertible if $C(\mathbf{g})$ is invertible. Denoting $\mathbf{Y} = [\mathbf{y}_1, \ldots, \mathbf{y}_p] \in \mathbb{R}^{n \times p}$ and $\mathbf{X} = [\mathbf{x}_1, \ldots, \mathbf{x}_p] \in \mathbb{R}^{n \times p}$, we can rewrite (1) as
$$\mathbf{Y} = C(\mathbf{g})\mathbf{X}. \tag{3}$$
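To make the model concrete, the following NumPy sketch (ours, not part of the letter) builds $C(\mathbf{g})$ and checks that multiplying by it implements circular convolution; `scipy.linalg.circulant` uses the same first-column convention as (2).

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(0)
n, p = 10, 5
g = rng.standard_normal(n)           # kernel g (sparsity of X is irrelevant here)
X = rng.standard_normal((n, p))      # stacked input signals

Cg = circulant(g)                    # the circulant matrix C(g) in (2)

# Column-wise circular convolution g ⊛ x_i computed via the DFT.
Y = np.real(np.fft.ifft(np.fft.fft(g)[:, None] * np.fft.fft(X, axis=0), axis=0))

# The matrix form (3) and the convolution form (1) agree.
assert np.allclose(Cg @ X, Y)
```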

Furthermore, the sparse signal matrix $\mathbf{X}$ is assumed to follow the Bernoulli-Subgaussian model below, which is standard for modeling sparse signals.

Definition 1 (Bernoulli-Subgaussian model, [13]). $\mathbf{X}$ is said to satisfy the Bernoulli-Subgaussian model with parameter $\theta \in (0, 1)$, if $\mathbf{X} = \mathbf{\Omega} \circ \mathbf{R}$, where $\mathbf{\Omega}$ is an i.i.d. Bernoulli matrix with parameter $\theta$, and $\mathbf{R}$ is an independent random matrix with i.i.d. symmetric random variables that satisfy
$$\mu = \mathbb{E}[|R_{i,j}|] \in [1/10, 1], \quad \mathbb{E}[R_{i,j}^2] \le 1,$$
$$\text{and} \quad \mathbb{P}(|R_{i,j}| > t) \le 2\exp(-t^2/2), \quad \forall t > 0.$$

We note that the sparsity of $\mathbf{X}$ is governed by the Bernoulli distribution, while the non-zero entries of $\mathbf{X}$ obey the Subgaussian distribution, yielding a very general model of the sparse signal $\mathbf{X}$.
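A Bernoulli-Gaussian matrix is one valid instance of Definition 1: for a standard Gaussian, $\mathbb{E}|R_{i,j}| = \sqrt{2/\pi} \approx 0.80 \in [1/10, 1]$, $\mathbb{E}[R_{i,j}^2] = 1$, and the stated tail bound holds. A minimal sampler (our helper, reused in the numerical section later):

```python
import numpy as np

def bernoulli_gaussian(n, p, theta, rng):
    """Draw X = Omega ∘ R per Definition 1: Omega has i.i.d. Bernoulli(theta)
    entries and R has i.i.d. N(0, 1) entries, combined entry-wise."""
    omega = rng.random((n, p)) < theta       # i.i.d. Bernoulli mask
    return omega * rng.standard_normal((n, p))
```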

Our goal is to recover both $\mathbf{g}$ and $\mathbf{X}$ from $\mathbf{Y}$. Clearly, the problem is not uniquely identifiable, due to the fact that we can always rewrite (1) as $\mathbf{y}_i = (\sigma R_k \mathbf{g}) \circledast (\sigma^{-1} R_k^{-1} \mathbf{x}_i)$, where $R_k$ is a circulant shift matrix by $k$, $k = 1, \ldots, n-1$, and $\sigma \neq 0$ is an arbitrary scalar. Hence, recovery should be interpreted in the sense that $\mathbf{g}$ and $\mathbf{X}$ are accurately recovered up to a circulant shift and a scaling factor.
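This ambiguity is easy to confirm numerically; in the sketch below (ours), shifting $\mathbf{g}$ forward by $k$ while shifting $\mathbf{x}$ backward by $k$ and rescaling leaves the observation unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
n, k, sigma = 8, 3, 2.5
g, x = rng.standard_normal(n), rng.standard_normal(n)

circ_conv = lambda a, b: np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

# (sigma R_k g) ⊛ (sigma^{-1} R_k^{-1} x) equals g ⊛ x.
y1 = circ_conv(g, x)
y2 = circ_conv(sigma * np.roll(g, k), np.roll(x, -k) / sigma)
assert np.allclose(y1, y2)
```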

Since $\mathbf{X}$ is sparse, we aim to seek the sparsest $\mathbf{X}$ that reproduces the observations:
$$\{\hat{\mathbf{g}}, \hat{\mathbf{X}}\} = \mathop{\mathrm{argmin}}_{\mathbf{g} \in \mathbb{R}^n,\, \mathbf{X} \in \mathbb{R}^{n \times p}} \|\mathbf{X}\|_0, \quad \text{subject to} \;\; C(\mathbf{g})\mathbf{X} = \mathbf{Y}, \; \mathbf{X} \neq \mathbf{0}, \tag{4}$$
where $\|\cdot\|_0$ is the entry-wise $\ell_0$ norm. Unfortunately, (4) is computationally infeasible. A natural question is under what conditions it is guaranteed that there exists a unique pair $\{\mathbf{g}, \mathbf{X}\}$ such that $\mathbf{Y} = C(\mathbf{g})\mathbf{X}$, i.e., the identifiability of the blind deconvolution problem. One sufficient condition is established in [11], [13]: under the Bernoulli-Subgaussian model on $\mathbf{X}$, the problem (4) is identifiable with high probability, provided that $\mathbf{g}$ is invertible, $\theta \in (1/n, 1/4)$ and $p > Cn \log n$ for some constant $C$. Throughout the paper, we focus on the case where $\mathbf{g}$ is invertible.

III. A CONVEX OPTIMIZATION APPROACH

A. The Convex Formulation

Although it is desirable to develop a more tractable formulation than (4) to efficiently solve the problem, the problem remains non-convex even if we relax the $\ell_0$ norm to the $\ell_1$ norm, due to the bilinear constraint. Hence, we seek an alternative approach. Recall that any circulant matrix $C(\mathbf{g})$ admits the eigenvalue decomposition [19]:
$$C(\mathbf{g}) = \mathbf{F}^H \cdot \mathrm{diag}(\hat{\mathbf{g}}) \cdot \mathbf{F}, \tag{5}$$
where $\mathbf{F} \in \mathbb{C}^{n \times n}$ is the discrete Fourier transform (DFT) matrix, and $\hat{\mathbf{g}} = \mathbf{F}\mathbf{g}$. Since $\mathbf{g}$ is invertible, $\hat{g}_i \neq 0$ for all $1 \le i \le n$, so the inverse of $C(\mathbf{g})$ is also circulant and can be written as
$$C(\mathbf{g})^{-1} = \mathbf{F}^H \cdot \mathrm{diag}(\hat{\mathbf{g}})^{-1} \cdot \mathbf{F} := C(\mathbf{h}), \tag{6}$$
where $\mathbf{h}$ denotes the inverse filter of $\mathbf{g}$. In particular, $C(\mathbf{g})C(\mathbf{h}) = C(\mathbf{h})C(\mathbf{g}) = \mathbf{I}_n$, where $\mathbf{I}_n$ denotes the identity matrix of size $n$.
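Equation (6) gives a direct recipe for the inverse filter: the DFT of $\mathbf{h}$ is the entry-wise reciprocal of $\hat{\mathbf{g}}$. A short check (ours), assuming no DFT coefficient of $\mathbf{g}$ vanishes:

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(2)
n = 10
g = rng.standard_normal(n)

g_hat = np.fft.fft(g)
assert np.all(np.abs(g_hat) > 1e-12)       # invertibility of C(g)

# Inverse filter h from (6): DFT(h) = 1 / DFT(g).
h = np.real(np.fft.ifft(1.0 / g_hat))
assert np.allclose(circulant(g) @ circulant(h), np.eye(n), atol=1e-10)
```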

Motivated by the observation that $C(\mathbf{h})\mathbf{Y} = C(\mathbf{g})^{-1}\mathbf{Y} = \mathbf{X}$ is sparse, rather than aiming at reconstructing $C(\mathbf{g})$, we alternatively seek to recover $C(\mathbf{h})$, by considering the following convex optimization algorithm:
$$\hat{\mathbf{v}} = \mathop{\mathrm{argmin}}_{\mathbf{v} \in \mathbb{R}^n} \|C(\mathbf{v})\mathbf{Y}\|_1, \tag{7}$$
where $\|\cdot\|_1$ is the entry-wise $\ell_1$ norm that promotes sparse solutions. However, the above algorithm (7) admits a trivial solution $\hat{\mathbf{v}} = \mathbf{0}$. In order to avoid this case, an additional linear constraint can be added to (7), yielding the algorithm
$$\hat{\mathbf{v}} = \mathop{\mathrm{argmin}}_{\mathbf{v} \in \mathbb{R}^n} \|C(\mathbf{v})\mathbf{Y}\|_1, \quad \text{subject to} \;\; \mathbf{e}_1^T \mathbf{v} = 1, \tag{8}$$
where $\mathbf{e}_1 = [1, 0, \cdots, 0]^T$. This constraint not only avoids the trivial solution, but also eliminates the scaling ambiguity. The reason for the choice of $\mathbf{e}_1$ will become clear when we analyze the performance. Once $\hat{\mathbf{v}}$ is obtained, one can recover $\mathbf{X}$ as $\hat{\mathbf{X}} = C(\hat{\mathbf{v}})\mathbf{Y}$.
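Since circular convolution commutes, $C(\mathbf{v})\mathbf{y}_i = C(\mathbf{y}_i)\mathbf{v}$, so the objective of (8) is the $\ell_1$ norm of a linear map of $\mathbf{v}$ and the program is a linear program. The letter solves it with CVX in MATLAB; the following CVXPY sketch (our translation, not the authors' code) is one way to implement it in Python:

```python
import numpy as np
import cvxpy as cp
from scipy.linalg import circulant

def blind_deconv_l1(Y):
    """Solve (8): minimize ||C(v) Y||_1 subject to e_1^T v = 1, then
    recover X as C(v) Y. Uses C(v) y_i = C(y_i) v to linearize in v."""
    n, p = Y.shape
    A = np.vstack([circulant(Y[:, i]) for i in range(p)])  # stacks the C(y_i)
    v = cp.Variable(n)
    problem = cp.Problem(cp.Minimize(cp.norm1(A @ v)), [v[0] == 1])
    problem.solve()
    v_hat = v.value
    return v_hat, circulant(v_hat) @ Y
```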

B. Connection to Other Problems

First, the problem under study is related to learning an invertible dictionary [13], [14], where one aims to recover an invertible $\mathbf{A} \in \mathbb{R}^{n \times n}$ and a sparse coefficient matrix $\mathbf{X} \in \mathbb{R}^{n \times p}$ from their product $\mathbf{Y} = \mathbf{A}\mathbf{X}$. The trick used in developing (8) is also reminiscent of the ER-SpUD algorithm in [13]. Although it is possible to apply ER-SpUD to our problem by ignoring the circulant structure of the dictionary, it requires solving $p$ different subproblems, each with complexity similar to that of (8), which is much more demanding.

By writing $C(\mathbf{v}) = \mathbf{F}^H \mathrm{diag}(\mathbf{F}\mathbf{v})\mathbf{F}$, the objective function of (8) can be rewritten as
$$\|\mathbf{F}^H \mathrm{diag}(\mathbf{F}\mathbf{v})\mathbf{F}\mathbf{Y}\|_1 = \left\| \mathrm{vec}[\mathbf{F}^H \mathrm{diag}(\mathbf{F}\mathbf{v})\mathbf{F}\mathbf{Y}] \right\|_1 = \left\| (\mathbf{Y}^T\mathbf{F} \odot \mathbf{F}^H)\mathbf{F}\mathbf{v} \right\|_1,$$
where $\mathrm{vec}[\cdot]$ vectorizes the argument matrix, and $\odot$ denotes the Khatri-Rao product [20]. Hence, the problem (8) can be interpreted as finding a sparse vector in the range of the structured matrix $\mathbf{S} = (\mathbf{Y}^T\mathbf{F} \odot \mathbf{F}^H)\mathbf{F} \in \mathbb{C}^{pn \times n}$ [15], [16].
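This is the standard vectorization rule $\mathrm{vec}[\mathbf{A}\,\mathrm{diag}(\mathbf{u})\mathbf{B}] = (\mathbf{B}^T \odot \mathbf{A})\mathbf{u}$ applied with $\mathbf{A} = \mathbf{F}^H$, $\mathbf{B} = \mathbf{F}\mathbf{Y}$, $\mathbf{u} = \mathbf{F}\mathbf{v}$, and it can be checked numerically. The sketch below (ours) uses SciPy's `dft` and column-wise `khatri_rao` helpers and column-major vectorization:

```python
import numpy as np
from scipy.linalg import dft, khatri_rao

rng = np.random.default_rng(3)
n, p = 6, 4
Y = rng.standard_normal((n, p))
v = rng.standard_normal(n)

F = dft(n, scale='sqrtn')        # unitary (and symmetric) DFT matrix
FH = F.conj().T

# Column-major vectorization of F^H diag(Fv) F Y ...
lhs = (FH @ np.diag(F @ v) @ F @ Y).flatten(order='F')
# ... equals the structured matrix S = (Y^T F ⊙ F^H) F applied to v.
S = khatri_rao(Y.T @ F, FH) @ F
assert np.allclose(lhs, S @ v)
```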

Finally, note that we can rewrite (3) as $\mathbf{F}\mathbf{Y} = \mathrm{diag}(\hat{\mathbf{g}})\mathbf{F}\mathbf{X}$ by multiplying both sides by $\mathbf{F}$. The problem of simultaneously recovering $\hat{\mathbf{g}}$ and $\mathbf{X}$ is then equivalent to blind calibration of a compressed sensing system [17], [18], where the sparsifying basis is the DFT matrix and $\hat{\mathbf{g}}$ is the unknown calibration vector. In this case, the proposed algorithm (8) becomes equivalent to the algorithm in [17], [18] for gain calibration, which was only studied numerically there without performance analysis.

IV. THEORETICAL ANALYSIS

We now provide a theoretical analysis of the proposed method in (8). We first establish the sufficient conditions under which the proposed method allows exact and stable recovery, then we comment on when these conditions hold with high probability under the Bernoulli-Subgaussian model of $\mathbf{X}$. Let $S$ be the support of $\mathbf{X}$ and $[\cdot]_S$ be the argument matrix restricted to the support $S$. Denote $|h|_{(i)}$ as the $i$-th largest entry of $\mathbf{h}$ in absolute value, and without loss of generality let $|h|_{(1)} = 1$ to eliminate the scaling ambiguity. We first establish the following results, which will be useful later. For any $\mathbf{v} \in \mathbb{R}^n$ we have
$$\mathbb{E}\|C(\mathbf{v})\mathbf{X}\|_1 = \mathbb{E}\sum_{i=1}^{p} \|C(\mathbf{v})\mathbf{x}_i\|_1 = np\, \mathbb{E}\left| \sum_{i=1}^{n} x_{n+1-i} v_i \right|,$$
$$\mathbb{E}\|[C(\mathbf{v})\mathbf{X}]_S\|_1 = |S|\, \mathbb{E}\left| \sum_{i=1}^{n} x_{n+1-i} v_i \right|, \tag{9}$$
and if $n\theta \ge 2$,
$$\mathbb{E}\left| \sum_{i=1}^{n} x_{n+1-i} v_i \right| \ge \frac{\mu}{4}\sqrt{\frac{\theta}{n}}\, \|\mathbf{v}\|_1, \tag{10}$$
where the last inequality follows from [13, Lemma 16]. We further make the following assumptions:


[Figure 1 about here. Panels: (a) $\gamma = 0.05$, (b) $\gamma = 0.1$, (c) $\gamma = 0.2$; horizontal axis $p$ from 10 to 80, vertical axis $\theta$ from 0.1 to 0.8, gray scale from 0 to 1.]

Figure 1: Phase transition of the proposed algorithm with respect to $\theta$ and $p$ for various values of $\gamma$. The gray level for each cell denotes the recovery success rate.

A1) There exists $0 < \delta < 1$ such that for all $\mathbf{w} \in \mathbb{R}^n$:
$$\left| \|C(\mathbf{w})\mathbf{X}\|_1 - \mathbb{E}\|C(\mathbf{w})\mathbf{X}\|_1 \right| \le \delta\, \mathbb{E}\|C(\mathbf{w})\mathbf{X}\|_1, \tag{11}$$
and
$$\left| \|[C(\mathbf{w})\mathbf{X}]_S\|_1 - \mathbb{E}\|[C(\mathbf{w})\mathbf{X}]_S\|_1 \right| \le \delta\, \mathbb{E}\|[C(\mathbf{w})\mathbf{X}]_S\|_1. \tag{12}$$

A2) There exist $0 < \delta_1, \delta_2 < 1$ such that
$$(1 - \delta_1)\mu\theta np \le \|\mathbf{X}\|_1 \le (1 + \delta_1)\mu\theta np \tag{13}$$
and
$$(1 - \delta_2)\theta np \le |S| \le (1 + \delta_2)\theta np. \tag{14}$$

With the above assumptions, we present the main theorem, which quantifies the sufficient conditions for exact recovery.

Theorem 1. Assume $2/n \le \theta \le (1 - \delta)/(2(1 + \delta)(1 + \delta_2))$. Under the assumptions A1) and A2), the proposed algorithm (8) achieves exact recovery, i.e., it uniquely identifies $\mathbf{h}$ up to the shift and scaling ambiguity, provided that
$$|h|_{(2)} \le \frac{(1 - \delta) - 2(1 + \delta)(1 + \delta_2)\theta}{4(1 + \delta_1)\sqrt{\theta n}}.$$

Proof. Rather than working with the original problem (8), we first transform it into an equivalent problem. Applying the change of variable $\mathbf{v} = C(\mathbf{h})\mathbf{w} = \mathbf{h} \circledast \mathbf{w}$, we have
$$C(\mathbf{v})\mathbf{Y} = C(\mathbf{h})C(\mathbf{w})C(\mathbf{g})\mathbf{X} = C(\mathbf{w})C(\mathbf{h})C(\mathbf{g})\mathbf{X} = C(\mathbf{w})\mathbf{X},$$
and
$$\mathbf{e}_1^T\mathbf{v} = \mathbf{e}_1^T C(\mathbf{h})\mathbf{w} = \mathbf{r}^T\mathbf{w},$$
where $\mathbf{r} = [h_1, h_n, \ldots, h_2]^T$ is the first row of $C(\mathbf{h})$, and a permuted version of $\mathbf{h}$. Then (8) is equivalent to
$$\hat{\mathbf{w}} = \mathop{\mathrm{argmin}}_{\mathbf{w} \in \mathbb{R}^n} \|C(\mathbf{w})\mathbf{X}\|_1, \quad \text{subject to} \;\; \mathbf{r}^T\mathbf{w} = 1. \tag{15}$$
Although we cannot directly implement (15), since $\mathbf{X}$ and $\mathbf{r}$ are unknown, it is more convenient to analyze. We order the entries of $\mathbf{r}$ in magnitude as $|r|_{(1)} \ge |r|_{(2)} \ge \cdots \ge |r|_{(n)}$. Without loss of generality, we assume $j^* = 1$ is the index of the largest entry of $\mathbf{r}$ in magnitude, and $r_1 = 1$. Clearly $\mathbf{e}_1$ is a feasible solution of (15). Our goal is to demonstrate that under the assumptions of Theorem 1, it is also the unique solution of (15), which in turn implies that $\mathbf{h}$ is the unique solution of (8).

Denote the solution of (15) as $\hat{\mathbf{w}} = \mathbf{w}_1 + \mathbf{w}_2 = w_1\mathbf{e}_1 + \mathbf{w}_2$, where $\mathbf{w}_1$ is supported on the first entry, and $\mathbf{w}_2$ is supported on $\{2, \ldots, n\}$. Due to the feasibility of $\hat{\mathbf{w}}$, we have
$$\mathbf{r}^T(\mathbf{w}_1 + \mathbf{w}_2) = w_1 + \mathbf{r}^T\mathbf{w}_2 = 1,$$
which gives $w_1 = 1 - \mathbf{r}^T\mathbf{w}_2$. Recall $S$ is the support of $\mathbf{X}$, whose complement is denoted as $S^c$. Let $\alpha = \mathbb{E}\left| \sum_{i=1}^n x_{n+1-i}(w_2)_i \right|$; we have
$$\begin{aligned} \|C(\hat{\mathbf{w}})\mathbf{X}\|_1 &= \|C(\mathbf{w}_1)\mathbf{X} + [C(\mathbf{w}_2)\mathbf{X}]_S + [C(\mathbf{w}_2)\mathbf{X}]_{S^c}\|_1 \\ &= \|w_1\mathbf{X} + [C(\mathbf{w}_2)\mathbf{X}]_S\|_1 + \|[C(\mathbf{w}_2)\mathbf{X}]_{S^c}\|_1 \quad (16) \\ &= \|w_1\mathbf{X} + [C(\mathbf{w}_2)\mathbf{X}]_S\|_1 + \|C(\mathbf{w}_2)\mathbf{X} - [C(\mathbf{w}_2)\mathbf{X}]_S\|_1 \\ &\ge \left(1 - |\mathbf{r}^T\mathbf{w}_2|\right)\|\mathbf{X}\|_1 - 2\|[C(\mathbf{w}_2)\mathbf{X}]_S\|_1 + \|C(\mathbf{w}_2)\mathbf{X}\|_1 \\ &\ge \left(1 - |\mathbf{r}^T\mathbf{w}_2|\right)\|\mathbf{X}\|_1 - 2(1 + \delta)\mathbb{E}\|[C(\mathbf{w}_2)\mathbf{X}]_S\|_1 + (1 - \delta)\mathbb{E}\|C(\mathbf{w}_2)\mathbf{X}\|_1 \quad (17) \\ &\ge \|\mathbf{X}\|_1 - |r|_{(2)}\|\mathbf{w}_2\|_1\|\mathbf{X}\|_1 - 2(1 + \delta)|S|\alpha + (1 - \delta)np\alpha, \end{aligned}$$
where (16) follows from the decomposability of the $\ell_1$ norm on disjoint support sets and (17) follows from A1).

In order to show $-|r|_{(2)}\|\mathbf{w}_2\|_1\|\mathbf{X}\|_1 - 2(1 + \delta)|S|\alpha + (1 - \delta)np\alpha \ge 0$, we need
$$|r|_{(2)} \le \frac{(1 - \delta)np - 2(1 + \delta)(1 + \delta_2)\theta np}{(1 + \delta_1)\mu\theta np\, \|\mathbf{w}_2\|_1}\, \alpha, \tag{18}$$
and the above inequality holds if
$$|r|_{(2)} \le \frac{(1 - \delta) - 2(1 + \delta)(1 + \delta_2)\theta}{(1 + \delta_1)\mu\theta} \cdot \frac{\mu}{4}\sqrt{\frac{\theta}{n}}, \tag{19}$$
which is assumed in the statement of the theorem. Note that (18) and (19) follow from A2) and (10), respectively. Therefore,
$$-|r|_{(2)}\|\mathbf{w}_2\|_1\|\mathbf{X}\|_1 - 2(1 + \delta)|S|\alpha + (1 - \delta)np\alpha \ge 0.$$
Meanwhile, we have $\|C(\hat{\mathbf{w}})\mathbf{X}\|_1 \le \|\mathbf{X}\|_1$. As a result, $\|C(\hat{\mathbf{w}})\mathbf{X}\|_1 = \|\mathbf{X}\|_1$ and $\mathbf{e}_1$ is the unique solution of (15).

Note that the condition on $|h|_{(2)}$ in Theorem 1 essentially postulates $\mathbf{h}$ to be a vector with most of its energy concentrated on one entry. Consequently, its inverse filter $\mathbf{g}$ also possesses this property. Therefore, Theorem 1 implies that (8) achieves exact recovery provided that the filter $\mathbf{g}$ does not deviate too far from $\mathbf{e}_1$.

A. Discussions of the Assumptions

With additional conditions, A1) and A2) can be established with high probability. For example, it can be shown that A1) holds with probability $1 - o(1)$, provided that $p \ge C_0 n \log^4 n$ for some positive constant $C_0$ [14].


[Figure 2 about here. Panels: (a) $\theta = 0.3$, (b) $\theta = 0.4$, (c) $\theta = 0.5$; horizontal axis $n$ from 5 to 40, vertical axis $p$ from 2 to 20, gray scale from 0 to 1.]

Figure 2: Phase transition of the proposed algorithm with respect to $n$ and $p$ for various values of $\theta$ when $\gamma = 0.3$. The gray level for each cell denotes the recovery success rate. The dashed line depicts the envelope of the plot.

[Figure 3 about here. Horizontal axis $\|\mathbf{N}\|_\infty$ from 0 to 0.01, vertical axis relative error from 0 to 0.035.]

Figure 3: Relative recovery error of the proposed algorithm with respect to the observation noise.

For the assumption A2), it is known from [13, Lemmas 16 and 18] that (13) holds with probability at least $1 - 4\exp(-C_1\delta_1^2\theta np)$, where $C_1$ is a positive constant, and (14) holds with probability at least $1 - 2\exp(-2\delta_2^2\theta np/3)$. Combining these results with Theorem 1, it is straightforward to argue that the proposed algorithm (8) admits exact recovery as long as $p \ge C_0 n \log^4 n$ and $\mathbf{h}$ is spiky enough. This sample complexity matches the identifiability bound in [11], [13] up to logarithmic factors.

B. Stability of the Proposed Method

We now establish a stability result for the proposed method under observation noise, by showing that the recovery only suffers a small deviation provided the noise level is small enough, under the same conditions as Theorem 1.

Theorem 2. Consider the model $\mathbf{Y} = C(\mathbf{g})\mathbf{X} + \mathbf{N}$, where $\mathbf{N}$ is the observation noise. Under the same conditions as Theorem 1, the solution $\hat{\mathbf{v}}$ of the proposed algorithm (8) satisfies
$$\|\hat{\mathbf{v}} - \mathbf{h}\|_1 \le \frac{4np\|\mathbf{h}\|_1^2\|\mathbf{N}\|_\infty}{L - np\|\mathbf{h}\|_1\|\mathbf{N}\|_\infty}, \tag{20}$$
where $L = \left((1 - \delta)np - 2(1 + \delta)(1 + \delta_2)np\theta\right)\frac{\mu}{4}\sqrt{\frac{\theta}{n}}$, provided that
$$|h|_{(2)} \le \frac{(1 - \delta) - 2(1 + \delta)(1 + \delta_2)\theta}{8(1 + \delta_1)\sqrt{\theta n}} \quad \text{and} \quad \|\mathbf{N}\|_\infty \le \frac{L}{np\|\mathbf{h}\|_1}.$$
Here, $\|\cdot\|_\infty$ denotes the entry-wise $\ell_\infty$ norm.

Due to lack of space, the proof of Theorem 2 is presented in the supplementary material.

V. NUMERICAL EXAMPLES

We present several numerical examples to showcase the performance of the proposed algorithm. In these examples, the entries of $\mathbf{X}$ are drawn i.i.d. from a Bernoulli-Gaussian distribution, where the parameter of the Bernoulli distribution is $\theta$ and the Gaussian distribution is the standard normal distribution $\mathcal{N}(0, 1)$. The filter $\mathbf{g}$ is generated as $\mathbf{g} = \mathbf{e}_1 + \gamma\mathbf{r}$ to avoid the shift ambiguity, where $\mathbf{r}$ is composed of i.i.d. standard Gaussian entries and $\gamma$ is a small constant. We rescale the estimate $\hat{\mathbf{X}}$ to have the same $\ell_1$ norm as $\mathbf{X}$, and an exact recovery is declared whenever the relative error $\|\hat{\mathbf{X}} - \mathbf{X}\|_1/\|\mathbf{X}\|_1 < 10^{-3}$. The proposed algorithm (8) is a standard convex program, which is solved numerically via the CVX package [21]. A sketch of the experiment pipeline is given below.
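A minimal version of this pipeline (our code, reusing the `bernoulli_gaussian` and `blind_deconv_l1` sketches from earlier; the constants mirror one cell of the phase-transition grids):

```python
import numpy as np
from scipy.linalg import circulant

rng = np.random.default_rng(5)
n, p, theta, gamma = 10, 40, 0.3, 0.1

X = bernoulli_gaussian(n, p, theta, rng)         # sparse inputs
e1 = np.zeros(n); e1[0] = 1.0
g = e1 + gamma * rng.standard_normal(n)          # g = e_1 + gamma * r
Y = circulant(g) @ X                             # observations (3)

v_hat, X_hat = blind_deconv_l1(Y)                # solve (8), recover X
X_hat *= np.abs(X).sum() / np.abs(X_hat).sum()   # match the l1 norm of X
rel_err = np.abs(X_hat - X).sum() / np.abs(X).sum()
success = rel_err < 1e-3                         # exact-recovery criterion
print(f"relative error {rel_err:.2e}, success: {success}")
```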

Fig. 1 shows the phase transition of the proposed algorithm with respect to $\theta$ and $p$ when $n = 10$ for various values of $\gamma$, where the recovery success rate is calculated over 20 Monte Carlo simulations per cell. It can be seen that the performance improves when $\mathbf{g}$ becomes more peaky, which also serves as a validation of Theorem 1. In order to explore the sample complexity of the proposed algorithm, Fig. 2 shows the phase transition with respect to $n$ and $p$ under $\gamma = 0.3$ and various values of $\theta$, where the recovery success rate is again calculated over 20 Monte Carlo simulations per cell. From the numerical results, we conjecture that the algorithm succeeds with high probability as soon as $p$ scales as a polynomial of $\log n$, indicating that very few samples are sufficient for blind deconvolution with sparse inputs. We refer the readers to additional numerical experiments in the supplementary material.

In Fig. 3, we demonstrate how the relative recovery error of the proposed algorithm changes with respect to the amplitude of the observation noise. We set $n = 10$, $\gamma = 0.3$, $\theta = 0.3$ and $p = 5$. A uniform random noise on $[-\|\mathbf{N}\|_\infty, \|\mathbf{N}\|_\infty]$ is applied, and the recovery error is averaged over 10 trials. We can see that the proposed algorithm yields a stable recovery, provided that the amplitude of the noise is small enough.

VI. CONCLUSION

Blind deconvolution with multiple sparse inputs has been studied in this letter. Exploiting the sparsity assumption on the signals, an $\ell_1$-minimization algorithm has been proposed to solve the problem efficiently, and sufficient conditions for exact and stable recovery have been developed. We have demonstrated that our algorithm yields promising results. Based on numerical simulations, we conjecture that both the identifiability and the performance guarantee of the proposed algorithm have room for improvement in terms of the sample complexity, which we leave for future work.


REFERENCES

[1] H. Q. Nguyen, S. Liu, and M. N. Do, “Subspace methods for computational relighting,” in IS&T/SPIE Electronic Imaging. International Society for Optics and Photonics, 2013, pp. 865703–865703.
[2] S.-i. Amari, S. C. Douglas, A. Cichocki, and H. H. Yang, “Multichannel blind deconvolution and equalization using the natural gradient,” in IEEE Signal Processing Workshop on Signal Processing Advances in Wireless Communications. IEEE, 1997, pp. 101–104.
[3] A. Paulraj and T. Kailath, “Direction of arrival estimation by eigenstructure methods with unknown sensor gain and phase,” in IEEE International Conference on Acoustics, Speech, and Signal Processing, vol. 10. IEEE, 1985, pp. 640–643.
[4] F. Sroubek and J. Flusser, “Multichannel blind iterative image restoration,” IEEE Transactions on Image Processing, vol. 12, no. 9, pp. 1094–1106, 2003.
[5] K. Lee, Y. Li, M. Junge, and Y. Bresler, “Stability in blind deconvolution of sparse signals and reconstruction by alternating minimization,” in Int. Conf. Sampling Theory and Applications (SampTA), 2015.
[6] Y. Chi, “Guaranteed blind sparse spikes deconvolution via lifting and convex optimization,” IEEE Journal of Selected Topics in Signal Processing, vol. PP, no. 99, pp. 1–1, 2016.
[7] A. Ahmed, B. Recht, and J. Romberg, “Blind deconvolution using convex programming,” IEEE Transactions on Information Theory, vol. 60, no. 3, pp. 1711–1732, 2014.
[8] S. Ling and T. Strohmer, “Self-calibration and biconvex compressive sensing,” arXiv preprint arXiv:1501.06864, 2015.
[9] G. Xu, H. Liu, L. Tong, and T. Kailath, “A least-squares approach to blind channel identification,” IEEE Transactions on Signal Processing, vol. 43, no. 12, pp. 2982–2993, 1995.
[10] D. Donoho, “Compressed sensing,” IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, April 2006.
[11] Y. Li, K. Lee, and Y. Bresler, “A unified framework for identifiability analysis in bilinear inverse problems with applications to subspace and sparsity models,” arXiv preprint arXiv:1501.06120, 2015.
[12] J. Gao, E. Ertin, S. Kumar, and M. al’Absi, “Contactless sensing of physiological signals using wideband RF probes,” in 2013 Asilomar Conference on Signals, Systems and Computers. IEEE, 2013, pp. 86–90.
[13] D. A. Spielman, H. Wang, and J. Wright, “Exact recovery of sparsely-used dictionaries,” arXiv preprint arXiv:1206.5882, 2012.
[14] K. Luh and V. Vu, “Dictionary learning with few samples and matrix concentration,” arXiv preprint arXiv:1503.08854, 2015.
[15] Q. Qu, J. Sun, and J. Wright, “Finding a sparse vector in a subspace: Linear sparsity using alternating directions,” in Advances in Neural Information Processing Systems, 2014, pp. 3401–3409.
[16] L. Demanet and P. Hand, “Scaling law for recovering the sparsest element in a subspace,” Information and Inference, p. iau007, 2014.
[17] R. Gribonval, G. Chardon, and L. Daudet, “Blind calibration for compressed sensing by convex optimization,” in IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). IEEE, 2012, pp. 2713–2716.
[18] C. Bilen, G. Puy, R. Gribonval, and L. Daudet, “Convex optimization approaches for blind sensor calibration using sparsity,” IEEE Transactions on Signal Processing, vol. 62, no. 18, pp. 4847–4856, 2014.
[19] R. A. Horn and C. R. Johnson, Matrix Analysis. Cambridge University Press, 2012.
[20] C. Khatri and C. R. Rao, “Solutions to some functional equations and their applications to characterization of probability distributions,” Sankhya: The Indian Journal of Statistics, Series A, pp. 167–180, 1968.
[21] M. Grant, S. Boyd, and Y. Ye, “CVX: Matlab software for disciplined convex programming,” http://stanford.edu/~boyd/cvx, 2008.

Blind Deconvolution from Multiple Sparse Inputs: Supplementary Material

Liming Wang, Member, IEEE, and Yuejie Chi, Member, IEEE

I. PROOF OF THEOREM 2

Proof. We follow the same steps and notation as in the proof of Theorem 1; (8) is now equivalent to
$$\hat{\mathbf{w}} = \mathop{\mathrm{argmin}}_{\mathbf{w} \in \mathbb{R}^n} \|C(\mathbf{w})\mathbf{X} + C(\mathbf{h})C(\mathbf{w})\mathbf{N}\|_1, \quad \text{subject to} \;\; \mathbf{r}^T\mathbf{w} = 1.$$
By the proof of Theorem 1, we have
$$\begin{aligned} \|C(\hat{\mathbf{w}})\mathbf{X}\|_1 &\ge \|\mathbf{X}\|_1 - |r|_{(2)}\|\mathbf{w}_2\|_1\|\mathbf{X}\|_1 - 2(1 + \delta)|S|\alpha + (1 - \delta)np\alpha \\ &\ge \|\mathbf{X}\|_1 + \left( \left((1 - \delta)np - 2(1 + \delta)|S|\right)\frac{\mu}{4}\sqrt{\frac{\theta}{n}} - |r|_{(2)}\|\mathbf{X}\|_1 \right)\|\mathbf{w}_2\|_1 \quad (1) \\ &\ge \|\mathbf{X}\|_1 + \left( \left((1 - \delta)np - 2(1 + \delta)(1 + \delta_2)np\theta\right)\frac{\mu}{4}\sqrt{\frac{\theta}{n}} - |r|_{(2)}\|\mathbf{X}\|_1 \right)\|\mathbf{w}_2\|_1. \quad (2) \end{aligned}$$
Via the assumption on $|h|_{(2)}$ and A2), we have
$$|r|_{(2)} \le \frac{(1 - \delta) - 2(1 + \delta)(1 + \delta_2)\theta}{(1 + \delta_1)\mu\theta} \cdot \frac{\mu}{8}\sqrt{\frac{\theta}{n}} \quad (3)$$
$$\le \frac{(1 - \delta)np - 2(1 + \delta)(1 + \delta_2)np\theta}{\|\mathbf{X}\|_1} \cdot \frac{\mu}{8}\sqrt{\frac{\theta}{n}}. \quad (4)$$
Plugging (4) into (2), we have
$$\|C(\hat{\mathbf{w}})\mathbf{X}\|_1 \ge \|\mathbf{X}\|_1 + \left((1 - \delta)np - 2(1 + \delta)(1 + \delta_2)np\theta\right)\frac{\mu}{4}\sqrt{\frac{\theta}{n}}\,\|\mathbf{w}_2\|_1. \quad (5)$$
Denote $L = \left((1 - \delta)np - 2(1 + \delta)(1 + \delta_2)np\theta\right)\frac{\mu}{4}\sqrt{\frac{\theta}{n}}$; we then simply have $\|C(\hat{\mathbf{w}})\mathbf{X}\|_1 \ge \|\mathbf{X}\|_1 + L\|\mathbf{w}_2\|_1$.

Hence,
$$\begin{aligned} \|C(\hat{\mathbf{w}})\mathbf{X} + C(\mathbf{h})C(\hat{\mathbf{w}})\mathbf{N}\|_1 &\ge \|C(\hat{\mathbf{w}})\mathbf{X}\|_1 - \|C(\mathbf{h})C(\hat{\mathbf{w}})\mathbf{N}\|_1 \\ &\ge \|C(\hat{\mathbf{w}})\mathbf{X}\|_1 - \|w_1 C(\mathbf{h})\mathbf{N}\|_1 - \|C(\mathbf{h})C(\mathbf{w}_2)\mathbf{N}\|_1 \quad (6) \\ &\ge \|C(\hat{\mathbf{w}})\mathbf{X}\|_1 - \|C(\mathbf{h})\mathbf{N}\|_1 - np\|\mathbf{w}_2\|_1\|C(\mathbf{h})\mathbf{N}\|_\infty \quad (7) \\ &\ge \|C(\hat{\mathbf{w}})\mathbf{X}\|_1 - \|C(\mathbf{h})\mathbf{N}\|_1 - np\|\mathbf{w}_2\|_1\|\mathbf{h}\|_1\|\mathbf{N}\|_\infty \\ &\ge \|\mathbf{X}\|_1 + L\|\mathbf{w}_2\|_1 - \|C(\mathbf{h})\mathbf{N}\|_1 - np\|\mathbf{w}_2\|_1\|\mathbf{h}\|_1\|\mathbf{N}\|_\infty, \quad (8) \end{aligned}$$
where (6) and (7) follow from the inequalities $\|C(\mathbf{h})C(\mathbf{w}_2)\mathbf{N}\|_1 \le np\|C(\mathbf{h})\mathbf{N}\|_\infty\|\mathbf{w}_2\|_1$ and $\|C(\mathbf{h})\mathbf{N}\|_\infty \le \|\mathbf{N}\|_\infty\|\mathbf{h}\|_1$.

On the other hand, we have $\|C(\hat{\mathbf{w}})\mathbf{X} + C(\mathbf{h})C(\hat{\mathbf{w}})\mathbf{N}\|_1 \le \|\mathbf{X}\|_1 + \|C(\mathbf{h})\mathbf{N}\|_1$ due to the optimality of $\hat{\mathbf{w}}$. Combining the lower and upper bounds, we have
$$\|\mathbf{X}\|_1 + L\|\mathbf{w}_2\|_1 - \|C(\mathbf{h})\mathbf{N}\|_1 - np\|\mathbf{w}_2\|_1\|\mathbf{h}\|_1\|\mathbf{N}\|_\infty \le \|\mathbf{X}\|_1 + \|C(\mathbf{h})\mathbf{N}\|_1. \quad (9)$$
Therefore, we obtain
$$\|\mathbf{w}_2\|_1 \le \frac{2\|C(\mathbf{h})\mathbf{N}\|_1}{L - np\|\mathbf{h}\|_1\|\mathbf{N}\|_\infty} \quad (10)$$
$$\le \frac{2np\|\mathbf{h}\|_1\|\mathbf{N}\|_\infty}{L - np\|\mathbf{h}\|_1\|\mathbf{N}\|_\infty}, \quad (11)$$
provided that
$$\|\mathbf{N}\|_\infty \le \frac{L}{np\|\mathbf{h}\|_1}. \quad (12)$$
We know that $\mathbf{e}_1$ is the solution of (8) when $\mathbf{N} = \mathbf{0}$, and the estimation perturbation caused by the noise can be bounded as
$$\|\hat{\mathbf{w}} - \mathbf{e}_1\|_1 = \|(w_1 - 1)\mathbf{e}_1 + \mathbf{w}_2\|_1 \quad (13)$$
$$\le |\mathbf{r}^T\mathbf{w}_2| + \|\mathbf{w}_2\|_1 \quad (14)$$
$$\le 2\|\mathbf{w}_2\|_1, \quad (15)$$
where the last inequality follows from $\|\mathbf{r}\|_\infty = 1$. Let $\hat{\mathbf{v}}$ denote the solution to (8). Due to the transformation $\hat{\mathbf{v}} = C(\mathbf{h})\hat{\mathbf{w}} = \mathbf{h} \circledast \hat{\mathbf{w}}$, we have
$$\|\hat{\mathbf{w}} - \mathbf{e}_1\|_1 = \|C(\mathbf{g})\hat{\mathbf{v}} - C(\mathbf{g})\mathbf{h}\|_1 \le 2\|\mathbf{w}_2\|_1. \quad (16)$$
Multiplying both sides of the inequality by $\|C(\mathbf{h})\|_{\ell_1}$, where $\|\cdot\|_{\ell_1}$ denotes the $\ell_1$ operator norm of the argument matrix, we have
$$2\|C(\mathbf{h})\|_{\ell_1}\|\mathbf{w}_2\|_1 \ge \|C(\mathbf{h})\|_{\ell_1}\|C(\mathbf{g})\hat{\mathbf{v}} - C(\mathbf{g})\mathbf{h}\|_1 \quad (17)$$
$$\ge \|\hat{\mathbf{v}} - \mathbf{h}\|_1, \quad (18)$$
where the last inequality follows from the fact that $\|C(\mathbf{h})\|_{\ell_1} \ge \frac{\|C(\mathbf{h})\mathbf{v}\|_1}{\|\mathbf{v}\|_1}$ for any non-vanishing $\mathbf{v} \in \mathbb{R}^n$. Moreover, since $\|C(\mathbf{h})\|_{\ell_1} = \|\mathbf{h}\|_1$, we obtain
$$\|\hat{\mathbf{v}} - \mathbf{h}\|_1 \le \frac{4np\|\mathbf{h}\|_1^2\|\mathbf{N}\|_\infty}{L - np\|\mathbf{h}\|_1\|\mathbf{N}\|_\infty}, \quad (19)$$
provided that
$$\|\mathbf{N}\|_\infty \le \frac{L}{np\|\mathbf{h}\|_1}. \quad (20)$$

II. ADDITIONAL NUMERICAL EXAMPLE

We note that the condition in Theorem 1 is only a sufficient condition. In practice, we found that the algorithm works well in many other situations. For example, in the following experiment, we consider a kernel generated with i.i.d. standard Gaussian $\mathcal{N}(0, 1)$ entries. We set $n = 10$, $\gamma = 0.3$, $\theta = 0.3$ and $p = 5$. The generated $\mathbf{g}$ is
$$\mathbf{g} = \begin{bmatrix} -0.8404 & -0.888 & 0.1001 & -0.5445 & 0.3035 & -0.6003 & 0.49 & 0.7394 & 1.712 & -0.1941 \end{bmatrix}^T,$$
and the generated data $\mathbf{X}$ is


$$\mathbf{X} = \begin{bmatrix} 0 & -1.089 & 0 & 0 & 0 \\ 0 & 0 & 0.7481 & 0.2916 & 0 \\ -1.214 & 0 & 0 & 0.1978 & 0 \\ 0 & 0 & 0 & 1.588 & 0 \\ 0 & 0 & 0 & -0.8045 & -0.6669 \\ 1.533 & 0.08593 & 0 & 0 & 0 \\ -0.7697 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & -0.1961 & -1.166 & 0 \end{bmatrix}.$$

As we discussed in the letter, recovery should be interpreted in the sense that $\mathbf{g}$ and $\mathbf{X}$ are accurately recovered up to a scaling factor. The estimated kernel $\hat{\mathbf{g}}$ after rescaling is
$$\hat{\mathbf{g}} = \begin{bmatrix} -0.8404 & -0.888 & 0.1001 & -0.5445 & 0.3035 & -0.6003 & 0.49 & 0.7394 & 1.712 & -0.1941 \end{bmatrix}^T,$$
and the estimated data $\hat{\mathbf{X}}$ after rescaling is

$$\hat{\mathbf{X}} = \begin{bmatrix} 2.725 \cdot 10^{-7} & -1.089 & 2.386 \cdot 10^{-7} & -4.214 \cdot 10^{-7} & 6.012 \cdot 10^{-9} \\ -7.748 \cdot 10^{-7} & -4.716 \cdot 10^{-7} & 0.7481 & 0.2916 & 3.531 \cdot 10^{-8} \\ -1.214 & -1.881 \cdot 10^{-7} & 3.338 \cdot 10^{-7} & 0.1978 & -1.124 \cdot 10^{-7} \\ -2.258 \cdot 10^{-7} & 7.215 \cdot 10^{-8} & 1.279 \cdot 10^{-7} & 1.588 & -2.883 \cdot 10^{-7} \\ 3.282 \cdot 10^{-7} & 4.697 \cdot 10^{-8} & -9.976 \cdot 10^{-8} & -0.8045 & -0.6669 \\ 1.533 & 0.08593 & -4.976 \cdot 10^{-9} & -8.278 \cdot 10^{-8} & -2.883 \cdot 10^{-7} \\ -0.7697 & 4.697 \cdot 10^{-8} & 2.399 \cdot 10^{-7} & -7.026 \cdot 10^{-8} & -1.124 \cdot 10^{-7} \\ -4.469 \cdot 10^{-7} & 7.215 \cdot 10^{-8} & -3.979 \cdot 10^{-8} & -1.102 \cdot 10^{-7} & 3.531 \cdot 10^{-8} \\ -1.999 \cdot 10^{-7} & -1.881 \cdot 10^{-7} & -1.244 \cdot 10^{-7} & -2.69 \cdot 10^{-8} & 6.012 \cdot 10^{-9} \\ 9.122 \cdot 10^{-8} & -4.716 \cdot 10^{-7} & -0.1961 & -1.166 & -2.046 \cdot 10^{-7} \end{bmatrix}.$$

Even though this setting does not necessarily satisfy the condition in Theorem 1, exact recovery is still achieved.