
IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 20, NO. 2, FEBRUARY 2011

Fast Sparse Image Reconstruction Using Adaptive Nonlinear Filtering

Laura B. Montefusco, Damiana Lazzaro, and Serena Papi

Abstract—Compressed sensing is a new paradigm for signal recovery and sampling. It states that a relatively small number of linear measurements of a sparse signal can contain most of its salient information and that the signal can be exactly reconstructed from these highly incomplete observations. The major challenge in practical applications of compressed sensing consists in providing efficient, stable and fast recovery algorithms which, in a few seconds, evaluate a good approximation of a compressible image from highly incomplete and noisy samples. In this paper, we propose to approach the compressed sensing image recovery problem using adaptive nonlinear filtering strategies in an iterative framework, and we prove the convergence of the resulting two-step iterative scheme. The results of several numerical experiments confirm that the corresponding algorithm possesses the required properties of efficiency, stability and low computational cost and that its performance is competitive with those of the state-of-the-art algorithms.

Index Terms—Compressed sensing, $\ell_1$-minimization, median filters, nonlinear filters, sparse image recovery, total variation.

I. INTRODUCTION

A. Compressed Sensing Paradigm

COMPRESSED sensing is a new paradigm for signal recovery and sampling. It states that a relatively small number of linear measurements of a sparse signal can contain most of its salient information. It follows that signals that have a sparse representation in a transform domain can be exactly recovered from these measurements by solving an optimization problem of the form

$$\min_{\xi}\ \|\xi\|_1 \quad \text{subject to} \quad \Phi\Psi^T\xi = b \tag{1}$$

where $\Phi$ is an $M \times N$ measurement matrix, $M \ll N$, which is required to possess the restricted isometry property (RIP) [6], [8], $x \in \mathbb{R}^N$ is the unknown signal that is $K$-sparse in the transform domain described by the orthogonal matrix $\Psi$, and $\xi = \Psi x$ is the coefficients' vector of the reconstruction $x = \Psi^T\xi$ in that domain. The number $M$ of given measurements for which we obtain the perfect recovery depends upon the length $N$ and the sparsity level $K$ of the original signal, and on the acquisition matrix $\Phi$ [6], [8].

Manuscript received October 2009; revised July 2010; accepted July 19, 2010. Date of publication July 29, 2010; date of current version January 14, 2011. This work was supported in part by Spinner2013 funds, by Miur, R.F.O. Projects, and by a grant given by Spinner2013 Project. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Rick P. Millane.

The authors are with the Department of Mathematics, University of Bologna, 40126 Bologna, Italy (e-mail: [email protected]).

Digital Object Identifier 10.1109/TIP.2010.2062194

If the unknown signal has a sparse gradient, it has been shown in [7], [8] that it can be recovered by casting problem (1) as

$$\min_{x}\ TV(x) \quad \text{subject to} \quad \Phi x = b \tag{2}$$

where $TV(x)$ denotes the total variation of $x$. This formulation is particularly suited to the image recovery problem, since many images can be modeled as piecewise-smooth functions containing a substantial number of jump discontinuities.
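To make the sparse-gradient notion concrete, the sketch below (our own illustration, not the paper's code) evaluates a discrete total variation of an image using simple forward differences; this is only one common discretization, and the paper's own 8-neighbor schemes are described in Section III-A.

```python
import numpy as np

def total_variation(u, isotropic=True):
    """Discrete TV of a 2-D image via forward differences (one common choice)."""
    dx = np.diff(u, axis=1)          # horizontal differences
    dy = np.diff(u, axis=0)          # vertical differences
    if isotropic:
        # combine the two difference fields on their common grid
        return np.sqrt(dx[:-1, :] ** 2 + dy[:, :-1] ** 2).sum()
    return np.abs(dx).sum() + np.abs(dy).sum()

# A piecewise-constant image: large jumps but few of them, hence small TV.
u = np.zeros((64, 64))
u[16:48, 16:48] = 1.0
print(total_variation(u), total_variation(u, isotropic=False))
```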

Exact measurements are often not possible in real problems, so if the measurements are corrupted with random noise, namely we have

$$b = \Phi x + \eta$$

with $\|\eta\|_2 \le \epsilon$, the original signal can be reconstructed with an error comparable to the noise level by solving the minimum problem

$$\min_{\xi}\ \|\xi\|_1 \quad \text{subject to} \quad \|\Phi\Psi^T\xi - b\|_2 \le \epsilon \tag{3}$$

or

$$\min_{x}\ TV(x) \quad \text{subject to} \quad \|\Phi x - b\|_2 \le \epsilon. \tag{4}$$

Sparse signals are an idealization that we rarely encounter in applications, but real signals are quite often compressible with respect to an orthogonal basis. This means that, if expressed in that basis, their coefficients exhibit exponential decay when sorted by magnitude. As a consequence, compressible signals are well approximated by $K$-sparse signals, and the compressed sensing paradigm guarantees that from $M$ linear measurements we can obtain a reconstruction with an error comparable to that of the best possible $K$-term approximation within the sparsifying basis [7].
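As a toy illustration of this $K$-term argument, the following sketch (our own, with an arbitrary power-law decay profile chosen purely for illustration) shows how little is lost by keeping only the $K$ largest coefficients of a compressible vector:

```python
import numpy as np
rng = np.random.default_rng(0)

# A "compressible" coefficient vector: magnitudes decay rapidly when sorted.
N, K = 4096, 200
xi = rng.standard_normal(N) * (np.arange(1, N + 1) ** -1.5)

# Best K-term approximation: keep the K largest-magnitude coefficients.
idx = np.argsort(np.abs(xi))[::-1]
xi_K = np.zeros(N)
xi_K[idx[:K]] = xi[idx[:K]]

err = np.linalg.norm(xi - xi_K) / np.linalg.norm(xi)
print(f"relative best {K}-term approximation error: {err:.2e}")
```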

B. Recovery Algorithms

A crucial point in the practical application of compressed sensing is the numerical solution of problems (1)–(4). A large amount of research has been aimed at finding fast algorithms for solving such problems. In fact, although (1)–(4) are convex optimization problems that can be solved using several standard methods such as interior-point methods, when the problems are large, as in the case of real images, their practical use is limited by the extensive computation required to generate a solution. Recently, efficient computational methods for problems of the form (1)–(4) have been developed. They include iterated shrinkage methods [3], [4], [15], Bregman iterative algorithms [5], [31], [36], gradient projection algorithms [19], fixed point continuation algorithms [23], iterative reweighted algorithms [11], [16], and the specialized interior-point method


proposed in [24] for solving the equivalent quadratic problem with linear inequality constraints. In [35], an algorithmic framework for problems of the form (2) and (4) is also proposed, which represents a generalization of the iterated shrinkage methods. Other kinds of approaches to solving problems (1)–(4) are the iterative greedy algorithms, which include OMP [33], ROMP [29] and CoSaMP [30], and the combinatorial algorithms [13]. While detailed studies and many comparisons between the performance and efficiency of the previously mentioned algorithms are given in the literature for the reconstruction of sparse 1-D signals, it is not completely clear how they work for large 2-D and 3-D problems, especially from the computational speed point of view. From the results of [2], [24], and [27] it seems that the run time necessary to reconstruct 256 × 256 and 512 × 512 images ranges from one hour to a few minutes. These values are still too high to allow for their widespread use in several 3-D practical applications, as, for example, medical imaging. The only algorithm that, up to the present, seems to be really effective for large scale problems is the split Bregman algorithm proposed in [22]. In that paper, it is shown that, by minimizing energies involving the reconstruction total variation, and using a "split" formulation combined with Bregman iteration, it is possible to obtain sufficiently accurate reconstructions of 256 × 256 sparse gradient images in a few tens of iterations.

Recently, in [28], a new nonlinear filtering strategy has been proposed in the context of a penalized approach to the compressed sensing signal recovery problem. A suitable filter is used according to the considered minimization problem, and a fast flexible algorithm has been realized for its solution. The empirical results presented in [28] show that this nonlinear filtering approach succeeds in the perfect recovery of sparse and sparse gradient 1-D signals from highly incomplete data, and that the computing time is competitive with the most efficient state-of-the-art algorithms.

In this paper, we consider an extension of this nonlinear filtering approach to the multidimensional case. As in [22], we focus on a recovery problem where the optimal solution, in addition to satisfying the acquisition constraints, has minimal "bounded variation norm," namely, it minimizes $TV(u)$. The optimal reconstruction is evaluated by solving a sequence of total variation regularized unconstrained subproblems, where both isotropic and anisotropic TV estimates have been considered. For each value of the penalization parameter, the unconstrained subproblems are approached making use of a two-step iterative procedure based upon the forward-backward splitting method [12]. Interestingly, in the compressed sensing context, where the acquisition matrix is obtained as randomly chosen rows of an orthogonal transform, the two steps of the iterative procedure become an enforcing of the current iterate to be consistent with the given measurements, and a total variation filtering step. Similar results are also obtained if we consider a different sparsity inducing norm in the compressed sensing formulation, with the only difference that the nonlinear filter is chosen according to the corresponding variational subproblems.

The contributions of this paper are threefold. First, we give a theoretical justification of the filtering approach proposed in [28] by considering it as a particular case of a more general alternating minimization strategy based upon the proximal operator splitting method. This allowed us to state its convergence property in the compressed sensing framework. Second, we extend it to the box constrained case, which is shown to be particularly useful for the CS image reconstruction problem. Finally, the stability properties of the proposed method are highlighted by a wide experimentation that also assesses the reconstruction capabilities and the speed of the corresponding algorithm for undersampled compressible image recovery. The paper is organized as follows. In Section II, we present the general form of the proposed iterative reconstruction method and we state its convergence properties. The corresponding algorithm is given in Section III, together with an exhaustive discussion on the choice of the free parameters. The results of several numerical experiments are shown in Section IV, where the performance of the proposed method and its speed are highlighted and compared with those of the split Bregman algorithm [22].

II. RECONSTRUCTION APPROACH

Notations

We set here our notation and state the results we will use in the following.

Let $M \in \mathbb{R}^{n \times n}$ be a randomly generated binary mask, such that the point-to-point product with any $u \in \mathbb{R}^{n \times n}$, denoted by $M \odot u$, represents a random selection of the elements of $u$, namely, we have

$$(M \odot u)_{i,j} = \begin{cases} u_{i,j} & \text{if } M_{i,j} = 1 \\ 0 & \text{if } M_{i,j} = 0. \end{cases}$$

Let $W$ be an orthogonal transform acting on an image $u \in \mathbb{R}^{n \times n}$. We denote by

$$\Phi(u) = M \odot (Wu)$$

the randomly subsampled orthogonal transform of $u$. We have the following.

Proposition: Under the compressed sensing assumptions on the measurement matrix [6], [8], the subsampled orthogonal transform $\Phi$ satisfies

$$\Phi\,\Phi^* = I \quad \text{on the measurement domain} \tag{5}$$

$$\|\Phi u\|_2^2 \le (1 + \delta)\,\|u\|_2^2, \qquad \delta < 1 \tag{6}$$

where $\delta$ is the restricted isometry constant of $\Phi$.

Proof: Assertion (5) follows from the orthogonality of the rows of $\Phi$, while (6) is a consequence of the assumption that $\Phi$ possesses the restricted isometry property.
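As an illustration of this setting, the sketch below (with illustrative choices: the orthonormal 2-D FFT as $W$ and a 25% random mask) builds the operator $\Phi(u) = M \odot (Wu)$ and verifies property (5) numerically. Property (6) cannot be checked this directly, since it concerns the action of $\Phi$ on sparse vectors.

```python
import numpy as np
rng = np.random.default_rng(1)

n = 64
M = (rng.random((n, n)) < 0.25).astype(float)    # binary mask, ~25% samples

W  = lambda u: np.fft.fft2(u, norm="ortho")      # orthonormal 2-D transform
Wt = lambda v: np.fft.ifft2(v, norm="ortho")     # its adjoint/inverse

Phi  = lambda u: M * W(u)                        # Phi(u)  = M .* (W u)
Phit = lambda v: Wt(M * v)                       # Phi*(v) = W* (M .* v)

# Check (5): Phi Phi* acts as the identity on the measurement domain.
v = M * rng.standard_normal((n, n))              # data supported on the mask
print(np.allclose(Phi(Phit(v)), v))              # -> True
```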

A. Iterative Reconstruction Method and Its Convergence Property

The general compressed sensing problem can now be stated using the previous notations.

Given the mask $M$, such that the number of its nonzero entries equals the number $m$ of measurements, the sparseness inducing norm $J(u)$, and the $m$ linear measurements of an unknown image $\bar{u} \in \mathbb{R}^{n \times n}$, obtained by randomly subsampling the action of an orthogonal 2-D transform $W$, then the input data can be represented as

$$b = M \odot (W\bar{u}).$$


We want to find $u^*$ that solves

$$\min_{u}\ J(u) \quad \text{subject to} \quad M \odot (Wu) = b. \tag{7}$$

In the case of input data perturbed by additive white Gaussian noise with standard deviation $\sigma$,

$$b = M \odot (W\bar{u}) + \eta,$$

the problem can be cast as

$$\min_{u}\ J(u) \quad \text{subject to} \quad \|M \odot (Wu) - b\|_2 \le \epsilon \tag{8}$$

where $\epsilon$ represents the noise level that, with high probability, satisfies $\|\eta\|_2 \le \epsilon$, and can be estimated (see [7], Section III) as

$$\epsilon = \sigma\sqrt{m + 2\sqrt{2m}}. \tag{9}$$
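A quick Monte Carlo sanity check of an estimate of this form is easy to run. The expression coded below, $\epsilon = \sigma\sqrt{m + 2\sqrt{2m}}$, is our reading of the rule in [7] and should be treated as an assumption; the simulation confirms that $\|\eta\|_2 \le \epsilon$ holds in the vast majority of draws.

```python
import numpy as np
rng = np.random.default_rng(2)

sigma, m = 0.05, 15000                               # noise std, number of measurements
eps = sigma * np.sqrt(m + 2.0 * np.sqrt(2.0 * m))    # noise level estimate, cf. (9)

norms = [np.linalg.norm(sigma * rng.standard_normal(m)) for _ in range(1000)]
print(eps, np.mean(np.array(norms) <= eps))          # the fraction is close to 1
```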

If $J$ contains an $\ell_1$-norm, the optimization problems (7)–(8) can be very difficult to solve directly, due to the nondifferentiability of $J(u)$. To overcome this problem, we use the well known penalization approach that considers a sequence of unconstrained minimization subproblems of the form

$$\min_{u}\ \lambda J(u) + \frac{1}{2}\|M \odot (Wu) - b\|_2^2 \tag{10}$$

where $\lambda = \lambda_1 > \lambda_2 > \cdots > \lambda_L$ represents a decreasing sequence of penalization parameter values enforcing the constraints to be satisfied.

The convergence of the penalization method to the solution of the original constrained problem has been established (under very mild conditions) when $\lambda_k \to 0$. Unfortunately, in general, using very small penalization parameter values makes the unconstrained subproblems very ill-conditioned and difficult to solve. In the present context, we do not have this limitation, since we will approach these problems implicitly, thus avoiding the need to deal with ill-conditioned linear systems. This is obtained by evaluating an approximation of the solution of (10) iteratively, using an operator splitting strategy (frequently considered in the literature to solve $\ell_1$-regularized problems [12], [15], [22], [35]), and taking advantage of the particular structure of the resulting problems.

Specifically, recalling that the proximal operator of the convex functional $J$ is defined as the function of $z$ satisfying

$$\mathrm{prox}_{\gamma J}(z) = \arg\min_{u}\ \left\{ J(u) + \frac{1}{2\gamma}\|u - z\|_2^2 \right\} \tag{11}$$

for a suitable value of the parameter $\gamma$, the splitting approach finds the minimum of (10) iteratively, by requiring that at each iteration the argument of the proximal operator satisfies

$$z^{(k)} = u^{(k)} + \gamma\, W^T\big(b - M \odot (Wu^{(k)})\big). \tag{12}$$

We, therefore, obtain for each value of the penalization parameter $\lambda$, the following two-step iterative algorithm:

$$\begin{aligned} z^{(k)} &= u^{(k)} + \gamma\, W^T\big(b - M \odot (Wu^{(k)})\big) \\ u^{(k+1)} &= \arg\min_{u}\ \left\{ \lambda J(u) + \frac{1}{2\gamma}\|u - z^{(k)}\|_2^2 \right\} \end{aligned} \tag{13}$$

where the second step consists in a convex minimization that admits a unique minimizer.

Regarding the convergence of this two-step approach, we recall that the convergence of this forward-backward technique is proved in [12] for the general problem

$$\min_{u}\ f_1(u) + f_2(u) \tag{14}$$

by requiring that:

• $f_1$ and $f_2$ are proper, lower semicontinuous, convex, and coercive;

• $f_2$ is differentiable with $\beta$-Lipschitz continuous gradient, with $\gamma \in (0, 2/\beta)$.

In our case, where $f_1(u) = \lambda J(u)$ and $f_2(u) = \frac{1}{2}\|M \odot (Wu) - b\|_2^2$, the condition for the convergence of the two-step algorithm (13) becomes

$$\gamma < \frac{2}{\|\Phi^*\Phi\|_2}.$$

Relation (6) guarantees the convergence of our iterative procedure for the choice $\gamma = 1$, but bigger values can also be used according to the value of the restricted isometry constant of the considered measurement matrix.

The following proposition characterizes the first step of the iterative algorithm (13) and proves the equivalence of this updating step with the projection phase of the algorithm NFCS in [28] when $\gamma = 1$.

Proposition: When the linear operator $\Phi$ corresponds to a subsampled orthogonal transform and $\gamma = 1$, the updating step

$$z^{(k)} = u^{(k)} + W^T\big(b - M \odot (Wu^{(k)})\big)$$

corresponds to an exact enforcement of the constraint, namely

$$M \odot (Wz^{(k)}) = b. \tag{15}$$

Proof: Relation (15) follows immediately from (5). In fact, we have

$$Wz^{(k)} = Wu^{(k)} + b - M \odot (Wu^{(k)})$$

and by (5) we obtain

$$M \odot (Wz^{(k)}) = M \odot (Wu^{(k)}) + M \odot b - M \odot (Wu^{(k)}) = b.$$
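This exact constraint enforcement is easy to confirm numerically; a sketch with the same illustrative FFT-based $\Phi$ as before:

```python
import numpy as np
rng = np.random.default_rng(3)

n = 64
M = (rng.random((n, n)) < 0.3).astype(float)        # binary mask
W  = lambda u: np.fft.fft2(u, norm="ortho")         # orthonormal transform
Wt = lambda v: np.fft.ifft2(v, norm="ortho")        # its adjoint (inverse)

b = M * W(rng.standard_normal((n, n)))              # exact measurements
u = rng.standard_normal((n, n))                     # arbitrary current iterate

z = u + Wt(b - M * W(u))                            # updating step, gamma = 1
print(np.allclose(M * W(z), b))                     # True: constraint (15) holds
```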

The second step of the proposed split method is typically a variational formulation of a denoising step, where the noisy image is represented by the output of the previous updating step. If $J(u) = \|u\|_1$, its exact solution is obtained using the well known soft thresholding operator [15], [28], that is

$$u^{(k+1)}_{i,j} = \mathrm{sign}\big(z^{(k)}_{i,j}\big)\,\max\big(|z^{(k)}_{i,j}| - \lambda\gamma,\ 0\big).$$

When $J(u) = TV(u)$, an approximate solution of this step can be efficiently evaluated using one of the discrete total variation minimization algorithms given in the literature (see Section III-A).
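For completeness, a minimal sketch of the soft thresholding operator used in the $\ell_1$ case (threshold $\tau = \lambda\gamma$):

```python
import numpy as np

def soft_threshold(z, tau):
    """Proximal operator of tau * ||.||_1: componentwise shrinkage toward zero."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

print(soft_threshold(np.array([-2.0, -0.3, 0.1, 0.8, 3.0]), 0.5))
# -> [-1.5 -0.   0.   0.3  2.5]
```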

B. Bound Constrained Reconstruction

When dealing with image reconstruction problems, it is well known that image intensity values have to be nonnegative and bounded above, namely $0 \le u_{i,j} \le \rho$, with $\rho$ the maximum grey level. This suggests that we could insert more information in the compressed sensing reconstruction problem by adding a bound constraint to (7) and (8), and considering: find $u^*$ that solves

$$\min_{u \in \Omega}\ J(u) \quad \text{subject to} \quad M \odot (Wu) = b \tag{16}$$

or

$$\min_{u \in \Omega}\ J(u) \quad \text{subject to} \quad \|M \odot (Wu) - b\|_2 \le \epsilon \tag{17}$$

where $\Omega$ is the closed convex set

$$\Omega = \{u \in \mathbb{R}^{n \times n} : 0 \le u_{i,j} \le \rho\}.$$

The solution of (16) and (17) can be evaluated by a penalized strategy as in (10) and using a splitting approach, but this time we consider the following splitting:

$$f_1(u) = \lambda J(u) + \iota_\Omega(u) \quad \text{and} \quad f_2(u) = \frac{1}{2}\|M \odot (Wu) - b\|_2^2 \tag{18}$$

$\iota_\Omega$ being the indicator function on $\Omega$, and we use the gradient-descent proximal operator iteration of [12]

$$u^{(k+1)} = \mathrm{prox}_{\gamma f_1}\big(u^{(k)} - \gamma\,\nabla f_2(u^{(k)})\big). \tag{19}$$

The corresponding bound constrained two-step iterative algorithm is the following:

$$\begin{aligned} z^{(k)} &= u^{(k)} + \gamma\, W^T\big(b - M \odot (Wu^{(k)})\big) \\ u^{(k+1)} &= \arg\min_{u \in \Omega}\ \left\{ \lambda J(u) + \frac{1}{2\gamma}\|u - z^{(k)}\|_2^2 \right\}. \end{aligned} \tag{20}$$

The convergence of this new version of the algorithm is still guaranteed, for $\gamma \in (0, 2/\beta)$, by the results of [12].
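The exact proximal step in (20) couples the nonlinear filter with the box constraint. As described in Section IV, the experiments approximate it by running the unconstrained denoiser and then projecting onto $\Omega$; a sketch of that approximation, where `denoise` is a placeholder for either TV filter of Section III-A:

```python
import numpy as np

def project_box(u, lo=0.0, hi=1.0):
    """Euclidean projection onto Omega = {u : lo <= u_ij <= hi}."""
    return np.clip(u, lo, hi)

def constrained_filtering_step(z, denoise, lam, lo=0.0, hi=1.0):
    """Approximate the constrained minimizer in (20): denoise, then project.
    `denoise(z, lam)` stands for an unconstrained TV solver (placeholder)."""
    return project_box(denoise(z, lam), lo, hi)
```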

III. RECONSTRUCTION ALGORITHM

The proposed penalized splitting approach (10) and (13)/(20) corresponds to an algorithm whose structure is characterized by a two-level iteration. There is an outer loop, which progressively diminishes the penalization parameter $\lambda$ in order to obtain the convergence to the global minimum, and an inner loop, which iteratively, using the two-step approach, minimizes the penalization function for the given value of $\lambda$.

The general scheme of the bound constrained algorithm is the following.

Algorithm NFCS-2D

Step A-0: Initialization.

Given the data $b$, the mask $M$, the orthogonal transform $W$, the step length $\gamma$, the initial penalization parameter $\lambda_0$, the reduction factor $\mu \in (0, 1)$, the inner tolerance $\eta$, and the outer tolerance $tol$.

Set $k = 0$, $u^{(0)} = W^T b$, and $\lambda = \lambda_0$.

Step A-1: Start with the outer iterations

while ($\|M \odot (Wu^{(k)}) - b\|_2 > tol$ and $\lambda > 0$)

Step B-0: Start with the inner iterations

Step B-1:

Updating Step:

$$z^{(k)} = u^{(k)} + \gamma\, W^T\big(b - M \odot (Wu^{(k)})\big)$$

Constrained Nonlinear Filtering Step:

$$u^{(k+1)} = \arg\min_{u \in \Omega}\ \left\{ \lambda J(u) + \frac{1}{2\gamma}\|u - z^{(k)}\|_2^2 \right\}$$

Convergence Test:

if $\|u^{(k+1)} - u^{(k)}\|_2 > \eta\,\|u^{(k)}\|_2$, set $k = k + 1$ and go to Step B-1;

otherwise go to Step A-2.

Step A-2: Outer Iteration Updating

$\lambda = \mu\,\lambda$, $k = k + 1$

endwhile

Terminate with $u^* = u^{(k)}$ as an approximation of $\bar{u}$.
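To fix ideas, here is a compact Python sketch of the two-level structure above. It is a reconstruction under stated assumptions, not the authors' Matlab code: the starting guess $u^{(0)} = W^T b$, the stopping tests, and all default parameter values are illustrative, and `denoise(z, t)` stands for either TV filter of Section III-A.

```python
import numpy as np

def nfcs2d(b, mask, W, Wt, denoise, lam0=1.0, mu=0.5, gamma=1.0,
           tol=1e-3, eta=1e-2, max_outer=50, max_inner=200, lo=0.0, hi=1.0):
    """Sketch of the NFCS-2D two-level iteration (notation of Sections II and III).

    b: masked measurements; mask: binary mask; W, Wt: orthogonal transform
    and its adjoint; denoise(z, t): nonlinear filter for the variational step.
    """
    u = np.real(Wt(b))                        # minimum-energy starting guess (assumption)
    lam = lam0
    bnorm = np.linalg.norm(b)
    for _ in range(max_outer):                # outer loop: decrease lambda
        if np.linalg.norm(mask * W(u) - b) <= tol * bnorm:
            break                             # acquisition constraints met
        for _ in range(max_inner):            # inner loop: two-step iteration (20)
            z = u + gamma * np.real(Wt(b - mask * W(u)))       # updating step
            u_new = np.clip(denoise(z, lam * gamma), lo, hi)   # filtering + box projection
            small_change = np.linalg.norm(u_new - u) <= eta * np.linalg.norm(u)
            u = u_new
            if small_change:
                break
        lam *= mu                             # Step A-2: reduce the penalization parameter
    return u
```

Here `np.real` merely keeps the iterate real-valued when $W$ is a complex transform such as the FFT; with a real orthogonal transform it is a no-op. A call might look like `nfcs2d(b, M, W, Wt, tv_filter)`, with any TV solver of Section III-A in place of the hypothetical `tv_filter`.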

Remark: The automatic stopping criterion of the outer loop depends upon which problem we are considering. If we want to recover an exactly sparse gradient image from noise-free acquisitions, the noise level can be set to 0, and with $tol$ of the order of the machine precision, we should obtain a numerically exact reconstruction. On the other hand, if we deal with compressible images or noisy data, the stopping rule is governed by $tol$.

Note that there is considerable freedom in the proposed algorithm. In fact, both the orthogonal transform $W$ and the sparsity inducing norm $J(u)$ can be chosen according to the characteristics of the reconstruction problem. In [28], several nonlinear 1-D filters have been widely experimented for different compressed sensing signal reconstruction problems, and their capabilities and efficiency have been analyzed. In this context, we are mainly concerned with the image reconstruction problem and, since many real images can be well approximated with sparse gradient signals, we have only considered the choice $J(u) = TV(u)$, namely the case in which $J$ represents the total variation of the image. Alternative choices, more suited to the solution of particular recovery problems, are, of course, possible.

A. Discrete Total Variation Minimization Algorithms

When $J(u) = TV(u)$, the minimization step of the iterative algorithm becomes

$$u^{(k+1)} = \arg\min_{u}\ \left\{ \lambda\, TV(u) + \frac{1}{2\gamma}\|u - z^{(k)}\|_2^2 \right\}. \tag{21}$$

This is the well known Rudin–Osher–Fatemi (ROF) total variation based model [32] for image denoising, which is now one of the most successful tools for image restoration. This model was originally designed for continuous signals, and its Euler-Lagrange equation is a nonlinear partial differential equation. When applying this model to a digital image, one has to choose the numerical scheme carefully to take account of the nonlinearity. In recent years, some efficient iterative algorithms have been proposed to overcome the problems of the corresponding discrete formulation [9], [10], [14], [21], [25]. We have considered and experimented for compressed sensing image recovery the following two algorithms: the digital total variation filter proposed in [10], and a very recent new median filter, proposed in [25] to solve a more general minimization problem. These algorithms are both iterative, but differ in the way they discretize the image gradient: the first one considers an isotropic discretization of the total variation, while the second uses an anisotropic gradient estimate that gives different weights to the neighbors taken into account for the evaluation [21]. In the following, we briefly describe the main steps of the new median filter method and we propose a recursive variant that avoids the computational burden of the iterative application. For a more detailed description of both the considered discrete TV algorithms, we refer to the original papers.

Median filters are well known nonlinear filters used in image denoising problems. They act by shifting a fixed-size window along the two directions of the input image and replacing the central value with the median of the ordered set of values inside the window. These filters enjoy the following minimum property: the output of a weighted median filter acting on the set $\{a_1, \ldots, a_n\}$ can be defined as the value of $x$ minimizing

$$\sum_{k=1}^{n} w_k\,|x - a_k|$$

where the $w_k$ are nonnegative weights.

Fig. 1. 8-neighbor structure used for both the isotropic and the anisotropic gradient evaluation.

In [25], the authors consider the more general minimization problem

$$\min_{x}\ \sum_{k=1}^{n} w_k\,|x - a_k| + F(x) \tag{22}$$

and show that, if the weights $w_k$ are nonnegative and $F$ satisfies suitable conditions, the minimizer of (22) is a median

$$x^* = \mathrm{median}(a_1, \ldots, a_n,\ p_0, p_1, \ldots, p_n) \tag{23}$$

where the values $p_m$, $m = 0, \ldots, n$, depend upon the weights and on the inverse function of $F'$.

If we consider the $3 \times 3$ region surrounding the pixel $(i,j)$, we can interpret $\sum_k w_k\,|u_{i,j} - u_k|$ as the norm of the anisotropic discretization of the gradient of $u$ at $(i,j)$, obtained using the 8-neighbors of Fig. 1 and assigning different weights $w_k$ to the white (axial) and gray (diagonal) pixels, as in [21]. Then, by fixing all the pixel values except at the central point, and choosing $F(x) = \frac{1}{2\lambda}(x - z_{i,j})^2$, from the general result (23) we have

$$u^{new}_{i,j} = \mathrm{median}(u_1, \ldots, u_8,\ p_0, \ldots, p_8) \tag{24}$$

where $u_1, \ldots, u_8$ are the 8 neighbors of the pixel $(i,j)$ and

$$p_m = z_{i,j} + \lambda\left(\sum_{k=m+1}^{8} w_{(k)} - \sum_{k=1}^{m} w_{(k)}\right), \qquad m = 0, \ldots, 8$$

the weights $w_{(k)}$ being ordered consistently with the sorted neighbor values.


Fig. 2. 256 × 256 test images. (a) Shepp-Logan phantom. (b) Head image. (c) Boat image.

The output of this new median filter, thus, minimizes the local total variation. By iteratively applying this new median formula pixel-by-pixel until convergence, we obtain the global minimum.

We remark that, since the bottleneck in the computation of the filtered value (24) is the sorting of the 17 elements, in [25] the authors, by exploiting the structure of the sequence in the median formula, have suggested a cheaper sorting algorithm that requires only 40 comparisons.

The results known for the classical median filter enable us to further reduce the computational cost. In fact, in [20] the authors have shown that the action of the iterated median filter can be obtained using only one iteration of its recursive version. We have, therefore, applied the following recursive new median filter

$$u^{new}_{i,j} = \mathrm{median}(\tilde{u}_1, \ldots, \tilde{u}_8,\ p_0, \ldots, p_8) \tag{25}$$

where the neighbor values $\tilde{u}_k$ already updated in the current sweep are used in place of the old ones, and that, in just one iteration, yields very good results.
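The sketch below illustrates the mechanics of this local median minimization. The $p_m$ formula follows the structure recalled above for $F(x) = (x - z)^2/(2\lambda)$, and the weights (1 for axial, $1/\sqrt{2}$ for diagonal neighbors) are an assumption made for illustration, not necessarily the weights of [21]; a brute-force minimization over a fine grid is used to check the formula.

```python
import numpy as np

def local_median_min(a, w, z, lam):
    """Minimize sum_k w_k |x - a_k| + (x - z)^2 / (2 lam) via a median,
    following the structure of (23)-(24)."""
    order = np.argsort(a)
    a, w = np.asarray(a, float)[order], np.asarray(w, float)[order]
    cum = np.concatenate(([0.0], np.cumsum(w)))       # cum[m] = w_(1) + ... + w_(m)
    p = z + lam * (cum[-1] - 2.0 * cum)               # p_m, m = 0, ..., n
    return np.median(np.concatenate((a, p)))          # 2n + 1 values: 17 when n = 8

rng = np.random.default_rng(4)
w = np.concatenate((np.ones(4), np.full(4, 1 / np.sqrt(2))))  # assumed weights
for _ in range(5):
    a, z, lam = rng.standard_normal(8), rng.standard_normal(), 0.3
    grid = np.linspace(-8.0, 8.0, 320001)
    f = (np.abs(grid[:, None] - a) * w).sum(axis=1) + (grid - z) ** 2 / (2 * lam)
    assert abs(local_median_min(a, w, z, lam) - grid[np.argmin(f)]) < 1e-3
print("median formula matches brute-force minimization")
```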

B. Choice of the Parameter Values

As occurs in the majority of numerical optimization methods, there are some free parameters in the proposed algorithm. The first parameter is the starting value $\lambda_0$ of the penalization parameter and its reducing rate $\mu$. Following the result of [24], which states that for sufficiently large $\lambda$ the unique minimum of (10) is the zero vector, we have almost always initialized $\lambda_0$ accordingly, using a smaller starting value for some noisy cases. The value of the reducing rate $\mu$ has been chosen within a fixed interval in $(0, 1)$. In spite of the empirical nature of these choices, we have experimentally seen that they work very well for all the images used to test the proposed algorithm. In particular, we have used a small value of $\mu$ for the noise-free sparse gradient case, and larger values for the other cases. Different choices for $\lambda_0$ and $\mu$ are also valid, but they affect the convergence speed. The second parameter is $tol$, used to stop the algorithm both for compressible images and clean data and for all cases of noisy data. Due to the approximately sparse nature of compressible images, we do not expect to reach perfect reconstruction, even if the available data are noise-free. So a seemingly acceptable choice for a stopping criterion could be to stop the algorithm when the constraints are met within a relative precision of the order of 0.1%. When the data are noisy and the noise level is known, a possible choice of $tol$ could be $\epsilon$. Since the value of $\epsilon$ often overestimates the error norm, we have used a more flexible stopping criterion, setting $tol = c\,\epsilon$, with $c < 1$. This choice is motivated by our experimental results, which have shown that the proposed filtering approach not only avoids the amplification of the errors during the reconstruction process, but actually reduces them. The choice $tol = \epsilon$ would stop the algorithm too early, without exploiting its denoising capabilities. The third parameter is $\eta$, which represents a means for tuning the precision requested in the inner iterations. A higher precision is responsible for an increase in the computing time, but can produce a more accurate reconstruction. So, in the attempt to find a good compromise between speed and reconstruction quality, we have used larger values of $\eta$ for sparse gradient images and exact data, and smaller values for compressible images and noisy data. Regarding the choice of the parameter $\gamma$, we have always set $\gamma = 1$, even if a suitably greater value could be used to speed up the convergence of the algorithm. For example, in the experiment of Table II, the algorithm with $\gamma = 1$ needs 241 iterations to reach a given precision, while with a larger $\gamma$ the same result can be obtained with only 174 iterations. Anyway, the choice of the right value of $\gamma$ requires a more detailed theoretical analysis, which is beyond the scope of the present paper.

The mentioned values for the free parameters are the result of a wide experimentation and can represent a suggestion for an initial guess when using the algorithm. We are aware that setting the right parameter values is a difficult practical problem, but this is a common drawback of all the state-of-the-art compressed sensing algorithms.

IV. NUMERICAL EXPERIMENTS

In this section, we demonstrate the effectiveness of the proposed image reconstruction algorithm by reporting several numerical experiments that highlight its reconstruction capabilities, stability and speed. In all the experiments, we have used simulated data. This choice is motivated by the need to give an objective quantitative evaluation of the effectiveness of the proposed algorithm by using reconstructed image quality. In fact,


Fig. 3. 256 × 256 acquisition masks. I: SparseMRI mask corresponding to 75% undersampling. II: Radial mask with 60 rays corresponding to 77% undersampling. III: 2-D tensor product Gaussian mask corresponding to 77% undersampling. IV: 2-D Gaussian mask corresponding to 90% undersampling.

the visual inspection of the reconstructed images alone, as given in [22] and [24], is not really enough to compare the performance of different reconstruction algorithms.

The image quality has been evaluated using the PSNR value, defined as

$$\mathrm{PSNR} = 20\,\log_{10}\!\left(\frac{I_{\max}}{\mathrm{RMSE}}\right)\ \mathrm{dB}$$

with

$$\mathrm{RMSE} = \frac{\mathrm{Error}}{n} \tag{26}$$

where $I_{\max}$ is the maximum value of the image gray level range and

$$\mathrm{Error} = \|u^{rec} - \bar{u}\|_2$$

for an $n \times n$ reconstruction $u^{rec}$ of $\bar{u}$.

The PSNR values that we give in the different experiments refer to the first, minimum energy reconstruction.
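In code, with the conventions of (26) (square $n \times n$ images, grey levels in $[0, I_{\max}]$), the metric is a few lines:

```python
import numpy as np

def psnr(u_rec, u_true, imax=1.0):
    """PSNR in dB following (26), for an n x n image."""
    n = u_true.shape[0]
    rmse = np.linalg.norm(u_rec - u_true) / n
    return 20.0 * np.log10(imax / rmse)

u = np.random.default_rng(5).random((256, 256))
print(psnr(u + 0.01, u))   # exactly 40 dB for a uniform 0.01 error
```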

As already mentioned, we have considered only the sparse gradient case, and in all our experiments we have used both the anisotropic and the isotropic discrete approximations of the total variation, evaluated using the 8 neighboring pixels depicted in Fig. 1. Since we have experimentally seen that it is not important to find a very accurate solution of the variational problem (21), for all the experiments we have fixed to four the number of iterations of the isotropic estimate yielded by the digital total variation filter. This choice represents a good compromise between accuracy and efficiency. A higher iteration number increases the computing time without a real improvement in the reconstruction. The anisotropic estimate is obtained using just one iteration of the recursive new median filter. Since both the considered algorithms deal only with unconstrained minimization, we have added at the end of both algorithms a projection step on the constraint set $\Omega$ to approximate the solution of the constrained case. We remark that, while the two methods substantially perform very well in all the experiments, we have noted some differences in their behavior that justify the use of both estimates.

The images used in our experimentation, shown in Fig. 2, are the Shepp-Logan phantom, the typical test image of the computed tomography literature and an example of a sparse gradient image, and two compressible images widely used for comparisons between different methods, the Head and the Boat images. The grey level range of the images has been rescaled to a common interval. All the experiments are performed using subsampled frequency acquisitions, but, in order to demonstrate the capabilities of our nonlinear filtering method, we have tested it using four different acquisition strategies corresponding to the masks given in Fig. 3. More precisely, mask I is evaluated using the free software SparseMRI [26], mask II is a classic 60-ray mask, mask III is obtained as a tensor product of two 1-D Gaussian masks [28], and mask IV is generated as 2-D normally distributed random points. We have considered both exact and perturbed acquisitions. In the latter case, we have added white Gaussian noise with standard deviation $\sigma$ to the measurements. The corresponding noise level $\epsilon$ has been estimated using (9). Since this value usually overestimates the true error norm, in Tables II–IV we have shown both quantities, in order to better appreciate the role of the parameter $tol$.

The results shown in Sections IV-A–IV-C are obtained by running a Matlab implementation of the bound constrained version of NFCS-2D on a PC with an Intel Pentium 4 HT 3.4 GHz processor and 3 GB of RAM under Windows XP. For comparison purposes, we have also run on the same PC the free Matlab code mrics.m by Tom Goldstein, available at http://www.math.ucla.edu/tagoldst/code.html, which implements the split Bregman method.

A. Exactly Sparse Gradient Images

To demonstrate the performance of our adaptive nonlinear filtering approach, we have tested the bound constrained version of NFCS-2D on the Shepp-Logan phantom, by considering undersampled frequency acquisitions, both exact and perturbed with white Gaussian noise. The undersampling in the Fourier domain was obtained using both mask I (75% undersampling) and mask II (77% undersampling). In both cases, we have used the parameter values discussed in Section III-B for the noise-free and the noisy data. The results, displayed in Table I and Table II, confirm the high level of accuracy of the reconstructions obtained and highlight the better performance of the anisotropic filtering approach, with respect to the isotropic one, both from the accuracy and the speed point of view.


TABLE I. SHEPP-LOGAN PHANTOM WITH MASK I

TABLE II. SHEPP-LOGAN PHANTOM WITH MASK II

Fig. 4. Boat image reconstruction. (a) Isotropic reconstruction from exact data. (b) Isotropic reconstruction from noisy data. (c) Isotropic reconstruction from noisy data with a higher noise level.

B. Compressible Images

Truly piecewise constant images are rarely encountered in practical applications, but several real images can be well approximated by piecewise smooth images. This kind of 2-D signal is known to be compressible, namely, well described by its $K$ largest coefficients in a suitable transform basis. In order to evaluate the performance of our recovery algorithm for compressible images, we have experimented the tensor product Gaussian mask III on the 256 × 256 Boat image (shown in Fig. 2), which is known to be compressible in the Haar wavelet basis [17], namely, to be well approximated by a sparse gradient image. The number of frequency acquisitions considered is 22.7% of the whole Fourier data set (77% undersampling). As in the previous example, we have considered both exact data and perturbed data. The parameter values were set as described in Section III-B for noise-free data and noisy data, respectively.

The results of the experimentation are given in Table III. For comparison, we also display the $K$-term nonlinear approximation error relative to the Haar basis. The reconstructions obtained using the isotropic TV estimate are shown in Fig. 4.

In the last series of experiments, we applied our nonlinear filtering strategy to recover the 256 × 256 Head image from


TABLE III. BOAT IMAGE WITH MASK III

Fig. 5. Head image reconstruction from exact data. (a) Minimum energy reconstruction. (b) Isotropic reconstruction after 123 iterations.

TABLE IV. HEAD IMAGE WITH MASK IV

highly undersampled frequency acquisitions. We considered frequency measurements corresponding to 90% undersampling, obtained using the 2-D Gaussian mask IV of Fig. 3. Again, we considered exact and perturbed data, and we evaluated the $K$-term Haar nonlinear approximation error. The parameter values used in this experimentation were those described in Section III-B for all cases. Table IV and Fig. 5 display the results of these very significant experiments.

The results obtained for compressible images lead us to the following considerations.

• In all the experiments, the isotropic gradient estimate yields the most accurate result. This and other performed experiments, not reported here, allow us to deduce that the isotropic algorithm is better suited to recovering piecewise smooth than piecewise constant signals.

• The recovery time is very low for all algorithms, but in all our experiments the recursive new median filter is the fastest. In particular, 1.13 s for a 256 × 256 image reconstruction from only 10% of the Fourier data is really impressive.

• In both the noisy cases, the reconstruction error is far less than the sum of the $K$-term Haar wavelet approximation error and the error norm. This means that our adaptive nonlinear filtering strategy not only keeps the perturbations from "blowing up," as one


TABLE V. COMPARISONS WITH THE SPLIT BREGMAN ALGORITHM

would expect in an ill-posed inverse problem, but actually reduces them. This stability result of the proposed approach, together with the speed of the recovery algorithm, makes this novel method widely applicable to real 2-D and 3-D problems.

C. Comparisons With the Split Bregman Algorithm

In order to assess the competitive performance of the proposed bound constrained algorithm NFCS-2D, we have experimented the split Bregman method [22], as implemented in the free software mrics.m by Tom Goldstein, on the same set of test images. We have inserted the stopping criterion suggested in [22] for the outer iteration loop in the case of noisy data. Since the results of this algorithm also strongly depend upon the free parameter values, we have presented two sets of results. The first was obtained using the values suggested in the comments of mrics.m, the second using the best values chosen experimentally from the many possible combinations of the free parameters and the inner iteration number. The outer iteration number is automatically chosen by the previously mentioned stopping criterion for the noisy case, while, for the noise-free case, we have stopped the iterations when the residual norm dropped below a fixed threshold, chosen differently for sparse gradient and for compressible images. The results of this experimentation are displayed in Table V, and the iteration numbers are given as (in-out), where in indicates the inner iterations and out the outer iterations. For comparison, we have also shown the iteration number and computing time used by NFCS-2D stopped when a reconstruction quality similar to the best quality obtained using the split Bregman algorithm is reached. The parameter values of NFCS-2D are the same as in the previous tables.

A careful analysis of the results presented in Table V and a comparison with those of the previous tables show that, when both algorithms are used to reach their best reconstruction results, their computing times are comparable, but NFCS-2D yields a better reconstruction for noisy data or compressible images, while, when the algorithms are compared in terms of similar reconstructions, NFCS-2D is faster.

V. CONCLUSION

For the solution of the compressed sensing reconstruction problem, we have proposed an efficient iterative algorithm, based upon a penalized splitting approach and an adaptive nonlinear filtering strategy, and its convergence property has been established. The capabilities, in terms of accuracy, stability, and speed of NFCS-2D, are illustrated by the results of several numerical experiments and comparisons with a state-of-the-art algorithm. We remark that, even if we have analyzed the sparse gradient case with undersampled frequency acquisitions, our approach is completely general, and works for different kinds of measurements and different choices of the function $J(u)$. In fact, since this function plays the role of the penalty function in the variational approach to the image denoising problem, it is possible to exploit the different proposals of the denoising literature in order to select new filtering strategies (e.g., [1] and the references therein), perhaps more suited to the different practical recovery problems. Examples of the use of other


filtering strategies, even if considered in a different context, are given in [18] and [34]. A lot of work remains to be done. In particular, a much more detailed theoretical study is necessary to find an objective automated way of selecting good values for the free parameters of the algorithm. At present, as far as we know, no similar analysis has been performed, even for the best state-of-the-art algorithms.

REFERENCES

[1] A. Antoniadis and J. Fan, "Regularization of wavelet approximations," J. Amer. Statist. Assoc., vol. 96, no. 455, pp. 939–967, 2001.

[2] R. Berinde and P. Indyk, Sparse Recovery Using Sparse Random Matrices. Cambridge, MA: MIT Press, 2008.

[3] J. Bioucas-Dias and M. Figueiredo, "A new TwIST: Two-step iterative shrinkage-thresholding algorithms for image restoration," IEEE Trans. Image Process., vol. 16, no. 12, pp. 2992–3004, Dec. 2007.

[4] K. Bredies and D. A. Lorenz, "Iterated hard shrinkage for minimization problems with sparsity constraints," J. Sci. Comput., vol. 30, no. 2, pp. 657–683, 2008.

[5] J. Cai, S. Osher, and Z. Shen, "Linearized Bregman iterations for compressed sensing," Math. Comput., to be published.

[6] E. J. Candès, "Compressive sampling," in Proc. Int. Congr. Math., Madrid, Spain, 2006, vol. 3, pp. 1433–1452.

[7] E. J. Candès, J. Romberg, and T. Tao, "Stable signal recovery from incomplete and inaccurate measurements," Commun. Pure Appl. Math., vol. 59, no. 8, pp. 1207–1223, 2006.

[8] E. J. Candès, J. Romberg, and T. Tao, "Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information," IEEE Trans. Inf. Theory, vol. 52, no. 2, pp. 489–509, Feb. 2006.

[9] A. Chambolle, "An algorithm for total variation minimization and applications," J. Math. Imag. Vis., vol. 20, pp. 89–97, 2004.

[10] T. F. Chan, S. Osher, and J. Shen, "The digital TV filter and nonlinear denoising," IEEE Trans. Image Process., vol. 10, no. 2, pp. 231–241, Feb. 2001.

[11] R. Chartrand and W. Yin, "Iteratively reweighted algorithms for compressive sensing," in Proc. 33rd Int. Conf. Acoust., Speech, Signal Process., 2008, pp. 3869–3872.

[12] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Model. Simul., vol. 4, no. 4, pp. 1168–1200, Nov. 2005.

[13] G. Cormode and S. Muthukrishnan, "Combinatorial algorithms for compressed sensing," Lecture Notes Comput. Sci., vol. 4056, pp. 280–294, 2006.

[14] J. Darbon and M. Sigelle, "A fast and exact algorithm for total variation minimization," in Proc. 2nd Iberian Conf. Pattern Recognit. Image Anal., 2005, vol. 3522, pp. 351–359.

[15] I. Daubechies, M. Defrise, and C. De Mol, "An iterative thresholding algorithm for linear inverse problems with a sparsity constraint," Commun. Pure Appl. Math., vol. 57, pp. 1413–1457, 2004.

[16] I. Daubechies, R. DeVore, M. Fornasier, and C. S. Gunturk, "Iteratively re-weighted least squares minimization for sparse recovery," Commun. Pure Appl. Math., vol. 63, no. 1, pp. 1–38, 2010.

[17] R. A. DeVore, B. Jawerth, and B. J. Lucier, "Image compression through wavelet transform coding," IEEE Trans. Inf. Theory, vol. 38, no. 2, pp. 719–746, Mar. 1992.

[18] K. Egiazarian, A. Foi, and V. Katkovnik, "Compressed sensing image reconstruction via recursive spatially adaptive filtering," in Proc. IEEE Int. Conf. Image Process., 2007, pp. 549–552.

[19] M. Figueiredo, R. Nowak, and S. Wright, "Gradient projection for sparse reconstruction: Application to compressed sensing and other inverse problems," IEEE J. Sel. Topics Signal Process., vol. 1, no. 4, pp. 586–598, Dec. 2007.

[20] N. C. Gallagher, Jr., and G. L. Wise, "A theoretical analysis of the properties of median filters," IEEE Trans. Acoust., Speech, Signal Process., vol. ASSP-29, no. 6, pp. 1136–1141, Dec. 1981.

[21] D. Goldfarb and W. Yin, "Parametric maximum flow algorithms for fast total variation minimization," Rice Univ., Houston, TX, CAAM Rep. TR07-09, 2007.

[22] T. Goldstein and S. Osher, "The split Bregman algorithm for L1 regularized problems," UCLA, Los Angeles, CA, UCLA CAM Rep. 08-29, 2008.

[23] E. T. Hale, W. Yin, and Y. Zhang, "Fixed-point continuation for $\ell_1$-minimization: Methodology and convergence," SIAM J. Optim., vol. 19, no. 3, pp. 1107–1130, 2008.

[24] S. Kim, K. Koh, M. Lustig, and S. Boyd, "An efficient method for compressed sensing," in Proc. IEEE Int. Conf. Image Process., 2007, vol. 3, pp. 117–120.

[25] Y. Li and S. Osher, "A new median formula with applications to PDE based denoising," Commun. Math. Sci., vol. 7, no. 3, pp. 741–753, 2009.

[26] M. Lustig, D. Donoho, and J. Pauly, "Sparse MRI: The application of compressed sensing for rapid MR imaging," Magn. Reson. Med., vol. 58, no. 6, pp. 1182–1195, 2007.

[27] J. Ma, "Compressed sensing by inverse scale space and curvelet thresholding," Appl. Math. Comput., vol. 206, pp. 980–988, 2008.

[28] L. Montefusco, D. Lazzaro, and S. Papi, "Nonlinear filtering for sparse signal recovery from incomplete measurements," IEEE Trans. Signal Process., vol. 57, no. 7, pp. 2494–2502, Jul. 2009.

[29] D. Needell and R. Vershynin, "Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit," [Online]. Available: http://arxiv.org/abs/0707.4203

[30] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301–321, 2009.

[31] S. Osher, Y. Mao, B. Dong, and W. Yin, "Fast linearized Bregman iteration for compressive sensing and sparse denoising," UCLA, Los Angeles, CA, UCLA CAM Rep. 08-37, 2008.

[32] L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D, vol. 60, pp. 259–268, 1992.

[33] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, no. 12, pp. 4655–4666, Dec. 2007.

[34] J. Trzasko, A. Manduca, and E. Borisch, "Robust kernel methods for sparse MR image reconstruction," Lecture Notes Comput. Sci., vol. 4791, pp. 809–816, 2007.

[35] S. J. Wright, R. D. Nowak, and M. A. T. Figueiredo, "Sparse reconstruction by separable approximation," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process., 2008, pp. 3373–3376.

[36] W. Yin, S. Osher, D. Goldfarb, and J. Darbon, "Bregman iterative algorithms for $\ell_1$-minimization with applications to compressed sensing," SIAM J. Imag. Sci., vol. 1, pp. 143–168, 2008.

Laura B. Montefusco received the Laurea degree in physics from the University of Bologna, Bologna, Italy, in 1969.

From 1970 to 1986, she was a Researcher with the Department of Mathematics, University of Bologna. Since 1986, she has been a Full Professor. She spent some years at the University of Messina, Messina, Italy, and in 1991 she moved to the Department of Mathematics, University of Bologna, where she teaches and conducts research in the area of numerical analysis, approximation theory, digital image processing, and compressed sensing. During these years, she has been responsible for several Italian CNR and MURST research projects. Her research interests include approximation theory, numerical linear algebra, and digital signal and image processing.

Ms. Montefusco is a member of the Society for Industrial and Applied Mathematics (SIAM).

Damiana Lazzaro received the Laurea degree in mathematics from the University of Messina, Messina, Italy, in 1988, and the Ph.D. degree in applied mathematics in 1995, with a dissertation on wavelet theory.

She joined the Numerical Analysis Group, Department of Mathematics, University of Bologna, where she is currently a Researcher. Her major research interests include wavelet and multiwavelet theory and applications, signal and image processing, parallel computing, and compressed sensing.

Serena Papi received the Laurea degree in computer science from the University of Bologna, Cesena, Italy, in 1999, and the Ph.D. degree in applied mathematics from the University of Padova, Padova, Italy, in 2003, with a dissertation on wavelet and multiwavelet denoising.

She joined the Numerical Analysis Group, Department of Mathematics, University of Bologna, where she is currently in a postdoctoral position. Her major research interests include wavelet and multiwavelet theory and applications, signal and image processing, and compressed sensing.