IEEE TRANSACTIONS ON IMAGE PROCESSING, VOL. 22, NO. 6, JUNE 2013 2233
Coupled Variational Image Decomposition and Restoration Model for Blurred Cartoon-Plus-Texture
Images With Missing Pixels
Michael K. Ng, Xiaoming Yuan, and Wenxing Zhang
Abstract—In this paper, we develop a decomposition model to restore blurred images with missing pixels. Our assumption is that the underlying image is the superposition of cartoon and texture components. We use the total variation norm and its dual norm to regularize the cartoon and texture, respectively. We recommend an efficient numerical algorithm based on splitting versions of the augmented Lagrangian method to solve the problem. Theoretically, the existence of a minimizer of the energy function and the convergence of the algorithm are guaranteed. In contrast to recently developed methods for deblurring images, the proposed algorithm not only gives the restored image, but also gives a decomposition into cartoon and texture parts. These two parts can be further used in segmentation and inpainting problems. Numerical comparisons between this algorithm and some state-of-the-art methods are also reported.
Index Terms—Cartoon and texture, deblurring, image decomposition, variable splitting method.
I. INTRODUCTION
IMAGE decomposition is an important problem in image processing [2]–[4], [7], [16], [26], [34]. It plays a significant role in object recognition, biomedical engineering, astronomical imaging, etc. The target image is to be decomposed into two meaningful components. One is the geometrical part, or sketchy approximation, of an image, which is called the cartoon component; the other is the oscillating part, or small-scale special patterns, of an image, which is called the texture component. Mathematically, the cartoon component can be described by a piecewise smooth (or piecewise constant) function, whilst the texture component is commonly oscillating. Because of their different properties, it is more
Manuscript received January 17, 2012; revised October 29, 2012; accepted January 13, 2013. Date of publication February 11, 2013; date of current version March 29, 2013. The work of M. K. Ng was supported by RGC Grants and HKBU FRG Grants. The work of X. Yuan was supported by the Hong Kong General Research Fund under Grant 203311. The associate editor coordinating the review of this manuscript and approving it for publication was Dr. Farhan A. Baqai.
M. K. Ng is with the Centre for Mathematical Imaging and Vision and the Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong (e-mail: [email protected]).
X. Yuan is with the Department of Mathematics, Hong Kong Baptist University, Kowloon Tong, Hong Kong (e-mail: [email protected]).
W. Zhang is with the School of Mathematical Sciences, University of Electronic Science and Technology of China, Chengdu 610051, China (e-mail: [email protected]).
Color versions of one or more of the figures in this paper are available online at http://ieeexplore.ieee.org.
Digital Object Identifier 10.1109/TIP.2013.2246520
efficient and effective to separate them for image processing and image analysis. The main task herein is to extract the cartoon and texture components from a given image with degradation, e.g., blur and/or missing pixels.

For a target image f ∈ Rⁿ, image decomposition seeks f = u + v, where u and v represent the cartoon and the texture, respectively. We herein treat a two-dimensional or higher-dimensional image by vectorizing it as a one-dimensional vector, e.g., in lexicographic order. The first model for image decomposition is the popular image denoising model proposed by Rudin et al. [33]:

  min_{u∈Rⁿ, v∈Rⁿ} ‖|∇u|‖₁ + λ‖v‖₂²  subject to  u + v = f   (1)

where ∇: Rⁿ → Rⁿ × Rⁿ is the first-order derivative operator and λ > 0 is a parameter controlling the decomposition of f into u and v. Hereafter, for any x = (x₁, x₂, …, xₙ)ᵀ ∈ Rⁿ,

  ‖x‖_p := (Σ_{i=1}^n |x_i|^p)^{1/p}

denotes the p-norm of x, and for any y = (y₁, y₂) ∈ Rⁿ × Rⁿ, |y| denotes the vector in Rⁿ whose entries are given by

  |y|_i := √((y₁)_i² + (y₂)_i²),  i = 1, 2, …, n.

It follows that ‖|y|‖_p = (Σ_{i=1}^n |y|_i^p)^{1/p}. The quantity ‖|∇x|‖₁ is the well-known total variation (TV) norm of x. A substantial advantage of the TV norm in image processing is that it can recover piecewise constant functions without smoothing sharp discontinuities (i.e., it can preserve the edges in images). By exploiting the model in (1), the cartoon part of a given image can be extracted. However, if the original image contains some texture patterns, such as a scene of meadow or the surface of stone or cloth, the model in (1) may remove such texture patterns (see [26] for instance).
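The discrete TV norm ‖|∇x|‖₁ defined above can be computed with forward differences. The following NumPy sketch is an illustration only (the paper's experiments use MATLAB) and assumes periodic boundary conditions:

```python
import numpy as np

def grad(x):
    """Forward-difference gradient with periodic boundary conditions.
    Returns (dx, dy), each the same shape as x."""
    dx = np.roll(x, -1, axis=1) - x  # horizontal differences
    dy = np.roll(x, -1, axis=0) - x  # vertical differences
    return dx, dy

def tv_norm(x):
    """Isotropic total variation || |grad x| ||_1 = sum_i sqrt(dx_i^2 + dy_i^2)."""
    dx, dy = grad(x)
    return np.sqrt(dx ** 2 + dy ** 2).sum()
```

On a piecewise constant image the value counts only the jumps, which is exactly the edge-preserving behavior described above.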
For any u ∈ Rⁿ, consider the semi-norm in a Sobolev space

  ‖u‖_{1,p} := ‖|∇u|‖_p,  p ≥ 1.

It follows that the TV norm is just the semi-norm ‖·‖_{1,1}. The dual norm of ‖·‖_{1,p}, denoted by ‖·‖_{−1,q} with 1/p + 1/q = 1 (q = ∞ corresponding to p = 1, and vice versa), is defined as

  ‖v‖_{−1,q} := inf{ ‖|g|‖_q | v = div g, g ∈ Rⁿ × Rⁿ }   (2)

where div := −∇ᵀ is the divergence operator (see [1] for more details). It is interesting to note that for an image v
1057-7149/$31.00 © 2013 IEEE
with oscillating patterns, ‖v‖_{−1,∞} is smaller than ‖v‖₂ (see [2], [25], [37] for numerical verification). Indeed, Meyer [26] proved that the ‖·‖_{−1,∞}-norm can seize texture components (or oscillating patterns) in an image. This property is useful for designing an energy minimization model to extract texture patterns from an image. As a result, Meyer proposed the model

  min_{u∈Rⁿ, v∈Rⁿ} ‖|∇u|‖₁ + λ‖v‖_{−1,∞}  subject to  u + v = f   (3)

for image decomposition.
Since the corresponding Euler-Lagrange equation of (3) cannot be expressed directly, it is hard to compute a numerical solution. Vese and Osher [37] tackled the model in (3) by solving the following approximation:

  min_{u∈Rⁿ, g∈Rⁿ×Rⁿ} ‖|∇u|‖₁ + λ‖u + div g − f‖₂² + μ‖|g|‖_p   (4)

with p ≥ 1, where λ > 0 and μ > 0 are parameters balancing the three terms in the objective function. In (4), u represents the cartoon part and v = div g is the texture part. Note that the second term in the objective function of (4) forces f ≈ u + div g, and the last term penalizes the texture part v (recall the definition of ‖·‖_{−1,∞} in (2) and the fact that ‖|g|‖_∞ = lim_{p→∞} ‖|g|‖_p). Obviously, when p → ∞, the model in (4) approximates the model in (3). In [37], the gradient method is employed to solve the Euler-Lagrange equation of (4), and it has also been demonstrated that the solution of (4) can be used for texture discrimination and segmentation.
In [29], an alternative model for image decomposition was proposed:

  min_{u∈Rⁿ, v∈Rⁿ} ‖|∇u|‖₁ + λ‖v‖_{−1,2}  subject to  u + v = f   (5)

where ‖·‖_{−1,2} is used instead of ‖·‖_{−1,∞} to extract the texture component of an image. We note that

  ‖v‖_{−1,2} = √⟨v, −Δ⁻¹v⟩  ∀v ∈ Rⁿ   (6)

where ⟨·,·⟩ denotes the inner product in the Euclidean space and Δ is the Laplacian operator, defined as Δv = div(∇v) for all v ∈ Rⁿ. Thus the model in (5) is easier to tackle numerically than the models in (3) and (4).

There are other image decomposition models in the literature. Aujol et al. [4] proposed a constrained model for image decomposition:

  min_{u∈Rⁿ, v∈Rⁿ} ‖|∇u|‖₁ + (1/(2λ))‖f − u − v‖₂²  subject to  ‖v‖_{−1,∞} ≤ μ   (7)

where λ > 0 and μ > 0 are parameters controlling the decomposition of f into cartoon and texture parts. They exploited the alternating minimization (AM) scheme for the model in (7). The corresponding u- and v-subproblems can be solved approximately by Chambolle's projection method [9]. Cai et al. [8] proposed a tight frame based method for handling inpainting and image decomposition simultaneously. Their model is given by

  min_{α₁∈R^m, α₂∈R^m} Σ_{i=1}² ‖diag(u_i) α_i‖₁ + Σ_{i=1}² (κ/2)‖(I − A_iA_iᵀ)α_i‖₂² + (1/2)‖P(f − Σ_{i=1}² A_iᵀα_i)‖₂²   (8)

where A_i ∈ R^{m×n} (i = 1, 2) are tight frames in whose transformed domains the cartoon and texture components, respectively, have sparse approximations; u_i ∈ R^m (i = 1, 2) are thresholding parameters; κ ∈ [0, +∞] is a trade-off between the regularization and fidelity terms; and P is the projection onto the observed pixels. Therein, the authors suggested applying the proximal forward-backward splitting method to solve (8).
On the other hand, Chan et al. [12] showed the necessity of simultaneous image inpainting and blind deconvolution for total variation models, rather than performing these two tasks sequentially. This is the first work to exploit the additional information provided when coupling the inpainting and deblurring problems. In [14], Daubechies and Teschke studied a variational image restoration model by means of wavelets for performing simultaneous decomposition, deblurring, and denoising. In [24], Kim and Vese used the image decomposition model for image deblurring. Recently, a TV-curvelets image decomposition model was considered and developed in [25].

In this paper, we study an image decomposition model for images with corruptions, e.g., blur and/or missing pixels:

  min_{u∈Rⁿ, g∈Rⁿ×Rⁿ} α‖|∇u|‖₁ + (1/2)‖K(u + div g) − f‖₂² + μ‖|g|‖_p   (9)

with p ≥ 1. Numerically, we only consider the cases p = 1, 2, and ∞, so that the resulting subproblems possess closed-form solutions or can be solved easily up to high accuracy. Here, K: Rⁿ → Rⁿ is a linear operator, and α > 0 and μ > 0 are trade-offs balancing the decomposition of the image f into the cartoon u and the texture div g, respectively. Different choices of K correspond to observed images with different corruptions: (i) blurred images, i.e., K = B, where B is the blurring matrix associated with a spatially invariant point spread function; (ii) images with missing pixels (herein we consider the case of missing pixels with zero values), i.e., K = S, where S is a binary matrix (the so-called mask) representing the missing pixels; (iii) blurred images with missing pixels, i.e., K = SB, where S is a mask and B is a blurring matrix. Therefore, model (9) is more general than the image decomposition model (4). For solving (9), we recommend the alternating direction method of multipliers with Gaussian back substitution recently developed in [23], which originates from the idea of splitting the classical augmented Lagrangian method. The recommended method can provide fast and accurate solutions with guaranteed convergence. We will report some numerical results to show the effectiveness of the model (9) and the efficiency of the algorithm in [23].
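The three choices of K can be sketched as follows. This illustrative NumPy code (not from the paper) implements B as circular convolution, i.e., under the periodic boundary condition, and S as a binary mask; the PSF, image size, and missing-pixel ratio are arbitrary placeholders:

```python
import numpy as np

def make_blur(psf, shape):
    """Return B(x): circular convolution with psf (periodic boundary),
    i.e. a blurring matrix diagonalized by the 2-D FFT."""
    psf_pad = np.zeros(shape)
    h, w = psf.shape
    psf_pad[:h, :w] = psf
    # center the PSF so blurring does not shift the image
    psf_pad = np.roll(psf_pad, (-(h // 2), -(w // 2)), axis=(0, 1))
    psf_hat = np.fft.fft2(psf_pad)
    return lambda x: np.fft.ifft2(psf_hat * np.fft.fft2(x)).real

def make_mask(shape, missing_ratio, seed=0):
    """Return S(x): zero out a random fraction of pixels (missing pixels)."""
    rng = np.random.default_rng(seed)
    S = (rng.random(shape) >= missing_ratio).astype(float)
    return lambda x: S * x

# K = S o B: blur first, then drop pixels
shape = (8, 8)
B = make_blur(np.full((3, 3), 1.0 / 9.0), shape)  # 3x3 mean blur as a stand-in PSF
S = make_mask(shape, missing_ratio=0.25)
K = lambda x: S(B(x))
```

A degraded observation would then be f = K(u + div g), which is the forward model that (9) inverts.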
The rest of the paper is organized as follows. In Section II, some preliminaries involving convex programming and the
variable splitting method are stated. We first reformulate the model (9) as an optimization problem with separable structure, and then elaborate the procedure for solving the resulting subproblems by the algorithm in [23]. In Section III, some numerical examples of image decomposition are tested to demonstrate the effectiveness of the model (9) and the efficiency of the algorithm in [23]. Finally, we conclude the paper in Section IV.
II. VARIABLE SPLITTING METHODS FOR STRUCTURED CONVEX PROGRAMMING
A. Preliminaries
In this subsection, we summarize some basic concepts and
properties that will be used in subsequent analysis.
For any x ∈ Rⁿ and a symmetric positive definite matrix H ∈ R^{n×n}, ‖x‖_H := √⟨x, Hx⟩ denotes the H-norm of x. I represents the identity operator. Let θ: Rⁿ → (−∞, +∞] be a function. The domain and epigraph of θ are defined as dom θ := {x ∈ Rⁿ | θ(x) < +∞} and epi θ := {(x, t) ∈ Rⁿ × R | θ(x) ≤ t}, respectively. The subdifferential of a convex function θ at x is ∂θ(x) := {ξ ∈ Rⁿ | θ(y) ≥ θ(x) + ⟨ξ, y − x⟩ ∀y ∈ Rⁿ}.

B. Splitting Methods for Problems With Three Separable Blocks

We consider the separable convex optimization problem

  min θ₁(x₁) + θ₂(x₂) + θ₃(x₃)
  subject to A₁x₁ + A₂x₂ + A₃x₃ = b, x_i ∈ X_i, i = 1, 2, 3   (10)

where θ_i: R^{m_i} → (−∞, +∞] are closed proper convex functions, A_i ∈ R^{l×m_i}, b ∈ R^l, and X_i ⊆ R^{m_i} are closed convex sets. The augmented Lagrangian function of (10) is

  L(x₁, x₂, x₃, λ) := Σ_{i=1}³ θ_i(x_i) − ⟨λ, Σ_{i=1}³ A_ix_i − b⟩ + (β/2)‖Σ_{i=1}³ A_ix_i − b‖₂²   (11)

where λ ∈ R^l is the Lagrange multiplier and β > 0 is a penalty parameter for the linear constraint. Suppose that (x₁*, x₂*, x₃*) is an optimal solution of the optimization problem in (10) and λ* is the optimal solution of its dual problem. Then, for any (x₁, x₂, x₃) ∈ X₁ × X₂ × X₃ and λ ∈ R^l, we have (see the monograph [31] for details)

  L(x₁*, x₂*, x₃*, λ) ≤ L(x₁*, x₂*, x₃*, λ*) ≤ L(x₁, x₂, x₃, λ*).
Algorithm 1 The Extension of ADM for the Optimization Problem in (10)

Require: Choose arbitrary β > 0, tolerance ε > 0, and the initial point v⁰ = (x₂⁰, x₃⁰, λ⁰) ∈ R^{m₂} × R^{m₃} × R^l.
1: repeat
2:   x₁^{k+1} = argmin{ L(x₁, x₂^k, x₃^k, λ^k) | x₁ ∈ X₁ }
3:   x₂^{k+1} = argmin{ L(x₁^{k+1}, x₂, x₃^k, λ^k) | x₂ ∈ X₂ }
4:   x₃^{k+1} = argmin{ L(x₁^{k+1}, x₂^{k+1}, x₃, λ^k) | x₃ ∈ X₃ }
5:   λ^{k+1} = λ^k − β(A₁x₁^{k+1} + A₂x₂^{k+1} + A₃x₃^{k+1} − b)
6: until ‖v^k − v^{k+1}‖_H < ε
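To make Algorithm 1 concrete, the following sketch instantiates it on a toy problem with θ_i(x_i) = ½‖x_i − c_i‖² and A_i = I, an assumption chosen only so that each subproblem has a closed form. (Convergence of the three-block scheme in general is open, but the iterates do settle on this strongly convex instance.)

```python
import numpy as np

# Toy instance of problem (10): theta_i(x_i) = 0.5*||x_i - c_i||^2,
# A_i = I, constraint x_1 + x_2 + x_3 = b.  Each L-subproblem is quadratic:
# argmin_x 0.5*||x - c||^2 - <lam, x> + (beta/2)*||x + rest - b||^2.
def sub_solve(c, lam, rest, b, beta):
    return (c + lam + beta * (b - rest)) / (1.0 + beta)

def adm(c1, c2, c3, b, beta=1.0, iters=300):
    x1 = np.zeros_like(b); x2 = np.zeros_like(b); x3 = np.zeros_like(b)
    lam = np.zeros_like(b)
    for _ in range(iters):
        x1 = sub_solve(c1, lam, x2 + x3, b, beta)   # step 2
        x2 = sub_solve(c2, lam, x1 + x3, b, beta)   # step 3
        x3 = sub_solve(c3, lam, x1 + x2, b, beta)   # step 4
        lam = lam - beta * (x1 + x2 + x3 - b)       # step 5
    return x1, x2, x3
```

The exact solution here is x_i = c_i + λ*, with λ* = (b − c₁ − c₂ − c₃)/3, which the iterates approach linearly.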
Hence, the first-order optimality condition of the optimization problem in (10) can be characterized by the following variational form: finding (x₁*, x₂*, x₃*, λ*) ∈ X₁ × X₂ × X₃ × R^l and ξ_i* ∈ ∂θ_i(x_i*) (i = 1, 2, 3) such that

  ⟨x₁ − x₁*, ξ₁* − A₁ᵀλ*⟩ ≥ 0 ∀x₁ ∈ X₁,
  ⟨x₂ − x₂*, ξ₂* − A₂ᵀλ*⟩ ≥ 0 ∀x₂ ∈ X₂,
  ⟨x₃ − x₃*, ξ₃* − A₃ᵀλ*⟩ ≥ 0 ∀x₃ ∈ X₃,
  ⟨λ − λ*, Σ_{i=1}³ A_ix_i* − b⟩ ≥ 0 ∀λ ∈ R^l.   (12)

Let W* denote the set of (x₁*, x₂*, x₃*, λ*) satisfying (12), and moreover

  V* := { (x₂*, x₃*, λ*) | (x₁*, x₂*, x₃*, λ*) ∈ W* }.

Furthermore, for notational convenience, we denote

  v := (x₂, x₃, λ)  and  H := diag(βA₂ᵀA₂, βA₃ᵀA₃, (1/β)I).   (13)

Evidently, H is positive definite if A_i (i = 2, 3) are of full column rank.
Extending straightforwardly the algorithmic framework of the classical alternating direction method of multipliers (ADM) in [17] and [18] to the problem (10), we obtain the iterative scheme of Algorithm 1.

It has been shown numerically in [30] and [35] that Algorithm 1 performs very well empirically. Unfortunately, to the best of our knowledge, its convergence is still open. This lack of a convergence guarantee has inspired some variants of ADM-based methods, e.g., [21]–[23], [35]. These variants are based on different algorithmic frameworks, and each of them is particularly efficient for certain kinds of applications. In this paper, we focus on applying the most recent one, proposed in [23], i.e., the ADM with Gaussian back substitution, to solve the model in (9).
To elaborate on the application of the algorithm in [23] to the problem in (10), we first define

  M := [ βA₂ᵀA₂   0        0
         βA₃ᵀA₂   βA₃ᵀA₃   0
         0        0        (1/β)I ].   (14)

The application of the algorithm in [23] to the problem in (10) is summarized as Algorithm 2.
Remark 1: Compared to Algorithm 1, Algorithm 2 requires an additional step of Gaussian back substitution, i.e., Steps
Algorithm 2 Alternating Direction Method With Gaussian Back Substitution for (10)

Require: Choose arbitrary β > 0 and γ ∈ [0.5, 1), tolerance ε > 0, and the initial point v⁰ = (x₂⁰, x₃⁰, λ⁰) ∈ R^{m₂} × R^{m₃} × R^l.
1: repeat
2:   x̃₁^k = argmin{ L(x₁, x₂^k, x₃^k, λ^k) | x₁ ∈ X₁ }
3:   x̃₂^k = argmin{ L(x̃₁^k, x₂, x₃^k, λ^k) | x₂ ∈ X₂ }
4:   x̃₃^k = argmin{ L(x̃₁^k, x̃₂^k, x₃, λ^k) | x₃ ∈ X₃ }
5:   λ̃^k = λ^k − β(A₁x̃₁^k + A₂x̃₂^k + A₃x̃₃^k − b)
6:   H⁻¹Mᵀ(v^{k+1} − v^k) = γ(ṽ^k − v^k)
7:   x₁^{k+1} = x̃₁^k
8: until ‖v^k − ṽ^k‖_H < ε
6-7 in Algorithm 2. As shown in [23], the matrix H⁻¹Mᵀ is an upper-triangular block matrix. Thus, Step 6 of Algorithm 2 can be implemented easily. In fact, it can be specified as

  x₂^{k+1} = x₂^k + γ(x̃₂^k − x₂^k) − γ(A₂ᵀA₂)⁻¹A₂ᵀA₃(x̃₃^k − x₃^k),
  x₃^{k+1} = x₃^k + γ(x̃₃^k − x₃^k),
  λ^{k+1} = λ^k + γ(λ̃^k − λ^k).   (15)

The convergence of Algorithm 2 has been analyzed in [23], and we refer to Appendix I for details.
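The correction (15) can be sketched as follows, with small dense matrices standing in for A₂ and A₃ (hypothetical dimensions, for illustration only):

```python
import numpy as np

def gbs_update(v, v_tilde, A2, A3, gamma):
    """Gaussian back substitution step (15): correct the ADM predictor
    v~ = (x2~, x3~, lam~) into the new iterate v^{k+1}."""
    (x2, x3, lam), (x2t, x3t, lamt) = v, v_tilde
    x3_new = x3 + gamma * (x3t - x3)
    lam_new = lam + gamma * (lamt - lam)
    # x2 gets the extra correction involving (A2^T A2)^{-1} A2^T A3
    corr = np.linalg.solve(A2.T @ A2, A2.T @ A3 @ (x3t - x3))
    x2_new = x2 + gamma * (x2t - x2) - gamma * corr
    return x2_new, x3_new, lam_new
```

Note that if the predictor equals the current iterate (ṽ = v), the step is a no-op, consistent with the stopping rule ‖v^k − ṽ^k‖_H < ε.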
C. Implementation of Algorithm 2 for (9)

Now, we elucidate the implementation details of Algorithm 2 for the model in (9).

First, we show that the model in (9) can be reformulated as a special case of the problem in (10); thus, both Algorithm 1 and Algorithm 2 are applicable. By introducing the auxiliary variables x ∈ Rⁿ × Rⁿ, y ∈ Rⁿ, and z ∈ Rⁿ × Rⁿ, the model in (9) can be rewritten as

  min α‖|x|‖₁ + (1/2)‖Ky − f‖₂² + μ‖|z|‖_p
  subject to x = ∇u, y = u + div g, z = g   (16)

with p ≥ 1. Thus, the problem in (16) is a special case of the problem in (10) with the following specifications:

1) x₁ := g, x₂ := u, x₃ := (x, y, z), X₁ := Rⁿ × Rⁿ, X₂ := Rⁿ, and X₃ := (Rⁿ × Rⁿ) × Rⁿ × (Rⁿ × Rⁿ);
2) θ₁(x₁) := 0, θ₂(x₂) := 0, and θ₃(x₃) := α‖|x|‖₁ + (1/2)‖Ky − f‖₂² + μ‖|z|‖_p;
3) the coefficient matrices, with block rows corresponding to the three constraints in (16),

  A₁ := [ 0; div; −I ],  A₂ := [ ∇; I; 0 ],  A₃ := [ −I 0 0; 0 −I 0; 0 0 I ],  and b := 0.
Therefore, the model in (9) falls into the form of the problem in (10) with the above specifications. Note that the strategy of regrouping (x, y, z, u, g) into three decoupled blocks is not unique. For example, we can also set x₁ := (u, z), x₂ := (x, g), and x₃ := y. Accordingly, θ₁(x₁) := μ‖|z|‖_p, θ₂(x₂) := α‖|x|‖₁, θ₃(x₃) := (1/2)‖Ky − f‖₂², and the coefficient matrices in the linear constraints are

  A₁ := [ ∇ 0; I 0; 0 I ],  A₂ := [ −I 0; 0 div; 0 −I ],  A₃ := [ 0; −I; 0 ],  and b := 0.

It is, however, easy to show that this different regrouping leads to subproblems which are essentially identical to those of the previous strategy.

According to (11), we specify the augmented Lagrangian function of (16) as

  L(x₁, x₂, x₃, λ) = α‖|x|‖₁ + (1/2)‖Ky − f‖₂² + μ‖|z|‖_p
    + (β₁/2)‖∇u − x − λ₁/β₁‖₂²
    + (β₂/2)‖u + div g − y − λ₂/β₂‖₂²
    + (β₃/2)‖z − g − λ₃/β₃‖₂²

where λ = (λ₁, λ₂, λ₃) ∈ (Rⁿ × Rⁿ) × Rⁿ × (Rⁿ × Rⁿ) is the Lagrange multiplier and β_i > 0 (i = 1, 2, 3) are penalty parameters. We herein adopt different penalties for the different linear constraints, as in [28]. Generally, this strategy renders better numerical performance for Algorithm 2. We list the x₁-, x₂-, and x₃-subproblems (correspondingly, the g-, u-, and (x, y, z)-subproblems) in Algorithm 2 as follows:
1) The g-subproblem corresponds to the following optimization problem:

  g̃^k = argmin_g (β₂/2)‖u^k + div g − y^k − λ₂^k/β₂‖₂² + (β₃/2)‖z^k − g − λ₃^k/β₃‖₂²

and it is equivalent to the linear system

  (β₂divᵀdiv + β₃I)g = divᵀ(β₂(y^k − u^k) + λ₂^k) + β₃z^k − λ₃^k

which can be easily handled by the fast Fourier transform (FFT) if the periodic boundary condition is applied to the divergence operator, or by the discrete cosine transform (DCT) if the reflective boundary condition is applied (see, e.g., [20, Chapter 7]).
2) The u-subproblem is equivalent to

  ũ^k = argmin_u (β₁/2)‖∇u − x^k − λ₁^k/β₁‖₂² + (β₂/2)‖u + div g̃^k − y^k − λ₂^k/β₂‖₂².
It amounts to solving the linear system

  (β₁∇ᵀ∇ + β₂I)u = ∇ᵀ(β₁x^k + λ₁^k) + λ₂^k + β₂(y^k − div g̃^k)   (17)

and the FFT (resp. DCT) can be utilized if the periodic (resp. reflective) boundary condition is applied to the first-order
derivative operator.

3) The (x, y, z)-subproblem corresponds to the following optimization problem, in which each variable can be solved for separately:

  (x̃^k, ỹ^k, z̃^k) = argmin_{x,y,z} α‖|x|‖₁ + (1/2)‖Ky − f‖₂² + μ‖|z|‖_p
    + (β₁/2)‖∇ũ^k − x − λ₁^k/β₁‖₂²
    + (β₂/2)‖ũ^k + div g̃^k − y − λ₂^k/β₂‖₂²
    + (β₃/2)‖z − g̃^k − λ₃^k/β₃‖₂².
a) The x-subproblem can be solved explicitly by the soft-thresholding operator:

  x̃^k = argmin_x α‖|x|‖₁ + (β₁/2)‖x − ∇ũ^k + λ₁^k/β₁‖₂² = S_{α/β₁}(∇ũ^k − λ₁^k/β₁)

where, for any c > 0, S_c(·) is defined as

  S_c(g) := g − min{c, |g|} · g/|g|,  ∀g ∈ Rⁿ × Rⁿ   (18)

and (g/|g|)_i should be taken as 0 if |g|_i = 0.
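The operator S_c in (18) can be sketched as follows; this NumPy illustration uses the equivalent closed form S_c(g) = max{|g| − c, 0} · g/|g|:

```python
import numpy as np

def soft_threshold(g1, g2, c):
    """Isotropic soft-thresholding S_c of (18): shrink the vector field
    (g1, g2) pointwise by c in the Euclidean magnitude |g|."""
    mag = np.sqrt(g1 ** 2 + g2 ** 2)
    scale = np.zeros_like(mag)
    nz = mag > 0
    scale[nz] = np.maximum(mag[nz] - c, 0.0) / mag[nz]  # (|g| - c)_+ / |g|
    return scale * g1, scale * g2
```

Entries with magnitude below c are set to zero, and the convention (g/|g|)_i = 0 at |g|_i = 0 is respected.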
b) The y-subproblem is equivalent to

  ỹ^k = argmin_y (1/2)‖Ky − f‖₂² + (β₂/2)‖ũ^k + div g̃^k − y − λ₂^k/β₂‖₂²

which is the linear system

  (KᵀK + β₂I)y = Kᵀf + β₂(ũ^k + div g̃^k) − λ₂^k.   (19)

The computational effort for solving (19) depends on the operator K. If K is an identity, diagonal, or downsampling matrix, we can solve (19) directly since its coefficient matrix is diagonal. If K is a blurring matrix, it can be solved efficiently by the FFT or DCT. More specifically, when the periodic boundary condition is applied to K, the FFT can be used to diagonalize the blurring matrix K, while if the reflective boundary condition is considered and K itself is doubly symmetric, the DCT can be utilized. We refer to [20, Chapter 4] for more details. When K = SB, we recommend the preconditioned conjugate gradient (PCG) method [19] or the Barzilai-Borwein (BB) method [5] to compute an approximate solution.
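When K = SB, the coefficient matrix KᵀK + β₂I is symmetric positive definite and only its action on a vector is needed, so a matrix-free conjugate gradient iteration applies. The following sketch uses plain CG rather than PCG or BB, and a small dense matrix as a stand-in blur; both are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def cg_solve(apply_A, b, iters=200, tol=1e-10):
    """Plain conjugate gradients for A y = b with A symmetric positive
    definite, given only the action y -> A(y)."""
    y = np.zeros_like(b)
    r = b - apply_A(y)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = apply_A(p)
        a = rs / (p @ Ap)
        y += a * p
        r -= a * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return y

# Normal-equation operator of (19) with K = S B:
# S a binary mask (diagonal), B a small dense stand-in for the blur.
rng = np.random.default_rng(0)
n = 30
B = rng.standard_normal((n, n)) * 0.1 + np.eye(n)   # hypothetical blur
s = (rng.random(n) > 0.3).astype(float)             # hypothetical mask
beta2 = 0.5
K = lambda y: s * (B @ y)
Kt = lambda y: B.T @ (s * y)
rhs = rng.standard_normal(n)
y = cg_solve(lambda w: Kt(K(w)) + beta2 * w, rhs)
```

Since β₂I keeps the operator well conditioned, a few dozen CG iterations typically reduce the residual far below the accuracy needed inside the outer ADMGB loop.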
c) The z-subproblem is equivalent to

  z̃^k = argmin_z μ‖|z|‖_p + (β₃/2)‖z − g̃^k − λ₃^k/β₃‖₂² = prox_{(μ/β₃)‖|·|‖_p}(g̃^k + λ₃^k/β₃)   (20)

where prox_{c‖|·|‖_p}(·) denotes the proximal operator (see, e.g., [13], [27] for more details) of the function c‖|·|‖_p for a constant c > 0. It is easy to verify that z̃^k can be computed easily via this formula when p = 1 or 2. When p = ∞, we recommend subroutines such as [15], [36] to seek an approximate solution of z̃^k. We refer to Appendix II for the details of solving the z-subproblem with different values of p.
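For p = 1 the proximal operator in (20) is the pointwise isotropic soft-thresholding of (18), while for p = 2 it shrinks the whole field toward zero, since ‖|z|‖₂ equals the Euclidean norm of the stacked vector (z₁, z₂). A NumPy sketch (illustrative only):

```python
import numpy as np

def prox_p1(w1, w2, c):
    """prox of c * || |z| ||_1: pointwise isotropic soft-thresholding."""
    mag = np.sqrt(w1 ** 2 + w2 ** 2)
    safe = np.where(mag > 0, mag, 1.0)
    scale = np.maximum(mag - c, 0.0) / safe
    return scale * w1, scale * w2

def prox_p2(w1, w2, c):
    """prox of c * || |z| ||_2: scale the entire field by (1 - c/||w||)_+,
    the block (group) shrinkage of the stacked vector."""
    nrm = np.sqrt((w1 ** 2).sum() + (w2 ** 2).sum())
    scale = max(1.0 - c / nrm, 0.0) if nrm > 0 else 0.0
    return scale * w1, scale * w2
```

Both are closed forms, which is why the cases p = 1, 2 need no inner iteration, unlike p = ∞.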
4) The Gaussian back substitution step, i.e., Step 6 of Algorithm 2, can be executed easily. More specifically, the x₂-update in (15) is

  (β₁∇ᵀ∇ + β₂I)(u^{k+1} − u^k − γ(ũ^k − u^k)) = γ(β₁∇ᵀ(x̃^k − x^k) + β₂(ỹ^k − y^k))

which can be solved in the same way as the linear system in (17).
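Both (17) and the back-substitution system above share the coefficient matrix β₁∇ᵀ∇ + β₂I. Under the periodic boundary condition this matrix is diagonalized by the 2-D FFT, as the following NumPy sketch illustrates (forward differences assumed; an illustration, not the paper's MATLAB code):

```python
import numpy as np

def solve_u_system(rhs, beta1, beta2):
    """Solve (beta1 * grad^T grad + beta2 * I) u = rhs under periodic
    boundary conditions, where grad is the forward-difference operator.
    The eigenvalues of D^T D for a 1-D periodic difference are
    |1 - exp(-2*pi*i*k/n)|^2 = 2 - 2*cos(2*pi*k/n)."""
    m, n = rhs.shape
    wx = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(n) / n)   # along columns
    wy = 2.0 - 2.0 * np.cos(2.0 * np.pi * np.arange(m) / m)   # along rows
    denom = beta1 * (wy[:, None] + wx[None, :]) + beta2       # > 0 since beta2 > 0
    return np.fft.ifft2(np.fft.fft2(rhs) / denom).real
```

One forward and one inverse FFT per solve, so each iteration of Algorithm 2 costs O(n log n) for this part.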
III. EXPERIMENTAL RESULTS

In this section, we apply Algorithms 1 and 2 to solve the model (9) with different choices of K. More specifically, we test four cases: K = I; K = S, with S being a binary mask in image inpainting; K = B, with B being a blurring matrix; and K = SB, the composition of a binary matrix and a blurring matrix. As in the literature [4], we assume that the cartoon and texture parts of an image are not correlated. We thus take the correlation between the cartoon u and the texture v,

  Corr(u, v) := cov(u, v) / √(var(u) var(v))

to measure the quality of a decomposition, where var(·) and cov(·,·) refer to the variance and covariance of the given variables, respectively.
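Corr(u, v) is computed directly from the pixel values; a NumPy sketch (the paper's experiments use MATLAB):

```python
import numpy as np

def corr(u, v):
    """Correlation between cartoon u and texture v (ideally near zero)."""
    uc = u - u.mean()
    vc = v - v.mean()
    return (uc * vc).mean() / np.sqrt((uc ** 2).mean() * (vc ** 2).mean())
```

A value near zero indicates that the two components carry little shared structure, the criterion used throughout this section.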
The images to be tested are displayed in Fig. 1. Note that image (c) in Fig. 1 is a synthetic image superposing images (a) and (b) with the ratio 7:3. In the upcoming experiments, all tested images are re-scaled into [0, 1]. All the codes were written in MATLAB (R2009b) and were run on a personal Lenovo laptop computer with an Intel(R) Core(TM) 2.30 GHz CPU and 8 GB of memory.
A. Example 1: K = I

In this setting, the model in (9) reduces to the model in (4), i.e., image decomposition on clean images. We focus on images (d) and (e) in Fig. 1 for this case.
Algorithms 1 and 2 are both applicable to the model in (9). We first test Algorithm 2 ("ADMGB" for short) for
Fig. 1. Testing images: (a) 512 × 512 TomJerry image, (b) 512 × 512 Wool image, (c) combined 512 × 512 TomJerry and Wool image, (d) 256 × 256 part of the Barbara image, (e) 250 × 248 Weave image, (f) 512 × 512 Barbara image, (g) 256 × 256 Wood image, and (h) 256 × 256 Brick image.
Fig. 2. Image decomposition on clean images by ADMGB with different values of p (p = 1, 2, ∞).
the model in (9) with different values of p. As analyzed in Subsection II-C, the resulting z-subproblems are different for different choices of p (see also Appendix II). When p = ∞, we use the algorithm in [36] to obtain an approximate solution of the resulting z-subproblem. We take α = 10⁻¹, μ = 10⁻³, and β₁ = β₂ = β₃ = 10 for the tested values of p in this experiment, and all initial points required by ADMGB are taken as zero vectors (we remark that since the model in (9) is convex and ADMGB converges globally, the numerical results with different initial guesses differ very slightly up to the
TABLE I
IMAGE DECOMPOSITION ON CLEAN IMAGES

          Barbara (gray)           Weave
          Iter   CPU    Corr       Iter   CPU    Corr
p = 1      29    2.24   0.0284      40    10.4   0.0292
p = 2      29    2.24   0.0280      40    11.2   0.0288
p = ∞      29    2.47   0.0280      40    12.2   0.0288
Fig. 3. Changes of Corr(u, v) with respect to iterations and computing time for ADMGB with p = 1, 2, ∞.
stopping criterion). Here, the stopping criterion is

  Tol = max{ ‖u^{k+1} − u^k‖ / max{1, ‖u^k‖}, ‖v^{k+1} − v^k‖ / max{1, ‖v^k‖} } ≤ 10⁻².   (21)

We report the numerical results in Table I, where the number of iterations (Iter), the computing time in seconds (CPU), and the obtained correlations (Corr) of the two decomposed parts are reported when the stopping criterion (21) is reached. The results in Table I show only tiny differences for the different choices of p, which coincides with the conclusion in [37]. In Fig. 2, we display the decomposed cartoons and textures with different values of p, and in Fig. 3 we plot the changes of Corr(u, v) with respect to iterations and computing time for image (e). Neither the decompositions in Fig. 2 nor the curves in Fig. 3 show discernible differences. These results further show that the effectiveness of the model in (9) is not sensitive to the value of p.
Now, we compare ADMGB with Algorithm 1 (denoted by ADME), the AM based method in [3] (denoted by AABC), the proximal forward-backward splitting method
Fig. 4. Changes of Corr(u, v) with respect to iterations and computing time for image (c).
in [8] (denoted by PFBS), and the PDE based method in [37] (denoted by VO) on image decomposition. For this comparison, we focus on image (c) in Fig. 1. AABC applies an alternating minimization scheme to solve the model in (7), and both of the resulting u- and v-subproblems must be solved iteratively. We execute 10 iterations of Chambolle's projection method [9] to solve these subproblems at each iteration. For the parameters λ and μ, the tuned values are λ = 10⁻² and μ = 10, in order to obtain a minimal correlation between the cartoon and texture. Note that these choices are different from the values suggested in [4]. To implement PFBS, we choose κ = 5 in the model in (8). As in [8], we use the piecewise linear polynomial tight frame (see [32] for its derivation) and the redundant local discrete cosine tight frame to obtain sparse representations of the cartoon and texture, respectively. Specifically, the level of the tight frame for the cartoon is taken as 2; the window size and frequency of the tight frame for the texture are chosen as 64 and 6, respectively. VO is a PDE-based approach. It aims at the model in (4) and solves the corresponding Euler-Lagrange equations. Theoretically, VO approaches the model in (3) when p → ∞. However, a large value of p may lead to difficulties in the numerical implementation. Moreover, since the difference between the decomposed parts with p = 1 and p > 1 by VO is not significant, we choose p = 1 for VO as suggested in [37]. The parameters in VO are taken as λ = 10⁻² and μ = 15. For ADME and ADMGB, five parameters should be tuned: the two parameters α and μ, controlling the decomposition into cartoon and texture parts respectively, and the penalty parameters β_i (i = 1, 2, 3). We choose α = 10⁻¹, μ = 10⁻³, β₁ = β₂ = 5, and β₃ = 30 for both algorithms. Since the parameter γ in ADMGB can be arbitrarily close to 1, we choose γ = 1 empirically. For the
Fig. 5. Image decomposition on the synthetic image with K = I. From left column to right column: decompositions by VO, PFBS, AABC, ADME, and ADMGB.
initial iterates of the above methods, u⁰ = v⁰ = 0 are set in AABC (see also [4]); u⁰ = f, g₁⁰ = ∂ₓf/(2λ|∇f|), and g₂⁰ = ∂_yf/(2λ|∇f|) are set in VO (see also [37]); and u⁰, x⁰, y⁰, z⁰, and λ_i⁰ (i = 1, 2, 3) are set to zero vectors in ADMGB and ADME.
We plot the changes of Corr(u, v) with respect to iterations and computing time in Fig. 4. It shows that ADME can reach a relatively lower correlation within fewer iterations and less computing time, and that ADMGB is slightly less efficient than ADME. However, as mentioned above, the convergence of ADME is still unclear, while the convergence of ADMGB has been established in [23]. Therefore, ADMGB can be used as a practical surrogate for ADME in the following experiments. In Fig. 5, we display the decomposed cartoons and textures obtained by the different methods. We see that some textures remain in the cartoon decomposed by VO, and that a blurry part exists in the cartoon decomposed by PFBS. The image decompositions given by AABC, ADME, and ADMGB are very close.
B. Example: K = S

Now we consider image decomposition on an image with missing pixels. For a binary mask S, the indices where the entries are zero represent the locations of the missing pixels. For this case, we test image (d) in Fig. 1. Comparisons of ADMGB with PFBS are also reported. We generate a 256 × 256 mask with 15.3% missing pixels. The signal-to-noise ratio is defined by

  SNR = 20 log₁₀( ‖x̄‖ / ‖x − x̄‖ )

where x is a reconstructed image and x̄ is the true image. The SNR value is used to measure the quality of a reconstructed image. For the corrupted image shown in Fig. 7, the SNR value is 7.94 dB.

We choose α = 10⁻², μ = 5×10⁻³, and β₁ = β₂ = β₃ = 5×10⁻² for ADMGB. The initial iterates for Algorithm 2 are all zero vectors. The tight frames representing the cartoon and texture required by PFBS are generated the same
Fig. 6. Changes of SNR with respect to iterations and computing time for image decomposition with missing pixels.
as in Subsection III-A. The level of the tight frame for the cartoon is taken as 2; the window size and frequency of the tight frame for the texture are chosen to be 64 and 8, respectively. We run both ADMGB and PFBS for 50 iterations and plot the SNR curves of the reconstructed image (i.e., the superposition of the decomposed cartoon and texture) with respect to iterations and computing time in Fig. 6. From these curves, we see that ADMGB is faster than PFBS. Finally, we display the decomposed images of these two methods in Fig. 7.
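The SNR measure defined above can be sketched as:

```python
import numpy as np

def snr(x_rec, x_true):
    """SNR (in dB) of a reconstruction x_rec against the true image x_true."""
    return 20.0 * np.log10(np.linalg.norm(x_true) / np.linalg.norm(x_rec - x_true))
```

Each 20 dB corresponds to a tenfold reduction of the relative error norm, which makes the curves in Fig. 6 directly comparable across corruption levels.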
Fig. 7. Image decomposition on an image with missing pixels. Top: original image with missing pixels. Center row: decompositions (u, v, u + v) by PFBS. Bottom row: decompositions (u, v, u + v) by ADMGB.
TABLE II
IMAGE DECOMPOSITION ON IMAGES WITH BLUR

Image     Blur                 SNR0   Iter   CPU    SNR    Corr
Weave     Gaussian (5,3)       19.3    39    9.6    34.7   0.0767
          Gaussian (9,3)       16.5    40    9.7    26.1   0.0754
          Out-of-focus (3)     18.5    39    9.5    32.9   0.0766
          Out-of-focus (5)     15.9    39    9.6    28.2   0.0708
Barbara   Gaussian (11,3)      16.8    48    39.4   20.5   0.0738
          Gaussian (21,3)      16.5    57    45.3   18.8   0.0971
          Out-of-focus (7)     15.6    44    36.8   24.9   0.0571
          Out-of-focus (11)    14.2    53    41.3   22.4   0.0626
C. Example: K = B

Now we consider the model in (9) with K = B, where B is a blurring matrix. We test both the out-of-focus blur and the Gaussian blur, and focus on images (e)-(f) in Fig. 1. Since the methods VO, AABC, and PFBS of Subsection III-A are not applicable to the case of (9) with K = B, we only implement ADMGB for this case. To implement ADMGB, we take α = 5×10⁻⁵, μ = 10⁻⁵, and β₁ = β₂ = β₃ = 10⁻². The initial guess is a zero vector and the stopping criterion is (21).

In Table II, we report the numerical performance of ADMGB for images with different blurs. More specifically, the second column of this table is read as follows: for example, "Gaussian (5,3)" means the blur is generated by the MATLAB function fspecial('gaussian',5,3), and "Out-of-focus (7)" means the radius of the out-of-focus blur is 7. Moreover, SNR0 represents the initial SNR value of the blurred image.

In Fig. 8, we display the decomposed cartoons and textures for the cases Out-of-focus (3) and Out-of-focus (7) in Table II. To view the decomposed cartoon and texture clearly, we also zoom into the area highlighted by the red rectangle in the Barbara image. The images labeled u + v in Fig. 8 are obtained by superposing u and v.
D. Example: K = SB

Finally, we test the model in (9) with K = SB, where S is a binary matrix and B is a blurring matrix; i.e., we consider decomposing images with both blur and missing pixels. We focus on images (g)-(h) in Fig. 1 in the experiments, and we only test ADMGB since, as far as we know, there is no other applicable algorithm for this problem.

Both true images are convolved with the out-of-focus blur of radius 5, and they are further corrupted by 256 × 256 masks. The mask contains 26.4% missing pixels for the Wood and Brick images. The initial SNR values of the corrupted images are 5.35 dB for Wood and 3.24 dB for Brick, respectively. See the first column in Fig. 9.

We take α = 5×10⁻⁵, μ = 10⁻⁵, and β₁ = β₂ = β₃ = 10⁻² for ADMGB, and the initial iterates are all zero vectors. Recall that
Fig. 8. Image decomposition on blurred images. Top row: for image Weave. Center row: for image Barbara. Bottom row: zooms of the decomposed Barbara.
TABLE III
CORRELATIONS, l₂-NORM OF TEXTURE COMPONENT, AND TUNED PARAMETERS FOR DECOMPOSITIONS BY DIFFERENT METHODS

          Combined Model                  Separated Procedure I                   Separated Procedure II
K = S     Corr(u, v) = 0.0264             Corr(u, v) = 0.0302                     Corr(u, v) = 0.0651
          ‖v‖₂ = 23.97                    ‖v‖₂ = 22.76                            ‖v‖₂ = 18.13
          (α, μ) = (10⁻², 5×10⁻³)         Task 1 (see [10]): (λ, μ)* = (50, 10)   Task 1: (α, μ) = (10⁻¹, 10⁻³)
                                          Task 2: (α, μ) = (10⁻¹, 10⁻³)           Task 2 (see [10]): (λ, μ)* = (100, 10)
K = SB    Corr(u, v) = 0.0397             Corr(u, v) = 0.0420                     Corr(u, v) = 0.0935
          ‖v‖₂ = 17.46                    ‖v‖₂ = 15.92                            ‖v‖₂ = 11.53
          (α, μ) = (5×10⁻⁵, 5×10⁻³)       Task 1 (see [10]): (λ, μ)* = (50, 10)   Task 1: (α, μ) = (10⁻¹, 10⁻³)
                                          Task 2: (α, μ) = (10⁻¹, 10⁻³)           Task 2 (see [10]): (λ, μ)* = (100, 10)
* Refers to the notation for the regularization parameters used in [10].
the y-subproblem in (19) should be solved iteratively. In our experiments, we call the MATLAB function pcg to obtain an approximate solution of this subproblem with tolerance ε. We run ADMGB for 50 iterations and plot the evolution of SNR with respect to iterations and computing time for different values of ε in Fig. 10. The numerical results with ε = 10^-4 after 50 ADMGB iterations are illustrated in Fig. 9.
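MATLAB's pcg solves a symmetric positive definite linear system up to a prescribed relative residual. The following minimal conjugate-gradient sketch (our illustration, not the paper's code) shows the role the tolerance plays as a stopping test, which is what the different values above control.

```python
def cg(matvec, b, tol, max_iter=1000):
    """Conjugate gradient for A x = b (A symmetric positive definite),
    stopping when the relative residual ||b - A x|| / ||b|| drops below tol."""
    n = len(b)
    x = [0.0] * n
    r = list(b)                      # residual of the zero initial guess
    p = list(r)
    rs = sum(v * v for v in r)
    b_norm = sum(v * v for v in b) ** 0.5 or 1.0
    for _ in range(max_iter):
        if rs ** 0.5 <= tol * b_norm:
            break
        ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, ap)]
        rs_new = sum(v * v for v in r)
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

A looser tolerance means cheaper inner iterations but a less accurate subproblem solution, which is exactly the trade-off explored in Fig. 10.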
E. Numerical Verification of the Model in (9)
The model in (9) considers the tasks of image restoration (inpainting and/or deblurring) and image decomposition together. As analyzed in [10], it is interesting to study how this combined computational model competes with a separated treatment in which the tasks of image restoration and image decomposition are tackled individually. In this subsection, we focus on the cases of K = S and K = SB to illustrate the effectiveness of the combined model in (9). We compare it with the following separated computational procedures.
1) Separated Procedure I. We first restore the corrupted image (inpainting and/or deblurring) by using the method given in [10], and then decompose the restored image into its cartoon and texture components by using the model in (9) with K = I.
Fig. 9. Image decomposition on blurred images with missing pixels. From left to right: blurred images with missing pixels, cartoons, textures, reconstructed images (cartoon + texture).
Fig. 10. Changes of SNR (dB) with respect to iterations and computing time for image decomposition when K = SB, with different accuracies ε (10^-2, 5 × 10^-3, 10^-3, 5 × 10^-4, 10^-4) in solving the y-subproblem (for the Wood image).
2) Separated Procedure II. We first decompose the corrupted image to derive the cartoon and texture components by the model in (9) with K = I, and then carry out inpainting and/or deblurring on the cartoon and texture components by using the method given in [10].
For the cases of K = S and K = SB, we use a mask with 9.2% missing pixels for S and an out-of-focus blur of radius 3 for B. The corrupted images for both cases are listed in column (a) of Fig. 11. The initial SNR values of the corrupted images are 10.87 dB and 9.08 dB for the cases of K = S and K = SB, respectively. In the numerical experiments, the parameters in the combined model are chosen manually by testing different values in order to determine the result with the smallest correlation between the decomposed cartoon and texture components. For both separated Procedures I and II, we consider the parameters in the decomposition model in (9) and in the restoration method given in [10] together, and choose them manually by testing different values in order to obtain the smallest correlation between the cartoon component and the texture component. The tuned values of these parameters are given in Table III.
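The correlation Corr(u, v) used for this parameter tuning can be computed as below. The section does not spell out the definition, so we assume the standard normalized (mean-removed) correlation of the flattened cartoon and texture components; a value near zero indicates that the two components carry non-overlapping content.

```python
import math

def corr(u, v):
    """Normalized correlation of two flattened image components:
    inner product of the mean-removed vectors over the product of their norms."""
    mu = sum(u) / len(u)
    mv = sum(v) / len(v)
    du = [a - mu for a in u]
    dv = [b - mv for b in v]
    num = sum(a * b for a, b in zip(du, dv))
    den = math.sqrt(sum(a * a for a in du)) * math.sqrt(sum(b * b for b in dv))
    return num / den
```

By construction the value lies in [-1, 1], with 0 meaning the components are uncorrelated.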
The initial guesses for all the methods are set to zero vectors. When K = SB, the y-subproblem in (19) for the combined model is solved iteratively by pcg with tolerance 10^-4. The decomposed images obtained by the combined model are listed in column (a) of Fig. 12. For separated Procedure I, we first restore the corrupted images by applying the method in [10] to get the restorations in column (b) of Fig. 11; the restored images are then decomposed into the cartoons and textures in column (b) of Fig. 12. For separated Procedure II, we first decompose the corrupted images by the model in (9) with K = I to derive the cartoon components (column (c) of Fig. 11) and texture components (column (d) of Fig. 11); we then restore these cartoon and texture components, respectively, by applying the method in [10] to get the restored cartoons and textures in column (c) of Fig. 12.
To compare the effectiveness of the decompositions by different approaches, we list in Table III the correlation values of the decomposed cartoon and texture components and the l2-norms of the corresponding texture vectors. The data in this table show that solving the combined model in (9) yields decomposed images with lower correlations than those by
Fig. 11. Corrupted images and restored images by the separated procedures. Top row: for K = S. Bottom row: for K = SB. (a) Corrupted images. (b) Restored images after the restoration step in separated Procedure I. (c) Cartoon components after the decomposition step in separated Procedure II. (d) Texture components after the decomposition step in separated Procedure II.
Fig. 12. Decomposed images for K = S (top two rows) and K = SB (bottom two rows). (a) Combined model. (b) Separated Procedure I. (c) Separated Procedure II.
separated Procedures I and II. The decomposition results can be seen in Fig. 12. In Fig. 13, we superpose the cartoon and texture components decomposed by the combined model and by separated Procedures I and II for the case K = SB. From this figure, we observe that the SNR value of the superposed image from the combined model in (9) is higher than those of separated Procedures I and II. In summary, we verify the necessity of considering the combined model in (9) to
Fig. 13. Superposed images for K = SB from the decomposed cartoons and textures. (a) Combined model (SNR = 16.11 dB). (b) Separated Procedure I (SNR = 15.70 dB). (c) Separated Procedure II (SNR = 12.72 dB).
decompose an image with blur and/or missing pixels. This assertion coincides with the observation in [12].
IV. CONCLUSION
In this paper, we have developed a coupled model for decomposing the cartoon and texture components of an image with blurred and/or missing pixels. An efficient numerical algorithm with guaranteed convergence is applied to solve the proposed model. Experimental results have shown the effectiveness of the new model and the efficiency of the adopted algorithm. In our numerical experiments, we found that the proposed model may not be effective when a large region of pixels is missing. For this case, an alternative strategy is to add a curvature term to the total variation term (e.g., [6], [10], [11]) or to use some nonconvex models (e.g., [25]). This is a direction for our future research.
APPENDIX I
CONVERGENCE OF ALGORITHM 2

The convergence of Algorithm 2 has been analyzed in [23], and here we only summarize it in the following theorems without detailed proof.
Theorem 1: If ||v^k − ṽ^k||_H = 0, then the iterate (x_1^k, x_2^k, x_3^k, λ^k) satisfies (12), and hence (x_1^k, x_2^k, x_3^k) is a solution of (10).
Theorem 2: Let {(x_1^k, x_2^k, x_3^k, λ^k)} be the sequence generated by Algorithm 2. Then we have:
1) the sequence {v^k} is bounded;
2) lim_{k→∞} ||v^k − ṽ^k||_H = 0;
3) any cluster point of the sequence {(x̃_1^k, x̃_2^k, x̃_3^k, λ̃^k)} satisfies (12);
4) the sequence {v^k} converges to some point in V*.
APPENDIX II
SOLVING THE z-SUBPROBLEM (20) WITH DIFFERENT VALUES OF p

Without loss of generality, we reduce the z-subproblem (20) to the following simple pattern:

    z^k = argmin_z  μ ||z||_p + (1/2) ||z − b||_2^2,    (22)

where μ is a positive scalar and b = (b_1, b_2) ∈ R^n × R^n. It follows from [13], [27] that:
1) If p = 1, we have z^k = S_μ(b), where S_μ(·) is the soft-thresholding operator defined in (18).
2) If p = 2, we have

    z^k = b − min{|b|, μ} b / |b|.

3) If p = ∞, we have

    z^k = b − P_Ω(b),

where P_Ω(·) is the projection operator onto Ω = {z ∈ R^n × R^n : ||z||_1 ≤ μ}. This projection can be computed efficiently by many existing subroutines, e.g., [15], [36].
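The three closed-form solutions above can be checked numerically. The sketch below (pure Python, ours rather than the authors' code) treats z as an ordinary vector in R^n instead of the pair structure in (22), writes mu for the positive weight, and implements the l1-ball projection with a sort-based method of the kind referenced in [15].

```python
def prox_l1(b, mu):
    """p = 1: componentwise soft-thresholding S_mu(b)."""
    return [max(abs(v) - mu, 0.0) * (1.0 if v >= 0 else -1.0) for v in b]

def prox_l2(b, mu):
    """p = 2: shrink b toward the origin by min(||b||_2, mu), i.e.
    z = b - min(||b||_2, mu) * b / ||b||_2."""
    norm = sum(v * v for v in b) ** 0.5
    if norm == 0.0:
        return list(b)
    scale = min(norm, mu) / norm
    return [v - scale * v for v in b]

def project_l1_ball(b, mu):
    """Euclidean projection of b onto {z : ||z||_1 <= mu} (sort-based threshold)."""
    if sum(abs(v) for v in b) <= mu:
        return list(b)
    u = sorted((abs(v) for v in b), reverse=True)
    css, theta = 0.0, 0.0
    for k, uk in enumerate(u, start=1):
        css += uk
        t = (css - mu) / k
        if uk - t <= 0:
            break
        theta = t
    return [max(abs(v) - theta, 0.0) * (1.0 if v >= 0 else -1.0) for v in b]

def prox_linf(b, mu):
    """p = infinity: b minus its projection onto the l1 ball of radius mu
    (Moreau decomposition; the l1 ball is the unit ball of the dual norm)."""
    p = project_l1_ball(b, mu)
    return [v - pv for v, pv in zip(b, p)]
```

Each function realizes one branch of the case analysis above, so (22) can be solved in closed form for all three values of p.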
REFERENCES

[1] R. Adams, Sobolev Spaces (Pure and Applied Mathematics). San Francisco, CA, USA: Academic, 1975.
[2] J.-F. Aujol and A. Chambolle, "Dual norms and image decomposition models," Int. J. Comput. Vis., vol. 63, no. 1, pp. 85-104, Jun. 2005.
[3] J.-F. Aujol, G. Aubert, L. Blanc-Féraud, and A. Chambolle, "Image decomposition into a bounded variation component and an oscillating component," J. Math. Imaging Vis., vol. 22, no. 1, pp. 71-88, Jan. 2005.
[4] J.-F. Aujol, G. Gilboa, T. Chan, and S. Osher, "Structure-texture image decomposition: Modeling, algorithms, and parameter selection," Int. J. Comput. Vis., vol. 67, no. 1, pp. 111-136, Apr. 2006.
[5] J. Barzilai and J. Borwein, "Two-point step size gradient methods," IMA J. Numer. Anal., vol. 8, no. 1, pp. 141-148, 1988.
[6] M. Bertalmio, G. Sapiro, V. Caselles, and C. Ballester, "Image inpainting," in Proc. 27th Annu. Conf. Comput. Graph. Interact. Tech., 2000, pp. 417-424.
[7] M. Bertalmio, L. Vese, G. Sapiro, and S. Osher, "Simultaneous structure and texture image inpainting," IEEE Trans. Image Process., vol. 12, no. 8, pp. 882-889, Aug. 2003.
[8] J.-F. Cai, R. H. Chan, and Z. Shen, "Simultaneous cartoon and texture inpainting," Inverse Probl. Imaging, vol. 4, no. 3, pp. 379-395, Aug. 2010.
[9] A. Chambolle, "An algorithm for total variation minimization and applications," J. Math. Imaging Vis., vol. 20, nos. 1-2, pp. 89-97, Jan. 2004.
[10] R. H. Chan, J. Yang, and X. Yuan, "Alternating direction method for image inpainting in wavelet domains," SIAM J. Imaging Sci., vol. 4, no. 3, pp. 807-826, Sep. 2011.
[11] T. F. Chan, S. H. Kang, and J. Shen, "Euler's elastica and curvature-based inpainting," SIAM J. Appl. Math., vol. 63, no. 2, pp. 564-592, 2002.
[12] T. F. Chan, A. M. Yip, and F. E. Park, "Simultaneous total variation image inpainting and blind deconvolution," Int. J. Imag. Syst. Technol., vol. 15, no. 1, pp. 92-102, Jul. 2005.
[13] P. L. Combettes and V. R. Wajs, "Signal recovery by proximal forward-backward splitting," Multiscale Model. Simul., vol. 4, no. 4, pp. 1168-1200, 2005.
[14] I. Daubechies and G. Teschke, "Variational image restoration by means of wavelets: Simultaneous decomposition, deblurring and denoising," Appl. Comput. Harmon. Anal., vol. 19, no. 1, pp. 1-16, 2005.
[15] J. Duchi, S. Gould, and D. Koller, "Projected subgradient methods for learning sparse Gaussians," in Proc. Conf. Uncertainty Artif. Intell., 2008, pp. 1-8.
[16] M. J. Fadili, J. L. Starck, J. Bobin, and Y. Moudden, "Image decomposition and separation using sparse representations: An overview," Proc. IEEE, vol. 98, no. 6, pp. 983-994, Jun. 2010.
[17] D. Gabay and B. Mercier, "A dual algorithm for the solution of nonlinear variational problems via finite-element approximations," Comput. Math. Appl., vol. 2, no. 1, pp. 17-40, 1976.
[18] R. Glowinski and A. Marrocco, "Sur l'approximation, par éléments finis d'ordre un, et la résolution, par pénalisation-dualité, d'une classe de problèmes de Dirichlet non linéaires," Rev. Française d'Autom. Inform. Rech. Opér., vol. 9, no. 2, pp. 41-76, 1975.
[19] G. H. Golub and C. F. Van Loan, Matrix Computations. Baltimore, MD, USA: Johns Hopkins Univ. Press, 1996.
[20] P. Hansen, J. Nagy, and D. O'Leary, Deblurring Images: Matrices, Spectra, and Filtering. Philadelphia, PA, USA: SIAM, 2006.
[21] B. S. He, "Parallel splitting augmented Lagrangian methods for monotone structured variational inequalities," Comput. Optim. Appl., vol. 42, no. 2, pp. 195-212, Mar. 2009.
[22] B. S. He, M. Tao, and X. M. Yuan. (2011, Aug.). "A splitting method for separable convex programming," IMA J. Numer. Anal., revision [Online]. Available: http://www.optimization-online.org/ARCHIVE-DIGEST/2010-06.html
[23] B. S. He, M. Tao, and X. M. Yuan, "Alternating direction method with Gaussian back substitution for separable convex programming," SIAM J. Optim., vol. 22, no. 2, pp. 313-340, Apr. 2012.
[24] Y. Kim and L. Vese, "Image recovery using functions of bounded variation and Sobolev spaces of negative differentiability," Inverse Probl. Imaging, vol. 3, no. 1, pp. 43-68, Feb. 2009.
[25] P. Maurel, J.-F. Aujol, and G. Peyré, "Locally parallel texture modeling," SIAM J. Imaging Sci., vol. 4, no. 1, pp. 413-447, Mar. 2011.
[26] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations (University Lecture Series, vol. 22). Providence, RI, USA: AMS, 2002.
[27] J. Moreau, "Proximité et dualité dans un espace Hilbertien," Bull. Soc. Math. France, vol. 93, pp. 273-299, 1965.
[28] M. K. Ng, P. Weiss, and X. M. Yuan, "Solving constrained total-variation problems via alternating direction methods," SIAM J. Sci. Comput., vol. 32, no. 5, pp. 2710-2736, 2010.
[29] S. Osher, A. Solé, and L. Vese, "Image decomposition and restoration using total variation minimization and the H^-1 norm," Multiscale Model. Simul., vol. 1, no. 3, pp. 349-370, 2003.
[30] Y. Peng, A. Ganesh, J. Wright, W. Xu, and Y. Ma, "Robust alignment by sparse and low-rank decomposition for linearly correlated images," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2010, pp. 763-770.
[31] R. T. Rockafellar, Convex Analysis. Princeton, NJ, USA: Princeton Univ. Press, 1970.
[32] A. Ron and Z. Shen, "Affine systems in L2(R^d): The analysis of the analysis operator," J. Funct. Anal., vol. 148, no. 2, pp. 408-447, Aug. 1997.
[33] L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Phys. D, vol. 60, no. 1, pp. 259-268, 1992.
[34] J. L. Starck, M. Elad, and D. Donoho, "Image decomposition via the combination of sparse representations and a variational approach," IEEE Trans. Image Process., vol. 14, no. 10, pp. 1570-1582, Oct. 2005.
[35] M. Tao and X. M. Yuan, "Recovering low-rank and sparse components of matrices from incomplete and noisy observations," SIAM J. Optim., vol. 21, no. 1, pp. 57-81, 2011.
[36] E. van den Berg and M. P. Friedlander, "Probing the Pareto frontier for basis pursuit solutions," SIAM J. Sci. Comput., vol. 31, no. 2, pp. 890-912, 2008.
[37] L. Vese and S. Osher, "Modeling textures with total variation minimization and oscillating patterns in image processing," J. Sci. Comput., vol. 19, no. 1, pp. 553-572, 2003.
Michael K. Ng received the B.Sc. and M.Phil. degrees from the University of Hong Kong, Hong Kong, in 1990 and 1992, respectively, and the Ph.D. degree from the Chinese University of Hong Kong, Hong Kong, in 1995.
He is currently a Professor with the Department of Mathematics, Hong Kong Baptist University, Hong Kong. He was a Research Fellow with the Computer Sciences Laboratory, Australian National University, from 1995 to 1997, and an Assistant Professor and an Associate Professor with the University of Hong Kong from 1997 to 2005. His current research interests include bioinformatics, data mining, image processing, and scientific computing.
Dr. Ng is on the editorial boards of several international journals.
Xiaoming Yuan received the Bachelor's and Master's degrees from Nanjing University, Nanjing, China, and the Ph.D. degree from the City University of Hong Kong, Hong Kong.
He is currently an Assistant Professor with theDepartment of Mathematics, Hong Kong BaptistUniversity, Hong Kong. His current research inter-ests include numerical optimization, with its appli-cations in image processing and statistics.
Wenxing Zhang received the B.S. degree in mathematics from Shandong Normal University, Jinan, China, and the Ph.D. degree in computational mathematics from Nanjing University, Nanjing, China, in 2006 and 2012, respectively.
He is currently a Lecturer with the University of Electronic Science and Technology of China, Chengdu, China. From 2010 to 2011, he was a Research Assistant with the Department of Mathematics, Hong Kong Baptist University, Hong Kong. His current research interests include optimization theory and algorithms, and their applications in image processing.