
A Bivariate Spline Approach for Image Enhancement

Qianying Hong* and Ming-Jun Lai†

Department of Mathematics
The University of Georgia
Athens, GA 30602

March 4, 2010

* [email protected]
† [email protected]. This author is partly supported by the National Science Foundation under grant DMS-0713807

Abstract

A new approach for image enhancement is developed in this paper. It is based on bivariate spline functions. By choosing a local region from an image to be enhanced, one can use bivariate splines with minimal surface area to enhance the image by reducing noise, smoothing the contrast, e.g., smoothing wrinkles, and removing stains or damages from images. To establish this approach, we first discuss its mathematical aspects: the existence, uniqueness and stability of the splines of minimal surface area, and we propose an iterative algorithm to compute the fitting splines of minimal surface area. The convergence of the iterative solutions will be established. In addition, the fitting splines of minimal surface area converge in theory as the size of the triangulation goes to zero. Finally, several numerical examples are shown to demonstrate the effectiveness of this new approach.

1 Introduction

Image enhancement such as image denoising, deblurring, and inpainting has been a research topic for many decades. In particular, image denoising has received a great deal of attention in the literature and in practice. In addition to engineers' clever designs of various digital filters (cf., e.g., [24]), mathematicians have also proposed many approaches such as the wavelet shrinkage approach, the anisotropic diffusive PDE approach, and the total variational PDE approach. There is a wealth of literature on soft and hard thresholding techniques together with wavelets and tight framelets for image de-noising; see, e.g., [9], [10], [11], [8], [7], and [25], as well as the Rudin-Osher-Fatemi total variational model and the Perona-Malik anisotropic diffusion model for image de-noising proposed in [28] and [27], which have been well studied and developed in the last twenty years. See, e.g., [26] and [3] and the references therein for the theoretical aspects of these two PDE approaches. As the PDEs arising from these approaches are nonlinear, many numerical methods, e.g., Chambolle's algorithm, various finite difference methods, various finite element methods, and various fourth order PDE methods, have been developed to implement the ROF and PM models. See, e.g., [30] and the references therein for recent developments in mathematical image processing.

In this paper, we propose a bivariate spline approach for image enhancement, which has not been well studied in the literature so far. Let us describe our spline approach as follows. We first select a polygonal region to be enhanced from a given image of interest, partition the region into a triangulation, and then use bivariate splines to find a fitting spline surface with minimized area; finally, we replace the region to be enhanced by the spline surface so that the image over the region is smooth. Heuristically, the noises and stains cause the bumpiness of the fitting spline surfaces. Minimizing the surface area of the fitting spline function will reduce the bumpiness and hence reduce the noises and remove the stains. For example, after an image is de-noised by using a favorable method, we can choose some regions of interest from the de-noised image and apply our bivariate spline approach to further reduce noises over these regions.

We use a bivariate spline space to approximate an image f based on the scattered data values f_i, i = 1, ..., n. Let △ be a triangulation of Ω and let

    S^r_d(△) = {s ∈ C^r(Ω) : s|_t ∈ P_d for all t ∈ △}

be the spline space of degree d and smoothness r over the triangulation △, where P_d is the space of all polynomials of degree ≤ d and t is a triangle in △. For properties of bivariate splines and their computations, see [21] and [2]. As the function f may not be very smooth over Ω, we mainly use r = 0, 1 and d = 3, 4, 5. For convenience, let S be one of these spline spaces S^r_d(△) for fixed r and d with d > r. It is known that there are discrete least squares splines and penalized least squares splines available in the literature which can be used to reduce the noises in the data values; see the survey paper [19] on bivariate splines for data fitting. We would like to improve our ability to remove noises and stains and/or enhance pictures. As the expression for the surface area of a function is closely related to the BV semi-norm of the function, the success of the ROF model for image denoising suggests that our bivariate spline approach will perform better than standard discrete and penalized least squares splines.
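As a concrete illustration of the spline machinery (not part of the analysis below), each polynomial piece s|_t of a spline in S^r_d(△) is commonly stored in Bernstein-Bézier form and evaluated by the de Casteljau algorithm described in [21]. The following minimal sketch assumes a hypothetical coefficient layout coeffs[(i, j, k)] for the BB-coefficients c_ijk with i + j + k = d:

```python
# Sketch: evaluate one polynomial piece of a spline in Bernstein-Bezier form
# by the de Casteljau algorithm. Hypothetical layout: coeffs[(i, j, k)] holds
# the BB-coefficient c_ijk, i + j + k = d, over a fixed triangle.

def de_casteljau(coeffs, d, bary):
    """Evaluate the degree-d Bernstein polynomial at barycentric coords bary."""
    b1, b2, b3 = bary
    layer = dict(coeffs)
    for r in range(d, 0, -1):            # reduce degree r -> r - 1
        layer = {
            (i, j, k): b1 * layer[(i + 1, j, k)]
                     + b2 * layer[(i, j + 1, k)]
                     + b3 * layer[(i, j, k + 1)]
            for i in range(r) for j in range(r - i) for k in (r - 1 - i - j,)
        }
    return layer[(0, 0, 0)]

# Example: a linear piece reproducing the first barycentric coordinate.
print(de_casteljau({(1, 0, 0): 1.0, (0, 1, 0): 0.0, (0, 0, 1): 0.0}, 1, (0.3, 0.5, 0.2)))
```

Working in barycentric coordinates makes the smoothness conditions across triangle edges easy to express, which is why the BB-form is the standard representation for the spaces S^r_d(△).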

Let us describe our approach more precisely. Let

    I_d(u) = ∫_Ω √(1 + |∇u(x)|²) dx + (1/(2λ)) (1/n) Σ_{i=1}^n |u(x_i) − f_i|²    (1)

be an energy functional, where ∇ = (D_1, D_2) is the gradient operator, with D_1 and D_2 denoting the derivatives with respect to the first and second variables. Note that the first term in (1) is the standard area of the surface of u. Our bivariate spline approach is to solve the following minimization problem:

    min_{u∈S} I_d(u).    (2)

It is easy to see that I_d is a convex functional. However, it is not strictly convex. As S is a finite dimensional space, the above problem (2) has a solution. However, the solution may not be unique unless the data locations satisfy some conditions. Thus, we shall explain such conditions to ensure the uniqueness. Next we shall discuss the stability of the minimization. Letting s_f and s_g be the two minimizers associated with data sets f and g, the stability means that ‖s_f − s_g‖ in the L² norm can be bounded by ‖f − g‖ in the ℓ² norm. This provides theoretical support for numerical computation. Note that there is a class of functions f ∈ BV(Ω) which satisfy the conditions in (??). To see that s_f approximates u_f as the size |△| of the triangulation goes to zero, let us fix a function f ∈ H = BV(Ω) in the following sense. Suppose that u_f ∈ H is the solution of the minimization problem below:

    min_{u∈H} I(u),    (3)

where A_Ω is the area of the polygonal domain Ω and

    I(u) = ∫_Ω √(1 + |∇u|²) dx + (1/(2λ)) (1/A_Ω) ∫_Ω |u − f|² dx.    (4)

The existence and uniqueness of the minimizer of (3) can be seen in [1] and [31]. As H is an infinite dimensional space and as f is not given everywhere over Ω, it is not easy to compute such a u_f. We shall use our bivariate splines to approximate it. That is, we shall show that s_f approximates u_f under the assumption that u_f ∈ W^{1,1}(Ω). Thus, our bivariate spline approach provides a numerical method to solve the minimization problem (3). In addition, we shall construct an iterative algorithm to approximate s_f since the energy functional is nonlinear. Indeed, as S is a finite dimensional space, suppose that S is spanned by φ_i, i = 1, ..., N. The minimization problem (2) is equivalent to solving the following:

    (∂/∂t) I_d(s_f + tφ_j, f)|_{t=0} = ∫_Ω (∇s_f · ∇φ_j)/√(1 + |∇s_f|²) dx + (1/λ) (1/n) Σ_{i=1}^n (s_f(x_i) − f_i) φ_j(x_i) = 0    (5)

for all basis functions φ_j. Similar to the minimization problem (3), we shall also consider

    min_{u∈S} I(u).    (6)

Let S_f be the minimizer. This minimization is equivalent to solving the following nonlinear equations:

    (∂/∂t) I(S_f + tφ_j, f)|_{t=0} = ∫_Ω (∇S_f · ∇φ_j)/√(1 + |∇S_f|²) dx + (1/λ) (1/A_Ω) ∫_Ω (S_f − f) φ_j dx = 0    (7)

for all basis functions φ_j, j = 1, ..., N. That is, the two minimization problems (2) and (6) are very similar. These equations lead to easy iterative algorithms. We shall show that these iterative algorithms are convergent. In this way, we complete the establishment of our approach.

We have implemented the bivariate spline approach discussed in this paper and applied it to noise and stain removal. In the first set of examples, we use two standard images. We first segment the images into several local regions (by hand, just for experimental purposes) and then partition these regions into triangulations. Two triangulated images will be shown. Over each region, we apply our approach to reduce noises. A comparison with a standard denoising method is given to show the effectiveness of our spline approach. Another set of examples removes stains, e.g., words, from images. Finally, an image enhancement is demonstrated by removing the wrinkles from a face image. These examples demonstrate that our bivariate spline approach works very well.

2 Preliminary

We briefly outline some properties of bivariate splines which will be used in later sections. First of all, we need

Definition 2.1 Let β < ∞. A triangulation △ is said to be β-quasi-uniform provided that |△| ≤ β ρ_△, where |△| is the maximum of the diameters of the triangles in △, and ρ_△ is the minimum of the radii of the in-circles of the triangles of △.
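As an illustration, the two quantities in Definition 2.1 are straightforward to compute from vertex coordinates; the sketch below (an illustrative helper, not from the paper) returns |△|, ρ_△ and the smallest admissible β for a list of triangles:

```python
import math

# Illustrative helper (not from the paper): compute |T|, rho_T and the
# quasi-uniformity constant beta = |T| / rho_T for a list of triangles,
# each given as three (x, y) vertex pairs.

def inradius(p, q, r):
    """Radius of the in-circle of the triangle with vertices p, q, r."""
    a, b, c = math.dist(q, r), math.dist(p, r), math.dist(p, q)
    s = 0.5 * (a + b + c)                                        # semi-perimeter
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))  # Heron's formula
    return area / s

def quasi_uniformity(triangles):
    # The diameter of a triangle is the length of its longest edge.
    diam = max(max(math.dist(p, q), math.dist(q, r), math.dist(p, r))
               for p, q, r in triangles)
    rho = min(inradius(p, q, r) for p, q, r in triangles)
    return diam, rho, diam / rho

# Example: a single right triangle.
print(quasi_uniformity([((0, 0), (1, 0), (0, 1))]))
```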

Let W^{k,p}(Ω) denote the Sobolev space of locally summable functions u : Ω → R such that for each multi-index α with |α| ≤ k, D^α u exists in the weak sense and belongs to L^p(Ω). Let |u|_{Ω,k,p} denote the L^p norm of the kth derivatives of u over Ω, that is,

    |u|_{Ω,k,p} := ( Σ_{ν+μ=k} ‖D_1^ν D_2^μ u‖_{Ω,p}^p )^{1/p}  for 1 ≤ p < ∞,    (8)

and let ‖f‖_{Ω,p} = ( (1/A_Ω) ∫_Ω |f(x)|^p dx )^{1/p} be the standard L^p(Ω) norm.

We first use the so-called Markov inequality to compare the size of the derivative of a polynomial with the size of the polynomial itself on a given triangle t (see [21] for a proof).


Theorem 2.1 Let t := ⟨v₁, v₂, v₃⟩ be a triangle and fix 1 ≤ q ≤ ∞. Then there exists a constant K depending only on d such that for every polynomial p ∈ P_d and all nonnegative integers α and β with 0 ≤ α + β ≤ d,

    ‖D_1^α D_2^β p‖_{q,t} ≤ (K/ρ_t^{α+β}) ‖p‖_{q,t},    (9)

where ρ_t denotes the radius of the largest circle inscribed in t.

Next we have the following approximation property (cf. [20] and [21]):

Theorem 2.2 Assume d ≥ 3r + 2 and let △ be a triangulation of Ω. Then there exists a quasi-interpolatory operator Q mapping f ∈ L¹(Ω) into S^r_d(△) such that Qf achieves the optimal approximation order: if f ∈ W^{m+1,p}(Ω), then

    ‖D_1^α D_2^β (Qf − f)‖_{Ω,p} ≤ C |△|^{m+1−α−β} |f|_{Ω,m+1,p}    (10)

for all α + β ≤ m + 1 with 0 ≤ m ≤ d. Here the constant C depends only on the degree d and the smallest angle θ_△, and may depend on the Lipschitz condition on the boundary of Ω.

Bivariate splines have been used for data fitting for a long time. Theoretical results on existence, uniqueness and approximation, as well as numerical algorithms and experiments, are well known (cf. [2], [15], [16], [17], and [19]). Let us present a brief summary. Let s_{0,f} ∈ S be the spline satisfying

    L(s_{0,f} − f) = min_{s∈S} L(s − f),    (11)

where L(s − f) = (1/n) Σ_{i=1}^n |s(x_i) − f_i|². The spline s_{0,f} is called the least squares spline of f. Another data fitting method is the penalized least squares spline method. Letting

    E_m(f) = (1/A_Ω) ∫_Ω Σ_{i+j=m} ( \binom{m}{i} (D_1^i D_2^j f)² ) dx₁ dx₂

be the energy functional and λ > 0 be a fixed real number, we let s_{λ,f} ∈ S be the spline solving the following minimization problem:

    min_{s∈S} L(s − f) + λ E_m(s)    (12)

for a fixed integer m = 2.

Suppose that the data locations V = {x_i, i = 1, ..., n} have the property that for every s ∈ S and every triangle τ ∈ △ there exists a positive constant F₁, independent of s and τ, such that

    F₁ ‖s‖_{∞,τ} ≤ ( Σ_{v∈V∩τ} s(v)² )^{1/2}.    (13)

Let F₂ be the largest number of data sites in a triangle τ ∈ △. That is, we have

    ( Σ_{v∈V∩τ} s(v)² )^{1/2} ≤ F₂ ‖s‖_{∞,τ},    (14)

where ‖s‖_{∞,τ} denotes the maximum norm of s over the triangle τ. The approximation properties of the discrete and penalized least squares splines are known (cf. [16], [17], and [19]).


Theorem 2.3 Suppose that d ≥ 3r + 2 and that △ is a β-quasi-uniform triangulation. Suppose that there exist two positive constants F₁ and F₂ such that (13) and (14) are satisfied. Then there exists a constant C depending on d and β such that for every function f in the Sobolev space W^{m+1,∞}(Ω) with 1 ≤ m ≤ d,

    ‖f − s_{λ,f}‖_{∞,Ω} ≤ C (F₂/F₁) |△|^{m+1} |f|_{m+1,∞,Ω} + C (λ/(F₁² |△|)) |f|_{2,∞,Ω}

for 0 < λ ≤ F₁ |△|²/β². In particular, when λ = 0, we have

    ‖f − s_{0,f}‖_{∞,Ω} ≤ C (F₂/F₁) |△|^{m+1} |f|_{m+1,∞,Ω}.    (15)

Here m is an integer between 0 and d.

3 Existence, Uniqueness and Stability of the Minimizers S_f and s_f in S

We begin with the following

Lemma 3.1 Assume that the triangulation △ is β-quasi-uniform. The minimization problem (6) has one and only one solution over the finite dimensional space S.

Proof. For problem (6), it is easy to see that the functional I(u) is strictly convex over the finite dimensional space S. Hence, there exists a unique minimizer S_f solving the minimization problem (6).

Next we show that the solution of our problem (2) exists and is unique under a sufficient condition discussed in the preliminary section.

Lemma 3.2 Assume that the triangulation △ is β-quasi-uniform and suppose that the data sites x_i, i = 1, ..., n, satisfy the condition (13) above. Then the minimization problem (2) has one and only one solution.

Proof. Mainly, we need to show that I_d(u) is strictly convex under the condition (13). Suppose that there are two minimizers s₁ and s₂ of (2). Then any convex combination αs₁ + (1 − α)s₂ is also a minimizer since I_d is convex. We claim that if s₁(x_i) ≠ s₂(x_i) for some i with 1 ≤ i ≤ n, then

    I_d(αs₁ + (1 − α)s₂) < α I_d(s₁) + (1 − α) I_d(s₂).

Indeed, if, say, s₁(x_n) ≠ s₂(x_n), then by the strict convexity of the square function,

    (αs₁(x_n) + (1 − α)s₂(x_n) − f_n)² < α (s₁(x_n) − f_n)² + (1 − α) (s₂(x_n) − f_n)².

Thus, I_d(αs₁ + (1 − α)s₂) < I_d(s₁), which is a contradiction. Therefore, s₁(x_i) = s₂(x_i) for i = 1, ..., n. Now we use the condition (13) to see that s₁ − s₂ is the zero spline function, that is, s₁ ≡ s₂. This shows that I_d has a unique minimizer over the finite dimensional space S, i.e., there exists a unique minimizer of the minimization problem (2).

Next we show that the solutions of the minimizations (6) and (2) are stable. Let S_f ∈ S be the solution of (6) associated with the given function f. Suppose we have another function g, and let S_g ∈ S be the minimizer of (6) associated with g. To be more precise, we let

    I(u, f) = ∫_Ω √(1 + |∇u|²) dx + (1/(2λ A_Ω)) ∫_Ω |u − f|² dx,

and similarly for I(u, g).


Lemma 3.3 The norm of the difference of S_f and S_g is bounded by the norm of the difference of f and g, i.e.,

    ‖S_f − S_g‖₂ ≤ ‖f − g‖₂.    (16)

Proof. It is clear that I(u, f) is differentiable with respect to u. The uniqueness of S_f and S_g from Lemma 3.1 implies that all directional derivatives of I(u, f) and I(u, g) as in (7) vanish over the finite dimensional space S spanned by φ_j, j = 1, ..., N. That is,

    (∂/∂t) I(S_f + tφ_j, f)|_{t=0} = ∫_Ω (∇S_f · ∇φ_j)/√(1 + |∇S_f|²) dx + (1/(λ A_Ω)) ∫_Ω (S_f − f) φ_j dx = 0,    (17)

    (∂/∂t) I(S_g + tφ_j, g)|_{t=0} = ∫_Ω (∇S_g · ∇φ_j)/√(1 + |∇S_g|²) dx + (1/(λ A_Ω)) ∫_Ω (S_g − g) φ_j dx = 0.    (18)

Subtracting (18) from (17), we have

    ∫_Ω [ (∇S_f · ∇φ_j)/√(1 + |∇S_f|²) − (∇S_g · ∇φ_j)/√(1 + |∇S_g|²) ] dx + (1/(λ A_Ω)) ∫_Ω (S_f − S_g) φ_j dx = (1/(λ A_Ω)) ∫_Ω (f − g) φ_j dx.    (19)

Since S_f and S_g are in the finite dimensional space S, we can write S_f − S_g = Σ_{j=1}^N c_j φ_j for some coefficients c_j; thus Σ_{j=1}^N c_j ∇φ_j = ∇(S_f − S_g). Multiplying (19) by c_j for each j and summing over j = 1, ..., N, we get

    λ ∫_Ω ( ∇S_f/√(1 + |∇S_f|²) − ∇S_g/√(1 + |∇S_g|²) ) · ∇(S_f − S_g) dx + (1/A_Ω) ∫_Ω |S_f − S_g|² dx = (1/A_Ω) ∫_Ω (f − g)(S_f − S_g) dx.    (20)

Note that ρ(t) = √(1 + |t|²) is a convex, differentiable multivariate function for t ∈ R², where |t| stands for the Euclidean norm of the vector t. Its Hessian matrix H_ρ(t) is positive semi-definite. By Taylor expansion, for some ξ₁ and ξ₂ on the segment between t₁ and t₂, we have

    ( t₁/√(1 + |t₁|²) − t₂/√(1 + |t₂|²) ) · (t₁ − t₂) = (1/2) (t₁ − t₂)ᵀ (H_ρ(ξ₁) + H_ρ(ξ₂)) (t₁ − t₂) ≥ 0    (21)

for any t₁, t₂ ∈ R². In particular, let t₁ = ∇S_f and t₂ = ∇S_g in (21). We integrate (21) over Ω to get

    ∫_Ω ( ∇S_f/√(1 + |∇S_f|²) − ∇S_g/√(1 + |∇S_g|²) ) · ∇(S_f − S_g) dx ≥ 0.    (22)

Using the above inequality, (20) becomes

    ∫_Ω |S_f − S_g|² dx ≤ ∫_Ω (f − g)(S_f − S_g) dx.

Applying the Cauchy-Schwarz inequality to the right-hand side of the above inequality, we have

    ∫_Ω |S_f − S_g|² dx ≤ ( ∫_Ω |f − g|² dx )^{1/2} ( ∫_Ω |S_f − S_g|² dx )^{1/2}.


The desired inequality (16) follows. This completes the proof.

Let

    ‖f‖_d² = (1/n) Σ_{i=1}^n |f(x_i)|²

be the norm of the vector f = (f(x₁), ..., f(x_n))ᵀ. Let s_f ∈ S be the solution of (2) associated with the given function f. Similarly, let s_g ∈ S be the minimizer of (2) associated with g. By the same method as in the proof of Lemma 3.3, we can prove

Lemma 3.4 The norm of the difference of the spline functions s_f and s_g over the discrete points x₁, ..., x_n is bounded by the norm of the difference of f and g over the discrete points, i.e.,

    ‖s_f − s_g‖_d ≤ ‖f − g‖_d.    (23)

Recall from [31] that there exists a unique u_f ∈ H = BV(Ω) which solves the minimization problem (3). Our next task is to establish the stability of the solution u_f in H.

Theorem 3.1 Let u_f be the minimizer of the functional

    I(u, f) = ∫_Ω √(1 + |∇u|²) dx + (1/(2λ)) ‖u − f‖₂²

in H for any f ∈ L²(Ω). Similarly, let u_g be the minimizer of I(u, g) for g ∈ L²(Ω). Then

    ‖u_f − u_g‖₂ ≤ ‖f − g‖₂.

Proof. Let J(u) = ∫_Ω √(1 + |∇u|²) dx. Since J(u) is convex, we have

    J(u_g) ≥ J(u_f) + ⟨∂J(u_f), u_g − u_f⟩.

Since I(u, f) defined above is convex and u_f is its minimizer, ⟨∂I(u_f, f), v⟩ = ⟨∂J(u_f), v⟩ + (1/λ)⟨u_f − f, v⟩ = 0 for all v ∈ H. It follows that

    J(u_g) ≥ J(u_f) + (1/λ)⟨f − u_f, u_g − u_f⟩.

Similarly, we have J(u_f) ≥ J(u_g) + (1/λ)⟨g − u_g, u_f − u_g⟩. Adding these two inequalities together, we conclude that

    0 ≥ (1/λ) ⟨f − g − u_f + u_g, u_g − u_f⟩,

or ⟨u_g − u_f, u_g − u_f⟩ ≤ ⟨f − g, u_f − u_g⟩. Using the Cauchy-Schwarz inequality finishes the proof.

4 Approximation of S_f to u_f

Recall that S_f is the solution to the minimization problem (6) and u_f is the solution to (3). In this section we shall show that S_f approximates u_f under some sufficient conditions. Although we could use the arguments in [4] to show the convergence of S_f to u_f, that convergence would require u_f ∈ W^{2,∞}(Ω). As an image function may not have this high regularity, we use the following approach to establish the convergence under the assumption that u_f ∈ W^{1,1}(Ω).


Lemma 4.1 Let u_f be the solution to (3). For any u ∈ H,

    (1/(2λ)) ‖u − u_f‖₂² ≤ I(u) − I(u_f).    (24)

In particular, we have

    (1/(2λ)) ‖S_f − u_f‖₂² ≤ I(S_f) − I(u_f).    (25)

Proof. Using the concept of subdifferentiation and its basic properties (see, e.g., [13]), we have

    0 = ∂J(u_f) − (1/(λ A_Ω)) (f − u_f)  and  ⟨∂J(u_f), u − u_f⟩ ≤ J(u) − J(u_f),

where J(u) = ∫_Ω √(1 + |∇u|²) dx. From the above equations, it follows that

    (1/(λ A_Ω)) ∫_Ω (f − u_f)(u − u_f) dx ≤ ∫_Ω √(1 + |∇u|²) − √(1 + |∇u_f|²) dx.    (26)

We can write

    I(u) − I(u_f)
    = ∫_Ω √(1 + |∇u|²) − √(1 + |∇u_f|²) dx + (1/(2λ A_Ω)) ( ∫_Ω |u − f|² dx − ∫_Ω |u_f − f|² dx )
    = ∫_Ω √(1 + |∇u|²) − √(1 + |∇u_f|²) dx + (1/(2λ A_Ω)) ( ∫_Ω (u − u_f + u_f − f)² dx − ∫_Ω |u_f − f|² dx )
    = ∫_Ω √(1 + |∇u|²) − √(1 + |∇u_f|²) dx + (1/(λ A_Ω)) ∫_Ω (u − u_f)(u_f − f) dx + (1/(2λ A_Ω)) ∫_Ω |u − u_f|² dx
    ≥ (1/(2λ A_Ω)) ∫_Ω |u − u_f|² dx

by (26). Therefore the inequality (24) holds and hence we have (25).

Next we need to show that I(S_f) − I(u_f) → 0. To this end, we recall two standard tools. Since

Ω ⊂ R² is a region with piecewise smooth boundary ∂Ω and u_f is assumed to be in W^{1,1}(Ω), by the extension theorem in [29] there exists a linear operator E : W^{1,1}(Ω) → W^{1,1}(R²) such that
(i) E(u_f)|_Ω = u_f;
(ii) E maps W^{1,1}(Ω) continuously into W^{1,1}(R²):

    ‖E(u_f)‖_{W^{1,1}(R²)} ≤ C ‖u_f‖_{W^{1,1}(Ω)}.    (27)

Thus, without loss of generality, we may assume u_f ∈ W^{1,1}(R²) with (27).

Next we introduce the mollification of u_f as a tool to estimate the error between I(S_f) and I(u_f). The mollifier η : R² → R is defined by

    η(x) := C exp( 1/(|x|² − 1) )  if |x| < 1,  and  η(x) := 0  if |x| ≥ 1,

with the constant C > 0 selected so that ∫_{R²} η dx = 1, and we set

    η_ε(x) := (1/ε²) η(x/ε).


It is easy to see that ∫_{R²} η_ε(x) dx = 1 and supp(η_ε) ⊂ B(0, ε). Define the mollification u_f^ε of u_f by convolution,

    u_f^ε := η_ε ∗ u_f  in Ω,

that is,

    u_f^ε(x) = ∫_{Ω_ε} η_ε(x − y) u_f(y) dy = ∫_{B(x,ε)} η_ε(x − y) u_f(y) dy,

where Ω_ε := {x ∈ R² : dist(x, Ω) < ε}. It is known that ‖u_f^ε − u_f‖₂ → 0 as ε → 0; see, e.g., [14].

Our general plan to show I(S_f) − I(u_f) → 0 is to establish the following sequence of inequalities:

    I(u_f) ≤ I(S_f) ≤ I(Q u_f^ε) ≤ I(u_f^ε) + err(|△|, ε) ≤ I(u_f) + err_ε + err(|△|, ε),

where Q u_f^ε is the spline approximation of u_f^ε as in Theorem 2.2, and err(|△|, ε) and err_ε are error terms which go to zero when ε and |△| go to zero.
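Numerically, the mollifier is easy to realize; the following sketch (illustrative only, with the normalizing constant C computed by quadrature rather than in closed form) builds η_ε on a grid and verifies that it integrates to one:

```python
import numpy as np

# Sketch: the mollifier eta and its rescaling eta_eps on a grid; the
# normalizing constant C is computed by quadrature rather than in closed form.

def eta(x, y):
    r2 = x**2 + y**2
    out = np.zeros_like(r2, dtype=float)
    inside = r2 < 1.0
    out[inside] = np.exp(1.0 / (r2[inside] - 1.0))
    return out

h = 1e-3
xs = np.arange(-1.0, 1.0, h)
X, Y = np.meshgrid(xs, xs)
C = 1.0 / (eta(X, Y).sum() * h * h)      # so that the integral of C*eta is 1

def eta_eps(x, y, eps):
    return (C / eps**2) * eta(x / eps, y / eps)

# Check: eta_eps integrates to (approximately) 1 for any eps > 0.
eps = 0.25
print(eta_eps(X * eps, Y * eps, eps).sum() * (h * eps)**2)   # ~ 1.0
```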

We first show that I(u_f^ε) approximates I(u_f).

Lemma 4.2 Let u_f^ε be the mollification of u_f defined above. Then I(u_f^ε) approximates I(u_f) as ε → 0.

Proof. First we show that

    I(u_f^ε) ≤ I(u_f) + err_ε

for an error term err_ε given by

    err_ε = ∫_{Ω_ε\Ω} √(1 + |∇u_f(y)|²) dy + (1/(2λ)) ( ‖u_f^ε − u_f‖₂² + 2 ‖u_f^ε − u_f‖₂ ‖u_f − f‖₂ ),

which will be shown to go to zero as ε → 0.

By the convexity of √(1 + |t|²) and the properties of the mollifier, we have

    √(1 + |∇u_f^ε(x)|²) = √(1 + | ∫_{B(x,ε)} η_ε(x − y) ∇u_f(y) dy |²) ≤ ∫_{B(x,ε)} η_ε(x − y) √(1 + |∇u_f(y)|²) dy.

It follows that

    ∫_Ω √(1 + |∇u_f^ε(x)|²) dx ≤ ∫_Ω ∫_{B(x,ε)} η_ε(x − y) √(1 + |∇u_f(y)|²) dy dx    (28)
    ≤ ∫_{Ω_ε} √(1 + |∇u_f(y)|²) ∫_{B(y,ε)} η_ε(x − y) dx dy
    = ∫_{Ω_ε} √(1 + |∇u_f(y)|²) dy
    = ∫_Ω √(1 + |∇u_f(y)|²) dy + ∫_{Ω_ε\Ω} √(1 + |∇u_f(y)|²) dy.

By u_f ∈ W^{1,1}(R²) and (27), it follows that

    ∫_{Ω_ε\Ω} √(1 + |∇u_f(y)|²) dy ≤ ∫_{Ω_ε\Ω} (1 + |∇u_f(y)|) dy → 0  as ε → 0.

Next we have

    (1/(2λ)) ‖u_f^ε − f‖₂² ≤ (1/(2λ)) ( ‖u_f^ε − u_f‖₂² + 2 ‖u_f^ε − u_f‖₂ ‖u_f − f‖₂ + ‖u_f − f‖₂² ).


Since ‖u_f^ε − u_f‖₂ → 0 as in [14], and ‖u_f − f‖₂ is bounded because (1/(2λ)) ‖u_f − f‖₂² ≤ I(u_f) ≤ I(0), we obtain

    (1/(2λ)) ( ‖u_f^ε − u_f‖₂² + 2 ‖u_f^ε − u_f‖₂ ‖u_f − f‖₂ ) → 0  as ε → 0.

This finishes the proof that err_ε → 0.

The fact that ‖u_f^ε − u_f‖₂ → 0 and the inequality (28) imply u_f^ε ∈ W^{1,1}(Ω). Since u_f is the minimizer in BV(Ω), it follows that

    I(u_f) ≤ I(u_f^ε) ≤ I(u_f) + err_ε,

which implies that I(u_f^ε) approximates I(u_f) as ε → 0.

Next we need to estimate I(Q u_f^ε) − I(u_f^ε). To do so, we need a bound on |u_f^ε|_{Ω,2,1}. Note that |·|_{Ω,k,p} is the semi-norm defined in (8).

Lemma 4.3 For fixed ε > 0, u_f^ε ∈ W^{2,1}(Ω), and

    |u_f^ε|_{Ω,2,1} ≤ (C/ε) ‖u_f‖_{W^{1,1}(Ω)}    (29)

for a constant C > 0.

Proof. First we prove that u_f^ε ∈ W^{2,1}(Ω), that is, D^α u_f^ε ∈ L¹(Ω) for each multi-index α with |α| ≤ 2. Recall that u_f^ε(x) = ∫_{B(x,ε)} η_ε(x − y) u_f(y) dy. Since |·| is convex, we have

    |u_f^ε(x)| ≤ ∫_{B(x,ε)} η_ε(x − y) |u_f(y)| dy.

It follows that

    ∫_Ω |u_f^ε(x)| dx ≤ ∫_Ω ∫_{B(x,ε)} η_ε(x − y) |u_f(y)| dy dx = ∫_{Ω_ε} |u_f(y)| ∫_{B(y,ε)} η_ε(x − y) dx dy = ‖u_f‖_{Ω_ε,1} ≤ C ‖u_f‖_{W^{1,1}(Ω)}

by the inequality (27). Replacing u_f^ε by D₁u_f^ε and D₂u_f^ε above, respectively, we can prove that ‖D₁u_f^ε‖_{Ω,1} and ‖D₂u_f^ε‖_{Ω,1} are bounded in a similar manner.

Next we consider ‖D₁D₁u_f^ε‖_{Ω,1}. Writing

    D₁D₁u_f^ε = D₁u_f ∗ D₁η_ε = ∫_{B(x,ε)} D₁u_f(y) D₁η_ε(x − y) dy,

it follows that

    ‖D₁D₁u_f^ε‖_{Ω,1} = ∫_Ω | ∫_{B(x,ε)} D₁u_f(y) D₁η_ε(x − y) dy | dx
    ≤ ∫_Ω ∫_{B(x,ε)} |D₁u_f(y)| |D₁η_ε(x − y)| dy dx
    ≤ ∫_{Ω_ε} |D₁u_f(y)| ∫_{B(y,ε)} |D₁η_ε(x − y)| dx dy
    = ∫_{Ω_ε} |D₁u_f(y)| dy ∫_{B(0,ε)} |D₁η_ε(x)| dx.


Since for |x| ≤ ε,

    |D₁η_ε(x)| = (C/ε²) exp( −ε²/(ε² − |x|²) ) · 2ε² |x₁|/(ε² − |x|²)²
    ≤ 2Cε sup_{|x|≤ε} exp( −ε²/(ε² − |x|²) ) / (ε² − |x|²)²
    = (2C/ε³) sup_{|x|≤ε} exp( −ε²/(ε² − |x|²) ) ( ε²/(ε² − |x|²) )².

Changing the variable to t := ε²/(ε² − |x|²), t ∈ [1, ∞), it is easy to check that exp(−t) t² has maximum 4 exp(−2). Thus,

    |D₁η_ε(x)| ≤ 8C exp(−2)/ε³.

It follows that

    ‖D₁D₁u_f^ε‖_{Ω,1} ≤ ‖D₁u_f‖_{Ω_ε,1} (8C exp(−2)/ε³) ∫_{B(0,ε)} 1 dx ≤ ‖u_f‖_{W^{1,1}(Ω)} 16πC exp(−2)/ε = (C′/ε) ‖u_f‖_{W^{1,1}(Ω)}    (30)

for a positive constant C′. Using a similar approach, one can show that ‖D₁D₂u_f^ε‖_{Ω,1} and ‖D₂D₂u_f^ε‖_{Ω,1} have the same upper bound as in (30). Thus, we have proved that

    |u_f^ε|_{Ω,2,1} ≤ (C/ε) ‖u_f‖_{W^{1,1}(Ω)}

for some C > 0.

For a triangulation △ = {t_i} of Ω, recall |△| := max_{t_i∈△} |t_i|, where |t_i| denotes the diameter of the circumscribed circle of t_i. Using Theorem 2.2, we can choose s := Q u_f^ε ∈ S such that

    ‖D_1^α D_2^β (s − u_f^ε)‖_{Ω,1} ≤ C |△|^{2−α−β} |u_f^ε|_{Ω,2,1}    (31)

for all α + β = 1.

Lemma 4.4 Let s := Q u_f^ε ∈ S. Then I(s) approximates I(u_f^ε) as |△|/ε → 0.

Proof. We estimate the difference |I(s) − I(u_f^ε)| by

    |I(s) − I(u_f^ε)| ≤ | ∫_Ω √(1 + |∇s|²) − √(1 + |∇u_f^ε|²) dx | + (1/(2λ)) | ‖s − f‖₂² − ‖u_f^ε − f‖₂² |
    ≤ | ∫_Ω √(1 + |∇s|²) − √(1 + |∇u_f^ε|²) dx | + (1/(2λ)) ( ‖s − u_f^ε‖₂² + 2 ‖s − u_f^ε‖₂ ‖u_f^ε − f‖₂ ).

Let

    err(|△|, ε) := | ∫_Ω √(1 + |∇s|²) − √(1 + |∇u_f^ε|²) dx | + (1/(2λ)) ( ‖s − u_f^ε‖₂² + 2 ‖s − u_f^ε‖₂ ‖u_f^ε − f‖₂ )

be an error function. We shall see that err(|△|, ε) → 0 as |△|/ε → 0. Using a Sobolev inequality (cf. [14]), we have

    ‖s − u_f^ε‖₂ ≤ C ‖∇(s − u_f^ε)‖₁ ≤ (C|△|/ε) ‖u_f‖_{W^{1,1}(Ω)}

by using the inequality in (31) with α + β = 1 together with (29). Using the estimates in (31) and (29) again, we have

    | ∫_Ω √(1 + |∇s|²) − √(1 + |∇u_f^ε|²) dx | = | ∫_Ω (|∇s|² − |∇u_f^ε|²) / ( √(1 + |∇s|²) + √(1 + |∇u_f^ε|²) ) dx |
    ≤ ∫_Ω |∇s − ∇u_f^ε| |∇s + ∇u_f^ε| / ( √(1 + |∇s|²) + √(1 + |∇u_f^ε|²) ) dx
    ≤ ‖∇(s − u_f^ε)‖_{Ω,1}
    ≤ (C′|△|/ε) ‖u_f‖_{W^{1,1}(Ω)}.

Here both C and C′ are positive constants independent of ε, △ and u_f. It is not hard to see that ‖u_f^ε − f‖₂ is bounded, because

    ‖u_f^ε − f‖₂ ≤ ‖u_f^ε − u_f‖₂ + ‖u_f − f‖₂,

where ‖u_f^ε − u_f‖₂ ≤ 1 for ε small enough and ‖u_f − f‖₂² ≤ 2λ I(0) by the minimizing property of u_f. Therefore, it can be concluded that err(|△|, ε) → 0 as |△|/ε → 0, and thus I(s) approximates I(u_f^ε).

Summarizing the discussion above, we have

Theorem 4.1 Suppose that u_f ∈ W^{1,1}(Ω). Then S_f approximates u_f in L² as |△| → 0.

Proof. Since S ⊂ H, we have I(u_f) ≤ I(S_f). Also, s ∈ S implies I(S_f) ≤ I(s). By Lemmas 4.2 and 4.4, we have

    I(u_f) ≤ I(S_f) ≤ I(s) ≤ I(u_f^ε) + err(|△|, ε) ≤ I(u_f) + err_ε + err(|△|, ε).

For each triangulation △, we choose ε = √|△|, which ensures that |△|/ε → 0. Thus, the above error terms go to zero as |△| → 0. By Lemma 4.1, S_f converges to u_f in the L² norm.

Under the assumption that the data locations x_i := x_{i,n}, i = 1, ..., n, are evenly distributed over Ω in the following sense:

    lim_{n→∞} (1/n) Σ_{i=1}^n |u(x_i) − f_i|² = (1/A_Ω) ∫_Ω |u(x) − f(x)|² dx,

we can prove the convergence of s_f to u_f in a similar fashion. The proof is left to the interested reader.

5 Numerical Algorithms and Their Convergence

We now study how to compute the minimizer S_f of (6) as well as the minimizer s_f of (2) numerically. First of all, we define two computational algorithms, one for S_f and one for s_f. These algorithms give us two iterative sequences of spline functions. We then prove some properties of the iterative sequences. With these preparations, we prove that our numerical algorithms are convergent and that the iterative solutions converge to S_f and s_f, respectively.

The following iteration will be used to approximate S_f.

Algorithm 5.1 Given u^{(k)} ∈ S, find u^{(k+1)} ∈ S such that

    ∫_Ω (∇u^{(k+1)} · ∇φ_j)/√(1 + |∇u^{(k)}|²) dx + (1/(λ A_Ω)) ∫_Ω u^{(k+1)} φ_j dx = (1/(λ A_Ω)) ∫_Ω f φ_j dx  for all j = 1, ..., N.    (32)


We first show that the above iteration is well defined. Since u^{(k+1)} ∈ S, it can be written as u^{(k+1)} = Σ_{i=1}^N c_i^{(k+1)} φ_i. Plugging this into (32), we have

    Σ_{i=1}^N c_i^{(k+1)} ( ∫_Ω (∇φ_i · ∇φ_j)/√(1 + |∇u^{(k)}|²) dx + (1/(λ A_Ω)) ∫_Ω φ_i φ_j dx ) = (1/(λ A_Ω)) ∫_Ω f φ_j dx,  j = 1, ..., N.    (33)

Denote

    D^{(k)} := (d_{i,j}^{(k)})_{N×N}  with  d_{i,j}^{(k)} = λ ∫_Ω (∇φ_i · ∇φ_j)/√(1 + |∇u^{(k)}|²) dx,
    M := (m_{i,j})_{N×N}  with  m_{i,j} = (1/A_Ω) ∫_Ω φ_i φ_j dx,
    v := (v_j, j = 1, ..., N)  with  v_j = (1/A_Ω) ∫_Ω f φ_j dx.

Then, after multiplying through by λ, solving (33) is equivalent to solving the equation

    (D^{(k)} + M) c^{(k+1)} = v,    (34)

where c^{(k+1)} = [c_1^{(k+1)}, c_2^{(k+1)}, ..., c_N^{(k+1)}]ᵀ.
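In code, Algorithm 5.1 is a fixed-point loop: reassemble D^{(k)} from the current iterate and solve the linear system (34). A minimal sketch follows, in which assemble_D (a quadrature routine returning D^{(k)} from the current coefficient vector) and the matrices M and v are assumed to be supplied by the spline machinery:

```python
import numpy as np

# Sketch of Algorithm 5.1. Assumed inputs, supplied by the spline machinery
# and not defined here:
#   assemble_D(c) -- returns the N x N matrix D^(k) with entries
#       d_ij = lambda * integral of grad(phi_i).grad(phi_j) / sqrt(1 + |grad u^(k)|^2),
#       where u^(k) is the spline with coefficient vector c;
#   M, v          -- the mass matrix and load vector of (34).

def minimal_surface_iteration(assemble_D, M, v, c0, tol=1e-8, max_iter=200):
    """Fixed-point iteration for the minimal surface area spline S_f."""
    c = np.asarray(c0, dtype=float)
    for _ in range(max_iter):
        D = assemble_D(c)                      # depends on u^(k) through c
        c_new = np.linalg.solve(D + M, v)      # solve (34); D + M is SPD
        if np.linalg.norm(c_new - c) <= tol * max(np.linalg.norm(c), 1.0):
            return c_new
        c = c_new
    return c
```

Lemma 5.1 below guarantees that each linear solve is well posed, and Theorem 5.1 shows that the loop converges to the coefficients of S_f.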

Lemma 5.1 Equation (34) has a unique solution c^{(k+1)}.

Proof. It is easy to see that D^{(k)} is positive semi-definite and M is positive definite because, for any nonzero c = (c_i)_{i=1}^N,

    cᵀ D^{(k)} c = λ ∫_Ω |Σ_{i=1}^N c_i ∇φ_i|² / √(1 + |∇u^{(k)}|²) dx ≥ 0

and

    cᵀ M c = (1/A_Ω) ∫_Ω |Σ_{i=1}^N c_i φ_i|² dx > 0.

Hence D^{(k)} + M is positive definite and therefore invertible, so (34) has a unique solution.

Lemma 5.2 The iterates u^{(k)} are bounded in the L² norm by ‖f‖₂ for all k > 0. That is,

    ‖u^{(k+1)}‖₂ ≤ ‖f‖₂.    (35)

Moreover, since u^{(k+1)} ∈ S, by using Theorem 2.1 we have

    ‖∇u^{(k+1)}‖₂ ≤ C ‖f‖₂,

where C depends on β and |△|.

Proof. Multiplying both sides of (34) by (c^{(k+1)})ᵀ, we have

    λ ∫_Ω |∇u^{(k+1)}|² / √(1 + |∇u^{(k)}|²) dx + (1/A_Ω) ∫_Ω |u^{(k+1)}|² dx = (1/A_Ω) ∫_Ω f u^{(k+1)} dx.    (36)

Since the first term of (36) is nonnegative, we have

    ‖u^{(k+1)}‖₂² ≤ (1/A_Ω) ∫_Ω f u^{(k+1)} dx ≤ ‖f‖₂ ‖u^{(k+1)}‖₂,    (37)

which yields ‖u^{(k+1)}‖₂ ≤ ‖f‖₂, and hence ‖u^{(k+1)}‖₂ is bounded if f ∈ L²(Ω). The second part of Lemma 5.2 follows straightforwardly by using Markov's inequality from Theorem 2.1.

Similarly, we have an iterative algorithm for the approximation of s_f.

Algorithm 5.2 Given u^{(k)} ∈ S, find u^{(k+1)} ∈ S such that

    λ ∫_Ω (∇u^{(k+1)} · ∇φ_j)/√(1 + |∇u^{(k)}|²) dx + (1/n) Σ_{i=1}^n u^{(k+1)}(x_i) φ_j(x_i) = (1/n) Σ_{i=1}^n f(x_i) φ_j(x_i)  for all j = 1, ..., N.    (38)

We can prove that this iterative algorithm is well defined if we assume the condition (13). Indeed, as above, we write u^{(k+1)} = Σ_{i=1}^N c_i^{(k+1)} φ_i. Plugging this into (38), we have

    Σ_{i=1}^N c_i^{(k+1)} ( λ ∫_Ω (∇φ_i · ∇φ_j)/√(1 + |∇u^{(k)}|²) dx + (1/n) Σ_{ℓ=1}^n φ_i(x_ℓ) φ_j(x_ℓ) ) = (1/n) Σ_{ℓ=1}^n f(x_ℓ) φ_j(x_ℓ),  j = 1, ..., N.    (39)

These lead to the following system of equations:

    (D^{(k)} + M̃) c^{(k+1)} = ṽ,    (40)

where the matrix M̃ and the vector ṽ are slightly different from the matrix M and the vector v above.
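Concretely, the only new ingredients in (40) are the data-site Gram matrix M̃ and the load vector ṽ; with a basis-evaluation matrix Phi (Phi[l, i] = φ_i(x_l), an assumed input as before), both are one line each:

```python
import numpy as np

# Sketch: the data-site terms of (40), assuming Phi[l, i] = phi_i(x_l) at the
# data sites x_l (hypothetical input) and fvals[l] = f(x_l).

def assemble_discrete_terms(Phi, fvals):
    n = Phi.shape[0]
    M_tilde = Phi.T @ Phi / n        # (1/n) sum_l phi_i(x_l) phi_j(x_l)
    v_tilde = Phi.T @ fvals / n      # (1/n) sum_l f(x_l) phi_j(x_l)
    return M_tilde, v_tilde
```

The condition (13) is exactly what makes M̃ positive definite (Lemma 5.3 below), so the same fixed-point loop as for Algorithm 5.1 applies with M and v replaced by M̃ and ṽ.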

Lemma 5.3 Suppose that the data locations satisfy the condition (13). Then (40) has a unique solution c^{(k+1)}.

Proof. We have seen that D^{(k)} is positive semi-definite. We now show that M̃ is positive definite. For any nonzero c = (c_i)_{i=1}^N,

    cᵀ M̃ c = (1/n) Σ_{ℓ=1}^n |Σ_{i=1}^N c_i φ_i(x_ℓ)|² > 0.

Indeed, if the above term were zero, then for each triangle t we would have, by the condition (13),

    F₁ ‖Σ_{i=1}^N c_i φ_i‖_{∞,t} ≤ ( Σ_{x_ℓ∈t} |Σ_{i=1}^N c_i φ_i(x_ℓ)|² )^{1/2} = 0,

which implies that the spline function Σ_{i=1}^N c_i φ_i ≡ 0 and hence that the coefficients c_i are all zero. This is a contradiction. Thus, M̃ is positive definite. It follows that D^{(k)} + M̃ is also positive definite, and hence invertible, so (40) has a unique solution.

Similarly, we can show that u^{(k)} is bounded in the following sense:

Lemma 5.4 Suppose that the data locations satisfy the condition (13). Then the iterates u^{(k)} are bounded for all k > 0. That is,

    Σ_{i=1}^n |u^{(k+1)}(x_i)|² ≤ Σ_{i=1}^n |f_i|².    (41)

In addition,

    ‖∇u^{(k+1)}‖₂² ≤ C Σ_{i=1}^n |f_i|²,

where C depends on β and |△|.


Proof. Multiplying both sides of (40) by (c^{(k+1)})ᵀ, we have

    λ ∫_Ω |∇u^{(k+1)}|² / √(1 + |∇u^{(k)}|²) dx + (1/n) Σ_{ℓ=1}^n |u^{(k+1)}(x_ℓ)|² = (1/n) Σ_{ℓ=1}^n f_ℓ u^{(k+1)}(x_ℓ).    (42)

By the Cauchy-Schwarz inequality, we have

    Σ_{ℓ=1}^n f_ℓ u^{(k+1)}(x_ℓ) ≤ (1/2) Σ_{ℓ=1}^n |f_ℓ|² + (1/2) Σ_{ℓ=1}^n |u^{(k+1)}(x_ℓ)|².

It follows that

    λ ∫_Ω |∇u^{(k+1)}|² / √(1 + |∇u^{(k)}|²) dx + (1/(2n)) Σ_{ℓ=1}^n |u^{(k+1)}(x_ℓ)|² ≤ (1/(2n)) Σ_{ℓ=1}^n |f_ℓ|².    (43)

Thus we have the first part of Lemma 5.4. For the second part, we show that the maximum norm ‖∇u^{(k+1)}‖_{∞,Ω} is bounded in terms of the right-hand side of (41). Indeed, by Theorem 2.1, we have

    ‖∇u^{(k+1)}‖_{∞,Ω} ≤ (C/|△|) ‖u^{(k+1)}‖_{∞,Ω}.

As the maximum norm of u^{(k+1)} is achieved on some triangle t ∈ △, we use the condition (13) to see that

    ‖u^{(k+1)}‖_{∞,Ω} ≤ (1/F₁) ( Σ_{ℓ=1}^n |u^{(k+1)}(x_ℓ)|² )^{1/2} ≤ (1/F₁) ( Σ_{ℓ=1}^n |f_ℓ|² )^{1/2}.

It follows that the second part of Lemma 5.4 is true.

Next we show that the iterative algorithms above converge. We first consider the iterative solutions u^{(k)} from Algorithm 5.1.

Lemma 5.5 If u^{(k+1)} is the solution from Algorithm 5.1, then the following inequality holds:

    I(u^{(k)}) − I(u^{(k+1)}) ≥ (1/(2λ A_Ω)) ∫_Ω |u^{(k)} − u^{(k+1)}|² dx,    (44)

where I(·) is the energy functional given in (4).

Proof. First of all, we use (32) to get

    (1/(λ A_Ω)) ∫_Ω (f − u^{(k+1)})(u^{(k)} − u^{(k+1)}) dx = ∫_Ω (∇u^{(k)} · ∇u^{(k+1)}) / √(1 + |∇u^{(k)}|²) dx − ∫_Ω |∇u^{(k+1)}|² / √(1 + |∇u^{(k)}|²) dx,

since u^{(k)} − u^{(k+1)} is a linear combination of the φ_j, j = 1, ..., N. Then the following inequality follows:

    (1/(λ A_Ω)) ∫_Ω (f − u^{(k+1)})(u^{(k)} − u^{(k+1)}) dx ≤ ∫_Ω |∇u^{(k)}|² / (2√(1 + |∇u^{(k)}|²)) dx − ∫_Ω |∇u^{(k+1)}|² / (2√(1 + |∇u^{(k)}|²)) dx.    (45)

Now we are ready to prove (44). The difference between I(u^{(k)}) and I(u^{(k+1)}) is

    I(u^{(k)}) − I(u^{(k+1)})
    = ∫_Ω √(1 + |∇u^{(k)}|²) − √(1 + |∇u^{(k+1)}|²) dx + (1/(2λ A_Ω)) ∫_Ω |u^{(k)} − f|² − |u^{(k+1)} − f|² dx
    = ∫_Ω √(1 + |∇u^{(k)}|²) − √(1 + |∇u^{(k+1)}|²) dx + (1/(2λ A_Ω)) ∫_Ω (u^{(k)} − u^{(k+1)})(u^{(k)} + u^{(k+1)} − 2f) dx
    = ∫_Ω √(1 + |∇u^{(k)}|²) − √(1 + |∇u^{(k+1)}|²) dx + (1/(λ A_Ω)) ∫_Ω (u^{(k+1)} − f)(u^{(k)} − u^{(k+1)}) dx + (1/(2λ A_Ω)) ∫_Ω |u^{(k)} − u^{(k+1)}|² dx,

which yields the result of this lemma, since the sum of the first two terms in the last expression above is nonnegative. Indeed, by applying (45), we have

    ∫_Ω √(1 + |∇u^{(k)}|²) − √(1 + |∇u^{(k+1)}|²) dx + (1/(λ A_Ω)) ∫_Ω (u^{(k+1)} − f)(u^{(k)} − u^{(k+1)}) dx
    ≥ ∫_Ω √(1 + |∇u^{(k)}|²) − √(1 + |∇u^{(k+1)}|²) dx − ∫_Ω |∇u^{(k)}|² / (2√(1 + |∇u^{(k)}|²)) dx + ∫_Ω |∇u^{(k+1)}|² / (2√(1 + |∇u^{(k)}|²)) dx
    = ∫_Ω (2 + |∇u^{(k)}|² + |∇u^{(k+1)}|²) / (2√(1 + |∇u^{(k)}|²)) − √(1 + |∇u^{(k+1)}|²) dx
    ≥ ∫_Ω ( √(1 + |∇u^{(k)}|²) √(1 + |∇u^{(k+1)}|²) ) / √(1 + |∇u^{(k)}|²) − √(1 + |∇u^{(k+1)}|²) dx = 0.

This establishes the lemma.

We are now ready to show the convergence of u^{(k)} to the minimizer S_f.

Theorem 5.1 The sequence u^{(k)} obtained from Algorithm 5.1 converges to the true minimizer S_f.

Proof. By Lemma 5.2, the sequence u^{(k)} is bounded; in fact, ‖u^{(k)}‖₂ ≤ ‖f‖₂. So there must be a convergent subsequence u^{(n_j)}; suppose u^{(n_j)} → u. By Lemma 5.5, I(u^{(k)}) is a decreasing sequence which is bounded below, so I(u^{(k)}) is convergent, as is any of its subsequences. We use Lemma 5.5 (recalling that ‖h‖₂² = (1/A_Ω) ∫_Ω |h|² dx) to get

    ‖u^{(n_j+1)} − u‖₂² ≤ 2 ‖u^{(n_j+1)} − u^{(n_j)}‖₂² + 2 ‖u^{(n_j)} − u‖₂² ≤ 4λ ( I(u^{(n_j)}) − I(u^{(n_j+1)}) ) + 2 ‖u^{(n_j)} − u‖₂² → 0,

which implies u^{(n_j+1)} → u. According to Markov's inequality, i.e., Theorem 2.1, we have

    ∫_Ω |∇u^{(n_j)} − ∇u|² dx ≤ (K²β²/|△|²) ∫_Ω |u^{(n_j)} − u|² dx.    (46)

It follows from the convergence u^{(n_j)} → u that ∇u^{(n_j)} → ∇u in the L² norm as well. Replacing u^{(n_j)} by u^{(n_j+1)} above, we also have ∇u^{(n_j+1)} → ∇u, by the convergence u^{(n_j+1)} → u. Since u^{(n_j)}, u^{(n_j+1)} and u are spline functions in S, the convergence of u^{(n_j)} and u^{(n_j+1)} to u implies that their coefficients with respect to the basis functions φ_j, j = 1, ..., N, converge to the coefficients of u.

Since u^{(n_j+1)} solves the equations (32), we have

    ∫_Ω (∇u^{(n_j+1)} · ∇φ_i) / √(1 + |∇u^{(n_j)}|²) dx = ∫_Ω ((f − u^{(n_j+1)})/(λ A_Ω)) φ_i dx    (47)

for all φ_i, i = 1, ..., N. Letting j → ∞, we obtain

    ∫_Ω (∇u · ∇φ_i) / √(1 + |∇u|²) dx = ∫_Ω ((f − u)/(λ A_Ω)) φ_i dx    (48)

for all i = 1, ..., N. That is, u is a critical point, and since the functional is convex, a critical point is a global minimizer; hence u = S_f by uniqueness. Thus every convergent subsequence of the bounded sequence u^{(k)} converges to S_f, and therefore the whole sequence u^{(k)} converges to S_f.

Similarly, letting u^{(k)} be the sequence from Algorithm 5.2, we can show the convergence of u^{(k)} to the minimizer s_f. The proof of the following theorem is left to the interested reader.

Theorem 5.2 Suppose that the data sites x_i, i = 1, ..., n, satisfy the condition (13). Then the sequence u^{(k)} obtained from Algorithm 5.2 converges to the true minimizer s_f.
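Lemma 5.5 also suggests a practical stopping rule for either algorithm: the energy decreases monotonically along the iteration, so one can stop when the decrease per step falls below a tolerance. A hedged sketch, assuming a quadrature routine energy(c) that evaluates (4) (or (1)) for the spline with coefficients c and a routine step(c) performing one solve of (34) or (40):

```python
# Sketch: energy-decrease stopping rule based on Lemma 5.5. Assumed inputs:
#   step(c)   -- one iteration, i.e. one solve of (34) or (40);
#   energy(c) -- quadrature evaluation of the functional I in (4) (or I_d in
#                (1)) for the spline with coefficient vector c.

def iterate_until_energy_flat(step, energy, c0, tol=1e-10, max_iter=500):
    c, e = c0, energy(c0)
    for _ in range(max_iter):
        c = step(c)
        e_new = energy(c)         # Lemma 5.5: e_new <= e, so e - e_new >= 0
        if e - e_new < tol:       # the monotone decrease has flattened out
            return c
        e = e_new
    return c
```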


Figure 1: Triangulations of Various Regions

6 Numerical Results

We have implemented our bivariate spline approach in MATLAB and simulated several image enhancement experiments: image denoising, stain removal and wrinkle removal. We report these numerical results in the following examples.

Example 6.1 (Image Denoising) Let us first consider the standard problem of image denoising. Let L be the image Lena. We first use the standard discrete Perona-Malik (PM) model to reduce the noise in L. Then we choose several regions of interest to reduce the noise further. In Fig. 1, a triangulation of several regions of the image L is shown. We apply our minimal surface area spline approach to suppress noise in these regions. In Fig. 2 we show a noisy image (using Gaussian noise 30N(0,1)), the denoised image produced by our approach, the exact image, and the denoised image produced by the discrete PM model. We put all four images on one page so that we can compare them and see that our approach works very well.

Next we repeat the above experiment for another well-known image, peppers. We first triangulate some regions of interest as shown in Fig. 3 and then apply our spline approach. From Fig. 4 we can see that the denoised image produced by our approach is smooth in the regions where our approach is applied; indeed, these regions are smoother than in the original exact image. The denoised image produced by the Perona-Malik approach looks bumpy in many local regions.
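For reference, the noisy inputs in Example 6.1 are produced by adding Gaussian noise of standard deviation 30 on the 0-255 gray scale, i.e., 30N(0,1) per pixel; a short NumPy sketch (the clipping to the 8-bit range is our assumption):

```python
import numpy as np

# Sketch: additive Gaussian noise 30 * N(0,1), as in Example 6.1; clipping
# back to the 8-bit gray-level range is our assumption.
def add_noise(image, sigma=30.0, seed=0):
    rng = np.random.default_rng(seed)
    noisy = image.astype(float) + sigma * rng.standard_normal(image.shape)
    return np.clip(noisy, 0.0, 255.0)
```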

Example 6.2 (Stain Removal or Damaged Image Repair) The next example is removing stains from images or repairing damaged images. One example is to remove letters from the human face as shown in Fig. 5. Two further examples are the removal of scratches over different background images, as shown in Fig. 6 and Fig. 7.

Example 6.3 (Wrinkle Removal) Finally, we present a wrinkle removal experiment. We are interested in removing some wrinkles from a human face. We identify a couple of regions of interest and apply our bivariate spline approach to them. In Fig. 8, two images are shown. The human face on the right is clearly enhanced in the areas near the eyes and cheeks.

Figure 2: A noisy Lena image and the denoised image by our approach (top row), and the original image and the denoised image by the Perona-Malik method (bottom row)

Figure 3: Triangulations of Various Regions of the Image Peppers

Figure 4: A noisy peppers image and the denoised image by our approach (top row), and the exact image and the denoised image by the Perona-Malik method (bottom row)

Figure 5: Lena with a word near her nose and Lena with the word removed

Figure 6: In the clockwise direction from the top left corner: a true image, a damaged image with scratches, a triangulation of the damaged image, and the resulting image from our bivariate spline approach

Figure 7: In the clockwise direction from the top left corner: a true image, a damaged image with scratches, a triangulation of the damaged image, and the resulting image from our bivariate spline approach

Figure 8: A face with wrinkles on the left and the face with reduced wrinkles on the right

7 Remarks

We have the following remarks in order.

Remark 7.1 In the energy functional (1), we may replace the integrand to obtain

    I_d(u) = ∫_Ω √(ε + |∇u(x)|²) dx + (1/(2λ)) (1/n) Σ_{i=1}^n |u(x_i) − f_i|².    (49)

When ε → 0+, this problem is related to the well-known ROF model for image denoising, so our bivariate spline approach can be used for image denoising. As another example, one can consider

    I_d(u) = ∫_Ω (ε + |∇u(x)|²)^{p/2} dx + (1/(2λ)) (1/n) Σ_{i=1}^n |u(x_i) − f_i|²    (50)

for p ≥ 1. Our bivariate spline approach can be extended to these settings easily. We leave the study to [18].
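For what it is worth, the only change these variants require in the iterations of Section 5 is the diffusivity weight 1/√(1 + |∇u^{(k)}|²) appearing in D^{(k)}: differentiating the integrands of (49) and (50) as in (5) gives the weights below (a sketch of our own derivation, stated for illustration; the second weight also covers the integrand of Remark 7.3 below):

```python
import numpy as np

# Sketch (our own derivation, for illustration): the diffusivity weights that
# replace 1/sqrt(1 + |grad u^(k)|^2) in D^(k) for the variant integrands.
# Here g2 stands for |grad u^(k)|^2 at a quadrature point.

def weight_minimal_surface(g2, eps=1.0):
    # integrand sqrt(eps + |grad u|^2), eq. (49); eps = 1 recovers (1)
    return 1.0 / np.sqrt(eps + g2)

def weight_p_energy(g2, p=1.5, eps=1e-6):
    # integrand (eps + |grad u|^2)^(p/2), eqs. (50) and (52)
    return p * (eps + g2) ** (p / 2.0 - 1.0)
```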

Remark 7.2 Furthermore, one can consider

    I_d(u) = ∫_Ω √(1 + |∆u(x)|²) dx + (1/(2λ)) (1/n) Σ_{i=1}^n |u(x_i) − f_i|²    (51)

with ∆ being the standard Laplace operator. This model for image denoising has been studied by X. Tai and his collaborators. We can use bivariate splines to minimize the energy functional I_d above for scattered data fitting and study the mathematical properties of this fourth order model. Again we leave the study to [18].

Remark 7.3 On the other hand, one can consider

    I_{d,q}(u) = ∫_Ω (ε + |∇u(x)|²)^{q/2} dx + (1/(2λ)) (1/n) Σ_{i=1}^n |u(x_i) − f_i|²    (52)

with 0 < q < 1. We shall study the minimization of the above energy functional. Our MATLAB programs for the numerical results discussed in this paper can be easily modified to fit this setting, so that we can study the effectiveness for image denoising as ε → 0+. More results may be reported elsewhere.

References

[1] R. Acar and C. R. Vogel, Analysis of bounded variation penalty methods for ill-posed problems, Inverse Problems, 10 (1994), 1217-1229.

[2] G. Awanou, M. J. Lai, and P. Wenston, The multivariate spline method for numerical solution of partial differential equations and scattered data interpolation, in Wavelets and Splines: Athens 2005, edited by G. Chen and M. J. Lai, Nashboro Press, Nashville, TN, 2006, 24-74.

[3] G. Aubert and P. Kornprobst, Mathematical Problems in Image Processing, Springer, 2006.

[4] P. G. Ciarlet, The Finite Element Method for Elliptic Problems, North-Holland, Amsterdam, New York, 1978.

[5] A. Chambolle, An algorithm for total variation minimization and applications, Journal of Mathematical Imaging and Vision, 20 (2004), 89-97.

[6] A. Chambolle and P.-L. Lions, Image recovery via total variation minimization and related problems, Numer. Math., 76 (1997), 167-188.

[7] A. Chambolle and B. J. Lucier, Interpreting translation-invariant wavelet shrinkage as a new image smoothing scale space, IEEE Trans. Image Process., 10 (2001), 993-1000.

[8] A. Chambolle, R. A. DeVore, N. Lee, and B. J. Lucier, Nonlinear wavelet image processing: variational problems, compression, and noise removal through wavelet shrinkage, IEEE Trans. Image Process., 7 (1998), 319-335.

[9] D. L. Donoho, De-noising by soft thresholding, IEEE Trans. Inform. Theory, 43 (1993), 933-936.

[10] D. L. Donoho and I. M. Johnstone, Ideal spatial adaptation by wavelet shrinkage, Biometrika, 81 (1994), 425-455.

[11] D. L. Donoho and I. M. Johnstone, Minimax estimation via wavelet shrinkage, Annals of Statistics, 26 (1998), 879-921.

[12] V. Duval, J.-F. Aujol, and L. Vese, A projected gradient algorithm for color image decomposition, CMLA Preprint 2008-21.

[13] I. Ekeland and R. Temam, Convex Analysis and Variational Problems, Cambridge University Press, 1999.

[14] L. C. Evans, Partial Differential Equations, American Mathematical Society, 2002, pp. 629-631.

[15] M. von Golitschek, M. J. Lai, and L. L. Schumaker, Error bounds for minimal energy bivariate polynomial splines, Numer. Math., 93 (2002), 315-331.

[16] M. von Golitschek and L. L. Schumaker, Bounds on projections onto bivariate polynomial spline spaces with stable local bases, Constr. Approx., 18 (2002), 241-254.

[17] M. von Golitschek and L. L. Schumaker, Penalized least squares fitting, Serdica, 18 (2002), 1001-1020.

[18] Q. Hong, Bivariate Splines for Image Denoising, Ph.D. Dissertation, University of Georgia, Athens, Georgia, 2010.

[19] M.-J. Lai, Multivariate splines for data fitting and approximation, in Approximation Theory XII: San Antonio 2007, edited by M. Neamtu and L. L. Schumaker, Nashboro Press, Brentwood, TN, 2008, pp. 210-228.

[20] M.-J. Lai and L. L. Schumaker, Approximation power of bivariate splines, Advances in Comput. Math., 9 (1998), pp. 251-279.

[21] M.-J. Lai and L. L. Schumaker, Spline Functions on Triangulations, Cambridge University Press, 2007.

[22] M.-J. Lai and L. L. Schumaker, A domain decomposition technique for scattered data interpolation and fitting, SIAM J. Numer. Anal., 47 (2009), 911-928.

[23] M.-J. Lai and J. Wang, Convergence of a central difference discretization of the ROF model for image denoising, manuscript, submitted, 2009.

[24] J. S. Lim, Two-Dimensional Signal and Image Processing, Prentice Hall, 1989.

[25] K. Nam, Tight wavelet frame construction and its application for image processing, Doctoral Dissertation, The University of Georgia, 2005.

[26] Y. Meyer, Oscillating Patterns in Image Processing and Nonlinear Evolution Equations, volume 22 of University Lecture Series, American Mathematical Society, Providence, RI, 2001.

[27] P. Perona and J. Malik, Scale-space and edge detection using anisotropic diffusion, IEEE Trans. Pattern Anal. Mach. Intell., 12 (1990), pp. 629-639.

[28] L. Rudin, S. Osher, and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D, 60 (1992), 259-268.

[29] E. M. Stein, Singular Integrals and Differentiability Properties of Functions, Princeton University Press, 1970.

[30] X.-C. Tai et al., eds., Scale Space and Variational Methods in Computer Vision, Lecture Notes in Computer Science, Vol. 5567, Springer, Berlin, Heidelberg, 2009.

[31] L. Vese, A study in the BV space of a denoising-deblurring variational problem, Appl. Math. Optim., 44 (2001), 131-161.

[32] J. Wang and B. J. Lucier, Error bounds for finite-difference methods for Rudin-Osher-Fatemi image smoothing, submitted, 2009.
