
Chapter 2

Analysis of Adaptive FE Algorithms for the Poisson Problem

2.1 Abstract Variational Problem

Let $W$ be a real Hilbert space endowed with norm $\|\cdot\|_W$, and let $W'$ denote its dual space, i.e., the space of all bounded linear functionals $f : W \to \mathbb{R}$ equipped with the norm

\[ \|f\|_{W'} := \sup_{w \in W,\, w \neq 0} \frac{|f(w)|}{\|w\|_W}. \tag{2.1.1} \]

Consider the following abstract variational problem (AVP):

\[ \text{Given } f \in W', \text{ find } u \in W \text{ such that } a(u,w) = f(w) \text{ for all } w \in W. \tag{2.1.2} \]

Definition 2.1.1 A bilinear form $a(\cdot,\cdot) : W \times W \to \mathbb{R}$ is called continuous if there exists a constant $\varsigma > 0$ such that

\[ |a(u,w)| \leq \varsigma\, \|u\|_W \|w\|_W \quad \text{for all } u, w \in W. \tag{2.1.3} \]

Definition 2.1.2 A bilinear form $a(\cdot,\cdot)$ is called $W$-elliptic (or coercive) if there exists a constant $\alpha > 0$ such that

\[ a(w,w) \geq \alpha \|w\|_W^2 \quad \text{for all } w \in W. \tag{2.1.4} \]

The question of existence and uniqueness of the solution of (AVP) is answered by the Lax-Milgram lemma ([35, 13]):


Lemma 2.1.3 (Lax-Milgram) Let $W$ be a (real) Hilbert space, $a(\cdot,\cdot) : W \times W \to \mathbb{R}$ a bilinear, continuous and $W$-elliptic form, and $f : W \to \mathbb{R}$ a bounded linear functional. Then there exists a unique solution $u \in W$ to (AVP), and moreover

\[ \|u\|_W \leq \frac{1}{\alpha} \|f\|_{W'}, \tag{2.1.5} \]

where the constant $\alpha$ stems from the coercivity condition.

2.1.1 Galerkin Method

In the following, let us assume that the conditions of the Lax-Milgram lemma hold. With the aim of building a numerical approximation to the solution of the abstract variational problem, we choose a sequence of subspaces $(W_j)_{j \in \mathbb{N}} \subset W$ such that

\[ \inf_{w_j \in W_j} \|w - w_j\|_W \to 0 \text{ as } j \to \infty \quad \text{for all } w \in W. \tag{2.1.6} \]

For each $W_j \subset W$ we can formulate the following discrete problem:

\[ \text{Given } f \in W', \text{ find } u_j \in W_j \text{ such that } a(u_j, w_j) = f(w_j) \text{ for all } w_j \in W_j. \tag{2.1.7} \]

Since $W_j$ is a subspace of $W$, the conditions of the Lax-Milgram lemma also hold for the discrete problem, providing the existence and uniqueness of the discrete solution $u_j$, which is called a Galerkin approximation. More precisely, its properties are stated in the following theorem ([32]).

Theorem 2.1.4 (Properties of a Galerkin approximation) Under the assumptions of the Lax-Milgram lemma, there exists a unique solution $u_j$ to (2.1.7). Furthermore, it is stable, since independently of the choice of the subspace $W_j$ it satisfies

\[ \|u_j\|_W \leq \frac{1}{\alpha} \|f\|_{W'}. \tag{2.1.8} \]

Moreover, with $u$ being the solution of (AVP), we have

\[ \|u - u_j\|_W \leq \frac{\varsigma}{\alpha} \inf_{w_j \in W_j} \|u - w_j\|_W. \tag{2.1.9} \]
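As a concrete illustration of the Galerkin method (2.1.7) (a minimal Python sketch, not part of the thesis), consider the 1D Poisson problem $-u'' = f$ on $(0,1)$, $u(0)=u(1)=0$, discretized with continuous piecewise linear hat functions on a uniform mesh. The helper name `galerkin_p1` and the trapezoidal-rule load vector are illustrative choices; the observed error decrease is consistent with the quasi-optimality estimate (2.1.9).

```python
import numpy as np

def galerkin_p1(f, n):
    """Galerkin approximation of -u'' = f on (0,1), u(0) = u(1) = 0, using
    continuous piecewise linear 'hat' functions on a uniform mesh of n cells."""
    h = 1.0 / n
    x = np.linspace(0.0, 1.0, n + 1)
    # Stiffness matrix A_im = a(phi_m, phi_i) = int_0^1 phi_m' phi_i' dx
    # for the n-1 interior hat functions: tridiagonal (-1, 2, -1)/h.
    A = (2.0 * np.eye(n - 1) - np.eye(n - 1, k=1) - np.eye(n - 1, k=-1)) / h
    # Load vector f(phi_i), approximated by the composite trapezoidal rule.
    b = h * f(x[1:-1])
    u = np.zeros(n + 1)
    u[1:-1] = np.linalg.solve(A, b)
    return x, u

# Manufactured solution u(x) = sin(pi x), i.e. f(x) = pi^2 sin(pi x).
errs = {}
for n in (8, 16, 32):
    x, u = galerkin_p1(lambda t: np.pi**2 * np.sin(np.pi * t), n)
    errs[n] = float(np.max(np.abs(u - np.sin(np.pi * x))))
    print(n, errs[n])
```

Each halving of the mesh width visibly reduces the error, as the best-approximation bound predicts.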

In the following we consider the special case that the bilinear form $a(\cdot,\cdot)$ is symmetric. The assumptions of symmetry, continuity and coercivity imply that we can introduce an energy inner product and energy norm induced by $a(\cdot,\cdot)$:

\[ (u,w)_E := a(u,w) \quad \text{for all } u, w \in W, \tag{2.1.10} \]

\[ \|u\|_E := a(u,u)^{1/2} \quad \text{for all } u \in W, \tag{2.1.11} \]

and the following norm equivalence holds:

\[ \alpha^{1/2} \|u\|_W \leq \|u\|_E \leq \varsigma^{1/2} \|u\|_W \quad \text{for all } u \in W. \tag{2.1.12} \]

Now, let $u$ and $u_j$ be the solutions of (AVP) and its discrete analog (2.1.7), respectively. Comparing (2.1.2) and (2.1.7) we infer that

\[ a(u - u_j, w_j) = 0 \quad \text{for all } w_j \in W_j, \tag{2.1.13} \]

showing that the error $u - u_j$ of the Galerkin approximation is orthogonal to the approximation space $W_j$ with respect to the energy inner product. As a consequence, we have

\[ \|u - w_j\|_E^2 = \|u_j - w_j\|_E^2 + \|u - u_j\|_E^2 \quad \text{for all } w_j \in W_j, \tag{2.1.14} \]

and so the following lemma clearly holds true.

Lemma 2.1.5 If, in addition to the conditions of the Lax-Milgram lemma, the bilinear form $a(\cdot,\cdot) : W \times W \to \mathbb{R}$ is symmetric, then the Galerkin approximation $u_j$, being the solution of (2.1.7), is the best approximation to the solution of (AVP) from the approximation space $W_j$, i.e.,

\[ \|u - u_j\|_E = \inf_{w_j \in W_j} \|u - w_j\|_E. \tag{2.1.15} \]

Note that in view of estimate (2.1.9) and property (2.1.6), we have

\[ \lim_{j \to \infty} u_j = u \quad \text{in } W. \tag{2.1.16} \]

However, this result says nothing about the rate of convergence of the approximate solutions towards the exact one. For this we need additional information, namely properties of the approximation spaces $W_j$ and the regularity of the solution. One possible choice for $W_j$ is provided by the finite element method.

2.2 Finite Element Approximation Spaces

As we have seen in the previous section, to use the Galerkin method for the numerical solution of (AVP) we need to specify the approximation spaces $W_j$. The finite element method is a specific procedure for constructing computationally efficient approximation spaces, which are called finite element spaces. In many cases of practical interest we deal with a boundary value problem posed over a domain $\Omega \subset \mathbb{R}^d$ with Lipschitz-continuous boundary. Depending on the boundary value problem under consideration, $W$ is for example one of the spaces $H^1_0(\Omega)$, $H^1(\Omega)$, $H^2_0(\Omega)$, $H(\mathrm{div},\Omega)$, $H(\mathrm{curl},\Omega)$, etc. Here we shall confine ourselves mostly to variational problems defined on $H^1_0(\Omega)$. This corresponds to a second order elliptic boundary value problem. In particular, we start with the Poisson problem

\[ \text{Given } f \in H^{-1}(\Omega) := (H^1_0(\Omega))', \text{ find } u \in H^1_0(\Omega) \text{ such that } a(u,w) := \int_\Omega \nabla u \cdot \nabla w = f(w) \text{ for all } w \in H^1_0(\Omega), \tag{2.2.1} \]

where we assume that the domain $\Omega$ is a polytope. The bilinear form $a(\cdot,\cdot) : H^1_0(\Omega) \times H^1_0(\Omega) \to \mathbb{R}$ is continuous, coercive and symmetric. So on $H^1_0(\Omega)$, the $\|\cdot\|_{H^1_0(\Omega)}$-norm and the energy norm $\|\cdot\|_E$ differ by at most a constant multiple. We switch between these norms at our convenience. Defining $A : H^1_0(\Omega) \to H^{-1}(\Omega)$ by $(Au)(w) = a(u,w)$, (2.2.1) can be rewritten as

\[ A u = f. \tag{2.2.2} \]

We will measure the error of any approximation for $u$ in the energy norm $\|\cdot\|_E$, with dual norm

\[ \|f\|_{E'} := \sup_{0 \neq w \in H^1_0(\Omega)} \frac{|f(w)|}{\|w\|_E}, \quad f \in H^{-1}(\Omega). \tag{2.2.3} \]

Equipped with these norms, $A : H^1_0(\Omega) \to H^{-1}(\Omega)$ is an isomorphism.

The first step within the finite element procedure is to produce a triangulation $T$ of the domain $\Omega$ ([12]), i.e., a finite collection of closed polytopes $K$ (the 'elements') such that

• $\Omega = \cup_{K \in T} K$,

• the interior of each polytope $K \in T$ is non-empty, i.e., $\mathrm{int}(K) \neq \emptyset$,

• for each pair of distinct $K_1, K_2 \in T$ one has $\mathrm{int}(K_1) \cap \mathrm{int}(K_2) = \emptyset$.

In view of constructing a sequence of subspaces $(W_j)_{j \in \mathbb{N}}$, we will consider a sequence of triangulations $(T_j)_{j \in \mathbb{N}}$.

Definition 2.2.1 A sequence of triangulations $(T_j)_{j \in \mathbb{N}}$ is called quasi-uniform if

\[ \max_{K \in T_j} \mathrm{diam}(K) \eqsim \min_{K \in T_j} \mathrm{diam}(K). \tag{2.2.4} \]

Here and throughout the thesis, in order to avoid the repeated use of generic but unspecified constants, by $C \lesssim D$ we mean that $C$ can be bounded by a multiple of $D$, independently of parameters on which $C$ and $D$ may depend, where in particular we think of $j$. Obviously, $C \gtrsim D$ is defined as $D \lesssim C$, and by $C \eqsim D$ we mean that $C \lesssim D$ and $C \gtrsim D$.

Definition 2.2.2 If for any pair of distinct polytopes $K_1, K_2 \in T$ with $K_1 \cap K_2 \neq \emptyset$ their intersection is a common lower dimensional face of $K_1$ and $K_2$, i.e., for $d = 3$ it is either a common face, side or vertex, then the triangulation $T$ is called conforming; otherwise we shall call it a nonconforming triangulation. A vertex of any $K \in T$ is called a vertex of $T$. It is called a non-hanging vertex if it is a vertex of all $K \in T$ that contain it; otherwise it is called a hanging vertex (see Fig. 2.1).


Figure 2.1: Non-hanging and hanging vertices.

Once we have set up the triangulation $T_j$, the next step is to construct the corresponding approximation space $W_j$. For some $k \in \mathbb{N}_{>0}$, for each element $K \in T_j$ let $P(K)$ be a space of algebraic polynomials of some fixed finite dimension that includes the space $P_k(K)$ of all polynomials of total degree less than or equal to $k$. We define

\[ W_j := \{ v \in C^r(\Omega) \cap \textstyle\prod_{K \in T_j} P(K) : v|_{\partial\Omega} = 0 \}, \tag{2.2.5} \]

where $r \in \{-1, 0, 1, \dots\}$ and $C^{-1}(\Omega) := L_2(\Omega)$. Depending on $k$ and on the type of polytopes being used, the space $P(K)$ will be chosen sufficiently large such that, despite the intersection with $C^r(\Omega)$, the following direct or Jackson estimate is valid: for all $p \in [1,\infty]$, $s \leq t \leq k+1$, $s < r + \frac{1}{p}$,

\[ \inf_{w_j \in W_j} \|u - w_j\|_{W_p^s(\Omega)} \lesssim h_j^{t-s} \|u\|_{W_p^t(\Omega)} \quad \text{for all } u \in W_p^t(\Omega), \tag{2.2.6} \]

where $h_j := \max_{K \in T_j} \mathrm{diam}(K)$. Since, measured in $L_2(\Omega)$, the error behaves at best like $\mathcal{O}(h^{k+1})$, the space $W_j$ is said to have order $k+1$. Defining, for a triangulation $T$, $A_T : W_T \to (W_T)' \supset H^{-1}(\Omega)$ by $(A_T v_T)(w_T) = a(v_T, w_T)$ for $v_T, w_T \in W_T$, $u_T := A_T^{-1} f$ is the Galerkin approximation of $u$. One easily verifies that for any $f \in H^{-1}(\Omega)$, $\|A_T^{-1} f\|_E \leq \|f\|_{E'}$.

Definition 2.2.3 If, for each polytope $K \in T_j$, the ratio of the radii of the smallest circumscribed and the largest inscribed ball of $K$ is bounded uniformly in $K \in T_j$ and $j \in \mathbb{N}$, then the sequence of triangulations $(T_j)_{j \in \mathbb{N}}$ is called shape-regular.

In finite element analysis we need the shape regularity requirement to avoid very small angles in the triangulation, since typical FEM estimates depend on the minimal angle of the triangulation and deteriorate when it becomes small.

For a shape-regular sequence of triangulations, we have the following inverse or Bernstein estimate (e.g., [31]): with $h_j := \min_{K \in T_j} \mathrm{diam}(K)$ and $s \leq t < r + \frac{1}{p}$,

\[ \|w_j\|_{W_p^t(\Omega)} \lesssim h_j^{s-t} \|w_j\|_{W_p^s(\Omega)} \quad \text{for all } w_j \in W_j. \tag{2.2.7} \]

Since $W_2^1(\Omega) = H^1(\Omega)$, note that the Bernstein inequality in particular implies that for $r = 0$ the space $W_j$ defined in (2.2.5) is contained in $H^1_0(\Omega)$. Therefore, in the following


we may confine the discussion to the case $r = 0$. Furthermore, as elements we will consider exclusively $d$-simplices, i.e., triangles for $d = 2$ and tetrahedra for $d = 3$. Finally, as spaces $P(K)$ we consider $P_k(K)$ for some $k \in \mathbb{N}_{>0}$. That is, for triangulations $T_j$ of $\Omega \subset \mathbb{R}^d$ into $d$-simplices, we consider

\[ W_j := H^1_0(\Omega) \cap C(\Omega) \cap \textstyle\prod_{K \in T_j} P_k(K). \tag{2.2.8} \]

In order to perform efficient computations with this space, we need a basis consisting of functions whose supports are as small as possible. To this end, we will first select a set $\{a_i : i \in J_j\}$ of points in $\Omega$ such that each $u \in W_j$ is uniquely determined by its values at these points and, conversely, such that for each $\vec{c} = (c_i)_{i \in J_j} \in \mathbb{R}^{\# J_j}$ there is a $w_j \in W_j$ with $w_j(a_i) = c_i$, $i \in J_j$. We will call such a set a nodal set. For each $K \in T_j$ we select the set of points

\[ S_K = \{ x \in K : \lambda_K(x) \in k^{-1}\mathbb{N}^{d+1} \}, \tag{2.2.9} \]

where $\lambda_K(x) \in \mathbb{R}^{d+1}$ are the barycentric coordinates of $x$ with respect to $K$; see Figures 2.2, 2.3, 2.4. Now, assuming that $T_j$ is conforming, a nodal set is the union of these sets, excluding those points on the boundary of $\Omega$. Indeed, note that each $p \in P_k(K)$ is uniquely determined by its values at the points of the set $S_K$. The statement then follows from the fact that for $u \in W_j$ the degrees of freedom on each face of $K \in T_j$ uniquely identify the restriction of $u$ to that face.

Given a nodal set $\{a_i : i \in J_j\}$, we define a corresponding basis $\{\phi_i : i \in J_j\} \subset W_j$ by

\[ \phi_i(a_m) = \begin{cases} 1 & \text{if } i = m, \\ 0 & \text{if } i \neq m. \end{cases} \tag{2.2.10} \]

The set $\{\phi_i : i \in J_j\}$ is called a nodal basis, and each $w_j \in W_j$ can be uniquely represented as

\[ w_j = \sum_{i \in J_j} w_j(a_i)\, \phi_i. \tag{2.2.11} \]

In the case of a possibly nonconforming triangulation, we define the nodal set as

\[ \{ x \in \cup_{K \in T_j} S_K : x \notin \partial\Omega \text{ and if } x \in K' \in T_j \text{ then } x \in S_{K'} \}. \tag{2.2.12} \]

The basis for $W_j$ is then again defined by (2.2.10). For each $w \in C(\Omega)$ we define the nodal interpolant

\[ I_j w = \sum_{i \in J_j} w(a_i)\, \phi_i. \tag{2.2.13} \]
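As an aside (an illustrative sketch, not part of the thesis), in 1D with piecewise linear hat functions the nodal interpolant (2.2.13) reduces to ordinary piecewise linear interpolation, which `np.interp` evaluates: $I_j w$ agrees with $w$ at the nodes $a_i$ and is linear in between.

```python
import numpy as np

def nodal_interpolant(w, nodes):
    """1D continuous piecewise linear nodal interpolant: I_j w agrees with w
    at every node a_i and is linear in between, i.e. I_j w = sum_i w(a_i) phi_i."""
    vals = w(nodes)
    return lambda x: np.interp(x, nodes, vals)

nodes = np.linspace(0.0, 1.0, 9)            # uniform mesh with h = 1/8
Iw = nodal_interpolant(np.sin, nodes)
xs = np.linspace(0.0, 1.0, 1001)
err = float(np.max(np.abs(Iw(xs) - np.sin(xs))))
print(err)
```

For this smooth target the sup-norm error is of size $h^2$, consistent with the interpolation estimates stated next; on exactly linear data the interpolant reproduces the function.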

The following theorem gives estimates for the interpolation error $w - I_j w$ ([31, 13]).


Figure 2.2: Sets SK in 2D and 3D for k = 1

Figure 2.3: Sets SK in 2D and 3D for k = 2

Theorem 2.2.4 Let $(T_j)_{j \in \mathbb{N}}$ be a shape-regular sequence of triangulations. Then

\[ \|w - I_j w\|_{W_p^s(\Omega)} \lesssim h_j^{t+1-s} \|w\|_{W_p^{t+1}(\Omega)} \quad \text{for all } w \in W_p^{t+1}(\Omega), \tag{2.2.14} \]

for parameters $s$, $t$ and $p$ as in (2.2.6) (and $r = 0$), with the additional condition $t + 1 > \frac{d}{p}$.

Remark 2.2.5 Note that this theorem implies (2.2.6) for $t + 1 > \frac{d}{p}$. To obtain (2.2.6) without this additional condition, one can apply so-called quasi-interpolants ([31, 14]).

Finally, we note that in case the domain $\Omega$ is not a polytope, isoparametric elements, which have curved boundaries, are used in order to provide a good approximation of the boundary of $\Omega$. For more details about the construction of finite element spaces we refer the reader to the books ([13, 9, 2, 32, 10]).

2.3 Local Mesh Refinement, Adaptivity and Optimal Complexity Algorithms

2.3.1 Approximation of a given function. Linear versus nonlinear and adaptive approximation

In this section, as an example to illustrate the ideas, we consider the problem of approximating a given function $f : \Omega \to \mathbb{R}$ by piecewise constant functions, where $\Omega := [0,1]$. For $W \subset L_\infty(\Omega)$, by $e(f, W)$ we denote the error in the uniform norm ($L_\infty$-norm) of the best approximation of $f$ by elements of $W$, i.e.,

\[ e(f, W) := \inf_{w \in W} \|f - w\|_{L_\infty(\Omega)}. \tag{2.3.1} \]

We will describe here three types of approximation, namely linear, nonlinear and adaptive. Let $T$ be a triangulation of $\Omega$, i.e., a finite collection of essentially disjoint closed subintervals $K$ such that $\Omega = \cup_{K \in T} K$. Let $W^0(T)$ be the space of piecewise constant functions relative to $T$, i.e.,

\[ W^0(T) := \textstyle\prod_{K \in T} P_0(K). \tag{2.3.2} \]

Let us first consider a uniform triangulation $T$, meaning that all subintervals have equal length. From the estimate (2.2.6) we infer that

\[ \inf_{w \in W^0(T)} \|f - w\|_{L_\infty(\Omega)} \lesssim (\# T)^{-1} \|f\|_{W_\infty^1(\Omega)} \quad \text{for all } f \in W_\infty^1(\Omega), \tag{2.3.3} \]

i.e., the optimal order 1 for piecewise constant approximation is attained for $f \in W_\infty^1(\Omega)$.

What is more, in ([18]) it is shown that

\[ e(f, W^0(T)) \eqsim (\# T)^{-\alpha} \iff f \in \mathrm{Lip}(\alpha, L_\infty(\Omega)). \tag{2.3.4} \]

Here, the space $\mathrm{Lip}(\alpha, L_p(\Omega))$, $0 < \alpha \leq 1$, $0 < p \leq \infty$, is the space of all functions $f \in L_p(\Omega)$ for which

\[ \|f(\cdot + h) - f(\cdot)\|_{L_p[0,\,1-h)} \leq M h^{\alpha}, \quad 0 < h < 1, \tag{2.3.5} \]

with the smallest $M \geq 0$ for which (2.3.5) holds being the norm $\|f\|_{\mathrm{Lip}(\alpha, L_p(\Omega))}$. The space $\mathrm{Lip}(1, L_\infty(\Omega))$ is equal to $W_\infty^1(\Omega)$. Thanks to (2.3.4), we know precisely which class of functions can be approximated with order $\alpha$ by piecewise constants on uniform triangulations. This type of approximation, where the triangulations are chosen in advance and independently of the target function $f$, is called linear.


Another type of approximation is nonlinear approximation, which allows the triangulations $T$ of $\Omega$ to depend on $f$. Let

\[ W_{NL}^0(n) := \cup_{T,\, \# T = n} W^0(T), \tag{2.3.6} \]

i.e., the set of piecewise constants with respect to some partition of $[0,1]$ into $n$ subintervals, which, clearly, is not a linear space. Whereas for linear approximation the optimal order 1 is attained for functions $f \in \mathrm{Lip}(1, L_\infty(\Omega))$, as shown in ([18]) nonlinear approximation provides the optimal order for $f \in \mathrm{Lip}(1, L_1(\Omega))$, i.e.,

\[ e(f, W_{NL}^0(n)) \eqsim n^{-1} \iff f \in \mathrm{Lip}(1, L_1(\Omega)). \tag{2.3.7} \]

Since $\mathrm{Lip}(1, L_1(\Omega))$ is a much larger space than $\mathrm{Lip}(1, L_\infty(\Omega))$, with nonlinear approximation it is possible to realize the optimal approximation order for a much larger class of functions. Note, however, that the sequence of underlying optimal triangulations is not explicitly given.

The third type of approximation we discuss is adaptive approximation. This is a kind of nonlinear approximation given by an algorithm that, as we will see, produces a sequence of piecewise constant approximations converging with the optimal order under only slightly stronger conditions on $f$ than needed for (best) nonlinear approximation. Since this is the first adaptive algorithm that we mention here, we will discuss it in some detail. Given a target accuracy $\varepsilon$, the adaptive algorithm ADAPTIVE subdivides those elements $K \in T$ for which the error $\inf_{c \in \mathbb{R}} \|f - c\|_{L_\infty(K)} > \varepsilon$ into two equal parts, until the desired tolerance is met.

Algorithm 2.1 Adaptive Algorithm

ADAPTIVE[$f$, $\varepsilon$, $K$] $\to T$
/* Called with $K = [0,1]$, the output of the algorithm is a triangulation $T$ such that $e(f, W^0(T)) \leq \varepsilon$. */
if $\inf_{c \in \mathbb{R}} \|f - c\|_{L_\infty(K)} > \varepsilon$ then
    Subdivide $K$ into two equal parts $K_1$ and $K_2$.
    $T_1$ := ADAPTIVE[$f$, $\varepsilon$, $K_1$]
    $T_2$ := ADAPTIVE[$f$, $\varepsilon$, $K_2$]
    $T := T_1 \cup T_2$
else
    $T := \{K\}$
endif
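A direct Python transcription of ADAPTIVE is straightforward (a sketch, not from the thesis): for an interval $K = [a,b]$ the infimum $\inf_c \|f - c\|_{L_\infty(K)}$ equals $(\max_K f - \min_K f)/2$, which is approximated here by sampling; this is adequate for the smooth monotone $f$ used below.

```python
import numpy as np

def sup_error(f, a, b, samples=256):
    """Approximate inf_c ||f - c||_{L_inf([a,b])} = (max f - min f)/2 by sampling."""
    vals = f(np.linspace(a, b, samples))
    return (float(vals.max()) - float(vals.min())) / 2.0

def adaptive(f, eps, a=0.0, b=1.0):
    """ADAPTIVE[f, eps, K] for K = [a, b]: bisect until the best piecewise
    constant error on every subinterval is at most eps."""
    if sup_error(f, a, b) > eps:
        m = 0.5 * (a + b)
        return adaptive(f, eps, a, m) + adaptive(f, eps, m, b)
    return [(a, b)]

T = adaptive(np.sqrt, 0.01)
print(len(T), min(b - a for a, b in T))
```

Run on $f(x) = \sqrt{x}$, the algorithm produces a strongly graded partition: the cells cluster near the singularity at $x = 0$, exactly the behaviour exploited in Example 2.3.1 below.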

In [18] it was shown that this adaptive algorithm attains the optimal approximation order 1 if $f$ satisfies the condition $f' \in L \log L$, where $L \log L$ stands for the space of all integrable functions $g$ for which

\[ \|g\|_{L \log L} := \int_\Omega |g(x)| \left(1 + \log|g(x)|\right) dx < \infty. \tag{2.3.8} \]

This space is only slightly smaller than $\mathrm{Lip}(1, L_1(\Omega))$, membership of which was needed for convergence of order 1 with best partitions.

As an illustration, we consider the following example.


Figure 2.5: Nonlinear approximation for $y = x^{1/2}$.

Example 2.3.1 Let us consider the function $f(x) = x^\alpha$ with $0 < \alpha < 1$. Since it belongs to $\mathrm{Lip}(\alpha, L_\infty(\Omega))$ but not to any higher order Lipschitz space, in view of (2.3.4) this function can be approximated on uniform triangulations with order exactly $\alpha$, i.e.,

\[ e(f, W^0(T)) \eqsim (\# T)^{-\alpha}. \tag{2.3.9} \]

In particular, the optimal order 1 cannot be attained by linear approximation. Noting that $f \in \mathrm{Lip}(1, L_1(\Omega))$, we conclude that this function can be approximated with order 1 using nonlinear approximation. Indeed, choose the points $x_k := (k/n)^{1/\alpha}$, $k = 0, 1, \dots, n$, which are the preimages of $y_k := k/n$, $k = 0, 1, \dots, n$ (see Fig. 2.5, Fig. 2.6). Then, with $T = \{[x_i, x_{i+1}]\}_{i=0}^{n-1}$, one easily verifies that

\[ e(f, W_{NL}^0(n)) \leq e(f, W^0(T)) \lesssim 1/n, \tag{2.3.10} \]

i.e., with nonlinear approximation the optimal order 1 is attained for any $0 < \alpha < 1$. Finally, since for any $\alpha \in (0,1)$ we have $f' \in L \log L$, this optimal order is also attained by the algorithm ADAPTIVE.
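The example can be checked numerically. In the sketch below (illustrative, not from the thesis), for a monotone increasing $f$ the best piecewise constant error on a cell $[a,b]$ is exactly $(f(b) - f(a))/2$, so the errors on the uniform partition and on the graded partition $x_k = (k/n)^{1/\alpha}$ can be computed directly:

```python
import numpy as np

def pc_error(f, pts):
    """Best L_inf piecewise constant error of a monotone increasing f on the
    partition pts: the maximum over cells of (f(b) - f(a)) / 2."""
    vals = f(np.asarray(pts))
    return float(np.max(vals[1:] - vals[:-1])) / 2.0

alpha, n = 0.5, 64
f = lambda x: x**alpha
uniform = np.linspace(0.0, 1.0, n + 1)
graded = (np.arange(n + 1) / n) ** (1.0 / alpha)    # x_k = (k/n)^(1/alpha)
print(pc_error(f, uniform), pc_error(f, graded))
```

On the graded mesh $f(x_k) = k/n$ exactly, so every cell contributes the same error $1/(2n)$, while the uniform mesh is dominated by the first cell, of size $n^{-\alpha}/2$.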

2.3.2 Singular solutions of boundary value problems

In the previous section, we discussed the problem of approximating a given function. The situation is more complicated if we want to construct an algorithm for the numerical solution of a boundary value problem, since the solution is unknown and can have a singularity somewhere in the domain $\Omega$. The origin of a singularity can be a concentration


of stresses in elasticity, boundary/internal layers in environmental pollution problems, or a propagation of shock waves in fluid mechanics ([4, 20, 33]).

Figure 2.6: Nonlinear approximation for $y = x^{1/4}$.

As an example, let us consider the Poisson problem (2.2.1) in two space dimensions, where $\Omega$ is a polygon (Fig. 2.7) and the right-hand side is sufficiently smooth. Then it is known ([2]) that the regularity of the solution $u$ depends on the maximum interior angle $\omega$ at the corners in the boundary of the domain. More precisely, for any $\varepsilon > 0$ we have

\[ u \in H^{\pi/\omega + 1 - \varepsilon}(\Omega), \quad \text{whereas generally } u \notin H^{\pi/\omega + 1}(\Omega). \tag{2.3.11} \]

Consider now the finite element spaces $W_j$ from (2.2.5) of order $k+1$ with respect to a sequence of quasi-uniform triangulations $(T_j)_{j \in \mathbb{N}}$. From (2.2.6), with $N_j := \# T_j \sim h_j^{-2}$, we infer that

\[ \inf_{w_j \in W_j} \|u - w_j\|_{H^1(\Omega)} \lesssim N_j^{-k/2} \|u\|_{H^{k+1}(\Omega)} \quad \text{for all } u \in H^{k+1}(\Omega). \tag{2.3.12} \]

We conclude that the optimal order $k/2$ is attained if $u \in H^{k+1}(\Omega)$. Actually, one can show that this is sharp, so that from (2.3.11) we see that the rate of such finite element approximations is not restricted by the regularity of the solution only if $\omega \leq \pi/k$. For $k \geq 4$, there is no polygon that satisfies this condition. For $k = 3$, it is only satisfied for an equilateral triangle, and for $k = 1$, it requires that there are no re-entrant corners.
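The resulting rates are easy to tabulate. The helper below is a hypothetical shorthand of my own (not from the thesis): combining (2.3.11) and (2.3.12), on quasi-uniform 2D meshes the achievable rate in terms of $N$ is $\min(k, \pi/\omega)/2$, up to the arbitrarily small $\varepsilon$-loss.

```python
import math

def quasi_uniform_rate(k, omega):
    """Best achievable H^1 rate N^(-rate) on quasi-uniform 2D meshes with
    order-(k+1) elements: k/2 from (2.3.12), capped by pi/omega from (2.3.11)."""
    return min(k, math.pi / omega) / 2.0

# Re-entrant corner of an L-shaped domain: omega = 3*pi/2.  The rate is stuck
# at 1/3 regardless of the polynomial degree.
for k in (1, 2, 3):
    print(k, quasi_uniform_rate(k, 1.5 * math.pi))
```

For a convex right angle ($\omega = \pi/2$) and $k = 1$ the cap is inactive and the full rate $1/2$ survives; raising the degree never helps once $\omega > \pi/k$.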

To handle the case that $\omega > \pi/k$, algorithms based on adaptive triangulations are often proposed with the aim of retaining the optimal rate of convergence. We are interested in the question whether such adaptive procedures can really lead to a significant improvement of the numerical efficiency. An example of a typical adaptive finite element mesh for a problem with a corner singularity is given in Fig. 2.8.


Figure 2.7: Polygon with maximum interior angle ω

Figure 2.8: Typical adaptive finite element mesh for a problem with a corner singularity


2.3.3 Nonconforming triangulations and adaptive hierarchical tree structures

As we have described before, finite element spaces consist of suitable subspaces of all piecewise polynomials of a certain degree with respect to a subdivision of $\Omega$ into a large number of tiny polytopes. Here, we will consider such approximation spaces based on subdivisions into triangles, so in two space dimensions. In this section, we describe the type of triangulations that we shall use throughout this chapter. Let $T$ be a triangulation of $\Omega$. Recall from Definition 2.2.2 the concepts of conforming and nonconforming triangulations, and of hanging and non-hanging vertices of $T$. We shall say that an edge $\ell$ of a triangle $K \in T$ is an edge of the triangulation $T$ if it doesn't contain a hanging vertex in its interior. The set of all edges of $T$ will be denoted by $E_T$, and the set of all non-hanging vertices of $T$ by $V_T$. The set of all interior edges of $T$ and the set of all interior non-hanging vertices of $T$ will be denoted accordingly. If two triangles $K_1, K_2 \in T$ share an edge $\ell \in E_T$, they will be called edge neighbors, and if they share a vertex we will call them vertex neighbors.

An adaptive algorithm will generally produce locally refined meshes. In view of the importance of having shape-regular triangulations, the question arises which local refinement strategies preserve shape regularity. The following three methods are known to do so:

• Regular or red refinement. The red refinement strategy consists of subdividing a triangle by connecting the midpoints of its edges. Clearly, since we start with a non-degenerate triangulation $T_0$, subdivision based on red refinement produces a shape-regular family of triangulations.

• Longest edge bisection. This strategy subdivides a triangle by connecting the midpoint of the longest edge with the vertex opposite to this edge. In ([34]) it was proven that in each triangulation from any sequence $T_1, \dots, T_n$ built by longest edge bisection, the smallest angle is larger than or equal to half of the smallest angle in the initial triangulation $T_0$.

• Newest vertex bisection. This strategy starts with assigning to exactly one vertex of each triangle in the initial triangulation $T_0$ the newest vertex label. Then, if at some step of the adaptive algorithm a triangle $K$ is to be subdivided, we perform this by connecting the newest vertex of $K$ with the opposite edge, and mark the midpoint of this edge as the newest vertex of both new triangles produced by this subdivision. In ([28]) it was proven that refinement based on newest vertex bisection builds a family of triangulations which is shape-regular.
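A red refinement step is a one-liner on vertex coordinates. The sketch below (illustrative, not from the thesis) subdivides a triangle by connecting its edge midpoints; the four children are similar to the parent (the middle one up to reflection), which is why the minimal angle, and hence shape regularity, is preserved.

```python
def red_refine(tri):
    """Red refinement: connect the edge midpoints of a triangle, given by its
    three vertex coordinates, producing four children of equal area."""
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

def area(tri):
    """Area of a triangle from the cross product of two edge vectors."""
    a, b, c = tri
    return abs((b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])) / 2.0

K = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
children = red_refine(K)
print([area(t) for t in children])
```

Each child has exactly a quarter of the parent's area, and the children tile the parent, matching Fig. 2.9.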

In this chapter we restrict ourselves to triangulations that are created by (recursive), generally non-uniform, red-refinements starting from some fixed initial conforming triangulation $T_0$. These triangulations are generally nonconforming.

For reasons that will become clear later (in Section 2.8.1), we will have to limit the 'amount of nonconformity' by restricting ourselves to admissible triangulations, a concept that is defined as follows:


Figure 2.9: Triangle and its red-refinement.

Figure 2.10: An example of admissible triangulation.

Definition 2.3.2 A triangulation $T$ is called an admissible triangulation if for every edge of a $K \in T$ that contains a hanging node in its interior, the endpoints of this edge are non-hanging vertices.

Figure 2.10 depicts an example of an admissible triangulation. Apparently, local refinement produces triangulations which are generally not admissible. In the following we discuss how to repair this. To any triangle $K$ from the initial triangulation $T_0$ we assign the generation $\mathrm{gen}(K) = 0$. Then the generation $\mathrm{gen}(K)$ for $K \in T$ is defined as the number of subdivision steps needed to produce $K$ starting from the initial triangulation $T_0$. A triangulation is called uniform when all triangles have the same generation. A vertex of the triangulation will be called a regular vertex if all triangles that contain this vertex have the same generation (see Fig. 2.11). Obviously, a regular vertex is non-hanging.

Corresponding to the triangulation $T$, we also build a tree $T(T)$ that contains as nodes all the triangles which were created to construct $T$ from $T_0$. The roots of this tree are the triangles of the initial triangulation $T_0$. When a triangle $K$ is subdivided, four new triangles appear, which are called the children of $K$, and $K$ is called their parent. Similarly, we


Figure 2.11: A regular vertex (left picture) and a non-regular although non-hanging vertex (right picture).

introduce also grandparent/grandchild relations. In case a triangle $K \in T(T)$ has no children in the tree $T(T)$, it is called a leaf of this tree. The set of all leaves of the given tree $T(T)$ will be denoted by $L(T(T))$. Apparently, the set of leaves $L(T(T))$ forms the final triangulation $T$. We will call $\tilde{T}$ a subtree of $T(T)$, denoted as $\tilde{T} \subset T(T)$, if it contains all roots of $T(T)$ and if for any $K \in \tilde{T}$ all its siblings and ancestors are also in $\tilde{T}$.

Proposition 2.3.3 [36] For any triangulation $T$ created by red-refinements starting from $T_0$, there exists a unique sequence of triangulations $T_0, T_1, \dots, T_n$ with $\max_{K \in T_i} \mathrm{gen}(K) = i$, $T_n = T$, and where $T_{i+1}$ is created from $T_i$ by refining some $K \in T_i$ with $\mathrm{gen}(K) = i$.

Let $T_{-1} = \emptyset$ and $V_{T_{-1}} = \emptyset$. The following properties are valid:

• $\sum_{i=0}^{n} \# T_i \leq \frac{4}{3} \# T$,

• $V_{T_{i-1}} \subset V_{T_i}$, and so $V_T = \cup_{i=0}^{n} V_{T_i} \setminus V_{T_{i-1}}$ with empty mutual intersections,

• a $v \in V_{T_i} \setminus V_{T_{i-1}}$ is not a vertex of $T_{i-1}$, and so it is a regular vertex of $T_i$.

As we noted before, in case we refine some admissible triangulation only locally, the admissibility of the triangulation might be destroyed. If a given triangulation is not admissible (see Fig. 2.12), it can be completed to an admissible triangulation using the following algorithm. This algorithm makes use of the sequence $T_0, \dots, T_n$ corresponding to the input triangulation $T$ as defined in Proposition 2.3.3.

Algorithm 2.2 Make Admissible

MakeAdmissible[$T$] $\to T^a$

1. $T_0^a := T_0$

2. Let $i := 0$

3. Define $T_{i+1}^a$ as the union of $T_{i+1}$ and, when $i \leq n-2$, the collection of children of those $K \in T_i^a$ that have a vertex neighbor in $T_i$ with grandchildren in $T_{i+2}$

4. If $i \leq n-2$ then $i{+}{+}$ and go to step (3). Otherwise set $T^a := T_n^a$ and stop.


Figure 2.12: An example of a non-admissible triangulation (left) and its smallest admissible refinement (right).

Proposition 2.3.4 [36] The triangulation $T^a$ constructed by MakeAdmissible is an admissible refinement of $T$. Moreover,

\[ \# T^a \lesssim \# T. \tag{2.3.13} \]

2.3.4 Adaptive Algorithms of Optimal Complexity

For $(T_i)_i$ being the family of all triangulations that can be constructed from a fixed initial triangulation $T_0$ of $\Omega$ by red-refinements, let $(W_{T_i})_i$, $(Y_{T_i})_i$ be the families of finite element approximation spaces defined by

\[ W_{T_i} := \{ v \in C(\Omega) \cap \textstyle\prod_{K \in T_i} P_1(K) : v|_{\partial\Omega} = 0 \}, \tag{2.3.14} \]

\[ Y_{T_i} := \textstyle\prod_{K \in T_i} P_0(K). \tag{2.3.15} \]

The spaces $Y_{T_i}$ will be used for approximating the right-hand side of our equation. The reason for their introduction will become clear in the next section, where we discuss a posteriori error estimators. Defining, for any $S \subset H^1_0(\Omega)$ and $w \in H^1_0(\Omega)$,

\[ e(w, S)_{H^1_0(\Omega)} := \inf_{\bar{w} \in S} \|w - \bar{w}\|_{H^1_0(\Omega)}, \tag{2.3.16} \]

the error of the best approximation from the best space $W_{T_i}$ with underlying triangulation consisting of $n + \# T_0$ triangles is given by

\[ \sigma_n^{H^1_0(\Omega)}(w) = \inf_{T_i \in (T_j)_j,\ \# T_i - \# T_0 \leq n} e(w, W_{T_i})_{H^1_0(\Omega)}. \tag{2.3.17} \]

We now define classes for which the errors of the best approximations from the best spaces decay with certain rates.

Definition 2.3.5 For any $s > 0$, let $\mathcal{A}^s(H^1_0(\Omega)) = \mathcal{A}^s(H^1_0(\Omega), (W_{T_i})_i)$ be the class of functions $w \in H^1_0(\Omega)$ such that for some $M > 0$

\[ \sigma_n^{H^1_0(\Omega)}(w) \leq M n^{-s}, \quad n = 1, 2, \dots. \tag{2.3.18} \]


We equip $\mathcal{A}^s(H^1_0(\Omega))$ with a semi-norm defined as the smallest $M$ for which (2.3.18) holds:

\[ |w|_{\mathcal{A}^s(H^1_0(\Omega))} := \sup_{n \geq 1} n^s\, \sigma_n^{H^1_0(\Omega)}(w), \tag{2.3.19} \]

and with norm

\[ \|w\|_{\mathcal{A}^s(H^1_0(\Omega))} := |w|_{\mathcal{A}^s(H^1_0(\Omega))} + \|w\|_{H^1_0(\Omega)}. \tag{2.3.20} \]

For approximating the right-hand side of our boundary value problem, in a similar way we define the class $\mathcal{A}^s(H^{-1}(\Omega)) = \mathcal{A}^s(H^{-1}(\Omega), (Y_{T_i})_i)$ using the approximation spaces $(Y_{T_i})_i$. The goal of adaptive approximation is to realize the rates of the best approximation from the best spaces. We note here that a rate $s > 1/2$ only occurs when the approximated function is exceptionally close to a finite element function, and such a rate cannot be enforced by imposing any smoothness condition whatsoever. From now on we consider the case $s \leq 1/2$. Below we sketch why, similarly to Section 2.3.1, we expect better results with adaptive approximation than with non-adaptive approximation based on uniform refinements. In order to do so, we temporarily consider the family of triangulations generated by newest vertex bisection. We expect, however, similar results to be valid with red refinements.

Let (T_i)_i be the family of all triangulations created by newest vertex bisections from an initial triangulation T_0, and let

X_{T_i} := C(Ω) ∩ ∏_{K∈T_i} P_1(K). (2.3.21)

For simplicity, to illustrate the ideas, we dropped here the Dirichlet boundary conditions. Recently, in [6] it was shown that the approximation classes A^s(H^1(Ω), (X_{T_i})_i) are (nearly) characterised by membership of certain Besov spaces:

Theorem 2.3.6 (i) If u ∈ B^{2s+1}_τ(L_τ(Ω)) with 0 ≤ s ≤ 1/2 and 1/τ < s + 1/2, then u ∈ A^s(H^1(Ω), (X_{T_i})_i) and

‖u‖_{A^s(H^1(Ω),(X_{T_i})_i)} ≲ ‖u‖_{B^{2s+1}_τ(L_τ(Ω))}. (2.3.22)

(ii) If u ∈ A^s(H^1(Ω), (X_{T_i})_i), then u ∈ B^{2s+1}_τ(L_τ(Ω)), where 1/τ = s + 1/2.

On the other hand, as is well known, in two space dimensions and with piecewise linear approximation, a rate s ≤ 1/2 is attained by approximations on uniform triangulations if and only if the approximated function belongs to H^{2s+1}(Ω). The above result shows that a rate s ≤ 1/2 is achieved by 'the best approximation from the best spaces' for any function from the much larger space B^{2s+1}_τ(L_τ(Ω)), for any τ > (s + 1/2)^{−1}. Although these results were stated for the triangulations created by newest vertex bisections, as we said, we expect the same results to be valid for the family of triangulations constructed by red refinements. The difference between Sobolev spaces and Besov spaces is well illustrated by the DeVore diagram (Fig. 2.13). In this diagram, the point (1/τ, r) represents the Besov space B^r_τ(L_τ(Ω)). Since B^r_2(L_2(Ω)) = H^r(Ω), the Sobolev space H^r(Ω) corresponds to the point (1/2, r). From this diagram we can observe that the larger s is, the larger is the space B^{2s+1}_τ(L_τ(Ω)) with τ = (s + 1/2)^{−1} compared to H^{2s+1}(Ω).


To solve the Poisson problem (2.2.1) numerically, we need to be able to approximate the right-hand side within any given tolerance. We assume the availability of the following routine RHS.

Algorithm 2.3 RHS
RHS[T, f, ε] → [T′, f_{T′}]
/* Input of the routine:

• triangulation T
• f ∈ H^{−1}(Ω)
• ε > 0

Output: admissible triangulation T′ ⊇ T and an f_{T′} ∈ Y_{T′} such that ‖f − f_{T′}‖_{H^{−1}(Ω)} ≤ ε. */

As in [36], we will call the pair (f, RHS) s-optimal if there exists an absolute constant c_f > 0 such that for any ε > 0 and triangulation T, the call of the algorithm RHS performs such that both

#T′ and the number of flops required by the call are ≲ #T + c_f^{1/s} ε^{−1/s}. (2.3.23)

Apparently, for a given s, such a pair can only exist if f ∈ A^s(H^{−1}(Ω), (Y_{T_i})_i). A realisation of the routine RHS depends on the right-hand side at hand. As was shown in [36], for f ∈ L_2(Ω), RHS can be based on uniform refinements.

We will say that an adaptive method is optimal if it produces a sequence of approximations with respect to triangulations with cardinalities that are at most a constant multiple larger than the optimal ones. More precisely, we define

Definition 2.3.7 Suppose that the solution of the Poisson problem (2.2.1) is such that u ∈ A^s(H^1_0(Ω)) for some s > 0, and that there exists a routine RHS such that the pair (f, RHS) is s-optimal. Then we shall say that an adaptive method is optimal if for any ε > 0 it produces a triangulation T_j from the family (T_i)_i with #T_j ≲ ε^{−1/s}(|u|^{1/s}_{A^s(H^1_0(Ω))} + c_f^{1/s}) and an approximation w_{T_j} ∈ W_{T_j} with ‖u − w_{T_j}‖_{H^1_0(Ω)} ≤ ε, taking only ≲ ε^{−1/s}(|u|^{1/s}_{A^s(H^1_0(Ω))} + c_f^{1/s}) flops.

Note that in view of the assumption u ∈ A^s, a cardinality of T_j equal to ε^{−1/s}|u|^{1/s}_{A^s(H^1_0(Ω))} is generally the best we can expect.

The questions that should be answered, being the topics of our discussions in the next sections, are

• How to construct an a posteriori error estimator that provides reliable error control during the adaptive approximation process?

• How to construct a convergent adaptive strategy?


Figure 2.13: DeVore diagram. The point (1/τ, r) represents the Besov space B^r_τ(L_τ(Ω)); the point (1/2, r) corresponds to H^r(Ω), with (1/2, 1) ∼ H^1(Ω). On the vertical line are the spaces H^r(Ω), and on the skew line 1/τ = r/2 are the spaces B^r_τ(L_τ(Ω)) with r = 2s + 1 and τ = (1/2 + s)^{−1}, i.e., r = 2/τ.


• How to ensure that our adaptive approximation is optimal in the above sense?

• How to design the computer implementation of the adaptive algorithm such that CPU time, memory, etc. are also used in an optimal way?

2.4 A Posteriori Error Analysis

In this section we will apply the elegant technique developed by Verfürth ([41]) to construct and analyze an a posteriori error estimator for the case of nonconforming triangulations. Moreover, with this technique we will be able to prove the so-called saturation property which, as we will show in the next section, is essential for realizing a convergent adaptive method.

Thinking of an adaptive algorithm for solving the boundary value problem (2.2.1), we draw the attention of the reader to the algorithm ADAPTIVE. Note that this algorithm for approximating a given function is based on full knowledge of the error. When solving a boundary value problem, however, we do not know the solution u, and thus not the error. Given the Galerkin approximation u_T, from (2.2.1) we can derive the following equation for the error e := u − u_T:

find e ∈ H^1_0(Ω) such that (2.4.1)
a(e, w) = f(w) − a(u_T, w) for all w ∈ H^1_0(Ω).

This problem is as expensive to solve as the initial one. However, we have the discrete approximation u_T and the right-hand side f ∈ H^{−1}(Ω) in our hands. This information can be used to create an a posteriori error estimator E(T, f, u_T).

The important requirements on the estimator E are:

• Reliability. There exists a constant C_U > 0 such that

‖u − u_T‖_E ≤ C_U E. (2.4.2)

• Efficiency. The estimator is called efficient if there exists a constant C_L > 0 such that

C_L E ≤ ‖u − u_T‖_E. (2.4.3)

Remark 2.4.1 A quantitative characterization of the estimator is given by the global effectivity index, defined by

I_effectivity = E / ‖u − u_T‖_E. (2.4.4)

In view of the characterization

‖u − u_T‖_E = sup_{0≠w∈H^1_0(Ω)} a(u − u_T, w) / ‖w‖_E, (2.4.5)


and assuming that f ∈ L_2(Ω), using integration by parts we write

a(u − u_T, w) = ∫_Ω f w − ∫_Ω ∇u_T · ∇w
= Σ_{K∈T} ∫_K (f + Δu_T) w − Σ_{ℓ∈E_T} ∫_ℓ [∂u_T/∂ν_ℓ] w (2.4.6)
= Σ_{K∈T} ∫_K R_K w − Σ_{ℓ∈E_T} ∫_ℓ R_ℓ w,

where R_K is an element residual

R_K = R_K(f) := (f + Δu_T)|_K, (2.4.7)

and R_ℓ denotes an edge residual

R_ℓ = R_ℓ(u_T) := [∂u_T/∂ν_ℓ]. (2.4.8)

Here, for ℓ ∈ E_T, ν_ℓ is a unit vector orthogonal to ℓ, and with ℓ separating two elements K_1, K_2 ∈ T,

[∂w_T/∂ν_ℓ](x) := lim_{t→0+} (∂w_T/∂ν_ℓ)(x + tν_ℓ) − lim_{t→0+} (∂w_T/∂ν_ℓ)(x − tν_ℓ) for all x ∈ ℓ (2.4.9)

is the jump of the normal derivative over ℓ. Note here that, since we consider the case that u_T is piecewise linear, Δu_T = 0, so that indeed R_K only depends on f.

Having these quantities in our hands, we are ready to define an a posteriori error estimator

E(T, f, w_T) = [ Σ_{K∈T} diam(K)^2 ‖R_K‖^2_{L_2(K)} + Σ_{ℓ∈E_T} diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)} ]^{1/2}.
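Once the element and edge residuals are in hand, evaluating the estimator is a weighted sum of squares. The sketch below (illustrative only; the flat data structures are an assumption, not the thesis's) evaluates E(T, f, w_T) from precomputed local norms:

```python
import math

# Each element K contributes diam(K)^2 * ||R_K||^2_{L2(K)}; each interior edge
# l contributes diam(l) * ||R_l||^2_{L2(l)}. The estimator is the square root
# of the total.
def estimator(elements, edges):
    """elements: list of (diam_K, ||R_K||_{L2(K)}); edges: list of (diam_l, ||R_l||_{L2(l)})."""
    total = sum(d ** 2 * r ** 2 for d, r in elements)
    total += sum(d * r ** 2 for d, r in edges)
    return math.sqrt(total)

# Two elements and one edge: 0.5^2*2^2 + 0.25^2*4^2 + 0.5*1^2 = 2.5
E = estimator([(0.5, 2.0), (0.25, 4.0)], [(0.5, 1.0)])
print(E)  # ≈ sqrt(2.5)
```

The same per-element/per-edge quantities drive the marking step of REFINE later on, so in practice they are kept, not just their sum.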

Before stating the theorem about our error estimator, we need to recall one more result that will be invoked in the proof.

In [36], a piecewise linear (quasi-)interpolant on admissible triangulations was constructed for H^1_0(Ω) functions. Its properties are stated in the following theorem.

Theorem 2.4.2 Let T be an admissible triangulation of Ω. Then there exists a linear mapping I_T : H^1_0(Ω) → W_T such that

‖w − I_T w‖_{H^s(K)} ≲ diam(K)^{1−s} ‖w‖_{H^1(Ω_K)} (w ∈ H^1_0(Ω), s = 0, 1, K ∈ T), (2.4.10)
‖w − I_T w‖_{L_2(ℓ)} ≲ diam(ℓ)^{1/2} ‖w‖_{H^1(Ω_{K_ℓ})} (w ∈ H^1_0(Ω), ℓ ∈ E_T), (2.4.11)

where for each K, Ω_K is the union of a uniformly bounded collection of triangles in T, among them K, which is simply connected and satisfies diam(Ω_K) ≲ diam(K), and where for ℓ ∈ E_T, K_ℓ ∈ T is a triangle that has edge ℓ.


Proof. The statement follows from the proof of Theorem 6.1 in [36]. ♦

Theorem 2.4.3 Let T be an admissible triangulation and T′ a refinement of T. Assume that f ∈ Y_T, let u = A^{−1}f, and let u_T = A_T^{−1}f, u_{T′} = A_{T′}^{−1}f be its Galerkin approximations on the triangulations T and T′, respectively.

i) If V_{T′} contains a point inside K ∈ T, then

‖u_{T′} − u_T‖^2_{H^1(K)} ≳ diam(K)^2 ‖R_K‖^2_{L_2(K)}. (2.4.12)

ii) With K_1, K_2 ∈ T such that ℓ := K_1 ∩ K_2 ∈ E_T, if V_{T′} contains a point in the interior of both K_1 and K_2, then

‖u_{T′} − u_T‖^2_{H^1(K_1∪K_2)} ≳ diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)}. (2.4.13)

iii) If for some F ⊂ T and G ⊂ E_T, the refinement T′ satisfies condition i) or ii) for all K ∈ F and ℓ ∈ G, respectively, then

‖u_{T′} − u_T‖^2_E ≥ C_L^2 [ Σ_{K∈F} diam(K)^2 ‖R_K‖^2_{L_2(K)} + Σ_{ℓ∈G} diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)} ] (2.4.14)

for some absolute constant C_L > 0.

iv)

‖u − u_T‖_E ≥ C_L E(T, f, u_T). (2.4.15)

v) There exists an absolute constant C_U such that for any f ∈ L_2(Ω), so not only for f ∈ Y_T,

‖u − u_T‖_E ≤ C_U E(T, f, u_T). (2.4.16)

Proof. (i) To prove the local lower bounds i)–ii), we notice that, due to the conditions on T′, for all K ∈ T, ℓ ∈ E_T there exist functions ψ_K, ψ_ℓ, which are continuous piecewise linear with respect to T′, with the properties

supp(ψ_K) ⊂ K, supp(ψ_ℓ) ⊂ Ω_ℓ := ∪{K′ ∈ T : ℓ ⊂ K′},
∫_K ψ_K ≂ meas(K), ∫_ℓ ψ_ℓ ≂ meas(ℓ), (2.4.17)

0 ≤ ψ_K(x) ≤ 1, x ∈ K, 0 ≤ ψ_ℓ(x) ≤ 1, x ∈ ℓ, (2.4.18)


‖ψ_K‖_{L_2(K)} ≲ meas(K)^{1/2}, ‖ψ_ℓ‖_{L_2(Ω_ℓ)} ≲ diam(ℓ). (2.4.19)

For any w ∈ W_{T′}, we have

a(u_{T′} − u_T, w) = f(w) − a(u_T, w) = Σ_{K∈T} ∫_K R_K w − Σ_{ℓ∈E_T} ∫_ℓ R_ℓ w (2.4.20)

by (2.4.6). Now, for K as in (i), by choosing w := R_K ψ_K, which, thanks to R_K being a constant, is in W_{T′}, and using supp ψ_K ⊂ K and the Bernstein inequality (2.2.7), we have

‖R_K‖^2_{L_2(K)} ≂ ∫_K R_K w = a(u_{T′} − u_T, w)
≲ ‖u_{T′} − u_T‖_{H^1(K)} diam(K)^{−1} ‖w‖_{L_2(K)}
≲ ‖u_{T′} − u_T‖_{H^1(K)} diam(K)^{−1} ‖R_K‖_{L_2(K)}. (2.4.21)

(ii) In a similar way, for ℓ as in (ii), let w := R_ℓ ψ_ℓ; then, using supp ψ_ℓ ⊂ Ω_ℓ, the Bernstein inequality, (i) and (2.4.19), we find

∫_ℓ R_ℓ w = −a(u_{T′} − u_T, w) + Σ_{K⊂Ω_ℓ} ∫_K R_K w
≲ Σ_{K⊂Ω_ℓ} [ ‖u_{T′} − u_T‖_{H^1(K)} diam(K)^{−1} ‖w‖_{L_2(K)} + ‖R_K‖_{L_2(K)} ‖w‖_{L_2(K)} ]
≲ Σ_{K⊂Ω_ℓ} ‖u_{T′} − u_T‖_{H^1(K)} diam(K)^{−1} ‖w‖_{L_2(K)}
≲ Σ_{K⊂Ω_ℓ} ‖u_{T′} − u_T‖_{H^1(K)} diam(ℓ)^{−1/2} ‖R_ℓ‖_{L_2(ℓ)}. (2.4.22)

Noting that ∫_ℓ R_ℓ w ≂ ‖R_ℓ‖^2_{L_2(ℓ)}, the statement follows.

iii) Follows from i) and ii).

iv) Let T′ be a refinement of T that satisfies conditions i) and ii) for all K ∈ T and for all ℓ ∈ E_T, respectively. The statement follows from iii) and the Pythagorean identity

‖u − u_T‖^2_E = ‖u − u_{T′}‖^2_E + ‖u_{T′} − u_T‖^2_E. (2.4.23)

v) Using the Galerkin orthogonality

a(u − u_T, w_T) = 0 for all w_T ∈ W_T, (2.4.24)


equation (2.4.6), the Cauchy–Schwarz inequality and Theorem 2.4.2, for arbitrary w ∈ H^1_0(Ω) we find

a(u − u_T, w) = a(u − u_T, w − I_T w)
= Σ_{K∈T} ∫_K R_K (w − I_T w) − Σ_{ℓ∈E_T} ∫_ℓ R_ℓ (w − I_T w)
≤ Σ_{K∈T} ‖R_K‖_{L_2(K)} ‖w − I_T w‖_{L_2(K)} + Σ_{ℓ∈E_T} ‖R_ℓ‖_{L_2(ℓ)} ‖w − I_T w‖_{L_2(ℓ)}
≲ Σ_{K∈T} ‖R_K‖_{L_2(K)} diam(K) ‖w‖_{H^1(Ω_K)} + Σ_{ℓ∈E_T} ‖R_ℓ‖_{L_2(ℓ)} diam(ℓ)^{1/2} ‖w‖_{H^1(Ω_{K_ℓ})}
≲ [ Σ_{K∈T} diam(K)^2 ‖R_K‖^2_{L_2(K)} + Σ_{ℓ∈E_T} diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)} ]^{1/2} ‖w‖_E. (2.4.25)

Finally, invoking

‖u − u_T‖_E = sup_{0≠w∈H^1_0(Ω)} a(u − u_T, w) / ‖w‖_E, (2.4.26)

the upper bound follows. ♦

2.5 Basic Principle of Adaptive Error Reduction

In this section we describe the basic principle of adaptive error reduction. Pioneering work in this direction can be found in ([19], [29]).

Assume that T′ is a refinement of a triangulation T, and that u_{T′} = A_{T′}^{−1}f ∈ W_{T′}, u_T = A_T^{−1}f ∈ W_T ⊂ W_{T′} are the Galerkin solutions. Since

a(u − u_{T′}, w_{T′}) = 0 for all w_{T′} ∈ W_{T′}, (2.5.1)

the error u − u_{T′} of the Galerkin approximation is orthogonal to the approximation space W_{T′} with respect to the energy inner product. As a consequence, we have

‖u − u_{T′}‖^2_E = ‖u − u_T‖^2_E − ‖u_{T′} − u_T‖^2_E. (2.5.2)

Suppose now that there exists a σ > 0 such that

‖u_{T′} − u_T‖_E ≥ σ‖u − u_T‖_E; (2.5.3)

then we immediately conclude that

‖u − u_{T′}‖_E ≤ √(1 − σ^2) ‖u − u_T‖_E. (2.5.4)

On the other hand, (2.5.4) also implies (2.5.3), proving the following lemma.


Lemma 2.5.1 Let u = A^{−1}f, u_T := A_T^{−1}f, and u_{T′} := A_{T′}^{−1}f, with W_T ⊂ W_{T′}. Then for σ ∈ (0, 1) we have

‖u − u_{T′}‖_E ≤ √(1 − σ^2) ‖u − u_T‖_E ⇔ ‖u_{T′} − u_T‖_E ≥ σ‖u − u_T‖_E. (2.5.5)

The second inequality in (2.5.5) is known as a saturation property. If we think of constructing a convergent algorithm, according to the above lemma, in order to guarantee a fixed error reduction we should be able to bound ‖u_{T′} − u_T‖^2_E from below by a constant multiple of ‖u − u_T‖^2_E.

The aforementioned observations lead us to the following task:

Given a triangulation T and a σ ∈ (0, 1), find a triangulation T′ that is a refinement of T such that

‖u_{T′} − u_T‖_E ≥ σ‖u − u_T‖_E. (2.5.6)
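The equivalence behind this task is elementary arithmetic on the Pythagoras identity (2.5.2). The following check (illustrative only, not part of the text) makes that concrete: with e = ‖u − u_T‖_E and d = ‖u_{T′} − u_T‖_E, the refined error is √(e² − d²), so d ≥ σe holds exactly when the refined error is at most √(1 − σ²) e.

```python
import math

# Refined error via the orthogonality/Pythagoras identity (2.5.2):
# ||u - u_T'||^2 = ||u - u_T||^2 - ||u_T' - u_T||^2.
def refined_error(e, d):
    return math.sqrt(e * e - d * d)

e, sigma = 2.0, 0.6
d = sigma * e                     # saturation property holding with equality
print(refined_error(e, d))       # equals sqrt(1 - sigma^2) * e = 1.6
```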

Based on our observations from Theorem 2.4.3, we define a routine that constructs a local refinement of a given triangulation such that, as we will see later, an error reduction in our discrete approximation is ensured. In view of the assumptions made in Theorem 2.4.3, for the moment we will simply assume that f ∈ Y_T. We return to this point in the next section.

Algorithm 2.4 REFINE
REFINE[T, f, w_T, θ] → T′
/* Input of the routine:

• admissible triangulation T
• f ∈ Y_T
• w_T ∈ W_T
• θ ∈ (0, 1)

Select in O(#T) operations F ⊂ T and G ⊂ E_T such that

Σ_{K∈F} diam(K)^2 ‖R_K‖^2_{L_2(K)} + Σ_{ℓ∈G} diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)} ≥ θ^2 E(T, f, w_T)^2. (2.5.7)

Construct a refinement T′ of T such that for all K ∈ F and ℓ ∈ G the conditions i) and ii) from Theorem 2.4.3 are satisfied. */
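The selection step (2.5.7) is a bulk-chasing ("Dörfler") marking. A minimal sketch (illustrative only): greedily collect local indicators, largest first, until the marked set carries at least the fraction θ² of the squared estimator. Note that sorting costs O(#T log #T); the O(#T) bound claimed for REFINE requires an (approximate) bucket sort instead.

```python
# indicators: squared local error indicators (element and edge contributions
# flattened into one list). Returns the indices of the marked entries.
def mark(indicators, theta):
    order = sorted(range(len(indicators)), key=lambda i: -indicators[i])
    goal = theta ** 2 * sum(indicators)
    marked, acc = [], 0.0
    for i in order:
        if acc >= goal:
            break
        marked.append(i)
        acc += indicators[i]
    return marked

# goal = 0.49 * 10 = 4.9; the two largest indicators (4.0 and 3.0) suffice.
print(mark([4.0, 1.0, 3.0, 2.0], 0.7))  # → [0, 2]
```

Marking the largest indicators first keeps the marked set as small as possible for the given θ, which is what the later optimality discussion is about.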

Theorem 2.5.2 (Basic principle of adaptive error reduction) Let T be an admissible triangulation, let f ∈ Y_T, and let u_T := A_T^{−1}f. Taking T′ = REFINE[T, f, u_T, θ], for u_{T′} := A_{T′}^{−1}f


the following error reduction holds:

‖u − u_{T′}‖_E ≤ (1 − (C_L θ / C_U)^2)^{1/2} ‖u − u_T‖_E. (2.5.8)

Proof. Thanks to the properties of REFINE, invoking iii) and v) of Theorem 2.4.3, we find that the saturation property holds:

‖u_{T′} − u_T‖^2_E ≥ C_L^2 [ Σ_{K∈F} diam(K)^2 ‖R_K‖^2_{L_2(K)} + Σ_{ℓ∈G} diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)} ]
≥ C_L^2 θ^2 E(T, f, u_T)^2
≥ (C_L^2 θ^2 / C_U^2) ‖u − u_T‖^2_E. (2.5.9)

Now, invoking Lemma 2.5.1 with σ = C_L θ / C_U, the statement follows. ♦

At this point, we realize that we have enough knowledge to formulate our first adaptive finite element algorithm, which applies in the idealized situation that f ∈ Y_T and where we moreover solve the Galerkin problems exactly.

Algorithm 2.5 An Idealized Adaptive Finite Element Algorithm
AFEMprelim[T, f, u_T, ε_0, ε] → [T, u_T]
/* Input parameters of the algorithm are: an admissible triangulation T, f ∈ Y_T, and u_T := A_T^{−1}f such that ‖u − u_T‖_{H^1_0(Ω)} ≤ ε_0. */
Select 0 < θ < 1
N := min{i ∈ N : (1 − (C_L θ / C_U)^2)^{i/2} ε_0 ≤ ε}
for i = 1, . . . , N do
T := REFINE[T, f, u_T, θ]
u_T := A_T^{−1} f
endfor

Obviously, due to Theorem 2.5.2, the adaptive algorithm AFEMprelim is convergent, and the following theorem holds.

Theorem 2.5.3 Algorithm AFEMprelim[T, f, u_T, ε_0, ε] → [T, u_T] terminates with

‖u − u_T‖_E ≤ ε. (2.5.10)
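The loop count N above is simply how often the guaranteed per-step reduction factor ρ = (1 − (C_L θ / C_U)^2)^{1/2} must be applied to ε_0 to reach ε. A small sketch (illustrative only; the constants plugged in below are made up):

```python
import math

# Smallest N with rho^N * eps0 <= eps, computed the way AFEMprelim consumes
# it: one reduction per REFINE + exact Galerkin solve.
def iteration_count(eps0, eps, CL, CU, theta):
    rho = math.sqrt(1.0 - (CL * theta / CU) ** 2)
    N, err = 0, eps0
    while err > eps:
        err *= rho
        N += 1
    return N

print(iteration_count(1.0, 0.1, CL=0.5, CU=2.0, theta=0.8))
```

Because ρ depends on the unknown constants C_L, C_U, this N is only available in the idealized setting; the practical algorithms below avoid needing it explicitly.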

2.6 First Practical Convergent Adaptive FE Algorithm

In Theorem 2.5.2, the adaptive error reduction was based on the assumption that f ∈ Y_T and on the availability of the exact Galerkin solution u_T := A_T^{−1}f. Of course, in practice, we will approximate f by some f_T ∈ Y_T using the RHS routine introduced in Section 2.3.4, and A_T^{−1}f_T will be approximated by some w_T ∈ W_T using some inexact iterative solver. This implies that the evaluation of the a posteriori error estimator, and so the mesh refinement, will be performed using these approximations. In the following, we will study how this influences the convergence of the adaptive approximations. We start by introducing an iterative solver.

To solve the Galerkin problem approximately, we assume the following routine.

Algorithm 2.6 GALSOLVE
GALSOLVE[T, f_T, u_T^{(0)}, ε] → u_T^ε
/* Input of the routine:

• an admissible triangulation T
• f_T ∈ X′_T
• u_T^{(0)} ∈ W_T
• ε > 0

Output: u_T^ε ∈ W_T such that

‖u_T − u_T^ε‖_E ≤ ε, (2.6.1)

where u_T = A_T^{−1}f_T. A call of the algorithm should take no more than O(max{1, log(ε^{−1}‖u_T − u_T^{(0)}‖_E)} #T) flops. */

To implement the routine GALSOLVE, one can, for example, apply Conjugate Gradients to the matrix–vector representation of A_T u_T = f_T with respect to the wavelet basis Ψ_T described in Section 2.8.1, which is well-conditioned uniformly in #T.
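For illustration, here is a plain Conjugate Gradient iteration for an SPD system A x = b, the kind of solver GALSOLVE could be built on; the dense list-of-lists matrix is just a stand-in for the (uniformly well-conditioned) stiffness matrix, not the wavelet machinery of [36].

```python
# Unpreconditioned CG; with a uniformly well-conditioned system the number of
# iterations to reach a fixed tolerance is bounded independently of #T.
def cg(A, b, x0, tol=1e-10, maxiter=1000):
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = list(x0)
    r = [b[i] - Av_i for i, Av_i in enumerate(matvec(x))]
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(maxiter):
        if rs ** 0.5 < tol:
            break
        Ap = matvec(p)
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# Tiny SPD test system (a 1D Laplacian stencil): solution is (2/3, 1/3).
x = cg([[2.0, -1.0], [-1.0, 2.0]], [1.0, 0.0], [0.0, 0.0])
print(x)
```

CG on an n × n SPD system converges in at most n steps in exact arithmetic, so the 2 × 2 example is solved exactly after two iterations.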

Next we investigate the stability of the error estimator.

Lemma 2.6.1 For any admissible triangulation T, f ∈ L_2(Ω), and u_T, ũ_T ∈ W_T, it holds that

|E(T, f, u_T) − E(T, f, ũ_T)| ≤ C_S ‖u_T − ũ_T‖_E, (2.6.2)

for an absolute constant C_S > 0.


Proof. Using the triangle inequality twice, first for vectors and then for functions, we find

|E(T, f, u_T) − E(T, f, ũ_T)|
= | (Σ_{K∈T} diam(K)^2 ‖R_K(f)‖^2_{L_2(K)} + Σ_{ℓ∈E_T} diam(ℓ) ‖R_ℓ(u_T)‖^2_{L_2(ℓ)})^{1/2}
− (Σ_{K∈T} diam(K)^2 ‖R_K(f)‖^2_{L_2(K)} + Σ_{ℓ∈E_T} diam(ℓ) ‖R_ℓ(ũ_T)‖^2_{L_2(ℓ)})^{1/2} |
≤ (Σ_{ℓ∈E_T} diam(ℓ) (‖R_ℓ(u_T)‖_{L_2(ℓ)} − ‖R_ℓ(ũ_T)‖_{L_2(ℓ)})^2)^{1/2}
≤ (Σ_{ℓ∈E_T} diam(ℓ) ‖R_ℓ(u_T − ũ_T)‖^2_{L_2(ℓ)})^{1/2} ≤ C_S ‖u_T − ũ_T‖_E, (2.6.3)

where in the last line we have used that, by a homogeneity argument, for any edge ℓ of a triangle K ∈ T and any w_T ∈ P_1(K), it holds that

‖R_ℓ(w_T)‖_{L_2(ℓ)} ≲ diam(ℓ)^{−1/2} ‖w_T‖_{H^1(K)}. ♦ (2.6.4)

In the following theorem, we give an error estimate for the realistic situation that on T, T′, the approximate right-hand sides f_T, f_{T′} are used, and that in the refinement routine REFINE, instead of the exact Galerkin solution, an inexact Galerkin solution is used that is produced by the application of GALSOLVE.

Theorem 2.6.2 Let T be an admissible triangulation, f ∈ H^{−1}(Ω), u = A^{−1}f, f_T ∈ Y_T, u_T = A_T^{−1}f_T, w_T ∈ W_T, and let T′ = REFINE[T, f_T, w_T, θ], f_{T′} ∈ H^{−1}(Ω), u_{T′} = A_{T′}^{−1}f_{T′}. Then it holds that

‖u − u_{T′}‖_E ≤ (1 − ½(C_L θ / C_U)^2)^{1/2} ‖u − u_T‖_E + 2C_S C_L ‖u_T − w_T‖_E + 3‖f − f_T‖_{E′} + ‖f − f_{T′}‖_{E′}. (2.6.5)

Proof. First,

‖u − u_{T′}‖_E = ‖A^{−1}f − A_{T′}^{−1}f_{T′}‖_E
≤ ‖A^{−1}f − A^{−1}f_T‖_E + ‖A^{−1}f_T − A_{T′}^{−1}f_T‖_E + ‖A_{T′}^{−1}f_T − A_{T′}^{−1}f_{T′}‖_E
≤ ‖f − f_T‖_{E′} + ‖A^{−1}f_T − A_{T′}^{−1}f_T‖_E + ‖f_T − f_{T′}‖_{E′}
≤ 2‖f − f_T‖_{E′} + ‖A^{−1}f_T − A_{T′}^{−1}f_T‖_E + ‖f − f_{T′}‖_{E′}. (2.6.6)

Now, to get an estimate for ‖A^{−1}f_T − A_{T′}^{−1}f_T‖_E, we apply a similar analysis as in the proof of Theorem 2.5.2. With F ⊂ T, G ⊂ E_T as determined in the call of REFINE, using Theorem 2.4.3 (iii), Lemma 2.6.1, a property of REFINE, again Lemma 2.6.1,


and Theorem 2.4.3 (v), we find

‖A_{T′}^{−1}f_T − u_T‖_E
≥ C_L (Σ_{K∈F} diam(K)^2 ‖R_K(f_T)‖^2_{L_2(K)} + Σ_{ℓ∈G} diam(ℓ) ‖R_ℓ(u_T)‖^2_{L_2(ℓ)})^{1/2}
≥ C_L ((Σ_{K∈F} diam(K)^2 ‖R_K(f_T)‖^2_{L_2(K)} + Σ_{ℓ∈G} diam(ℓ) ‖R_ℓ(w_T)‖^2_{L_2(ℓ)})^{1/2} − C_S ‖u_T − w_T‖_E)
≥ C_L (θ E(T, f_T, w_T) − C_S ‖u_T − w_T‖_E)
≥ C_L (θ E(T, f_T, u_T) − 2C_S ‖u_T − w_T‖_E)
≥ C_L ((θ/C_U) ‖A^{−1}f_T − u_T‖_E − 2C_S ‖u_T − w_T‖_E). (2.6.7)

Due to the Galerkin orthogonality and the previous estimate, we have

‖A^{−1}f_T − A_{T′}^{−1}f_T‖^2_E = ‖A^{−1}f_T − u_T‖^2_E − ‖A_{T′}^{−1}f_T − u_T‖^2_E
≤ ‖A^{−1}f_T − u_T‖^2_E − C_L^2 ((θ/C_U) ‖A^{−1}f_T − u_T‖_E − 2C_S ‖u_T − w_T‖_E)^2
≤ ‖A^{−1}f_T − u_T‖^2_E − C_L^2 (½ (θ/C_U)^2 ‖A^{−1}f_T − u_T‖^2_E − 4C_S^2 ‖u_T − w_T‖^2_E)
= (1 − ½ (θC_L/C_U)^2) ‖A^{−1}f_T − u_T‖^2_E + 4C_S^2 C_L^2 ‖u_T − w_T‖^2_E
≤ ((1 − ½ (θC_L/C_U)^2)^{1/2} ‖A^{−1}f_T − u_T‖_E + 2C_S C_L ‖u_T − w_T‖_E)^2
≤ ((1 − ½ (θC_L/C_U)^2)^{1/2} (‖u − u_T‖_E + ‖f − f_T‖_{E′}) + 2C_S C_L ‖u_T − w_T‖_E)^2, (2.6.8)

where in the third line we have used that for any scalars a, b, (a − b)^2 ≥ ½a^2 − b^2. Combination of the last result with the first bound obtained in this proof completes the task. ♦

Corollary 2.6.3 (General adaptive error reduction estimate) For any μ ∈ ((1 − ½(C_L θ/C_U)^2)^{1/2}, 1) there exists a sufficiently small constant δ > 0 such that if, for f ∈ H^{−1}(Ω), an admissible triangulation T, f_T ∈ Y_T, w_T ∈ W_T, T′ = REFINE[T, f_T, w_T, θ], f_{T′} ∈ H^{−1}(Ω), w_{T′} ∈ W_{T′} and ε > 0, with u = A^{−1}f, u_T = A_T^{−1}f_T, u_{T′} = A_{T′}^{−1}f_{T′}, we have that ‖u − w_T‖_E ≤ ε and

‖u_T − w_T‖_E + ‖f − f_T‖_{E′} + ‖u_{T′} − w_{T′}‖_E + ‖f − f_{T′}‖_{E′} ≤ 2(1 + μ)δε, (2.6.9)

then

‖u − w_{T′}‖_E ≤ με. (2.6.10)


Proof. Using the triangle inequality, the conditions of the corollary and Theorem 2.6.2, we easily find

‖u − w_{T′}‖_E ≤ ‖u − u_{T′}‖_E + ‖u_{T′} − w_{T′}‖_E
≤ (1 − ½(C_L θ/C_U)^2)^{1/2} ‖u − u_T‖_E + 2C_S C_L ‖u_T − w_T‖_E + 3‖f − f_T‖_{E′} + ‖f − f_{T′}‖_{E′} + ‖u_{T′} − w_{T′}‖_E
≤ μ‖u − w_T‖_E + ((1 − ½(C_L θ/C_U)^2)^{1/2} − μ)‖u − w_T‖_E
+ max{2C_S C_L, 3}(‖u_T − w_T‖_E + ‖f − f_T‖_{E′} + ‖f − f_{T′}‖_{E′} + ‖u_{T′} − w_{T′}‖_E)
≤ με, (2.6.11)

where we have chosen δ to satisfy

0 < δ < (μ − (1 − ½(C_L θ/C_U)^2)^{1/2}) / (2(1 + μ) max{2C_S C_L, 3}). ♦ (2.6.12)

Now we are ready to present our practical adaptive algorithm and to state a theorem about its convergence.

Algorithm 2.7 First Practical Convergent Adaptive FE Algorithm
AFEM1[T, f, w_T, ε_0, ε] → [T, w_T]
/* Input parameters of the algorithm are: f ∈ H^{−1}(Ω), an admissible triangulation T and w_T ∈ W_T such that, with u = A^{−1}f, ‖u − w_T‖_{H^1_0(Ω)} ≤ ε_0. The parameter δ < 1/3 is chosen such that it corresponds to a μ < 1 as in Corollary 2.6.3. */
ε_1 := ε_0 / (1 − 3δ)
[T, f_T] := RHS[T, f, δε_1]
w_T := GALSOLVE[T, f_T, w_T, δε_1]
N := min{j ∈ N : μ^j ε_1 ≤ ε}
for k = 1, . . . , N do
T := REFINE[T, f_T, w_T, θ]
[T, f_T] := RHS[T, f, δμ^k ε_1]
w_T := GALSOLVE[T, f_T, w_T, δμ^k ε_1]
endfor
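The control flow of AFEM1, stripped of all analysis, is a loop with a geometrically decreasing tolerance schedule δμ^k ε_1. The sketch below is a structural stand-in only: `rhs`, `galsolve` and `refine` are hypothetical callables playing the roles of RHS, GALSOLVE and REFINE, not implementations of them.

```python
def afem1(T, f, wT, eps0, eps, mu, delta, theta=0.7,
          rhs=None, galsolve=None, refine=None):
    eps1 = eps0 / (1 - 3 * delta)
    T, fT = rhs(T, f, delta * eps1)
    wT = galsolve(T, fT, wT, delta * eps1)
    k = 0
    while mu ** k * eps1 > eps:     # runs for k = 1..N, N = min{j : mu^j eps1 <= eps}
        k += 1
        T = refine(T, fT, wT, theta)
        T, fT = rhs(T, f, delta * mu ** k * eps1)
        wT = galsolve(T, fT, wT, delta * mu ** k * eps1)
    return T, wT
```

With trivial stub routines one can at least check the bookkeeping, e.g. that with μ = 0.5, δ = 0.1, ε_0 = 1 and ε = 0.5 exactly two loop passes (three RHS calls in total) are made.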

Theorem 2.6.4 Algorithm AFEM1[T, f, w_T, ε_0, ε] → [T, w_T] terminates with

‖u − w_T‖_E ≤ ε. (2.6.13)

Proof. By induction on k, we will show that before the kth call of REFINE,

‖u − w_T‖_E ≤ ε_1 μ^{k−1}, (2.6.14)


meaning that after the kth inner loop ‖u − w_T‖_E ≤ ε_1 μ^k, which, by the definition of N, proves the theorem. For k = 1, after the call [T, f_T] := RHS[T, f, δε_1], it holds that

‖f − f_T‖_{E′} ≤ δε_1. (2.6.15)

Further, we note that for the input w_T we have ‖u − w_T‖_E ≤ (1 − 3δ)ε_1. Using the triangle inequality and the fact that A_T^{−1}f_T is the best approximation to A^{−1}f_T from W_T with respect to ‖·‖_E, we find

‖u − A_T^{−1}f_T‖_E ≤ ‖u − A^{−1}f_T‖_E + ‖A^{−1}f_T − A_T^{−1}f_T‖_E
≤ ‖u − A^{−1}f_T‖_E + ‖A^{−1}f_T − w_T‖_E
≤ 2‖u − A^{−1}f_T‖_E + ‖u − w_T‖_E
≤ 2‖f − f_T‖_{E′} + ‖u − w_T‖_E ≤ 2δε_1 + (1 − 3δ)ε_1 ≤ (1 − δ)ε_1. (2.6.16)

After the call w_T := GALSOLVE[T, f_T, w_T, δε_1] we have

‖A_T^{−1}f_T − w_T‖_E ≤ δε_1. (2.6.17)

Together with the previous estimate this gives

‖u − w_T‖_E ≤ ‖u − A_T^{−1}f_T‖_E + ‖A_T^{−1}f_T − w_T‖_E ≤ ε_1, (2.6.18)

i.e., (2.6.14) is valid for k = 1.

Let us now assume that (2.6.14) is valid for some k. By the last calls of RHS and GALSOLVE for the current T, w_T, f_T, i.e., just before the kth call of REFINE, we have

‖f − f_T‖_{E′} ≤ δε_1 μ^{k−1}, ‖A_T^{−1}f_T − w_T‖_E ≤ δε_1 μ^{k−1}. (2.6.19)

The subsequent calls T′ := REFINE[T, f_T, w_T, θ], [T′, f_{T′}] := RHS[T′, f, μ^k δε_1], w_{T′} := GALSOLVE[T′, f_{T′}, w_{T′}, μ^k δε_1] result in

‖f − f_{T′}‖_{E′} ≤ δε_1 μ^k, ‖A_{T′}^{−1}f_{T′} − w_{T′}‖_E ≤ δε_1 μ^k. (2.6.20)

Therefore, we obtain

‖f − f_T‖_{E′} + ‖A_T^{−1}f_T − w_T‖_E + ‖f − f_{T′}‖_{E′} + ‖A_{T′}^{−1}f_{T′} − w_{T′}‖_E ≤ 2(1 + μ)δε_1 μ^{k−1}. (2.6.21)

Using the induction hypothesis ‖u − w_T‖_E ≤ ε_1 μ^{k−1}, Corollary 2.6.3 shows

‖u − w_{T′}‖_E ≤ ε_1 μ^k, (2.6.22)

with which (2.6.14) is proven. ♦


2.7 Second Practical Adaptive FE Algorithm

In the previous section we analyzed a practical convergent adaptive FE algorithm. Although this algorithm is quite general, it has one drawback: the tolerances in RHS and GALSOLVE are chosen ≂ μ^k. Turning to Theorem 2.6.2, we see that choosing μ < (1 − ½(C_L θ/C_U)^2)^{1/2} can result in unnecessarily small tolerances and expensive iterations. On the other hand, the choice μ > (1 − ½(C_L θ/C_U)^2)^{1/2} can cause a reduced convergence rate. A proper choice for μ is not easy to make, because the constants C_U and C_L are generally not known and have to be estimated.

In this section we present an alternative algorithm, in which the mentioned tolerances are chosen as a fixed multiple of an a posteriori error estimator of the error in the previous iterand. In this algorithm the a posteriori error estimator is enhanced with extra terms to incorporate the errors in the approximate right-hand sides and inexact Galerkin solutions, as analyzed in the following proposition.

Proposition 2.7.1 i) Let C_O := 1 + C_U C_S. For f ∈ H^{−1}(Ω), any admissible triangulation T, w_T ∈ W_T, f_T ∈ L_2(Ω), with u = A^{−1}f and u_T := A_T^{−1}f_T, we have

‖u − w_T‖_E ≤ C_U E(T, f_T, w_T) + ‖f − f_T‖_{E′} + C_O ‖u_T − w_T‖_E. (2.7.1)

ii) With f_T ∈ Y_T, and ξ_T being an upper bound for ‖f − f_T‖_{E′} + C_O ‖u_T − w_T‖_E, we have that

C_U E(T, f_T, w_T) + ξ_T ≂ ‖u − w_T‖_E + ξ_T. (2.7.2)

iii) For any μ ∈ ((1 − ½(C_L θ/C_U)^2)^{1/2}, 1), there exist C_μ > 0 small enough and C_E > 0 such that if T′ = REFINE[T, f_T, w_T, θ], w_{T′} ∈ W_{T′}, f_{T′} ∈ H^{−1}(Ω), u_{T′} := A_{T′}^{−1}f_{T′}, and

ξ_{T′} ≤ C_μ(1 + C_O)(C_U E(T, f_T, w_T) + ξ_T), (2.7.3)

where ξ_{T′} denotes an upper bound on ‖f − f_{T′}‖_{E′} + C_O ‖u_{T′} − w_{T′}‖_E, then

‖u − w_{T′}‖_E + C_E ξ_{T′} ≤ μ(‖u − w_T‖_E + C_E ξ_T). (2.7.4)

Proof. i) Using the triangle inequality, Theorem 2.4.3 (v) and Lemma 2.6.1, we easily find

‖u − w_T‖_E ≤ ‖u − A^{−1}f_T‖_E + ‖A^{−1}f_T − u_T‖_E + ‖u_T − w_T‖_E
≤ ‖f − f_T‖_{E′} + C_U E(T, f_T, u_T) + ‖u_T − w_T‖_E
≤ ‖f − f_T‖_{E′} + C_U E(T, f_T, w_T) + (1 + C_U C_S)‖u_T − w_T‖_E. (2.7.5)

ii) Using Lemma 2.6.1, Theorem 2.4.3 (iv) and the triangle inequality, we have

E(T, f_T, w_T) ≤ E(T, f_T, u_T) + C_S ‖u_T − w_T‖_E
≤ (1/C_L)‖A^{−1}f_T − u_T‖_E + C_S ‖u_T − w_T‖_E
≤ (1/C_L)(‖u − w_T‖_E + ‖f − f_T‖_{E′}) + (C_S + 1/C_L)‖u_T − w_T‖_E
≲ ‖u − w_T‖_E + ‖f − f_T‖_{E′} + ‖u_T − w_T‖_E
≲ ‖u − w_T‖_E + ξ_T. (2.7.6)


The proof in the other direction follows from (2.7.1).

iii) Using Theorem 2.6.2, we find that with C := max{(1 + 2C_L C_S)/C_O, 3} it holds that

‖u − w_{T′}‖_E + C_E ξ_{T′} ≤ (1 − ½(C_L θ/C_U)^2)^{1/2} ‖u − w_T‖_E + C(ξ_T + ξ_{T′}) + C_E ξ_{T′}. (2.7.7)

Then, for a given μ ∈ ((1 − ½(C_L θ/C_U)^2)^{1/2}, 1), the error reduction (2.7.4) holds if

(C + C_E)ξ_{T′} ≤ (μ − (1 − ½(C_L θ/C_U)^2)^{1/2})‖u − w_T‖_E + (μC_E − C)ξ_T. (2.7.8)

Finally, selecting C_E > C/μ and taking C_μ small enough such that

C_μ(1 + C_O)(C_U E(T, f_T, w_T) + ξ_T) ≤ [(μ − (1 − ½(C_L θ/C_U)^2)^{1/2})‖u − w_T‖_E + (μC_E − C)ξ_T] / (C + C_E), (2.7.9)

the proof is complete. ♦

Now we are ready to present a convergent adaptive algorithm with a posteriori error control.

Algorithm 2.8 Second Practical Adaptive FE Algorithm
AFEM2[T, f, w_T, ε_0, ε] → [T, w_T]
/* Input parameters of the algorithm are: f ∈ H^{−1}(Ω), an admissible triangulation T, ε_0, ε > 0, and w_T ∈ W_T such that, with u := A^{−1}f, max{ε, ‖u − w_T‖_E} ≤ ε_0. Let C_μ be a constant that corresponds to a μ ∈ ((1 − ½(C_L θ/C_U)^2)^{1/2}, 1) as in Proposition 2.7.1 (iii). */
EO := ε_0
[T, f_T] := RHS[T, f, C_μ EO]
w_T := GALSOLVE[T, f_T, w_T, C_μ EO]
while EO := C_U E(T, f_T, w_T) + (1 + C_O)C_μ EO > ε do
T := REFINE[T, f_T, w_T, θ]
[T, f_T] := RHS[T, f, C_μ EO]
w_T := GALSOLVE[T, f_T, w_T, C_μ EO]
endwhile
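Structurally, AFEM2 differs from AFEM1 only in how the tolerances are set: they are a fixed multiple C_μ of the current computable bound EO, which is recomputed from the a posteriori estimator in each pass. The sketch below is a stand-in for that control flow only; `estimator`, `rhs`, `galsolve` and `refine` are hypothetical callables, not FEM implementations.

```python
def afem2(T, f, wT, eps0, eps, CU, CO, Cmu, theta,
          estimator=None, rhs=None, galsolve=None, refine=None):
    EO = eps0
    T, fT = rhs(T, f, Cmu * EO)
    wT = galsolve(T, fT, wT, Cmu * EO)
    while True:
        # Enhanced a posteriori bound: estimator term plus the tolerance debt
        # incurred by the inexact right-hand side and Galerkin solve.
        EO = CU * estimator(T, fT, wT) + (1 + CO) * Cmu * EO
        if EO <= eps:
            return T, wT
        T = refine(T, fT, wT, theta)
        T, fT = rhs(T, f, Cmu * EO)
        wT = galsolve(T, fT, wT, Cmu * EO)
```

No a priori knowledge of the number of iterations is needed: the loop stops as soon as the computable bound EO, which majorizes ‖u − w_T‖_E by (2.7.1), drops below ε.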

The following theorem states results about the convergence and complexity of this algorithm. A similar complexity analysis could also be performed for the algorithm AFEM1.

Theorem 2.7.2 i) For some absolute constant C > 0, Algorithm AFEM2[T, f, w_T, ε_0, ε] → [T, w_T] terminates with

‖u − w_T‖_E ≤ ε (2.7.10)


after at most M iterations of the while-loop, with M being the smallest integer with Cμ^M ≤ ε/ε_0.

ii) The number of operations required by a call of AFEM2 is proportional to the sum of the cardinalities of all triangulations created by this algorithm.

Proof. i) Let us consider any newly computed w_T in the while-loop, and let us denote by δ_1, δ_2 the tolerances that were used in the most recent calls of RHS and GALSOLVE, respectively, and let ξ_T := δ_1 + C_O δ_2. Then ξ_T is equal to (1 + C_O)C_μ(C_U E(T, f_T, w_T) + ξ_T), where in this expression T, f_T, w_T, ξ_T refer to the previous triangulation, right-hand side approximation, approximate solution and previous value of ξ_T, respectively.

Since ξ_T thus satisfies (2.7.3), we conclude that each call of the triple REFINE, RHS, GALSOLVE reduces ‖u − w_T‖_E + C_E ξ_T by at least a factor μ. Thanks to (2.7.2) and (2.7.1) it holds that

EO ≲ ‖u − w_T‖_E + C_E ξ_T (2.7.11)

and

‖u − w_T‖_E ≤ EO. (2.7.12)

The proof is completed by taking into account the upper bounds ε_0 and (1 + C_O)C_μ ε_0 for the initial values of ‖u − w_T‖_E and ξ_T, respectively.

ii) We now analyse the complexity of the algorithm. Thanks to the properties of REFINE and RHS, in producing a triangulation T these routines require ≲ #T flops. Before any call w_T = GALSOLVE[T, f_T, w_T, δEO] we have ‖f − f_T‖_{E′} ≤ δEO and ‖u − w_T‖_E ≤ EO. Using the triangle inequality and the property of the Galerkin solution of being the best approximation with respect to ‖·‖_E, we find

‖u − A_T^{−1}f_T‖_E ≤ ‖u − A^{−1}f_T‖_E + ‖A^{−1}f_T − A_T^{−1}f_T‖_E
≤ ‖u − A^{−1}f_T‖_E + ‖A^{−1}f_T − w_T‖_E
≤ 2‖u − A^{−1}f_T‖_E + ‖u − w_T‖_E
≤ 2‖f − f_T‖_{E′} + ‖u − w_T‖_E ≤ (2δ + 1)EO, (2.7.13)

and so

‖w_T − A_T^{−1}f_T‖_E ≤ ‖u − w_T‖_E + ‖u − A_T^{−1}f_T‖_E ≤ 2(δ + 1)EO. (2.7.14)

Observing that 2(δ + 1)EO / (δEO) is an absolute constant, we conclude that also a call of GALSOLVE requires ≲ #T flops. ♦

2.8 Optimal Adaptive Algorithms

In the previous section we were able to prove convergence of our practical adaptive algorithm AFEM2. However, this result does not guarantee optimality of the algorithm. Recently, in [6], Binev, Dahmen and DeVore proposed a modification of the method from [29] by adding a coarsening step, and proved that the resulting adaptive method achieves the optimal convergence rate. Later, in [36], Stevenson developed a similar coarsening routine that is based on a transformation to a wavelet basis. In the next section, we discuss the construction and properties of this wavelet basis.


2.8.1 Multiscale Decomposition of FEM Approximation Spaces

In this section we briefly discuss the wavelet bases mentioned before in connection with the GALSOLVE routine. Our exposition is based on the results from [36] and uses the definitions of vertex of a triangulation and regular vertex that we made in Section 2.3.3. For an admissible triangulation T, let {φ_T^v : v ∈ V_T} denote the nodal basis for W_T. In the following, we will denote by T_i^* the triangulation constructed by applying i recursive uniform red-refinements to T_0. Recalling the properties of the finite element nodal basis functions, we easily check the following result.

Lemma 2.8.1 Let v be a regular vertex of a triangulation T. Then with i = gen(K) for some, and thus for any, K ∈ T that contains v, we have φ_{T_i^*}^v ∈ W_T.

For a triangulation T, recall the definition of the sequence T_0, . . . , T_n = T. Using that any v ∈ V_{T_i} \ V_{T_{i−1}} is a regular vertex of T_i (Proposition 2.3.3), one verifies the following lemma.

Lemma 2.8.2 For any triangulation T = T_n, ∪_{i=0}^n {φ_{T_i^*}^v : v ∈ V_{T_i} \ V_{T_{i−1}}} (with V_{T_{−1}} := ∅) is a basis for W_T, called the hierarchical basis.

We will now turn the hierarchical basis into a true wavelet basis by correcting each basis function, except those on the coarsest level, by a linear combination of coarse-grid nodal basis functions.

Let us define

V_* := ∪_{i≥0} ( V_{T_i^*} \ V_{T_{i−1}^*} ),   where V_{T_{−1}^*} := ∅.    (2.8.1)

Let v ∈ V_* and i be such that v ∈ V_{T_i^*} \ V_{T_{i−1}^*}. When i > 0, there exist two triangles K_1, K_2 ∈ T_{i−1}^* such that v is the midpoint of their common edge. Denoting by v_1(v), . . . , v_4(v) the vertices of these K_1, K_2, with v_2(v), v_3(v) being the vertices of the edge containing v, we define

ψ_v = φ_{T_0^*}^v,                                          v ∈ V_{T_0^*},
ψ_v = φ_{T_i^*}^v − ∑_{j=1}^4 μ_{v,j} φ_{T_{i−1}^*}^{v_j(v)},   otherwise,    (2.8.2)

where μ_{v,2} = μ_{v,3} = −3/16, μ_{v,1} = μ_{v,4} = 1/16, and μ_{v,j} = 0 if v_j(v) ∈ ∂Ω ([36], [15]). Since, as one can verify, ∫_Ω ψ_v = 0 for all ψ_v except those of the coarsest level, it is appropriate to call the new basis a wavelet basis. Due to the construction procedure used, it is also called a coarse-grid correction wavelet basis. The following result was shown in [36].

Theorem 2.8.3 With ψ_v := ψ_v/‖ψ_v‖_E,

{ψ_v : v ∈ V_*}    (2.8.3)

is a Riesz basis for H_0^1(Ω).


Figure 2.14: Wavelet ψv

Each u ∈ H_0^1(Ω) has a unique representation u = ∑_{v∈V_*} c_v ψ_v. With ‖u‖_Ψ := (∑_{v∈V_*} c_v²)^{1/2}, we define λ_Ψ, Λ_Ψ > 0 to be the largest and smallest constants, respectively, such that

λ_Ψ ‖u‖_Ψ² ≤ ‖u‖_E² ≤ Λ_Ψ ‖u‖_Ψ²    (u ∈ H_0^1(Ω)).    (2.8.4)

The condition number of the wavelet basis is then given by κ_Ψ := Λ_Ψ/λ_Ψ.

Let us now consider an admissible triangulation T, with corresponding sequence T_0, . . . , T_n = T. Because of the admissibility, for any v ∈ V_{T_i} \ V_{T_{i−1}} (i > 0), the vertices v_1(v), . . . , v_4(v), if not on ∂Ω, are regular vertices of T_{i−1}. Thanks to this property one infers

Theorem 2.8.4 For an admissible triangulation T, {ψ_v : v ∈ V_T} is a basis for W_T. In particular, it is a uniform Riesz basis (with respect to the H¹-metric), i.e., the stability constants are independent of T.

Given an admissible triangulation T = T_n, any w_T ∈ W_T can be represented in the nodal basis as well as in the wavelet basis:

w_T = ∑_{v∈V_T} w_T(v) φ_T^v = ∑_{v∈V_T} c_v ψ_v.    (2.8.5)

Whenever a transformation from one basis to the other is needed, the following routines can be used.


Algorithm 2.9 Transformation from nodal basis to wavelet basis

NodaltoWavelet[(w_T(v))_{v∈V_T}] → [(c_v)_{v∈V_T}]
/* Input: (w_T(v))_{v∈V_T} — coefficients of the representation of w_T ∈ W_T with respect to the nodal basis, where w_T(v) := 0 for any v ∈ V_T for which no value is supplied.
Output: (c_v)_{v∈V_T} — coefficients of the representation of w_T ∈ W_T with respect to the wavelet basis. */

for v ∈ V_T do c_v := w_T(v) endfor
for i = n, . . . , 1 do
    for v ∈ V_{T_i} \ V_{T_{i−1}} do c_v := c_v − (c_{v_2(v)} + c_{v_3(v)})/2 endfor
    for v ∈ V_{T_i} \ V_{T_{i−1}} do
        for j = 1, . . . , 4 do c_{v_j(v)} := c_{v_j(v)} + c_v μ_{v,j} endfor
    endfor
endfor

Algorithm 2.10 Transformation from wavelet basis to FE basis

WavelettoNodal[(c_v)_{v∈V_T}] → [(w_T(v))_{v∈V_T}]
/* Input: (c_v)_{v∈V_T} — coefficients of the representation of w_T ∈ W_T with respect to the wavelet basis, where c_v := 0 for any v ∈ V_T for which no coefficient is supplied.
Output: (w_T(v))_{v∈V_T} — coefficients of the representation of w_T ∈ W_T with respect to the nodal basis. */

for i = 1, . . . , n do
    for v ∈ V_{T_i} \ V_{T_{i−1}} do
        for j = 1, . . . , 4 do c_{v_j(v)} := c_{v_j(v)} − c_v μ_{v,j} endfor
    endfor
    for v ∈ V_{T_i} \ V_{T_{i−1}} do c_v := c_v + (c_{v_2(v)} + c_{v_3(v)})/2 endfor
endfor
for v ∈ V_T do w_T(v) := c_v endfor

The next statement about the complexity of the above algorithms is obvious.

Proposition 2.8.5 Both algorithms, NodaltoWavelet and WavelettoNodal, take O(]T) operations.
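To make the structure of these two loops concrete, here is a small Python sketch of the transform pair over a generic vertex hierarchy. The level lists, edge-neighbour map and correction weights are supplied as plain data; they are hypothetical placeholders, not the actual geometry or weights of a 2D triangulation.

```python
def nodal_to_wavelet(w, levels, nbrs, mu):
    """Algorithm 2.9 in sketch form.  w: vertex -> nodal value;
    levels: list of vertex lists, levels[0] the coarsest; nbrs[v] = (v2, v3),
    the endpoints of the edge whose midpoint is v; mu[v]: coarse vertex -> weight.
    Each vertex is touched O(1) times, so the cost is O(number of vertices)."""
    c = dict(w)
    for lev in reversed(levels[1:]):      # i = n, ..., 1
        for v in lev:                     # subtract the coarse linear interpolant
            v2, v3 = nbrs[v]
            c[v] -= (c[v2] + c[v3]) / 2
        for v in lev:                     # coarse-grid correction
            for u, m in mu[v].items():
                c[u] += c[v] * m
    return c

def wavelet_to_nodal(c, levels, nbrs, mu):
    """Algorithm 2.10 in sketch form: exactly the inverse of the above,
    undoing the two half-steps in reverse order."""
    w = dict(c)
    for lev in levels[1:]:                # i = 1, ..., n
        for v in lev:
            for u, m in mu[v].items():
                w[u] -= w[v] * m
        for v in lev:
            v2, v3 = nbrs[v]
            w[v] += (w[v2] + w[v3]) / 2
    return w
```

Running the two routines back to back returns the original nodal values, reflecting that the second algorithm is exactly the inverse of the first.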

2.8.2 Optimization of Adaptive Triangulations

In order to guarantee an optimal relation between the number of degrees of freedom and the error of approximation, we will add a coarsening routine to our adaptive algorithm. The aim is to undo refinements that turn out to contribute little to a better approximation, but that, due to their possibly large number, contribute significantly to the cardinality of the triangulation. Given a finite element approximation with respect to an admissible triangulation, we can compute its wavelet coefficients. Then, in order to find a more efficient representation, in view of the norm equivalence (2.8.4), one may think of ordering the wavelet


coefficients by their size and then removing the smallest ones until the desired tolerance is met. Note, however, that the remaining function has to be representable as a finite element function. With such unconstrained coarsening, a small number of remaining wavelet coefficients does not imply that the corresponding function can be represented as a finite element function with respect to a triangulation that has only a few triangles. Having in mind that a triangulation can be represented as a tree, we will instead search for a best tree approximation.

It is possible to equip V_* with a tree structure as follows. The vertices from V_{T_0^*} will be the roots of this tree. For v ∈ V_{T_1^*} \ V_{T_0^*}, assuming that one of the vertices v_1(v), . . . , v_4(v) is not on the boundary ∂Ω, we pick one of them to be the parent of v. Now consider i > 1 and v ∈ V_{T_i^*} \ V_{T_{i−1}^*}. At least one of v_1(v), . . . , v_4(v) is in V_{T_{i−1}^*} \ V_{T_{i−2}^*}, and we can pick one of them to be the parent of v. In the case of multiple choices, one can design a deterministic rule to make the selection, for example as follows. If one of v_2(v) or v_3(v) is in V_{T_{i−2}^*}, then the other is in V_{T_{i−1}^*} \ V_{T_{i−2}^*}, and we select it to be the parent of v. Otherwise, if both v_2(v), v_3(v) ∈ V_{T_{i−1}^*} \ V_{T_{i−2}^*}, then one of the remaining vertices v_1(v), v_4(v) is in V_{T_{i−2}^*}, and we call it v_1(v), whereas the other, thus called v_4(v), is in V_{T_{i−1}^*} \ V_{T_{i−2}^*}. After numbering v_1(v), v_2(v), v_3(v) in a clockwise direction, we select the first of v_2(v), v_3(v), v_4(v) ∈ V_{T_{i−1}^*} \ V_{T_{i−2}^*} to be the parent of v. In view of the described construction, the number of children of any parent in this tree is uniformly bounded, dependent only on T_0. We will call V ⊂ V_* a (vertex) subtree when it contains all roots of V_*, and when, for any v ∈ V, all its ancestors and all its siblings, i.e., those ṽ ∈ V_* that have the same parent as v, are also in V. For an admissible triangulation T, V_T is nearly a subtree of V_* since it contains the roots of V_* as well as all ancestors of any v ∈ V_T; there may, however, be v ∈ V_T with siblings outside V_T.
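The three subtree conditions (roots, ancestors, siblings) can be checked mechanically. The sketch below assumes the tree is given by hypothetical `parent` and `children` maps; since every vertex's parent is required to lie in V, all ancestors lie in V by induction up the tree.

```python
def is_subtree(V, parent, children):
    """Check whether the vertex set V is a subtree in the sense above:
    V contains all roots (parent is None), and for every v in V also
    v's parent (hence all ancestors, by induction) and v's siblings
    (all vertices sharing v's parent)."""
    roots = {v for v in parent if parent[v] is None}
    if not roots <= V:
        return False
    for v in V:
        p = parent[v]
        if p is None:
            continue
        if p not in V:                  # parent missing => ancestor condition fails
            return False
        if not set(children[p]) <= V:   # a sibling of v is missing
            return False
    return True
```

On a toy tree with root r, children a, b of r, and child c of a, the set {r, a, b} is a subtree, while {r, a} is not (the sibling b of a is missing).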

Given a vertex subtree V, we can construct a corresponding triangulation by the following routine.

Algorithm 2.11 ConstructTriangulation

ConstructTriangulation[V] → [T]
T_0 := T_0, i := 1
while V ∩ (V_{T_i^*} \ V_{T_{i−1}^*}) ≠ ∅ do
    construct T_i from T_{i−1} by refining those K ∈ T_{i−1} that contain a v ∈ V ∩ (V_{T_i^*} \ V_{T_{i−1}^*})
    i := i + 1
endwhile

Theorem 2.8.6 For a subtree V ⊂ V_*, let T̃ be a triangulation constructed by a call ConstructTriangulation[V]. Then for the admissible triangulation T := MakeAdmissible[T̃] it holds that

]T ≲ ]V  and  span{ψ_v : v ∈ V} ⊂ W_T.    (2.8.6)

Proof. The first part of the statement follows by construction and the properties of MakeAdmissible. For the second part, V ⊂ V_T, and so span{ψ_v : v ∈ V} ⊂ span{ψ_v : v ∈ V_T} = W_T. ♦


Let w = ∑_{v∈V_*} c_v ψ_v. Then for any subset V ⊂ V_*, its best approximation with respect to ‖·‖_Ψ from span{ψ_v : v ∈ V} is ∑_{v∈V} c_v ψ_v, and the squared error E(V) of this approximation with respect to ‖·‖_Ψ is E(V) = ∑_{v∈V_*\V} |c_v|². Defining for v ∈ V_* the error functional e(v) := ∑_{ṽ a descendant of v} |c_ṽ|², for any subtree V ⊂ V_* we have E(V) = ∑_{v∈L(V)} e(v), where L(V) denotes the set of leaves of V.

Further, we also define a modified error functional ẽ(v) for v ∈ V_* as follows. For the roots v ∈ V_{T_0}, ẽ(v) := e(v), and assuming that ẽ(v) has been defined, for all of its children v_1, . . . , v_m,

ẽ(v_j) := ( ∑_{i=1}^m e(v_i) / ( ∑_{i=1}^m e(v_i) + ẽ(v) ) ) ẽ(v).    (2.8.7)

To compute the error functionals we define the following routines.

Algorithm 2.12 ComputeErrorFunctional

ComputeErrorFunctional[T, (c_v)_{v∈V_T}] → [(e(v))_{v∈V_T}]
for all v ∈ V_{T_0} do
    Compute-e(v)
endfor

Algorithm 2.13 Compute-e

Compute-e(v)
if v is a leaf of V_T then e(v) := 0
else e(v) := ∑_{ṽ∈V_T : ṽ a child of v} ( c_ṽ² + Compute-e(ṽ) )
endif

Proposition 2.8.7 The call of ComputeErrorFunctional[T, (c_v)_{v∈V_T}] requires O(]T) operations.
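The recursion behind Compute-e can be sketched as follows; the `children` and `c` maps are assumed inputs describing the vertex tree and the wavelet coefficients. Each vertex is visited exactly once, which is why the whole pass is linear in the number of vertices.

```python
def compute_e(v, children, c, e):
    """Compute e(v) = sum over all descendants d of v of c[d]**2, via the
    recursion e(v) = sum over children w of (c[w]**2 + e(w)); a leaf gets
    e(v) = 0.  All values are stored in the dict e as a side effect."""
    total = 0.0
    for w in children.get(v, []):
        total += c[w] ** 2 + compute_e(w, children, c, e)
    e[v] = total
    return total
```

For a toy tree r → {a, b}, a → {c} with coefficients c_a = 2, c_b = 3, c_c = 1, one gets e(a) = 1 and e(r) = 4 + 1 + 9 = 14, as the definition demands (the coefficient of r itself is not counted in e(r)).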

Now we are ready to describe the ThresholdingSecondAlgorithm from [8] for determining a quasi-optimal subtree approximation.

Algorithm 2.14 ThresholdingSecondAlgorithm

ThresholdingSecondAlgorithm[(ẽ(v))_{v∈V_*}, ε] → [V]
V := V_{T_0}
while E(V) > ε² do
    compute ρ := max_{v∈L(V)} ẽ(v)
    for all v ∈ L(V) with ẽ(v) = ρ, add all children of v to V
endwhile
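An unoptimized transcription of this greedy loop might look as follows, with the tree, the coefficients and the (modified) error functionals supplied as precomputed data. It uses an exact maximum search rather than the approximate sorting needed for linear complexity, and is a sketch only; note that when a leaf v is expanded, E(V) drops by exactly the sum of c_w² over the children w of v, since e(v) = ∑_w (c_w² + e(w)).

```python
def thresholding_second(roots, children, c, e, e_mod, eps):
    """Greedy subtree growth (exact-max version of Algorithm 2.14).
    roots: root vertices; children[v]: children of v; c[v]: wavelet
    coefficient; e[v]: subtree error; e_mod[v]: modified error.
    Returns a subtree V with E(V) <= eps**2 (or the full tree)."""
    V = set(roots)
    leaves = set(roots)
    E = sum(e[v] for v in leaves)          # E(V) = sum of e over the leaves
    while E > eps ** 2:
        expandable = [v for v in leaves if children[v]]
        if not expandable:                 # nothing left to refine
            break
        rho = max(e_mod[v] for v in expandable)
        for v in [v for v in expandable if e_mod[v] == rho]:
            leaves.remove(v)
            for w in children[v]:
                V.add(w)
                leaves.add(w)
                E -= c[w] ** 2             # e(v) = sum_w (c_w^2 + e(w))
    return V
```

On a one-root toy tree with children carrying coefficients 2 and 1 (so e(root) = 5), a tolerance ε = 1 forces one expansion, while ε = 3 already satisfies E(V) ≤ ε² at the root.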

The most efficient implementation of this algorithm is obtained by keeping the values ẽ(v) stored as an ordered list. Even then, with V being the subtree at termination, the operation count of this algorithm contains a term O(]V log(]V)) due to the insertion of ẽ(v) for newly created leaves into this list. To keep the complexity of the algorithm of order ]V,


an approximate sorting based on binary binning can be used instead of the exact sorting. With this modification the following holds for the ThresholdingSecondAlgorithm.

Proposition 2.8.8 The subtree V yielded by the (modified) ThresholdingSecondAlgorithm satisfies E(V) ≤ ε². There exist absolute constants t_1, T_2 > 0, necessarily with t_1 ≤ 1 ≤ T_2, such that if Ṽ is any subtree with E(Ṽ) ≤ t_1 ε², then ]V ≤ T_2 ]Ṽ. The number of evaluations of ẽ and the number of additional arithmetic operations required by this algorithm are ≲ ]V + max{0, log(ε^{-2}‖w‖_Ψ² ]V_T)}.

Now we are ready to present a coarsening (derefinement) algorithm.

Algorithm 2.15 Derefine

DEREFINE[T, w_T, ε] → [T̂, w_T̂]
/* T = T_n is an admissible triangulation, w_T ∈ W_T is given by its values (w_T(v))_{v∈V_T}, and ε > 0. The output T̂ is an admissible triangulation, and w_T̂ ∈ W_T̂ is given by its values (w_T̂(v))_{v∈V_T̂}. */

(c_v)_{v∈V_T} := NodaltoWavelet[(w_T(v))_{v∈V_T}]
(e(v))_{v∈V_T} := ComputeErrorFunctional[T, (c_v)_{v∈V_T}]
V := ThresholdingSecondAlgorithm[(ẽ(v))_{v∈V_T}, ε]
T̃ := ConstructTriangulation[V]
T̂ := MakeAdmissible[T̃]
(w_T̂(v))_{v∈V_T̂} := WavelettoNodal[(c_v)_{v∈V}]

Theorem 2.8.9 i) [T̂, w_T̂] := DEREFINE[T, w_T, ε] satisfies ‖w_T − w_T̂‖_Ψ ≤ ε. There exists an absolute constant D > 0 such that for any triangulation T̃ for which there exists a w_T̃ ∈ W_T̃ with ‖w_T − w_T̃‖_Ψ ≤ t_1^{1/2} ε, we have ]T̂ ≤ D ]T̃.

ii) The call of this algorithm requires ≲ ]T + max{0, log(ε^{-1}‖w_T‖_Ψ)} arithmetic operations.

Proof. i) The first statement follows by construction. With w_T = ∑_{v∈V_*} c_v ψ_v written in the wavelet basis, let T̃ be a triangulation and w_T̃ ∈ W_T̃ be such that ‖w_T − w_T̃‖_Ψ ≤ t_1^{1/2} ε, and let T̃^a := MakeAdmissible[T̃], so that V_T̃ ⊂ V_{T̃^a}. Let Ṽ be the enlargement of V_{T̃^a} obtained by adding all siblings of all v ∈ V_{T̃^a} to this set. Then Ṽ is a subtree with ]Ṽ ≲ ]V_{T̃^a} ≲ ]T̃. Since w_T̃ ∈ span{ψ_v : v ∈ Ṽ},

E(Ṽ) = ∑_{v∈V_*\Ṽ} |c_v|² ≤ ‖w_T − w_T̃‖_Ψ² ≤ t_1 ε²,    (2.8.8)

and applying Proposition 2.8.8, we get that the V constructed by the ThresholdingSecondAlgorithm satisfies ]V ≤ T_2 ]Ṽ. Finally, we use that for T′ := ConstructTriangulation[V] and T̂ := MakeAdmissible[T′] it holds that ]T̂ ≲ ]T′ ≲ ]V.

ii) Let us now analyze the computational complexity of the algorithm. The transformation to the wavelet basis and the computation of the error functionals cost ≲ ]T arithmetic operations. Thanks to ]V_T ≲ ]T, Proposition 2.8.8 shows that the ThresholdingSecondAlgorithm requires ≲ ]V + max{0, log(ε^{-2}‖w_T‖_Ψ² ]V_T)} ≲ ]T + max{0, log(ε^{-2}‖w_T‖_Ψ² ]T)} arithmetic operations. Finally, the number of operations required to construct the triangulation from the vertex tree and to make this triangulation admissible is ≲ ]V ≲ ]T. ♦

The next corollary shows that, by allowing the error of approximation to increase by some constant factor, DEREFINE outputs a triangulation which, modulo some constant factor, has the smallest cardinality one can generally expect for an approximation with this accuracy.

Corollary 2.8.10 Let γ > t_1^{-1/2}. Then for any ε > 0, u ∈ H_0^1(Ω), a triangulation T, and w_T ∈ W_T with ‖u − w_T‖_Ψ ≤ ε, [T̂, w_T̂] := DEREFINE[T, w_T, γε] satisfies ‖u − w_T̂‖_Ψ ≤ (1 + γ)ε, and ]T̂ ≤ D ]T̃ for any triangulation T̃ with inf_{w_T̃∈W_T̃} ‖u − w_T̃‖_Ψ ≤ (t_1^{1/2} γ − 1)ε.

Proof. The first statement follows from Theorem 2.8.9. Since

inf_{w_T̃∈W_T̃} ‖w_T − w_T̃‖_Ψ ≤ ‖u − w_T‖_Ψ + inf_{w_T̃∈W_T̃} ‖w_T̃ − u‖_Ψ ≤ ε + (t_1^{1/2} γ − 1)ε = t_1^{1/2} γ ε,    (2.8.9)

using again the same theorem, the proof is completed. ♦

2.9 Optimal Adaptive Finite Element Algorithm

Finally, we can present the optimal algorithm AFEMOPT, which is obtained by adding a derefinement routine to AFEM1.


Algorithm 2.16 Optimal Adaptive FE Algorithm

AFEMOPT[T, f, w_T, ε_0, ε] → [T, w_T]
/* Input parameters of the algorithm are: f ∈ H^{-1}(Ω), an admissible triangulation T, w_T ∈ W_T, ε_0 such that ε_0 ≥ ‖u − w_T‖_E, and ε > 0. The parameters δ ∈ (0, 1/3), γ and M ∈ ℕ are chosen such that δ corresponds to a μ < 1 as in Corollary 2.6.3, γ > t_1^{-1/2} with t_1 as in Proposition 2.8.8, and

(1 + γ) κ_Ψ^{1/2} μ^M / (1 − 3δ) < 1.
*/

N := min{ N ∈ ℕ : ε_0 (μ^M/(1 − 3δ))^N ((1 + γ) κ_Ψ^{1/2})^{N−1} ≤ ε }
for i = 1, . . . , N do
    if i = 1 then ε_1 := ε_0/(1 − 3δ)
    else
        [T, w_T] := DEREFINE[T, w_T, γ λ_Ψ^{-1/2} μ^M ε_{i−1}]
        ε_i := ((1 + γ) κ_Ψ^{1/2} μ^M / (1 − 3δ)) ε_{i−1}
    endif
    [T, f_T] := RHS[T, f, δε_i]
    w_T := GALSOLVE[T, f_T, w_T, δε_i]
    for k = 1, . . . , M do
        T := REFINE[T, f_T, w_T, θ]
        [T, f_T] := RHS[T, f, δμ^k ε_i]
        w_T := GALSOLVE[T, f_T, w_T, δμ^k ε_i]
    endfor
endfor
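The tolerance bookkeeping of AFEMOPT (the number of outer iterations N and the schedule ε_1, …, ε_N) can be sketched numerically as follows. The concrete parameter values used in the round-trip check are illustrative choices satisfying the contraction condition, not values prescribed by the text.

```python
import math

def afemopt_tolerances(eps0, eps, delta, gamma, kappa, mu, M):
    """Compute N and the schedule (eps_1, ..., eps_N) used by AFEMOPT.
    Requires the contraction factor q = (1+gamma)*kappa^(1/2)*mu^M/(1-3*delta)
    to be < 1, which also guarantees that the search for N terminates."""
    q = (1 + gamma) * math.sqrt(kappa) * mu ** M / (1 - 3 * delta)
    assert q < 1, "parameters must satisfy the contraction condition"
    # N := min{ N : eps0 * (mu^M/(1-3delta))^N * ((1+gamma)*sqrt(kappa))^(N-1) <= eps }
    a = mu ** M / (1 - 3 * delta)
    b = (1 + gamma) * math.sqrt(kappa)
    N = 1
    while eps0 * a ** N * b ** (N - 1) > eps:
        N += 1                      # consecutive terms shrink by the factor q < 1
    tol = [eps0 / (1 - 3 * delta)]  # eps_1
    for _ in range(2, N + 1):
        tol.append(q * tol[-1])     # eps_{i+1} = q * eps_i
    return N, tol
```

By construction, the final energy error bound μ^M ε_N is at most the target ε, which is exactly the definition of N used in the algorithm.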

Theorem 2.9.1 i) Let T be an admissible triangulation, ε > 0, and w_T ∈ W_T such that ‖u − w_T‖_E ≤ ε_0. Then [T, w_T] := AFEMOPT[T, f, w_T, ε_0, ε] satisfies ‖u − w_T‖_E ≤ ε.

ii) Assuming ε_0 ≲ ‖u‖_E, if for some s > 0, u ∈ A^s and (f, RHS) is s-optimal, then both ]T and the number of flops required by this call are ≲ max{1, ε^{-1/s}}(c_f^{1/s} + ‖u‖_{A^s}^{1/s}).

Proof. i) By induction on i we are going to prove that after the if-then-else-endif statement it holds that

‖u − w_T‖_E ≤ (1 − 3δ)ε_i.    (2.9.1)

For i = 1, (2.9.1) is valid by the definition of ε_1 and since ‖u − w_T‖_E ≤ ε_0. Let us assume that (2.9.1) holds for some i < N. Since the call [T, f_T] := RHS[T, f, δε_i], following the if-then-else-endif statement, leads to

‖f − f_T‖_{E′} ≤ δε_i,    (2.9.2)

as in (2.6.16) after this call we have

‖u − A_T^{-1} f_T‖_E ≤ (1 − δ)ε_i.    (2.9.3)


Clearly, after w_T := GALSOLVE[T, f_T, w_T, δε_i] we have

‖A_T^{-1} f_T − w_T‖_E ≤ δε_i.    (2.9.4)

Together, (2.9.3) and (2.9.4) show that

‖u − w_T‖_E ≤ ε_i    (2.9.5)

at the moment of entering the inner loop. By induction on k, for the inner loop we will prove that just before the call of REFINE we have

‖u − w_T‖_E ≤ μ^{k−1} ε_i,    (2.9.6)

proving that the inner loop terminates with

‖u − w_T‖_E ≤ μ^M ε_i.    (2.9.7)

For k = 1, (2.9.6) is satisfied thanks to (2.9.5). Assuming that (2.9.6) is valid for some k, and using that at that moment

‖f − f_T‖_{E′} ≤ δμ^{k−1} ε_i,  ‖A_T^{-1} f_T − w_T‖_E ≤ δμ^{k−1} ε_i,    (2.9.8)

by the subsequent calls T := REFINE[T, f_T, w_T, θ], [T, f_T] := RHS[T, f, δμ^k ε_i], and w_T := GALSOLVE[T, f_T, w_T, δμ^k ε_i], from Corollary 2.6.3 we have

‖u − w_T‖_E ≤ μ^k ε_i,    (2.9.9)

proving the induction hypothesis. Knowing (2.9.7), before the call of DEREFINE in the next outer iteration it holds that

‖u − w_T‖_Ψ ≤ λ_Ψ^{-1/2} ‖u − w_T‖_E ≤ λ_Ψ^{-1/2} μ^M ε_i.    (2.9.10)

Then the call of DEREFINE leads to

‖u − w_T‖_E ≤ Λ_Ψ^{1/2} ‖u − w_T‖_Ψ ≤ Λ_Ψ^{1/2} (1 + γ) λ_Ψ^{-1/2} μ^M ε_i = (1 − 3δ) ε_{i+1},    (2.9.11)

by the definition of ε_{i+1}. With this we conclude the proof of (2.9.1). As a special case of (2.9.7), at termination of the last inner loop we have

‖u − w_T‖_E ≤ μ^M ε_N,    (2.9.12)

and the proof of the first part of the theorem is completed by the definition of N.

ii) First, we will show that at the end of the i-th outer cycle, both ]T and the cost of this cycle are ≲ ε_i^{-1/s}(c_f^{1/s} + ‖u‖_{A^s}^{1/s}). Using that ∑_{i=1}^N ε_i^{-1/s} ≲ ε_N^{-1/s} ≲ ε^{-1/s}, this will suffice for the proof.

For i = 1, at the beginning of the outer cycle we have ]T ≲ ε_i^{-1/s} ‖u‖_{A^s}^{1/s}, because ]T = ]T_0 ≲ 1 and by our assumption ε_0 ≲ ‖u‖_E. Next, for i > 1, before the call


of DEREFINE[T, w_T, γ λ_Ψ^{-1/2} μ^M ε_{i−1}] it holds that ‖u − w_T‖_Ψ ≤ λ_Ψ^{-1/2} ε_{i−1} μ^M. Now Corollary 2.8.10 states that after this call, for any triangulation T̃ with inf_{v_T̃∈W_T̃} ‖u − v_T̃‖_Ψ ≤ (t_1^{1/2} γ − 1) λ_Ψ^{-1/2} ε_{i−1} μ^M we have ]T ≤ D ]T̃. Since u ∈ A^s,

]T ≤ D( ]T_0 + ((t_1^{1/2} γ − 1) ε_{i−1} μ^M)^{-1/s} ‖u‖_{A^s}^{1/s} ) ≲ ε_i^{-1/s} ‖u‖_{A^s}^{1/s}.    (2.9.13)

Further, recalling the properties of RHS and REFINE and using that (f, RHS) is s-optimal, we conclude that at the end of the outer cycle we have

]T ≲ ε_i^{-1/s} (c_f^{1/s} + ‖u‖_{A^s}^{1/s}),    (2.9.14)

which completes the proof of the statement about the cardinality of T.

Let us now analyze the costs of the algorithm. Theorem 2.8.9 states that the call of DEREFINE requires ≲ ]T + max{0, log(ε_i^{-1}‖w_T‖_Ψ)} operations. Next, we find log(ε_i^{-1}‖w_T‖_Ψ) ≲ ε_i^{-1/s} ‖w_T‖_Ψ^{1/s} and ‖w_T‖_Ψ ≲ ‖w_T‖_E ≤ ‖u‖_E + ε_0 ≲ ‖u‖_E ≤ ‖u‖_{A^s}. Using (2.9.14), we find that the cost of this call is

≲ ε_i^{-1/s} (c_f^{1/s} + ‖u‖_{A^s}^{1/s}).    (2.9.15)

Thanks to the properties of RHS[T, f, δε_i] and REFINE, the costs of the calls of these routines are also ≲ ε_i^{-1/s} (c_f^{1/s} + ‖u‖_{A^s}^{1/s}).

As we have seen, before the call of GALSOLVE in the outer loop it holds that

‖u − w_T‖_E ≤ (1 − 3δ)ε_i,  ‖u − A_T^{-1} f_T‖_E ≤ (1 − δ)ε_i,    (2.9.16)

and therefore ‖A_T^{-1} f_T − w_T‖_E ≤ 2(1 − 2δ)ε_i. Observing that 2(1 − 2δ)ε_i/(δε_i) = 2(1 − 2δ)/δ is a constant, we conclude that the cost of this call is ≲ ]T.

Before the call of GALSOLVE[T, f_T, w_T, δμ^k ε_i] in the inner loop it holds that

‖u − w_T‖_E ≤ μ^{k−1} ε_i,  ‖f − f_T‖_{E′} ≤ δμ^k ε_i.    (2.9.17)

Using (2.6.16),

‖u − A_T^{-1} f_T‖_E ≤ 2‖f − f_T‖_{E′} + ‖u − w_T‖_E ≤ (2δμ^k + μ^{k−1})ε_i,    (2.9.18)

and we find that ‖A_T^{-1} f_T − w_T‖_E ≤ 2(δμ^k + μ^{k−1})ε_i. Noting that 2(δμ^k + μ^{k−1})ε_i/(δμ^k ε_i) = 2 + 2/(δμ) is a constant, the cost of this call is also ≲ ]T. ♦

In a similar way we can turn AFEM2 into an optimal adaptive finite element routine by adding the derefinement routine to it. In that case, the third parameter of DEREFINE, i.e., the tolerance, will be a constant multiple of the current a posteriori error estimator. For details, one may consult [36].

2.10 Numerical Experiments

In this section we present numerical examples to verify experimentally the performance of the second practical adaptive algorithm (AFEM2).


Figure 2.15: Solution of the Poisson problem on an L-shaped domain

2.10.1 L-shaped domain

Let us consider the Poisson problem on the L-shaped domain Ω = [0, 1]² \ [0.5, 1]² with constant right-hand side. The solution of this problem is depicted in Fig. 2.15. Recalling our discussion in Section 2.3.2, we infer that in this case the solution u satisfies

u ∈ H^{2/3+1−ε}(Ω)    (2.10.1)

for any ε > 0. As a consequence, finite element approximations with respect to triangulations created by uniform refinements can be expected to have errors in the energy norm of order N^{−1/3+ε/2}, where N denotes the number of unknowns. So the convergence rate is −1/3 + ε/2 instead of the −1/2 that we may expect for optimal triangulations.
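Empirical convergence rates like these are read off as slopes in a log–log plot of error versus N; a least-squares fit of log(error) against log(N) gives the rate. A small sketch on synthetic data:

```python
import math

def loglog_slope(Ns, errs):
    """Least-squares slope of log(err) against log(N): the empirical
    convergence rate s in err ~ N^s (negative for decaying errors)."""
    xs = [math.log(n) for n in Ns]
    ys = [math.log(e) for e in errs]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    num = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    den = sum((x - xbar) ** 2 for x in xs)
    return num / den
```

For exact data err = C·N^{−1/2}, the fitted slope is −1/2; applied to the estimator values behind Fig. 2.16, it would quantify how close each θ gets to the optimal rate.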

We have created a C++ implementation of the adaptive solver AFEM2. For the theory (Theorem 2.4.3(i) and (ii)) it was required to create an interior node in each element that was marked for refinement. This can be achieved by two levels of red refinement (i.e., subdivision into 16 subtriangles). We compared this approach with one in which only one red refinement was applied, which does not create an interior node. Since the convergence rates were the same, the results presented here were obtained by the simpler approach of applying only one red refinement.

In Fig. 2.17 and Fig. 2.18 we give two examples of adaptive triangulations produced by this solver. We have studied AFEM2 for different values of θ. As one can see from Fig. 2.16, for θ small enough (θ² = 0.3 and 0.5), the approximations converge with the optimal rate −1/2, whereas for larger θ the rates become increasingly closer to the rate −1/3 that one obtains with uniform refinements. The results indicate that for sufficiently small θ, the derefinement routine is not required for obtaining optimal rates.

2.10.2 Domain with a crack

In our second example we consider the Poisson problem on a crack domain, which has maximum interior angle 2π. In this case the solution u has even lower Sobolev regularity


Figure 2.16: Error estimator vs. ]T for different values of θ on the L-shaped domain (log–log plot; reference slopes −1/2 and −1/3; curves for θ² = 0.3, 0.5, 0.8, 0.9, 0.95, 1)

Figure 2.17: Mesh example (θ² = 0.3)


Figure 2.18: Mesh example (θ² = 0.3)

than in the first example, namely,

u ∈ H^{1/2+1−ε}(Ω)    (2.10.2)

for any ε > 0. This means that the non-adaptive method will converge with a rate close to −1/4. The solution is illustrated in Fig. 2.19. As in the L-shaped domain case, the results obtained with AFEM2 show (Fig. 2.20) that for θ small enough (here θ² = 0.3) the optimal rate is obtained, whereas for larger θ the rates get increasingly closer to the rate −1/4 that corresponds to uniform refinements. Mesh examples obtained with AFEM2 are given in Fig. 2.21 and Fig. 2.22.


Figure 2.19: Solution of the Poisson problem on a domain with a crack

Figure 2.20: Error estimator vs. ]T for different values of θ on the crack domain (log–log plot; reference slopes −1/2 and −1/4; curves for θ² = 0.3, 0.5, 0.8, 0.9, 0.95, 1)


Figure 2.21: Mesh example with 2735 triangles (θ² = 0.3)

Figure 2.22: Mesh example with 10571 triangles (θ² = 0.3)
