

Contents

Introduction

1 Mathematical Background
1.1 Mathematical Modelling and Numerical Simulation
1.1.1 Introduction
1.1.2 Mathematical Modelling
1.1.3 Some classical models
1.1.4 Numerical calculation
1.1.5 The idea of a well-posed problem
1.1.6 Classification of PDEs
1.2 Variational formulation of elliptic problems
1.2.1 Introduction
1.2.2 Classical formulation
1.2.3 Green's formulas
1.2.4 Variational formulation
1.2.5 Lax-Milgram theory
1.2.6 System of linearized elasticity
1.3 Finite Element Method
1.3.1 Variational approximation
1.3.1.1 Introduction
1.3.1.2 General internal approximation
1.3.1.3 Finite element method (general principles)
1.3.2 Finite elements in N=1 dimension
1.3.2.1 P1 finite elements
1.3.2.2 Convergence and error estimation

2 Optimization
2.1 Introduction
2.2 Definitions and notation
2.3 Categories of optimization problems
2.3.1 Continuous versus discrete optimization
2.3.2 Constrained and unconstrained optimization
2.3.3 Nature of objective function and constraints
2.3.4 Global and local optimization
2.3.5 Stochastic and deterministic optimization
2.4 Existence of a minimum
2.4.1 Optimization in finite dimensions
2.4.2 Optimization in infinite dimensions
2.4.3 Convex analysis
2.5 Optimality conditions
2.5.1 Introduction
2.5.2 Differentiability
2.5.3 Euler inequalities and convex constraints
2.5.4 Lagrange multipliers
2.6 Numerical algorithms
2.6.1 Introduction
2.6.2 Overview of algorithms
2.6.3 Gradient algorithms (case without constraints)
2.6.4 Gradient algorithms (case with constraints)
2.6.5 Penalization of constraints
2.6.6 Newton's method

3 Shape Optimization
3.1 Introduction
3.2 Examples
3.2.1 Optimization of the thickness of a membrane
3.2.2 Some remarks on the criteria of optimization
3.2.3 Optimization of the shape of a membrane
3.2.4 Shape optimization in elasticity
3.3 Optimization of distributed systems
3.4 Parametrical Optimization
3.4.1 Modelling
3.4.2 Gradient method
3.4.3 Numerical algorithm
3.5 Geometrical Optimization
3.5.1 Introduction
3.5.2 Diffeomorphisms
3.5.3 Differentiation with respect to a domain
3.5.4 Gradient and optimality condition - Method of the Lagrangian
3.5.5 Numerical algorithms

4 Applications using FreeFem++
4.1 Introduction to FreeFem++
4.2 Example of a Cantilever
4.2.1 Description of the problem
4.2.2 Parametrical Optimization
4.2.3 Geometrical Optimization
4.3 Example of an L-shaped structure

5 Conclusions and Perspectives


    Introduction

Optimization is more than a pure mathematical concept. Nature optimizes in most of its processes: a structure or body deforms or displaces to the position that minimizes its total potential energy, which is the stable equilibrium configuration.

People have tried to take advantage of optimization procedures for many decades. Several optimization techniques were developed, but their computational cost made them impossible to apply to engineering problems. Engineers therefore relied mostly on experience, intuition, or trial-and-error techniques to design structures with improved characteristics, improved always with respect to previous designs and not to some optimal solution, which remained unknown. As might be expected, this led to serious failures, sometimes with a great impact on human lives.

The tremendous evolution of computers during the last decades has enabled engineers to abandon the traditional design methods, which were imprecise and expensive in both time and money, and to develop and adopt optimization techniques for a variety of difficult problems. Combining mechanics and applied mathematics, Structural Optimization gives Civil Engineers the capability to design structures with optimal characteristics or with some desired mechanical behaviour.

The variety of criteria that we can set to be minimized ( or maximized ) is huge. We can use optimization techniques to minimize the cost of a structure, by varying the boundary of the structure and consequently reducing its volume ( [2], [14], [15], [17], [18], [19] ). We can also achieve maximum rigidity of a structure with a prescribed volume ( [2] ), which is a very important topic from the point of view of a Civil Engineer. Moreover, in many applications of interest to a Civil Engineer, the problem of minimum stress design appears, which can be adequately treated with optimization techniques ( [4], [7] ). Finally, a great number of industrial applications include optimization techniques, such as the design of compliant mechanisms ( [16] ), where the use of other methods can be highly ineffective.

Of course, in order for all these problems to have practical significance, engineers usually have to set some constraints, that is, some additional requirements that the structure should fulfill. These constraints can introduce great difficulties in the solution of the initial problem and make it far more complicated. Some of the most common constraints are perimeter constraints ( [11] ), volume constraints ( [2] ) and stress constraints ( [12], [13] ), where we exclude the possibility that the structure exceeds some stress level, usually linked with a yield criterion ( for example a von Mises bound for the stress ).

In order to treat such problems, various techniques have been developed. Some of the most popular in Structural Optimization are Parametrical Optimization ( [2] ), Geometrical Optimization ( [2], [14], [15], [17], [18], [19] ) and Topological Optimization ( [2], [3], [8], [9] ). All of these methods present benefits and drawbacks, which make each of them more or less effective for certain kinds of problems. As a result, many variants of these basic methods have been proposed, sometimes presenting significant improvements. For example, the huge computational cost of Geometrical Optimization ( because of remeshing ) and its tendency to fall into local minima far away from global ones have led to the development of a structural optimization technique using sensitivity analysis and a level set method. The knowledge of such techniques constitutes a powerful tool for Civil Engineers who need to keep up with the contemporary demands of our society and design with safety, precision and economy.

The objectives of this Thesis are, first, to familiarize the reader with the idea of Optimization and to present a new way of designing a variety of structures. Moreover, we want to question ourselves about the practical implementation of such techniques in applications and problems that cover the whole scientific spectrum of Civil Engineers.

The plan of this Thesis is the following. In the first Chapter we present the mathematical background needed for understanding the examples that we will present. We explain the notions of, and the difference between, the Mathematical Modelling of a problem and its Numerical Simulation ( [1] ). After this, we analyze the Variational Formulation of elliptic problems in general, and subsequently we focus on the system of linearized elasticity, which is the one of interest in this work. We mention that this is done for reasons of simplicity, since we are already familiar with linear elasticity, and that there is no particular difficulty in considering another system. At the end of this Chapter we present some details of the Finite Element Method ( F.E.M. ), which will be the method used to approximate the solutions of our systems. We have to mention that the knowledge gained in the first few years of university is the only prerequisite for this Thesis.

    Chapter 2 is dedicated to Optimization. After a general introduction about Optimization

    methods and especially methods for Shape Optimization, we present the theory of optimality

    conditions and of constrained optimization, with equality or inequality constraints. In the last

part of this Chapter, we describe one very popular numerical method used in Optimization, the gradient method, and give a brief description of other methods.

In Chapter 3, we give a full explanation of the two methods used in our examples, Parametrical and Geometrical Optimization. We present the Lagrangian method used to derive the equations needed to solve our problem numerically. We also criticize these two methods, pointing out their drawbacks and the possible problems arising from their computer implementation.

    In Chapter 4, we present the results taken from implementing these two methods in the

    free finite element software FreeFem++, developed by F. Hecht and O. Pironneau. FreeFem++

    is available on the website: http://www.freefem.org . At the beginning, we give a brief

    description of the program, with the help of an example. After this, we give a short explanation

of the Geometrical Optimization code introduced in FreeFem++ ( [10] ). Finally, we present two examples. The first one refers to a cantilever, whose shape we try to optimize using several objective functions ( functions to be minimized ) such as the compliance ( the work done by the loads ), a desired displacement of a boundary and a desired stress distribution in the structure. The second example is an L-shaped structure, where we try to minimize the stress concentration, considering as objective function various norms of the stress tensor.

In the last Chapter, Chapter 5, we end with some conclusions concerning our results and our methods, focusing on the difficulties presented by the implementation of Optimization algorithms in FreeFem++. We propose some ways to overcome such difficulties and we comment on their influence on our final results. Furthermore, we propose some other examples to be studied, which are of great significance from the point of view of a Civil Engineer.

As we mentioned before, we only study the system of linearized elasticity in our examples, but the practical implementations of Optimization techniques in Civil Engineering problems are numerous. We just have to replace our system with another one, coming from Fluid Mechanics, Transportation or Management problems, and the global character of Optimization in Engineering problems is revealed.


1. Mathematical Background

    1.1 Mathematical Modelling and Numerical Simulation ( [1] )

    1.1.1 Introduction

    In this first chapter, we try to describe two closely linked, although distinct, aspects of

    applied mathematics: mathematical modeling and numerical simulation. Using the term

    mathematical model, we refer to a representation or an abstract interpretation of physical

    reality that is amenable to analysis and calculation. In our work, these models will be partial

    differential equations (PDEs), that is, differential equations in several variables. On the other

    hand, numerical simulation allows us to calculate the solutions of these models on a

    computer, and therefore to simulate physical reality.

We shall see that the numerical calculation of the solutions of several physical models sometimes holds unpleasant surprises, which can only be explained by a sound understanding of their mathematical properties. Therefore, we shall also pay attention to a third fundamental aspect of applied mathematics, that is, the mathematical analysis of models.

    In our work, we confine ourselves to linear problems for simplicity. Likewise, we only

    consider deterministic problems, that is, with no random or stochastic components.

Finally, in order for this work to be understandable and applicable from the point of view of a Civil Engineer, we shall often be a little imprecise in our mathematical arguments.

    Without underestimating the need for a rigorous mathematical justification of our models, it is

    the physical interpretation that dominates and allows engineers to make this knowledge

applicable to their specific problems. However, we shall often face difficulties and results that cannot be explained by physical intuition alone, but require a combined knowledge of

    mathematics, computer science and engineering.

    1.1.2 Mathematical Modelling

    Although mathematical modeling is not one of the main purposes of our work, we have to

    explain at the beginning the symbols that we will use in order to define our problem. In this

chapter, we present some more general models, but in the sequel we will focus on systems

    of linearized elasticity.

Let us consider a domain $\Omega$ in $N$ space dimensions ( $\Omega \subset \mathbb{R}^N$, with in general $N=1,2,$ or $3$ ), which we assume is occupied by a homogeneous, isotropic, elastic material. We denote the space variable by $x$, a point of $\Omega$, and the time variable by $t$. In $N=1$ dimension, the mass forces applied to the material are represented by a given function $f(x,t)$ and the external forces by $g(x,t)$, while the deformation is an unknown function $u(x,t)$. In higher dimensions, the applied forces are represented by vectors in $\mathbb{R}^N$, while the displacement is described by a vector field $u:\Omega\to\mathbb{R}^N$.

In order to calculate the unknown function $u$, we have to consider some fundamental laws of mechanics. In the case of a body in equilibrium, we use the balance of forces, but we can also consider non-equilibrium situations and use Newton's laws of motion. These equations are usually applied to an elementary volume $V$ contained in $\Omega$ and also involve the boundary of $V$, denoted $\partial V$, with surface element $ds$, and the outward normal to $V$, denoted $n$.

    1

  • 8/11/2019 Michail Id Is

    9/198

    Moreover, we need to introduce a constitutive law of the material, in order to link the

    stress tensor that appears in the equilibrium equation with the strain tensor and so with the

    displacements of the body.

The equation coming from our previous considerations is valid in the entire domain $\Omega$, and we must add another relation, called a boundary condition, which describes what happens at the boundary of the domain, and another relation which describes the initial state of our unknown function $u$. By convention, we choose the instant $t=0$ to be the initial time, and we impose an initial condition

\[
u(t=0,x) = u_0(x),
\]

where $u_0$ is the function giving the initial distribution of the displacement in the domain $\Omega$. The type of boundary condition depends on the physical context. If the domain is fixed across its boundary $\partial\Omega$, the displacement satisfies the Dirichlet boundary condition

\[
u(t,x) = 0 \quad \text{for all } x\in\partial\Omega \text{ and } t>0.
\]

If the domain is assumed to be free across its boundary, then the slope of the deformed shape across the boundary is zero and the displacement satisfies the Neumann boundary condition

\[
\frac{\partial u}{\partial n}(t,x) = n(x)\cdot\nabla u(t,x) = 0 \quad \text{for all } x\in\partial\Omega \text{ and } t>0,
\]

where $n$ is the unit outward normal to $\partial\Omega$.

1.1.3 Some classical models

In this section we shall quickly describe some classical models, which mostly involve applications of Civil Engineering. Our goal is not to study in depth the specific details of these

    examples, but to present the principal classes of PDEs which can appear in our problems. For

    simplicity, we shall nondimensionalize all the variables, which will allow us to set the

    constants in the models equal to 1.

    The wave equation:

    The wave equation models propagation of waves or vibration. For example, in two

    space dimensions it is a model to study the vibration of a stretched elastic membrane, like the

    skin of a drum (see Figure 1.1).

    Figure 1.1 Stretched elastic membrane. ( [21] )


    The fact that a mathematical model is a Cauchy problem or a boundary value

problem does not automatically imply that it is a good model. The expression "good model" is not used here in the sense of the physical relevance of the model and of its results, but in the sense of its mathematical coherence. As we shall see, this mathematical coherence is a necessary condition before we can consider numerical simulations and physical interpretations. The mathematician Jacques Hadamard gave a definition of what a good model is, while speaking about well-posed problems ( an ill-posed problem being the opposite of a well-posed problem ). We denote by f the data ( the right-hand side, the initial conditions, the domain, etc. ), by u the solution sought, and by A the operator which acts on u. We are using abstract notation: A denotes simultaneously the PDE and the type of initial or boundary conditions. The problem is therefore to find u, the solution of:

    A(u)=f . ( 1.4 )

    Definition 1.3: We say that problem ( 1.4 ) is well-posed if for all data f it has a

    unique solution u, and if this solution u depends continuously on the data f.

Let us examine Hadamard's definition in detail: it contains, in fact, three conditions for the problem to be well-posed. First, a solution must at least exist: this is the least we can ask of a model supposed to represent reality! Second, the solution must be unique: this is more delicate since, while it is clear that, if we want to predict tomorrow's weather, it is better to have sun or rain ( with an exclusive "or" ) but not both with equal probability, there are other problems which reasonably have several or infinitely many solutions. For example, problems involving finding the best route often have several solutions: to travel from the South Pole to the North Pole, any meridian will do; likewise, to travel by plane from Paris to New York, your travel

    meridian will do, likewise, to travel by plane from Paris to New York, your travel

    agency sometimes makes you go via Brussels or London, rather than directly, because

    it can be more economic. Hadamard excludes this type of problem from his definition

    since the multiplicity of solutions means that the model is indeterminate: to make the

    final choice between all of those that are best, we use another criterion ( which has

    been forgotten until now ), for example, the most practical or most comfortable

    journey. This is a situation of current interest in applied mathematics: when a model

    has many solutions, we must add a selection criterion to obtain the good solution.

    Third, and this is the least obvious condition a priori, the solution must depend

    continuously on the data. At first sight, this seems a mathematical fantasy, but it is

    crucial from the perspective of numerical approximation. Indeed, numerically

calculating an approximate solution of ( 1.4 ) amounts to perturbing the data ( when continuous becomes discrete ) and solving ( 1.4 ) for the perturbed data. If small

    perturbations of the data lead to large perturbations of the solution, there is no chance

    that the numerical approximation will be close to reality ( or at least to the exact

    solution ). Consequently, this continuous dependence of the solution on the data is an

    absolutely necessary condition for accurate numerical simulations. We note that this

    condition is also very important from the physical point of view since measuring

    apparatus will not give us absolute precision: if we are unable to distinguish between

    two close sets of data which can lead to very different phenomena, the model

    represented by ( 1.4 ) has no predictive value, and therefore is of almost no practical

    interest.

We finish by acknowledging that, at this level of generality, Definition 1.3 is a little fuzzy, and that to give it a precise mathematical sense we should say in


    1.2 Variational formulation of elliptic problems ( [1] )

    1.2.1 Introduction

    In this chapter we are interested in the mathematical analysis of elliptic partial

differential equations ( PDEs ). In general, these elliptic equations correspond to stationary physical

    value problems are well-posed for these elliptic PDEs, that is, they have a solution which is

    unique and depends continuously on the data. The approach that we shall follow is called the

    variational formulation. This approach has a very natural physical or mechanical

    interpretation, and it will be crucial for understanding the finite element method that we

    explain later.

    In this chapter, the prototype example of elliptic PDEs will be the Laplacian for which

    we shall study the following boundary value problem,

\[
\begin{cases} -\Delta u = f & \text{in } \Omega, \\ u = 0 & \text{on } \partial\Omega, \end{cases} \qquad (1.6)
\]

where we impose the Dirichlet boundary condition. In ( 1.6 ), $\Omega$ is an open set of the space $\mathbb{R}^N$, $\partial\Omega$ is its boundary, $f$ is the right-hand side data for the problem, and $u$ is the unknown.

    The plan of this chapter is the following. First, we recall some integration by parts

formulas, called Green's formulas, then we define the variational formulation. Finally, we refer to the Lax-Milgram theorem, which will be the essential tool allowing us to show existence

    and uniqueness of the solutions of the variational formulation.

    1.2.2 Classical formulation

    The classical formulation of ( 1.6 ), which might appear natural at first sight, is to

    assume sufficient regularity for the solution u so that equations ( 1.6 ) have a meaning at every

point of $\Omega$ or of $\partial\Omega$. First we recall some notation related to spaces of regular functions.

Definition 1.6: Let $\Omega$ be an open set of $\mathbb{R}^N$, and $\overline{\Omega}$ its closure. We denote by $C(\Omega)$ ( respectively, $C(\overline{\Omega})$ ) the space of continuous functions in $\Omega$ ( respectively, in $\overline{\Omega}$ ). Let $k\ge 0$ be an integer. We denote by $C^k(\Omega)$ ( respectively, $C^k(\overline{\Omega})$ ) the space of functions $k$ times continuously differentiable in $\Omega$ ( respectively, in $\overline{\Omega}$ ).

A classical solution ( we also say strong solution ) of ( 1.6 ) is a solution $u\in C^2(\Omega)\cap C(\overline{\Omega})$, which implies that the right-hand side $f$ must be in $C(\Omega)$. This classical formulation, unfortunately, has a number of problems. Without going into detail, we note that, under the single hypothesis $f\in C(\overline{\Omega})$, there is not in general a solution of class $C^2$ for ( 1.6 ) if the dimension of the space is greater than two ( $N\ge 2$ ). In fact, a solution does exist, as we shall see later, but it is not of class $C^2$ ( it is a little less regular, except if the data $f$ is more regular than $C(\overline{\Omega})$ ).

    In what follows, to study ( 1.6 ), we shall replace its classical formulation by a so-called

    variational formulation, which is much more advantageous. The principle of the variational


    approach for the solution of PDEs is to replace the equation by an equivalent so-called

    variational formulation obtained by integrating the equation multiplied by an arbitrary

    function, called a test function. As we need to carry out integration by parts when establishing

    the variational formulation, we start by giving some essential results on this subject.

1.2.3 Green's formulas

In this section $\Omega$ is an open set of the space $\mathbb{R}^N$ ( which may be bounded or unbounded ), whose boundary is denoted by $\partial\Omega$. We also assume that $\Omega$ is a regular open set of class $C^1$. It is not necessary to understand the precise definition of a regular open set in order to follow the rest of this work. It is enough to know that a regular open set is, roughly speaking, an open set whose boundary is a regular hypersurface ( a manifold of dimension $N-1$ ), and that this open set is locally situated on one side of its boundary. We then define the outward normal at the boundary as the unit vector $n=(n_i)_{1\le i\le N}$ normal at every point to the tangent plane of $\partial\Omega$ and pointing to the exterior of $\Omega$. In $\mathbb{R}^N$ we denote by $dx$ the volume measure, or Lebesgue measure of dimension $N$. On $\partial\Omega$, we denote by $ds$ the surface measure, or Lebesgue measure of dimension $N-1$ on the manifold $\partial\Omega$. The principal result of this section is the following theorem.

Theorem 1 ( Green's formula ): Let $\Omega$ be a regular open set of class $C^1$. Let $w$ be a $C^1(\overline{\Omega})$ function with bounded support in the closure $\overline{\Omega}$. Then $w$ satisfies Green's formula,

\[
\int_\Omega \frac{\partial w}{\partial x_i}(x)\,dx = \int_{\partial\Omega} w(x)\,n_i(x)\,ds, \qquad (1.7)
\]

where $n_i$ is the $i$th component of the unit outward normal to $\partial\Omega$.

Theorem 1 has many corollaries which are all immediate consequences of Green's

    formula ( 1.7 ). We present here some of them, which are useful for our examples.

Corollary 1 ( Integration by parts formula ): Let $\Omega$ be a regular open set of class $C^1$. Let $u$ and $y$ be two $C^1(\overline{\Omega})$ functions with bounded support in the closure $\overline{\Omega}$. Then they satisfy the integration by parts formula,

\[
\int_\Omega u(x)\,\frac{\partial y}{\partial x_i}(x)\,dx = -\int_\Omega y(x)\,\frac{\partial u}{\partial x_i}(x)\,dx + \int_{\partial\Omega} u(x)\,y(x)\,n_i(x)\,ds. \qquad (1.8)
\]

Proof. It is enough to take $w = u\,y$ in Theorem 1.

Corollary 2: Let $\Omega$ be a regular open set of class $C^1$. Let $u$ be a function of $C^2(\overline{\Omega})$ and $y$ a function of $C^1(\overline{\Omega})$, both with bounded support in the closure $\overline{\Omega}$. Then they satisfy the integration by parts formula,

\[
\int_\Omega \Delta u(x)\,y(x)\,dx = -\int_\Omega \nabla u(x)\cdot\nabla y(x)\,dx + \int_{\partial\Omega} \frac{\partial u}{\partial n}(x)\,y(x)\,ds, \qquad (1.9)
\]

where $\nabla u = \left(\frac{\partial u}{\partial x_i}\right)_{1\le i\le N}$ is the gradient vector of $u$, and $\frac{\partial u}{\partial n} = \nabla u\cdot n$.

Proof. We apply Corollary 1 to $y$ and $\frac{\partial u}{\partial x_i}$ and we sum over $i$.
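As a quick sanity check of Corollary 1 in one dimension, where the boundary term reduces to the endpoint values, the following Python sketch compares both sides of the integration by parts identity numerically; the particular functions and the trapezoidal quadrature are our own illustrative assumptions, not taken from the text.

```python
import numpy as np

# 1D instance of Corollary 1 on (0,1):
#   int_0^1 u(x) y'(x) dx = - int_0^1 y(x) u'(x) dx + [u(1)y(1) - u(0)y(0)]
x = np.linspace(0.0, 1.0, 20001)
u = np.sin(np.pi * x) + x**2          # illustrative smooth u
y = np.exp(-x) * np.cos(3.0 * x)      # illustrative smooth y (test function)

du = np.gradient(u, x)                # numerical derivatives
dy = np.gradient(y, x)

def integrate(g):
    """Composite trapezoidal rule on the grid x."""
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

lhs = integrate(u * dy)
rhs = -integrate(y * du) + (u[-1] * y[-1] - u[0] * y[0])
print(lhs, rhs)                       # the two values agree up to discretization error
```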

    1.2.4 Variational formulation

To simplify the presentation, we assume that the open set $\Omega$ is bounded and regular, and that the right-hand side $f$ of ( 1.6 ) is continuous on $\overline{\Omega}$. The principal result of this section is the following proposition.

Proposition 1: Let $u$ be a function of $C^2(\overline{\Omega})$. Let $X$ be the space defined by,

\[
X = \{\, \phi \in C^1(\overline{\Omega}) \text{ such that } \phi = 0 \text{ on } \partial\Omega \,\}.
\]

Then $u$ is a solution of the boundary value problem ( 1.6 ) if and only if $u$ belongs to $X$ and satisfies the equation,

\[
\int_\Omega \nabla u(x)\cdot\nabla y(x)\,dx = \int_\Omega f(x)\,y(x)\,dx \quad \text{for every } y\in X. \qquad (1.10)
\]

Equation ( 1.10 ) is called the variational formulation of the boundary value problem ( 1.6 ).

Proof. If $u$ is a solution of the boundary value problem ( 1.6 ), we multiply the equation by $y\in X$ and we use the integration by parts formula of Corollary 2,

\[
\int_\Omega f(x)\,y(x)\,dx = \int_\Omega \nabla u(x)\cdot\nabla y(x)\,dx - \int_{\partial\Omega} \frac{\partial u}{\partial n}(x)\,y(x)\,ds,
\]

where $y = 0$ on $\partial\Omega$ since $y\in X$; therefore,

\[
\int_\Omega \nabla u(x)\cdot\nabla y(x)\,dx = \int_\Omega f(x)\,y(x)\,dx,
\]

which is nothing other than the formula ( 1.10 ). Conversely, if $u\in X$ satisfies ( 1.10 ), by using the integration by parts formula in reverse we obtain,

\[
\int_\Omega \big( \Delta u(x) + f(x) \big)\,y(x)\,dx = 0 \quad \text{for every } y\in X.
\]

As $(\Delta u + f)$ is a continuous function, we conclude that $-\Delta u(x) = f(x)$ for all $x\in\Omega$. In addition, since $u\in X$, we recover the boundary condition $u=0$ on $\partial\Omega$, that is, $u$ is a solution of the boundary value problem ( 1.6 ).


    Remark 1: An immediate consequence of the variational formulation ( 1.10 ) is that it is

meaningful if the solution $u$ is only a function of $C^1(\overline{\Omega})$, as opposed to the classical formulation ( 1.6 ) which requires $u$ to belong to $C^2(\overline{\Omega})$. We therefore already suspect that it is easier to solve ( 1.10 ) than ( 1.6 ), since it is less demanding on the regularity of the solution.

    In the variational formulation ( 1.10 ), the function y is called the test function. The

    variational formulation is also sometimes called the weak form of the boundary value problem

    ( 1.6 ). In mechanics, the variational formulation is known as the principle of virtual work.

Remark 2: We can rewrite the variational formulation ( 1.10 ) in compact notation: find $u\in X$ such that,

\[
a(u,y) = L(y) \quad \text{for every } y\in X,
\]

with

\[
a(u,y) = \int_\Omega \nabla u(x)\cdot\nabla y(x)\,dx \quad \text{and} \quad L(y) = \int_\Omega f(x)\,y(x)\,dx,
\]

where $a(\cdot,\cdot)$ is a bilinear form on $X$ and $L(\cdot)$ is a linear form on $X$. It is in this abstract form that we solve ( under some hypotheses ) the variational formulation in the next section.
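As a worked instance of this compact notation ( our own illustration, consistent with ( 1.10 ) ), take the one-dimensional case $\Omega=(0,1)$, where ( 1.6 ) reduces to $-u''=f$ with $u(0)=u(1)=0$; the variational formulation then reads: find $u\in X$ such that

\[
a(u,y) = \int_0^1 u'(x)\,y'(x)\,dx = \int_0^1 f(x)\,y(x)\,dx = L(y) \quad \text{for every test function } y \text{ with } y(0)=y(1)=0.
\]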

    The principal idea of the variational approach is to show the existence and

    uniqueness of the solution of the variational formulation ( 1.10 ), which implies the same result

    for the equation ( 1.6 ) because of Proposition 1. Indeed, we shall see that there is a theory,

    both simple and powerful, for analyzing variational formulations. Nonetheless, this theory only

works if the space in which we look for the solution and in which we take the test functions ( in the preceding notation, the space $X$ ) is a Hilbert space, which is not the case for $X = \{\, y\in C^1(\overline{\Omega}) \text{ such that } y=0 \text{ on } \partial\Omega \,\}$ equipped with the natural scalar product for this problem. The main difficulty in the application of the variational approach will therefore be that we must use a space other than $X$, namely the Sobolev space $H^1_0(\Omega)$, which is indeed a Hilbert space.

    At this point, we mention once more that the objective of this work is not a rigorous

    mathematical justification of the validity of the equations used, but instead the practical interest

    that these new formulations give to our problems. However, as we shall see later, neglecting

details related to functional spaces can sometimes prove dangerous.

    1.2.5 Lax-Milgram theory

    We describe an abstract theory to obtain the existence and the uniqueness of the

    solution of a variational formulation in a Hilbert space. We denote by V a real Hilbert space

with scalar product $\langle\cdot,\cdot\rangle$ and norm $\|\cdot\|$. Following Remark 2 we consider a variational formulation of the type,

\[
\text{find } u\in V \text{ such that } a(u,y) = L(y) \text{ for every } y\in V. \qquad (1.11)
\]


The hypotheses on $a$ and $L$ are:

(1) $L(\cdot)$ is a continuous linear form on $V$, that is, $y\mapsto L(y)$ is linear from $V$ into $\mathbb{R}$ and there exists $C>0$ such that:

\[
|L(y)| \le C\,\|y\| \quad \text{for all } y\in V,
\]

(2) $a(\cdot,\cdot)$ is a bilinear form on $V$, that is, $u\mapsto a(u,y)$ is a linear form from $V$ into $\mathbb{R}$ for all $y\in V$, and $y\mapsto a(u,y)$ is a linear form from $V$ into $\mathbb{R}$ for all $u\in V$,

(3) $a(\cdot,\cdot)$ is continuous, that is, there exists $M>0$ such that:

\[
|a(u,y)| \le M\,\|u\|\,\|y\| \quad \text{for all } u,y\in V, \qquad (1.12)
\]

(4) $a(\cdot,\cdot)$ is coercive ( or elliptic ), that is, there exists $\nu>0$ such that:

\[
a(u,u) \ge \nu\,\|u\|^2 \quad \text{for all } u\in V. \qquad (1.13)
\]

Theorem 2 ( Lax-Milgram ): Let $V$ be a real Hilbert space, $L(\cdot)$ a continuous linear form on $V$, and $a(\cdot,\cdot)$ a continuous coercive bilinear form on $V$. Then the variational formulation ( 1.11 ) has a unique solution. Further, this solution depends continuously on the linear form $L$.
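As a concrete illustration ( ours, not stated in the text, but standard ), the hypotheses above can be checked for the Dirichlet problem ( 1.6 ) posed on $V=H^1_0(\Omega)$ with the forms of Remark 2, assuming $\Omega$ is bounded so that the Poincaré inequality $\|y\|_{L^2(\Omega)} \le C_\Omega \|\nabla y\|_{L^2(\Omega)}$ holds for $y\in H^1_0(\Omega)$:

\[
|L(y)| = \Big| \int_\Omega f\,y\,dx \Big| \le \|f\|_{L^2(\Omega)}\,\|y\|_{H^1(\Omega)}, \qquad
|a(u,y)| = \Big| \int_\Omega \nabla u\cdot\nabla y\,dx \Big| \le \|u\|_{H^1(\Omega)}\,\|y\|_{H^1(\Omega)},
\]

\[
a(u,u) = \int_\Omega |\nabla u|^2\,dx \ \ge\ \frac{1}{1+C_\Omega^2}\,\|u\|_{H^1(\Omega)}^2,
\]

so the Lax-Milgram theorem yields a unique weak solution of ( 1.6 ) in $H^1_0(\Omega)$.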

Remark 3: The need to replace the space $C^1(\overline{\Omega})$ with $H^1_0(\Omega)$ in order to apply the Lax-Milgram theorem to the Laplacian is non-trivial and requires the use of Sobolev spaces. Moreover, the equivalence between the classical and the variational formulation is also outside the scope of this Thesis, and can be found in [1]. The main reason that we presented the variational formulation above is its use in the Finite Element Method, which we present in the sequel.

    1.2.6 System of linearized elasticity

    We apply the variational formulation to the solution of the system of linearized

    elasticity equations. We start by describing the mechanical model. These equations model the

    deformations of a solid under the hypothesis of small deformations and small displacements

( this hypothesis allows us to obtain linear equations, whence the name linear elasticity ).

We consider the stationary elasticity equations, that is, independent of time. Let $\Omega$ be an open bounded set of $\mathbb{R}^N$. Let the force $f(x)$ be a function from $\Omega$ into $\mathbb{R}^N$. The unknown $u$ ( the displacement ) is also a function from $\Omega$ into $\mathbb{R}^N$. The mechanical modelling uses the deformation tensor, denoted by $e(u)$, which is a function with values in the set of symmetric matrices,

\[
e(u) = \tfrac{1}{2}\big( \nabla u + (\nabla u)^t \big) = \left( \tfrac{1}{2}\Big( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \Big) \right)_{1\le i,j\le N},
\]

as well as the stress tensor $\sigma$ ( another function with values in the set of symmetric matrices ) which is related to $e(u)$ by Hooke's law,

\[
\sigma = 2\mu\,e(u) + \lambda\,\mathrm{tr}\big(e(u)\big)\,I,
\]


where $\mu$ and $\lambda$ are the Lamé coefficients of the homogeneous isotropic material which occupies $\Omega$. For thermodynamic reasons the Lamé coefficients satisfy,

\[
\mu > 0 \quad \text{and} \quad 2\mu + N\lambda > 0.
\]

We add, to this constitutive law, the balance of forces in the solid,

\[
-\mathrm{div}\,\sigma = f \quad \text{in } \Omega,
\]

where, by definition, the divergence of $\sigma$ is the vector of components,

\[
\mathrm{div}\,\sigma = \left( \sum_{j=1}^{N} \frac{\partial \sigma_{ij}}{\partial x_j} \right)_{1\le i\le N}.
\]

Using the fact that $\mathrm{tr}(e(u)) = \mathrm{div}\,u$, we deduce the equations, for $1\le i\le N$,

\[
-\sum_{j=1}^{N} \frac{\partial}{\partial x_j}\left( \mu\Big( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \Big) \right) - \frac{\partial}{\partial x_i}\big( \lambda\,\mathrm{div}\,u \big) = f_i \quad \text{in } \Omega, \qquad (1.14)
\]

with $f_i$ and $u_i$, for $1\le i\le N$, the components of $f$ and $u$ in the canonical basis of $\mathbb{R}^N$. By adding a Dirichlet boundary condition, and by using vector notation, the boundary value problem is,

\[
\begin{cases}
-\mathrm{div}\big( 2\mu\,e(u) + \lambda\,\mathrm{tr}(e(u))\,I \big) = f & \text{in } \Omega, \\
u = 0 & \text{on } \partial\Omega.
\end{cases} \qquad (1.15)
\]

To find the variational formulation we multiply each equation ( 1.14 ) by a test function $w_i$ ( which is zero at the boundary to take account of the Dirichlet boundary conditions ) and we integrate by parts to obtain,

\[
\int_\Omega \sum_{j=1}^{N} \mu\Big( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \Big) \frac{\partial w_i}{\partial x_j}\,dx + \int_\Omega \lambda\,\mathrm{div}\,u\;\frac{\partial w_i}{\partial x_i}\,dx = \int_\Omega f_i\,w_i\,dx.
\]

We sum these equations, for $i$ going from 1 to $N$, in order to obtain the divergence of the function $w=(w_1,\dots,w_N)$ and to simplify the first integral as,

\[
\sum_{i,j=1}^{N} \mu\Big( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \Big) \frac{\partial w_i}{\partial x_j}
= \sum_{i,j=1}^{N} \frac{\mu}{2}\Big( \frac{\partial u_i}{\partial x_j} + \frac{\partial u_j}{\partial x_i} \Big)\Big( \frac{\partial w_i}{\partial x_j} + \frac{\partial w_j}{\partial x_i} \Big)
= 2\mu\,e(u)\cdot e(w).
\]

Choosing $H^1_0(\Omega)^N$ as the Hilbert space, we obtain the variational formulation: find $u\in H^1_0(\Omega)^N$ such that,

\[
\int_\Omega 2\mu\,e(u)\cdot e(w)\,dx + \int_\Omega \lambda\,\mathrm{div}\,u\;\mathrm{div}\,w\,dx = \int_\Omega f\cdot w\,dx \quad \text{for every } w\in H^1_0(\Omega)^N.
\]
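In the abstract notation of Remark 2 and of the Lax-Milgram theorem, this restatement ( ours, but following directly from the formula above ) reads $a(u,w)=L(w)$ with

\[
a(u,w) = \int_\Omega \big( 2\mu\,e(u)\cdot e(w) + \lambda\,\mathrm{div}\,u\;\mathrm{div}\,w \big)\,dx, \qquad L(w) = \int_\Omega f\cdot w\,dx,
\]

a symmetric continuous bilinear form on $H^1_0(\Omega)^N$; its coercivity is the delicate point mentioned below ( classically it relies on Korn's inequality ).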


Theorem 3: Let $\Omega$ be an open bounded set of $\mathbb{R}^N$. Let $f\in L^2(\Omega)^N$. There exists a unique ( weak ) solution $u\in H^1_0(\Omega)^N$ of ( 1.15 ).

    In order to prove this Theorem, we need to use the Lax-Milgram Theorem for the

    variational formulation of ( 1.15 ). The proof, and especially the verification of the coercivity

of the bilinear form, is delicate and we refer to [1] ( pp. 137-138 ) for a detailed proof. Indeed,

    to introduce other boundary conditions ( for example, Neumann ) on part of the boundary, the

    proof of the coercivity of the variational formulation becomes much more difficult.

In practice, the whole boundary is not fixed: often a part of the boundary is free to move, or surface forces are applied to another part. These two cases are modelled by Neumann boundary conditions, which are written here:

\[
\sigma\,n = g \quad \text{on } \partial\Omega, \qquad (1.16)
\]

where $g$ is a vector-valued function. The Neumann condition ( 1.16 ) is interpreted by saying that $g$ is a force applied on the boundary. If $g=0$, we say that no force is applied and the boundary can move without restriction: we say that the boundary is free.

We shall now consider the elasticity system with mixed boundary conditions ( a mixture of Dirichlet and of Neumann ), that is,

\[
\begin{cases}
-\mathrm{div}\big( 2\mu\,e(u) + \lambda\,\mathrm{tr}(e(u))\,I \big) = f & \text{in } \Omega, \\
u = 0 & \text{on } \Gamma_D, \\
\sigma\,n = g & \text{on } \Gamma_N,
\end{cases} \qquad (1.17)
\]

where $(\Gamma_D,\Gamma_N)$ is a partition of $\partial\Omega$ such that the surface measures of $\Gamma_D$ and $\Gamma_N$ are nonzero. The analysis of this new boundary value problem is more complicated than in the case of the Dirichlet boundary condition. We just cite a theorem that guarantees the existence and uniqueness of the solution of ( 1.17 ), and we refer to [1] ( pp. 140-141 ) for a full proof.

Theorem 4: Let $\Omega$ be a regular open bounded connected set of class $C^1$ of $\mathbb{R}^N$. Let $f\in L^2(\Omega)^N$ and $g\in L^2(\Gamma_N)^N$. We define the space:

\[
V = \{\, u \in H^1(\Omega)^N \text{ such that } u = 0 \text{ on } \Gamma_D \,\}. \qquad (1.18)
\]

There exists a unique ( weak ) solution $u\in V$ of ( 1.17 ) which depends linearly and continuously on the data $f$ and $g$.


    1.3 Finite Element Method ( [1],[2] )

    1.3.1 Variational approximation

1.3.1.1 Introduction

    In this chapter, we present the method of finite elements which is the numerical

    method of choice for the calculation of solutions of elliptic boundary value problems, but is

    also used for parabolic or hyperbolic problems. The principle of this method comes directly

from the variational approach that we have studied in detail in the preceding chapters.

    The idea at the base of the finite element method is to replace the Hilbert space V on

    which we pose the variational formulation by a subspace Vh of finite dimension. The

approximate problem posed over $V_h$ reduces to the simple solution of a linear system, whose matrix is called the stiffness matrix. In addition, we can choose the construction of $V_h$ in such a way that the subspace $V_h$ is a good approximation of $V$ and that the solution $u_h$ in $V_h$ of the variational formulation is close to the exact solution $u$ in $V$.

    1.3.1.2 General internal approximation

    We again consider the general framework of the variational formalism introduced in

    Chapter 1.2. Given a Hilbert space V, a continuous and coercive bilinear form a(u,w), and a

    continuous linear form L(w), we consider the variational formulation,

\[
\text{find } u\in V \text{ such that } a(u,w) = L(w) \quad \forall\, w\in V, \qquad (1.19)
\]

which we know has a unique solution by the Lax-Milgram theorem. The internal approximation of ( 1.19 ) consists of replacing the Hilbert space $V$ by a finite dimensional subspace $V_h$, that is, looking for the solution of,

\[
\text{find } u_h\in V_h \text{ such that } a(u_h,w_h) = L(w_h) \quad \forall\, w_h\in V_h. \qquad (1.20)
\]

    The solution of the internal approximation ( 1.20 ) is easy as we show in the following lemma.

Lemma 1: Let $V$ be a real Hilbert space, and $V_h$ a finite dimensional subspace. Let $a(u,w)$ be a continuous and coercive bilinear form over $V$, and $L(w)$ a continuous linear form over $V$. Then the internal approximation ( 1.20 ) has a unique solution. In addition, this solution can be obtained by solving a linear system with a positive definite matrix ( and symmetric if $a(u,w)$ is symmetric ).

Proof. The existence and uniqueness of $u_h\in V_h$, the solution of ( 1.20 ), follows from the Lax-Milgram Theorem 2 applied to $V_h$. To put the problem in a simpler form, we introduce a basis $(\phi_j)_{1\le j\le N_h}$ of $V_h$. If $u_h = \sum_{j=1}^{N_h} u_j\,\phi_j$, we set $U_h = (u_1,\dots,u_{N_h})$ the vector in $\mathbb{R}^{N_h}$ of the coordinates of $u_h$. Problem ( 1.20 ) is equivalent to,

\[
\text{find } U_h\in\mathbb{R}^{N_h} \text{ such that } a\Big( \sum_{j=1}^{N_h} u_j\,\phi_j,\ \phi_i \Big) = L(\phi_i) \quad \text{for all } 1\le i\le N_h,
\]

which can be written in the form of a linear system

\[
K_h U_h = b_h, \qquad (1.21)
\]

with, for $1\le i,j\le N_h$,

\[
(K_h)_{ij} = a(\phi_j,\phi_i), \qquad (b_h)_i = L(\phi_i).
\]

The coercivity of the bilinear form $a(u,w)$ implies the positive definite character of the matrix $K_h$, and therefore its invertibility. In engineering applications the matrix $K_h$ is called the stiffness matrix.
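To make the construction of ( 1.21 ) concrete, here is a small Python sketch of an internal approximation for the one-dimensional model problem $-u''=1$ on $(0,1)$ with $u(0)=u(1)=0$, where $a(u,w)=\int_0^1 u'w'\,dx$ and $L(w)=\int_0^1 w\,dx$; the two-dimensional polynomial subspace and the quadrature are our own illustrative choices, not taken from the text.

```python
import numpy as np

# Internal (Galerkin) approximation of: find u with a(u,w) = L(w) for all w,
# here a(u,w) = int_0^1 u'w' dx and L(w) = int_0^1 1*w dx,
# i.e. the weak form of -u'' = 1 on (0,1), u(0) = u(1) = 0.
# V_h = span{ x(1-x), x^2(1-x) }, a hypothetical 2-dimensional subspace of H^1_0.

x = np.linspace(0.0, 1.0, 10001)
phi = [x * (1 - x), x**2 * (1 - x)]            # basis functions of V_h
dphi = [1 - 2 * x, 2 * x - 3 * x**2]           # their derivatives
f = np.ones_like(x)

def integrate(g):
    """Composite trapezoidal rule on (0,1)."""
    return float(np.sum(0.5 * (g[1:] + g[:-1]) * np.diff(x)))

n = len(phi)
K = np.array([[integrate(dphi[j] * dphi[i]) for j in range(n)] for i in range(n)])
b = np.array([integrate(f * phi[i]) for i in range(n)])

U = np.linalg.solve(K, b)                      # coordinates of u_h in the basis
u_h = sum(U[j] * phi[j] for j in range(n))
u_exact = 0.5 * x * (1 - x)                    # exact solution of -u'' = 1

print("coefficients:", U)                      # expected close to [0.5, 0.0]
print("max error:", np.max(np.abs(u_h - u_exact)))
```

Because the exact solution $u(x)=x(1-x)/2$ happens to belong to the chosen subspace, the computed coefficients are, up to quadrature error, $(0.5,\,0)$; this is consistent with Cea's lemma below, which bounds the Galerkin error by the distance from $u$ to $V_h$.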

We shall now estimate the error caused by replacing the space $V$ by its subspace $V_h$. More precisely, we shall bound the difference $\|u-u_h\|$, where $u$ is the solution in $V$ of ( 1.19 ) and $u_h$ that in $V_h$ of ( 1.20 ). We denote by $\nu>0$ the coercivity constant and by $M>0$ the continuity constant of the bilinear form $a(u,w)$. The following lemma shows that the distance between the exact solution $u$ and the approximate solution $u_h$ is bounded, uniformly with respect to the subspace $V_h$, by the distance between $u$ and $V_h$.

Lemma 2 ( Cea ): We use the hypotheses of Lemma 1. Let $u$ be the solution of ( 1.19 ) and $u_h$ that of ( 1.20 ). We have,

\[
\|u - u_h\| \le \frac{M}{\nu}\,\inf_{w_h\in V_h} \|u - w_h\|. \qquad (1.22)
\]

    Finally, to prove the convergence of this variational approximation, we give a last

general lemma. Recall that in the notation $V_h$ the parameter $h>0$ does not yet have a precise meaning. Nevertheless, we shall assume that it is in the limit $h\to 0$ that the internal approximation ( 1.20 ) converges to the variational formulation ( 1.19 ).

Lemma 3: We use the hypotheses of Lemma 1. We assume that there exists a subspace $\mathcal{V}\subset V$ which is dense in $V$ and a mapping $r_h$ from $\mathcal{V}$ into $V_h$ ( called an interpolation operator ) such that,

\[
\lim_{h\to 0} \|w - r_h(w)\| = 0 \quad \text{for every } w\in\mathcal{V}. \qquad (1.23)
\]

Then the method of internal variational approximation converges, that is,

\[
\lim_{h\to 0} \|u - u_h\| = 0. \qquad (1.24)
\]

    The strategy indicated by Lemmas 1, 2, and 3 above is now clear. To obtain a

    numerical approximation of the exact solution of the variational problem ( 1.19 ), we

    must introduce a finite dimensional space Vh, then solve a simple linear system associated

with the internal variational approximation ( 1.20 ). Nevertheless, the choice of $V_h$ is not obvious. It must satisfy two criteria:

1. We must be able to construct an interpolation operator $r_h$ from $\mathcal{V}$ into $V_h$ satisfying ( 1.23 ).

2. The solution of the linear system $K_h U_h = b_h$ must be economical.


    1.3.1.3 Finite element method (general principles)

The principle of the finite element method is to construct internal approximation spaces $V_h$ of the usual functional spaces $H^1(\Omega)$, $H^1_0(\Omega)$, whose definition is based on the geometrical concept of a mesh of the domain $\Omega$. A mesh is a tessellation of space by very simple elementary volumes: triangles, tetrahedra, parallelepipeds. We shall later give a precise definition of a mesh in the framework of the finite element method.

In this context the parameter $h$ of $V_h$ corresponds to the maximum size of the cells which make up the mesh. Typically, a basis of $V_h$ will be composed of functions whose support is localized in one or a few elements. This has two important consequences: on the one hand, in the limit $h\to 0$, the space $V_h$ becomes larger and larger and approaches, little by little, the entire space $V$; on the other hand, the stiffness matrix $K_h$ of the linear system ( 1.21 ) is sparse, that is, most of its coefficients are zero ( which limits the cost of the numerical solution ).

    The finite element method is one of the most effective and most popular methods of

    numerically solving boundary value problems. It is the basis of innumerable industrial software

    packages.

    1.3.2 Finite elements in N=1 dimension

    To simplify the exposition, we will present the finite element method in one space

dimension. Without loss of generality we choose the domain $\Omega=(0,1)$. In one dimension a mesh is simply composed of a collection of points $(x_j)_{0\le j\le n+1}$ such that:

\[
x_0 = 0 < x_1 < \dots < x_n < x_{n+1} = 1.
\]

The mesh is called uniform if the points $x_j$ are equidistant, that is,

\[
x_j = j\,h \quad \text{with } h = \frac{1}{n+1}, \quad 0\le j\le n+1.
\]

The points $x_j$ are also called the vertices or nodes of the mesh. In all that follows, we denote by $P_k$ the set of polynomials with real coefficients, of one real variable, of degree less than or equal to $k$.

1.3.2.1 P1 finite elements

The P1 finite element method uses the discrete space of globally continuous functions which are affine on each element,

\[
V_h = \{\, u \in C([0,1]) \text{ such that } u|_{[x_j,x_{j+1}]} \in P_1 \text{ for all } 0\le j\le n \,\}, \qquad (1.24)
\]

and its subspace,

\[
V_{0h} = \{\, u \in V_h \text{ such that } u(0) = u(1) = 0 \,\}. \qquad (1.25)
\]


The P1 finite element method is then simply the method of internal variational approximation applied to the spaces $V_h$ or $V_{0h}$ defined by ( 1.24 ) or ( 1.25 ).

We can represent the functions of $V_h$ or $V_{0h}$, which are piecewise affine, with the help of very simple basis functions. We introduce the "hat" function defined by,

\[
\phi(x) = \begin{cases} 1 - |x| & \text{if } |x| \le 1, \\ 0 & \text{if } |x| > 1. \end{cases}
\]

If the mesh is uniform, for $0\le j\le n+1$ we define the basis functions ( see Figure 3 ),

\[
\phi_j(x) = \phi\!\left( \frac{x - x_j}{h} \right). \qquad (1.26)
\]

Figure 3: Mesh of $\Omega=(0,1)$ and P1 finite element basis functions.

Lemma 4: The space $V_h$, defined by ( 1.24 ), is a subspace of $H^1(0,1)$ of dimension $n+2$, and every function $u_h\in V_h$ is defined uniquely by its values at the vertices $(x_j)_{0\le j\le n+1}$:

\[
u_h(x) = \sum_{j=0}^{n+1} u_h(x_j)\,\phi_j(x) \quad \text{for all } x\in[0,1].
\]

Likewise, $V_{0h}$, defined by ( 1.25 ), is a subspace of $H^1_0(0,1)$ of dimension $n$, and every function $u_h\in V_{0h}$ is defined uniquely by its values at the vertices $(x_j)_{1\le j\le n}$:

\[
u_h(x) = \sum_{j=1}^{n} u_h(x_j)\,\phi_j(x) \quad \text{for all } x\in[0,1].
\]

The basis $(\phi_j)$, defined by ( 1.26 ), allows us to characterize a function of $V_h$ by its

    values at the nodes of the mesh. In this case we talk of Lagrange finite elements.


As we mentioned before, using the Finite Element Method to approximate the solution of a PDE ends up with the solution of a linear system of the form,

\[
K_h U_h = b_h.
\]

As the basis functions $\phi_j$ have a small support, the intersection of the supports of $\phi_j$ and $\phi_i$, whose interactions form the entries of the stiffness matrix $K_h$, is often empty, and most of the coefficients of $K_h$ are zero. For example, in $N=1$ dimension, using the P1 F.E.M. results in a tridiagonal stiffness matrix.

The exact evaluation of the right-hand side $b_h$ can be difficult or impossible if the function $f$ is complicated. In practice, we use quadrature formulas ( or numerical integration formulas ) which give an approximation of the integrals in $b_h$. For example, we can use the midpoint formula:

\[
\int_{x_i}^{x_{i+1}} \psi(x)\,dx \approx (x_{i+1} - x_i)\,\psi\!\left( \frac{x_i + x_{i+1}}{2} \right).
\]
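The following Python sketch ( ours, not from the text ) assembles and solves this tridiagonal P1 system for the model problem $-u''=f$ on $(0,1)$ with $u(0)=u(1)=0$, using the midpoint rule above for the right-hand side; the data $f(x)=\pi^2\sin(\pi x)$, with exact solution $u(x)=\sin(\pi x)$, is an illustrative choice.

```python
import numpy as np

# P1 finite elements on a uniform mesh of (0,1) for -u'' = f, u(0) = u(1) = 0.
# Interior unknowns are the nodal values u_h(x_1), ..., u_h(x_n).
# Illustrative data (not from the text): f(x) = pi^2 sin(pi x), exact u(x) = sin(pi x).
f = lambda x: np.pi**2 * np.sin(np.pi * x)
u_exact = lambda x: np.sin(np.pi * x)

n = 50                                  # number of interior nodes
h = 1.0 / (n + 1)
nodes = np.linspace(0.0, 1.0, n + 2)    # x_0 = 0, ..., x_{n+1} = 1

# Tridiagonal stiffness matrix: K_ii = 2/h, K_{i,i+1} = K_{i+1,i} = -1/h.
K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h

# Right-hand side b_i = int f(x) phi_i(x) dx, approximated element by element
# with the midpoint rule (phi_i equals 1/2 at the midpoints of its two elements).
b = np.zeros(n)
for e in range(n + 1):                  # element [x_e, x_{e+1}]
    mid = 0.5 * (nodes[e] + nodes[e + 1])
    val = h * f(mid)
    if e >= 1:                          # contribution to basis function phi_e
        b[e - 1] += 0.5 * val
    if e <= n - 1:                      # contribution to basis function phi_{e+1}
        b[e] += 0.5 * val

U = np.linalg.solve(K, b)               # nodal values of u_h at interior nodes
err = np.max(np.abs(U - u_exact(nodes[1:-1])))
print("max nodal error:", err)          # the nodal error decreases as the mesh is refined
```

The tridiagonal structure of $K$ is exactly the sparsity pattern described above; in larger computations the dense `np.linalg.solve` would be replaced by a banded or sparse solver, which is what keeps the cost of the numerical solution low.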

    1.3.2.2 Convergence and error estimation

We first of all define an interpolation operator $r_h$:

Definition 1.7: The P1 interpolation operator is the linear mapping from $H^1(0,1)$ into $V_h$ defined, for all $u\in H^1(0,1)$, by:

\[
(r_h u)(x) = \sum_{j=0}^{n+1} u(x_j)\,\phi_j(x).
\]

The interpolant $r_h u$ of a function $u$ is simply the piecewise affine function which coincides with $u$ at the vertices $x_j$ of the mesh.

Lemma 5: Let $r_h$ be the P1 interpolation operator. For every $u\in H^1(0,1)$, it satisfies:

\[
\lim_{h\to 0} \|u - r_h u\|_{H^1(0,1)} = 0.
\]

Moreover, if $u\in H^2(0,1)$, then there exists a constant $C$ independent of $h$ such that:

\[
\|u - r_h u\|_{H^1(0,1)} \le C\,h\,\|u''\|_{L^2(0,1)}.
\]

In our opinion, a full presentation of the Finite Element Method is outside the scope of this Thesis. For a more complete presentation and understanding of the FEM, we refer to [1]. The main idea to remember is that the FEM approximates the solution of our equation, and that there are several variants of the method, according to the polynomials which substitute for our initial function. The use of different basis functions in each variant results in methods with different benefits and drawbacks. In our examples, we use the P1 and P2 methods.


Considering again that we work in $N=1$ dimension, the principal advantage of P2 finite elements is that, if the solution is regular, the convergence of the method is quadratic ( the rate of convergence is proportional to $h^2$ ), while the convergence of P1 finite elements is only linear ( proportional to $h$ ). Of course, this advantage has a price: there are twice as many unknowns ( exactly $2n+1$ instead of $n$ for P1 finite elements ), so the matrix is twice as large, and it has five nonzero diagonals instead of three in the P1 case. Let us remark that if the solution is not regular, there is no theoretical ( or practical ) advantage in using P2 finite elements rather than P1. The doubling of the number of unknowns is due to the fact that we also need the values of the function at the midpoints of the elements, in order to decompose a second degree polynomial on a basis.
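For reference ( these standard estimates are ours to state here, but they are consistent with Lemma 5 and Cea's lemma ), the convergence rates just described correspond, for a sufficiently regular solution, to error bounds of the form

\[
\|u - u_h\|_{H^1(0,1)} \le C\,h\,\|u''\|_{L^2(0,1)} \quad \text{(P1)}, \qquad
\|u - u_h\|_{H^1(0,1)} \le C\,h^2\,\|u'''\|_{L^2(0,1)} \quad \text{(P2)},
\]

with constants $C$ independent of $h$.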


    2.2 Definitions and notation ( [1] )

Optimization has a particular vocabulary: let us introduce some classical notation and definitions. We consider principally minimization problems ( knowing that it is enough to change the sign to obtain a maximization problem ).

First of all, the space in which the problem lies, denoted by $V$, is assumed to be a normed vector space, that is to say, equipped with a norm denoted by $\|u\|$. The space $V$ can be the space $\mathbb{R}^N$, or a real Hilbert space, or even some other space. We also have a subset $K\subset V$ in which we will look for the solution: we say that $K$ is the set of admissible elements of the problem, or that $K$ defines the constraints imposed on the problem. Finally, the criterion, or the cost function, or the objective function, to be minimized, denoted by $J$, is a function defined over $K$ with values in $\mathbb{R}$. The problem studied will therefore be denoted as,

\[
\inf_{u\in K\subset V} J(u). \qquad (2.1)
\]

For maximization problems, the notation $\max$ replaces $\inf$. Let us specify some basic notation.

Definition 2.1: We say that $u$ is a local minimum ( or minimum point ) of $J$ over $K$ if and only if,

\[
u\in K \ \text{ and } \ \exists\,\delta>0,\ \forall\, w\in K,\ \|w-u\| < \delta \ \Longrightarrow\ J(u) \le J(w).
\]


    2.3 Categories of optimization problems ( [1],[20] )

    2.3.1 Continuous versus discrete optimization

As we mentioned earlier, in some optimization problems the variables make sense only if they take on integer values. The obvious strategy of ignoring the integrality requirement, solving the problem with real variables, and then rounding all the components to the nearest integer is by no means guaranteed to give solutions that are close to optimal. The mathematical model is changed by adding the constraint $x_{ij}\in\mathbb{Z}$, for all $i$ and $j$, to the existing constraints. The problem is then known as an integer programming problem.

    The generic term discrete optimization usually refers to problems in which the

    solution we seek is one of a number of objects in a finite set.

    Continuous optimization problems are normally easier to solve, because the

    smoothness of the functions makes it possible to use objective and constraint information at a

particular point x to deduce information about the function's behaviour at all points close to x.

The same statement cannot be made about discrete problems, where points that are close in some sense may have markedly different function values.

    Some models contain variables that are allowed to vary continuously and others that

    can attain only integer values. We refer to these as mixed integer programming problems.

2.3.2 Constrained and unconstrained optimization

The most important distinction between optimization algorithms is the presence or absence of constraints. Sometimes it is safe to disregard some natural constraints on the

    variables, and assume that they have no effect on the optimal solution. Unconstrained

    problems arise also as reformulations of constrained optimization problems, in which the

    constraints are replaced by penalization terms in the objective function that have the effect of

    discouraging constraint violations.

    Constrained problems arise from models that include explicit constraints on the

    variables. These constraints may be simple bounds, more general linear constraints, or

    nonlinear inequalities that represent complex relationships among the variables.

    2.3.3 Nature of objective function and constraints

According to the nature of the objective function and constraints, we classify optimization problems as linear, nonlinear and convex.

    In linear programming problems, the objective as well as the constraints are all linear

    functions. Such kind of problems appear mostly in management and transportation problems.

    The term nonlinear characterizes problems, where at least some of the constraints or

    the objective are nonlinear functions. Such problems tend to arise naturally in the physical

    sciences and engineering.

Finally, the term convex programming is used to describe a special case of constrained optimization problems, in which:

- the objective function is convex,
- the equality constraint functions are linear,
- the inequality constraint functions are concave.


    2.3.4 Global and local optimization

    The fastest optimization algorithms seek only a local solution, a point at which the

    objective function is smaller than at all other feasible points in its vicinity.

    A special case is convex programming, in which all local solutions are also global

    solutions. Linear programming problems fall in the category of convex programming.

    2.3.5 Stochastic and deterministic optimization

    In some optimization problems, the model cannot be fully specified because it depends

    on quantities that are unknown at the time of formulation.

    Frequently, however, modelers can predict or estimate the unknown quantities with

    some degree of confidence. They may, for instance, come up with a number of possible

    scenarios for the values of the unknown quantities and even assign a probability to each

    scenario. Stochastic optimization algorithms use these quantifications of the uncertainty to

produce solutions that optimize the expected performance of the model.

Here, we focus on deterministic optimization problems, in which the model is fully

    specified.

    2.4 Existence of a minimum ( [1] )

Generally, the question of the existence of a minimum point is outside the scope of this

    Thesis. However, it is interesting to highlight the difference on this topic between finite and

    infinite dimensions. Moreover, referring to convex analysis will allow us to understand better

    some results in the sequel.

2.4.1 Optimization in finite dimensions

Let us now interest ourselves in the question of the existence of minima for optimization problems posed in finite dimensions. We shall assume in this section ( without loss of generality ) that $V=\mathbb{R}^N$, provided with the usual scalar product $\langle u,w\rangle = \sum_{i=1}^{N} u_i w_i$ and with the Euclidean norm $\|u\| = \sqrt{\langle u,u\rangle}$. A general result guaranteeing the existence of a minimum is the following.

Theorem 2.1 ( existence of a minimum in finite dimensions ): Let $K$ be a closed nonempty set of $\mathbb{R}^N$, and $J$ a continuous function over $K$ with values in $\mathbb{R}$ satisfying the property, called "infinite at infinity",

\[
\text{for every sequence } (u_n) \text{ in } K, \quad \lim_{n\to+\infty} \|u_n\| = +\infty \ \Longrightarrow\ \lim_{n\to+\infty} J(u_n) = +\infty. \qquad (2.2)
\]

Then there exists at least one minimum point of $J$ over $K$. Further, from every minimizing sequence of $J$ over $K$ we can extract a subsequence converging to a minimum point over $K$.


    Remark 2.1: The property ( 2.2 ), which assures that every minimizing sequence of J over K is

    bounded, is automatically satisfied if K is bounded. When the set K is not bounded, this

    condition means that, in K, J is infinite at infinity.
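As a simple illustration of property ( 2.2 ) ( our own example, not from the text ): on $K=V=\mathbb{R}^N$ the quadratic function

\[
J(u) = \|u\|^2 - \langle b, u\rangle, \qquad b\in\mathbb{R}^N \text{ fixed},
\]

is continuous and satisfies $J(u) \ge \|u\|^2 - \|b\|\,\|u\| \to +\infty$ as $\|u\|\to+\infty$, so Theorem 2.1 guarantees at least one minimum point ( here $u=b/2$ ).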

    2.4.2 Optimization in infinite dimensions

    We give an example showing that the existence of a minimum in infinite dimensions is

not absolutely guaranteed by conditions like those used in the statement of Theorem 2.1. This

    difficulty is closely linked to the fact that in infinite dimensions the closed bounded sets are not

    compact!

Example 2.2: Take the Hilbert space ( of infinite dimension ) of square summable sequences in $\mathbb{R}$,

$$\ell^2(\mathbb{R}) = \Big\{ x = (x_i)_{i\ge 1} \ \text{ such that } \ \sum_{i\ge 1} x_i^2 < +\infty \Big\},$$

equipped with the scalar product $\langle x, y\rangle = \sum_{i\ge 1} x_i y_i$. We consider the function J defined over $\ell^2(\mathbb{R})$ by,

$$J(x) = \big(\|x\|^2 - 1\big)^2 + \sum_{i\ge 1} \frac{x_i^2}{i}.$$

Taking $K = \ell^2(\mathbb{R})$, we consider the problem,

$$\inf_{x\in \ell^2(\mathbb{R})} J(x), \qquad ( 2.3 )$$

for which we shall show that there does not exist a minimum point. We verify first of all that,

$$\inf_{x\in \ell^2(\mathbb{R})} J(x) = 0. \qquad ( 2.4 )$$

Let us introduce the sequence $x^n$ in $\ell^2(\mathbb{R})$ defined by $x^n_i = \delta_{in}$ for all $i \ge 1$ ( that is, $x^n_n = 1$ and $x^n_i = 0$ for $i \neq n$ ). We verify easily that,

$$J(x^n) = \frac{1}{n} \to 0 \quad \text{when } n \to +\infty.$$

As J is positive, we deduce that $x^n$ is a minimizing sequence and that the infimum value is zero. However, it is evident that there does not exist any $x \in \ell^2(\mathbb{R})$ such that $J(x) = 0$. Consequently, there does not exist a minimum point for ( 2.3 ). We see in this example that the minimizing sequence $x^n$ is not compact in $\ell^2(\mathbb{R})$ ( although it is bounded ).
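To make this concrete, here is a short numerical illustration ( added for this presentation and not part of [1]; it assumes Python with numpy ), which evaluates J on the vectors $x^n$ of Example 2.2, truncated to their first n components, and shows the values $1/n$ of the minimizing sequence approaching the infimum 0 without ever reaching it:

import numpy as np

def J(x):
    # J(x) = (||x||^2 - 1)^2 + sum_i x_i^2 / i, truncated to the length of x
    i = np.arange(1, len(x) + 1)
    return (np.dot(x, x) - 1.0) ** 2 + np.sum(x ** 2 / i)

for n in [1, 10, 100, 1000]:
    xn = np.zeros(n)          # x^n has a single nonzero entry ...
    xn[n - 1] = 1.0           # ... namely x^n_n = 1
    print(n, J(xn))           # prints 1/n: 1.0, 0.1, 0.01, 0.001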

In order to obtain an existence result in infinite dimensions, we need to add some additional condition. Unfortunately, even these conditions are not verifiable in general. However, we can verify them for a particular class of problems, which are very important in theory and practice: convex minimization problems.


    2.5 Optimality conditions ( [1] )

    2.5.1 Introduction

    In this part, we shall obtain necessary and sometimes sufficient conditions for

minimality. The objective is in a certain way much more practical, since these optimality conditions will most often be used to try to calculate a minimum ( sometimes even without

    having shown its existence! ). The general idea of optimality conditions is the same as writing

    that the derivative must be zero, when we calculate the extremum of a function over R.

    These conditions will therefore be expressed with the help of the first derivative

    ( conditions of order 1 ) or second derivative ( conditions of order 2 ). Above all we will obtain

necessary conditions for optimality, but the use of the second derivative or the introduction of

    convexity hypotheses will also allow us to obtain sufficient conditions, and to distinguish

    between minima and maxima.

These optimality conditions generalize the following elementary remark: if $x_0$ is a local minimum point of J on the interval $[a,b] \subset \mathbb{R}$ ( J being a differentiable function on [a,b] ), then we have:

$$J'(x_0) \ge 0 \ \text{ if } x_0 = a, \qquad J'(x_0) = 0 \ \text{ if } x_0 \in\, ]a,b[, \qquad J'(x_0) \le 0 \ \text{ if } x_0 = b.$$

The strategy to obtain and to prove the minimality conditions is therefore clear: we take account of the constraints to test the minimality of $x_0$ in particular directions which respect the constraints: we shall talk of admissible directions. We then use the definition of the derivative ( and the second order Taylor formulas ) to conclude.

    2.5.2 Differentiability

From now on, we assume that V is a real Hilbert space, and that J is a continuous function with values in $\mathbb{R}$. The scalar product in V is always denoted $\langle u, w\rangle$ and the associated norm $\|u\|$.

    We start by introducing the idea of a first derivative of J, since we shall need this to

    write optimality conditions. When there are several variables ( that is to say if the space V is

not R ), the appropriate notion of differentiability, called differentiability in the sense of

    Frechet, is given by the following definition.

Definition 2.5: We say that the function J, defined in a neighbourhood of $u \in V$ with values in $\mathbb{R}$, is differentiable in the sense of Frechet at u if there exists a continuous linear form L on V such that,

$$J(u + w) = J(u) + L(w) + o(w), \quad \text{with } \lim_{w\to 0} \frac{|o(w)|}{\|w\|} = 0. \qquad ( 2.8 )$$

We call L the derivative ( or the differential, or the gradient ) of J at u and we denote $L = J'(u)$.
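As a simple numerical illustration of ( 2.8 ) ( a sketch added here, assuming Python with numpy; it is not part of the reference text ), the following fragment takes the quadratic function $J(v) = \frac{1}{2}\langle Av, v\rangle - \langle b, v\rangle$, whose derivative for symmetric A is $J'(v) = Av - b$, and checks that the remainder $o(w)$ is indeed negligible compared with $\|w\|$:

import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5)); A = A.T @ A + np.eye(5)   # symmetric positive definite
b = rng.standard_normal(5)
u = rng.standard_normal(5)
w = rng.standard_normal(5)

J = lambda v: 0.5 * v @ A @ v - b @ v
grad = A @ u - b                                            # J'(u) = Au - b

for t in [1e-1, 1e-2, 1e-3, 1e-4]:
    remainder = J(u + t * w) - J(u) - grad @ (t * w)        # the term o(tw) in ( 2.8 )
    print(t, abs(remainder) / np.linalg.norm(t * w))        # ratio tends to 0 with t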


Proposition 2.2 ( second order optimality condition ): We assume that K = V and that J is twice differentiable at u. If u is a local minimum point of J, then,

$$J'(u) = 0 \quad \text{and} \quad J''(u)(w, w) \ge 0 \quad \forall\, w \in V. \qquad ( 2.12 )$$

A partial converse of Proposition 2.2 also holds: if $J'(u) = 0$ and, for some $\alpha > 0$, $J''(u)(w,w) \ge \alpha \|w\|^2$ for all $w \in V$, then u is a ( strict ) local minimum point of J.

2.5.4 Lagrange multipliers

We shall now try to write the minimality conditions when the set K is not convex. More precisely, we shall study sets K defined by equality constraints or inequality constraints ( or both at the same time ). We start with a general remark about admissible directions.

Definition 2.7: At every point $w \in K$, the set

$$K(w) = \Big\{ y \in V : \ \exists\, (w^n)_{n\ge 0} \text{ a sequence in } K, \ \exists\, (\varepsilon^n)_{n\ge 0} \text{ a sequence in } \mathbb{R}_+^*, \ \lim_{n\to\infty} w^n = w, \ \lim_{n\to\infty} \varepsilon^n = 0, \ \lim_{n\to\infty} \frac{w^n - w}{\varepsilon^n} = y \Big\}$$

is called the cone of admissible directions at the point w.

In more visual terms, we can also say that K(w) is the set of all the vectors which are tangents at w to a curve contained in K and passing through w. In other words, K(w) is the set of all the possible directions of variations starting from w which remain infinitesimally in K.

The interest of the cone of admissible directions lies in the following result, which gives a necessary optimality condition.

Proposition 2.3 ( Euler inequality, general case ): Let u be a local minimum of J over K. If J is differentiable at u, we have,

$$\langle J'(u), y\rangle \ge 0 \quad \forall\, y \in K(u).$$

We shall now make precise the necessary condition of Proposition 2.3 in the case where K is given by equality and inequality constraints.

Equality constraints

In this first case we assume that K is given by,

$$K = \{ y \in V,\ F(y) = 0 \}, \qquad ( 2.13 )$$

where $F(y) = (F_1(y), \dots, F_M(y))$ is a mapping from V into $\mathbb{R}^M$, with $M \ge 1$. The necessary optimality condition then takes the following form.


Theorem 2.5: Take $u \in K$ where K is given by ( 2.13 ). We assume that J is differentiable at u and that the functions $F_i$ ( $1 \le i \le M$ ) are continuously differentiable in a neighbourhood of u. We further assume that the vectors $(F_i'(u))_{1\le i\le M}$ are linearly independent. Then, if u is a local minimum of J over K, there exist $\lambda_1, \dots, \lambda_M \in \mathbb{R}$, called Lagrange multipliers, such that,

$$J'(u) + \sum_{i=1}^{M} \lambda_i F_i'(u) = 0. \qquad ( 2.14 )$$

Proof. See [1] ( pg 307 ).
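As a simple illustration of ( 2.14 ) ( an example added here for concreteness, not taken from [1] ), take $V = \mathbb{R}^2$, $J(y) = y_1^2 + y_2^2$ and the single equality constraint $F(y) = y_1 + y_2 - 1 = 0$. Condition ( 2.14 ) reads

$$J'(u) + \lambda F'(u) = \begin{pmatrix} 2u_1 \\ 2u_2 \end{pmatrix} + \lambda \begin{pmatrix} 1 \\ 1 \end{pmatrix} = 0,$$

which, combined with the constraint $u_1 + u_2 = 1$, gives $u = (1/2, 1/2)$ and the Lagrange multiplier $\lambda = -1$: the point of the line closest to the origin, as expected.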

Remark 2.4: It is useful to introduce the function L defined over $V \times \mathbb{R}^M$ by,

$$L(y, \lambda) = J(y) + \sum_{i=1}^{M} \lambda_i F_i(y) = J(y) + \lambda \cdot F(y),$$

that we call the Lagrangian of the minimization problem of J over K. If $u \in K$ is a local minimum of J over K, Theorem 2.5 tells us that, in the regular case, there exists $\lambda \in \mathbb{R}^M$ such that:

$$\frac{\partial L}{\partial y}(u, \lambda) = 0, \qquad \frac{\partial L}{\partial \lambda}(u, \lambda) = 0,$$

since $\frac{\partial L}{\partial \lambda}(u, \lambda) = F(u) = 0$ if and only if $u \in K$, and $\frac{\partial L}{\partial y}(u, \lambda) = J'(u) + \lambda \cdot F'(u) = 0$ from ( 2.14 ). We can therefore write the constraint and the optimality condition as the annihilation of the gradient ( the stationarity ) of the Lagrangian.

We now give a necessary second order optimality condition.

Proposition 2.4: We take the hypotheses of Theorem 2.5 and we assume that the functions J and $F_1, \dots, F_M$ are twice continuously differentiable and that the vectors $(F_i'(u))_{1\le i\le M}$ are linearly independent. Let $\lambda \in \mathbb{R}^M$ be the Lagrange multiplier defined by Theorem 2.5. Then every local minimum u of J over K satisfies,

$$\Big( J''(u) + \sum_{i=1}^{M} \lambda_i F_i''(u) \Big)(w, w) \ge 0 \quad \forall\, w \in K(u) = \bigcap_{i=1}^{M} \ker F_i'(u). \qquad ( 2.15 )$$

Proof. See [1] ( pg 310 ).


    Inequality constraints

In this second case we assume that K is given by,

$$K = \{ y \in V,\ F_i(y) \le 0 \ \text{ for } 1 \le i \le M \}, \qquad ( 2.16 )$$

where $F_1, \dots, F_M$ are always functions from V into $\mathbb{R}$. When we want to determine the cone of admissible directions K(y), the situation is a little more complicated than before, as all the constraints in ( 2.16 ) do not play the same role depending on the point y where we calculate K(y). In effect, if $F_i(y) < 0$, it is clear that, for any $w \in V$ and $\varepsilon > 0$ sufficiently small, we will also have $F_i(y + \varepsilon w) \le 0$ ( we say that the constraint i is inactive at y ). If $F_i(y) = 0$ for certain indices i, it is not clear that we can find a vector $w \in V$ such that, for $\varepsilon > 0$ sufficiently small, $y + \varepsilon w$ satisfies all the constraints in ( 2.16 ). It will therefore be necessary to impose supplementary conditions on the constraints, called constraint qualifications. Roughly speaking, these conditions will guarantee that we can make variations around a point y in order to test its optimality. There exist different types of constraint qualification ( more or less sophisticated and general ). We shall give a definition whose principle is to look at the linearized constraints and to ask whether it is possible to make variations respecting them. These calculus of variations considerations motivate the following definitions.

Definition 2.8: Take $u \in K$. The set $I(u) = \{ i \in \{1,\dots,M\},\ F_i(u) = 0 \}$ is called the set of active constraints at u.

Definition 2.9: We say that the constraints ( 2.16 ) are qualified at $u \in K$ if and only if there exists a direction $\overline{w} \in V$ such that, for all $i \in I(u)$,

$$\text{either } \ \langle F_i'(u), \overline{w}\rangle < 0, \quad \text{or } \ \langle F_i'(u), \overline{w}\rangle = 0 \ \text{ and } F_i \text{ is affine}. \qquad ( 2.17 )$$

The direction $\overline{w}$ is in some way a re-entrant direction, since we deduce from ( 2.17 ) that $u + \varepsilon\overline{w} \in K$ for all $\varepsilon \ge 0$ sufficiently small. Of course, if all the functions $F_i$ are affine, we can take $\overline{w} = 0$ and the constraints are automatically qualified.

We can then state the necessary optimality conditions over the set ( 2.16 ).

Theorem 2.6: We assume that K is given by ( 2.16 ), that the functions J and $F_1, \dots, F_M$ are differentiable at u and that the constraints are qualified at u. Then, if u is a local minimum of J over K, there exist $\lambda_1, \dots, \lambda_M \ge 0$, called Lagrange multipliers, such that,

$$J'(u) + \sum_{i=1}^{M} \lambda_i F_i'(u) = 0, \quad \lambda_i \ge 0, \quad \lambda_i = 0 \ \text{ if } F_i(u) < 0, \quad \forall\, i \in \{1,\dots,M\}. \qquad ( 2.18 )$$


Remark 2.5: We can rewrite the condition ( 2.18 ) in the following form,

$$J'(u) + \sum_{i=1}^{M} \lambda_i F_i'(u) = 0, \quad \lambda \ge 0, \quad F(u) \le 0, \quad \lambda \cdot F(u) = 0,$$

where $\lambda \ge 0$ means that each of the components of the vector $\lambda = (\lambda_1, \dots, \lambda_M)$ is non-negative, since, for every index $i \in \{1,\dots,M\}$, we have either $F_i(u) = 0$ or $\lambda_i = 0$. The fact that $\lambda \cdot F(u) = 0$ is called the complementary slackness condition.

Proof ( of Theorem 2.6 ). See [1] ( pg 313 ).
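To illustrate ( 2.18 ) on a concrete case ( a small check added here, assuming Python with numpy; it is not taken from [1] ), consider $J(y) = y_1^2 + y_2^2$ with the single inequality constraint $F_1(y) = 1 - y_1 - y_2 \le 0$. The candidate $u = (1/2, 1/2)$ with multiplier $\lambda_1 = 1$ satisfies the stationarity, sign and complementary slackness conditions:

import numpy as np

u = np.array([0.5, 0.5])
lam = 1.0                                  # Lagrange multiplier for F_1

gradJ = 2 * u                              # J'(u)
gradF1 = np.array([-1.0, -1.0])            # F_1'(y) for F_1(y) = 1 - y_1 - y_2
F1 = 1.0 - u.sum()

print(gradJ + lam * gradF1)                # stationarity: [0. 0.]
print(lam >= 0.0)                          # sign condition: True
print(lam * F1)                            # complementary slackness: 0.0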

    Equality and inequality constraints

We can of course mix the two types of constraints. We therefore assume that K is given by,

$$K = \{ y \in V,\ G(y) = 0,\ F(y) \le 0 \}, \qquad ( 2.19 )$$

where $G(y) = (G_1(y), \dots, G_N(y))$ and $F(y) = (F_1(y), \dots, F_M(y))$ are two mappings from V into $\mathbb{R}^N$ and $\mathbb{R}^M$ respectively. In this new context, we must give an adequate definition of the qualification of the constraints. We always denote by $I(u) = \{ i \in \{1,\dots,M\},\ F_i(u) = 0 \}$ the set of active inequality constraints at $u \in K$.

Definition 2.10: We say that the constraints ( 2.19 ) are qualified at $u \in K$ if and only if the vectors $(G_i'(u))_{1\le i\le N}$ are linearly independent and there exists a direction $\overline{w} \in \bigcap_{i=1}^{N} \ker G_i'(u)$ such that, for all $i \in I(u)$,

$$\langle F_i'(u), \overline{w}\rangle < 0. \qquad ( 2.20 )$$

We can then state the necessary optimality conditions over the set ( 2.19 ).

Theorem 2.7: Take $u \in K$ where K is given by ( 2.19 ). We assume that J and F are differentiable at u, that G is differentiable in a neighbourhood of u, and that the constraints are qualified at u ( in the sense of Definition 2.10 ). Then, if u is a local minimum of J over K, there exist Lagrange multipliers $\mu_1, \dots, \mu_N \in \mathbb{R}$ and $\lambda_1, \dots, \lambda_M \ge 0$, such that,

$$J'(u) + \sum_{i=1}^{N} \mu_i G_i'(u) + \sum_{i=1}^{M} \lambda_i F_i'(u) = 0, \quad \lambda \ge 0, \quad F(u) \le 0, \quad \lambda \cdot F(u) = 0. \qquad ( 2.21 )$$


    2.6 Numerical algorithms ( [1],[20] )

    2.6.1 Introduction

In this section we present and analyse some gradient algorithms, which will allow us to calculate, or more exactly to approximate, the solution of the optimization problems studied above. We also refer to some Newton algorithms, although we have not used them in our examples. All the algorithms studied here are used effectively in practice to solve concrete optimization problems by computer.

These algorithms are of an iterative nature: starting from a given initial point $u^0$, each method constructs a sequence $(u^n)_{n\in\mathbb{N}}$ which converges, under certain hypotheses, to the solution u of the optimization problem considered. In this Thesis, we sometimes refer to the convergence of these algorithms, but the proofs and the rates of convergence are beyond the scope of this work.

In all of this section we assume that the objective function J to be minimized is $\alpha$-convex and differentiable. The application of the algorithms presented here to the minimization of convex functions which are not strongly convex can present some small difficulties, without mentioning the great difficulties which appear when we want to approximate the minimum of a non-convex function! Typically, these algorithms may fail to converge and oscillate between several minimum points, or, worse, they converge to a local minimum very far from a global minimum.

Remark 2.6: We limit ourselves to deterministic algorithms and we say nothing about stochastic algorithms. Besides the fact that their analysis calls on probability theory ( which we do not discuss in this Thesis ), their use is very different. To put it simply, let us say that deterministic algorithms are the most efficient for the minimization of convex functions, while stochastic algorithms allow us to approximate global ( not only local ) minima of non-convex functions.

    2.6.2 Overview of algorithms

In this section, we refer to unconstrained optimization problems, since algorithms for constrained problems are just combinations of these basic methods with techniques used to deal with constraints. All algorithms for unconstrained minimization problems require the user to supply a starting point, which we usually denote by $u^0$. Beginning at $u^0$, optimization algorithms generate a sequence of points $(u^n)_{n\in\mathbb{N}}$ that terminates when either no more progress can be made or when it seems that a solution point has been approximated with sufficient accuracy. In deciding how to move from one iterate to the next, the algorithms use information about the function J at $u^n$, and possibly also information from the earlier iterates $u^0, u^1, \dots, u^{n-1}$. They use this information to find a new iterate $u^{n+1}$ with a lower function value than $u^n$. In this work, we use monotone algorithms, although there are also non-monotone algorithms that do not insist on a decrease of J at every step, but even these algorithms require J to be decreased after some prescribed number of iterations.

There are two fundamental strategies for moving from the current point $u^n$ to a new iterate $u^{n+1}$: the line search and the trust region method. In the line search algorithmic strategy, the distance to move along a direction $w^n$ can be found by approximately solving


the following minimization problem for a step length $\mu^n$,

$$\min_{\mu \ge 0} J(u^n + \mu\, w^n). \qquad ( 2.22 )$$

The exact solution of ( 2.22 ) is expensive and usually unnecessary. Instead, the line search algorithm generates a limited number of trial step lengths until it finds one that loosely approximates the minimum of ( 2.22 ).

In the trust region algorithmic strategy, the information gathered about J is used to construct a model function $m_n$ whose behaviour near the current point $u^n$ is similar to that of the actual objective function J. Because the model $m_n$ may not be a good approximation of J when u is far from $u^n$, we restrict the search for a minimizer of $m_n$ to some region around $u^n$. In other words, we find the candidate step $w^n$ by approximately solving the following sub-problem:

$$\min_{w} m_n(u^n + w), \quad \text{where } u^n + w \text{ lies inside the trust region}.$$

If the candidate solution does not produce a sufficient decrease in J, we conclude that the trust region is too large, so we shrink it and re-solve the sub-problem. Usually, the trust region is a ball defined by $\|w\| \le \Delta$, where $\Delta > 0$ is called the trust-region radius. Each time we decrease the size of the trust region after failure of a candidate iterate, the step from $u^n$ to the new candidate will be shorter, and it usually points in a different direction from the previous candidate, unlike in the line-search method.

In a sense, the line-search and the trust-region approaches differ in the order in which they choose the direction and the distance of the move to the next iterate. In the trust-region approach, we first choose a maximum distance ( the trust-region radius $\Delta_n$ ) and then seek a direction and a step that attain the best improvement possible subject to this distance constraint. In the line-search approach, we follow the opposite procedure.

In this Thesis, we occupy ourselves with line-search methods and especially with gradient algorithms, which we present in the sequel of this Chapter.
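To fix ideas about how a line search generates a limited number of trial step lengths, here is a minimal Python sketch ( added for illustration; the sufficient-decrease test used is the classical Armijo condition, and the constants c and shrink are ad hoc choices ):

import numpy as np

def backtracking_step(J, gradJ, u, w, mu0=1.0, c=1e-4, shrink=0.5):
    # Try mu0, mu0/2, mu0/4, ... until the Armijo sufficient-decrease test holds.
    mu, slope = mu0, gradJ(u) @ w          # slope < 0 for a descent direction
    while J(u + mu * w) > J(u) + c * mu * slope:
        mu *= shrink
    return mu

# usage on J(u) = ||u||^2, descending along -J'(u)
J = lambda u: u @ u
gradJ = lambda u: 2 * u
u = np.array([1.0, -2.0])
print(backtracking_step(J, gradJ, u, -gradJ(u)))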

    2.6.3 Gradient algorithms ( case without constraints )

Let us start by studying the practical solution of optimization problems in the absence of constraints. Let J be an $\alpha$-convex ( or even convex ) differentiable function defined over the real Hilbert space V. We consider the problem without constraints,

$$\inf_{y\in V} J(y). \qquad ( 2.23 )$$

From Theorem 2.2 there exists a unique solution u, characterized by the Euler equation $J'(u) = 0$.

    Gradient algorithm with optimal step

The gradient algorithm consists of moving from an iterate $u^n$ by following the line of greatest slope associated with the cost function J(y). The direction of descent corresponding to this line of greatest slope from $u^n$ is given by the gradient $J'(u^n)$. In effect, if we look for $u^{n+1}$ in the form,

$$u^{n+1} = u^n - \mu^n w^n, \qquad ( 2.24 )$$

with $\mu^n \ge 0$ small and $w^n$ a unit vector in V, it is with the choice of direction $w^n = J'(u^n)/\|J'(u^n)\|$ that we can hope to find the smallest value of $J(u^{n+1})$ ( in the absence of other information such as higher derivatives or previous iterates ).

This simple remark leads us, among the methods ( 2.24 ) which are called methods of descent, to the gradient algorithm with optimal step, in which we solve a succession of minimization problems of one real variable ( even if V is not finite dimensional ). Starting from an arbitrary $u^0$ in V, we construct a sequence $(u^n)$ defined by,

$$u^{n+1} = u^n - \mu^n J'(u^n), \qquad ( 2.25 )$$

where $\mu^n \in \mathbb{R}$ is chosen at each step such that,

$$J(u^{n+1}) = \inf_{\mu\in\mathbb{R}} J\big(u^n - \mu\, J'(u^n)\big). \qquad ( 2.26 )$$
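A minimal Python sketch of this algorithm ( added for illustration; it assumes that scipy is available to solve the one-dimensional problem ( 2.26 ) approximately ):

import numpy as np
from scipy.optimize import minimize_scalar

def gradient_optimal_step(J, gradJ, u0, tol=1e-8, itmax=200):
    u = np.asarray(u0, dtype=float)
    for _ in range(itmax):
        d = gradJ(u)
        if np.linalg.norm(d) < tol:        # Euler equation J'(u) = 0 reached (approximately)
            break
        # one-dimensional problem ( 2.26 ): minimize mu -> J(u - mu * J'(u))
        mu = minimize_scalar(lambda mu: J(u - mu * d)).x
        u = u - mu * d                     # update ( 2.25 )
    return u

# usage on the quadratic J(u) = u_1^2 + 10 u_2^2
J = lambda u: u[0]**2 + 10*u[1]**2
gradJ = lambda u: np.array([2*u[0], 20*u[1]])
print(gradient_optimal_step(J, gradJ, [3.0, 1.0]))   # approximately [0, 0]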

    Gradient algorithm with fixed step

The gradient algorithm with fixed step consists simply of the construction of a sequence $(u^n)$ defined by,

$$u^{n+1} = u^n - \mu\, J'(u^n), \qquad ( 2.27 )$$

where $\mu$ is a fixed positive parameter. This method is therefore simpler than the gradient algorithm with optimal step, since at each step we save the cost of calculating the solution of ( 2.26 ).

Remark 2.7: In order to choose between the two methods that we have just presented, we have to take into consideration the benefits and drawbacks of each one, and the specific problem to which we apply the method. Generally, the advantage of the gradient algorithm with optimal step lies in its rate of convergence. This can prove to be very important, especially in problems with a large amount of data. On the other hand, the gradient algorithm with fixed step is simpler ( as we mentioned above ) and costs less at each iteration. In our examples, we use a gradient algorithm with fixed step, mainly in favour of simplicity.


2.6.4 Gradient algorithms ( case with constraints )

We now study the solution of optimization problems with constraints,

$$\inf_{y \in K} J(y), \qquad ( 2.28 )$$

where J is an $\alpha$-convex ( or even convex ) differentiable function defined over K, a convex closed nonempty subset of the real Hilbert space V. Theorem 2.2 ensures the existence and uniqueness of the solution u of ( 2.28 ), characterized by the condition,

$$\langle J'(u), y - u\rangle \ge 0 \quad \forall\, y \in K. \qquad ( 2.29 )$$

Depending on the algorithms studied below, we will sometimes need to state supplementary hypotheses on the set K.

    Gradient algorithm with fixed step and projection

Theorem 2.8 ( projection over a convex set ): Let V be a Hilbert space and let $K \subset V$ be a convex closed nonempty subset. For all $x \in V$, there exists a unique $x_K \in K$ such that,

$$\|x - x_K\| = \min_{y \in K} \|x - y\|.$$

Equivalently, $x_K$ is characterized by the property,

$$x_K \in K, \qquad \langle x_K - x,\, x_K - y\rangle \le 0 \quad \forall\, y \in K. \qquad ( 2.30 )$$

We call $x_K$ the orthogonal projection over K of x.

The gradient algorithm with fixed step adapts to the problem ( 2.28 ) with constraints starting from the following remark. For all real $\mu > 0$, ( 2.29 ) can be written,

$$\langle u - (u - \mu J'(u)),\, y - u\rangle \ge 0 \quad \forall\, y \in K. \qquad ( 2.31 )$$

Let us denote by $P_K$ the projection operator over the convex set K, defined in Theorem 2.8. Then, ( 2.31 ) is none other than the characterization of u as the orthogonal projection over K of $u - \mu J'(u)$. In other words,

$$u = P_K\big(u - \mu J'(u)\big) \quad \forall\, \mu > 0. \qquad ( 2.32 )$$

It is easy to see that ( 2.32 ) is in fact equivalent to ( 2.29 ), and therefore characterizes the solution u of ( 2.28 ). The gradient algorithm with fixed step and projection ( or more simply the projected gradient algorithm ) is then defined by the iteration,

$$u^{n+1} = P_K\big(u^n - \mu J'(u^n)\big), \qquad ( 2.33 )$$

where $\mu$ is a fixed positive parameter.
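A minimal Python sketch of the projected gradient iteration ( 2.33 ) ( added for illustration; it assumes a set K whose projection $P_K$ is explicitly computable, here a box, for which $P_K$ reduces to componentwise clipping ):

import numpy as np

def projected_gradient(gradJ, project_K, u0, mu, tol=1e-8, itmax=10000):
    # iteration ( 2.33 ): u^{n+1} = P_K( u^n - mu * J'(u^n) )
    u = project_K(np.asarray(u0, dtype=float))
    for _ in range(itmax):
        u_new = project_K(u - mu * gradJ(u))
        if np.linalg.norm(u_new - u) < tol:
            break
        u = u_new
    return u

# usage: minimize J(u) = ||u - a||^2 over the box K = [0,1]^2 (projection = clipping)
a = np.array([2.0, -0.5])
gradJ = lambda u: 2 * (u - a)
P_K = lambda u: np.clip(u, 0.0, 1.0)
print(projected_gradient(gradJ, P_K, [0.5, 0.5], mu=0.1))   # approximately [1, 0]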


    2.6.5 Penalization of constraints

We conclude this subsection by briefly describing another way to approximate a minimization problem with constraints by a sequence of minimization problems without constraints: this is the procedure of penalization of constraints. We avoid talking here of a method or an algorithm, since penalization of the constraints is not, properly speaking, a method. The solution of the problems without constraints that we shall construct must be done with the help of the algorithms of Section 2.6.3. This solution can raise some difficulties, since the penalized problem ( 2.35 ) is often ill conditioned; for example, the new objective function may be less smooth than the initial objective and the constraints.

For simplicity, we shall take the case where $V = \mathbb{R}^N$, and we again consider the convex minimization problem,

$$\inf_{F(y) \le 0} J(y), \qquad ( 2.34 )$$

where J is a continuous convex function from $\mathbb{R}^N$ into $\mathbb{R}$ and F is a continuous convex function from $\mathbb{R}^N$ into $\mathbb{R}^M$.

For $\varepsilon > 0$, we then introduce the problem without constraints,

$$\inf_{y \in \mathbb{R}^N} \Big\{ J(y) + \frac{1}{\varepsilon} \sum_{i=1}^{M} \big[\max(F_i(y), 0)\big]^2 \Big\}, \qquad ( 2.35 )$$

where the constraints $F_i(y) \le 0$ are penalized. We can then state the following result, which shows that, for $\varepsilon$ small, the problem ( 2.35 ) approximates well the problem ( 2.34 ).

Proposition 2.5: We assume that J is continuous, strictly convex and infinite at infinity, that the functions $F_i$ are convex and continuous for $1 \le i \le M$, and that the set

$$K = \{ y \in \mathbb{R}^N,\ F_i(y) \le 0 \ \forall\, i \in \{1,\dots,M\} \}$$

is nonempty. Denoting by u the unique solution of ( 2.34 ) and, for $\varepsilon > 0$, by $u_\varepsilon$ the unique solution of ( 2.35 ), we then have,

$$\lim_{\varepsilon \to 0} u_\varepsilon = u.$$

Proof. See [1] ( pg 341-342 ).
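To illustrate Proposition 2.5 numerically ( a sketch added here, assuming Python with numpy; it combines the penalized problem ( 2.35 ) with the fixed-step gradient algorithm of Section 2.6.3, using the fact that the gradient of $[\max(F_i(y),0)]^2$ is $2\max(F_i(y),0)\,F_i'(y)$ ):

import numpy as np

def penalized_gradient(gradJ, F, gradF, u0, eps, mu, itmax=20000):
    # minimize J(y) + (1/eps) * sum_i [max(F_i(y),0)]^2 by a fixed-step gradient method
    u = np.asarray(u0, dtype=float)
    for _ in range(itmax):
        viol = np.maximum(F(u), 0.0)                       # active part of the constraints
        g = gradJ(u) + (2.0 / eps) * gradF(u).T @ viol     # gradient of the penalized objective
        u = u - mu * g
    return u

# usage: min y_1^2 + y_2^2 subject to 1 - y_1 - y_2 <= 0 (exact solution (1/2, 1/2))
gradJ = lambda y: 2 * y
F = lambda y: np.array([1.0 - y[0] - y[1]])
gradF = lambda y: np.array([[-1.0, -1.0]])                 # Jacobian of F (one row per constraint)
for eps in [1.0, 0.1, 0.01]:
    print(eps, penalized_gradient(gradJ, F, gradF, [0.0, 0.0], eps, mu=0.001))

For this test problem the exact solution of ( 2.34 ) is $u = (1/2, 1/2)$, and the printed solutions $u_\varepsilon$ approach it as $\varepsilon$ decreases, as Proposition 2.5 predicts.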

    Remark 2.8:There are also other methods of penalization, such as the barrier-methods, in

    which we introduce barrier functions in order to replace the constraints. The most well-known method from this category is the log-barrier method, which can prove to be extremely

    sensitive when we try to combine it with a gradient method. For this reason, we avoid the use

    of this method in our examples.


When we apply Newton's method to the minimization of a function J as explained above, it may be that the method converges to a maximum or a saddle point of J, and does not tend to a minimum, since it only looks for the zeros of $J'$.

Remark 2.10: A major drawback of Newton's method is the need to know the Hessian $J''(y)$ ( or the derivative matrix $F'(y)$ ). When the problem is very large or if J is not easily twice differentiable, we can modify Newton's method to avoid the calculation of this matrix $F'(y) = J''(y)$. The methods called quasi-Newton propose iteratively calculating an approximation $S^n$ of $(F'(u^n))^{-1}$. We replace the Newton iteration formula by:

$$u^{n+1} = u^n - S^n F(u^n) \quad \text{for } n \ge 0.$$

In general, we calculate $S^n$ by a recurrence formula of the type

$$S^{n+1} = S^n + C^n,$$

where $C^n$ is a matrix of rank 1 which depends on $u^{n+1}$, $u^n$, $F(u^{n+1})$, $F(u^n)$, chosen so that $S^n - (F'(u^n))^{-1}$ converges to 0.
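For completeness, a minimal Python sketch of the ( undamped ) Newton iteration for the Euler equation $J'(u) = 0$, assuming that the Hessian is available ( added only to fix ideas, since Newton methods are not used in our examples ):

import numpy as np

def newton(gradJ, hessJ, u0, tol=1e-10, itmax=50):
    # Newton iteration for J'(u) = 0: u^{n+1} = u^n - [J''(u^n)]^{-1} J'(u^n)
    u = np.asarray(u0, dtype=float)
    for _ in range(itmax):
        g = gradJ(u)
        if np.linalg.norm(g) < tol:
            break
        u = u - np.linalg.solve(hessJ(u), g)
    return u

# usage on J(u) = u_1^4 + u_2^2 (minimum at the origin)
gradJ = lambda u: np.array([4*u[0]**3, 2*u[1]])
hessJ = lambda u: np.array([[12*u[0]**2, 0.0], [0.0, 2.0]])
print(newton(gradJ, hessJ, [1.0, 1.0]))   # close to the origin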


    3. Shape Optimization ( [2] )

    3.1 Introduction

A problem of optimal design of structures, or shape optimization in mechanics, is defined by three data:

1. a model ( typically a partial differential equation ) which allows us to evaluate and analyse the mechanical behaviour of a structure,

2. a criterion which we seek to minimize or maximize ( possibly even several criteria ),

3. an admissible set of optimization variables, which takes into account the possible constraints imposed on the variables.

A problem of shape optimization is a problem where the optimization variables are the shapes of the structure themselves. Shape optimization is evidently essential in many applications, but it is clearly more complicated than traditional optimization, in which the variables are, for example, the mechanical properties of the materials.

Among the problems of shape optimization, we can distinguish three great categories, from the simplest to the most difficult:

1. parametric shape optimization, in which the shapes are parametrized by a reduced number of variables ( for example a thickness, a diameter, some dimensions ), which limits the variety of admissible shapes,

2. geometrical shape optimization, in which, starting from an initial shape, we allow variations of the boundary of the shape ( without, however, changing the topology of the shape, that is, in two dimensions, the number of connected components of the boundary or, more simply, the number of holes in the shape ),

3. topological shape optimization, in which we search, without any explicit or implicit restriction, for the best possible shape, free to change the topology.

The last type of optimization is certainly the most general but also the most difficult. Note that, while the definition of the topology of a shape is simple enough in two dimensions, it is clearly more complicated in 3-d, where what matters is not only the number of connected components of the boundary of the shape ( which allows us, for example, to distinguish a solid ball from a hollow shell ) but also, among other things, its number of handles or loops ( think of the difference between a ball, a torus, a pretzel, etc. ). In this Thesis we do not refer to topological optimization, and so we escape the technical complications that are necessary in order to define precisely this notion of topology.


The questions that can be posed for problems of shape optimization are the following:

1. theoretical questions on the existence, the uniqueness, or the qualitative properties of the solutions; we will say very little about these here, except when they have direct implications for the questions of numerical calculation,

2. optimality conditions, which are very important not only from the theoretical point of view, but also from the numerical point of view ( they are often at the base of numerical algorithms of gradient type ),

3. numerical calculation of the approximate optimal shape.

We shall principally concentrate on the goal of understanding the numerical algorithms of shape optimization ( which requires a good comprehension of the underlying theory ). We note that we favour a continuous approach to these problems, to the detriment of the discrete approach, which has its partisans but which camouflages a little the true stakes and the crucial points of the analysis. By the continuous approach, we mean that we suppose that the mechanical model is indeed a partial differential equation, which will oblige us to work with spaces of functions. In contrast, the

    discrete appr