NONLINEAR BOUNDARY CONDITIONS
IN SOBOLEV SPACES
THESIS
Presented to the Graduate Council of the
North Texas State University in Partial
Fulfillment of the Requirements
For the Degree of
DOCTOR OF PHILOSOPHY
By
Walter Brown Richardson Jr., B.S., M.S.
Denton, Texas
December, 1984
Richardson, Walter B., Nonlinear Boundary Conditions in Sobolev Spaces. Doctor of Philosophy (Mathematics), December, 1984, 65 pp., 2 tables, bibliography, 28 titles.

The method of dual steepest descent is used to solve ordinary differential equations with nonlinear boundary conditions. A general boundary condition is B(u) = 0, where B is a continuous functional on the nth order Sobolev space H^n[0,1]. If F: H^n[0,1] → L^2[0,1] represents a differential equation, define φ(u) = 1/2 ‖F(u)‖² and β(u) = 1/2 ‖B(u)‖². Steepest descent is applied to the functional ξ = φ + β. Two special cases are considered. If f: ℝ² → ℝ is C^(2), a Type I boundary condition is defined by B(u) = f(u(0), u(1)). Given K: [0,1]×ℝ → ℝ and g: [0,1] → ℝ of bounded variation, a Type II boundary condition is B(u) = ∫₀¹ K(x, u(x)) dg(x).

The following results are proved. A close relationship is established between the two best known requirements for convergence of general steepest descent: Rosenbloom's strict convexity condition and Neuberger's gradient inequality. If φ is strictly convex, then Neuberger's inequality holds for γ = 1/2 ‖∇φ‖². In dual steepest descent, if the gradient inequality holds for φ and β, and the cosine of the angle between ∇φ and ∇β is bounded away from −1, then the gradient inequality holds for ξ. For a Type I boundary condition, the gradient inequality is equivalent to ‖∇f‖ being bounded away from 0. An explicit form for ∇β(u) = B′(u)*B(u), where B is a Type II boundary condition, is given. As a corollary, the least constant for Sobolev's imbedding inequality, ‖f‖_∞ ≤ C_n·‖f‖_{n,2}, is shown to be

    C_n = { (2/(n+1)) · Σ_{k=1}^{n} sin³(kθ)/tanh(sin(kθ)) }^{1/2} ,  where θ = π/(n+1) .

Finally, dual steepest descent is implemented in a FORTRAN program and used to solve the test problem y′ = y, y(0) = y(1)².
TABLE OF CONTENTS

LIST OF TABLES ......... iv

Chapter
    I. INTRODUCTION ......... 5
   II. DUAL STEEPEST DESCENT ......... 16
  III. TYPE II NONLINEAR BOUNDARY CONDITIONS ......... 23
   IV. NUMERICAL SOLUTIONS OF SECOND ORDER PROBLEMS WITH
       NONLINEAR BOUNDARY CONDITIONS ......... 37
APPENDIX ......... 57
BIBLIOGRAPHY ......... 63

LIST OF TABLES

Table
    I. Best Constant for the Sobolev Imbedding
       Inequality on H^n[0,1] ......... 50
   II. Output from FH2.CGR ......... 51
CHAPTER I
INTRODUCTION
Differential equations, both ordinary and partial, in
which the solution is required to satisfy certain boundary
conditions occur in such diverse areas as mechanics,
acoustics, and electrical engineering. Taking second order
ordinary boundary-value problems as an example, the linear
case has been exhaustively studied over the past 200 years.
Much work has been done over the past forty years in
developing general methods of attacking the case in which
the differential equation is nonlinear. However, almost
without exception, the work has involved linear boundary
conditions of the form
    A₁·y(a) + A₂·y′(a) = 0
    B₁·y(b) + B₂·y′(b) = 0 .
Very little is known about such problems when the boundary
conditions are nonlinear.
Mawhin (6, 3) has proposed the following problem to describe nonlinear vibrations: find a y in H²[0,1] such that

    y″(x) = − y′(x)²·y(x) / (1 + y(x)²)   for x in [0,1]

    y′(0)² − y(0)² = 0   and   y(1)·|y(1) − y′(1)| = 0 .
An example of a boundary-value problem which arises in the
study of electrical network theory is given by Morosanu (7).
Given physical constants L > 0, R ≥ 0, C > 0, G ≥ 0, R₀ > 0, and C₀ > 0; a nonlinear resistance f₀ which is maximal monotone in ℝ × ℝ; and a voltage e per unit length impressed along the line in series with it; find the current u flowing in the line and the voltage v across the line, so that

    L·∂u/∂t + ∂v/∂x + R·u + e(t,x) = 0
    C·∂v/∂t + ∂u/∂x + G·v = 0 ,   for 0 < x < 1 and t > 0
    u(0,x) = u₀(x) ;  v(0,x) = v₀(x) ;  0 ≤ x ≤ 1
    −u(t,1) = C₀·dv(t,1)/dt + f₀(v(t,1)) ,   t > 0 .
Historically, in deriving governing differential equations
from physical principles, nonlinearities were neglected
because they were difficult to handle. One wonders if many
boundary conditions should actually contain nonlinear terms
in order to give a more accurate picture of the physics.
One seldom encounters a definition of boundary
conditions - rather a group of example boundary-value
problems is presented to give a feel for what such conditions are. Taking as our setting second order problems, a linear boundary condition for the differential equation F(u) = 0, where F: H²[0,1] → L², is defined to be one of the form B(u) = 0, where B is a bounded linear functional on H²[0,1]. This includes standard textbook examples, interior boundary conditions, and still more general ones of the form B(u) = ∫₀¹ u dg. A nonlinear boundary condition is B(u) = 0, where B is a continuous functional on H²[0,1], often assumed to be C^(2). For Mawhin's problem mentioned earlier,

    B(u) = 1/2·|u′(0)² − u(0)²| + 1/2·|u(1)·|u(1) − u′(1)|| .
Perhaps the essence of the definition lies in the fact that
the range of B is a subset of the reals, rather than that
B(u) involves only the values of u at the topological
boundary of the domain of u.
There are many methods for handling nonlinear problems:
fixed-point theory, generalized inverse function theorems,
homotopy and degree theory, the Newton-Kantorovich method,
and the method of steepest descent. The last two are
particularly attractive because they yield not only valuable
theoretical information, but also numerical results when
implemented on a computer. Steepest descent dates back to
Cauchy (2), and although not used extensively on
minimization problems because of its slow convergence, the
method is very robust compared to Newton's method, which may
break down completely. Variations of steepest descent are
called general gradient methods and have found increasing
favor in numerical analysis. Recently, Neuberger (8, 9, 10, 11, 12) has used steepest descent in a Sobolev space setting to solve differential equations with linear initial and boundary conditions.
One of the chief appeals of steepest descent is its simplicity. Suppose each of X and Y is a Hilbert space and F: X → Y is C^(2); to use steepest descent to solve the equation F(x) = 0, one defines φ: X → ℝ by φ(x) = 1/2 ‖F(x)‖²_Y. A zero of φ is a solution of the original equation, and a minimum of φ would be a "least squares" solution to the equation. In a differential equation setting φ is a general variational principle applicable to many problems where there is no classical variational principle. At the point x₀ in X, the negative gradient of φ points in the direction to move for the fastest instantaneous decrease of the function φ. The fundamental existence and uniqueness theorem for ordinary differential equations guarantees the existence of z: [0,∞) → X such that z(0) = x₀ and z′(t) = −∇φ(z(t)). One tries to find conditions that insure that lim_{t→∞} z(t) = u exists and that u is a local minimum of φ. Two such conditions are the convexity condition of P. C. Rosenbloom and the gradient inequality of Neuberger. A theorem in Chapter II shows that if a functional φ satisfies Rosenbloom's condition then γ(x) = 1/2 ‖∇φ(x)‖² satisfies Neuberger's gradient inequality.
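The continuous steepest descent just described is easy to approximate by its discrete (Euler) analogue. The following sketch, a modern Python illustration rather than anything from the thesis, applies discrete steepest descent to a small linear least-squares example; the sample F and all function names are ours.

```python
import numpy as np

def steepest_descent(grad_phi, x0, step=0.1, tol=1e-10, max_iter=10000):
    """Discrete steepest descent for phi(x) = 1/2 ||F(x)||^2.

    Euler's method applied to z'(t) = -grad phi(z(t)), z(0) = x0.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_phi(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - step * g
    return x

# Example: F(x) = A x - b, so phi(x) = 1/2 ||A x - b||^2 and
# grad phi(x) = A^T (A x - b).
A = np.array([[2.0, 0.0], [0.0, 1.0]])
b = np.array([2.0, 3.0])
F = lambda x: A @ x - b
grad = lambda x: A.T @ (A @ x - b)
u = steepest_descent(grad, np.zeros(2))
# u is a zero of F, i.e. the solution of A x = b.
```

Here the trajectory z converges because φ is convex; the two convergence conditions discussed next relax and generalize this situation.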
A detailed description of using steepest descent with the Sobolev inner product to solve the test problem y′ = y, y(0) = 1 on [0,1] is given by Neuberger in (11). Briefly, letting X = {y in H¹[0,1]: y(0) = 0}, Y = L²[0,1], and defining F: X → Y by F(x) = (x+1)′ − (x+1), one has that if z is a zero of F, then y = z + 1 is a solution to the original problem. A discrete steepest descent is implemented to find a numerical solution and, most importantly, the rate of convergence of the finite difference scheme is shown to be far superior to that of the same process using the Euclidean inner product.
Suppose the problem is complicated by introducing the simplest nonlinear boundary condition: y′ = y, y(0) = y(1)². There are several ways one might attack the new problem. Define F: H¹[0,1] → L²[0,1] and B: H¹[0,1] → ℝ by F(x) = x′ − x and B(x) = x(0) − x(1)². Neuberger in (8) uses a scheme which is based on steepest descent for the functional φ(x) = 1/2 ‖F(x)‖² and employs the family of projection operators {P_x}_{x in H¹[0,1]}, where P_x is the orthogonal projection onto the null space of B′(x). This approach is closely related to the gradient-projection method used in nonlinear optimization (14, 15, 19). It has the property that, both in the continuous and discrete steepest descent processes, one always stays in the set of all functions which satisfy the boundary conditions.
We shall use another tack and attempt to solve the
problem by applying steepest descent simultaneously to the
differential equation and to the boundary condition part of
the problem. The name penalty function is given to this
method when applied to the constrained optimization of a
real-valued function. However, the idea does not seem to have found its way into the area of differential equations with nonlinear boundary conditions. Here convergence criteria would be harder to come by, because of the infinite dimensional setting and resulting loss of local compactness. Given F: X → Y and B: X → ℝ, both C^(2), we seek an x in X such that F(x) = 0 and B(x) = 0. Define φ: X → ℝ by φ(x) = 1/2 ‖F(x)‖² and β: X → ℝ by β(x) = 1/2 ‖B(x)‖². These are measures of how close an element of X comes to satisfying the differential equation and the boundary condition, respectively. If G: X → Y×ℝ is given by G(x) = (F(x), B(x)) and

    ξ(x) = 1/2 ‖G(x)‖²_{Y×ℝ} = 1/2 ‖F(x)‖² + 1/2 ‖B(x)‖² = φ(x) + β(x) ,

then steepest descent can be used. x is a solution to the pair of equations only in case it is a zero of the functional ξ.
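In finite dimensions the construction is transparent. The toy Python sketch below is our illustration: F and B here are arbitrary sample functionals on ℝ², not differential operators, and the iteration simply follows −∇ξ to a common zero of F and B.

```python
import numpy as np

# phi(x) = 1/2 F(x)^2, beta(x) = 1/2 B(x)^2, xi = phi + beta.
F = lambda x: x[0] - x[1]            # stands in for the equation part
B = lambda x: x[0] + x[1] - 2.0      # stands in for the boundary part

def grad_xi(x):
    # grad phi = F(x) * grad F and grad beta = B(x) * grad B
    return F(x) * np.array([1.0, -1.0]) + B(x) * np.array([1.0, 1.0])

x = np.array([4.0, -3.0])
for _ in range(500):
    x = x - 0.2 * grad_xi(x)

# x converges to (1, 1), the unique common zero of F and B,
# which is also the unique zero of xi.
```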
A few comments about this proposed method are in order. First, although it will work on a problem with linear boundary conditions, it clearly will not be as efficient as steepest descent applied to the subspace of all functions satisfying the boundary conditions, or as classical algorithms designed to solve a specific class of problems. The method will come into its own when the boundary conditions are nonlinear. While not deprecating the importance of, say, fast Poisson solvers, there is a great need for a widely applicable, type-independent method which handles general boundary conditions and in which there is a close relationship between theoretical and computational aspects.
Second, when the boundary conditions of the above example are written as y(0) = y(1)², it is easy to find an initial guess that satisfies them and then begin the projection method, never leaving the set K of all functions f such that f(0) = f(1)². However, if the problem were posed in an H²[0,1] setting, the boundary conditions might have been written equivalently as y(0) = y′(1)², and it is no longer as easy to find an initial guess. Also, one stays in a different set K of functions. Lastly, in partial differential equations one often encounters a situation in
which it is not known a priori from physical or theoretical
considerations what a "natural" set of boundary conditions
is. In this case, repeated trials of the dual steepest
descent using different choices for the functional B should
yield valuable information about what boundary conditions
are feasible.
This paper investigates two specific forms of nonlinear boundary conditions. The natural setting for a first order problem is H¹; suppose f: ℝ² → ℝ is a given function and define B: H¹[0,1] → ℝ by B(u) = f(u(0), u(1)). This is the most general nonlinear boundary condition which involves only u(0) and u(1) and is still manageable. A sufficient condition for the gradient inequality to hold for this boundary condition is given in Chapter II. The second type of condition uses the values of u everywhere in [0,1], although the nonlinearities are milder. Given g: [0,1] → ℝ and K: [0,1]×ℝ → ℝ, define B: H^n[0,1] → ℝ by B(u) = ∫₀¹ K(x, u(x)) dg(x). This includes conditions such as u(0) = u(1)² and, in fact, any problem in which the nonlinearities occur pointwise, but encounters difficulties with u(0)·u(1) + u(1/2)² = 0, because of the first term.
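For a step-function integrator g, the Stieltjes integral defining a Type II condition is just a sum over the jumps of g. The Python fragment below (our illustration) makes this concrete for the kernel K(a, b) = (1−a)·b − a·b², which as shown later in Chapter III encodes the sample condition u(0) = u(1)².

```python
import numpy as np

# Type II boundary condition B(u) = int_0^1 K(x, u(x)) dg(x), where
# g has unit jumps at x = 0 and x = 1 and is constant in between.
K = lambda a, b: (1.0 - a) * b - a * b**2

def B(u):
    # A Stieltjes integral against a pure jump function reduces to a
    # sum of kernel values weighted by the jumps of g.
    jumps = [(0.0, 1.0), (1.0, 1.0)]          # (location, jump size)
    return sum(w * K(x, u(x)) for x, w in jumps)

# K(0, u(0)) = u(0) and K(1, u(1)) = -u(1)^2, so B(u) = u(0) - u(1)^2.
u = lambda x: np.exp(x - 2.0)     # solves y' = y with y(0) = y(1)^2
```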
An interesting spinoff of the above work relates to one of the large class of inequalities in analysis which go under the name of Sobolev. These state that the norm of a function in one space is dominated by a constant times the norm of its derivative in a second space. The existence of such constants dates back to Sobolev (17) and is critical to the development of theoretical partial differential equations (1). Within the last ten years several articles have been written which address the question of what is the best or least constant that can be used in a particular Sobolev inequality. Talenti (18), Rosen (13), and Lieb (4) seek the least c such that ‖u‖_q ≤ c·‖u′‖_p, where 1 < p, q < ∞ and domain(u) = ℝⁿ. Using variational methods, Marti (5) found the best c for the inequality ‖u‖_∞ ≤ c·‖u‖_{1,2} and gave numerical results for the higher order Sobolev spaces H^n[0,1], where n = 2,…,6. As a corollary to a theorem in Chapter III we show that the least c such that for all u in H^n[0,1], ‖u‖_∞ ≤ c·‖u‖_{n,2}, is given by

    C_n = { (2/(n+1)) · Σ_{k=1}^{n} sin³(kθ)/tanh(sin(kθ)) }^{1/2} ,  where θ = π/(n+1) ,

and require none of the machinery of the calculus of variations.
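The constant is easy to evaluate numerically; the short Python sketch below (the function name is ours) implements the displayed formula. For n = 1 it recovers Marti's value √coth(1) ≈ 1.1458 for H¹[0,1].

```python
import numpy as np

def best_constant(n):
    """Least C with ||u||_inf <= C ||u||_{n,2} on H^n[0,1], per the
    formula above: C_n = sqrt((2/(n+1)) sum sin^3(k θ)/tanh(sin(k θ)))."""
    theta = np.pi / (n + 1)
    s = np.sin(np.arange(1, n + 1) * theta)
    return float(np.sqrt(2.0 / (n + 1) * np.sum(s**3 / np.tanh(s))))

# best_constant(1) equals sqrt(coth(1)); the constants decrease with n,
# as the H^n norm grows stronger.
```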
Chapter IV describes how the dual steepest descent can be implemented on a computer. Given F: ℝ⁴ → ℝ and B: ℝ⁴ → ℝ, we seek a member u of H²[a,b] such that for t in [a,b], F(t, u(t), u′(t), u″(t)) = 0 and B(u(a), u(b), u′(a), u′(b)) = 0. A finite-difference scheme is used to reduce the problem to solving a set of nonlinear equations by steepest descent. Much improved convergence over traditional steepest descent is obtained by employing the Sobolev rather than the Euclidean inner product and by using a nonlinear version of Hestenes' conjugate-gradient algorithm. Finally, the code of a FORTRAN program, FH2.CGR, which uses the above ideas is discussed, as is the output from the test problem y′ = y, y(0) = y(1)².
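The FORTRAN program FH2.CGR is not reproduced here, but the shape of the computation can be suggested in a short modern sketch. The Python code below is ours: it uses plain Sobolev-preconditioned gradient descent with a crude backtracking step, rather than the Hestenes conjugate-gradient algorithm of the thesis, to solve the first order test problem y′ = y, y(0) = y(1)² on a coarse grid.

```python
import numpy as np

n = 10                                   # grid intervals on [0, 1]
h = 1.0 / n
t = np.linspace(0.0, 1.0, n + 1)

def F(y):                                # midpoint residuals of y' - y
    return (y[1:] - y[:-1]) / h - (y[1:] + y[:-1]) / 2.0

def Bc(y):                               # boundary condition y(0) = y(1)^2
    return y[0] - y[-1] ** 2

def xi(y):                               # xi = phi + beta
    return 0.5 * h * np.sum(F(y) ** 2) + 0.5 * Bc(y) ** 2

def grad_xi(y):
    r, g = F(y), np.zeros_like(y)
    g[:-1] += h * r * (-1.0 / h - 0.5)   # d r_i / d y_i
    g[1:] += h * r * (1.0 / h - 0.5)     # d r_i / d y_{i+1}
    b = Bc(y)
    g[0] += b
    g[-1] += b * (-2.0 * y[-1])
    return g

# H^1 Gram matrix used to precondition (the Sobolev inner product).
D = (np.eye(n + 1, k=1)[:-1] - np.eye(n + 1)[:-1]) / h
S = h * (np.eye(n + 1) + D.T @ D)

y = 0.1 + 0.3 * t                        # rough initial guess
step = 0.5
for _ in range(20000):
    d = np.linalg.solve(S, grad_xi(y))   # Sobolev gradient of xi
    while xi(y - step * d) > xi(y):      # crude backtracking safeguard
        step *= 0.5
    y = y - step * d

# y approximates e^(t-2), the nontrivial solution of the test problem:
# y(0) = e^(-2) = (e^(-1))^2 = y(1)^2.
```

With the Euclidean inner product (S replaced by the identity) the same iteration crawls, which is the improvement Chapter IV quantifies.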
CHAPTER BIBLIOGRAPHY
1. Brezis, Haim, and Nirenberg, L., "Positive Solutions of Nonlinear Elliptic Equations Involving Critical Sobolev Exponents," Communications on Pure and Applied Mathematics, XXXVI (July, 1983), 437-477.
2. Cauchy, A., "Méthode générale pour la résolution des systèmes d'équations simultanées," Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, 25 (1847), 536-538.
3. D'Hondt, T., "The Approximate Solutions of Differential Systems with Nonlinear Boundary Conditions," Zeitschrift für Angewandte Mathematik und Mechanik, (July, 1978), 423-425.
4. Lieb, Elliott, "Sharp Constants in the Hardy-Littlewood-Sobolev and Related Inequalities," Annals of Mathematics, 118 (September, 1983), 349-374.
5. Marti, J. T., "Evaluation of the Least Constant in Sobolev's Inequality for H¹(0,s)," SIAM Journal on Numerical Analysis, 20 (December, 1983), 1239-1242.
6. Mawhin, Jean, "Recent Trends in Nonlinear Boundary Value Problems," Proceedings of the Seventh International Conference on Nonlinear Oscillations, (Berlin, September, 1975), 51-70.
7. Morosanu, G., "On a Class of Nonlinear Differential Hyperbolic Systems with Non-local Boundary Conditions," Journal of Differential Equations, 43 (March, 1982), 345-368.
8. Neuberger, J. W., "Gradient Inequalities and Steepest Descent for Systems of Partial Differential Equations," Proceedings of the Conference on Numerical Analysis, Texas Tech University, 1983.
9. _______, "Some Global Steepest Descent Results for Nonlinear Systems," Trends in Theory and Practice of Nonlinear Differential Equations, edited by V. Lakshmikantham, New York, Marcel Dekker, 1984, pp. 413-418.
10. _______, "Steepest Descent and Time-Stepping for the Numerical Solution of Conservation Equations," Proceedings of the Conference on Physical Math and Nonlinear Differential Equations, Marcel Dekker, (to appear).
11. _______, "Steepest Descent for General Systems of Linear Differential Equations in Hilbert Space," Proceedings of a Symposium held at Dundee, Scotland, March-July 1982, Lecture Notes in Mathematics, Vol. 1032, New York, Springer-Verlag, 1983, 390-406.
12. _______, "Steepest Descent for Systems of Nonlinear Partial Differential Equations," Oak Ridge National Laboratory CSD/TM-161, (1981).
13. Rosen, G., "Minimum Value for c in the Sobolev Inequality ‖φ³‖ ≤ c·‖∇φ‖³," SIAM Journal on Applied Mathematics, 21 (July, 1971), 30-32.
14. Rosen, J. B., "The Gradient Projection Method for Nonlinear Programming, Part I, Linear Constraints," Journal of the Society for Industrial and Applied Mathematics, VIII (March, 1960), 181-217.
15. Rosen, J. B., "The Gradient Projection Method for Nonlinear Programming, Part II, Nonlinear Constraints," Journal of the Society for Industrial and Applied Mathematics, IX (December, 1961), 514-532.
16. Rosenbloom, P. C., "The Method of Steepest Descent," Symposia in Applied Mathematics, Providence, Rhode Island, American Mathematical Society, 1956, 127-176.
17. Sobolev, S. L., "On a Theorem of Functional Analysis," Matematicheskii Sbornik, 4 (1938), 471-479; American Mathematical Society Translations, Series 2, 34 (1963), 39-68.
18. Talenti, Giorgio, "Best Constant in Sobolev Inequality," Annali di Matematica Pura ed Applicata, Series 4, Vol. 110, (1976), 353-372.
19. Wismer, David, and Chattergy, R., Introduction to Nonlinear Optimization, New York, North-Holland, 1978.
CHAPTER II
DUAL STEEPEST DESCENT
From a theoretical standpoint the most difficult aspect of steepest descent is finding conditions for convergence. In an excellent survey article, P. C. Rosenbloom (6, p. 128) establishes the following fact. If X is a Hilbert space, φ: X → ℝ is C^(2), z(0) = x₀ and z′(t) = −∇φ(z(t)) for t ≥ 0, and there is an A > 0 such that for all x, y in X, ⟨H(x)y, y⟩ ≥ A·‖y‖², where H denotes the Hessian of φ, then lim_{t→∞} z(t) = u exists and is the unique minimum of φ. If φ arises from linear least squares, i.e., φ(x) = 1/2 ‖F(x)‖² for some positive self-adjoint F, then φ will satisfy Rosenbloom's condition. Neuberger (2, 3, 4) gives another sufficient condition for convergence of the steepest descent trajectory: if Ω is an open subset of X, z(0) = x₀ and z′(t) = −∇φ(z(t)), Range(z) ⊂ Ω, and there exists a C > 0 such that for all x in Ω, ‖∇φ(x)‖ ≥ C·‖F(x)‖, then lim_{t→∞} z(t) = u exists and F(u) = 0. The following theorem connects the two conditions.
Theorem I. If φ: X → ℝ is C^(2), a is a positive number so that for all x, y in X, ⟨H_φ(x)y, y⟩ ≥ a·‖y‖², and for x in X, γ(x) = 1/2 ‖∇φ(x)‖², then γ satisfies Neuberger's gradient inequality; i.e., there is a positive C so that ‖∇γ(x)‖ ≥ C·‖∇φ(x)‖ for all x in X.
Proof: We use primes to denote Fréchet derivatives and H_φ(x) to denote the Hessian of φ at x. Let each of x, h, and k be in X. Then

    ⟨(∇φ)′(x)h, k⟩ ≈ ⟨(∇φ)(x+h), k⟩ − ⟨(∇φ)(x), k⟩
                   = φ′(x+h)k − φ′(x)k
                   ≈ φ″(x)(k, h)
                   = ⟨H_φ(x)k, h⟩ .

So γ′(x)h = ⟨(∇φ)′(x)h, ∇φ(x)⟩ = ⟨H_φ(x)(∇φ(x)), h⟩ and ∇γ(x) = H_φ(x)(∇φ(x)). Now ⟨H_φ(x)y, y⟩ ≥ a·‖y‖² just says H_φ(x) ≥ a·I, which implies H_φ(x)² ≥ a²·I. Hence

    ‖∇γ(x)‖² = ⟨H_φ(x)(∇φ(x)), H_φ(x)(∇φ(x))⟩
             ≥ a²·‖∇φ(x)‖²
             = 2·a²·γ(x) .

Taking square roots we have the desired inequality.
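For a quadratic φ the proof's key identity ∇γ(x) = H_φ(x)(∇φ(x)) can be checked directly. The Python fragment below is our illustration, with an arbitrary sample Hessian: it samples random points and records the ratio ‖∇γ‖/‖∇φ‖, which by the theorem should never fall below the convexity constant a.

```python
import numpy as np

# For the quadratic phi(x) = 1/2 x.Hx, grad phi(x) = Hx and, as in the
# proof, grad gamma(x) = H_phi(x)(grad phi(x)) = H(Hx), so
#   ||grad gamma(x)|| >= a ||grad phi(x)||  with  H >= a I.
H = np.array([[3.0, 1.0], [1.0, 2.0]])          # sample Hessian
a = float(np.min(np.linalg.eigvalsh(H)))        # a = (5 - sqrt(5))/2 > 0

rng = np.random.default_rng(0)
ratios = []
for _ in range(100):
    x = rng.standard_normal(2)
    gphi = H @ x                                # grad phi at x
    ggam = H @ gphi                             # grad gamma at x
    ratios.append(np.linalg.norm(ggam) / np.linalg.norm(gphi))
```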
Both Rosenbloom's and Neuberger's conditions for convergence of the steepest descent process are far from necessary. Work remains to be done to find large classes of problems for which they would give existence results not already obtained by other methods. Also, their connection with the well known Palais-Smale (5; 1, p. 353) property should be investigated.
In the dual steepest descent process, it is natural to ask: if the gradient inequality applies to both of the functions φ and β, will it apply to ξ? Not without some additional condition, for example, that the angle between the vectors ∇φ(x) and ∇β(x) be bounded away from π.

Theorem II. If Ω is an open subset of X, x ∈ Ω, and
(1) Range(z) ⊂ Ω, where z(0) = x and z′(t) = −∇ξ(z(t));
(2) there is a c > 0 and a d > 0 such that for all y in Ω, ‖∇φ(y)‖ ≥ c·‖F(y)‖ and ‖∇β(y)‖ ≥ d·‖B(y)‖;
(3) there is an α in (−1, 1] such that for all y in Ω,

    ⟨∇φ(y), ∇β(y)⟩ / ( ‖∇φ(y)‖·‖∇β(y)‖ ) ≥ α ;

then lim_{t→∞} z(t) = u exists and F(u) = 0 = B(u).

Proof: Clearly, we may assume α ∈ (−1, 0).

    ‖∇ξ(y)‖² = ‖∇φ(y) + ∇β(y)‖²
             = ‖∇φ(y)‖² + 2·⟨∇φ(y), ∇β(y)⟩ + ‖∇β(y)‖²
             ≥ ‖∇φ(y)‖² + 2·α·‖∇φ(y)‖·‖∇β(y)‖ + ‖∇β(y)‖²
             = −α·{ ‖∇φ(y)‖² − 2·‖∇φ(y)‖·‖∇β(y)‖ + ‖∇β(y)‖² }
               + (1+α)·{ ‖∇φ(y)‖² + ‖∇β(y)‖² }
             ≥ (1+α)·{ ‖∇φ(y)‖² + ‖∇β(y)‖² }
             ≥ (1+α)·min(c², d²)·{ ‖F(y)‖² + ‖B(y)‖² }
             = (1+α)·min(c², d²)·‖G(y)‖² .

Letting K = √(1+α)·min(c, d), we have a constant K such that for all y in Ω, ‖∇ξ(y)‖ ≥ K·‖G(y)‖, and Neuberger's theorem assures that lim_{t→∞} z(t) = u exists and satisfies G(u) = 0.
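The inequality chain in the proof is elementary enough to test on random vectors. The Python fragment below (ours) checks that whenever the cosine of the angle between two vectors is at least α, with α in (−1, 0), then ‖u + v‖² ≥ (1+α)·(‖u‖² + ‖v‖²), which is exactly the step linking hypothesis (3) to the gradient inequality for ξ.

```python
import numpy as np

rng = np.random.default_rng(1)
alpha = -0.5                           # sample value in (-1, 0)
tested, violations = 0, 0
for _ in range(1000):
    u = rng.standard_normal(3)
    v = rng.standard_normal(3)
    cos = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
    if cos < alpha:
        continue                       # hypothesis (3) fails; skip
    tested += 1
    lhs = np.linalg.norm(u + v) ** 2
    rhs = (1 + alpha) * (np.linalg.norm(u) ** 2 + np.linalg.norm(v) ** 2)
    if lhs < rhs - 1e-9:
        violations += 1
```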
For first order problems, nonlinear boundary conditions which involve only the values of the unknown function at the endpoints of the interval can be put in the following general framework. Given f: ℝ² → ℝ, C^(2), define a Type I boundary condition on H^n[0,1] to be a functional B(y) = f(y(0), y(1)); f is said to generate B. To investigate convergence of the boundary part of the steepest descent process we need to compute ∇β(x) = B′(x)*B(x). A simple argument shows

    B′(x)y = f₁(x(0), x(1))·y(0) + f₂(x(0), x(1))·y(1) ,
and to compute the adjoint, we seek (g, h) in ℝ × H¹ such that for all y in H¹,

    ⟨y, h⟩_{H¹} = ⟨B′(x)y, g⟩ ,  i.e., so that

    ∫₀¹ y·h + y′·h′ = { f₁(P_x)·y(0) + f₂(P_x)·y(1) }·g ,

where P_x = (x(0), x(1)). Supposing for a moment that h has a second derivative, we can use integration by parts to write the above as

    0 = ∫₀¹ y·(h − h″) + y(1)·{ h′(1) − f₂(P_x)·g } − y(0)·{ h′(0) + f₁(P_x)·g } ,

which leads to the boundary value problem: find h in H¹[0,1] so that

    h″ = h
    h′(1) = f₂(P_x)·g   and   h′(0) = −f₁(P_x)·g .

The solution is

    h(t) = (g / sinh(1))·{ f₁(P_x)·cosh(1−t) + f₂(P_x)·cosh(t) } ,

and the following theorem can now be proved.
Theorem III. Let B be a Type I boundary condition on H¹[0,1] with generating function f: ℝ² → ℝ, f C^(2) and not identically 0. The following statements are equivalent:
(1) There is a d > 0 such that for (a, b) in ℝ², ‖∇f(a, b)‖ ≥ d.
(2) There is a c > 0 such that for all x in H¹[0,1], ‖∇β(x)‖ ≥ c·‖B(x)‖.

Proof:

    ‖∇β(x)‖²_{H¹} = ‖B′(x)*B(x)‖²_{H¹}
                  = |B(x)|²·(1/sinh(1))·⟨A·∇f(P_x), ∇f(P_x)⟩_{ℝ²} ,

where

    A = [ cosh(1)   1
          1         cosh(1) ]

is a positive definite matrix with lower bound m = cosh(1) − 1 and upper bound M = cosh(1) + 1.
(1) → (2): Let x ∈ H¹. Then

    ‖B′(x)*B(x)‖²_{H¹} = (1/sinh(1))·⟨A·∇f(P_x), ∇f(P_x)⟩·|B(x)|²
                       ≥ (m/sinh(1))·‖∇f(P_x)‖²·|B(x)|²
                       ≥ (m/sinh(1))·d²·|B(x)|² ,

and there exists a global c for B.
(2) → (1): Let G = {(a, b) ∈ ℝ²: f(a, b) ≠ 0} and (a, b) ∈ G. For t in [0,1], define x(t) = (1−t)·a + t·b. Now x ∈ H¹[0,1] and

    (M/sinh(1))·|B(x)|²·‖∇f(P_x)‖² ≥ ‖∇β(x)‖² ≥ c²·|B(x)|² .

Since B(x) ≠ 0, ‖∇f(P_x)‖² ≥ c²·sinh(1)/M, and letting d be the square root of the right hand quantity, we have that for all (a, b) in G, ‖∇f(a, b)‖ ≥ d. To show that the inequality must hold on all of ℝ², let S be the interior of the complement of G and suppose that S is nonempty. Since ℝ² is connected, there is a limit point p of S in the complement of S. Let {q_k} be a sequence of points of S and {r_k} be a sequence of points of G, both of which converge to p. Since q_k belongs to the open set S on which f is zero, ∇f(q_k) = 0, and since the gradient is continuous, ∇f(p) = 0. However, r_k in G implies ‖∇f(r_k)‖ ≥ d and therefore ‖∇f(p)‖ ≥ d > 0. The contradiction shows that G is dense in ℝ².
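The closed form for ‖B′(x)*B(x)‖² that drives Theorem III can be verified numerically. The Python fragment below is our check, with arbitrary sample values standing in for f₁(P_x), f₂(P_x), and the scalar g: it integrates ‖h‖²_{H¹} by the trapezoid rule and compares it with (g²/sinh 1)·⟨A∇f, ∇f⟩.

```python
import numpy as np

# h(t) = (g/sinh 1){ f1 cosh(1-t) + f2 cosh(t) } is the representer
# computed above; the claim is ||h||_{H^1}^2 = (g^2/sinh 1) <A v, v>
# with v = (f1, f2) and A = [[cosh 1, 1], [1, cosh 1]].
f1, f2, g = 0.7, -1.3, 2.0            # arbitrary sample values
s1 = np.sinh(1.0)

t = np.linspace(0.0, 1.0, 20001)
h = g / s1 * (f1 * np.cosh(1.0 - t) + f2 * np.cosh(t))
hp = g / s1 * (-f1 * np.sinh(1.0 - t) + f2 * np.sinh(t))   # h'

integrand = h**2 + hp**2               # H^1 integrand
norm2 = float(np.sum((integrand[1:] + integrand[:-1]) / 2.0) * (t[1] - t[0]))

Amat = np.array([[np.cosh(1.0), 1.0], [1.0, np.cosh(1.0)]])
v = np.array([f1, f2])
closed = g**2 / s1 * (v @ Amat @ v)
# norm2 and closed agree to quadrature accuracy.
```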
CHAPTER BIBLIOGRAPHY
1. Berger, Melvyn, Nonlinearity and Functional Analysis, New York, Academic Press, 1977.
2. Neuberger, J. W., "Gradient Inequalities and Steepest Descent for Systems of Partial Differential Equations," Proceedings of the Conference on Numerical Analysis, Texas Tech University, 1983.
3. _______, "Steepest Descent for General Systems of Linear Differential Equations in Hilbert Space," Proceedings of a Symposium held at Dundee, Scotland, March-July 1982, Lecture Notes in Mathematics, Vol. 1032, New York, Springer-Verlag, 1983, 390-406.
4. _______, "Steepest Descent for Systems of Nonlinear Partial Differential Equations," Oak Ridge National Laboratory CSD/TM-161, (1981).
5. Palais, R., and Smale, S., "A Generalized Morse Theory," Bulletin of the American Mathematical Society, 70 (1964), 165-171.
6. Rosenbloom, P. C., "The Method of Steepest Descent," Symposia in Applied Mathematics, Providence, Rhode Island, American Mathematical Society, 1956, 127-176.
CHAPTER III
TYPE II NONLINEAR BOUNDARY CONDITIONS
An important class of boundary-value problems is the
one in which the differential equation is ordinary of order
n and the boundary conditions involve the unknown function
but none of its derivatives. Even this class is so large
that we restrict ourselves to the following subset, where
without loss of generality all work is done on the interval
[0,1]. Let n be a positive integer, K: [0,1]×ℝ → ℝ be continuous and such that K₂ is continuous, and g: [0,1] → ℝ be a function of bounded variation. A Type II boundary condition with generating kernel K is B(u) = 0, where B: H^n[0,1] → ℝ is given by

    (*)  B(u) = ∫₀¹ K(x, u(x)) dg(x) .

If K is defined by K(a, b) = (1−a)·b − a·b², and g by

    g(x) = 0 for x = 0 ,  1 for 0 < x < 1 ,  2 for x = 1 ,

then the boundary condition y(0) = y(1)² of our sample problem can be put into this framework. Once again, to investigate steepest descent for the functional β(u) = 1/2 ‖B(u)‖², one seeks an explicit form for ∇β(u) = B′(u)*B(u).

Theorem IV. If y ∈ H^n[0,1] and B: H^n[0,1] → ℝ is defined by (*), then, with θ = π/(n+1),

    (a)  B′(y)f = ∫₀¹ K₂(t, y(t))·f(t) dg(t) ;

    (b)  (B′(y)*1)(t) = (1/(n+1)) Σ_{k=1}^{n} [ sin(kθ)/sinh(sin(kθ)) ] ∫₀¹ K₂(x, y(x)) ·
             {  cosh((1−x−t)·sin(kθ))·cos((t−x)·cos(kθ))
              − cos(2kθ)·cosh((1+t−x)·sin(kθ))·cos((t−x)·cos(kθ))
              − sin(2kθ)·sinh((1+t−x)·sin(kθ))·sin((t−x)·cos(kθ)) } dg(x)

           + (2/(n+1)) Σ_{k=1}^{n} sin(kθ) ∫₀^t K₂(x, y(x)) ·
             {  cos(2kθ)·sinh((t−x)·sin(kθ))·cos((t−x)·cos(kθ))
              + sin(2kθ)·cosh((t−x)·sin(kθ))·sin((t−x)·cos(kθ)) } dg(x) .
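For n = 1 the kernel in part (b) should collapse to the familiar reproducing kernel cosh(min(t,x))·cosh(1−max(t,x))/sinh(1) of H¹[0,1]. The reduction below is our own numerical check of the formula, not part of the original proof: with θ = π/2 we have sin(kθ) = 1, cos(kθ) = 0, cos(2kθ) = −1, and sin(2kθ) = 0, so only the hyperbolic factors survive.

```python
import numpy as np

s1 = np.sinh(1.0)

def kernel_thm4_n1(t, x):
    # Theorem IV(b) specialized to n = 1 (our reduction): the first
    # integral contributes [cosh(1-x-t) + cosh(1+t-x)]/(2 sinh 1),
    # the second contributes -sinh(t-x) when x < t.
    first = (np.cosh(1.0 - x - t) + np.cosh(1.0 + t - x)) / (2.0 * s1)
    second = -np.sinh(t - x) if x < t else 0.0
    return first + second

def kernel_h1(t, x):
    # Reproducing kernel of H^1[0,1] with the usual inner product.
    lo, hi = min(t, x), max(t, x)
    return np.cosh(lo) * np.cosh(1.0 - hi) / s1

grid = np.linspace(0.0, 1.0, 21)
maxdiff = max(abs(kernel_thm4_n1(t, x) - kernel_h1(t, x))
              for t in grid for x in grid)
# maxdiff is zero up to floating point roundoff.
```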
Proof: (a) This follows directly from the definition of Fréchet derivative.

(b) We first prove the theorem in the special case that both K and g are C^(2n) and that for k = 1, …, 2n, both g^(k)(0) and g^(k)(1) are 0. Using the definition of adjoint (4, p. 307), we seek a member h of H^n[0,1] such that for every f in H^n[0,1], ⟨B′(y)f, 1⟩_ℝ = ⟨f, h⟩_{H^n}, i.e., so that

    ∫₀¹ K₂[I, y]·f dg = Σ_{k=0}^{n} ∫₀¹ f^(k)·h^(k) .

Supposing that h has 2n derivatives (later this will be verified), integration by parts can be used to obtain

    ∫₀¹ f·K₂[I, y]·g′ = ∫₀¹ f·Σ_{k=0}^{n} (−1)^k h^(2k)
                        + Σ_{k=1}^{n} Σ_{j=0}^{k−1} (−1)^{k−1−j} f^(j)·h^(2k−1−j) |₀¹ .
Using Fubini's Theorem for finite sums, the double sum can be rewritten as

    Σ_{j=0}^{n−1} Σ_{k=j+1}^{n} (−1)^{k−1−j} f^(j)·h^(2k−1−j) |₀¹

and finally as

    Σ_{j=0}^{n−1} f^(j) · Σ_{m=j}^{n−1} (−1)^{m−j} h^(2m−j+1) |₀¹ .

By a standard argument, for the above equation to hold for every f in H^n[0,1], h must satisfy the boundary-value problem:

    (1)  Σ_{k=0}^{n} (−1)^k h^(2k) = K₂[I, y]·g′

    (2)  Σ_{m=j}^{n−1} (−1)^{m−j} h^(2m−j+1) = 0  at 0 and 1 ,  j = 0, …, n−1 .
For convenience, let v denote K₂[I, y]·g′. The characteristic equation, Σ_{k=0}^{n} (−1)^k w^{2k} = 0, can be rewritten as

    0 = Σ_{k=0}^{n} (−w²)^k = ( 1 − (−w²)^{n+1} ) / ( 1 − (−w²) ) .

Let S be the set of solutions to (−w²)^{n+1} = 1, excluding ±i. Let θ = π/(n+1) and a = e^{iθ}. An enumeration of the members of S is given by {λ_k}_{k=1}^{2n}, where

    λ_k = i·a^k          for 1 ≤ k ≤ n
    λ_k = i·a^{−(k−n)}   for n+1 ≤ k ≤ 2n .

z(t) = Σ_{k=1}^{2n} c_k e^{λ_k t} is the general solution of the homogeneous problem. Variation of parameters yields a
particular solution z_p(t) = Σ_{k=1}^{2n} u_k(t) e^{λ_k t}. The differential equation in standard form is

    Σ_{k=0}^{n} (−1)^{n−k} h^(2k) = (−1)^n v ,

and so, denoting the mth order Vandermonde determinant by Vand{μ_j}_{j=1}^{m},

    u_k′(t) = (−1)^n v(t) · (−1)^{2n+k} e^{−λ_k t} · Vand{λ_j}_{j≠k} / Vand{λ_j}_{j=1}^{2n}
            = (−1)^{k−1} (−1)^n v(t) (−1)^{2n+k} e^{−λ_k t} / Π_{m≠k} (λ_m − λ_k) ,

and

    h(t) = Σ_{k=1}^{2n} c_k e^{λ_k t} − Σ_{k=1}^{2n} β_k ∫₀^t e^{λ_k (t−x)} v(x) dx ,

where β_k = (−1)^n / Π_{m≠k} (λ_m − λ_k). Now, using the fact that K and g are C^(2n), we have that for 1 ≤ m ≤ 2n,

    h^(m) = Σ_{k=1}^{2n} c_k λ_k^m e^{λ_k t}
            − Σ_{k=1}^{2n} β_k { λ_k^m ∫₀^t e^{λ_k (t−x)} v(x) dx + Σ_{j=1}^{m} λ_k^{j−1} v^(m−j)(t) } ,

and h has the 2n derivatives supposed earlier.
The boundary conditions specify that for q = 0, …, n−1,

    r_q(t) = Σ_{p=q}^{n−1} (−1)^{p−q} h^(2p−q+1)(t)

should be zero when evaluated at 0 and 1. Using the fact that v^(k)(0) = 0 = v^(k)(1) for k = 0, …, 2n−1, the above simplifies to

    r_q(t) = Σ_{k=1}^{2n} c_k { Σ_{p=q}^{n−1} (−1)^{p−q} λ_k^{2p−q+1} } e^{λ_k t}
             − Σ_{k=1}^{2n} β_k { Σ_{p=q}^{n−1} (−1)^{p−q} λ_k^{2p−q+1} } ∫₀^t e^{λ_k (t−x)} v(x) dx ,

and the boundary conditions can be written as A·c = b, where
A is the 2n × 2n matrix defined by

    A_{jk} = Σ_{p=j−1}^{n−1} (−1)^{p−j+1} λ_k^{2p−j+2}     for 1 ≤ j ≤ n
    A_{jk} = e^{λ_k} · A_{j−n,k}                           for n+1 ≤ j ≤ 2n ,

with 1 ≤ k ≤ 2n, and b is the 2n vector given by

    b_j = 0                                                for 1 ≤ j ≤ n
    b_j = Σ_{k=1}^{2n} β_k A_{jk} ∫₀¹ e^{−λ_k x} v(x) dx   for n+1 ≤ j ≤ 2n .

It is shown in Lemma I that A is non-singular, so

    h(t) = Σ_{j=1}^{2n} (A^{−1}·b)_j e^{λ_j t} − Σ_{k=1}^{2n} β_k ∫₀^t e^{λ_k (t−x)} v(x) dx .
Further investigation of the matrix A and the sequence {β_k}_{k=1}^{2n} will give a more explicit form for h. First,

    A^{−1}·b = Σ_{m=1}^{2n} β_m { ∫₀¹ e^{−λ_m x} v(x) dx } · A^{−1}·(0, …, 0, A_{n+1,m}, …, A_{2n,m})^T .

Define M to be the 2n × 2n matrix in which the nonzero entries are

    M_{kk}    = e^{λ_k} / ( e^{λ_k} − e^{λ_{k+n}} )                                      for 1 ≤ k ≤ n ,
    M_{k+n,k} = { (1 + λ_{k+n}²)·e^{λ_k} } / { (1 + λ_k²)·( e^{λ_k} − e^{λ_{k+n}} ) }    for 1 ≤ k ≤ n ,
    M_{kk}    = e^{λ_k} / ( e^{λ_k} − e^{λ_{k−n}} )                                      for n+1 ≤ k ≤ 2n ,
    M_{k−n,k} = { (1 + λ_{k−n}²)·e^{λ_k} } / { (1 + λ_k²)·( e^{λ_k} − e^{λ_{k−n}} ) }    for n+1 ≤ k ≤ 2n .
Examining the product A·M, one finds that the first n rows of A·M vanish, while rows n+1 through 2n of A·M agree with the corresponding rows of A. Applying A^{−1} to both sides,

    A^{−1}·(0, …, 0, A_{n+1,m}, …, A_{2n,m})^T = mth column of M .

Therefore

    Σ_{j=1}^{2n} (A^{−1}·b)_j e^{λ_j t}
      = Σ_{j=1}^{2n} e^{λ_j t} { Σ_{k=1}^{2n} β_k ∫₀¹ e^{−λ_k x} v(x) dx · M_{jk} }
      = Σ_{k=1}^{2n} β_k ∫₀¹ e^{−λ_k x} v(x) dx · { Σ_{j=1}^{2n} e^{λ_j t} M_{jk} }
      = Σ_{k=1}^{n} β_k ∫₀¹ e^{−λ_k x} v(x) dx · { e^{λ_k t} M_{kk} + e^{λ_{n+k} t} M_{n+k,k} }
        + Σ_{k=1}^{n} β_{n+k} ∫₀¹ e^{−λ_{n+k} x} v(x) dx · { e^{λ_{n+k} t} M_{n+k,n+k} + e^{λ_k t} M_{k,n+k} } .
From Lemma II, we have that for 1 ≤ k ≤ n,

    β_k = [ sin(kθ) / (n+1) ] A_k²    and    β_{n+k} = −[ sin(kθ) / (n+1) ] A_{n+k}².
Also the expressions for the nonzero entries of the matrix M can be simplified; for example, if 1 ≤ k ≤ n,

    M_{n+k,k} = (1 + A_{n+k}²) e^{A_k} / [ (1 + A_k²)( e^{A_k} − e^{A_{n+k}} ) ]
              = [ 1 + (ia^{−k})² ] / { [ 1 + (ia^k)² ] ( 1 − e^{A_{n+k} − A_k} ) }
              = ( 1 − a^{−2k} ) / [ ( 1 − a^{2k} )( 1 − e^{2 sin(kθ)} ) ]
              = ( −a^{−2k} ) · ( −e^{−sin(kθ)} / ( 2 sinh(sin(kθ)) ) )
              = a^{−2k} e^{−sin(kθ)} / ( 2 sinh(sin(kθ)) ).
After such simplification we have

    Σ_{j=1}^{2n} (A⁻¹b)_j e^{A_j t}
      = (1/(n+1)) Σ_{k=1}^{n} [ sin(kθ) / sinh(sin(kθ)) ] ∫₀¹ v(x) { ½ e^{−A_k x − sin(kθ)} ( e^{A_{n+k} t} + A_k² e^{A_k t} )
        + ½ e^{−A_{n+k} x + sin(kθ)} ( e^{A_k t} + A_{n+k}² e^{A_{n+k} t} ) } dx.
The remaining calculations are routine; for example,

    ½ e^{−A_k x + A_{n+k} t − sin(kθ)} + ½ e^{−A_{n+k} x + A_k t + sin(kθ)}
      = ½ { e^{x(−i cos(kθ) + sin(kθ)) + t(i cos(kθ) + sin(kθ)) − sin(kθ)}
          + e^{x(−i cos(kθ) − sin(kθ)) + t(i cos(kθ) − sin(kθ)) + sin(kθ)} }
      = ½ { e^{−(1−x−t) sin(kθ)} + e^{(1−x−t) sin(kθ)} } e^{i(t−x) cos(kθ)}
      = cosh((1−x−t) sin(kθ)) [ cos((t−x) cos(kθ)) + i sin((t−x) cos(kθ)) ].
After simplifying the terms ½ e^{−A_k x − sin(kθ)} A_k² e^{A_k t} and ½ e^{−A_{n+k} x + sin(kθ)} A_{n+k}² e^{A_{n+k} t} in the same manner, and working in similar fashion with Σ_{k=1}^{2n} β_k ∫₀^t e^{A_k(t−x)} v(x) dx, one has the following expression for h(t):
    h(t) = (1/(n+1)) Σ_{k=1}^{n} [ sin(kθ) / sinh(sin(kθ)) ] ∫₀¹ v(x) ⟨ { cosh((1−x−t)sin(kθ)) · cos((t−x)cos(kθ))
             − cos(2kθ) · cosh((1+t−x)sin(kθ)) · cos((t−x)cos(kθ))
             − sin(2kθ) · sinh((1+t−x)sin(kθ)) · sin((t−x)cos(kθ)) }
           + i · { cosh((1−x−t)sin(kθ)) · sin((t−x)cos(kθ))
             − cos(2kθ) · cosh((1+t−x)sin(kθ)) · sin((t−x)cos(kθ))
             + sin(2kθ) · sinh((1+t−x)sin(kθ)) · cos((t−x)cos(kθ)) } ⟩ dx

         + (2/(n+1)) Σ_{k=1}^{n} sin(kθ) ∫₀^t v(x) ⟨ { cos(2kθ) · sinh((t−x)sin(kθ)) · cos((t−x)cos(kθ))
             + sin(2kθ) · cosh((t−x)sin(kθ)) · sin((t−x)cos(kθ)) }
           + i · { −sin(2kθ) · cosh((t−x)sin(kθ)) · cos((t−x)cos(kθ))
             + cos(2kθ) · sinh((t−x)sin(kθ)) · sin((t−x)cos(kθ)) } ⟩ dx.
Since h(t) is real, the multiples of i must sum to zero and we are left with the desired expression (recall that v(x) = K₂(x,y(x))·g′(x)). The Theorem is proved in the case that K₂ and g are C^{(2n)} and g^{(k)}(0) = 0 = g^{(k)}(1) for k = 1,...,2n.
Surprisingly, the general case follows from the special one. Let K: [0,1]×ℝ → ℝ be continuous and such that K₂ is continuous, let g: [0,1] → ℝ be increasing, and let h = B′(y)*1. There exists a sequence of functions ⟨K_j⟩_{j=1}^∞ such that each K_j is C^{(2n)}, K_j → K, and (K_j)₂ → K₂ uniformly on [0,1]×[−ȳ,ȳ], where ȳ = sup{|y(t)| : t ∈ [0,1]}. Let ⟨g_j⟩_{j=1}^∞ be a sequence such that each g_j is C^{(2n)} and nondecreasing, g_j^{(k)}(0) = 0 = g_j^{(k)}(1) for k = 1,...,2n, and g_j → g pointwise. With each pair ⟨K_j,g_j⟩ there is an associated h_j = B_j′(y)*1, which has been shown to have the desired form. Now (K_j)₂ → K₂ uniformly implies that for fixed w and r,

    ∫₀¹ (K_j)₂(x,y(x)) · w(x) dr(x) → ∫₀¹ K₂(x,y(x)) · w(x) dr(x).

Also, since each g_j is nondecreasing and g_j → g pointwise (4, p. 121), for fixed w,

    ∫₀¹ w(x) · g_j′(x) dx = ∫₀¹ w(x) dg_j(x) → ∫₀¹ w(x) dg(x).

This shows that ⟨h_j⟩_{j=1}^∞ must converge pointwise to a function of the desired form. Now (K_j)₂ → K₂ uniformly implies B_j′(y) → B′(y) in norm, and using the continuity of the adjoint operation, h = B′(y)*1 = lim_{j→∞} B_j′(y)*1 = lim_{j→∞} h_j. The case where g is of bounded variation follows immediately from the case in which g is increasing.
Lemma I. The matrix A defined in Theorem IV is nonsingular, and if Vand⟨d_k⟩_{k=1}^{n} denotes the nth order Vandermonde determinant, then

    det A = i^{n(n−1)} [ Vand⟨2cos(kθ)⟩_{k=1}^{n} ]² ∏_{k=1}^{n} ( e^{ia^k} − e^{ia^{−k}} ).
Proof: Let w be a complex number such that (−w²)^{n+1} = 1, w ≠ ±i. Then

    Σ_{m=j}^{n} (−1)^{m−j} w^{2m−j} = w^j Σ_{m=j}^{n} (−1)^{m−j} w^{2(m−j)}
      = [ w^j − (−1)^{n−j+1} w^{2n+2−j} ] / (1 + w²)
      = [ w^j − (−1)^j w^{−j} (−w²)^{n+1} ] / (1 + w²)
      = [ w^j − (−1)^j w^{−j} ] / (1 + w²),

so

    det A = ∏_{k=1}^{2n} (1 + A_k²)⁻¹ · det[ v_k ; e^{A_k} v_k ]_{k=1,...,2n},

where v_k is the n × 1 vector whose jth component (v_k)_j is A_k^j − (−1)^j A_k^{−j}.
For 1 ≤ m, j ≤ n,

    (v_{n+m})_j = (ia^{−m})^j − (−1)^j (ia^{−m})^{−j}
                = i^j a^{−mj} − (−1)^j i^{−j} a^{mj}
                = (−1)^j (ia^m)^{−j} − (ia^m)^j
                = −(v_m)_j,

so, adding column m to column n+m for m = 1,...,n, det A is

    det A = ∏_{k=1}^{2n} (1 + A_k²)⁻¹ · det[ v_k , 0 ; e^{A_k} v_k , (e^{A_k} − e^{A_{n+k}}) v_k ]_{k=1,...,n}
          = ∏_{k=1}^{2n} (1 + A_k²)⁻¹ · ∏_{k=1}^{n} ( e^{A_k} − e^{A_{n+k}} ) · Δ²,
where Δ = det[ (v_k)_j ]_{j,k=1,...,n}. For 1 ≤ j, k ≤ n,

    (v_k)_j = (ia^k)^j − (−1)^j (ia^k)^{−j} = i^j ( a^{kj} − a^{−kj} ) = i^j ( 2i · sin(kjθ) ),

so Δ = i^{n(n+1)/2} (2i)^n det M, where M_jk = sin(kjθ). According to Muir (3, p. 187),
determinants of this type were first investigated by Prouhet in 1857. He used the trigonometric identity

    2^{k−1} cos^{k−1}(φ) sin(φ) = Σ_{p=0}^{k−1} (k−1 choose p) sin((k−2p)φ)

and elementary properties of determinants to show det M = det M′, where M′_jk = 2^{k−1} cos^{k−1}(jθ) sin(jθ), and then factored sin(jθ) from the jth row to arrive at the nth order Vandermonde determinant Vand⟨2cos(jθ)⟩_{j=1}^{n}.
If 1 ≤ k ≤ n, then A_{n+k}² is the complex conjugate of A_k². Using this fact,

    ∏_{k=1}^{2n} (1 + A_k²) = ∏_{k=1}^{n} (1 − a^{2k})(1 − a^{−2k}) = ∏_{k=1}^{n} ( −( a^k − a^{−k} )² )
                            = ∏_{k=1}^{n} ( −( 2i · sin(kθ) )² ) = (2i)^{2n} a^{n(n+1)} ( ∏_{k=1}^{n} sin(kθ) )².

Noting that a^{n(n+1)} = i^{2n} and assembling the pieces,

    det A = [ i^{n(n+1)/2} (2i)^n ( ∏_{k=1}^{n} sin(kθ) ) Vand⟨2cos(jθ)⟩_{j=1}^{n} ]² ∏_{k=1}^{n} ( e^{ia^k} − e^{ia^{−k}} )
            / [ (2i)^{2n} i^{2n} ( ∏_{k=1}^{n} sin(kθ) )² ]
          = i^{n(n+1)} i^{−2n} [ Vand⟨2cos(jθ)⟩_{j=1}^{n} ]² ∏_{k=1}^{n} ( e^{ia^k} − e^{ia^{−k}} )
          = i^{n(n−1)} [ Vand⟨2cos(jθ)⟩_{j=1}^{n} ]² ∏_{k=1}^{n} ( e^{ia^k} − e^{ia^{−k}} ),

and the proof is complete.
Lemma II. If 1 ≤ k ≤ n, then

    (a)  1 / ∏_{m≠k} ( A_m − A_k ) = (−1)^{n+1} sin(kθ) A_k² / (n+1),

    (b)  1 / ∏_{m≠n+k} ( A_m − A_{n+k} ) = (−1)^{n} sin(kθ) A_{n+k}² / (n+1).
Proof: (a)

    ∏_{m≠k} ( A_m − A_k ) = ∏_{1≤m≤n, m≠k} ( ia^m − ia^k ) · ∏_{m=1}^{n} ( ia^{−m} − ia^k )
      = ∏_{1≤m≤n, m≠k} ( −ia^k )( 1 − a^{m−k} ) · ∏_{m=1}^{n} ( −ia^k )( 1 − a^{−(m+k)} )
      = ( −ia^k )^{2n−1} ∏_{j=1}^{k−1} ( 1 − a^{−j} ) ∏_{j=1}^{n−k} ( 1 − a^{j} ) ∏_{j=k+1}^{n+k} ( 1 − a^{−j} )
      = ( −ia^k )^{2n−1} ∏_{j=1}^{k−1} ( 1 − a^{−j} ) ∏_{j=k+1}^{n+k} ( 1 − a^{−j} ) ∏_{j=n+k+2}^{2n+1} ( 1 − a^{−j} )
      = ( −ia^k )^{2n−1} ∏_{1≤j≤2n+1, j≠k, n+k+1} ( 1 − a^{−j} )
      = ( −ia^k )^{2n−1} · [ 1 / ( (1 − a^{−k})(1 − a^{−(n+k+1)}) ) ] · ∏_{j=1}^{2n+1} ( 1 − a^{−j} )
      = ( −ia^k )^{2n−1} · [ 1 / ( 1 − a^{−2k} ) ] · 2 ∏_{j=1}^{n} ( 2 − 2cos(jθ) )
      = ( −ia^k )^{2n−1} · [ a^k / ( 2i · sin(kθ) ) ] · 2 ∏_{j=1}^{n} ( 2 − 2cos(jθ) )
      = [ A_k^{2n} / sin(kθ) ] · 2^n ∏_{j=1}^{n} ( 1 − cos(jθ) ).
In his book Plane Trigonometry, Hobson (1) derives the equation

    √(n+1) = 2^{n/2} sin( π/(n+1) ) sin( 2π/(n+1) ) ⋯ ,

where the last factor is sin( (n−1)π / (2(n+1)) ) if n is odd and sin( nπ / (2(n+1)) ) if n is even. Using this fact, the product in the last equation can be simplified.

(i) n odd:

    2^n ∏_{j=1}^{n} ( 1 − cos(jθ) ) = 2^n ∏_{j=1}^{n} 2 sin²( jθ/2 )
      = [ 2^n ∏_{j=1}^{n} sin( jπ / (2(n+1)) ) ]² = n+1,

by Hobson's equation.

(ii) n even - similar to (i).
For further simplification, note that A_k^{2n} · A_k² = (ia^k)^{2(n+1)} = i^{2(n+1)} ( a^{2(n+1)} )^k = (−1)^{n+1}, so A_k^{2n} = (−1)^{n+1} A_k^{−2}. Finally,

    1 / ∏_{m≠k} ( A_m − A_k ) = sin(kθ) / ( (n+1) A_k^{2n} ) = (−1)^{n+1} sin(kθ) A_k² / (n+1).

(b) The techniques used to prove this are essentially the same as those used to prove (a).
Corollary. If for each positive integer n, c_n is the least number k such that ‖f‖_∞ ≤ k·‖f‖_{n,2} for all f in H^n[0,1], then

    c_n = { (1/(n+1)) Σ_{k=1}^{n} 2 sin³(kθ) / tanh(sin(kθ)) }^{1/2},    where θ = π/(n+1).

Moreover,

    lim_{n→∞} c_n = { 2 ∫₀¹ sin³(πx) / tanh(sin(πx)) dx }^{1/2}.
Proof: Let n be a positive integer. The least k such that ‖f‖_∞ ≤ k·‖f‖_{n,2} for all f in H^n[0,1] is just the norm of the imbedding operator J: H^n[0,1] → C[0,1]. For a in [0,1], define B_a: H^n[0,1] → ℝ by B_a(f) = f(a). Now |B_a| ≤ |J| for all a. Let β = sup{ |B_a| : a ∈ [0,1] } and suppose β < |J|. Then there is an f in H^n[0,1] such that ‖f‖_∞ > β·‖f‖_{n,2}. Let a ∈ [0,1] be such that |f(a)| = ‖f‖_∞. Then

    ‖f‖_∞ = |B_a(f)| ≤ |B_a| · ‖f‖_{n,2} ≤ β · ‖f‖_{n,2} < ‖f‖_∞,

and we have a contradiction, so β = |J|. Now, finding |B_a| directly from the definition is as difficult as finding |J| directly. Instead, we use the fact that |B_a| = |B_a*| and
Theorem IV. Define φ: [0,1] → ℝ by φ(a) = |B_a*|². From the definition of adjoint, for all f in H^n[0,1], ⟨f, B_a*1⟩_{n,2} = ⟨B_a f, 1⟩_ℝ = f(a), so

    φ(a) = |B_a*1|²_{n,2} = ⟨B_a*1, B_a*1⟩_{n,2} = (B_a*1)(a).

Applying the theorem with K defined by K(a,b) = b and g the step function with unit jump at a, we obtain

    φ(a) = (1/(n+1)) Σ_{k=1}^{n} [ sin(kθ) / sinh(sin(kθ)) ] { cosh((1−2a) sin(kθ)) − cos(2kθ) · cosh(sin(kθ)) }.
φ is symmetric about a = ½, and since

    φ″(a) = (1/(n+1)) Σ_{k=1}^{n} [ 4 sin³(kθ) / sinh(sin(kθ)) ] cosh((1−2a) sin(kθ))

is greater than 0, φ is convex. Therefore it must assume its maximum at 0 and 1. Evaluating at 0 and doing some algebra, the first part of the theorem is proved. The second part follows from the first and the definition of the Riemann integral.
As mentioned in Chapter I, the determination of the best c for the case n = 1 was made by Marti (2) using methods from the calculus of variations. The Corollary extends Marti's result significantly by giving the optimum c_n explicitly for all n. Table I lists the values of c_n for 1 ≤ n ≤ 10.
CHAPTER BIBLIOGRAPHY
1. Hobson, E., A Treatise on Plane Trigonometry, Cambridge, Cambridge University Press, 1928.

2. Marti, J. T., "Evaluation of the Least Constant in Sobolev's Inequality for H¹(0,s)," SIAM Journal on Numerical Analysis, 20 (December, 1983), 1239-1242.

3. Muir, Thomas, The Theory of Determinants, Vol. II, The Period 1841 to 1860, London, Macmillan and Co., 1911.

4. Riesz, Frigyes, and Sz.-Nagy, B., Functional Analysis, New York, Frederick Ungar Publishing Co., 1955.
CHAPTER IV
NUMERICAL SOLUTIONS OF SECOND ORDER PROBLEMS
WITH NONLINEAR BOUNDARY CONDITIONS
As mentioned in the introduction, one of the most
attractive features of steepest descent is that, in addition
to giving a valuable theoretical framework, the method is
amenable to modern numerical methods. This chapter
discusses the practical aspects of implementing dual
steepest descent to solve second order problems and explains
the FORTRAN program FH2.CGR. A listing of the code is given
in the Appendix.
Suppose F: ℝ⁴ → ℝ and B: ℝ⁴ → ℝ, both C^{(2)}, are given. Find a member u of H²[a,b] such that for x in [a,b],

    F(x, u(x), u′(x), u″(x)) = 0
    B(u(a), u(b), u′(a), u′(b)) = 0.        (*)
We approximate the general problem using finite differences (finite elements could also be used, giving more accuracy at the expense of a more complex code). Let n be a positive integer; find u = (u_1,...,u_n) which satisfies, within a preassigned tolerance, the nonlinear difference equations

    (1)  F( δ·(i−1), u_i, (u_{i+1} − u_{i−1})/(2δ), (u_{i+1} − 2u_i + u_{i−1})/δ² ) = 0

    (2)  B( u_1, u_n, (u_3 − u_1)/(2δ), (u_n − u_{n−2})/(2δ) ) = 0
where 2 ≤ i ≤ n−1 and δ = 1/(n−1). Note that (1) is the only natural difference scheme to use if the equation to be solved is truly second order, i.e., F₄ is not identically zero, and the standard finite-difference approximation to u″(x) is used. However, for the boundary condition portion of the problem there are several other reasonable choices, for example,

    B( u_1, u_n, (u_2 − u_1)/δ, (u_n − u_{n−1})/δ ) = 0,   or

    B( u_2, u_{n−1}, (u_3 − u_1)/(2δ), (u_n − u_{n−2})/(2δ) ) = 0.

Computations should show if one is superior.
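The residuals (1) and (2) are straightforward to evaluate; the following sketch is in Python rather than the thesis's FORTRAN, with hypothetical function names, specialized to the test problem u″ = u, u(0) = u(1)² used later in Table II.

```python
import numpy as np

def F(x, u, up, upp):
    # differential-equation residual for the test problem u'' = u
    return upp - u

def B(ua, ub, upa, upb):
    # boundary-condition residual for u(0) = u(1)**2
    return ua - ub ** 2

def residuals(u, delta):
    """Interior residuals (1) and the boundary residual (2) on the
    grid x_i = delta*(i-1), i = 1,...,n (0-based below)."""
    n = len(u)
    i = np.arange(1, n - 1)                       # interior points
    d1 = (u[i + 1] - u[i - 1]) / (2 * delta)      # central first difference
    d2 = (u[i + 1] - 2 * u[i] + u[i - 1]) / delta ** 2
    r_int = F(delta * i, u[i], d1, d2)
    r_bc = B(u[0], u[-1],
             (u[2] - u[0]) / (2 * delta),
             (u[-1] - u[-3]) / (2 * delta))
    return r_int, r_bc
```

Evaluating at samples of the exact solution e^{x−2} should give interior residuals of the size of the truncation error, O(δ²), and a boundary residual near zero.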
To solve (*) by steepest descent, first define T: ℝⁿ → (ℝ⁴)^{n−2} by (Tu)_i = ( δ·(i−1), u_i, (D1u)_i, (D2u)_i ), 2 ≤ i ≤ n−1, where D1 and D2 are the standard difference operators defined above. Define H: ℝⁿ → ℝ^{n−2} by H(u)_i = F((Tu)_i) and φ: ℝⁿ → ℝ by φ(u) = ½‖Hu‖²_{ℝ^{n−2}}; φ measures how close a vector u comes to satisfying (1). Similarly, define β: ℝⁿ → ℝ by

    β(u) = ½ | B( u_1, u_n, (u_3 − u_1)/(2δ), (u_n − u_{n−2})/(2δ) ) |².

Finally, define ξ: ℝⁿ → ℝ by ξ(u) = s₁φ(u) + s₂β(u), where s₁ and s₂ are positive scale factors or weights which allow the boundary conditions to be emphasized over the differential equation or vice versa. If u ∈ ℝⁿ is such that ξ(u) = 0, then u is a solution to (1) and (2), and every solution is a zero of ξ.
Following Neuberger in (7), for 2 ≤ i ≤ n−1,

    (H(v) − H(u))_i = F((Tv)_i) − F((Tu)_i)
      = F₂((Tu)_i)(v_i − u_i) + F₃((Tu)_i)(D1(v−u)_i) + F₄((Tu)_i)(D2(v−u)_i) + ω((Tv)_i,(Tu)_i),

where the last term goes to zero as (Tv)_i → (Tu)_i. So

    H′(u)(w)_i = F₂((Tu)_i) · w_i + F₃((Tu)_i) · (D1w)_i + F₄((Tu)_i) · (D2w)_i.

Using the chain rule,

    φ′(u)w = ⟨Hu, H′(u)w⟩_{ℝ^{n−2}} = Σ_{i=2}^{n−1} (Hu)_i (H′(u)w)_i
      = Σ_{i=2}^{n−1} F(Tu_i)·F₂(Tu_i)·w_i + F(Tu_i)·F₃(Tu_i)·D1w_i + F(Tu_i)·F₄(Tu_i)·D2w_i
      = ⟨ F₂∘Tu · Hu, Jw ⟩_{ℝ^{n−2}} + ⟨ F₃∘Tu · Hu, D1w ⟩_{ℝ^{n−2}} + ⟨ F₄∘Tu · Hu, D2w ⟩_{ℝ^{n−2}},

where (Jr)_i = r_i, 2 ≤ i ≤ n−1. The last sum can be written as

    ⟨ J*( F₂∘Tu · Hu ) + D1*( F₃∘Tu · Hu ) + D2*( F₄∘Tu · Hu ), w ⟩_{ℝⁿ},

so

    (∇φ)(u) = J*( F₂∘Tu · Hu ) + D1*( F₃∘Tu · Hu ) + D2*( F₄∘Tu · Hu ).

Note that the adjoints above refer to ℝⁿ and ℝ^{n−2} equipped with the Euclidean inner product.
To compute ∇β, define S: ℝⁿ → ℝ⁴ by

    Su = ( u_1, u_n, (u_3 − u_1)/(2δ), (u_n − u_{n−2})/(2δ) )

and Gu = B(Su). Then

    Gv − Gu ≈ B₁(Su)(v_1 − u_1) + B₂(Su)(v_n − u_n)
      + B₃(Su) · [ (v_3 − u_3) − (v_1 − u_1) ] / (2δ)
      + B₄(Su) · [ (v_n − u_n) − (v_{n−2} − u_{n−2}) ] / (2δ).

So

    ∇G(u)_i = B₁(Su) − (1/(2δ)) B₃(Su)      i = 1
            = (1/(2δ)) B₃(Su)               i = 3
            = −(1/(2δ)) B₄(Su)              i = n−2
            = B₂(Su) + (1/(2δ)) B₄(Su)      i = n
            = 0                              otherwise.

Using the chain rule, ∇β(u) = B(Su) · ∇G(u).
At this point we depart from the standard steepest descent method by employing the Sobolev inner product. Widely used in theoretical work, the Sobolev inner product has been used by Neuberger (3, 4, 5, 6, 7) and by Glowinski (8) to give markedly superior results in the numerical case. Consider ℝⁿ equipped with the inner product

    ⟨u,v⟩_S = Σ_i u_i v_i + (D1u)_i (D1v)_i + (D2u)_i (D2v)_i.

Recalling that both gradients and adjoints depend on the inner product, we propose to move in the direction −∇_S ξ(u) to seek a decrease in ξ. The given gradients are converted to Sobolev ones by noting that ⟨u,v⟩_S = ⟨u, Av⟩, where A = J*J + D1*D1 + D2*D2, and using the standard result: if (X, (·,·)) is a Hilbert space, A ∈ L(X,X) is positive and self-adjoint, ((x,y)) = (x,Ay), and f: X → ℝ has gradient ∇f(x) at x, then the gradient of f in the space (X, ((·,·))), (∇_S f)(x), is just A⁻¹∇f(x). As noted earlier, it is useful to compute the cosine of the angle between the vectors ∇_S φ(u) and ∇_S β(u) in the Sobolev space. When the angle is small, the process moves in a direction that better satisfies both the differential equation and the boundary condition, whereas if the angle is large, the two "pull in opposing directions."
In any descent algorithm, one moves from the present position u in a prescribed direction z (usually the negative gradient) to u + α·z; there are many methods of determining how far one should move, i.e., of minimizing the function h: [0,∞) → ℝ given by h(α) = ξ(u + α·z). Following Hestenes (1, p. 59) we use the secant approximation to Newton's method: to find an α̂ such that h′(α̂) = 0, try

    α̂ = −ε · h′(0) / ( h′(ε) − h′(0) ),

where ε is a small positive number, say 10⁻³ or 10⁻⁴. Defining L: ℝ → X by L(α) = u + α·z, we have h = ξ∘L and h′(α) = ξ′(L(α)) L′(α) = ⟨∇_S ξ(L(α)), L′(α)⟩_S = ⟨∇_S ξ(u + α·z), z⟩_S, so

    α̂ = −ε · ⟨∇_S ξ(u), z⟩_S / ⟨ ∇_S ξ(u + εz) − ∇_S ξ(u), z ⟩_S.
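The secant step above fits in a few lines; a minimal sketch (hypothetical names; the gradient and inner product are passed in so it works with either the Euclidean or the Sobolev structure):

```python
def secant_step(grad_xi, u, z, inner, eps=1e-3):
    """Secant approximation to the Newton step for minimizing
    h(a) = xi(u + a*z):  a_hat = -eps*h'(0) / (h'(eps) - h'(0)),
    where h'(a) = <grad xi(u + a*z), z>."""
    h0 = inner(grad_xi(u), z)
    h1 = inner(grad_xi(u + eps * z), z)
    return -eps * h0 / (h1 - h0)
```

When ξ is quadratic, h′ is linear in α and the secant step is the exact minimizer along z, which is the motivation for the approximation.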
Many numerical studies indicate that a nonlinear version of Hestenes' original conjugate-gradient algorithm gives results superior to the basic steepest descent process. Following Glowinski, Keller, and Reinhart (8), we shall use the Polak-Ribiere (2) variant of the conjugate-gradient algorithm:

    (1) Pick u⁰; set g⁰ = ∇_S ξ(u⁰) and z⁰ = g⁰.

    (2) For n ≥ 0, ρ_n minimizes h(α) = ξ(uⁿ − αzⁿ);

        u^{n+1} = uⁿ − ρ_n zⁿ,    g^{n+1} = ∇_S ξ(u^{n+1}),

        γ_n = ⟨ g^{n+1} − gⁿ, g^{n+1} ⟩_S / ‖gⁿ‖²_S,    z^{n+1} = g^{n+1} + γ_n zⁿ.

One of the most attractive features of a conjugate-gradient algorithm is that in the special case where ξ is quadratic, i.e., the underlying equations are linear, it is well known that convergence theoretically must occur in fewer than n+1 steps, where X = ℝⁿ. In the nonlinear case, as a solution is approached, the ξ function will increasingly approximate a quadratic, giving good convergence. In numerical applications it is difficult to maintain the conjugate-orthogonality, on which the success of the method depends, over a large number of iterations. For this reason, one should restart the algorithm every ten or fifteen iterations.
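Steps (1)-(2) with an IREST-style restart can be sketched compactly; this Python version (an illustration under stated assumptions, not the FORTRAN program: the exact minimization is replaced by the secant step, and the names are hypothetical) combines the pieces.

```python
import numpy as np

def polak_ribiere(grad, u0, inner, steps=50, irest=10, eps=1e-3):
    """Polak-Ribiere conjugate gradients with periodic restart.
    `grad` returns the (Sobolev) gradient of xi; every `irest`
    iterations the direction is reset to the plain gradient."""
    u = np.asarray(u0, dtype=float)
    g = grad(u)
    z = g.copy()
    for k in range(steps):
        if inner(g, g) < 1e-24:              # converged; avoid 0/0 below
            break
        h0 = -inner(grad(u), z)              # h'(0) for h(a) = xi(u - a*z)
        h1 = -inner(grad(u - eps * z), z)    # h'(eps)
        rho = -eps * h0 / (h1 - h0)          # secant step length
        u = u - rho * z
        g_new = grad(u)
        if (k + 1) % irest == 0:
            z = g_new.copy()                 # restart: steepest descent step
        else:
            gamma = inner(g_new - g, g_new) / inner(g, g)
            z = g_new + gamma * z
        g = g_new
    return u
```

On a quadratic ξ this reduces to linear conjugate gradients, so it should recover the exact minimizer of a small symmetric positive definite system.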
The program FH2.CGR is a FORTRAN implementation of the above ideas; the code is briefly reviewed. Section 1 dimensions the needed arrays - 2200 8-byte words are required - and defines the functions F and B, together with their partials. Note that the construction allows for very general nonlinear differential equations and boundary conditions, and that the storage requirements are quite moderate. The various parameters used by the program are set up in Section 2 - number of iterations, scale factors, interval of solution, etc. In Section 3, the main and two superdiagonals of the pentadiagonal, positive definite, symmetric matrix ABD = J*J + D1*D1 + D2*D2 are computed and stored in a format to be input to the LINPACK subroutine DPBFA, which factors ABD so that later the linear system (ABD)w = ∇ξ(u) can be solved.
The remaining code is within a DO loop which controls the steepest descent iteration. Sections 4 and 5 compute the Euclidean gradients ∇φ(u) and ∇β(u). Mention should be made of the method in which adjoints are computed: in the ODE case it is not critical, but careful coding now will pay off in the PDE case. We take the operator D2 as an example; in FORTRAN it is implemented by

      DIMENSION U(N), D2U(N)
      DO 100 I = 2, N-1
  100 D2U(I) = (U(I+1) - 2*U(I) + U(I-1))/(DX**2)
where the n-dimensional array D2U is treated as an (n−2)-vector by considering that only the entries 2 through n−1 contain data. To compute D2*, one could find the matrix representation of D2, take its transpose, and determine that D2* is implemented by

      DIMENSION V(N), D2A(N)
      D2A(1) = V(2)/DX**2
      D2A(2) = (-2*V(2) + V(3))/DX**2
      DO 100 I = 3, N-2
  100 D2A(I) = (V(I+1) - 2*V(I) + V(I-1))/DX**2
      D2A(N-1) = (V(N-2) - 2*V(N-1))/DX**2
      D2A(N) = V(N-1)/DX**2
A much easier way of computing D2* is obtained by considering how it acts on a unit vector e_j, 2 ≤ j ≤ n−1, in the space ℝ^{n−2}. For all u in ℝⁿ we want

    ⟨ u, D2*e_j ⟩_{ℝⁿ} = ⟨ D2u, e_j ⟩_{ℝ^{n−2}} = ( u_{j+1} − 2u_j + u_{j−1} ) / δ²,

so we must have

    (D2*e_j)_i = 1/δ²       i = j−1 or j+1
               = −2/δ²      i = j
               = 0          otherwise.

Using this and the linearity of D2*, the following particularly simple code computes D2* (D2A must first be initialized to zero, since each V(I) contributes to three entries):

      DIMENSION V(N), D2A(N)
      DO 100 I = 2, N-1
      D2A(I-1) = D2A(I-1) + V(I)/DX**2
      D2A(I)   = D2A(I) - 2*V(I)/DX**2
  100 D2A(I+1) = D2A(I+1) + V(I)/DX**2
This difference in perspective - taking V(I) and seeing which elements of D2A it affects, rather than taking D2A(I) and using elements of V to compute its value - results in substantial simplification in the corresponding PDE code, where five adjoints are computed. Moreover, rather than computing the adjoints separately, the work can all be done in one DO loop.
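The scatter construction is easy to check against the explicit transpose; a small Python illustration (hypothetical helper names, mirroring the one-loop FORTRAN version above):

```python
import numpy as np

def d2_matrix(n, dx):
    # D2 as an (n-2) x n matrix: (D2 u)_i = (u_{i+1} - 2u_i + u_{i-1})/dx^2
    D2 = np.zeros((n - 2, n))
    for r in range(n - 2):
        D2[r, r], D2[r, r + 1], D2[r, r + 2] = (1 / dx ** 2,
                                                -2 / dx ** 2,
                                                1 / dx ** 2)
    return D2

def d2_adjoint_scatter(v, dx):
    """D2* by scattering: v has length n with data in entries 2..n-1
    (indices 1..n-2 here); each interior v_i adds to three entries."""
    n = len(v)
    out = np.zeros(n)
    for i in range(1, n - 1):
        out[i - 1] += v[i] / dx ** 2
        out[i] += -2 * v[i] / dx ** 2
        out[i + 1] += v[i] / dx ** 2
    return out
```

The scatter result should coincide with D2ᵀ applied to the interior entries of v, which is exactly the unit-vector argument above.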
Section 6 computes the gradient ∇ξ(u) and the value ξ(u). If ξ(u) is less than the preassigned tolerance, a solution has been found to the desired accuracy and the program exits. The LINPACK subroutine DPBSL is called twice in Section 7 to compute ∇_S φ(u) and ∇_S β(u), and then the cosine of the angle between them is found. Section 8 implements the conjugate-gradient algorithm, determining the direction z in which to move. Note the variable IREST, which controls how often the CG algorithm is restarted. The secant formula is used in Section 9 to determine the optimum distance to move.
Finally, in Section 10, we move to the trial vector w = uⁿ − ρ·zⁿ and compute ξ(w). The ratio RR = ξ(w)/ξ(uⁿ) should be < 1, but because of the nonlinearities we may have moved too far. If RR > 1, we go back and move just half as far in the direction z. The process continues, up to ten times, until ξ(w) < ξ(uⁿ), whereupon u^{n+1} ← w, and the ratios ξ(u^{n+1})/ξ(uⁿ), φ(u^{n+1})/φ(uⁿ), and β(u^{n+1})/β(uⁿ) are printed out.
Table II lists the output of several runs of FH2.CGR for the test problem u″ = u, u(0) = u(1)². With an initial guess of the constant 1 function, the program quickly converges to the solution e^{x−2}. Starting at −1 or 0.1, the other solution, 0, is reached. In runs #4 and #5, different weights are attached to the differential equation and boundary conditions. Note that the ratios φ(u^{n+1})/φ(uⁿ) and β(u^{n+1})/β(uⁿ) can vary widely, but ξ(u^{n+1})/ξ(uⁿ) stays below 1, indicating continued descent. Finally, in run #6 the conjugate gradient method is restarted at every step, so regular steepest descent is done. With IREST = 1, convergence is slower but much less erratic.
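Both limit solutions reported in the table are easy to verify by hand; a one-line check of the nonzero one (Python, illustrative only):

```python
import math

# u(x) = exp(x - 2) satisfies u'' = u trivially, and the nonlinear
# boundary condition u(0) = u(1)**2 because e^{-2} = (e^{-1})^2.
u0 = math.exp(-2.0)   # u(0)
u1 = math.exp(-1.0)   # u(1)
residual = u0 - u1 ** 2
print(residual)       # zero up to rounding
```

The constant 0 function satisfies both equations as well, which is the second solution reached in runs #2 and #3.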
CHAPTER BIBLIOGRAPHY
1. Hestenes, Magnus, Conjugate Direction Methods in Optimization, New York, Springer-Verlag, 1980.

2. Polak, E., Computational Methods in Optimization, New York, Academic Press, 1971.

3. Neuberger, J. W., "Gradient Inequalities and Steepest Descent for Systems of Partial Differential Equations," Proceedings of the Conference on Numerical Analysis, Texas Tech University, 1983.

4. ________, "Some Global Steepest Descent Results for Nonlinear Systems," Trends in Theory and Practice of Nonlinear Differential Equations, edited by V. Lakshmikantham, New York, Marcel Dekker, 1984, pp. 413-418.

5. ________, "Steepest Descent and Time-Stepping for the Numerical Solution of Conservation Equations," Proceedings of the Conference on Physical Mathematics and Nonlinear Differential Equations, Marcel Dekker (to appear).

6. ________, "Steepest Descent for General Systems of Linear Differential Equations in Hilbert Space," Proceedings of a Symposium held at Dundee, Scotland, March-July 1982, Lecture Notes in Mathematics, Vol. 1032, New York, Springer-Verlag, 1983, 390-406.

7. ________, "Steepest Descent for Systems of Nonlinear Partial Differential Equations," Oak Ridge National Laboratory CSD/TM-161, 1981.

8. Glowinski, Roland, Keller, H., and Reinhart, L., Continuation-Conjugate Gradient Methods for the Least Square Solution of Nonlinear Boundary Value Problems, Rapports de Recherche, Institut National de Recherche en Informatique et en Automatique, 1982.
TABLE I

Least c_n for Sobolev's Imbedding Inequality  ‖f‖_∞ ≤ c_n · ‖f‖_{n,2}

  n        c_n
  1   1.14587751766902701
  2   1.1280304092518190
  3   1.1229455523920118
  4   1.1228314131571282
  5   1.1228287031259754
  6   1.1228286380032815
  7   1.1228286364338547
  8   1.1228286363960049
  9   1.1228286363950901
 10   1.1228286363950679
TABLE II

Program: FH2.CGR
Equation: u″ = u;  u(0) = u(1)²
On the interval [0.0, 1.0]

Run # 1

N = 15   ITER = 20   TOLERANCE = 0.0000001   EPSILON = 0.001
SCALE1 = 1.   SCALE2 = 1.   RESTART INT. = 5

Initial guess: u(x) = 1.

ITER   ξ_new/ξ_old   φ_new/φ_old   β_new/β_old   Distance Moved   Cosine
  1   0.022237   0.016    0.0      0.507    0.0
  2   0.055957   0.061    0.042    5.250    0.436
  3   0.944718   0.969    0.848    0.733    0.648
  4   0.337422   0.311    0.458    5.403   -0.119
  5   0.514833   0.488    0.598    0.514    0.850
  6   0.456134   0.514    0.306    9.146   -0.268
  7   0.175810   0.213    0.015   50.312   -0.589
  8   0.921007   0.913    1.456    0.571   -0.815
  9   0.675432   0.686    0.241   24.612    0.292
 10   0.959044   0.954    1.566    0.924   -0.572
APPROXIMATE SOLUTION AFTER 10 ITERATIONS
0.134455  0.144840  0.155511  0.166852  0.179032  0.192125
0.206175  0.221248  0.237454  0.254949  0.273920  0.294529
0.316819  0.340533  0.364837
 11   0.773107   0.784    0.042    6.200    0.695
 12   0.545754   0.537   11.492   58.711   -0.260
 13   0.964337   0.970    0.653    8.117   -0.253
 14   0.491388   0.464    2.928    7.769   -0.888
 15   0.810646   0.862    0.082   52.412   -0.638
 16   0.930735   0.929    1.117    4.835   -0.581
 17   0.772209   0.777    0.124    2.197    0.759
 18   0.901011   0.902    0.259   44.488   -0.004
 19   0.870454   0.864   17.129    1.110   -0.893
 20   0.935846   0.936    0.973    7.972   -0.220

APPROXIMATE SOLUTION AFTER 20 ITERATIONS
0.135973  0.146267  0.156990  0.168578  0.181033  0.194378
0.208699  0.224108  0.240700  0.258540  0.277667  0.298164
0.320253  0.344197  0.369135
Run # 2
N = 15   ITER = 20   TOLERANCE = 0.0000001   EPSILON = 0.001
SCALE1 = 1.   SCALE2 = 1.   RESTART INT. = 5

Initial guess: u(x) = -1.

ITER   ξ_new/ξ_old   φ_new/φ_old   β_new/β_old   Distance Moved   Cosine
  1   0.024627   0.011   0.069    0.419
  2   0.270031   0.353   0.227    4.101
  3   0.384403   0.267   0.478    0.498
  4   0.192316   0.066   0.249    6.337
  5   0.943803   1.937   0.826    1.179
  6   0.147016   0.195   0.134   10.560
  7   0.897035   1.236   0.759    1.424
  8   0.715835   1.125   0.445    2.440
  9   0.424352   0.191   0.816    0.740
 10   0.762809   1.101   0.630    5.825
597 603 810
-0.625 -0. 854
879 606
-0.921 -0 .923 -0 .714
APPROXIMATE SOLUTION AFTER 10 ITERATIONS
-0.010818  -0.012083  -0.013298  -0.014534  -0.015814  -0.017136
-0.018495  -0.019897  -0.021364  -0.022940  -0.024683  -0.026652
-0.028880  -0.031336  -0.033854
11 12 13 14 15 16 17 18 19
0.753074 0.164670 0 .697761 0.475691 0.934011
345026 561565 930327 106747
TOLERANCE MET 20 0.106747
0 .497 0 .923 0 .921 0 .501 0 .042 54 .514 0 .600 1 . 130 0 .611 0 .535 0 .336 7 .775 0 .977 0 .775 0 .545 0 ,371 0 , .222 42 .955 0. ,525 0 , 845 12 , .380 0 . 886 1, . 148 6 . .740 0 . 120 0 056 6 . . 748
0. 120 0 . 056 6 . 748
-0 .922 -0.902 -0.943 -0.170 0.833
-0 .784 -0.538 -0.819 0,850
SOLUTION TO TOLERANCE AFTER 20 ITERATIONS
-0.000140  -0.000202  -0.000203  -0.000212  -0.000225  -0.000236
-0.000248  -0.000266  -0.000295  -0.000329  -0.000357  -0.000373
-0.000389  -0.000432  -0.000464
Run # 3
N = 15   ITER = 20   TOLERANCE = 0.0000001   EPSILON = 0.001
SCALE1 = 1.   SCALE2 = 1.   RESTART INT. = 5

Initial guess: u(x) = 0.1

ITER   ξ_new/ξ_old   φ_new/φ_old   β_new/β_old   Distance Moved   Cosine
1 0 .018135 0 .016 0 .045 0 .491 0 .987 2 0 .106167 0 .017 0 .628 8 .948 -0 .309 3 0 .988131 0 .689 1 .035 1 .050 -0 .963 4 0 .317371 2 .782 0 .057 61 .373 -0 .947 5 0 .951598 0 .905 1 . 190 1 .730 -0 .785 6 0 .531684 0 .530 0 .540 6 .751 0 .547 7 0 .273018 0 .098 0 .939 19 .800 -0 .642 8 0 .972673 1 .206 0 . 880 2 .979 -0 .934 9 0 .025587 0 .071 0 .001 21 .369 -0 .949
10 0 .848208 0 .855 0 .555 4 .434 -0 . 101
-0.000060 -0.000091 -0.000183
11 0,808667 TOLERANCE MET
12 0.806667
APPROXIMATE SOLUTION AFTER 10 ITERATIONS -0.000045 -0.000049 -0.000101 -0.000103 -0.000196 -0.000188
0.775 2.880
0.775 2.880
-0.000052 -0.000105
1.083
1.083
-0.000060 -0.000118
-0.937
-0.000075 -0.000148
SOLUTION TO TOLERANCE AFTER 12 ITERATIONS -0.000103 -0.000093 -0.000102 -0.000110 -0.000122 -0.000139 -0.000157 -0.000169 -0.000172 -0.000175 -0.000187 -0.000215 -0.000249 -0.000260 -0.000250
Run # 4
N = 15   ITER = 20   TOLERANCE = 0.0000001   EPSILON = 0.001
SCALE1 = .8   SCALE2 = .2   RESTART INT. = 5

Initial guess: u(x) = 1.

ITER   ξ_new/ξ_old   φ_new/φ_old   β_new/β_old   Distance Moved   Cosine
  1   0.017840   0.016   0.0      0.635
  2   0.013668   0.010   0.059   10.022
  3   0.979743   0.992   0.957    1.078
  4   0.748617   0.743   0.760   10.425
  5   0.870341   0.849   0.910    0.658
  6   0.240660   0.239   0.244   81.295
  7   0.722346   0.647   0.848   27.015
  8   0.959861   0.927   1.001    1.499
  9   0.311413   0.352   0.263   18.958
 10   0.784809   0.840   0.697   64.004
0 4 6 9 622
- 0 . 3 2 4 0 . 7 2 2
- 0 . 4 8 4 - 0 . 5 5 8
5 8 2 7 0 7
- 0 . 6 9 5
APPROXIMATE SOLUTION AFTER 10 ITERATIONS
0.144920  0.155857  0.167201  0.179468  0.192792  0.207186
0.222649  0.239198  0.256886  0.275817  0.296166  0.318174
0.342076  0.367825  0.394433
 11   0.867901   0.942   0.726     0.990    0.733
 12   0.658712   0.666   0.640    20.750   -0.669
 13   0.934153   0.982   0.811    10.760   -0.358
 14   0.379032   0.448   0.165     6.514   -0.846
 15   0.881185   0.903   0.699    17.570   -0.101
 16   0.988402   0.985   1.026     1.815   -0.748
 17   0.608500   0.641   0.271    33.496    0.466
 18   0.829881   0.854   0.243   125.375   -0.430
 19   0.963788   0.964   0.933     0.764   -0.879
 20   0.966040   0.969   0.729    18.928   -0.096
APPROXIMATE SOLUTION AFTER 20 ITERATIONS
0.136204  0.146413  0.157100  0.168715  0.181223  0.194639
0.209036  0.224499  0.241117  0.258963  0.278092  0.298579
0.320643  0.344636  0.369704
Run # 5
N = 15   ITER = 20   TOLERANCE = 0.0000001   EPSILON = 0.001
SCALE1 = .2   SCALE2 = .8   RESTART INT. = 5

Initial guess: u(x) = 1.

ITER   ξ_new/ξ_old   φ_new/φ_old   β_new/β_old   Distance Moved   Cosine
  1   0.039410   0.017   0.0       2.511    0.0
  2   0.224598   0.367   0.122     8.069    0.292
  3   0.855773   0.978   0.590     3.622    0.630
  4   0.220850   0.234   0.175    16.210   -0.332
  5   0.566544   0.638   0.225     2.558    0.894
  6   0.086403   0.085   0.106    31.715   -0.282
  7   0.957242   0.917   1.399    13.688   -0.830
  8   0.571315   0.624   0.193    26.938    0.344
  9   0.693075   0.651   1.668     4.780   -0.895
 10   0.756445   0.834   0.050    32.105   -0.388
APPROXIMATE SOLUTION AFTER 10 ITERATIONS
0.132636  0.143223  0.154031  0.165389  0.177505  0.190524
0.204563  0.219736  0.236162  0.253960  0.273231  0.294010
0.316206  0.339502  0.363222
 11   0.878304   0.875     1.321     3.299   -0.887
 12   0.354105   0.311     4.746   420.374   -0.496
 13   0.869705   0.929     0.477     2.818    0.895
 14   0.642283   0.692     0.006    44.747   -0.575
 15   0.984188   0.980     7.565    17.111   -0.533
 16   0.868679   0.860     2.581    24.281    0.752
 17   0.747449   0.756     0.186   116.908   -0.347
 18   0.580877   0.583     0.031   468.056   -0.321
 19   0.643902   0.567   389.243     3.477   -0.901
 20   0.869443   0.986     0.013   134.321   -0.911
APPROXIMATE SOLUTION AFTER 20 ITERATIONS
    0.135070    0.145327    0.156005    0.167553    0.179977    0.193290
    0.207582    0.222972    0.239545    0.257332    0.276350    0.296706
    0.318683    0.342552    0.367423
Run # 6
N = 15   ITER = 20   TOLERANCE = 0.0000001   EPSILON = 0.001
SCALE1 = 1.   SCALE2 = 1.   RESTART INT. = 1
Initial guess : u(x) = 1.

 Iter  ξnew/ξold  φnew/φold  βnew/βold  Distance Moved   Cosine
    1  0.022237   0.016    0.0      0.507    0.0
    2  0.226452   0.292    0.045    4.408    0.436
    3  0.477460   0.405    1.791    0.533   -0.563
    4  0.501032   0.572    0.209    3.871    0.401
    5  0.543372   0.447    1.628    0.534   -0.654
    6  0.586106   0.667    0.337    3.875    0.316
    7  0.650501   0.525    1.416    0.536   -0.705
    8  0.715709   0.824    0.470    3.942    0.171
    9  0.787874   0.668    1.265    0.539   -0.742
   10  0.837491   0.962    0.577    4.135   -0.071
APPROXIMATE SOLUTION AFTER 10 ITERATIONS
    0.156999    0.168928    0.181078    0.193824    0.207462    0.222232
    0.238321    0.255872    0.274981    0.295691    0.317980    0.341742
    0.366759    0.392661    0.418885
   11  0.880837   0.793    1.188    0.544   -0.775
   12  0.900304   1.011    0.642    4.438   -0.323
   13  0.917074   0.850    1.163    0.548   -0.801
   14  0.922720   1.013    0.681    4.638   -0.446
   15  0.927977   0.869    1.161    0.550   -0.817
   16  0.930080   1.006    0.703    4.700   -0.482
   17  0.932175   0.878    1.164    0.551   -0.825
   18  0.933520   1.001    0.715    4.714   -0.492
   19  0.934907   0.884    1.166    0.551   -0.831
   20  0.936072   0.999    0.721    4.717   -0.494
APPROXIMATE SOLUTION AFTER 20 ITERATIONS
    0.148437    0.159734    0.171272    0.183426    0.196453    0.210543
    0.225843    0.242477    0.260553    0.280158    0.301342    0.324083
    0.348240    0.373466    0.399102
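In these tables the Cosine column is the cosine of the angle between the two gradients ∇φ and ∇β in the discrete H² inner product; the convergence results of the thesis require this cosine to stay bounded away from -1. The following is a hedged Python sketch of that inner-product computation, mirroring the COMPUTE COSINE IN H2 loop of the program in the Appendix; the helper name and sample grid data are illustrative, not from the thesis.

```python
import math

def h2_cosine(g, b, dx):
    # Discrete H^2 inner product <f,h> = sum over interior nodes of
    # f*h + f'*h' + f''*h'', with central first and second differences
    # (as in Section 7 of the program); returns cos of the angle.
    n = len(g)
    s1 = s2 = dot = 0.0
    for i in range(1, n - 1):
        g1 = (g[i+1] - g[i-1]) / (2.0 * dx)
        g2 = (g[i+1] - 2.0 * g[i] + g[i-1]) / dx**2
        b1 = (b[i+1] - b[i-1]) / (2.0 * dx)
        b2 = (b[i+1] - 2.0 * b[i] + b[i-1]) / dx**2
        s1 += g[i]**2 + g1**2 + g2**2
        s2 += b[i]**2 + b1**2 + b2**2
        dot += g[i] * b[i] + g1 * b1 + g2 * b2
    den = math.sqrt(s1 * s2)
    return dot / den if den > 0.0 else 0.0
```

By Cauchy-Schwarz the returned value always lies in [-1, 1]; a value of 1 (or -1) means the two Sobolev gradients are parallel (or anti-parallel) on the grid.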
APPENDIX
C PROGRAM NAME: FH2.CGR    STEEPEST DESCENT, SOBOLEV NORM
C SOLVES F(X,U,U',U'')=0,  B(U(0),U(1),U'(0),U'(1))=0
C BEGIN DATE: 22 FEB 1984
C REVISED:    24 OCT 1984
C
C     SECTION 1
      IMPLICIT REAL*8 (A-H,O-Z)
      REAL*8 U(200),GR(200),BGR(200),XIGR(200),SXIGR(200)
      REAL*8 WR1(200),Z(200),SGOLD(200)
      REAL*8 ABD(3,200)
      REAL*8 F,F2,F3,F4,B,B1,B2,B3,B4
C###
      F(X,U,U1,U2)= U1 - U
      F2(X,U,U1,U2)= -1.D0
      F3(X,U,U1,U2)= 1.D0
      F4(X,U,U1,U2)= 0.D0
      B(A,B,C,D)= A - B**2
      B1(A,B,C,D)= 1.D0
      B2(A,B,C,D)= -2.D0*B
      B3(A,B,C,D)= 0.D0
      B4(A,B,C,D)= 0.D0
      WRITE(6,1)
    1 FORMAT(1X,'PGM=FH2.CGR '/,
     *  1X,'EQ: U'' = U ',/,
     *  1X,'BC: U(0) = U(1)**2 ')
C     SECTION 2
      N=15
      IF(N.GT.200) CALL EXIT
      NM1=N-1
      NM2=N-2
      NM3=N-3
      NM1SQ=NM1**2
      PI= 4.D0*DATAN(1.D0)
      ELFT= 0.D0
      ERHT= 1.D0
      DX=(ERHT-ELFT)/(N-1)
      ITER=20
      ITMOD=1
      MODPNT=10
      IREST=1
      RIN=1.0
      TOLER=1.0D-7
      EPSILN=1.0D-03
      SCALE1=1.0
      SCALE2=1.0
      WRITE(6,10) ELFT,ERHT,N,ITER,TOLER,EPSILN,
     *  SCALE1,SCALE2,IREST
   10 FORMAT(1X,'ON INTERVAL ',F5.2,' TO ',F5.2/,
     *  1X,'N= ',I5,' ITER= ',I10,
     *  ' TOLERANCE= ',E15.7,' EPSILON=',F10.7/,
     *  1X,' SCALE1= ',F5.3,' SCALE2= ',F5.3,
     *  ' RESTART INT.=',I4/,
     *  20X,' INITIAL GUESS: ')
      DO 20 I=1,N
   20 U(I)=RIN
      WRITE(6,990) (U(I),I=1,N)
C     SECTION 3
      ALP= 1./(4.*DX**2)
      BET= 1./(DX**4)
      ABD(1,1)=0.
      ABD(1,2)=0.
      DO 35 J=3,N
   35 ABD(1,J)= -ALP + BET
      ABD(2,1)= 0.
      ABD(2,2)= -2.*BET
      DO 40 J=3,NM1
   40 ABD(2,J)= -4.*BET
      ABD(2,N)= -2.*BET
      ABD(3,1)= ALP + BET
      ABD(3,2)= 1. + ALP + 5.*BET
      DO 45 J=3,NM2
   45 ABD(3,J)= 1. + 2.*ALP + 6.*BET
      ABD(3,N-1)= 1. + ALP + 5.*BET
      ABD(3,N)= ALP + BET
      CALL DPBFA(ABD,3,N,2,INFO)
      IF(INFO.NE.0) CALL EXIT
C     BEGIN S.D. ITERATION
      DO 1000 KK=1,ITER
C     SECTION 4
      PHIOLD=0.D0
      DO 80 I=1,N
   80 GR(I)=0.
      DO 100 I=2,NM1
      CX=DX*(I-1.)
      CU=U(I)
      CU1= (U(I+1)-U(I-1))/(2.*DX)
      CU2= (U(I+1)-2.*U(I)+U(I-1))/(DX**2)
      CF=F(CX,CU,CU1,CU2)
      CF2=F2(CX,CU,CU1,CU2)
      CF3=F3(CX,CU,CU1,CU2)
      CF4=F4(CX,CU,CU1,CU2)
      PHIOLD=PHIOLD + CF*CF
      GR(I-1)=GR(I-1) - (CF3*CF)/(2.*DX) + (CF4*CF)/(DX**2)
      GR(I)=GR(I) + CF2*CF - (2.*CF4*CF)/(DX**2)
  100 GR(I+1)=GR(I+1) + (CF3*CF)/(2.*DX) + (CF4*CF)/(DX**2)
      PHIOLD=.5*PHIOLD
C     SECTION 5
      DO 380 I=1,N
  380 BGR(I)=0.
      C1= (U(3)-U(1))/(2.*DX)
      C2= (U(N)-U(N-2))/(2.*DX)
      CB = B(U(1),U(N),C1,C2)
      CB1= B1(U(1),U(N),C1,C2)
      CB2= B2(U(1),U(N),C1,C2)
      CB3= B3(U(1),U(N),C1,C2)
      CB4= B4(U(1),U(N),C1,C2)
      BGR(1)= CB1*CB - (CB3*CB)/(2.*DX)
      BGR(3)= (CB3*CB)/(2.*DX)
      BGR(N-2)= -(CB4*CB)/(2.*DX)
      BGR(N)= CB2*CB + (CB4*CB)/(2.*DX)
      BCOLD= .5*CB**2
C     SECTION 6
      DO 400 I=1,N
  400 XIGR(I)=SCALE1*GR(I)+SCALE2*BGR(I)
      XIOLD= SCALE1*PHIOLD + SCALE2*BCOLD
      IF(XIOLD.GT.TOLER) GO TO 550
      WRITE(6,992)
  992 FORMAT(1X,' TOLERANCE MET')
      GO TO 1100
  550 CONTINUE
C     SECTION 7
C     SOBOLEV GRADIENT
      CALL DPBSL(ABD,3,N,2,GR)
      CALL DPBSL(ABD,3,N,2,BGR)
C**   COMPUTE COSINE IN H2
      SUM1=0.D0
      SUM2=0.D0
      DOT=0.D0
      DO 625 I=2,NM1
      CG1= (GR(I+1)-GR(I-1))/(2.*DX)
      CG2= (GR(I+1)-2.*GR(I)+GR(I-1))/(DX**2)
      CB1= (BGR(I+1)-BGR(I-1))/(2.*DX)
      CB2= (BGR(I+1)-2.*BGR(I)+BGR(I-1))/(DX**2)
      SUM1=SUM1 + GR(I)**2 + CG1**2 + CG2**2
      SUM2=SUM2 + BGR(I)**2 + CB1**2 + CB2**2
  625 DOT=DOT + GR(I)*BGR(I) + CG1*CB1 + CG2*CB2
      DENOM=DSQRT(SUM1*SUM2)
      COSINE=0.D0
      IF(DENOM.GT.0.) COSINE= DOT/DENOM
      DO 635 I=1,N
  635 SGOLD(I)=SXIGR(I)
      DO 640 I=1,N
  640 SXIGR(I)= SCALE1*GR(I) + SCALE2*BGR(I)
C     SECTION 8
      IF(MOD(KK,IREST).NE.0) GO TO 644
      DO 642 I=1,N
  642 Z(I)=SXIGR(I)
      GO TO 650
  644 CONTINUE
      DO 645 I=1,N
  645 WR1(I)=SXIGR(I)-SGOLD(I)
      DOT=0.D0
      DO 646 I=2,NM1
      CW1= (WR1(I+1)-WR1(I-1))/(2.*DX)
      CW2= (WR1(I+1)-2.*WR1(I)+WR1(I-1))/(DX**2)
      CS1= (SXIGR(I+1)-SXIGR(I-1))/(2.*DX)
      CS2= (SXIGR(I+1)-2.*SXIGR(I)+SXIGR(I-1))/(DX**2)
  646 DOT= DOT + WR1(I)*SXIGR(I) + CW1*CS1 + CW2*CS2
      SUM1=0.D0
      DO 647 I=2,NM1
  647 SUM1=SUM1 + SGOLD(I)**2 + ((SGOLD(I+1)-SGOLD(I-1))/(2.*DX))**2 +
     *  ((SGOLD(I+1)-2.*SGOLD(I)+SGOLD(I-1))/(DX**2))**2
      GAMMA=0.D0
      IF(SUM1.GT.0.D0) GAMMA= DOT/SUM1
      DO 648 I=1,N
  648 Z(I)=SXIGR(I) + GAMMA*Z(I)
  650 CONTINUE
C     SECTION 9
C     COMPUTE OPTIMUM DISTANCE - SECANT FORMULA
  800 R1= 0.D0
      DO 805 I=2,NM1
      CZ1= (Z(I+1)-Z(I-1))/(2.*DX)
      CZ2= (Z(I+1)-2.*Z(I)+Z(I-1))/(DX**2)
      CS1= (SXIGR(I+1)-SXIGR(I-1))/(2.*DX)
      CS2= (SXIGR(I+1)-2.*SXIGR(I)+SXIGR(I-1))/(DX**2)
  805 R1= R1 + Z(I)*SXIGR(I) + CZ1*CS1 + CZ2*CS2
      DO 810 I=1,N
  810 GR(I)=0.
      DO 850 I=2,NM1
      CX=DX*(I-1)
      CUM1=U(I-1) + EPSILN*Z(I-1)
      CU= U(I) + EPSILN*Z(I)
      CUP1=U(I+1) + EPSILN*Z(I+1)
      CU1= (CUP1-CUM1)/(2.*DX)
      CU2= (CUP1-2.*CU+CUM1)/(DX**2)
      CF=F(CX,CU,CU1,CU2)
      CF2=F2(CX,CU,CU1,CU2)
      CF3=F3(CX,CU,CU1,CU2)
      CF4=F4(CX,CU,CU1,CU2)
      GR(I-1)=GR(I-1) - (CF3*CF)/(2.*DX) + (CF4*CF)/(DX**2)
      GR(I)=GR(I) + CF2*CF - (2.*CF4*CF)/(DX**2)
  850 GR(I+1)=GR(I+1) + (CF3*CF)/(2.*DX) + (CF4*CF)/(DX**2)
      DO 890 I=1,N
  890 BGR(I)=0.
      CW1=U(1)+EPSILN*Z(1)
      CW3=U(3)+EPSILN*Z(3)
      CWNM2=U(N-2)+EPSILN*Z(N-2)
      CWN=U(N)+EPSILN*Z(N)
      C1= (CW3-CW1)/(2.*DX)
      C2= (CWN-CWNM2)/(2.*DX)
      CB = B(CW1,CWN,C1,C2)
      CB1= B1(CW1,CWN,C1,C2)
      CB2= B2(CW1,CWN,C1,C2)
      CB3= B3(CW1,CWN,C1,C2)
      CB4= B4(CW1,CWN,C1,C2)
      BGR(1)= CB1*CB - (CB3*CB)/(2.*DX)
      BGR(3)= (CB3*CB)/(2.*DX)
      BGR(N-2)= -(CB4*CB)/(2.*DX)
      BGR(N)= CB2*CB + (CB4*CB)/(2.*DX)
      DO 900 I=1,N
  900 WR1(I)=SCALE1*GR(I) + SCALE2*BGR(I)
      CALL DPBSL(ABD,3,N,2,WR1)
      R2= 0.D0
      DO 920 I=2,NM1
      CW1= (WR1(I+1)-WR1(I-1))/(2.*DX)
      CW2= (WR1(I+1)-2.*WR1(I)+WR1(I-1))/(DX**2)
      CZ1= (Z(I+1)-Z(I-1))/(2.*DX)
      CZ2= (Z(I+1)-2.*Z(I)+Z(I-1))/(DX**2)
  920 R2= R2 + WR1(I)*Z(I) + CW1*CZ1 + CW2*CZ2
      R3= (EPSILN*R1)/(R2-R1)
C     SECTION 10
      ICOUNT=0
  925 DO 930 I=1,N
  930 WR1(I)= U(I) - R3*Z(I)
      PHINEW=0.D0
      DO 935 I=2,NM1
      CX=DX*(I-1)
      CU=WR1(I)
      CU1= (WR1(I+1)-WR1(I-1))/(2.*DX)
      CU2= (WR1(I+1)-2.*WR1(I)+WR1(I-1))/(DX**2)
  935 PHINEW=PHINEW + F(CX,CU,CU1,CU2)**2
      PHINEW=.5*PHINEW
      C1= (WR1(3)-WR1(1))/(2.*DX)
      C2= (WR1(N)-WR1(N-2))/(2.*DX)
      BCNEW= .5*B(WR1(1),WR1(N),C1,C2)**2
      XINEW=SCALE1*PHINEW + SCALE2*BCNEW
      RR= 0.D0
      RPHI= 0.D0
      RBC= 0.D0
      IF(XIOLD.NE.0.0) RR=XINEW/XIOLD
      IF(PHIOLD.NE.0.0) RPHI=PHINEW/PHIOLD
      IF(BCOLD.NE.0.0) RBC=BCNEW/BCOLD
      IF(RR.LT.0.999999999) GO TO 950
      WRITE(6,993) KK,RR
  993 FORMAT(1X,I10,' RR = ',F12.6,', .GT. 0.999999999')
      R3=.5*R3
      ICOUNT=ICOUNT+1
      IF(ICOUNT.LT.5) GO TO 925
      WRITE(6,996)
  996 FORMAT(1X,' STALLED ')
  950 ICOUNT=0
      DO 955 I=1,N
  955 U(I)=WR1(I)
C**
      IF(MOD(KK,ITMOD).EQ.0)
     *  WRITE(6,994) KK,RR,RPHI,RBC,R3,COSINE
  994 FORMAT(1X,I7,F10.6,3F10.3,5X,F10.3)
      IF(MOD(KK,MODPNT).EQ.0) WRITE(6,990) (U(I),I=1,N)
  990 FORMAT(1X,6F12.6)
C     END SECTION 10
 1000 CONTINUE
C**
 1100 CONTINUE
      WRITE(6,994) KK,RR,RPHI,RBC,R3
      WRITE(6,990) (U(I),I=1,N)
      CON=0.62737639353D0
      CALL EXIT
      END
/INC LINP.DBL
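The program above works in three stages per iteration: form the Euclidean gradients of φ and β, precondition them by solving the banded Sobolev system factored by DPBFA/DPBSL, and move an optimal distance along the combined direction. The following is a hedged Python sketch of the same idea (not the thesis program) for the test problem u' = u on [0,1] with u(0) = u(1)². Two simplifications are made and named plainly: an H¹-type smoothing operator (I − D²), solved by a tridiagonal Thomas sweep, replaces the H² Sobolev system, and the step length is picked by trying halved trial steps rather than the secant formula of Section 9. All names below are illustrative.

```python
import math

N = 31
DX = 1.0 / (N - 1)

def xi(u):
    # xi = phi + beta = 1/2 ||F(u)||^2 + 1/2 B(u)^2, central differences
    phi = 0.5 * sum(((u[i+1] - u[i-1]) / (2.0 * DX) - u[i]) ** 2
                    for i in range(1, N - 1))
    beta = 0.5 * (u[0] - u[-1] ** 2) ** 2
    return phi + beta

def euclidean_gradient(u):
    g = [0.0] * N
    for i in range(1, N - 1):                 # gradient of phi
        r = (u[i+1] - u[i-1]) / (2.0 * DX) - u[i]
        g[i-1] -= r / (2.0 * DX)
        g[i]   -= r
        g[i+1] += r / (2.0 * DX)
    cb = u[0] - u[-1] ** 2                    # gradient of beta
    g[0]  += cb
    g[-1] -= 2.0 * u[-1] * cb
    return g

def sobolev_smooth(g):
    # Solve (I - D^2)s = g by the Thomas algorithm (tridiagonal),
    # with a Neumann-like closure at the two endpoints.
    k = 1.0 / DX ** 2
    diag = [1.0 + 2.0 * k] * N
    diag[0] = diag[-1] = 1.0 + k
    cp, dp = [0.0] * N, [0.0] * N
    cp[0], dp[0] = -k / diag[0], g[0] / diag[0]
    for i in range(1, N):
        m = diag[i] + k * cp[i-1]
        cp[i] = -k / m
        dp[i] = (g[i] + k * dp[i-1]) / m
    s = [0.0] * N
    s[-1] = dp[-1]
    for i in range(N - 2, -1, -1):
        s[i] = dp[i] - cp[i] * s[i+1]
    return s

u = [1.0] * N                                 # initial guess u(x) = 1
x0 = xi(u)
for _ in range(500):
    s = sobolev_smooth(euclidean_gradient(u))
    best_t, best_x, t = 0.0, x0, 4.0
    for _ in range(40):                       # crude halved-step search
        v = [u[i] - t * s[i] for i in range(N)]
        xv = xi(v)
        if xv < best_x:
            best_t, best_x = t, xv
        t *= 0.5
    if best_t == 0.0:
        break                                 # no descent found: stalled
    u = [u[i] - best_t * s[i] for i in range(N)]
    x0 = best_x
```

Starting from u ≡ 1 (the initial guess of Runs 5 and 6), ξ decreases monotonically by construction. The exact solution of the continuous problem is u(x) = e^(x−2), with u(0) = e^(−2) ≈ 0.1353 and u(1) = e^(−1) ≈ 0.3679, which is consistent with the approximate solutions tabulated above.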
BIBLIOGRAPHY
Books
Adams, Robert, Sobolev Spaces, New York, Academic Press, 1975.

Berger, Melvyn, Nonlinearity and Functional Analysis, New York, Academic Press, 1977.

Hestenes, Magnus, Conjugate Direction Methods in Optimization, New York, Springer-Verlag, 1980.

Hobson, E., A Treatise on Plane Trigonometry, Cambridge, Cambridge University Press, 1928.

Muir, Thomas, The Theory of Determinants, Vol. II, The Period 1841 to 1860, London, Macmillan and Co., 1911.

Polak, E., Computational Methods in Optimization, New York, Academic Press, 1971.

Riesz, Frigyes, and Sz.-Nagy, B., Functional Analysis, New York, Frederick Ungar Publishing Co., 1955.

Wismer, David, and Chattergy, R., Introduction to Nonlinear Optimization, New York, North-Holland, 1978.
Articles
Brezis, Haim, and Nirenberg, L., "Positive Solutions of Nonlinear Elliptic Equations Involving Critical Sobolev Exponents," Communications on Pure and Applied Mathematics, XXXVI (July, 1983), 437-477.

Cauchy, A., "Méthode générale pour la résolution des systèmes d'équations simultanées," Comptes Rendus Hebdomadaires des Séances de l'Académie des Sciences, 25 (1847), 536-538.

D'Hondt, T., "The Approximate Solutions of Differential Systems with Nonlinear Boundary Conditions," Zeitschrift für Angewandte Mathematik und Mechanik, 58 (July, 1978), 423-425.
Lieb, Elliott, "Sharp Constants in the Hardy-Littlewood-Sobolev and Related Inequalities," Annals of Mathematics, 118 (September, 1983), 349-374.

Marti, J. T., "Evaluation of the Least Constant in Sobolev's Inequality for H^1(0,s)," SIAM Journal on Numerical Analysis, 20 (December, 1983), 1239-1242.

Mawhin, Jean, "Recent Trends in Nonlinear Boundary Value Problems," Proceedings of the Seventh International Conference on Nonlinear Oscillations (Berlin, September, 1975), 51-70.
Morosanu, G., "On a Class of Nonlinear Differential Hyperbolic Systems with Non-local Boundary Conditions," Journal of Differential Equations, 43 (March, 1982), 345-368.

Neuberger, J. W., "Gradient Inequalities and Steepest Descent for Systems of Partial Differential Equations," Proceedings of the Conference on Numerical Analysis, Texas Tech University, 1983.

_____, "Some Global Steepest Descent Results for Nonlinear Systems," Trends in Theory and Practice of Nonlinear Differential Equations, edited by V. Lakshmikantham, New York, Marcel Dekker, 1984, pp. 413-418.

_____, "Steepest Descent and Time-Stepping for the Numerical Solution of Conservation Equations," Proceedings of the Conference on Physical Mathematics and Nonlinear Differential Equations, Marcel Dekker (to appear).

_____, "Steepest Descent for General Systems of Linear Differential Equations in Hilbert Space," Proceedings of a Symposium Held at Dundee, Scotland, March-July 1982, Lecture Notes in Mathematics, Vol. 1032, New York, Springer-Verlag, 1983, 390-406.

_____, "Steepest Descent for Systems of Nonlinear Partial Differential Equations," Oak Ridge National Laboratory CSD/TM-161, 1981.

Palais, R., and Smale, S., "A Generalized Morse Theory," Bulletin of the American Mathematical Society, 70 (1964), 165-171.
Rosen, G., "Minimum Value for c in the Sobolev Inequality ||φ³|| ≤ c||∇φ||³," SIAM Journal on Applied Mathematics, 21 (July, 1971), 30-32.
Rosen, J. B., "The Gradient Projection Method for Nonlinear Programming, Part I, Linear Constraints," Journal of the Society for Industrial and Applied Mathematics, VIII (March, 1960), 181-217.

Rosen, J. B., "The Gradient Projection Method for Nonlinear Programming, Part II, Nonlinear Constraints," Journal of the Society for Industrial and Applied Mathematics, IX (December, 1961), 514-532.

Rosenbloom, P. C., "The Method of Steepest Descent," Symposia in Applied Mathematics, Providence, Rhode Island, American Mathematical Society, 1956, 127-176.

Sobolev, S. L., "On a Theorem of Functional Analysis," Matematiceskii Sbornik, 4 (1938), 471-479; American Mathematical Society Translations, Series 2, 34 (1963), 39-68.

Talenti, Giorgio, "Best Constant in Sobolev Inequality," Annali di Matematica Pura ed Applicata, Series 4, Vol. 110 (1976), 353-372.
Reports
Glowinski, Roland, Keller, H., and Reinhart, L., Continuation-Conjugate Gradient Methods for the Least Square Solution of Nonlinear Boundary Value Problems, Rapports de Recherche, Institut National de Recherche en Informatique et en Automatique, 1982.