CONVERGENCE AND COMPLEXITY OF NEWTON ITERATION FOR OPERATOR EQUATIONS
J. F. Traub Department of Computer Science Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213
H. Wozniakowski Department of Computer Science Carnegie-Mellon University
Pittsburgh, Pennsylvania 15213 (Visiting from the University of Warsaw)
March 1977 Revised December 1977
This research was supported in part by the National Science Foundation under Grant MCS75-222-55 and the Office of Naval Research under Contract N00014-76-C-0370, NR 044-422.
ABSTRACT
An optimal convergence condition for Newton iteration in a Banach
space is established. We show there exist problems for which the iteration
converges but the complexity is unbounded. Thus for actual computation
convergence is not enough. We show what stronger condition must be imposed
to also assure "good complexity".
1. INTRODUCTION
Numerous papers have analyzed sufficient conditions for the convergence
of algorithms for the solution of non-linear problems. In addition to convergence,
we consider another fundamental question. What stronger conditions
must be imposed to assure "good complexity"? This is clearly one of the
crucial issues (in addition to stability) if one is interested in actual
computation. We believe it is also a most interesting theoretical question.
We consider Newton iteration for a simple zero of a non-linear operator
in a Banach space of finite or infinite dimension. We establish the optimal
radius of the ball of convergence with respect to a certain functional. There
exist problems where the iteration converges but the complexity increases
logarithmically to infinity as the initial iterate approaches the boundary of
the ball of convergence. (This phenomenon does not occur in the Kantorovich
theory of operator equations; see Section 3.) We establish the optimal radius
of the ball of good complexity.
In this paper we limit ourselves to the important case of Newton iteration.
In other papers (Traub and Wozniakowski [76b] and [77]) we study optimal
convergence and complexity for classes of iterations.
We summarize the results of this paper. Definitions and theorems
concerning the optimal ball of convergence are given in Section 2. We conclude
that Section by giving conditions under which the radius of the ball of
convergence is a constant fraction of the radius of the ball of analyticity of
the operator.
Complexity of Newton iteration is studied in Section 3. We show that
Newton iteration may converge but have arbitrarily high complexity, and we
conjecture this is a general phenomenon. We establish the radius of the ball of
good complexity as well as a lower bound on the complexity of Newton iteration.
2. CONVERGENCE OF NEWTON ITERATION
We consider the solution of the non-linear equation

(2.1)  F(x) = 0,

where F: D ⊆ B₁ → B₂ and B₁, B₂ are Banach spaces over the real or complex
fields of dimension N, N = dim(B₁) = dim(B₂), 1 ≤ N ≤ +∞. We solve (2.1)
by Newton iteration, which constructs the sequence {x_i} as follows:

x_{i+1} = x_i − F′(x_i)⁻¹ F(x_i),

where x₀ is an initial guess and i = 0,1,.... Suppose that F has a simple
zero α, i.e., F(α) = 0 and F′(α)⁻¹ exists and is bounded. It is well known
that if F′ is a Lipschitz operator in a neighborhood of α and x₀ is sufficiently
close to α, then {x_i} converges to α and the order of convergence is
two; see, e.g., Gragg and Tapia [74], Kantorovich [48], Ortega and Rheinboldt
[70] and Rall [69].
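For concreteness, the scalar case of the iteration can be sketched in a few lines of Python. This is an illustration added here, not part of the original development; the function names are ours.

```python
def newton(F, dF, x0, steps=20):
    """Newton iteration x_{i+1} = x_i - F'(x_i)^{-1} F(x_i), scalar case."""
    x = x0
    for _ in range(steps):
        x = x - F(x) / dF(x)
    return x

# F(x) = x + x^2 has the simple zero alpha = 0; start inside the ball of convergence.
root = newton(lambda x: x + x * x, lambda x: 1 + 2 * x, x0=0.2)
```

Starting at x₀ = 0.2 the iterates converge quadratically to the zero α = 0.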
In this paper we are interested in sharp bounds on how close x₀ has to be
to α to get quadratic convergence of Newton iteration, and how close x₀ has to
be to α to guarantee good complexity.
We begin with a theorem which describes the character of convergence of
Newton iteration.
Let Γ > 0 and J = {x: ‖x−α‖ ≤ Γ}. From now on we assume that

(2.2)  α is a simple zero of F,
       F′(x) exists for all x ∈ J,
       ‖F′(α)⁻¹(F′(x) − F′(y))‖ ≤ 2A₂‖x−y‖ for all x, y ∈ J,

where A₂ = A₂(Γ). Note that if F″ is continuous for x ∈ J, then

A₂(Γ) = sup_{x∈J} ½ ‖F′(α)⁻¹F″(x)‖,

which shows that in this case A₂ is a bound on the "normalized" second derivative.
Let q be any number such that 0 < q < 1.

Theorem 2.1

If F satisfies (2.2) and

(2.3)  A₂Γ ≤ q/(1+2q),

(2.4)  x₀ ∈ J,

then Newton iteration is well-defined and, with e_i = ‖x_i−α‖,

(2.5)  lim_{i→∞} x_i = α,  e_{i+1} ≤ q e_i, ∀i,

(2.6)  e_{i+1} ≤ C_i e_i²,

where C_i = A₂/(1 − 2A₂e_i). If F is twice continuously differentiable at α, then

(2.7)  x_{i+1} − α = ½ F′(α)⁻¹F″(α)(x_i−α)² + o(‖x_i−α‖²).
Proof

The proof is by induction. Let x_i ∈ J and let

(2.8)  R(x;x_i) = ∫₀¹ {F′(x_i + t(x−x_i)) − F′(x_i)}(x−x_i) dt.

Define w = w(x;x_i) as the polynomial of first degree which interpolates F(x_i)
and F′(x_i). Then w(x;x_i) = F(x_i) + F′(x_i)(x−x_i), and the next approximation
x_{i+1} is the zero of w whenever F′(x_i) is invertible. From Rall [69, p. 124]
we get the error formula

(2.9)  F(x) − w(x;x_i) = R(x;x_i)

for x ∈ J, while due to (2.2),

(2.10)  ‖F′(α)⁻¹R(x;x_i)‖ ≤ A₂‖x−x_i‖².

From the definition of A₂ we have

‖I − F′(α)⁻¹F′(x)‖ ≤ 2A₂‖x−α‖ ≤ 2A₂Γ ≤ 2q/(1+2q) < 1.

This means that F′(x) is invertible for any x ∈ J and x_{i+1} is well-defined.
Furthermore,

‖F′(x)⁻¹F′(α)‖ ≤ 1/(1 − 2A₂‖x−α‖);

see Rall [69, p. 43]. We can rewrite the polynomial w as w(x;x_i) = F′(x_i)(x − x_{i+1}).
Set x = α in (2.9). Then F′(x_i)(x_{i+1} − α) = R(α;x_i), which yields

e_{i+1} = ‖F′(x_i)⁻¹F′(α) F′(α)⁻¹R(α;x_i)‖ ≤ A₂e_i²/(1 − 2A₂e_i) ≤ [A₂Γ/(1 − 2A₂Γ)] e_i ≤ q e_i

due to (2.3). This proves (2.5) and (2.6).

If F″ is continuous at α, then

R(α;x_i) = ½ F″(α)(x_i−α)² + o(‖x_i−α‖²),
F′(x_i) = F′(α) + O(‖x_i−α‖),

which implies (2.7). ∎
Remark 2.1

Theorem 2.1 states quadratic convergence of Newton iteration since
e_{i+1} ≤ C_i e_i² and C_i ≤ A₂/(1 − 2A₂e₀). It is known (see Ortega and Rheinboldt
[70, p. 319]) that if F′ is not a Lipschitz operator, then Newton iteration
does not, in general, possess quadratic convergence. For instance, applying
Newton iteration to F(x) = x + x^a with a = 5/3 (so α = 0) we get

e_{i+1} = (a−1)e_i^a / (1 + a e_i^{a−1}). ∎
Remark 2.2

Theorem 2.1 states how fast Newton iteration converges. The speed of
convergence depends on the value of q. This technique based on q seems to be
general and can be applied to any iteration. See Traub and Wozniakowski [77],
where the class of interpolatory iterations I_n, n ≥ 3, is considered. ∎
We show that if we want (2.5) to hold then (2.3) is sharp for any value
of q, 0 < q < 1.
Theorem 2.2

There exists a polynomial F satisfying (2.2) for which the following is
true. For any q ∈ (0,1) such that A₂(Γ)Γ > q/(1+2q), there exists an x₀ so
that for the Newton iterate x₁ we have e₁ > q e₀.

Proof

Let F(x) = x + x², x ∈ ℝ. Then α = 0 and A₂(Γ) = 1. Suppose that
A₂Γ = Γ > q/(1+2q) and let x₀ = −Γ. The next approximation x₁ constructed
by Newton iteration is x₁ = x₀ − F′(x₀)⁻¹F(x₀) = Γ²/(1−2Γ) > qΓ = −q x₀.
Hence e₁ > q e₀. ∎
From Theorem 2.1 with q → 1⁻ it follows that Newton iteration converges
whenever

A₂Γ < 1/3.
We show that there exists a problem F such that A₂Γ = 1/3 and e₀ = Γ imply
that Newton iteration does not converge.
Theorem 2.3

There exists a problem F that satisfies (2.2) such that if

A₂(Γ)Γ = 1/3,

then there exists an x₀ ∈ J for which Newton iteration is not convergent.

Proof

Define

(2.11)  F(x) = x + x² for x ≤ 0,  F(x) = x − x² for x ≥ 0.

Then F(0) = 0, F′(0) = 1 and F′ is a Lipschitz function with A₂(Γ) = 1. Let
Γ = 1/3 and x₀ = −1/3. Then x₁ = 1/3 and x₂ = −1/3. Hence Newton iteration cycles
and is not convergent. ∎
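The two-point cycle in this proof is easy to reproduce. The following sketch (ours, illustrative) iterates the piecewise map (2.11) from x₀ = −1/3:

```python
def F(x):  return x + x * x if x <= 0 else x - x * x
def dF(x): return 1 + 2 * x if x <= 0 else 1 - 2 * x

x, seq = -1/3, []
for _ in range(6):
    x = x - F(x) / dF(x)
    seq.append(x)
# seq alternates 1/3, -1/3, 1/3, ...: Newton cycles and never converges.
```

(The cycle is repelling in floating point, so rounding drift grows slowly; a few iterations suffice to exhibit it cleanly.)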
Rall [74] showed that, assuming the existence of α and using the Kantorovich
theorem, it is necessary to assume

A₂Γ ≤ (2−√2)/4 ≈ 0.15.

Thus we have obtained an improvement by a factor of (4+2√2)/3 ≈ 2.3.
We have established above that A₂Γ < 1/3 is a necessary and sufficient
condition for the convergence of Newton iteration. Using different techniques,
Rheinboldt [75] established the sufficiency of A₂Γ < 1/3. Independently,
den Heijer [76] established that it was also necessary.
We define the optimal convergence number. Let X ⊂ ℝ be the set of
positive numbers r such that for any F with a simple zero α such that F′ is
a Lipschitz operator in J, A₂Γ < r implies that the iteration converges
whenever ‖x₀−α‖ ≤ Γ.

Definition 2.1

r_c is called the optimal convergence number (with respect to A₂) if

r_c = sup X. ∎
From Theorem 2.1 and Theorem 2.3 we have

Corollary 2.1

For Newton iteration

r_c = 1/3. ∎
Consider the non-linear scalar function

g(Γ) = A₂(Γ)Γ,  Γ ≥ 0.

Provided F is not linear, g(Γ) is strictly increasing, g(0) = 0, g(∞) = ∞.
Let h denote the inverse function g⁻¹. Then Newton iteration converges
provided ‖x₀−α‖ ≤ h(r), r < 1/3.

Definition 2.2

R_c is called the optimal radius of the ball of convergence (with respect
to A₂) if

R_c = h(r_c). ∎

Corollary 2.2

For Newton iteration

R_c = h(1/3). ∎
Remark 2.3

There exists a class of iterative methods whose convergence condition
is of the form A₂(Γ)Γ < r. Examples include the secant method and Newton-like
methods, both for the scalar and multivariate cases. Let φ and ψ be any
two iterations whose convergence condition is of the form A₂(Γ)Γ < r. Let
r_c(φ), r_c(ψ) denote the optimal convergence numbers of φ and ψ, and let
R_c(φ), R_c(ψ) denote the corresponding optimal radii of the ball of convergence.
Then

r_c(φ) ≤ r_c(ψ)

implies

R_c(φ) ≤ R_c(ψ).

Therefore the optimal radii of convergence of two iterations can be ordered if their convergence numbers are ordered. ∎

In general, Newton iteration converges only locally. We give conditions
under which Newton iteration enjoys a "type of global convergence". Suppose that F is
analytic in D = {x: ‖x−α‖ < R} ⊆ B₁, where B₁ is a Banach space over the
complex field, and that

(2.12)  ‖F′(α)⁻¹ F^{(i)}(α)/i!‖ ≤ K^{i−1}

for i = 2,3,.... One way to find K is to use Cauchy's formula. With
M = sup_{x∈D} ‖F′(α)⁻¹F(x)‖, Cauchy's formula gives

‖F′(α)⁻¹ F^{(i)}(α)/i!‖ ≤ M/Rⁱ,

so any K with (M/Rⁱ)^{1/(i−1)} ≤ K for i = 2,3,... satisfies (2.12).
Theorem 2.4

If F satisfies (2.12), then Newton iteration converges in J = {x: ‖x−α‖ ≤ Γ},
where

(2.13)  Γ = c₁/K,  c₁ = 1/6.

Proof

A small calculation shows ‖F′(α)⁻¹F″(x)‖ ≤ f″(‖x−α‖), where f(s) = s/(1−Ks).
Then

A₂(Γ)Γ ≤ KΓ/(1−KΓ)³ < 1/3

for Γ = c₁/K with c₁ = 1/6. This proves (2.13). ∎

Note that this result is especially interesting if the domain radius R is
approximately equal to 1/K, say R ≈ c₂/K. Then Γ = c₁/K = (c₁/c₂)R and Newton
iteration enjoys a "type of global convergence".
3. COMPLEXITY OF NEWTON ITERATION

In the previous section we showed that the sequence of errors e_i = ‖x_i−α‖
for Newton iteration satisfies

(3.1)  e_{i+1} ≤ A₂e_i²/(1 − 2A₂e_i)  whenever A₂Γ < 1/3.

We want to find x_k such that k is the smallest index for which e_k ≤ ε′e₀, where
ε′, 0 < ε′ < 1, is a given number. Define ε ≤ ε′ so that

(3.2)  e_k = εe₀.

The complexity of Newton iteration, which is the total cost of computing x_k,
k ≥ 1, is given by

(3.3)  comp = k · c,

where c is the cost of one Newton step. (See Traub and Wozniakowski [76a]
for a detailed discussion.) Let c(F^{(j)}) denote the cost of one evaluation of
F^{(j)}, j = 0 and 1, and let the combinatory cost d be the cost of solving the
linear system F′(x_i)(x_{i+1}−x_i) = −F(x_i). Then the cost per step is given by

(3.4)  c = c(F) + c(F′) + d.

Note that the dimension of the problem satisfies N ≤ +∞. For multivariate problems,
N < +∞, we can assume that each arithmetic operation costs unity; then c(F^{(j)})
can denote the total number of arithmetic operations needed to evaluate F^{(j)},
and d = d(N) is the cost of the solution of the linear system
F′(x_i)(x_{i+1}−x_i) = −F(x_i). Hence d(N) = O(N^p) with p ≤ 3.
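To make the cost model concrete, here is an illustrative multivariate Newton step in Python (our sketch, not from the paper): the combinatory cost d is the O(N³) Gaussian elimination, on top of the evaluation costs c(F) and c(F′).

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting: the O(N^3) combinatory cost d(N)."""
    n = len(b)
    A = [row[:] for row in A]; b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]; b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n):
            m = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= m * A[k][c]
            b[r] -= m * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (b[k] - sum(A[k][c] * x[c] for c in range(k + 1, n))) / A[k][k]
    return x

def newton_step(F, J, x):
    """One Newton step: solve F'(x) s = -F(x), return x + s."""
    s = solve(J(x), [-v for v in F(x)])
    return [xi + si for xi, si in zip(x, s)]

# 2-dimensional example (ours): F(x, y) = (x + x^2 - y, y + x*y), simple zero at (0, 0).
F = lambda v: [v[0] + v[0] ** 2 - v[1], v[1] + v[0] * v[1]]
J = lambda v: [[1 + 2 * v[0], -1.0], [v[1], 1 + v[0]]]
x = [0.2, 0.1]
for _ in range(10):
    x = newton_step(F, J, x)
```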
For the rest of this section N ≤ +∞. Define

(3.5)  f_{i+1} = A₂f_i²/(1 − 2A₂f_i),  f₀ = e₀,  i = 0,1,....

Clearly, the sequence {f_i} is a majorizing sequence for {e_i}, i.e., e_i ≤ f_i,
∀i. Let comp₁ = k₁c, where k₁ is the smallest index for which f_{k₁} ≤ εe₀. Of
course, comp ≤ comp₁. Let p = A₂e₀ = A₂f₀. Note that comp₁ = comp₁(ε,p).
We derive some properties of comp₁ as p approaches 1/3 (p < 1/3 is the necessary
and sufficient condition for guaranteeing convergence).
Lemma 3.1

Let n be any fixed integer. If p = 1/3 − δ with δ ≤ 1/4^{n+1}, then

(3.6)  f_i = (1 − g_iδ)f₀  where 0 ≤ g_i ≤ 4^{i+1} − 4

for i = 0,1,...,n.

Proof

For i = 0, (3.6) holds with g₀ = 0. Assume by induction that
f_i = (1 − g_iδ)f₀ with 0 ≤ g_i ≤ 4^{i+1} − 4. Since f_{i+1} ≤ f₀, then g_{i+1} ≥ 0 and

f_{i+1} = p(1 − g_iδ)²f₀ / (1 − 2p(1 − g_iδ)),

which yields after algebraic manipulations

g_{i+1} ≤ 4g_i + 9 + 3(g_iδ)² ≤ 4^{i+2} − 4 + 3(g_iδ)² − 3 ≤ 4^{i+2} − 4,

since g_iδ ≤ 1. This proves (3.6).
We are ready to prove
Lemma 3.2

(3.7)  (c/4) lg(1/δ)(1 + o(1)) ≤ comp₁(ε,p) ≤ c lg(1/δ)(1 + o(1)),

where p = 1/3 − δ with δ → 0⁺, and lg is the logarithm to base 2.

Proof

Recall that comp₁ = k₁c, and we seek bounds on k₁. Since

f_{k₁} ≤ (A₂f₀/(1 − 2A₂f₀))^{2^{k₁}−1} f₀,

then

k₁ ≤ lg [ lg ε / lg (A₂f₀/(1 − 2A₂f₀)) ] (1 + o(1)) = lg(1/δ)(1 + o(1)).

This proves the right-hand side of (3.7).

Let δ = 1/4^{n+1}. From Lemma 3.1, for i = 1,2,...,⌊n/2⌋ we get

f_i ≥ (1 − (4^{i+1}−4)/4^{n+1})f₀ ≥ (1 − 2^{−n})f₀ > εf₀

for large n. Thus k₁ ≥ ⌊n/2⌋ = (1/4) lg(1/δ)(1 + o(1)), which proves the left-hand
side of (3.7). ∎
Lemma 3.2 exhibits the important difference between convergence and complexity
of the sequence {f_i}: p < 1/3 assures convergence of {f_i}, but with only
this condition the complexity can be arbitrarily (logarithmically) large.
We believe results similar to Lemma 3.2 also hold for comp (rather than
comp₁) of an arbitrary iteration and therefore propose

Conjecture 3.1

For any superlinear one-point iteration φ there exists a function F with
a simple zero α and a number Γ, 0 < Γ < +∞, such that

(i) the sequence x_{i+1} = φ(x_i,F) constructed by the iteration φ
is convergent to α and ‖x_{i+1}−α‖ < ‖x_i−α‖, ∀i, whenever an
initial approximation x₀ is any point of J = {x: ‖x−α‖ < Γ};

(ii) the complexity comp, which is now a function of x₀ and ε,
satisfies

lim_{‖x₀−α‖→Γ⁻} comp(x₀,ε) = +∞,  ∀ε > 0. ∎

Conjecture 3.1 states that starting from any point of J we get convergence;
however, comp(x₀,ε) can be arbitrarily high when x₀ approaches the boundary of
J.
We prove Conjecture 3.1 for Newton iteration.
Theorem 3.1

If Newton iteration is applied to

(3.8)  F(x) = x + x² for x ≤ 0,  F(x) = x − x² for x ≥ 0,

then for fixed ε, comp(x₀,ε) tends logarithmically to infinity as A₂Γ approaches
1/3 and x₀ = −Γ.

Proof

Note that α = 0 and A₂(Γ) = 1; see (2.11). Applying Newton iteration to
(3.8) we get

x_{i+1} = −sign(x_i) x_i²/(1 − 2|x_i|)  and  e_{i+1} = e_i²/(1 − 2e_i),

which is equivalent to (3.5) with A₂ = 1. Let x₀ = −Γ = −(1/3 − δ). From Lemma 3.2
we get

comp(x₀,ε) ≥ (c/4) lg(1/δ)(1 + o(1))  as δ → 0⁺.

This proves Theorem 3.1. ∎
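The logarithmic blow-up is easy to observe numerically. The sketch below is ours and illustrative: it iterates the majorizing recurrence (3.5) with A₂ = 1 from f₀ = 1/3 − δ and counts the steps needed to halve the initial error; the count grows without bound, roughly like lg(1/δ).

```python
def steps_to_halve(delta):
    """Iterate f_{i+1} = f_i^2 / (1 - 2*f_i) from f_0 = 1/3 - delta
    until f_k <= f_0 / 2; return the step count k."""
    f0 = 1/3 - delta
    f, k = f0, 0
    while f > f0 / 2:
        f = f * f / (1 - 2 * f)
        k += 1
    return k

counts = [steps_to_halve(10.0 ** (-j)) for j in range(2, 7)]
# counts grows as delta -> 0, i.e. as A_2*Gamma -> 1/3.
```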
This phenomenon does not occur in the Kantorovich theory. If h ≤ 1/2 (in
the usual notation; see Rall [74]), the character of the convergence depends
on the multiplicity of α. If α is a simple zero, the convergence is quadratic
and the complexity goes as lg lg(1/ε). If α is a multiple zero, the convergence
is linear and the complexity goes as lg(1/ε).
Note that F defined in (3.8) is not twice-differentiable at α. It
seems to us that comp can tend to infinity as A₂Γ approaches 1/3 also for
twice-differentiable problems. Therefore we propose

Conjecture 3.2

There exists a twice-differentiable function F with a simple zero α such
that the complexity comp = comp(A₂Γ) of Newton iteration satisfies

lim_{A₂Γ→1/3⁻} comp(A₂Γ) = +∞. ∎
Theorem 3.1 states that if A₂Γ is close to 1/3, then complexity can be
arbitrarily large. We seek a bound on A₂Γ (stronger than A₂Γ < 1/3) to be
sure that complexity is not too large. Define

(3.9)  t = max(1, lg 1/ε).

Theorem 3.2

If A₂Γ ≤ 1/4, then the complexity of Newton iteration satisfies

(3.10)  comp ≤ c lg(1+t).
Proof

Let h_{i+1} = K h_i² with K = A₂/(1 − 2A₂e₀) and h₀ = e₀. From (3.1),
e_i ≤ h_i, ∀i. Since A₂e₀ ≤ 1/4, w = (Ke₀)⁻¹ ≥ 2. In Traub and Wozniakowski [76a]
we proved that if w ≥ 2, then the complexity comp(h) of the sequence {h_i} is
bounded by

comp(h) ≤ c lg(1+t).

Since comp ≤ comp(h), (3.10) follows. ∎
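The character of this bound can be illustrated with the majorizing sequence itself (our sketch, not from [76a]): with Ke₀ = 1/2, i.e. w = 2, we have K h_k = (K h₀)^{2^k}, so the number of steps needed to reach ε grows like lg lg(1/ε), i.e. like lg(1+t).

```python
def steps_squaring(eps, K=1.0, h0=0.5):
    """h_{i+1} = K*h_i^2 with K*h0 = 1/2, so K*h_k = (K*h0)^(2^k);
    count steps until h_k <= eps."""
    h, k = h0, 0
    while h > eps:
        h = K * h * h
        k += 1
    return k
```

For example, reaching ε = 10⁻⁴ takes 4 steps and ε = 10⁻¹⁶ takes 6: the exponent doubles each step.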
Remark 3.1

We have chosen the endpoint w = 2 in the inequality w ≥ 2 only for
convenience and definiteness. Any value of w bounded away from unity can be used
to get a good bound on complexity. For instance, w ≥ 2^{1/v} implies
comp ≤ c lg(1+vt). See Traub and Wozniakowski [76a]. ∎
Theorem 3.2 motivates the definition of the optimal complexity number for
a superlinear iteration. Let Y ⊂ ℝ be the set of positive numbers r such that
for any F with a simple zero α such that F′ is a Lipschitz operator in J,
A₂Γ ≤ r implies that the iteration converges and comp ≤ c lg(1+t), ∀t ≥ 1,
whenever ‖x₀−α‖ ≤ Γ.

Definition 3.1

r_g is called the optimal complexity number [with respect to A₂ and the
complexity criterion comp ≤ c lg(1+t)] for a superlinear iteration if

r_g = sup Y. ∎

Compare with the definition of the optimal convergence number r_c. Of
course r_g ≤ r_c, and from Theorem 3.2 it follows that r_g ≥ 1/4 for Newton
iteration.
Theorem 3.3

For Newton iteration

(3.11)  r_g = 1/4.

Proof

Let F(x) = x + x². Then α = 0, F′(α) = 1 and A₂(Γ) = 1. Newton iteration
produces {x_i} such that

(3.12)  x_{i+1} = x_i²/(1 + 2x_i).

Let t = 1. Then comp ≤ c lg(1+t) = c is equivalent to e₁ ≤ ½e₀. Set x₀ = −Γ.
From (3.12) we get e₁ = Γ²/(1 − 2Γ). Thus e₁ ≤ ½e₀ is equivalent to Γ ≤ 1/4.
Hence the complexity criterion fails for this F whenever A₂(Γ)Γ = Γ > 1/4, so
r_g ≤ 1/4, and from Theorem 3.2 we conclude r_g = 1/4. ∎
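The endpoint Γ = 1/4 in this proof can be checked directly; the sketch below is ours and illustrative. For F(x) = x + x² and x₀ = −Γ, one Newton step reduces the error by exactly the factor Γ/(1−2Γ), which equals 1/2 precisely at Γ = 1/4.

```python
def one_step_ratio(gamma):
    """For F(x) = x + x^2 and x0 = -gamma, return e1/e0 after one Newton step."""
    x0 = -gamma
    x1 = x0 - (x0 + x0 * x0) / (1 + 2 * x0)  # x1 = gamma^2 / (1 - 2*gamma)
    return abs(x1) / abs(x0)

# The ratio is exactly 1/2 at gamma = 1/4; larger gamma fails the halving criterion.
```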
Remark 3.2

It follows from Remark 3.1 that as long as A₂Γ is bounded away from 1/3
we are assured of good complexity. ∎
We summarize our results on the optimal convergence number and the optimal
complexity number (Corollary 2.1 and Theorem 3.3):

(i) A₂e₀ < r_c = 1/3 assures convergence, but complexity can be arbitrarily large;

(ii) A₂e₀ ≤ r_g = 1/4 assures convergence and good complexity.
Definition 3.2

R_g is called the optimal radius of the ball of good complexity [with
respect to A₂ and comp ≤ c lg(1+t)] for a superlinear iteration if

R_g = h(r_g). ∎

Corollary 3.1

For Newton iteration

R_g = h(1/4). ∎

Remark 3.3

A remark analogous to Remark 2.3 holds here concerning the ordering of
the optimal radii of good complexity for two iterations. ∎
The ratio between the optimal radius of the ball of convergence and the
optimal radius of the ball of good complexity,

R_c/R_g = h(1/3)/h(1/4),

indicates what stronger condition must be imposed to assure good complexity.
We end this paper by deriving a lower bound on the complexity of Newton
iteration. Let F be any operator for which

(3.13)  e_{i+1} ≥ a₂e_i²

for a positive a₂. We show that the class of such operators is not empty. Let
F(x) = 1/x − a, a > 0, x ≠ 0. Then

e_{i+1} = a e_i².
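That this F attains (3.13) with equality is easy to verify; the sketch below is ours. The Newton map for F(x) = 1/x − a is x ↦ x(2 − ax), and the errors satisfy e_{i+1} = a e_i² exactly.

```python
a = 2.0                  # simple zero at alpha = 1/a = 0.5
x = 0.4
errs = [abs(x - 1 / a)]
for _ in range(4):
    x = x * (2 - a * x)  # Newton step for F(x) = 1/x - a (no division needed)
    errs.append(abs(x - 1 / a))

# e_{i+1} = a * e_i^2 holds at every step.
```

This is the classical division-free iteration for computing a reciprocal, which is why the quadratic error relation is exact rather than merely an upper bound.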
Recall that c = c(F) + c(F′) + d is the cost of one Newton step, where d
is the cost of solving a linear system of size N. Let c_L and c_U denote lower
and upper bounds on c. If a₂e₀ is bounded away from zero, then Theorem 3.1 in
Traub and Wozniakowski [76a] yields

comp ≥ c_L(lg t − lg lg t).
We summarize the complexity analysis for Newton iteration.

Theorem 3.4

If the assumptions of Theorem 3.2 and (3.13) hold, then the complexity of
Newton iteration satisfies

c_L(lg t − lg lg t) ≤ comp ≤ c_U lg(1+t),

where t = max(1, lg 1/ε). ∎
ACKNOWLEDGMENT
We are indebted to H. T. Kung for several important suggestions on this
paper.
REFERENCES

den Heijer [76]  den Heijer, C., "Iterative Solution of Nonlinear Equations by Imbedding Methods," Report NW 32/76, Mathematisch Centrum, Amsterdam, August 1976.

Gragg and Tapia [74]  Gragg, W. B. and Tapia, R. A., "Optimal Error Bounds for the Newton-Kantorovich Theorem," SIAM J. Numer. Anal. 11 (1974), 10-13.

Kantorovich [48]  Kantorovich, L. V., "Functional Analysis and Applied Mathematics," Uspehi Mat. Nauk 3 (1948), 89-185 (Russian). Translated by C. D. Benster, National Bureau of Standards Report No. 1509, Washington, D.C., 1952.

Ortega and Rheinboldt [70]  Ortega, J. M. and Rheinboldt, W. C., Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, 1970.

Rall [69]  Rall, L. B., Computational Solution of Nonlinear Operator Equations, John Wiley and Sons, New York, 1969.

Rall [74]  Rall, L. B., "A Note on the Convergence of Newton's Method," SIAM J. Numer. Anal. 11 (1974), 34-36.

Rheinboldt [75]  Rheinboldt, W. C., "An Adaptive Continuation Process for Solving Systems of Nonlinear Equations," Technical Report TR-393, University of Maryland, July 1975.

Traub and Wozniakowski [76a]  Traub, J. F. and Wozniakowski, H., "Strict Lower and Upper Bounds on Iterative Computational Complexity," in Analytic Computational Complexity, edited by J. F. Traub, Academic Press, 1976, 15-34.

Traub and Wozniakowski [76b]  Traub, J. F. and Wozniakowski, H., "Optimal Radius of Convergence of Interpolatory Iterations for Operator Equations," Dept. of Computer Science Report, Carnegie-Mellon University, 1976.

Traub and Wozniakowski [77]  Traub, J. F. and Wozniakowski, H., "Convergence and Complexity of Interpolatory-Newton Iteration in a Banach Space," Dept. of Computer Science Report, Carnegie-Mellon University, 1977.