
J. Inst. Maths Applics (1976) 17, 99-110

A Consistently Rapid Algorithm for Solving Polynomial Equations†

J. B. MOORE

Department of Electrical Engineering, University of Newcastle, New South Wales, 2308, Australia

[Received 25 March 1974 and in revised form 20 September 1974]

Novel approaches are used to ensure consistently rapid convergence of an algorithm, based on Newton's method, for the solution of polynomial equations with real or complex coefficients. It appears that in terms of algorithm complexity and calculation time, the new algorithm represents a considerable improvement on the various always-convergent algorithms in the literature. In terms of algorithm accuracy no improvements are claimed.

1. Introduction

SINCE the number of algorithms available for solving polynomial equations is legion (indicating perhaps the unsatisfactory performance of existing algorithms), there must be strong justification for introducing yet another such algorithm. To grasp the significance of the algorithm introduced in this paper, a broad classification of the various existing algorithms is in order.

First, there are the simple algorithms which usually converge for initial approximations in the neighbourhood of a polynomial zero but may not converge, or may converge very slowly, when the initial approximation is not in a neighbourhood of a

zero (Wilkinson, 1963). These algorithms fall naturally into three categories. There are those which, when convergent, converge linearly (Lin's algorithm) or superlinearly (Muller's algorithm) in the neighbourhood of a zero and require only the evaluation of the polynomial itself at each iteration. Then there are the more popular quadratically convergent (when convergent) algorithms of Newton and of Bairstow. For these, both the polynomial and its derivative are evaluated at each iteration. Bairstow's method is restricted to polynomials with real coefficients and is marginally more efficient than Newton's method for these polynomials. Then again there are the cubically convergent methods such as those of Laguerre. For these, the polynomial and both its first and second derivatives are evaluated at each iteration. These latter algorithms give the least trouble. In fact, it is possible to arrive at a good universal polynomial solver by first using Laguerre's method and then, if convergence is not reached in say thirty iterations, switching to Bairstow's method. More recently, simple algorithms which converge to all zeros simultaneously, either quadratically or cubically, have been developed, as in Aberth (1973).

A second group of polynomial-solving algorithms is the one where convergence is

guaranteed (Jenkins & Traub, 1970, 1972; Dejon & Nickel, 1969; Moore, 1967, 1970; Grant & Hitchins, 1971; Ward, 1957; Householder, 1971; Stewart, 1969; Lehmer, 1961; Bach, 1969). These algorithms are more sophisticated, particularly those that guarantee a rapid rate of convergence.

† Work supported by the Australian Research Grants Committee.


See, for example, the three-stage algorithm of Jenkins & Traub (1970).

In a previous paper by the author (Moore, 1967) a simple always-convergent

algorithm is described. (This algorithm has been rediscovered recently in Grant & Hitchins (1971).) The algorithm consists of Newton's algorithm augmented with a few logic instructions and a minimal number of calculations to ensure convergence of the method. The resulting algorithm is simple enough, but there is still the difficulty that for some polynomials and for some initial approximations convergence may be quite slow. An attempt to accelerate this algorithm (Moore, 1970) using standard-type accelerating techniques has been reasonably successful for low-order (less than tenth-order) polynomials, but these techniques result in slow convergence for certain high-order polynomials and certain initial conditions.

From the above sketchy review of available methods, one point emerges. There is still the need for a simple always-convergent algorithm which gives consistently rapid convergence irrespective of the polynomial equation considered and irrespective of the first approximation to a root of this equation. The algorithm of this paper is designed to supply this need.

Novel approaches are employed in an attempt to minimize the number of iterations required for polynomial zero evaluations subject to the constraint that the calculation

time for each iteration be of the same order as that of the particular simple algorithm on which it is based. The paper will be concerned with algorithms based on the familiar quadratically convergent (when convergent) Newton's method.

A brief outline of the remainder of the paper is as follows. Section 2 reviews the derivation of the ideas and results of Moore (1967) and Grant & Hitchins (1971) and

develops concurrently additional results and insights necessary for understanding the algorithms of the paper. Section 3 introduces the novel approaches taken in this paper to achieve a consistently rapid polynomial zero-finding algorithm, while comparative performance data and concluding remarks are the material of the final Section 4.

2. Theory and Background

Consider the polynomial equation

f(z) = Σ_{k=0}^{n} a_k z^{n-k} = 0,   a_0 = 1,        (1)

where z = x + iy is the complex variable and the a_k are the coefficients, which may be real or complex. We seek an iterative technique for updating an estimate of a real or complex solution to this equation. Consider now the algorithm

Δz_j = -f(z_j) f_z^{-1}(z_j),   where Δz = Δx + iΔy,
z_j = z_{j-1} + α Δz_{j-1},   z_0 given,        (2)

for j = 1, 2, ..., where z_j is the jth estimate of a zero of f(z). The notation f_z(z) is used to indicate the derivative of f(z). The possibly complex quantity α is termed the step-size scale factor. When α = 1, the algorithm (2) is simply Newton's algorithm, which, when convergent, is quadratically convergent at least to a simple zero of f(z). For some initial approximations z_0, Newton's algorithm may not converge. For example, if f(z) has no real roots and z_0 is real, then z_j is real for all j and the algorithm will not converge.
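For concreteness, one iteration of algorithm (2) can be sketched in a few lines of Python (a minimal sketch, not the paper's FORTRAN program; the function name, the coefficient convention and the use of numpy are illustrative assumptions):

    import numpy as np

    def newton_update(a, z, alpha=1.0):
        # One iteration of algorithm (2) for f(z) = sum_k a_k z^(n-k), a_0 = 1.
        # Returns the new estimate and the step norm D(z) = |f(z)/f_z(z)|.
        f = np.polyval(a, z)                 # f(z_j)
        fz = np.polyval(np.polyder(a), z)    # f_z(z_j)
        dz = -f / fz                         # Delta z_j
        return z + alpha * dz, abs(dz)       # alpha = 1 is the pure Newton step

    # e.g. one step on z^8 + 1 = 0 (the polynomial of Fig. 1) from z_0 = 1 + i:
    z1, D1 = newton_update([1.0] + [0.0] * 7 + [1.0], 1.0 + 1.0j)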


Straightforward extensions to Newton's algorithm ensure its convergence (Moore, 1967; Grant & Hitchins, 1971).

Some of the ideas of these papers are restated with additional insights into the problem included in the restatement. Consider the functions (norms of f(z) and of the Newton step Δz, respectively)

F(z) = |f(z)|   and   D(z) = |f(z) f_z^{-1}(z)|.        (3)

It is not difficult to show that these functions have the following properties.

(a) The functions are non-negative for all z and become infinite as z becomes infinite.
(b) The functions and their inverses are continuous and differentiable in the complex plane except possibly at isolated points [f(z) is analytic for all z; f^{-1}(z) and Δz^{-1}(z) are analytic for all z except the zeros of f(z); and Δz(z) is analytic for all z except the zeros of f_z(z)].
(c) The zeros of each of the functions are the zeros of f(z).
(d) The only minima of the functions are the zeros of f(z).
(e) In any closed domain of the z-plane [not containing a zero of f_z(z)], the function F(z) [D(z)] cannot have a maximum.
(f) Extrema of F(z) that are not zeros of F(z) are saddle points. Saddle points of F(z) are poles of D(z), or equivalently zeros of f_z(z).

Notice that (d) and (e) do not exclude the existence of saddle points of F(z) and D(z). As a consequence of properties (a)-(f), the problem of finding a zero of f(z) is

equivalent to the problem of finding the location of minima (or equivalently zeros) of F(z) [or of D(z)].

The algorithms of Moore and Grant & Hitchins incorporate modifications of Newton's algorithm to ensure that F(z) is decreased at each step. In particular, for the algorithm (2), the step-size scale factor α is selected as α = 1, 1/2, 1/4, ... until there is a decrease in F(z). The key result of Grant & Hitchins is that if saddle points of F(z) are not encountered (that is, |f_z(z_j)| > ε for some ε > 0), then the percentage decrease in F(z) is bounded below, and thus the algorithm converges to a zero of F(z), or equivalently to a zero of f(z). An important intermediate result is that the directions of the increments Δz (for the cases f(z) and f_z(z) non-zero) are in the direction of steepest descent of F(z). The steepest-descent directions are denoted θ.

Insight into the algorithms of Moore and Grant & Hitchins is gained if one notes the property of the contours of constant θ = ∠Δz, namely that these contours change direction dramatically in the vicinity of a saddle-point region of F(z) or pass through the saddle points (see Fig. 1 for an example). Now since, for α real, the algorithm (2) gives increments tangent to the constant-θ contours, and since the magnitude of these increments increases as saddle-point regions of F(z) (zeros of f_z(z)) are approached, it is clear that, to allow navigation of the sharp direction changes of the θ contours, the step size α|Δz| must be decreased, and many iterations consumed. The restriction of α to the set of real numbers promotes slow convergence in the saddle-point regions and allows the possibility of convergence to a saddle point of F(z). This latter possibility can be avoided by the introduction of certain heuristics to the algorithm which will not be discussed here.
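The safeguarded iteration just described admits a compact rendering (a minimal Python sketch, not the authors' program; the tolerance, iteration cap and the nudge off a saddle point are illustrative assumptions):

    import numpy as np

    def damped_newton(a, z0, tol=1e-12, max_iter=500):
        # Newton's method safeguarded as in Moore (1967) and Grant & Hitchins
        # (1971): take alpha = 1, 1/2, 1/4, ... until F(z) = |f(z)| decreases.
        z = z0
        for _ in range(max_iter):
            f = np.polyval(a, z)
            if abs(f) < tol:
                return z
            fz = np.polyval(np.polyder(a), z)
            if fz == 0:                  # saddle point of F (a zero of f_z):
                z += 1e-6 * (1 + 1j)     # heuristic nudge off the saddle
                continue
            dz = -f / fz
            alpha = 1.0
            while abs(np.polyval(a, z + alpha * dz)) >= abs(f) and alpha > 2**-40:
                alpha *= 0.5             # halve alpha until F decreases
            z += alpha * dz
        return z

The restriction of α to real values in this sketch is precisely what forces many small steps along the sharply turning θ contours near a saddle point, the behaviour the present paper sets out to avoid.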


The algorithm of this paper is a variation of the algorithms of Moore and Grant & Hitchins. The variations appear minor at first glance, but their impact is very significant. The variations are now summarized qualitatively.

FIG. 1. (a) Constant F and θ contours for z^8 + 1 = 0. (b) Algorithm trajectories against a background of constant D contours for z^8 + 1 = 0.

It is not difficult to establish that an increase in D(z) at any iteration of the standard Newton algorithm (2) indicates that the iterations are heading for a saddle-point region of F, or equivalently for the region of sharp direction changes of the steepest-descent θ contours. The heuristic adopted is to retreat part way towards the previous zero approximation and then venture forward again via a side-stepping iteration


to the left (right) of the steepest-descent direction if any change in this direction at each iteration is to the left (right).

The heuristics also accelerate Newton's algorithm by a selection of the step-size

scale factor as α = 1, 2, ..., provided that the following three conditions are satisfied:
(a) convergence is slow;
(b) there is no increase in D;
(c) any change in direction is less than (say) 12°.

The acceleration steps are invaluable when converging to multiple zeros or when the initial approximation is considerably in error.

A further variation is to set up the algorithm as a minimization of D(z) rather than of F(z) as in Moore and Grant & Hitchins. The reason for this variation is to simplify the algorithm; also, experience indicates that it works somewhat better. Convergence proofs are less satisfactory than those in Grant & Hitchins, but since they parallel these and tell us nothing of the rate of convergence except in the vicinity of a zero of f(z), they will not be included here. Suffice it to say that, with a few additional instructions, it is possible to ensure that at (say) every twentieth iteration F(z) is decreased using the ideas of Grant & Hitchins. Such an algorithm is guaranteed to converge using the proofs of Grant & Hitchins. Our experience is that such modifications are superfluous.

3. A Consistently Rapid Algorithm Based on Newton's Method

In constructing the polynomial equation solving algorithm of this paper, algorithm complexity is kept to a minimum while still achieving good computational efficiency. This is achieved by a judicious selection of permissible scale factors α. A classification of the permissible increments, or steps, that can be taken at any iteration is as follows.

(1) Newton step. Here α = 1.
(2) Short Newton step. Here α = 1 but z_j = z_{j-1} + Δz_{j-1} D_M/D_{j-1} rather than z_j = z_{j-1} + Δz_{j-1} as in (2), where D_M is a maximum allowable step size.
(3) Side step. Here α = 1.5∠±45°. The particular sign of the 45° direction change is chosen to anticipate the direction change of the constant-θ contours. It is

calculated using θ information as follows:

sgn[sin(θ_j - θ_{j-1})] = sgn(sin θ_j cos θ_{j-1} - cos θ_j sin θ_{j-1})
                        = sgn(Δy_j Δx_{j-1} - Δx_j Δy_{j-1}).

If θ_j = θ_{j-1}, then an arbitrary sign is used.

(4) Acceleration step. Here α = 2, 3, ...
(5) Deceleration step. Here |α_j| = ½|α_{j-1}|, ∠α_j = ∠α_{j-1} ± 45°.

A number of tests are employed to determine when each of the above steps is to be used. The tests are now listed.

(1) (D_j ≶ D_{j-1}) tests whether D is decreasing at each stage.
(2) (D_j ≶ D_M) tests whether D is less than some maximum allowable step size.
(3) (YP), where YP = √(D_j D_{j-1}) cos(θ_j - θ_{j-1}) = (Δx_j Δx_{j-1} + Δy_j Δy_{j-1}) ≶ 0.98 √(D_j D_{j-1}), tests direction changes (0.98 ≈ cos 12°).


(4) (|α| S_j ≶ S_{j-1}), where S_j = F_j/F_{j-1}, tests whether convergence is better than linear.
(5) (|α| ≶ 0.96) tests the magnitude of |α|. The value 0.96 is a flag or indicator.
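One plausible reading of this step classification and its tests is sketched below in Python. It is a sketch under the reconstructions above, not the paper's program: the 1.5∠±45° side step, the deceleration rule and the 0.98 threshold (≈ cos 12°) are as reconstructed, and all names are illustrative.

    import cmath
    import math

    COS_12 = 0.98    # test (3): cos 12 deg ~ 0.978, the direction-change threshold

    def turn_sign(dz, dz_prev):
        # sgn[sin(theta_j - theta_{j-1})] via the cross product
        # Dy_j Dx_{j-1} - Dx_j Dy_{j-1}; True means the theta contour turns left.
        cross = dz.imag * dz_prev.real - dz.real * dz_prev.imag
        return True if cross == 0 else cross > 0   # arbitrary sign when collinear

    def select_alpha(alpha, Dj, Dj1, cos_dtheta, Sj, Sj1, DM, left):
        # Choose the next step-size scale factor alpha (steps (1)-(5) above).
        rot = cmath.exp((1j if left else -1j) * math.pi / 4)   # 45-degree turn
        if Dj >= Dj1:                     # test (1) fails: saddle region suspected
            if abs(alpha) > 1.0:          # previous side step/acceleration failed:
                return 0.5 * alpha * rot  # (5) deceleration step: halve, turn again
            return 1.5 * rot              # (3) side step: 1.5 at +/- 45 degrees
        if Dj > DM:                       # test (2): proposed step too long:
            return DM / Dj                # (2) short Newton step: clamp length to DM
        if cos_dtheta > COS_12 and abs(alpha) * Sj > Sj1:   # tests (3) and (4)
            return abs(alpha) + 1.0       # (4) acceleration step: alpha = 2, 3, ...
        return 1.0                        # (1) plain Newton step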

The flow chart of Fig. 2 indicates an algorithm which is considered to be a reasonable compromise between complexity and calculation efficiency. A number of comments are now offered on this algorithm.

[Flow chart: select z_0 and initialize α, D_M, ε and F_0; calculate f(z_j), f_z(z_j) and F_j = |f(z_j)|², S_j = F_j/F_{j-1}; apply the logic for the selection of α (see Fig. 2(b)); update z_j = z_{j-1} + α Δz_{j-1}, j = j + 1; on termination, accept z_j as a zero of f(z).]

FIG. 2(a). Algorithm flow chart.
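Rendered in code, the loop of Fig. 2(a) might look as follows (a simplified Python sketch reusing select_alpha and turn_sign from the sketch above; the initialization constants and the termination shortcut are illustrative, not the paper's):

    import numpy as np

    def find_zero(a, z0, DM0=0.4, eps=1e-10, max_iter=300):
        # Simplified rendering of the Fig. 2(a) flow chart.
        z, alpha = z0, 1.0
        D_prev = S_prev = float("inf")
        dz_prev, DM = 1.0 + 0.0j, DM0
        F_prev = abs(np.polyval(a, z))
        for _ in range(max_iter):
            f = np.polyval(a, z)
            fz = np.polyval(np.polyder(a), z)
            if abs(f) == 0.0 or fz == 0.0:
                return z                     # exact zero (or stalled at f_z = 0)
            dz = -f / fz
            D, S = abs(dz), abs(f) / F_prev  # step norm D_j and ratio S_j
            if D < eps:
                return z                     # crude stand-in for the Appendix test
            cos_dtheta = (dz.real * dz_prev.real + dz.imag * dz_prev.imag) \
                         / (D * abs(dz_prev))          # cos(theta_j - theta_{j-1})
            alpha = select_alpha(alpha, D, D_prev, cos_dtheta, S, S_prev,
                                 DM, turn_sign(dz, dz_prev))
            z += alpha * dz                  # z_j = z_{j-1} + alpha * Delta z_{j-1}
            D_prev, S_prev, dz_prev, F_prev, DM = D, S, dz, abs(f), min(DM, D)
        return z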


(1) Although at first glance the algorithm appears to be more complicated than the algorithms of Moore and Grant & Hitchins, in which α is always real, in fact the computer program requires less than twenty additional FORTRAN statements. The gain, as far as computational efficiency is concerned, is considerable, as the next section indicates.

(2) Notice that acceleration is only permitted when D_j < D_{j-1}, the direction change at an iteration is small, and |α| S_j > S_{j-1}, or on a side step (see (5) above). The first requirement needs no explanation. The second requirement is simply that the direction change at an iteration is small: clearly it is unwise to consider acceleration when the direction change at an iteration is more than say 12°. The third requirement is that the algorithm

without acceleration is achieving no better than linear convergence of F. Acceleration is achieved by incrementing α by unity.

FIG. 2(b). Logic for the selection of α.


(3) The short Newton step is taken whenever D_j > D_M, where D_M is a maximum

allowed step size. Whenever D_j < D_M, then D_M is set equal to D_j. The initial value of D_M is set to be the suspected diameter of the smallest region containing but one zero. After a zero removal of nonzero magnitude, D_M is set to be the magnitude of the previously found zero.

(4) A side-stepping (acceleration) iteration is taken whenever D_j ≥ D_{j-1} and |α| > 1. In other words, whenever a saddle-point region of F(z) is suspected, an attempt is made to side-step this region. Such an iteration rarely fails to reduce D, but if it does not reduce D, a deceleration step is used.

(5) The algorithm is designed so that rarely is D_j > D_{j-1} after one side-stepping iteration. In this event a deceleration step is used. It may be possible to construct a polynomial and an initial value such that, in some very complicated saddle-point regions of F(z), D does not decrease for the deceleration steps prescribed by the

algorithm. Such nonconvergence, or its cousin slow convergence, is easily detected, and a new starting point can be chosen to avoid such difficulties. This possible improvement is not incorporated into the flow chart of Fig. 2 to avoid overcomplication.

(6) Multiple zeros of f(z). For multiple roots, acceleration steps are required in order to give rapid convergence. Accuracy may be a problem, as pointed out in discussing iteration termination.

(7) Calculation time for each iteration. The increase in calculation time for each iteration over that of a regular Newton iteration is at most the equivalent of (say) twelve multiplications. This is negligible compared to the 8n or so multiplications required for the regular Newton iteration once n is four or greater. The decision

network to select a suitable α consists of about four or so decisions for each path through the network and thus represents a negligible cost.

(8) It might be thought that, since saddle points of F cause trouble in the algorithms of Moore and Grant & Hitchins, saddle points of D may cause trouble in the algorithm

of Fig. 2. This is not the case. On the contrary, it is in these regions that the side-step iteration is most efficient. Moreover, the algorithm is not attempting to decrease D along the direction of steepest descent of D. (To calculate this direction, higher derivatives would be needed.) Actually it would be convenient to have available the steepest-descent direction of D, for then a composite of the steepest-descent direction of F and this direction would result in a highly efficient algorithm. In essence, the side-stepping iteration of the present algorithm is a crude attempt to attain this composite direction when it is needed most.

(9) Slow convergence or nonconvergence may sometimes occur in practice due to poor scaling, a poor initial zero estimate combined with a selection of too small a maximum allowable step size, computer overflow, etc. However, since convergence is normally consistently rapid, such abnormal situations are quickly detected. If the algorithm has not converged to a zero in ...


4. Comparative Performance

Any realistic comparison must consider the three factors: accuracy, calculation

time, and program complexity.

In the first instance it is reasonable to compare the present algorithm with other algorithms based on Newton's method. Now since most algorithms based on Newton's method use the same recursive equation in the final iterations for finding a polynomial

zero, the accuracy of all the various algorithms based on Newton's method is of the same order once the precision of the calculations is specified. The calculation time for each iteration is approximately the same for the two cases studied in this section. The present algorithm requires the addition of about 30 FORTRAN statements to the simplest Newton algorithm. We suggest that the iteration number is a reasonable criterion for comparison.

Figure 3(a) presents comparative information on the number of iterations to find a zero of z^n + 1 = 0 for n = 3 to 100 with a first approximation z_0 = 1 + i. Figure 3(b) indicates the average number of iterations to solve for all the zeros of z^n + 1 = 0 using deflation and using the deflation zero as the starting point for the next zero calculation.

FIG. 3. (a) Performance data: iterations against degree n of the polynomial z^n + 1 = 0, for the modified Newton algorithm and the algorithm of Fig. 2. (b) Solution of z^n + 1 = 0 for all zeros.


Complex zeros are extracted in pairs, and the iteration number for the extraction of real roots is weighted by one half since real arithmetic is involved. Double-precision arithmetic is used, but even so, ill-conditioning becomes a difficulty for some of the polynomials above order 60. Notice that the algorithm of Moore and Grant & Hitchins is very inefficient for some n. In fact, convergence is so slow in some instances as to be just as annoying as an algorithm which fails to converge for some initial conditions. The algorithm of this paper clearly represents a considerable improvement in consistency of convergence rate. (Almost identical results are obtained for other first approximations with |z_0| < 1, at least with the maximum allowable step size set initially as D_M = 0.4.)

The same problem is solved by an always-convergent algorithm in Bach (1969), where each iteration requires three function evaluations as well as a derivative evaluation (that is, each iteration involves twice the amount of calculation as the algorithms of Sections 2 and 3). On the basis of the results reported in Bach, his algorithm requires at best twice as many iterations as the algorithm of this paper and at worst ten times as many iterations. (That is, the present algorithm is from four to twenty times faster than that of Bach and only a little (if any) more complicated.) This comparison points up the consistency quality of the present algorithm.

Of course the polynomial f(z) = z^n + 1 is not the only standard difficult polynomial to solve, but the results for this problem are in fact almost identical to those obtained with other difficult polynomials.

For ten polynomials each with [10, 20, 30, 40, 50, 60] random zeros (|z| < 2), the average number of iterations to solve for all zeros of each polynomial (extracting

complex zeros in pairs, weighting real-zero iterations by one half, and using double-precision arithmetic) is in the range [5½±1, 5½±1, 6½±1, 6½±1, 7±1, 8±1], confirming the statement above.

For polynomials with low-order multiple zeros there is little or no variation from the results indicated so far. For the case of high-order multiple zeros, inherent accuracy limitations are encountered, as illustrated by, for example, the solution of (z+1)^n = 0 with first estimate [0.5, 0.01]. Here, with n = 3, 4, 5, 6, 7, 8, 9, the number of iterations to yield a first zero is 22, 24, 27, 27, 27, ... and the average number of iterations to yield all zeros is 7, 6, 7, 6, 8, 6, 9 respectively. For higher-order polynomials the inaccuracies using double-precision arithmetic are intolerable. The methods of this paper have not improved the accuracy of the basic Newton algorithm; they have only speeded it up and avoided poor convergence characteristics.

A comparison of the present algorithm with those of Jenkins & Traub, Dejon & Nickel and Lehmer is best done first on the basis of complexity. These algorithms involve considerably more computation for each iteration than the present algorithm. For example, the algorithm of Dejon & Nickel requires all derivatives of f(z) to be calculated; thus the calculation time for each iteration grows in proportion to Σ_{i=1}^{n} i, that is of order n², as compared with order n for the present algorithm. It is not surprising, therefore, that its number of iterations is less than that of the algorithm of Section 3. Even so, the complexity and calculation times appear to be of an order of magnitude greater than those of the algorithm of this paper. The same may be said of many of the always-


convergent algorithms in the literature. We shall not give a detailed comparison for each of these methods.

To conclude, it is perhaps worthwhile to mention that investigations so far carried

out in applying the techniques of this paper to Bairstow's method and Laguerre's method have been partially successful and fully successful, respectively. As for Newton's method, the first indication that convergence may not be proceeding satisfactorily in either of the above methods is that the step size increases at a particular iteration. Also, as for Newton's method, the application of the ideas of Section 3, namely to introduce a direction change when the step size increases, helps considerably. However, with second-derivative information available, as in Laguerre's method, more efficient ways of dealing with the problem mentioned are available. Even so, the algorithm of the present paper is more efficient in terms of calculation time and program complexity. With the modified Bairstow's method, due to the existence of local minima of D for some polynomials, convergence cannot be guaranteed, and frequently slow convergence occurs. Even when the modified Bairstow's method converges, its performance on average is not as good as that of the modified Newton method.

REFERENCES

ABERTH, O. 1973 Maths Comput. 27, 339-344.
ADAMS, D. A. 1967 Communs A.C.M. 10, 655-658.
BACH, H. 1969 Communs A.C.M. 12, 675-684.
DEJON, B. & NICKEL, K. 1969 In Constructive Aspects of the Fundamental Theorem of Algebra: Proceedings of the Symposium Conducted at the IBM Research Laboratory, Zurich-Ruschlikon, Switzerland, June 5-7, 1967 (eds B. Dejon & P. Henrici). London: Wiley-Interscience, pp. 1-35.
GRANT, J. A. & HITCHINS, G. D. 1971 J. Inst. Maths Applics 8, 122-129.
HOUSEHOLDER, A. S. 1971 Num. Math. 16, 375-382.
JENKINS, M. A. & TRAUB, J. F. 1970 Num. Math. 14, 252-263. (See also 1972 Communs A.C.M. 15, 97-99.)
LEHMER, D. H. 1961 J. Ass. Comput. Mach. 8, 151-162.
MOORE, J. B. 1967 J. Ass. Comput. Mach. 14, 311-315.
MOORE, J. B. 1970 IEEE Trans. Computers C-19, 79-80.
PETERS, G. & WILKINSON, J. H. 1971 J. Inst. Maths Applics 8, 16-35.
STEWART, G. W. 1969 Num. Math. 13, 458-471.
WARD, J. A. 1957 J. Ass. Comput. Mach. 4, 148-150.
WILKINSON, J. H. 1963 Rounding Errors in Algebraic Processes. Englewood Cliffs, N.J.: Prentice-Hall.

Appendix

(1) Evaluation of f(z) and f_z(z) for the algorithm (2) requires 2n complex (8n real) multiplications for the case when the a_k are complex, and half this amount when the coefficients a_k are real. Perhaps the simplest method to evaluate these quantities is by nested multiplication, as follows. Two sequences b_k and c_k, defined by

b_0 = 1,   b_k = z b_{k-1} + a_k   (k = 1, ..., n),
c_0 = 1,   c_k = z c_{k-1} + b_k   (k = 1, ..., n-1),

are calculated simultaneously to obtain f(z) = b_n and f_z(z) = c_{n-1} (a sketch appears at the end of this Appendix).


After a zero of f(z) has been accepted and divided out from the polynomial, the reduced-order polynomial is Σ_{k=0}^{n-1} b_k z^{n-1-k}.

(2) Termination of the algorithm. Notice that the algorithm of Fig. 2 accepts a zero of f(z) when D has reached its minimum value achievable using the Newton algorithm (2). This is detected when D increases at an iteration and yet √D is less than a small value ε, which is initially set, for any zero removal, as a conservative estimate of a round-off error bound. Notice also that in the event that the initial estimate ε is set too low, as will usually be the case when converging to a high-order multiple zero for example, the actual round-off errors cause oscillations in D. These oscillations in D are detected and cause an increase in ε so that the algorithm will terminate. For real polynomials an alternative approach is to use a round-off error bound for ε (Adams, 1967).

(3) Estimate of calculation accuracy. The value of √(D_j) at the final iteration is a reasonable estimate of the computation accuracy and should be printed out along with the value of z_j at the final iteration.

(4) Polynomial deflation. Here the results of Peters & Wilkinson (1971) are relevant if full accuracy is required in a composite deflation process. With the simpler forward-deflation process, zeros should be found in increasing magnitude. This presents no

special difficulty since, with the initial estimate [0, 0], the algorithm usually finds the zero of minimum magnitude. Once zeros are found using a deflation technique (with high-precision arithmetic in at least the last four iterations), improvement in accuracy can be achieved using the undeflated polynomial. If ill-conditioning is suspected, the zeros found using the techniques above could be chosen as starting points for algorithms which find all roots simultaneously. The combination of the methods of this paper with the methods of Aberth (1973) will yield more rapid solutions than the methods of Aberth alone, at least when there is little a priori information concerning zero locations.

(5) Selection of initial zero estimate z_0. It is perhaps worthwhile for the algorithm to

select an initial approximation to a zero of f(z) from two or three tentative estimates of the minimum-magnitude zero by choosing the one with the minimum value of D. Suggestions for the various initial estimates are: (a) standard initial estimates such as [0.005, 0] or random zeros in the unit square; (b) the conjugate value of the previously calculated root (for the case of real coefficients this is a good check on accuracy involving only a few extra iterations); (c) the previously calculated zero value (in case there is a multiple root or cluster of roots).
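The nested multiplication of note (1) and the forward deflation of note (4) fit together naturally. A Python sketch (names are illustrative, and plain Newton stands in for the full α logic of Fig. 2):

    def eval_and_deflate(a, z):
        # Nested multiplication of note (1): the b_k yield f(z) and, once z is an
        # accepted zero, the deflated coefficients; the c_k yield f_z(z).
        # a = [a_0, ..., a_n] with a_0 = 1.
        n = len(a) - 1
        b, c = [a[0]], [a[0]]
        for k in range(1, n + 1):
            b.append(z * b[-1] + a[k])       # b_k = z b_{k-1} + a_k
            if k < n:
                c.append(z * c[-1] + b[k])   # c_k = z c_{k-1} + b_k
        return b[n], c[n - 1], b[:n]         # f(z), f_z(z), deflated coefficients

    # Find all zeros of z^3 + 1 = 0 by Newton plus forward deflation:
    a, zeros = [1.0, 0.0, 0.0, 1.0], []
    while len(a) > 1:
        z = 0.0 + 0.01j                      # small initial estimate, cf. note (5)
        for _ in range(200):
            f, fz, _ = eval_and_deflate(a, z)
            if abs(f) < 1e-12 or fz == 0:
                break
            z -= f / fz                      # plain Newton step
        zeros.append(z)
        a = eval_and_deflate(a, z)[2]        # divide out the accepted zero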