
EXAMPLE - TAYLOR SERIES METHOD

Consider solving

y′ = y cos x,   y(0) = 1

Imagine writing a Taylor series for the solution Y (x),

say initially about x = 0. Then

Y(h) = Y(0) + h Y′(0) + (h²/2) Y′′(0) + (h³/6) Y′′′(0) + · · ·

We can calculate Y ′(0) = Y (0) cos(0) = 1. How do

we calculate Y ′′(0) and higher order derivatives?

Y′(x) = Y(x) cos(x)

Y′′(x) = −Y(x) sin(x) + Y′(x) cos(x)

Y′′′(x) = −Y(x) cos(x) − 2 Y′(x) sin(x) + Y′′(x) cos(x)


Then

Y′′(0) = −Y(0) sin(0) + Y′(0) cos(0) = 1

Y′′′(0) = −Y(0) cos(0) − 2 Y′(0) sin(0) + Y′′(0) cos(0) = 0

Thus

Y(h) = Y(0) + h Y′(0) + (h²/2) Y′′(0) + (h³/6) Y′′′(0) + · · ·
     = 1 + h + h²/2 + · · ·

We can generate as many terms as desired, obtaining

added accuracy as we do so. In this particular case,

the true solution is Y(x) = exp(sin x). Thus

Y(h) = 1 + h + h²/2 − h⁴/8 + · · ·


We can truncate the series after a particular order.

Then continue with the same process to generate ap-

proximations to Y (2h), Y (3h), ... Letting xn = nh,

and using the order 2 Taylor approximation, we have

Y(xn+1) = Y(xn) + h Y′(xn) + (h²/2) Y′′(xn) + (h³/6) Y′′′(ξn)

with xn ≤ ξn ≤ xn+1. Drop the truncation error

term, and then define

yn+1 = yn + h y′n + (h²/2) y′′n,   n ≥ 0

with

y′n = yn cos(xn)

y′′n = −yn sin(xn) + y′n cos(xn)

We give a numerical example of computing the solution with Taylor series methods of orders 2, 3, and 4.
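To make the order 2 scheme concrete, here is a small Python sketch (our own illustration, not part of the original notes; the name taylor2 is ours) that advances y′ = y cos x, y(0) = 1 with the formulas above and compares against the true solution exp(sin x):

    import math

    def taylor2(h, b):
        # Order 2 Taylor series method for y' = y*cos(x), y(0) = 1,
        # advanced from x = 0 to x = b with stepsize h.
        x, y = 0.0, 1.0
        while x < b - 1e-12:
            y1 = y * math.cos(x)                      # y'_n  = y_n cos(x_n)
            y2 = -y * math.sin(x) + y1 * math.cos(x)  # y''_n = -y_n sin(x_n) + y'_n cos(x_n)
            y += h * y1 + 0.5 * h * h * y2
            x += h
        return y

    for h in [0.1, 0.05, 0.025]:
        err = abs(math.exp(math.sin(1.0)) - taylor2(h, 1.0))
        print(h, err)    # errors should drop by roughly a factor of 4 as h is halved

Halving h should cut the error by about 4, consistent with the O(h²) convergence shown later in these notes.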


A 4th-ORDER EXAMPLE

Consider solving

y′ = −y, y(0) = 1

whose true solution is Y(x) = exp(−x). Differentiating

the equation

Y ′(x) = −Y (x)

we obtain

Y ′′ = −Y ′ = Y

Y ′′′ = Y ′ = −Y, Y (4) = Y

Then expanding Y (xn + h) in a Taylor series,

Y(xn+1) = Yn + h Y′n + (h²/2) Y′′n + (h³/6) Y′′′n + (h⁴/24) Y⁽⁴⁾n + (h⁵/120) Y⁽⁵⁾(ξn)


Dropping the truncation error, we have the numerical

method

yn+1 = yn + h y′n + (h²/2) y′′n + (h³/6) y′′′n + (h⁴/24) y⁽⁴⁾n
     = (1 − h + h²/2 − h³/6 + h⁴/24) yn

with y0 = 1. By induction,

yn = (1 − h + h²/2 − h³/6 + h⁴/24)^n,   n ≥ 0

Recall that

exp(−h) = 1 − h + h²/2 − h³/6 + h⁴/24 − (h⁵/120) exp(−ξ)

with 0 < ξ < h. Then

yn = (exp(−h) + (h⁵/120) exp(−ξ))^n
   = exp(−nh) (1 + (h⁵/120) exp(h − ξ))^n
   ≈ exp(−xn) (1 + (xn h⁴/120) exp(h − ξ))

Thus

Y(xn) − yn = xn exp(−xn) · O(h⁴)
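The induction argument above is easy to check numerically. The sketch below (ours, with hypothetical names) forms yn = (1 − h + h²/2 − h³/6 + h⁴/24)^n with xn = nh = 1 and watches the error behave like xn exp(−xn) h⁴/120:

    import math

    def taylor4_decay(h, n):
        # For y' = -y, y(0) = 1, the order 4 Taylor method reduces to
        # y_{n+1} = (1 - h + h^2/2 - h^3/6 + h^4/24) * y_n.
        amp = 1.0 - h + h**2/2 - h**3/6 + h**4/24
        return amp**n

    for h in [0.2, 0.1, 0.05]:
        n = round(1.0 / h)                   # integrate out to x_n = 1
        err = abs(math.exp(-1.0) - taylor4_decay(h, n))
        print(h, err, err / h**4)            # err/h^4 should settle near x_n*exp(-x_n)/120 ≈ 0.0031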


ERROR

Returning to the preceding order 2 approximation, we

have

Y(xn+1) = Y(xn) + h Y′(xn) + (h²/2) Y′′(xn) + (h³/6) Y′′′(ξn)

yn+1 = yn + h y′n + (h²/2) y′′n,   n ≥ 0

Subtracting,

en+1 = en + h e′n + (h²/2) e′′n + (h³/6) Y′′′(ξn)

with en = Y(xn) − yn, e′n = Y′(xn) − y′n, and e′′n = Y′′(xn) − y′′n. Here

e′n = Y(xn) cos(xn) − yn cos(xn) = en cos(xn),   so |e′n| ≤ |en|

Similarly,

e′′n = −en sin(xn) + e′n cos(xn),   so |e′′n| ≤ |en| + |e′n| ≤ 2 |en|


Returning to the error equation

en+1 = en + h e′n + (h²/2) e′′n + (h³/6) Y′′′(ξn)

we have

|en+1| ≤ |en| + h |e′n| + (h²/2) |e′′n| + (h³/6) |Y′′′(ξn)|
       ≤ [1 + h + h²] |en| + (h³/6) ‖Y′′′‖∞

We can now imitate the proof of convergence for Euler's method, obtaining

|en| ≤ exp(2xn) |e0| + h² [(exp(2xn) − 1)/12] ‖Y′′′‖∞

provided h ≤ 1. Thus

|Y(xn) − yh(xn)| ≤ O(h²)

By similar means, we can show that for the Taylor

series method of order r, the method will converge

with

|Y(xn) − yh(xn)| ≤ O(h^r)


We can introduce the Taylor series method for the

general problem

y′ = f(x, y), y(x0) = Y0

Simply imitate what was done above for the particular

problem y′ = y cos x.

In general,

Y′(x) = f(x, Y(x))

Y′′(x) = fx(x, Y(x)) + fy(x, Y(x)) Y′(x)
       = fx(x, Y(x)) + fy(x, Y(x)) f(x, Y(x))

Y′′′(x) = fxx + 2 fxy f + fyy f² + fy fx + fy² f

and we can continue in this manner. Thus we can

calculate derivatives of any order for Y (x); and then

we can define Taylor series of any desired order.

This used to be considered much too arduous a task

for practical problems, because everything had to be

done by hand. But with symbolic programs such as

Mathematica and Maple, Taylor series can be con-

sidered a serious framework for numerical methods.

Programs that implement this in an automatic way,

with varying order and stepsize, are available.
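As a rough illustration of what such symbolic tools do, the SymPy package can generate the higher derivatives automatically by repeated total differentiation (this sketch is ours, not Atkinson's code):

    import sympy as sp

    x, y = sp.symbols('x y')
    f = y * sp.cos(x)               # right-hand side of y' = f(x, y); any f(x, y) will do

    def total_derivative(g):
        # d/dx of g(x, Y(x)) along the solution: g_x + g_y * f
        return sp.simplify(sp.diff(g, x) + sp.diff(g, y) * f)

    d1 = f                          # Y'
    d2 = total_derivative(d1)       # Y''  = f_x + f_y f
    d3 = total_derivative(d2)       # Y''' = f_xx + 2 f_xy f + f_yy f^2 + f_y f_x + f_y^2 f
    print(d2)                       # for this f: -y*sin(x) + y*cos(x)**2 (up to rearrangement)
    print(d3)

Evaluating d1, d2, d3 at (xn, yn), for example with sp.lambdify, gives exactly the y′n, y′′n, y′′′n needed by a Taylor series method of order 3.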


RUNGE-KUTTA METHODS

Nonetheless, most researchers still consider Taylor se-

ries methods to be too expensive for most practical

problems (a point contested by others). This leads us

to look for other one-step methods which imitate the

Taylor series methods, without the necessity to cal-

culate the higher order derivatives. These are called

Runge-Kutta methods. There are a number of ways in

which one can approach Runge-Kutta methods, and

I will follow a fairly classical approach. Later I may

introduce some other approaches to the development

of such methods.


We begin by considering explicit Runge-Kutta meth-

ods of order 2. We want to write

yn+1 = yn + hF (xn, yn, h; f)

with F (xn, yn, h; f) some carefully chosen approxima-

tion to f(x, y) on the interval [xn, xn+1]. In particu-

lar, write

F (x, y, h; f)

= γ1f(x, y) + γ2f(x + αh, y + βhf(x, y))

This is some kind of “average” derivative. Intuitively,

we should restrict α so that 0 ≤ α ≤ 1. Otherwise,

how should the constants γ1, γ2, α, β be chosen?

We attempt to make the leading terms in the Taylor

series of Y (x + h) look like the corresponding terms

in a Taylor series expansion of

Y (x) + hF (x, Y (x), h; f)

More precisely, introduce the truncation error

Tn(Y ) = Y (x + h)− [Y (x) + hF (x, Y (x), h; f)]


Expand Y (x + h), obtaining

Y(x + h) = Y(x) + h Y′(x) + (h²/2) Y′′(x) + · · ·
         = Y(x) + h f + (h²/2) [fx + fy f]
           + (h³/6) [fxx + 2 fxy f + fyy f² + fy fx + fy² f] + · · ·

in which all functions f, fx, etc. are evaluated at (x, Y(x)).

Next expand f(x + αh, Y (x) + βhf(x, Y (x))) in a

series in powers of h. This requires the multivariable

Taylor series:

f(a + δ, b + η) = Σ_{j=0}^{r} (1/j!) (δ ∂/∂x + η ∂/∂z)^j f(x, z) |_{x=a, z=b}
                  + (1/(r+1)!) (δ ∂/∂x + η ∂/∂z)^(r+1) f(x, z) |_{x=a+θδ, z=b+θη}

in which 0 < θ < 1. Use this with a = x, b = Y (x),

δ = αh, and η = βhf(x, Y (x)).


We write out the first few terms:

f(x + αh, Y(x) + βh f(x, Y(x)))
  = f(x, Y(x)) + αh fx + βh fy f
    + (1/2) [(αh)² fxx + 2 (αh)(βhf) fxy + (βhf)² fyy] + · · ·

in which all terms f , fx, fy, etc. are evaluated at

(x, Y (x)). Introduce this into the formula

F (x, Y (x), h; f)

= γ1f(x, Y (x)) + γ2f(x + αh, Y (x) + βhf(x, Y (x)))

and collect together common powers of h. Then sub-

stitute this into

Tn(Y ) = Y (x + h)− [Y (x) + hF (x, Y (x), h; f)]

and also substitute the earlier expansion of Y (x+ h).


This leads to the formula

Tn(Y) = c1 h + c2 h² + c3 h³ + · · ·

c1 = [1 − γ1 − γ2] f(x, Y(x))

c2 = (1/2 − γ2 α) fx + (1/2 − γ2 β) fy f

c3 = (1/6 − (1/2) γ2 α²) fxx + (1/3 − γ2 αβ) fxy f
     + (1/6 − (1/2) γ2 β²) fyy f² + (1/6) fy fx + (1/6) fy² f

We try to set to zero as many as possible of the coef-

ficients c1, c2, c3,... Note that c3 cannot be made to

equal zero in general, because of the final terms

(1/6) fy fx + (1/6) fy² f

which are independent of the choice of the coefficients

γ1, γ2, α, β.


We can make both c1 and c2 zero by requiring

1 − γ1 − γ2 = 0,   1/2 − γ2 α = 0,   1/2 − γ2 β = 0

Note that we cannot choose γ2 = 0, as that would

lead to a contradiction in the last two equations. Since

there are 3 equations and 4 variables, we let γ2 be

unspecified and then solve for α, β, γ1 in terms of γ2.

This yields

α = β = 1/(2 γ2),   γ1 = 1 − γ2

Case: γ2 = 1/2.

yn+1 = yn + (h/2) [f(xn, yn) + f(xn + h, yn + h f(xn, yn))]

This is one iteration of the trapezoidal rule with an

Euler predictor.

Case: γ2 = 1.

yn+1 = yn + h f(xn + h/2, yn + (h/2) f(xn, yn))
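Both cases are one-line methods in code. A minimal Python sketch of the γ2 = 1 (midpoint) case, applied to the earlier example y′ = y cos x (our own illustration; the function name is ours):

    import math

    def rk2_midpoint_step(f, x, y, h):
        # y_{n+1} = y_n + h f(x_n + h/2, y_n + (h/2) f(x_n, y_n))
        return y + h * f(x + h/2, y + (h/2) * f(x, y))

    f = lambda x, y: y * math.cos(x)
    x, y, h = 0.0, 1.0, 0.1
    for n in range(10):                       # integrate from x = 0 to x = 1
        y = rk2_midpoint_step(f, x, y, h)
        x += h
    print(y, math.exp(math.sin(1.0)))         # agreement to roughly O(h^2)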


We can derive other second order formulas by choos-

ing other values for γ2. Sometimes this is done by

attempting to minimize the next term in the trunca-

tion error. In the case of the above formula,

Tn(Y) = c3 h³ + · · ·

with

c3 = (1/6 − (1/2) γ2 α²) fxx + (1/3 − γ2 αβ) fxy f
     + (1/6 − (1/2) γ2 β²) fyy f² + (1/6) fy fx + (1/6) fy² f

We can regard this as the dot product of two vectors:

c3 = ( 1/6 − (1/2) γ2 α²,  1/3 − γ2 αβ,  1/6 − (1/2) γ2 β²,  1/6,  1/6 )
     · ( fxx,  fxy f,  fyy f²,  fy fx,  fy² f )


Then

|c3| ≤ d1(γ2) d2(f)

with

d1(γ2) = [ (1/6 − (1/2) γ2 α²)² + (1/3 − γ2 αβ)² + (1/6 − (1/2) γ2 β²)² + 1/18 ]^(1/2)

and d2(f) is the length of the second vector. Pick γ2 to minimize d1(γ2). This occurs at γ2 = 3/4, and this

yields the formula

yn+1 = yn + (h/4) [f(xn, yn) + 3 f(xn + (2/3) h, yn + (2/3) h f(xn, yn))]


3-STAGE FORMULAS

We have just studied 2-stage formulas. To obtain a

higher rate of convergence, we must use more deriva-

tive evaluations. A 3-stage formula looks like

yn+1 = yn + hF (xn, yn, h; f)

F (x, y, h; f) = γ1V1 + γ2V2 + γ3V3

V1 = f(x, y)

V2 = f(x + α2h, y + β2,1hV1)

V3 = f(x + α3h, y + β3,1hV1 + β3,2hV2)

Again it can be analyzed by expanding the truncation

error

Tn(Y ) = Y (x + h)− [Y (x) + hF (x, Y (x), h; f)]

in powers of h. Then the coefficients are determined

by setting the lead coefficients to zero. This can be

generalized to dealing with p-stage formulas. The al-

gebra becomes very complicated; and the equations

one obtains turn out to be dependent.


p-STAGE FORMULAS

We have just studied 2-stage formulas. To obtain a

higher rate of convergence, we must use more deriva-

tive evaluations. A p-stage formula looks like

yn+1 = yn + hF (xn, yn, h; f)

F(x, y, h; f) = γ1 V1 + γ2 V2 + · · · + γp Vp

V1 = f(x, y)
V2 = f(x + α2 h, y + β2,1 h V1)
  ...
Vp = f(x + αp h, y + h [βp,1 V1 + · · · + βp,p−1 Vp−1])

Again it can be analyzed by expanding the truncation

error

Tn(Y ) = Y (x + h)− [Y (x) + hF (x, Y (x), h; f)]

in powers of h. Then the coefficients are determined

by setting the lead coefficients to zero.
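The p-stage formula translates directly into a generic stepper once the coefficients γj, αi, βi,j are tabulated. The sketch below is our own; the 2-stage methods derived earlier fit it with p = 2 (the γ2 = 1/2 case is used here as a test):

    import math

    def explicit_rk_step(f, x, y, h, gamma, alpha, beta):
        # gamma: p weights gamma_j;  alpha: p nodes alpha_i (alpha[0] unused);
        # beta:  p x p strictly lower triangular array, beta[i][j] for j < i.
        p = len(gamma)
        V = [0.0] * p
        V[0] = f(x, y)
        for i in range(1, p):
            yi = y + h * sum(beta[i][j] * V[j] for j in range(i))
            V[i] = f(x + alpha[i] * h, yi)
        return y + h * sum(gamma[j] * V[j] for j in range(p))

    # gamma_2 = 1/2: the trapezoidal rule with an Euler predictor
    gamma = [0.5, 0.5]
    alpha = [0.0, 1.0]
    beta  = [[0.0, 0.0],
             [1.0, 0.0]]
    f = lambda x, y: y * math.cos(x)
    print(explicit_rk_step(f, 0.0, 1.0, 0.1, gamma, alpha, beta))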


How high an order can be obtained?

A number of years ago, John Butcher derived the re-

sults shown in the following table.

p          1   2   3   4   5   6   7   8
Max order  1   2   3   4   4   5   6   6

This is counter to what people had believed; and it has

some important consequences. We will return to this

when considering the definition of some new Runge-

Kutta formulas.


A POPULAR FOURTH ORDER METHOD

A popular classical formula uses

yn+1 = yn + hF (xn, yn, h; f)

F (x, y, h; f) =1

6[V1 + 2V2 + 2V3 + V4]

V1 = f(x, y)

V2 = f(x + (1/2) h, y + (1/2) h V1)

V3 = f(x + (1/2) h, y + (1/2) h V2)

V4 = f (x + h, y + hV3)

For y′ = f(x), this becomes

yn+1 = yn + (h/6) [f(xn) + 4 f(xn + h/2) + f(xn + h)]

which is simply Simpson’s integration rule.


It can be shown that the truncation error is

Tn(Y) = c(xn) h⁵ + O(h⁶)

for a suitable function c(x). In addition, we can show

|Y(xn) − yh(xn)| ≤ d(xn) h⁴

with a suitable d(x). Finally, one can prove an asymp-

totic error formula

Y(x) − yh(x) = D(x) h⁴ + O(h⁵)

This leads to the Richardson error estimate

Y(x) − yh(x) ≈ (1/15) [yh(x) − y2h(x)]

EXAMPLE. Solve

y′ = 1/(1 + x²) − 2 y²,   y(0) = 0

Its true solution is

Y(x) = x/(1 + x²)

Use stepsizes h = .25 and 2h = .5.
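A Python sketch of this example (ours, not from the notes): take the classical fourth order formula of the previous page, integrate with h = 0.25 and 2h = 0.5 to some output point, and form the Richardson estimate [yh − y2h]/15. The output point b = 2 below is our own choice for illustration; the notes do not fix the interval.

    def rk4_step(f, x, y, h):
        # Classical fourth order Runge-Kutta step
        V1 = f(x, y)
        V2 = f(x + h/2, y + h/2 * V1)
        V3 = f(x + h/2, y + h/2 * V2)
        V4 = f(x + h, y + h * V3)
        return y + h/6 * (V1 + 2*V2 + 2*V3 + V4)

    def solve(f, x0, y0, h, b):
        x, y = x0, y0
        while x < b - 1e-12:
            y = rk4_step(f, x, y, h)
            x += h
        return y

    f = lambda x, y: 1.0 / (1.0 + x*x) - 2.0 * y*y
    b = 2.0                                   # assumed output point
    yh, y2h = solve(f, 0.0, 0.0, 0.25, b), solve(f, 0.0, 0.0, 0.5, b)
    true_err = b / (1.0 + b*b) - yh           # Y(b) - yh(b)
    estimate = (yh - y2h) / 15.0              # Richardson estimate of the same quantity
    print(true_err, estimate)                 # the two numbers should be close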


CONVERGENCE

For the numerical method

yn+1 = yn + hF (xn, yn, h; f)

we define the truncation error as

Tn(Y ) = Y (xn+1)− [Y (xn) + hF (xn, Y (xn), h; f)]

with Y (x) the true solution of

y′ = f(x, y), y(x0) = Y0

Introduce

τn(Y) = (1/h) Tn(Y) = [Y(xn+1) − Y(xn)]/h − F(xn, Y(xn), h; f)

We will want τn(Y ) → 0 as h → 0. In this connection,

introduce

δ(h) = max_{x0 ≤ x ≤ b, −∞ < y < ∞} |f(x, y) − F(x, y, h; f)|


and assume

δ(h) → 0 as h → 0

Note then that

τn(Y) = [ (Y(xn+1) − Y(xn))/h − Y′(xn) ] + [ f(xn, Y(xn)) − F(xn, Y(xn), h; f) ]

The first term on the right side goes to zero from

the definition of derivative; and the second term is

bounded by δ(h), and thus it too goes to zero with h.

Returning to

Tn(Y ) = Y (xn+1)− [Y (xn) + hF (xn, Y (xn), h; f)]

we rewrite it as

Y (xn+1) = Y (xn) + hF (xn, Y (xn), h; f) + hτn(Y )

The numerical method is

yn+1 = yn + hF (xn, yn, h; f)


To analyze the convergence, we subtract to get

en+1 = en + h [F (xn, Yn, h; f)− F (xn, yn, h; f)]

+hτn(Y )

To continue with this approach, we need to know what

happens to F when the y argument is perturbed. In

particular, we assume

|F(x, y, h; f) − F(x, z, h; f)| ≤ L |y − z|

for all x0 ≤ x ≤ b, −∞ < y, z < ∞, and all small

values of h, say 0 < h ≤ h0 for some h0. This

generally can be derived from the Lipschitz condition

for the function f(x, y).

For example, recall the second order method

yn+1 = yn + (h/2) [f(xn, yn) + f(xn + h, yn + h f(xn, yn))]

in which

F(x, y, h; f) = (1/2) f(x, y) + (1/2) f(x + h, y + h f(x, y))


Then

F(x, y, h; f) − F(x, z, h; f)
  = (1/2) [f(x, y) − f(x, z)]
    + (1/2) [f(x + h, y + h f(x, y)) − f(x + h, z + h f(x, z))]

Use the Lipschitz condition on f :

|F(x, y, h; f) − F(x, z, h; f)|
  ≤ (1/2) |f(x, y) − f(x, z)| + (1/2) |f(x + h, y + h f(x, y)) − f(x + h, z + h f(x, z))|
  ≤ (1/2) K |y − z| + (1/2) K [ |y − z| + h |f(x, y) − f(x, z)| ]

This leads to

|F(x, y, h; f) − F(x, z, h; f)| ≤ [K + (1/2) K² h] |y − z|

For h ≤ 1, take L = K (1 + K/2). Then

|F(x, y, h; f) − F(x, z, h; f)| ≤ L |y − z|


Return to the error equation

en+1 = en + h [F (xn, Yn, h; f)− F (xn, yn, h; f)]

+hτn(Y )

Taking bounds,

|en+1| ≤ |en|+ hL |en|+ hτ(h)

with

τ(h) = max_{x0 ≤ xn ≤ b} |τn(Y)|

We analyze

|en+1| ≤ (1 + hL) |en|+ hτ(h)

exactly as was done for Euler’s method. This yields

|Y(xn) − yn| ≤ exp((b − x0) L) |e0| + [(exp((b − x0) L) − 1)/L] τ(h)

Generally, e0 = 0; and the speed of convergence is

that of τ(h).


STABILITY

Using the ideas introduced above, we can do a stability

and rounding error analysis that is essentially the same

as that done for Euler’s method. We omit it here.

Consider now applying a Runge-Kutta method to the

model equation

y′ = λy, y(0) = 1

As an example, use the second order method

yn+1 = yn + h f(xn + h/2, yn + (h/2) f(xn, yn))

This yields

yn+1 = yn + hλ [yn + (h/2) f(xn, yn)]
     = yn + hλ [yn + (h/2) λ yn]
     = yn + hλ yn + (1/2) (hλ)² yn
     = [1 + hλ + (1/2) (hλ)²] yn

Thus

yn = [1 + hλ + (1/2) (hλ)²]^n,   n ≥ 0


If we consider λ < 0 or real(λ) < 0, we want to know

when yn → 0. That will be true if

|1 + hλ + (1/2) (hλ)²| < 1

For λ real, this is true if

−2 < hλ < 0

This is the region of absolute stability for this second

order Runge-Kutta method. It is not much different

than that of some of the Adams family of methods.
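The inequality can also be checked numerically; a quick scan of the amplification factor along the real axis (our own sketch) recovers the interval (−2, 0):

    def R(z):
        # Amplification factor of the order 2 method applied to y' = lambda*y, z = h*lambda
        return 1.0 + z + 0.5 * z * z

    zs = [-2.5 + 0.01 * k for k in range(301)]       # z from -2.5 to 0.5
    stable = [z for z in zs if abs(R(z)) < 1.0]
    print(min(stable), max(stable))                  # approximately -2 and 0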

It can be shown that there are no A-stable explicit

Runge-Kutta methods. Thus explicit Runge-Kutta

methods are not suitable for stiff differential equa-

tions.


ERROR CONTROL

Again we want to estimate the local error in solving

the differential equation. In particular, let un(x) de-

note the true solution of the problem

y′ = f(x, y), x ≥ xn, y(xn) = yn

In the text, on page 428, we derive the formula

un(xn + 2h) − yh(xn + 2h) ≈ [1/(2^m − 1)] [yh(xn + 2h) − y2h(xn + 2h)]

where m is the order of the Runge-Kutta method. We

denote the error estimate on the right by trunc. Then

for error control purposes, as earlier with Detrap, we

want to have trunc satisfy a local error control per

unit stepsize:

.5εh ≤ |trunc| ≤ 2εh

If this is violated, then we seek a new stepsize h to have

trunc = εh, and we continue with the solution pro-

cess.
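A sketch of this control logic (our own; it assumes the local error behaves like a constant times h^(m+1), with m = 4 for a fourth order method, so that trunc = εh determines the new stepsize):

    def accept_or_adjust(trunc, h, eps, m=4):
        # Local error control per unit stepsize: require 0.5*eps*h <= |trunc| <= 2*eps*h.
        if 0.5 * eps * h <= abs(trunc) <= 2.0 * eps * h:
            return h, True                           # keep the current stepsize
        # Otherwise pick h_new so that (approximately) |trunc_new| = eps*h_new,
        # using trunc ~ C*h^(m+1):
        h_new = h * (eps * h / abs(trunc)) ** (1.0 / m)
        return h_new, False

    print(accept_or_adjust(trunc=1e-3, h=0.1, eps=1e-4))   # step too inaccurate: h is reduced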


We could also use the extrapolated solution

ŷh(xn + 2h) = yh(xn + 2h) + trunc

This would have to be analyzed for stability and convergence. But with it, an error per step criterion applied to yh(xn + 2h),

.5ε ≤ |trunc| ≤ 2ε

while keeping ŷh(xn + 2h), will usually suffice. Many

codes do something of this kind.


COST IN FUNCTION EVALUATIONS

We are advancing from xn to xn+2. What does this

cost, if we include calculation of trunc? Consider it

for the classical RK method

yn+1 = yn + hF (xn, yn, h; f)

F (x, y, h; f) =1

6[V1 + 2V2 + 2V3 + V4]

V1 = f(x, y)

V2 = f(x + (1/2) h, y + (1/2) h V1)

V3 = f(x + (1/2) h, y + (1/2) h V2)

V4 = f (x + h, y + hV3)

Calculating yh(xn + h): 4 new evaluations of f

Calculating yh(xn + 2h): 4 new evaluations of f

Calculating y2h(xn + 2h): 3 new evaluations of f

Total evaluations: 11

For comparison, an implicit multistep method will typ-

ically use 4 evaluations to go from xn to xn+2.


RUNGE-KUTTA-FEHLBERG METHODS

There are 6-stage methods of order 5. Fehlberg thought of embedding an order 4 method into the 6-stage method, and then using the difference of the two solutions to estimate the local error in the lower order method. In

particular, he introduced methods

yn+1 = yn + h [γ1 V1 + · · · + γ5 V5]

ŷn+1 = yn + h [γ̂1 V1 + · · · + γ̂6 V6]

V1 = f(xn, yn)

Vi = f(xn + αi h, yn + h [βi,1 V1 + · · · + βi,i−1 Vi−1]),   i = 2, ..., 6

He showed the coefficients could be chosen so that yn+1 was an order 4 method and ŷn+1 was an order 5 method. Then he estimated the local error using

un(xn+1) − yn+1 ≈ ŷn+1 − yn+1
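We do not reproduce Fehlberg's coefficients here, but the idea of an embedded pair can be seen with a much simpler combination: Euler's method (order 1) embedded in the trapezoidal/Euler-predictor method (order 2), sharing the same stages. This sketch (ours) only illustrates the mechanism; it is not the RKF (4,5) pair:

    import math

    def embedded_12_step(f, x, y, h):
        # Two methods built from the same stage values V1, V2
        V1 = f(x, y)
        V2 = f(x + h, y + h * V1)
        y_low  = y + h * V1                  # order 1 (Euler)
        y_high = y + h/2 * (V1 + V2)         # order 2
        err_est = y_high - y_low             # estimates the local error in y_low
        return y_low, y_high, err_est

    f = lambda x, y: y * math.cos(x)
    print(embedded_12_step(f, 0.0, 1.0, 0.1))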


DISCUSSION

In determining the formulas for yn+1 and ŷn+1, Fehlberg

had some additional freedom in choosing the coeffi-

cients. He looked at the leading terms in the expan-

sion of the errors

un(xn+1) − yn+1,   un(xn+1) − ŷn+1

in powers of h, say

un(xn+1) − yn+1 = c5 h⁵ + c6 h⁶ + · · ·

un(xn+1) − ŷn+1 = ĉ6 h⁶ + ĉ7 h⁷ + · · ·

He chose the coefficients so as

(1) to make ŷn+1 − yn+1 an accurate estimator of un(xn+1) − yn+1:

{[un(xn+1) − yn+1] − [ŷn+1 − yn+1]} / [un(xn+1) − yn+1]
    = (ĉ6 h⁶ + ĉ7 h⁷ + · · ·) / (c5 h⁵ + c6 h⁶ + · · ·) ≈ (ĉ6/c5) h


(2) to make c5 as small as possible, while having c5 ≥ ĉ6.

His choice of coefficients for the order (4,5) pair of

formulas is given in the text.

EXAMPLE. Apply the fourth order formula to the sim-

ple problem

y′ = x⁴,   y(0) = 0

The RKF method of order 4 has the truncation error

Tn(Y) ≈ 0.00048 h⁵

The classical RK method, discussed earlier, has the

truncation error

Tn(Y) ≈ −0.0083 h⁵

Thus the RKF method appears to be more accurate

than the classical formula.


COSTS

When stepping from xn to xn+2, 12 evaluations of f are required, as com-

pared to the 11 evaluations required in estimating the

local error in the classical fourth order method. In

fact, we do not need to take two steps to estimate the local error, but can change the stepsize at each xn

if so needed.

PROGRAMS

In MATLAB, look at the programs ode23 and ode45.

Also look at the associated demo programs.

In the class account, look at the Fortran program

fehlberg-45.f. It takes a single step, returning both

yn+1 and the estimated local error ŷn+1 − yn+1.

In addition, I have included the program rkf45.f, which

implements the Fehlberg (4,5) pair with automatic

control of the local error. It is from Sandia National

Labs, and it is a very popular implementation.


IMPLICIT RUNGE-KUTTA METHODS

As before, define a p-stage method

yn+1 = yn + hF (xn, yn, h; f)

F(x, y, h; f) = γ1 V1 + γ2 V2 + · · · + γp Vp

But now, let

Vi = f(x + αi h, y + h [βi,1 V1 + · · · + βi,p Vp]),   i = 1, ..., p

Thus we must solve p equations in p unknowns at

each step of solving the differential equation.

The reason for doing this is that it leads to numerical

methods with better stability properties when solving

stiff differential equations. In particular, we can obtain

A-stable methods of any desired order.


COLLOCATION METHODS

An important category of implicit Runge-Kutta meth-

ods are obtained as follows. Let p ≥ 1 be an integer,

and let the points

0 ≤ α1 < α2 < · · · < αp ≤ 1

be given. Assuming we are at the point (xn, yn), find

a polynomial q(x) of degree p for which

q(xn) = yn

q′(xn + αih) = f (xn + αih, q(xn + αih))

for i = 1, ..., p. Then this is equivalent to an implicit

Runge-Kutta method; and it is also called a block by

block method.


ORDER OF CONVERGENCE

Assume {αi} have been so chosen that

∫_0^1 ω(τ) τ^j dτ = 0,   j = 0, 1, ..., m − 1

ω(τ) = (τ − α1) · · · (τ − αp)

Then for our collocation method,

Tn(Y) = O(h^(p+m+1))

and the order of convergence is p + m.

Defining {α1, ..., αp} as the zeros of the Legendre

polynomial of degree p, normalized to [0, 1], leads to

a method of order 2p. These methods are A-stable

for every p ≥ 1.
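For p = 1 the Gauss point is α1 = 1/2, which gives the implicit midpoint rule, an A-stable method of order 2. A minimal sketch (ours) that solves the single stage equation by fixed point iteration; for stiff problems Newton's method would normally be used instead:

    import math

    def implicit_midpoint_step(f, x, y, h, tol=1e-12, maxit=50):
        # One-stage Gauss method: V1 = f(x + h/2, y + (h/2) V1),  y_{n+1} = y + h V1
        V = f(x, y)                           # starting guess for V1
        for _ in range(maxit):
            V_new = f(x + h/2, y + (h/2) * V)
            if abs(V_new - V) < tol:
                V = V_new
                break
            V = V_new
        return y + h * V

    lam = -5.0
    f = lambda x, y: lam * y                  # model problem y' = lambda*y
    x, y, h = 0.0, 1.0, 0.1
    for n in range(10):
        y = implicit_midpoint_step(f, x, y, h)
        x += h
    print(y, math.exp(lam * 1.0))             # second order accurate approximation of exp(-5)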


ADDITIONAL REFERENCES

Larry Shampine, Numerical Solution of Ordinary Dif-

ferential Equations, Chapman & Hall Publishers, 1994.

Larry Shampine and Mark Reichelt, “The Matlab ODE

Suite”, SIAM Journal On Scientific Computing 18

(1997), pp. 1-22.

Arieh Iserles, Numerical Analysis of Differential Equa-

tions, Cambridge University Press, 1996. This also

includes an extensive development of the numerical

solution of partial differential equations.

Uri Ascher, Robert Mattheij, and Robert Russell, Nu-

merical Solution of Boundary Value Problems for Or-

dinary Differential Equations, Prentice-Hall, 1988.

Uri Ascher and Linda Petzold, Computer Methods

for Ordinary Differential Equations and Differential-

Algebraic Equations, SIAM Pub., 1998.