


NUMERICAL LINEAR ALGEBRA WITH APPLICATIONS
Numer. Linear Algebra Appl. 2010; 00:1–23
Published online in Wiley InterScience (www.interscience.wiley.com). DOI: 10.1002/nla

Explicit High-Order Time-Stepping Based on Componentwise Application of Asymptotic Block Lanczos Iteration

James V. Lambers∗

University of Southern Mississippi, 118 College Dr #5045, Hattiesburg, MS, 39406-0001, USA

SUMMARY

This paper describes the development of explicit time-stepping methods for linear partial differential equations that are specifically designed to cope with the stiffness of the system of ordinary differential equations that results from spatial discretization. As stiffness is caused by the contrasting behavior of coupled components of the solution, it is proposed to adopt a componentwise approach in which each coefficient of the solution in an appropriate basis is computed using an individualized approximation of the solution operator. This has been accomplished by Krylov subspace spectral (KSS) methods, which use techniques from "matrices, moments and quadrature" to approximate bilinear forms involving functions of matrices via block Gaussian quadrature rules. These forms correspond to coefficients with respect to the chosen basis of the application of the solution operator of the PDE to the solution at an earlier time. In this paper it is proposed to substantially enhance the efficiency of KSS methods through the prescription of quadrature nodes based on asymptotic analysis of the recursion coefficients produced by block Lanczos iteration for each Fourier coefficient as a function of the frequency. The potential of this idea is illustrated through numerical results obtained from the application of the modified KSS methods to diffusion equations and wave equations. Copyright © 2010 John Wiley & Sons, Ltd.

Received . . .

KEY WORDS: Lanczos algorithm; spectral methods; Gaussian quadrature; heat equation; wave equation

1. INTRODUCTION

The rapid advancement of computing power in recent years has allowed higher resolution models. This has introduced greater stiffness into systems of ordinary differential equations (ODE) that arise from discretization of time-dependent partial differential equations (PDE), which presents difficulties for time-stepping methods due to the coupling of components of the solution.

Consider a system of linear ODE of the form

u′(t) + Au = 0,   u(t_0) = u_0, (1)

where A is an N × N real, symmetric positive definite matrix. One-step methods such as Runge-Kutta methods yield an approximate solution of the form

u^{n+1} = f(A; ∆t) u^n,   ∆t = t_{n+1} − t_n,

where u^n ≈ u(t_n) and f(A; ∆t) is a polynomial of A that approximates exp[−A∆t] if the method is explicit, or a rational function of A if the method is implicit.
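To make the polynomial interpretation concrete, here is a minimal sketch (my own illustration, not from the paper; the function names are hypothetical) showing that the classical fourth-order Runge-Kutta method, applied to the linear system u′ = −Au, reproduces exactly the degree-4 Taylor polynomial of exp(−A∆t) applied to u^n:

```python
import numpy as np

def rk4_step(A, u, dt):
    """One classical RK4 step for the linear system u' = -A u."""
    f = lambda v: -A @ v
    k1 = f(u)
    k2 = f(u + 0.5 * dt * k1)
    k3 = f(u + 0.5 * dt * k2)
    k4 = f(u + dt * k3)
    return u + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def taylor4_step(A, u, dt):
    """f(A; dt) u with f the degree-4 Taylor polynomial of exp(-A dt):
    (I - dt A + dt^2 A^2/2 - dt^3 A^3/6 + dt^4 A^4/24) u."""
    w, term = u.copy(), u.copy()
    for j in range(1, 5):
        term = (-dt / j) * (A @ term)  # term = (-dt)^j A^j u / j!
        w += term
    return w

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 6))
A = A @ A.T + 6 * np.eye(6)            # symmetric positive definite
u0 = rng.standard_normal(6)
dt = 1e-3
w_rk = rk4_step(A, u0, dt)
w_poly = taylor4_step(A, u0, dt)       # identical up to roundoff
```

For a linear autonomous system the two routines evaluate the same polynomial in A, so this equality is exact rather than merely fourth-order accurate.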

∗Correspondence to: Department of Mathematics, University of Southern Mississippi, 118 College Dr #5045, Hattiesburg, MS, 39406-0001, USA.

Copyright © 2010 John Wiley & Sons, Ltd.

Prepared using nlaauth.cls [Version: 2010/05/13 v2.00]


Similarly, Krylov subspace methods based on exponential integrators, such as those described in [16, 17], yield approximate solutions that are equal to a polynomial of A times a vector. The polynomial is obtained by computing the exponential of a much smaller matrix that is generated by a process such as Lanczos iteration or Arnoldi iteration. For example, consider the problem of computing w = e^{−At}v for a given symmetric matrix A and vector v. An approach described in [17] is to apply the Lanczos algorithm to A with initial vector v to obtain, at the end of the jth iteration, an orthogonal matrix X_j and a tridiagonal matrix T_j such that X_j^T A X_j = T_j. Then, we can compute the approximation

w_j = ‖v‖_2 X_j e^{−T_j t} e_1, (2)

where e_1 = [ 1 0 · · · 0 ]^T. As each column x_k, k = 1, . . . , j, of X_j is of the form x_k = p_{k−1}(A)v, where p_n(A) is a polynomial of degree n in A, it follows that w_j is the product of a polynomial in A of degree j − 1 and v.
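A minimal sketch of this Lanczos-based approximation (my own illustration; full reorthogonalization is added for robustness, and `expm` is SciPy's dense matrix exponential):

```python
import numpy as np
from scipy.linalg import expm

def lanczos(A, v, j):
    """j steps of symmetric Lanczos: returns X (n x j) with orthonormal
    columns and the tridiagonal matrix T = X^T A X (j x j)."""
    n = len(v)
    X = np.zeros((n, j))
    alpha, beta = np.zeros(j), np.zeros(max(j - 1, 0))
    X[:, 0] = v / np.linalg.norm(v)
    for k in range(j):
        w = A @ X[:, k]
        alpha[k] = X[:, k] @ w
        w -= alpha[k] * X[:, k]
        if k > 0:
            w -= beta[k - 1] * X[:, k - 1]
        w -= X[:, :k + 1] @ (X[:, :k + 1].T @ w)  # full reorthogonalization
        if k < j - 1:
            beta[k] = np.linalg.norm(w)
            X[:, k + 1] = w / beta[k]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    return X, T

def exp_At_v(A, v, t, j):
    """Approximate w = exp(-A t) v via the formula (2)."""
    X, T = lanczos(A, v, j)
    e1 = np.zeros(j)
    e1[0] = 1.0
    return np.linalg.norm(v) * (X @ (expm(-t * T) @ e1))

rng = np.random.default_rng(0)
n = 200
G = rng.standard_normal((n, n))
A = G @ G.T / n + np.eye(n)          # SPD, spectrum roughly in [1, 5]
v = rng.standard_normal(n)
w = exp_At_v(A, v, 0.5, 20)
err = np.linalg.norm(w - expm(-0.5 * A) @ v) / np.linalg.norm(v)
```

With a well-clustered spectrum, as here, a modest number of iterations already gives near machine-precision accuracy; the point made in the next paragraph is that this degrades when the spectrum is spread out.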

The effectiveness of this approach, for general v, depends on the eigenvalues of A. If the eigenvalues are not clustered, which is the case if A arises from a stiff system of ODEs such as that obtained from spatial discretization of a time-dependent PDE, a good approximation cannot be obtained using a small number of Lanczos iterations. This can be alleviated using an outer iteration of the form

w_j^{m+1} ≈ e^{−A∆t} w_j^m,   m = 0, 1, . . . ,   w_j^0 = v, (3)

for some ∆t ≪ t, where the total number of outer iterations M satisfies M∆t = t. However, this approach is not practical if ∆t must be chosen very small. Modifications described in [34, 39] produce rational approximations that reduce the number of Lanczos iterations, but these methods require solving systems of linear equations in which the matrix is of the form I + hA, where h is a parameter. Unless h is chosen very small, these systems may be ill-conditioned.

In summary, time-stepping methods that, when applied to a system of linear ODEs of the form (1), compute a solution of the form u(t + ∆t) = f(A; ∆t)u(t), where f(A; ∆t) is a polynomial or rational approximation of e^{−A∆t}, tend to have difficulty with stiff systems, such as those that arise from the spatial discretization of time-dependent PDE. The solution of a stiff system, even one that represents a typically smooth solution of a parabolic PDE, includes rapidly-changing components that are coupled with components that evolve more slowly. Because of this phenomenon, explicit methods for such systems require small time steps, while implicit methods typically require the solution of ill-conditioned systems of linear equations.

It is therefore proposed to employ a componentwise approach to time-stepping in an effort to circumvent these difficulties for both parabolic and hyperbolic PDE. This goal is to be accomplished by continuing the evolution of Krylov subspace spectral (KSS) methods [20, 21, 25, 27]. These methods feature explicit time-stepping with high-order accuracy, and stability that is characteristic of implicit methods. This "best-of-both-worlds" combination is achieved through a componentwise approach, in which each Fourier coefficient of the solution is computed using an approximation of the solution operator that is, in some sense, optimal for that coefficient.

Initial work on KSS methods [14, 20, 21, 22, 24, 25, 26, 27, 28, 29, 31, 32] has yielded promising results, in terms of accuracy and stability. However, because they implicitly generate O(N^d) low-dimensional Krylov subspaces, where N is the number of grid points per spatial dimension and d is the number of spatial dimensions, efficient implementation can be difficult except for certain special cases, even though the computational complexity per time step is O(N log N). As most of the computational expense arises from the need to compute component-dependent nodes and weights of block Gaussian quadrature rules, this paper explores the idea of using component-dependent quadrature rules in which the nodes are prescribed in such a way as to approximate the block Gaussian nodes, in an effort to achieve comparable accuracy but with far greater efficiency. The approximation strategy is based on asymptotic analysis of the recursion coefficients produced by block Lanczos iteration.

The outline of the paper is as follows: Section 2 describes how block KSS methods arise from techniques introduced by Golub and Meurant in [10] for approximating bilinear forms involving functions of matrices. Section 3 describes how asymptotic analysis of the quantities computed by block KSS methods can be used to efficiently approximate the extremal block Gaussian nodes.


Numerical results are presented in Section 4. Discussion of current and future research directions is provided in Section 5, and conclusions are given in Section 6.

2. MATRICES, MOMENTS, QUADRATURE AND PDE

This section describes the main ideas behind block KSS methods. For simplicity, the one-dimensional case is discussed first, followed by generalization to higher dimensions. Let S(t) = exp[−Lt] represent the solution operator of a parabolic PDE on (0, 2π),

u_t + Lu = 0,   t > 0, (4)

with appropriate initial conditions and periodic boundary conditions. The operator L is a linear, second-order, self-adjoint, positive definite differential operator.

Let ⟨·, ·⟩ denote the standard inner product of functions defined on [0, 2π]. KSS methods, introduced in various forms in [20, 21, 25, 27], are time-stepping algorithms that compute the solution at times t_1, t_2, . . ., where t_n = n∆t for some choice of ∆t. Given the computed solution ũ(x, t_n) at time t_n, the solution at time t_{n+1} is computed by approximating the Fourier coefficients that would be obtained by applying the exact solution operator to ũ(x, t_n),

û(ω, t_{n+1}) = ⟨ (1/√(2π)) e^{iωx}, S(∆t) ũ(x, t_n) ⟩, (5)

where ω is an integer. Clearly, such an approach requires an effective method for computing bilinear forms. KSS methods represent the first application to time-dependent PDE of techniques from the area of "matrices, moments and quadrature": the approximation of bilinear forms involving matrix functions by treating them as Riemann-Stieltjes integrals, and then applying (block) Gaussian quadrature rules that are generated by the (block) Lanczos algorithm applied to the matrix. We now provide some background on this area.

2.1. Elements of Functions of Matrices

In [10] Golub and Meurant described a method for computing quantities of the form

u^T f(A)v, (6)

where u and v are N-vectors, A is an N × N symmetric matrix, and f is a smooth function. Our goal is to apply this method with A = L_N, where L_N is a spectral discretization of L, f(λ) = exp(−λt) for some t, and the vectors u and v are derived from e_ω and u^n, where e_ω is a discretization of (1/√(2π)) e^{iωx}, and u^n is a discretization of the solution at time t_n on an N-point uniform grid. In the following exposition, it is assumed that u and v are real; generalization to the complex case is straightforward.

Since the matrix A is symmetric positive definite, it has real eigenvalues b = λ_1 ≥ λ_2 ≥ · · · ≥ λ_N = a > 0, and corresponding orthonormal eigenvectors q_j, j = 1, . . . , N. Therefore, the quantity (6) can be rewritten as

u^T f(A)v = ∑_{j=1}^{N} f(λ_j) (u^T q_j)(q_j^T v), (7)

which can also be viewed as a Riemann-Stieltjes integral

u^T f(A)v = I[f] = ∫_a^b f(λ) dα(λ), (8)

where the measure α(λ) is derived from the coefficients of u and v in the basis of eigenvectors.
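The finite-sum form (7) is easy to verify numerically; the following is a small sketch of my own, using NumPy's symmetric eigensolver:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
G = rng.standard_normal((n, n))
A = G @ G.T + n * np.eye(n)                 # symmetric positive definite
u, v = rng.standard_normal(n), rng.standard_normal(n)
f = lambda lam: np.exp(-0.1 * lam)

lam, Q = np.linalg.eigh(A)                  # eigenvalues and eigenvectors of A
direct = u @ (Q * f(lam)) @ Q.T @ v         # u^T f(A) v via the spectral theorem
# the sum (7): f evaluated at each eigenvalue, weighted by the measure's
# "jumps" (u^T q_j)(q_j^T v)
stieltjes = np.sum(f(lam) * (u @ Q) * (Q.T @ v))
```

The two quantities agree to roundoff; the Riemann-Stieltjes view (8) simply reads this sum as an integral against a piecewise-constant measure.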


As discussed in [3, 8, 9, 10], the integral I[f] can be approximated using Gaussian, Gauss-Radau or Gauss-Lobatto quadrature rules, which yields an approximation of the form

I[f] = ∑_{j=1}^{K} w_j f(λ_j) + R[f], (9)

where the nodes λ_j, j = 1, . . . , K, and the weights w_j, j = 1, . . . , K, can be obtained using the Lanczos algorithm and variations [4, 6, 7, 12]. If u = v, then the measure α(λ) is positive and increasing, which ensures that the weights are positive.
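The case u = v can be sketched as follows (my own illustration; for brevity the orthonormal Krylov basis is obtained by a QR factorization of the Krylov matrix rather than by the Lanczos recurrence itself, which yields the same Jacobi matrix up to signs):

```python
import numpy as np

rng = np.random.default_rng(2)
n, K = 64, 5
G = rng.standard_normal((n, n))
A = G @ G.T / n + 2 * np.eye(n)             # SPD, spectrum roughly in [2, 6]
u = rng.standard_normal(n)
f = lambda lam: np.exp(-0.1 * lam)

# Orthonormal basis of span{u, Au, ..., A^{K-1}u}; T = X^T A X is then the
# K x K Jacobi (tridiagonal) matrix that Lanczos would produce.
Kry = np.column_stack([np.linalg.matrix_power(A, j) @ u for j in range(K)])
X, _ = np.linalg.qr(Kry)
T = X.T @ A @ X

theta, S = np.linalg.eigh(T)                # Gauss nodes: eigenvalues of T
w = (u @ u) * S[0, :] ** 2                  # weights: ||u||^2 x (first components)^2
gauss = np.sum(w * f(theta))                # K-node Gauss estimate of u^T f(A) u
lam, Q = np.linalg.eigh(A)
exact = np.sum(f(lam) * (u @ Q) ** 2)       # reference value via (7)
rel_err = abs(gauss - exact) / abs(exact)
```

All the weights here are positive, as the text asserts for the case u = v, since each is a squared eigenvector component scaled by ‖u‖².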

2.2. Block Gaussian Quadrature

In the case u ≠ v, there is the possibility that the weights may not be positive, which destabilizes the quadrature rule [1]. Instead, one can consider the approximation of

[ u  v ]^T f(A) [ u  v ], (10)

which results in the 2 × 2 matrix integral

∫_a^b f(λ) dµ(λ) = [ u^T f(A)u   u^T f(A)v ]
                   [ v^T f(A)u   v^T f(A)v ], (11)

where µ(λ) is a 2 × 2 matrix, each entry of which is a measure of the form α(λ) from (8). As discussed in [10], this matrix integral can be approximated using a quadrature rule of the form

∫_a^b f(λ) dµ(λ) = ∑_{j=1}^{2K} f(λ_j) v_j v_j^T + error, (12)

where, for each j, λ_j is a scalar and v_j is a 2-vector. Each node λ_j is an eigenvalue of the matrix

T_K = [ M_1     B_1^T                               ]
      [ B_1     M_2     B_2^T                       ]
      [         ·       ·        ·                  ]
      [                 B_{K−2}  M_{K−1}  B_{K−1}^T ]
      [                          B_{K−1}  M_K       ], (13)

which is a block-tridiagonal matrix of order 2K. The vector v_j consists of the first two elements of the corresponding normalized eigenvector. The matrices M_j and B_j are computed by performing K iterations of the block Lanczos algorithm, which was proposed by Golub and Underwood in [11].

Throughout the remainder of this paper, we will refer to the scalar nodes λ_j, j = 1, 2, . . . , 2K, from (12) as "block Gaussian quadrature nodes", to distinguish them from the Gaussian quadrature nodes that are obtained through the standard (non-block) Lanczos iteration applied to a single initial vector. A key difference between the two sets of nodes is that while a K-node Gaussian rule is exact for polynomials of degree 2K − 1, it takes 2K block Gaussian nodes to achieve the same degree of accuracy; that is, block Gaussian rules have the same degree of accuracy as general interpolatory quadrature rules. However, unlike general interpolatory quadrature rules, they are obtained from a polynomial of degree K whose coefficients are 2 × 2 matrices, just as K standard Gaussian quadrature nodes are the roots of a polynomial of degree K with scalar coefficients.
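The construction of (12)-(13) can be sketched as follows (my own illustration, in real arithmetic with block size 2; full reorthogonalization is added for robustness, and `block_lanczos`/`block_gauss` are my names, not the paper's):

```python
import numpy as np

def block_lanczos(A, U0, K):
    """K iterations of block Lanczos with block size 2: returns the recursion
    coefficients M_k, B_k of (13) and B_0 from the initial QR of U0."""
    X1, B0 = np.linalg.qr(U0)
    Xs, Ms, Bs = [X1], [], []
    for k in range(K):
        W = A @ Xs[-1]
        if Bs:
            W -= Xs[-2] @ Bs[-1].T
        M = Xs[-1].T @ W
        Ms.append(M)
        W -= Xs[-1] @ M
        if k < K - 1:
            for Xi in Xs:                     # full reorthogonalization
                W -= Xi @ (Xi.T @ W)
            Xn, B = np.linalg.qr(W)
            Xs.append(Xn)
            Bs.append(B)
    return Ms, Bs, B0

def block_gauss(A, u, v, f, K):
    """Approximate [u v]^T f(A) [u v] with the 2K-node rule (12)."""
    Ms, Bs, B0 = block_lanczos(A, np.column_stack([u, v]), K)
    T = np.zeros((2 * K, 2 * K))              # assemble T_K of (13)
    for k, M in enumerate(Ms):
        T[2*k:2*k+2, 2*k:2*k+2] = M
    for k, B in enumerate(Bs):
        T[2*k+2:2*k+4, 2*k:2*k+2] = B
        T[2*k:2*k+2, 2*k+2:2*k+4] = B.T
    lam, S = np.linalg.eigh(T)                # scalar nodes lambda_j
    Gsum = np.zeros((2, 2))
    for j in range(2 * K):
        vj = S[:2, j]                         # 2-vector v_j of (12)
        Gsum += f(lam[j]) * np.outer(vj, vj)
    return B0.T @ Gsum @ B0

rng = np.random.default_rng(3)
n, K = 40, 6
G = rng.standard_normal((n, n))
A = G @ G.T / n + 2 * np.eye(n)
u, v = rng.standard_normal(n), rng.standard_normal(n)
f = lambda lam: np.exp(-0.1 * lam)
approx = block_gauss(A, u, v, f, K)
lamA, Q = np.linalg.eigh(A)
UV = np.column_stack([u, v])
exact = UV.T @ (Q * f(lamA)) @ Q.T @ UV
```

The trailing factor B_0 accounts for the initial QR factorization, so that the rule approximates the bilinear forms in the original vectors u and v rather than in the orthonormalized starting block.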

2.3. Block KSS Methods

Block KSS methods proceed as follows. We assume for convenience that N is even. For each wave number ω = −N/2 + 1, . . . , N/2, we define

R_0(ω) = [ e_ω  u^n ]


and then compute the QR factorization

R_0(ω) = X_1(ω) B_0(ω),

which yields

X_1(ω) = [ e_ω   u^n_ω/‖u^n_ω‖_2 ],   B_0(ω) = [ 1   e_ω^H u^n ]
                                               [ 0   ‖u^n_ω‖_2 ],

where the H superscript denotes the Hermitian transpose, and

u^n_ω = u^n − e_ω e_ω^H u^n.

Then, block Lanczos iteration is applied to the discretized operator L_N with initial block X_1(ω), producing a block tridiagonal matrix T_K(ω) of the form (13), where each entry is a function of ω. Then, each Fourier coefficient of the solution at t_{n+1} can be expressed as

[û^{n+1}]_ω = [ B_0^H E_{12}^H exp[−T_K(ω)∆t] E_{12} B_0 ]_{12},   E_{12} = [ e_1  e_2 ]. (14)

In this paper, we continue to follow the convention used in [21, 22, 30, 32] and identify a block KSS method that uses K matrix nodes, or 2K scalar nodes, as a K-node KSS method. The term "KSS method" is used to refer to a larger class of methods, including block KSS methods but also non-block variations such as those described in [20, 27] and variations introduced in the next section. What characterizes all KSS methods is that they compute each component of the solution in some basis by approximating a bilinear form involving a matrix function with an interpolatory quadrature rule that is tailored to that component.

This algorithm has local temporal accuracy O(∆t^{2K−1}) for the parabolic problem (4) [21]. By contrast, methods that apply Lanczos iteration only to the solution from the previous time step (see, for example, [17]) achieve O(∆t^{K−1}) accuracy, where K is the dimension of the Krylov subspaces used. Even higher-order accuracy, O(∆t^{4K−2}), is obtained for the second-order wave equation [22]. Furthermore, in [21, 22], it is shown that under appropriate assumptions on the coefficients of the differential operator L in (4), the 1-node block KSS method is unconditionally stable.

2.4. Implementation

KSS methods compute a Jacobi matrix corresponding to each Fourier coefficient, in contrast to traditional Krylov subspace methods (see, for example, [16, 17, 18, 34, 39]) that use only a single Krylov subspace generated by the solution from the previous time step. While it would appear that KSS methods incur a substantial amount of additional computational expense, that is not actually the case, because nearly all of the Krylov subspaces that they compute are closely related by the wave number ω, in the 1-D case, or ~ω = (ω_1, ω_2, . . . , ω_n) in the n-D case.

In fact, the only Krylov subspace that is explicitly computed is the one generated by the solution from the previous time step, of dimension K + 1, where 2K is the number of block Gaussian quadrature nodes. In addition, the averages of the coefficients of L^j, for j = 0, 1, 2, . . . , 2K − 1, are required, where L is the spatial differential operator. When the coefficients of L are independent of time, these can be computed once, during a preprocessing step. This computation can be carried out in O(N log N) operations using symbolic calculus [26, 29].

With these considerations, the algorithm for a single time step of a 1-node block KSS method for solving (4), where Lu = −pu_xx + q(x)u, with appropriate initial conditions and periodic boundary conditions, is as follows. The average of a function f(x) on [0, 2π] is denoted by f̄.

û^n = fft(u^n),  v = L_N u^n,  v̂ = fft(v)
for each ω do
    α_1 = pω² + q̄   (in preprocessing step)
    β_1 = v̂(ω) − α_1 û^n(ω)
    α_2 = ⟨u^n, v⟩ − 2 Re[û^n(ω)* v̂(ω)] + α_1 |û^n(ω)|²
    e_ω = [⟨u^n, u^n⟩ − |û^n(ω)|²]^{1/2}


    T_ω = [ α_1        β_1/e_ω  ]
          [ β_1*/e_ω   α_2/e_ω² ]
    û^{n+1}(ω) = [e^{−T_ω∆t}]_{11} û^n(ω) + [e^{−T_ω∆t}]_{12} e_ω
end
u^{n+1} = ifft(û^{n+1})

As with the abovementioned preprocessing step, the amount of work per time step is O(N log N), assuming that the FFT is used for differentiation in the computation of v. This same complexity applies to KSS methods with a higher number of quadrature nodes [25]. Furthermore, KSS methods allow substantial parallelism, as each component of the solution is obtained from its associated block Jacobi matrix, independently of other components. Coefficients of powers of L can also be computed in parallel.
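The algorithm above can be transcribed almost directly into Python. The following is a sketch of my own, under an assumed normalization e_ω = e^{iωx}/√N for the discrete Fourier basis vectors (the paper does not spell out this choice); `expm` is SciPy's dense matrix exponential, applied here to each 2 × 2 matrix T_ω:

```python
import numpy as np
from scipy.linalg import expm

def kss1_step(u, p, qvec, dt):
    """One step of a 1-node block KSS method for u_t + Lu = 0 with
    Lu = -p u_xx + q(x) u (p constant), periodic on [0, 2*pi],
    on an N-point uniform grid. Sketch only; normalization assumed."""
    N = len(u)
    omega = np.fft.fftfreq(N, d=1.0 / N)       # integer wave numbers, FFT order
    qbar = qvec.mean()
    uhat = np.fft.fft(u)
    uxx = np.real(np.fft.ifft(-(omega ** 2) * uhat))
    v = -p * uxx + qvec * u                    # v = L_N u^n, computed spectrally
    s = np.sqrt(N)
    uc, vc = uhat / s, np.fft.fft(v) / s       # coefficients w.r.t. e_w
    uu, uv = u @ u, u @ v                      # <u^n, u^n>, <u^n, L_N u^n>
    cnew = np.zeros(N, dtype=complex)
    for i in range(N):
        a1 = p * omega[i] ** 2 + qbar          # e_w^H L_N e_w (preprocessing)
        b1 = vc[i] - a1 * uc[i]
        a2 = uv - 2 * np.real(np.conj(uc[i]) * vc[i]) + a1 * abs(uc[i]) ** 2
        e = np.sqrt(max(uu - abs(uc[i]) ** 2, 0.0))
        if e <= 1e-12 * np.sqrt(uu):           # no residual at this frequency
            cnew[i] = np.exp(-a1 * dt) * uc[i]
            continue
        T = np.array([[a1, b1 / e], [np.conj(b1) / e, a2 / e ** 2]])
        E = expm(-dt * T)
        cnew[i] = E[0, 0] * uc[i] + E[0, 1] * e
    return np.real(np.fft.ifft(cnew * s))

# With constant q the off-diagonal entries of T_w vanish and each Fourier
# mode decays exactly: for L = -d^2/dx^2 + 1 and u(x,0) = cos(x), the
# solution is u(x,t) = exp(-2t) cos(x).
N = 128
x = 2 * np.pi * np.arange(N) / N
u1 = kss1_step(np.cos(x), 1.0, np.ones(N), 0.1)
```

The per-step cost is dominated by the FFTs, matching the O(N log N) complexity stated above, since each 2 × 2 exponential costs O(1).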

Generalization of this algorithm to higher dimensions and other PDE is straightforward [13, 15]. For example, on the domain [0, 2π]^n, if L = −p∆ + q(x), the for loop would iterate over all n-tuples ~ω = (ω_1, ω_2, . . . , ω_n), and for each such ~ω, α_1 = p‖~ω‖₂² + q̄, where q̄ is the average of q(x) on the domain. No additional modification to the algorithm is necessary. For the second-order wave equation, two matrices of the form T_ω are computed for each ~ω, corresponding to the solution from the previous time step and its time derivative. KSS methods have also been generalized to systems of coupled equations; the details are presented in [24]. In particular, it is shown in [32] that KSS methods are effective for systems of three equations in three spatial variables, through application to Maxwell's equations.

Although the 1-node block KSS method described above can easily be implemented, and KSS methods can be implemented with O(N log N) complexity for any number of nodes, the asymptotic constant of the number of floating-point operations can grow quite quickly as a function of K due to the need to compute coefficients of L^j, for j = 2, . . . , 2K − 1. Optimization can also be made difficult by the need to maintain data structures that store representations of recursion coefficients [26, 30]. Therefore, it is necessary to consider whether alternative approaches to selecting and computing quadrature nodes for each component of the solution may be more effective.

3. SOLUTION THROUGH ASYMPTOTIC BLOCK LANCZOS ITERATION

The central idea behind KSS methods is to compute each component of the solution, in some basis, using an approximation that is specifically tailored to that component, even though all components are coupled. It follows that each component uses a different polynomial approximation of S(L_N), where the function S is based on the solution operator of the PDE, and L_N is the discretization of the spatial differential operator. Taking all components together reveals that the computed solution has the form

u^{n+1} = f(L_N; ∆t) u^n = ∑_{j=0}^{M} D_j(∆t) L_N^j u^n, (15)

where M = 2K, K being the number of block Lanczos iterations, and D_j(∆t) is a matrix that is diagonal in the chosen basis. In other words, KSS methods are explicit methods that are inherently more flexible than explicit time-stepping methods that use a polynomial or rational approximation of the solution operator. We now consider how we can exploit this flexibility.

In the block KSS method, in which block Lanczos iteration is applied to the matrix L_N, the initial block recursion coefficient M_1 in (13) is given by

M_1(ω) = [ e_ω^H L_N e_ω       e_ω^H L_N u^n_ω     ]
         [ [u^n_ω]^H L_N e_ω   [u^n_ω]^H L_N u^n_ω ]. (16)

Now, suppose that L is a second-order differential operator of the form

Lu = −pu_xx + q(x)u,

Copyright c© 2010 John Wiley & Sons, Ltd. Numer. Linear Algebra Appl.(2010)Prepared usingnlaauth.cls DOI: 10.1002/nla

Page 7: Explicit High-Order Time-Stepping Based on Componentwise ... · from “matrices, moments and quadrature” to approximate bil inear forms involving functions of matrices via block

EXPLICIT TIME-STEPPING FROM ASYMPTOTIC LANCZOS ITERATION 7

where p is a constant. Then we have

M_1(ω) = [ pω² + q̄            e_ω^H q̃u^n_ω        ]
         [ [u^n_ω]^H q̃e_ω     [u^n_ω]^H L_N u^n_ω ]. (17)

As before, we use the notation f̄ to denote the mean of a function f(x) defined on [0, 2π], and define q̃(x) = q(x) − q̄. We denote by q̃ the vector with components [q̃]_j = q̃(x_j). For convenience, multiplication of vectors, as in the off-diagonal elements of M_1(ω), denotes componentwise multiplication.

It follows that as |ω| increases, we have

M_1(ω) ∼ [ pω² + q̄   0                               ]
         [ 0          ([u^n]^H L_N u^n)/([u^n]^H u^n) ].

That is, the off-diagonal entries become negligible compared to the (1, 1) entry. Continuing with the block Lanczos process to compute B_1(ω), we see that for higher frequencies, the diagonal entries of M_1(ω) are approximate eigenvalues of T_K(ω).

In fact, it has been observed in numerical experiments that for the 2-node block KSS method, half of the 4 scalar nodes λ_j from (12) are clustered around each diagonal entry of M_1(ω) [21]. We therefore propose to first prescribe these values as the smallest and largest nodes:

a = ([u^n]^H L_N u^n)/([u^n]^H u^n),   b = (e_ω^H L_N e_ω)/(e_ω^H e_ω),   λ_1(ω) = min{a, b},   λ_{2K}(ω) = max{a, b}. (18)

Then, we prescribe the remaining nodes λ_2(ω), . . . , λ_{2K−1}(ω) to be equally spaced between λ_1(ω) and λ_{2K}(ω).

It is generally known that quadrature rules with equally spaced nodes are not effective when the number of nodes is large (for example, Newton-Cotes rules with 11 or more nodes are guaranteed to have at least one negative weight). However, the number of nodes, 2K, is to be chosen based on the desired temporal order of accuracy of 2K − 1, and therefore the number of nodes can be chosen to be very small. Alternative node distributions will be discussed in Section 5.

As will be seen in the numerical experiments, the number of nodes need not be even; it is assumed to be even in this discussion due to the proposed method being described as a modification of block KSS methods that use 2K nodes obtained from K iterations of block Lanczos. In general, a KSS method applied to a PDE of the form u_t + Lu = 0 that uses M prescribed nodes requires a Krylov subspace of dimension M to be generated from the solution from the previous time step in order to compute an approximation of the form (15), and has temporal order of accuracy M − 1.

As will be demonstrated in the numerical results in the next section, this approach of prescribing the extremal nodes as in (18) is not effective for an operator with a variable leading coefficient, such as

Lu = −(p(x)u_x)_x + q(x)u.

By applying block Lanczos to a matrix L_N arising from discretization of this operator, and initial block R_0(ω), we find that the block R_1(ω) produced by the block Lanczos algorithm,

R_1(ω) = L_N X_1(ω) − X_1(ω) M_1(ω),

contains vectors that are nearly parallel to p̃e_ω and e_ω, where p̃(x) = p(x) − p̄. Therefore, in order to approximate the largest block Gaussian node, which is the largest eigenvalue of T_K, we seek a linear combination of these two vectors, v_ω = p̃e_ω + c e_ω, that maximizes the Rayleigh quotient

(v_ω^H L_N v_ω)/(v_ω^H v_ω). (19)

The constant c satisfies the quadratic equation

‖p̃‖_2^2 c² + (p̃^H p̃²) c − ‖p̃‖_2^4 = 0 (20)


and we set λ_{2K}(ω) equal to the corresponding Rayleigh quotient. We will also examine the idea of choosing the largest node for each component so that it serves as a sharp upper bound, rather than a sharp lower bound, on the largest node of a block Gaussian rule. To that end, we prescribe

λ_{2K}(ω) = c_0 ω² + q̄, (21)

where c_0 is chosen to be an approximate upper bound on the asymptotic constant for the largest block Gaussian node as a function of ω. In all variations, the interior nodes λ_2, . . . , λ_{2K−1} are chosen to be equally spaced between λ_1 and λ_{2K}.

Once the nodes are prescribed, we use the fact that any interpolatory quadrature rule computes the exact integral of a polynomial that interpolates the integrand at the nodes, in conjunction with the fact that each integral represents a bilinear form e_ω^H f(A) u^n for some function f, to avoid having to explicitly compute any quadrature weights. Instead, we construct the solution at time t_{n+1} using the representation (15), in which the diagonal entries of the matrices D_j are the coefficients of the polynomials that interpolate f(λ) at the prescribed nodes.

Time-stepping methods that employ a polynomial approximation of the solution operator effectively use the same quadrature nodes and weights for all components of the solution. By viewing such methods as special cases of KSS methods with component-independent nodes and weights, it can readily be seen why such methods have difficulty with stiff systems. Even though the bilinear form representing each component of the solution involves the same function of the matrix, each bilinear form is an integral with a different measure α(λ), influenced by both the solution and the corresponding basis function. As such, there is little hope that a single quadrature rule can yield sufficient accuracy for every member of a family of integrals with very different weight functions, unless it has a very large number of nodes.

4. NUMERICAL RESULTS

Numerical experiments comparing KSS methods to a variety of time-stepping methods, including finite-difference methods, Runge-Kutta methods and backward differentiation formulae, can be found in [22, 24, 27, 29]. Comparisons with a number of Krylov subspace methods based on exponential integrators [16, 17, 18], including preconditioned Lanczos iteration [34, 39], are made in [31].

In this section, we will solve various PDE using the method described in (2) with 4 iterations of the Lanczos algorithm, a 2-node block KSS method that uses 4 (scalar) block Gaussian nodes, and a 4-node KSS method with equally spaced nodes chosen using asymptotic block Lanczos iteration. All methods are configured to use Krylov subspaces of the same dimension. Since the exact solutions are not known, accuracy will be gauged using the 2-norm of a relative error estimate that is obtained by comparing the computed solution to a solution computed using a smaller time step, which will convey the order of convergence in time.

4.1. Parabolic Problems

4.1.1. One-Dimensional Problems. We first demonstrate the accuracy of KSS methods combined with asymptotic block Lanczos iteration on a parabolic equation in one space dimension,

u_t − (p(x)u_x)_x + q(x)u = 0,   0 < x < 2π,   t > 0, (22)

where the coefficients p(x) and q(x), given by

p(x) = 1,   q(x) = 4/3 + (1/4) cos x, (23)

are chosen to be smooth functions. The initial condition is

u(x, 0) = 1 + (3/10) cos x − (1/20) sin 2x,   0 < x < 2π, (24)


and periodic boundary conditions are imposed. Since the leading coefficient is constant, we use (18) to prescribe the extremal nodes for the KSS method combined with asymptotic block Lanczos iteration. The results are shown in Figure 1 and Table I. We observe that both variations of KSS methods exhibit approximately third-order accuracy in time at the chosen time steps, but the Lanczos method described by (2) is far less accurate, in terms of both order of convergence and the magnitude of the error; only at smaller time steps does it exhibit the expected third-order convergence. We note that the accuracy of Lanczos iteration is degraded somewhat when the number of grid points is doubled, but that does not affect the performance of either KSS method.

Figure 1. Estimates of relative error at t = 1 in the solution of (22), (23), (24), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (solid curve), a 2-node block KSS method with 4 block Gaussian nodes (dashed curve), and Lanczos iteration as described in (2) (dotted curve) on an N-point grid and various time steps. All methods are 3rd-order accurate in time.

N    ∆t    KSS-equi    KSS-Gauss   Lanczos
128  1     3.591e-003  4.298e-004  1.386e-001
     1/2   3.321e-004  1.973e-005  1.384e-001
     1/4   3.625e-005  1.483e-006  9.498e-002
     1/8   4.256e-006  1.419e-007  4.903e-002
     1/16  5.120e-007  1.397e-008  3.043e-002
256  1     3.590e-003  4.298e-004  1.935e-001
     1/2   3.321e-004  1.973e-005  1.933e-001
     1/4   3.616e-005  1.483e-006  1.297e-001
     1/8   4.168e-006  1.419e-007  6.898e-002
     1/16  4.285e-007  1.397e-008  1.133e-002

Table I. Estimates of relative error at t = 1 in the solution of (22), (23), (24), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi), a 2-node block KSS method with 4 block Gaussian nodes (KSS-Gauss), and Lanczos iteration as described in (2) (Lanczos) on an N-point grid and various time steps ∆t. All methods are 3rd-order accurate in time.

The effectiveness of choosing the nodes using (18) can be explained with the assistance of Figure 2. It can be seen that the extremal nodes agree very well with the block Gaussian nodes computed by the standard block KSS method. It is interesting to note that choosing the interior nodes to achieve equal spacing is sufficient to achieve nearly as much accuracy as computing the block Gaussian nodes for each Fourier coefficient, which requires substantially greater computational effort.


10 J. V. LAMBERS

Figure 2. Quadrature nodes used by the 4-node KSS method with equally spaced nodes prescribed by (18) (blue circles, only extremal nodes shown) and the 2-node block KSS method with 4 block Gaussian nodes (red crosses) applied to the problem (22), (23), (24).

We now repeat this experiment of solving (22), except with more oscillatory coefficients

p(x) = 1 + (1/2) cos x − (1/4) sin 2x + (1/8) cos 3x,

q(x) = 1 + (1/4) sin x − (1/4) cos 2x + (1/8) sin 3x − (1/8) cos 4x, (25)

and initial data

u(x, 0) = 1 + (3/10) cos x − (3/20) sin 2x + (3/40) cos 3x, 0 < x < 2π. (26)

The results are shown in Figure 3 and Table II. We see that compared to the case of smoother coefficients and data, the KSS method combined with asymptotic block Lanczos iteration according to (18) is a failure. An explanation for this difference in performance can be found in Figure 4. The extremal nodes λ4(ω) are not in agreement with the largest block Gaussian nodes, except in asymptotic growth rate. It is also worth noting that the block Gaussian nodes are not as clustered around specific values as they are in the case of smoother coefficients.

Based on the variable leading coefficient, we instead select the extremal nodes according to (19) and solve the same problem. The results are shown in Figure 3 and Table II. We observe that not only does the KSS method with asymptotic block Lanczos iteration yield far more accuracy than when (18) was used, but it is actually more accurate than the standard KSS method with block Gaussian nodes. While KSS with block Gaussian nodes converges only superlinearly for the chosen time steps, KSS with equally spaced nodes converges superquadratically. In Figure 5, it can be seen that the extremal nodes of the KSS method with equally spaced nodes are in much better agreement with the extremal block Gaussian nodes, which helps explain the substantial improvement in performance.


Figure 3. Estimates of relative error at t = 1 in the solution of (22), (25), (26), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (solid curve) or (19) (dash-dot curve), a 2-node block KSS method with 4 block Gaussian nodes (dashed curve), and Lanczos iteration as described in (2) (dotted curve) on an N-point grid and various time steps. All methods are 3rd-order accurate in time.

N    ∆t    KSS-equi-(18)  KSS-equi-(19)  KSS-Gauss   Lanczos
128  1     2.440e-001     9.833e-002     2.605e-002  4.274e-002
     1/2   1.964e-001     9.097e-003     7.846e-003  4.017e-002
     1/4   1.891e-001     1.068e-003     2.875e-003  3.628e-002
     1/8   3.967e-001     2.099e-004     1.074e-003  2.047e-002
     1/16  1.983e-001     3.676e-005     2.958e-004  3.492e-003
256  1     2.440e-001     9.833e-002     2.604e-002  6.822e-002
     1/2   1.964e-001     9.097e-003     7.838e-003  5.978e-002
     1/4   1.891e-001     1.068e-003     2.868e-003  3.838e-002
     1/8   3.967e-001     2.099e-004     1.134e-003  2.921e-002
     1/16  1.983e-001     3.676e-005     3.126e-004  6.651e-003

Table II. Estimates of relative error at t = 1 in the solution of (22), (25), (26), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi-(18)) or (19) (KSS-equi-(19)), a 2-node block KSS method with 4 block Gaussian nodes (KSS-Gauss), and Lanczos iteration as described in (2) (Lanczos) on an N-point grid and various time steps ∆t. All methods are 3rd-order accurate in time.

4.1.2. Two-Dimensional Problems. We now consider a parabolic problem in two space dimensions,

u_t − ∇ · (p(x, y)∇u) + q(x, y)u = 0, 0 < x, y < 2π, t > 0, (27)

where the smooth coefficients are given by

p(x, y) = 1,   q(x, y) = 4/3 + (1/4) cos x − (1/4) sin y. (28)

The initial condition is

u(x, y, 0) = 1 + (3/10) cos x − (1/20) sin 2y, 0 < x, y < 2π, (29)

and periodic boundary conditions are imposed.

The results are shown in Figure 6 and Table III. We observe essentially the same behavior of all three methods as in the one-dimensional case. The Lanczos-based exponential integrator (2)


Figure 4. Quadrature nodes used by the 4-node KSS method with equally spaced nodes prescribed by (18) (blue circles, only extremal nodes shown) and the 2-node block KSS method with 4 block Gaussian nodes (red crosses) applied to the problem (22), (25), (26).

does not exhibit the theoretically expected third-order accuracy, and its performance is significantly degraded by an increase in the number of grid points. KSS methods, on the other hand, have no such difficulty with the higher resolution, and actually do yield third-order accuracy in time, with block Gaussian nodes being more accurate.

N   ∆t    KSS-equi    KSS-Gauss   Lanczos
16  1     4.577e-003  1.385e-003  4.822e-003
    1/2   4.455e-004  9.948e-005  6.803e-004
    1/4   4.980e-005  9.785e-006  7.680e-005
    1/8   5.819e-006  1.083e-006  1.293e-004
    1/16  6.203e-007  1.146e-007  4.175e-006
32  1     4.577e-003  1.385e-003  4.702e-003
    1/2   4.455e-004  9.948e-005  5.772e-004
    1/4   4.983e-005  9.785e-006  7.524e-004
    1/8   5.854e-006  1.083e-006  7.363e-004
    1/16  6.603e-007  1.146e-007  3.406e-004

Table III. Estimates of relative error at t = 1 in the solution of (27), (28), (29), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi), a 2-node block KSS method with 4 block Gaussian nodes (KSS-Gauss), and Lanczos iteration as described in (2) (Lanczos) on an N-point grid (per dimension) and various time steps ∆t. All methods are 3rd-order accurate in time.


Figure 5. Quadrature nodes used by the 4-node KSS method with equally spaced nodes prescribed by (19) (blue circles, only extremal nodes shown) and the 2-node block KSS method with 4 block Gaussian nodes (red crosses) applied to the problem (22), (25), (26).

Figure 6. Estimates of relative error at t = 1 in the solution of (27), (28), (29), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (solid curve), a 2-node block KSS method with 4 block Gaussian nodes (dashed curve), and Lanczos iteration as described in (2) (dotted curve) on an N-point grid (per dimension) and various time steps. All methods are 3rd-order accurate in time.

We repeat the experiment with more oscillatory coefficients

p(x, y) = 1 + (1/2) cos(x + y) − (1/4) sin(2(x − y)) + (1/8) cos(3(x + y)),

q(x, y) = 1 + (1/4) sin x − (1/4) cos 2y + (1/8) sin 3x − (1/8) cos 4y. (30)


The initial data

u(x, y, 0) = 1 + (3/10) cos x − (3/20) sin(2(x + y)) + (3/40) cos 3x, 0 < x, y < 2π, (31)

is more oscillatory as well. The results are shown in Figure 7 and Table IV. Once again, we observe the same behavior as in the one-dimensional case: using equally spaced nodes chosen according to (18) yields extremely poor results. All three methods exhibit degradation of accuracy as the number of grid points is increased, including order of accuracy, as the KSS method with block Gaussian nodes is at best second-order accurate.

Figure 7. Estimates of relative error at t = 1 in the solution of (27), (30), (31), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (solid curve) or (19) (dash-dot curve), a 2-node block KSS method with 4 block Gaussian nodes (dashed curve), and Lanczos iteration as described in (2) (dotted curve) on an N-point grid (per dimension) and various time steps. All methods are 3rd-order accurate in time.

N   ∆t    KSS-equi-(18)  KSS-equi-(19)  KSS-Gauss   Lanczos
16  1     5.510e-001     3.204e-001     2.653e-002  1.351e-001
    1/2   3.305e-001     4.005e-002     8.202e-003  9.480e-002
    1/4   7.355e-001     1.909e-003     2.836e-003  4.589e-002
    1/8   4.712e-001     2.761e-004     1.039e-003  6.060e-003
    1/16  1.388e-001     4.289e-005     1.581e-005  1.270e-003
32  1     6.962e-001     3.204e-001     2.645e-002  1.222e-001
    1/2   5.798e-001     3.972e-002     8.585e-003  1.109e-001
    1/4   8.134e-001     1.968e-003     3.280e-003  9.978e-002
    1/8   6.468e-001     3.171e-004     1.107e-003  8.071e-002
    1/16  4.171e-001     5.503e-005     2.788e-004  4.767e-002

Table IV. Estimates of relative error at t = 1 in the solution of (27), (30), (31), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi-(18)) or (19) (KSS-equi-(19)), a 2-node block KSS method with 4 block Gaussian nodes (KSS-Gauss), and Lanczos iteration as described in (2) (Lanczos) on an N-point grid (per dimension) and various time steps ∆t. All methods are 3rd-order accurate in time.

In an effort to improve accuracy, we take the variation in the leading coefficient p(x, y) into account and prescribe the largest node λ4(ω) for each Fourier coefficient using (19) rather than (18). The result of this adjustment is shown in Figure 7 and Table IV. As in the one-dimensional case, KSS with equally spaced nodes actually outperforms KSS with block Gaussian nodes, exhibiting superquadratic convergence in time, compared to superlinear for the block Gaussian case. We also


note that once again, the convergence of KSS with equally spaced nodes is negligibly affected by an increase in the number of grid points, unlike for KSS with block Gaussian nodes or the Lanczos-based exponential integrator (2).

4.2. Hyperbolic Problem

We now apply all three methods to the second-order wave equation

u_tt = (p(x)u_x)_x − q(x)u, 0 < x < 2π, t > 0, (32)

with smooth coefficients p(x) and q(x) defined in (23). The initial data is

u(x, 0) = 1 + (3/10) cos x − (1/20) sin 2x,   u_t(x, 0) = (1/2) sin x + (2/25) cos 2x, 0 < x < 2π, (33)

and periodic boundary conditions are imposed. KSS methods are applied to the wave equation by reducing it to a first-order system and then computing both the solution and its time derivative. Performing K block Lanczos iterations each on the solution and its time derivative, which corresponds to 2K scalar quadrature nodes for each, yields O(∆t^{4K−2}) accuracy. Details can be found in [14, 22].
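The reduction to a first-order system can be sketched as follows; this is the standard reformulation, stated here under the assumption that the operator Lu = (p(x)u_x)_x − q(x)u is negative definite, while the precise treatment used by the KSS methods is given in [14, 22]:

```latex
% Writing (32) as u_{tt} = Lu and introducing v = u_t gives
\frac{\partial}{\partial t}\begin{bmatrix} u \\ v \end{bmatrix}
= \begin{bmatrix} 0 & I \\ L & 0 \end{bmatrix}
  \begin{bmatrix} u \\ v \end{bmatrix},
\qquad
\begin{bmatrix} u(\cdot,t+\Delta t) \\ v(\cdot,t+\Delta t) \end{bmatrix}
= \begin{bmatrix}
    \cos(\sqrt{-L}\,\Delta t) & (\sqrt{-L})^{-1}\sin(\sqrt{-L}\,\Delta t) \\
    -\sqrt{-L}\,\sin(\sqrt{-L}\,\Delta t) & \cos(\sqrt{-L}\,\Delta t)
  \end{bmatrix}
  \begin{bmatrix} u(\cdot,t) \\ v(\cdot,t) \end{bmatrix}.
```

Each Fourier coefficient of u and of v is then a bilinear form involving these operator functions, which is what the block quadrature rules approximate componentwise.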

The results are shown in Figure 8 and Table V. Because of the number of scalar nodes and the second-order derivative with respect to time, all three methods are theoretically sixth-order accurate in time for sufficiently small time steps, but for the time steps chosen in this experiment, only the two variations of KSS methods come close to exhibiting such convergence, and do so independently of the spatial resolution, while the accuracy of the Lanczos-based exponential integrator once again deteriorates significantly from the increase in the number of grid points. We also note that while KSS with block Gaussian nodes is again more accurate than KSS with equally spaced nodes, as in the parabolic case, the gap between the error estimates for the two methods is much smaller in the hyperbolic case.

Figure 8. Estimates of relative error at t = 1 in the solution of (32), (23), (33), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (solid curve), a 2-node block KSS method with 4 block Gaussian nodes (dashed curve), and Lanczos iteration as described in (2) (dotted curve) on an N-point grid and various time steps. All methods are 6th-order accurate in time.

We now examine the performance over a longer time interval, with a final time of t = 10 rather than t = 1, and using time steps that are ten times as large. While we do not observe the same order of convergence in the two variations of KSS methods (greater than 4th-order accuracy on average), both methods still perform quite well in spite of the large time steps that are at least 25 times the CFL limit in the case of N = 256 grid points, while the Lanczos-based exponential integrator is unable to produce a solution with reasonable accuracy.


N    ∆t    KSS-equi    KSS-Gauss   Lanczos
128  1     1.124e-005  1.197e-005  4.831e-006
     1/2   3.976e-007  3.194e-007  7.661e-006
     1/4   7.132e-009  5.008e-009  1.015e-004
     1/8   1.144e-010  7.465e-011  1.349e-005
     1/16  8.099e-012  1.114e-012  7.373e-010
256  1     1.124e-005  1.197e-005  4.877e-006
     1/2   3.976e-007  3.194e-007  1.291e-003
     1/4   7.132e-009  5.008e-009  9.163e-004
     1/8   1.143e-010  7.465e-011  7.380e-003
     1/16  7.888e-012  1.114e-012  4.456e-005

Table V. Estimates of relative error at t = 1 in the solution of (32), (23), (33), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi), a 2-node block KSS method with 4 block Gaussian nodes (KSS-Gauss), and Lanczos iteration as described in (2) (Lanczos) on an N-point grid and various time steps ∆t. All methods are 6th-order accurate in time.

Figure 9. Estimates of relative error at t = 10 in the solution of (32), (23), (33), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (solid curve), a 2-node block KSS method with 4 block Gaussian nodes (dashed curve), and Lanczos iteration as described in (2) (dotted curve) on an N-point grid and various time steps. All methods are 6th-order accurate in time.

We now solve the second-order wave equation with the more oscillatory coefficients defined in (25), and initial data

u(x, 0) = 1 + (3/10) cos x − (3/20) sin 2x + (3/40) sin 3x,

u_t(x, 0) = (1/2) sin x + (1/4) cos 2x − (1/8) sin 3x, 0 < x < 2π, (34)

with periodic boundary conditions. The KSS method with equally spaced nodes employs (18) to choose the extremal nodes. The results are shown in Figure 10 and Table VII. The performance of all three methods is dismal, with error actually increasing as the time step decreases for N = 256 grid points. This contrasts with the parabolic case, in which the KSS method with block Gaussian nodes and the Lanczos-based exponential integrator were both able to deliver reasonable accuracy; in this hyperbolic problem, higher-frequency components in the solution persist over time instead of being exponentially damped, thus highlighting the importance of choosing nodes wisely for such problems.


N    ∆t   KSS-equi    KSS-Gauss   Lanczos
128  10   1.339e+000  2.019e-002  7.869e-001
     5    1.259e-001  1.051e-002  4.492e-001
     5/2  5.229e-003  7.018e-003  1.412e-001
     5/4  1.940e-004  1.392e-004  2.532e-001
     5/8  2.371e-006  1.190e-006  1.301e-001
256  10   1.339e+000  2.019e-002  2.874e+000
     5    1.259e-001  1.051e-002  4.020e+000
     5/2  5.229e-003  7.018e-003  5.556e+000
     5/4  1.940e-004  1.392e-004  5.545e+000
     5/8  2.371e-006  1.190e-006  1.628e+000

Table VI. Estimates of relative error at t = 10 in the solution of (32), (23), (33), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi), a 2-node block KSS method with 4 block Gaussian nodes (KSS-Gauss), and Lanczos iteration as described in (2) (Lanczos) on an N-point grid and various time steps ∆t. All methods are 6th-order accurate in time.

Figure 10. Estimates of relative error at t = 1 in the solution of (32), (25), (34), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (solid curve), (19) (dash-dot curve with circles), or (21) (dash-dot curve with x's), a 2-node block KSS method with 4 block Gaussian nodes (dashed curve), and Lanczos iteration as described in (2) (dotted curve) on an N-point grid and various time steps. All methods are 6th-order accurate in time.

In the interest of choosing nodes more wisely, we turn to (19) for prescribing the largest node for each Fourier coefficient, as in the parabolic case. The results of this strategy for the hyperbolic case are shown in Figure 10 and Table VII. We observe that the performance of the KSS method with equally spaced nodes is drastically improved compared to the previous experiment in which (18) was used to choose the extremal nodes. However, as the time step decreases, an increase in the relative error estimate occurs at the higher resolution, after previously exhibiting approximately 5th-order accuracy in time. Therefore, further improvement in the node selection scheme is required for robustness.

We note that in Figure 5, the largest nodes for the KSS method with equally spaced nodes are still slightly smaller than those of the KSS method with block Gaussian nodes. We therefore artificially “force” the largest nodes for the equally spaced case to be slightly larger by using (21). The nodes of the two variations of the KSS method are shown in Figure 11, and the results of this strategy are shown in Figure 10 and Table VII. The KSS method with equally spaced nodes no longer exhibits the degradation in accuracy as the number of grid points increases, and consistently exhibits greater


N    ∆t    KSS-equi-(18)  KSS-equi-(19)  KSS-equi-(21)  KSS-Gauss   Lanczos
128  1     1.750e-002     1.115e-002     1.151e-002     9.667e-003  3.133e-002
     1/2   4.911e-004     5.051e-004     5.712e-004     4.455e-004  2.586e-003
     1/4   7.951e-006     1.675e-005     2.008e-005     7.362e-006  1.830e-003
     1/8   6.793e-005     4.083e-007     4.997e-007     1.046e-005  9.908e-004
     1/16  2.524e-009     7.468e-009     9.197e-009     2.394e-009  3.646e-007
256  1     1.750e-002     1.115e-002     1.151e-002     9.667e-003  3.133e-002
     1/2   4.911e-004     5.051e-004     5.712e-004     4.455e-004  1.040e-002
     1/4   7.951e-006     1.675e-005     2.008e-005     7.395e-006  3.121e-002
     1/8   2.770e-003     4.083e-007     4.997e-007     3.067e-004  3.408e-002
     1/16  2.335e+000     8.018e-006     9.574e-009     5.368e-004  3.467e-002

Table VII. Estimates of relative error at t = 1 in the solution of (32), (25), (34), with periodic boundary conditions, computed by a 4-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi-(18)), (19) (KSS-equi-(19)), or (21) (KSS-equi-(21)), a 2-node block KSS method with 4 block Gaussian nodes (KSS-Gauss), and Lanczos iteration as described in (2) (Lanczos) on an N-point grid and various time steps ∆t. All methods are 6th-order accurate in time.

than fifth-order accuracy in time, improving toward the theoretically expected sixth-order accuracy as ∆t decreases. The insensitivity of the error to an increase in the number of grid points is finally achieved in the hyperbolic case for oscillatory coefficients and data, as it had been for other, “nicer” problems.

Figure 11. Quadrature nodes used by the 4-node KSS method with equally spaced nodes prescribed by (21) (blue circles, only extremal nodes shown) and the 2-node block KSS method with 4 block Gaussian nodes (red crosses) applied to the problem (32), (25), (34).


4.3. Performance

We now compare the efficiency of a KSS method with equally spaced nodes against a Lanczos-based exponential integrator, as in (2), that is allowed to proceed to convergence, instead of being restricted to a Krylov subspace of the same dimension. We also compare to a preconditioned Lanczos iteration that uses an RD-rational approximation of the matrix exponential [34, 39], which is designed to reduce the number of Lanczos iterations needed for convergence. They are implemented in MATLAB and executed on a Dell Inspiron 6400/E1505 notebook with a 2.00 GHz Core 2 Duo processor. Table VIII reports the execution time, in seconds, for all three methods applied to the problem (22), (23), (24) to compute the solution at t = 1 on 512- and 1024-point grids. In Figure 12, the execution time is plotted against the relative error estimates from Table VIII.

It can be seen that a KSS method with equally spaced quadrature nodes prescribed through asymptotic block Lanczos iteration yields much greater accuracy per unit of time than the Lanczos-based approximation (2). The performance of the KSS method is comparable to that of the RD-rational approximation when a larger time step is used, but for smaller time steps, the KSS method is more efficient. It is interesting to note that as ∆t decreases, the number of Lanczos iterations needed by the RD-rational approximation for convergence actually increases, whereas a decrease occurs for the standard Lanczos iteration. However, both Lanczos-based methods require higher-dimensional Krylov subspaces than the KSS method, and the RD-rational approximation incurs the additional expense of solving a system of equations during each iteration.

It is also interesting to note that both the KSS method and the RD-rational approximation exhibit favorable scalability properties in this experiment, as the execution time increases linearly with the number of grid points, since the dimensions of the Krylov subspaces on both grids are essentially the same. On the other hand, the approximation of (2) does not scale as effectively, as an increase in the number of grid points leads to a corresponding increase in the number of iterations needed.

In addition to the Krylov subspace generated by the solution from the previous time step, which all three methods require and whose dimensions are indicated in Table VIII, the KSS method requires storage for 2NK scalars, where N is the total number of grid points and K is the number of quadrature nodes per component of the solution. This storage is needed for the nodes themselves, and the coefficients of the N polynomials, each of degree K − 1, that interpolate e^{−λ∆t} (for parabolic problems) at the K nodes associated with each component.
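The storage pattern described above can be illustrated with a small sketch: for one component, the K nodes and the K Newton divided-difference coefficients of the interpolant of e^{−λ∆t} are what would be stored, 2K scalars in all. The node values below are arbitrary stand-ins, not nodes produced by the actual method:

```python
import numpy as np

def newton_coeffs(nodes, fvals):
    """In-place divided-difference table: c[j] becomes f[x_0, ..., x_j]."""
    c = np.array(fvals, dtype=float)
    for j in range(1, len(nodes)):
        for i in range(len(nodes) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (nodes[i] - nodes[i - j])
    return c

def newton_eval(nodes, c, x):
    """Horner-style evaluation of the Newton form at x."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (x - nodes[i]) + c[i]
    return p

dt = 0.1
nodes = np.array([0.0, 1.0, 2.0, 3.0])          # K = 4 hypothetical nodes
c = newton_coeffs(nodes, np.exp(-nodes * dt))
# per component, only `nodes` and `c` (2K scalars) need to be stored
```

Evaluating the stored interpolant then stands in for evaluating e^{−λ∆t} itself when the component's approximation is assembled.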

                  Execution time (s)              Krylov subspace dimension
N     ∆t      KSS-equi  RD-rational  Lanczos   KSS-equi  RD-rational  Lanczos
512   1       0.031     0.22         1.8       5         9.1          62.9
      0.5     0.063     0.41         1.5       5         9.2          45.6
      0.25    0.14      0.97         1.0       5         11.2         31.2
      0.125   0.25      2.3          0.94      5         12.9         21.8
      0.0625  0.47      4.4          0.97      5         12.6         15.1
1024  1       0.047     0.41         19.4      5         9.1          123.8
      0.5     0.094     0.83         14.2      5         9.5          86.7
      0.25    0.25      2.0          9.1       5         11.5         60.2
      0.125   0.42      4.7          7.5       5         13.0         41.5
      0.0625  0.91      9.0          4.4       5         12.8         28.3

Table VIII. Execution times, in seconds, and average Krylov subspace dimensions per time step, for the solution of (22), (23), (24) at t = 1, with periodic boundary conditions, computed by a 5-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi), an RD-rational approximation of the matrix exponential [34, 39], and Lanczos iteration as described in (2) (Lanczos) on 512- and 1024-point grids and various time steps ∆t. The KSS method is 4th-order accurate in time; the exponential integrators iterate to convergence during each time step.

It can be seen from this comparison that the componentwise approximation of the solution operator e^{−Lt} allows a change in the selection of the dimension of the Krylov subspace generated


Figure 12. Plot of relative error versus execution time (in seconds) for the solution of (22), (23), (24) at t = 1, with periodic boundary conditions, computed by a 5-node KSS method with equally spaced nodes chosen according to (18) (KSS-equi), an RD-rational approximation of the matrix exponential [34, 39], and Lanczos iteration as described in (2) (Lanczos) on 512- and 1024-point grids and various time steps ∆t. The KSS method is 4th-order accurate in time; the exponential integrators iterate to convergence during each time step.

by the solution from the previous time step, and, by extension, the number of Arnoldi or Lanczos iterations. In standard exponential integrators, this dimension is determined according to convergence criteria, and can therefore vary based on the number of grid points or the time step. In a KSS method, however, the dimension is determined solely by the desired temporal order of accuracy, and can therefore remain fixed throughout the entire computation.

5. DISCUSSION

We now discuss ongoing and upcoming research directions pertaining to the expansion of the applicability of KSS methods, including their enhancement through asymptotic analysis of Lanczos iteration as a function of parameters of the chosen basis functions.

5.1. Quadrature Node Distribution

In this paper, we have limited ourselves to using equally spaced nodes, once the extremal nodes have been selected. However, this restriction is not necessary. For example, we can apply a Chebyshev distribution to the nodes. Figure 13 compares the accuracy of equidistant and Chebyshev distributions, applied to the parabolic problem (22), (23), (24). For N = 512 grid points, the results are nearly identical, but for N = 2048, there is a slight degradation of accuracy for the Chebyshev distribution for the smallest time steps, after performing almost identically to the equidistant distribution for larger time steps. Other experiments have suggested that the results can be sensitive to the distribution of the nodes, particularly if some of them are quite clustered. This will be investigated further in future work.
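One way to realize the two distributions, given previously selected extremal nodes, is sketched below; mapped Chebyshev-Lobatto points are a natural choice for a "Chebyshev distribution" that keeps the extremal nodes fixed, though this is an assumption rather than necessarily the exact construction used here:

```python
import numpy as np

def equidistant_nodes(lmin, lmax, K):
    """K equally spaced nodes between the prescribed extremal nodes."""
    return np.linspace(lmin, lmax, K)

def chebyshev_nodes(lmin, lmax, K):
    """Chebyshev-Lobatto points mapped to [lmin, lmax]: the extremal
    nodes are preserved, and interior nodes cluster toward the ends."""
    x = np.cos(np.pi * np.arange(K) / (K - 1))   # Lobatto points on [-1, 1]
    return (lmin + lmax) / 2 + (lmax - lmin) / 2 * x[::-1]
```

Either function supplies the K nodes at which the interpolant of e^{−λ∆t} is constructed for a given component; only the placement of the interior nodes differs.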

Copyright c© 2010 John Wiley & Sons, Ltd. Numer. Linear Algebra Appl.(2010)Prepared usingnlaauth.cls DOI: 10.1002/nla

Page 21: Explicit High-Order Time-Stepping Based on Componentwise ... · from “matrices, moments and quadrature” to approximate bil inear forms involving functions of matrices via block

EXPLICIT TIME-STEPPING FROM ASYMPTOTIC LANCZOS ITERATION 21

[Figure 13 consists of two log-log plots of relative error versus time step, titled “parabolic problem, 1-D, N=512, T=1” and “parabolic problem, 1-D, N=2048, T=1”, each comparing KSS-equi and KSS-cheb.]

Figure 13. Estimates of relative error at t = 0.2 in the solution of (22), (23), (24), with periodic boundary conditions, computed by a 5-node KSS method with equally spaced nodes (solid curve) and with nodes spaced according to a Chebyshev distribution (dashed curve) on an N-point grid and various time steps. Both methods are 4th-order accurate in time.

5.2. Adaptive Step Size Control

The time step ∆t can be varied adaptively by obtaining an error estimate. This can be accomplished efficiently by computing a second approximate solution that involves one additional quadrature node for each component of the solution. As it is not necessary to use equal spacing between nodes, this node can be chosen to lie between the existing nodes for the given coefficient. By using Newton interpolation to construct the approximation of each component from the chosen nodes, the additional node can be accounted for efficiently to obtain the second approximation. The time step can then be adjusted according to the size of the error estimate as in standard adaptive methods.
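A minimal sketch of this update for a single component, with a generic function f in place of the actual integrand and arbitrary stand-in nodes: because the divided differences f[x_0, ..., x_j] for j < K do not involve the appended node, adding the extra node only appends one coefficient, and the difference between the two interpolant values serves as the error estimate.

```python
import numpy as np

def newton_coeffs(nodes, fvals):
    """Divided-difference coefficients of the Newton interpolant."""
    c = np.array(fvals, dtype=float)
    for j in range(1, len(nodes)):
        for i in range(len(nodes) - 1, j - 1, -1):
            c[i] = (c[i] - c[i - 1]) / (nodes[i] - nodes[i - j])
    return c

def newton_eval(nodes, c, x):
    """Horner-style evaluation of the Newton form at x."""
    p = c[-1]
    for i in range(len(c) - 2, -1, -1):
        p = p * (x - nodes[i]) + c[i]
    return p

f = lambda lam: np.exp(-0.1 * lam)          # stand-in integrand (dt = 0.1)
nodes = np.array([0.0, 2.0, 4.0, 6.0])      # existing nodes for one component
extra = 3.0                                 # extra node between existing ones

c_lo = newton_coeffs(nodes, f(nodes))
nodes_hi = np.append(nodes, extra)
c_hi = newton_coeffs(nodes_hi, f(nodes_hi))
# the first K coefficients are unchanged; only one new one must be computed
p_lo = newton_eval(nodes, c_lo, 1.0)
p_hi = newton_eval(nodes_hi, c_hi, 1.0)
err_est = abs(p_hi - p_lo)                  # drives the step-size controller
```

The key point is that `c_hi[:K]` coincides with `c_lo`, so the second approximation is obtained at the cost of a single additional coefficient per component.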

5.3. Other Spatial Discretizations

The main idea behind KSS methods, that higher-order accuracy in time can be obtained bycomponentwise approximation, is not limited to the enhancement of spectral methods that employFourier basis functions. LetA be anN ×N matrix andf be an analytic function. Thenf(A)v canbe computed efficiently by approximation of each component,with respect to a basis{uj}Nj=1, by ablock Gaussian quadrature rule withK block nodes (that is,2K scalar nodes) if expressions of theform

u_j^H A^k u_j,   u_j^H A^k v,   v^H A^k v     (35)

can be computed efficiently for j = 1, ..., N and k = 0, ..., 2K - 1, and transformation between the basis {u_j} and the standard basis can be performed efficiently.

The first expression in (35) can be computed analytically if the members of the basis {u_j} can be simply expressed in terms of j, as in Fourier spectral methods. The other two expressions in (35) are readily obtained from bases for Krylov subspaces generated by v. If A is sparse, then each block recursion coefficient, across all components, can be represented as the sum of a sparse matrix and a low-rank matrix [30].

Therefore, it is worthwhile to explore the adaptation of KSS methods to other spatial discretizations for which the recursion coefficients can be computed efficiently. The temporal order of accuracy achieved in the case of Fourier spectral methods applies to such discretizations, as only the measures in the Riemann-Stieltjes integrals are changing, not the integrands. This is demonstrated in [23], in which block KSS methods are adapted to spatial discretization via finite elements and collocation with radial basis functions, as in [40]. Future work will consist of asymptotic analysis of the recursion coefficients arising from these discretizations in order to develop effective algorithms for prescribing quadrature nodes.

Copyright © 2010 John Wiley & Sons, Ltd. Numer. Linear Algebra Appl. (2010). Prepared using nlaauth.cls. DOI: 10.1002/nla


5.4. Application to Nonlinear PDE

There are several exponential integrators that can be used for solving stiff systems of nonlinear ODE, such as those proposed in [2, 5, 18, 19, 35, 37, 38]. These methods involve the approximation of products of matrix functions and vectors, where the functions include the matrix exponential. A comparison of several of these methods with implicit and explicit integrators is given in [33]. It is worthwhile to explore whether methods of this type can be made more effective by replacing their Arnoldi- or Lanczos-based approaches to approximation of matrix functions with a componentwise approach such as that used by KSS methods, particularly with its enhancement through appropriate prescription of quadrature nodes. The 1-node block KSS method, which is first-order accurate in time for diffusion equations, has already been successfully applied to nonlinear diffusion equations from image processing [13, 15]. To achieve higher-order accuracy, an approach such as that described in [33], in which the KSS method is used to approximate the product of a function of the Jacobian with a vector, will be investigated.
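For reference, the Arnoldi-based approximation of exp(dt A)v that such exponential integrators typically employ (cf. [17]) can be sketched as follows. This is a generic illustration of the baseline approach, not the componentwise KSS alternative proposed here:

```python
import numpy as np

def arnoldi_expv(A, v, m, dt=1.0):
    """Approximate exp(dt*A) v by projecting A onto an m-dimensional
    Krylov subspace via Arnoldi, exponentiating the small Hessenberg
    matrix H_m, and lifting back: beta * V_m exp(dt*H_m) e_1."""
    n = v.size
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        # Modified Gram-Schmidt orthogonalization against earlier vectors
        for i in range(j + 1):
            H[i, j] = np.vdot(V[:, i], w)
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-14:   # happy breakdown: subspace is invariant
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    Hm = H[:m, :m]
    # Exponentiate the small matrix via eigendecomposition; for production
    # use, scipy.linalg.expm is the more robust choice.
    evals, evecs = np.linalg.eig(Hm)
    expH = (evecs * np.exp(dt * evals)) @ np.linalg.inv(evecs)
    return beta * (V[:, :m] @ expH[:, 0]).real
```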

6. SUMMARY

In many cases, the solution of systems of ODE can be expressed in terms of the approximate evaluation of a matrix function such as the exponential. Over the last few decades, much advancement has been made in the approximation of quantities of the form f(A)v and u^T f(A)v. Techniques for computing f(A)v, such as those described in [16, 17], have been used for several years to solve systems of ODE, but can encounter the same difficulties with stiffness that other time-stepping methods have.

On the other hand, techniques for computing the bilinear form u^T f(A)v, normally used for computing selected components of the solution of a system of equations, open the door to rethinking how time-stepping is carried out. Because they allow individual attention to be paid to each component of f(A)v in some basis, they can be used to construct new time-stepping methods that are more effective for large-scale problems. KSS methods represent one approach to exploiting this flexibility, and their use of distinct block Gaussian quadrature rules for each component makes them an attractive choice for various problems in terms of their accuracy.
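For a symmetric matrix A, the connection between u^T f(A)u and Gaussian quadrature that these techniques exploit (cf. [10]) can be sketched as follows; the quadrature nodes are the eigenvalues of the Lanczos tridiagonal matrix and the weights are the squared first components of its eigenvectors. The routine name is illustrative:

```python
import numpy as np

def lanczos_quadrature(A, u, K, f):
    """Approximate u^T f(A) u for symmetric A by a K-node Gauss rule,
    in the spirit of "matrices, moments and quadrature": run K steps
    of Lanczos on (A, u), then read nodes and weights off the
    eigendecomposition of the tridiagonal matrix T_K."""
    beta0 = np.linalg.norm(u)
    q_prev, q = np.zeros_like(u), u / beta0
    alphas, betas = [], []
    for j in range(K):
        w = A @ q - (betas[-1] * q_prev if betas else 0.0)
        alpha = np.dot(q, w)
        w -= alpha * q
        alphas.append(alpha)
        if j < K - 1:
            beta = np.linalg.norm(w)
            betas.append(beta)
            q_prev, q = q, w / beta
    T = np.diag(alphas) + np.diag(betas, 1) + np.diag(betas, -1)
    theta, S = np.linalg.eigh(T)    # nodes: Ritz values of T_K
    weights = S[0, :] ** 2          # weights: squared first components
    return beta0 ** 2 * np.sum(weights * f(theta))
```

With K nodes the rule is exact for polynomials of degree up to 2K - 1, which is what makes the componentwise approach competitive in accuracy.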

However, it has been established in this paper that the use of block Gaussian rules is neither necessary nor sufficient for maximizing accuracy, and alternative approaches based on prescribing quadrature nodes, whether based on asymptotic block Lanczos iteration or not, can not only achieve comparable or greater accuracy than block Gaussian rules, but can also do so with much greater efficiency. Therefore, it is worthwhile to continue exploration of such componentwise approaches to the solution of stiff systems of ODE, both linear and nonlinear, through the approximation of matrix functions.

REFERENCES

1. Atkinson, K.: An Introduction to Numerical Analysis, 2nd Ed. Wiley (1989)
2. Caliari, M., Ostermann, A.: Implementation of Exponential Rosenbrock-type Integrators. Applied Numerical Mathematics 59 (2009) 568-582.
3. Dahlquist, G., Eisenstat, S. C., Golub, G. H.: Bounds for the Error of Linear Systems of Equations Using the Theory of Moments. J. Math. Anal. Appl. 37 (1972) 151-166.
4. Davis, P., Rabinowitz, P.: Methods of Numerical Integration, 2nd Ed. Academic Press (1984)
5. Friesner, R. A., Tuckerman, L. S., Bornblaser, B. C., Russo, T. V.: A Method for Exponential Propagation of Large Systems of Stiff Nonlinear Differential Equations. J. Sci. Comput. 4 (1989) 5870-5876.
6. Gautschi, W.: Construction of Gauss-Christoffel Quadrature Formulas. Math. Comp. 22 (1968) 251-270.
7. Gautschi, W.: Orthogonal Polynomials: Constructive Theory and Applications. J. of Comp. and Appl. Math. 12/13 (1985) 61-76.
8. Golub, G. H.: Some Modified Matrix Eigenvalue Problems. SIAM Review 15 (1973) 318-334.
9. Golub, G. H.: Bounds for Matrix Moments. Rocky Mnt. J. of Math. 4 (1974) 207-211.
10. Golub, G. H., Meurant, G.: Matrices, Moments and Quadrature. Proceedings of the 15th Dundee Conference, June-July 1993, Griffiths, D. F., Watson, G. A. (eds.), Longman Scientific & Technical (1994)
11. Golub, G. H., Underwood, R.: The Block Lanczos Method for Computing Eigenvalues. Mathematical Software III, J. Rice, Ed. (1977) 361-377.


12. Golub, G. H., Welsch, J.: Calculation of Gauss Quadrature Rules. Math. Comp. 23 (1969) 221-230.
13. Guidotti, P., Lambers, J. V., Wong, E.: A Nonlinear Nonlocal Diffusion Model for Color Image Denoising. In preparation.
14. Guidotti, P., Lambers, J. V., Sølna, K.: Analysis of 1-D Wave Propagation in Inhomogeneous Media. Numer. Funct. Anal. Opt. 27 (2006) 25-55.
15. Guidotti, P., Longo, K.: Two Enhanced Fourth Order Diffusion Models for Image Denoising. Journal of Mathematical Imaging and Vision 40 (2011) 188-198.
16. Hochbruck, M., Lubich, C.: A Gautschi-type Method for Oscillatory Second-order Differential Equations. Numer. Math. 83 (1999) 403-426.
17. Hochbruck, M., Lubich, C.: On Krylov Subspace Approximations to the Matrix Exponential Operator. SIAM J. Numer. Anal. 34 (1996) 1911-1925.
18. Hochbruck, M., Lubich, C., Selhofer, H.: Exponential Integrators for Large Systems of Differential Equations. SIAM J. Sci. Comput. 19 (1998) 1552-1574.
19. Hochbruck, M., Ostermann, A.: Exponential Runge-Kutta Methods for Parabolic Problems. Applied Numerical Mathematics 53 (2005) 323-339.
20. Lambers, J. V.: Derivation of High-Order Spectral Methods for Time-dependent PDE using Modified Moments. Electron. T. Numer. Ana. 28 (2008) 114-135.
21. Lambers, J. V.: Enhancement of Krylov Subspace Spectral Methods by Block Lanczos Iteration. Electron. T. Numer. Ana. 31 (2008) 86-109.
22. Lambers, J. V.: An Explicit, Stable, High-Order Spectral Method for the Wave Equation Based on Block Gaussian Quadrature. IAENG Journal of Applied Mathematics 38 (2008) 333-348. Available at http://www.iaeng.org/IJAM/issues_v38/issue_4/IJAM_38_4_10.pdf.
23. Lambers, J. V.: High-order Time-stepping for Galerkin and Collocation Methods Based on Component-wise Approximation of Matrix Functions. Proceedings of the World Congress on Engineering, London, 2011. In press.
24. Lambers, J. V.: Implicitly Defined High-Order Operator Splittings for Parabolic and Hyperbolic Variable-Coefficient PDE Using Modified Moments. International Journal of Computational Science 2 (2008) 376-401.
25. Lambers, J. V.: Krylov Subspace Methods for Variable-Coefficient Initial-Boundary Value Problems. Ph.D. Thesis, Stanford University (2003). Available at http://www.math.usm.edu/lambers/pub/thesis.pdf.
26. Lambers, J. V.: Krylov Subspace Spectral Methods for the Time-Dependent Schrodinger Equation with Non-Smooth Potentials. Numer. Algorithms 51 (2009) 239-280.
27. Lambers, J. V.: Krylov Subspace Spectral Methods for Variable-Coefficient Initial-Boundary Value Problems. Electron. T. Numer. Ana. 20 (2005) 212-234.
28. Lambers, J. V.: A Multigrid Block Krylov Subspace Spectral Method for Variable-Coefficient Elliptic PDE. IAENG Journal of Applied Mathematics 39 (2009) 236-246. Available at http://www.iaeng.org/IJAM/issues_v39/issue_4/IJAM_39_4_07.pdf.
29. Lambers, J. V.: Practical Implementation of Krylov Subspace Spectral Methods. J. Sci. Comput. 32 (2007) 449-476.
30. Lambers, J. V.: Solution of Time-Dependent PDE Through Component-wise Approximation of Matrix Functions. IAENG Journal of Applied Mathematics 41 (2011) 1-10. Available at http://www.math.usm.edu/lambers/papers/ijam2010.pdf.
31. Lambers, J. V.: Spectral Methods for Time-dependent Variable-coefficient PDE Based on Block Gaussian Quadrature. Proceedings of the 2009 International Conference on Spectral and High-Order Methods (2010), in press. Available at http://www.math.usm.edu/lambers/papers/icosahom_lambers_final.pdf.
32. Lambers, J. V.: A Spectral Time-Domain Method for Computational Electrodynamics. Adv. Appl. Math. Mech. 1(6) (2009) 781-798. Available at http://www.global-sci.org/aamm/volumes/v1n6/pdf/16-781.pdf.
33. Loffeld, J., Tokman, M.: Comparative Performance of Exponential, Implicit and Explicit Integrators for Stiff Systems of ODEs. Submitted.
34. Moret, I., Novati, P.: RD-rational Approximation of the Matrix Exponential Operator. BIT 44 (2004) 595-615.
35. Ostermann, A., Thalhammer, M., Wright, W. M.: A Class of Explicit Exponential General Linear Methods. BIT Numerical Mathematics 46 (2006) 409-431.
36. Shampine, L. F., Reichelt, M. W.: The MATLAB ODE Suite. SIAM Journal of Scientific Computing 18 (1997) 1-22.
37. Tokman, M.: Efficient Integration of Large Stiff Systems of ODEs with Exponential Propagation Iterative (EPI) Methods. J. Comput. Phys. 213 (2006) 748-776.
38. Vaissmoradi, N., Malek, A., Momeni-Masuleh, S. H.: Error Analysis and Applications of the Fourier-Galerkin Runge-Kutta Schemes for High-order Stiff PDEs. Journal of Computational and Applied Mathematics 231 (2009) 124-133.
39. van den Eshof, J., Hochbruck, M.: Preconditioning Lanczos Approximations to the Matrix Exponential. SIAM J. of Sci. Comput. 27 (2006) 1438-1457.
40. Yao, G.: Local Radial Basis Function Methods for Solving Partial Differential Equations. Ph.D. Thesis, University of Southern Mississippi (2010).
