
A SUBSPACE PRECONDITIONING ALGORITHM

FOR EIGENVECTOR/EIGENVALUE COMPUTATION

James H. Bramble, Andrew V. Knyazev, and Joseph E. Pasciak

Advances in Computational Mathematics, 6 (1996), no. 2, 159–189.

We consider the problem of computing a modest number of the smallest eigenvalues along with orthogonal bases for the corresponding eigenspaces of a symmetric positive definite operator A defined on a finite dimensional real Hilbert space V. In our applications, the dimension of V is large and the cost of inverting A is prohibitive. In this paper, we shall develop an effective parallelizable technique for computing these eigenvalues and eigenvectors utilizing subspace iteration and preconditioning for A. Estimates will be provided which show that the preconditioned method converges linearly when used with a uniform preconditioner under the assumption that the approximating subspace is close enough to the span of the desired eigenvectors.

1. Introduction.

In this paper, we shall be concerned with computing a modest number of the smallest eigenvalues and their corresponding eigenvectors of a large symmetric ill-conditioned system. More explicitly, let A be a symmetric and positive definite linear operator on an N-dimensional real vector space V with inner product (·, ·) and norm ‖·‖ = (·, ·)^{1/2}. The distinct eigenvalues, {λ_i}, of A are positive real numbers which, along with their respective eigenvectors {v_i}, satisfy

Av_i = λ_i v_i.

The eigenvalues {λ_i} are ordered to be increasing with i and are assumed to be simple. The eigenvectors {v_i} are orthogonal with respect to the inner product (·, ·) and chosen so that (Av_i, v_i) = 1. The algorithms and results of this paper carry over to the case of Hermitian operators on complex inner product spaces without change.

We consider the case where the first s eigenvalues are simple and well separated. It seems possible to extend the analysis to the case of eigenvalues of higher multiplicity.

1991 Mathematics Subject Classification. Primary 65N30; Secondary 65F10.
This manuscript has been authored under contract number DE-AC02-76CH00016 with the U.S. Department of Energy. Accordingly, the U.S. Government retains a non-exclusive, royalty-free license to publish or reproduce the published form of this contribution, or allow others to do so, for U.S. Government purposes. This work was also supported in part under the National Science Foundation Grants No. DMS-9007185, DMS-9501507 and NSF-CCR-9204255, and by the U.S. Army Research Office through the Mathematical Sciences Institute, Cornell University.


The extension of the results to the case of eigenvalues with little separation is somewhat more tedious. If the operator A is a mesh analog of a PDE with multiple eigenvalues, then A has clusters of eigenvalues, and this is one of the most interesting practical examples of a bad separation of eigenvalues. However, for this case the operator A can be viewed as a perturbation of an operator with well separated multiple eigenvalues; see [24]. The analysis of such a perturbation appears possible for the method described below, but will not be addressed in this paper.

We shall be interested in iterative schemes for computing {λ_i} and {v_i}, for i = 1, . . . , s, where s is small compared to N. We will only assume that a procedure for evaluating the action of A applied to vectors in V is available. Given a basis for V, the corresponding matrix representing A is often large and sparse, and a full computer representation, including the zero elements, is not feasible since its size would be too large to manage. We will also avoid the computation of the action of A^{-1}.

There are many methods for obtaining eigenvalues and eigenvectors of symmetric positive definite matrices (see [44]). Methods like the QR algorithm work extremely well for relatively small matrices. Classical iterative methods involve subspaces of vectors which result from applying A − λI or its inverse with a nonnegative parameter λ which may change from iteration to iteration. Because A is ill-conditioned and we seek the eigenspaces corresponding to the smaller eigenvalues, to be effective, the classical methods require inversions of A − λI.
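To fix ideas, the classical approach just mentioned can be sketched as follows (our own illustration, not from the paper); note the linear solve with A − λI that the preconditioned method studied below is designed to avoid.

```python
import numpy as np

def inverse_iteration(A, shift, v, steps=50):
    """Classical shifted inverse iteration: repeatedly apply (A - shift*I)^{-1}.

    Converges to the eigenpair whose eigenvalue is closest to `shift`,
    but each step requires a solve with the ill-conditioned A - shift*I."""
    M = A - shift * np.eye(A.shape[0])
    for _ in range(steps):
        v = np.linalg.solve(M, v)
        v /= np.linalg.norm(v)
    return v @ (A @ v) / (v @ v), v   # Rayleigh quotient, eigenvector
```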

In this paper, we shall study an iterative eigenvalue scheme which utilizes subspace iteration and preconditioning. A preconditioner B is a symmetric positive definite operator on V which “approximates” the inverse of A. For our purposes, we shall assume that B is scaled so that the operator I − BA is a reducer. This means that there is a number γ in (0, 1) satisfying

(1.1) |(A(I −BA)v, v)| ≤ γ(Av, v) for all v ∈ V.

Note that (1.1) implies that

(1.2) (1 − γ)(Av, v) ≤ (BAv, Av) ≤ (1 + γ)(Av, v) for all v ∈ V.
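Concretely (an illustration of ours, not from the paper), since BA is self-adjoint and positive in the A-inner product, the smallest γ satisfying (1.1) is the spectral radius of I − BA, and the scaling of B can be chosen to minimize it:

```python
import numpy as np

def reducer_gamma(A, B):
    """Smallest gamma satisfying (1.1): the spectral radius of I - BA.

    BA is similar to the SPD matrix A^{1/2} B A^{1/2}, so its eigenvalues
    mu are real and positive, and gamma = max |1 - mu|."""
    mu = np.linalg.eigvals(B @ A).real
    return np.max(np.abs(1.0 - mu))

def optimal_scaling(A, B):
    """Rescale B by 2/(mu_min + mu_max), which minimizes gamma."""
    mu = np.linalg.eigvals(B @ A).real
    return 2.0 / (mu.min() + mu.max()) * B
```

With this scaling, γ = (κ − 1)/(κ + 1), where κ is the condition number of BA, so a good preconditioner (κ moderate) yields γ bounded away from one.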

There is a vast literature of techniques for developing preconditioners for symmetric positive definite problems, especially in the case when the operator A is a discretization of an elliptic partial differential equation (see, e.g., [1], [2], [15], [16], [27], [28]). The best preconditioners satisfy (1.1) with γ bounded away from one (independently of N). In addition, a good preconditioner is economical to evaluate. This means that the cost of computing the action of B applied to an arbitrary vector should not be much greater than that of applying A. When A corresponds to a discretization of a partial differential equation, low cost preconditioners are often known for which (1.1) holds with γ independent of the mesh size and hence of the number of unknowns (see, e.g., [3]–[6], [12], [21], [52]). Multigrid and domain decomposition are two examples of effective techniques for developing preconditioners for the discrete systems arising from approximations to elliptic boundary value problems (see, e.g., [4]–[13]).

The use of preconditioned iterations for computing eigenvectors and eigenvalues was first studied by L. V. Kantorovitch's graduate student B. A. Samokish [50], and then by W. V. Petryshyn [45]. Axel Ruhe [46], [47] clarified the asymptotic connection of such methods and similar preconditioned iterative methods for finding a nontrivial solution of the corresponding Helmholtz system.

Explicit convergence rate estimates, independent of the number of unknowns, for preconditioned iterative methods in the case of one eigenvector were first obtained in the Russian literature, by S. K. Godunov et al. [29] and by E. G. D'yakonov et al. [25], [26] (see also [22] and the included references). In particular, the base iteration used in our preconditioned subspace iteration was used in [29] and was further analyzed in [25].

There are a number of alternative preconditioned schemes which have been proposed to further improve convergence of the base method. The possibility of using Chebyshev parameters to accelerate the convergence of two stage preconditioned eigenvalue iterations was discovered by V. P. Il'in [30]. An analogous idea of using the Lanczos method in the inner stage is due to D. Scott et al. [51], [42]. Explicit convergence rate estimates, independent of the number of unknowns, for these methods have been established in [33], [32]. The convergence estimates for the two stage method are better than those for the base method when high accuracy is required. The locally optimal preconditioned conjugate gradient method was suggested in [35]. In [35], [36], a new preconditioned variant especially suited for the domain decomposition approach was presented and the corresponding convergence rate estimates were proved.

Davidson proposed a diagonally preconditioned subspace method [18] in which the dimension of the subspace increases at each step of the iteration. Although the original Davidson method is likely to converge fast with a good preconditioner, the cost per iteration and the storage requirements grow with the iteration number. This makes the method unacceptable in many large scale applications. Nevertheless, the method became popular in computational molecular physics and was further developed (see, e.g., [19], [37], [41], [40], [43], [49], [20]). Theoretically, Davidson-like methods are still not well studied; for example, it has not been proved that their convergence is independent of the number of unknowns with a proper preconditioner, though this seems to be the case.

Generalizations of the preconditioned methods for the simultaneous computation of several leading eigenvalues and the corresponding invariant subspaces by using subspace iterations were suggested in [50], [39], [38], [14]. The first and, in fact, the only explicit estimates on the convergence behavior, independent of the number of unknowns, were obtained by D'yakonov and Knyazev (DK) in [23], [24], [22], where simplified methods with the same iteration operator for all vectors in a subspace were developed and analyzed. This simplification, however, leads to a method which computes only one eigenvalue, the largest in the group, at a time. To find another, smaller, eigenvalue, the method has to be used again with an initial subspace of a smaller dimension and with an orthogonalization to the previously computed eigenvector. For the DK method, the convergence estimates do not depend on the separation of the eigenvalues.

In the present paper we analyze a preconditioned subspace iteration technique. This involves a recursively generated sequence of subspaces. Given a subspace in this sequence, we compute the approximate eigenvectors by applying the Rayleigh-Ritz procedure. The next subspace in the sequence is defined as the linear space spanned by the vectors which result from the application of a simple preconditioned eigenvector iteration procedure to the Ritz eigenvectors. In contrast to the DK method, our iteration operator is different for different Ritz eigenvectors. On the other hand, our method differs from that of [39], [38] by using a fixed-shift basic algorithm, whereas [39], [38] suggest a block version of the steepest descent method [50]. Our method could also be thought of as a truncated version of a block Davidson method.

We present a theorem which guarantees convergence of the preconditioned subspace method provided that the starting subspace is sufficiently accurate. As in the classical block inverse power method, the convergence rate estimate for the smaller eigenvalues and their corresponding eigenvectors is better than for eigenpairs whose eigenvalues are closer to λ_{s+1}. The rates depend only on the first s + 1 eigenvalues and on γ, and are independent of the largest eigenvalue λ_m and/or the number of unknowns. This is crucial for our applications, where we seek the eigenvalues of a discrete second order elliptic operator. The largest eigenvalue of such an operator and the number of unknowns tend to infinity like h^{-2}, where h is the mesh parameter. The only disadvantage of our theory is that the accuracy condition on the initial subspace depends on the gaps in the first s + 1 eigenvalues, in contrast to the theory of the classical block inverse power method and of the simplified preconditioned subspace DK method. Numerical experiments suggest that the actual convergence of our method is, in fact, independent of the gap and that its overall performance is much better than that of the DK method. We have chosen the DK method for numerical comparison as it was, before the present paper, the only preconditioned subspace iteration method with proven explicit convergence rate estimates independent of the number of unknowns.

The form of the algorithm proposed was motivated by a need for developing a parallelizable eigenvalue/eigenvector algorithm which would be effective in computing a number of the smallest eigenvalues and their corresponding eigenvectors for large symmetric systems. The scheme is currently being applied to the computation of several hundred eigenfunctions for a problem with several thousand unknowns which arises in first-principles electronic structure computations [17].

The outline of the remainder of the paper is as follows. We describe the algorithm in Section 2 and give a theorem which bounds its convergence. Section 3 contains several useful estimates for the Rayleigh-Ritz method, partially based on results of [48], [33], [34]. The proof of the theorem is given in Section 4. Finally, the results of numerical experiments involving the preconditioned subspace technique are given in Section 5.

2. The subspace preconditioning algorithm.

In this section, we describe the subspace preconditioning algorithm which will be studied in this paper. This algorithm involves the development of a sequence of subspaces V_s^n ⊂ V, for n = 1, 2, . . . , approximating the eigenspace

V_s = span{v_1, . . . , v_s}.

The initial approximation subspace V_s^0 of dimension s is assumed given.

Given an approximation subspace V_s^n of dimension s, the eigenvectors and eigenvalues of A are approximated by computing the Ritz eigenvectors {v_i^n} ⊂ V_s^n along with their corresponding eigenvalues λ_i^n satisfying

(2.1) (Av_i^n, w) = λ_i^n (v_i^n, w) for all w ∈ V_s^n.

Here λ_1^n ≤ λ_2^n ≤ · · · ≤ λ_s^n and the {v_i^n} are mutually orthogonal and normalized so that (Av_i^n, v_i^n) = 1, for i = 1, . . . , s.

The new subspace V_s^{n+1} is generated from a basis which is defined by applying a simple preconditioned eigenvalue iteration scheme to the Ritz vectors of the previous subspace V_s^n. Specifically, let

(2.2) ṽ_i^{n+1} = v_i^n − B(Av_i^n − λ_i^n v_i^n) for i = 1, . . . , s,

and define

V_s^{n+1} = span{ṽ_1^{n+1}, . . . , ṽ_s^{n+1}}.

The iteration (2.2) has been studied as a stand-alone iterative scheme for computing the smallest eigenvalue λ_1 and a vector in the corresponding eigenspace [25].
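For illustration only, the following dense-matrix sketch (not from the original paper; the function name and the use of SciPy are our own) carries out one pass of the two stages just described: the Rayleigh-Ritz step (2.1) on the current basis, followed by the preconditioned update (2.2).

```python
import numpy as np
from scipy.linalg import eigh

def subspace_iteration_step(A, B, V):
    """One step of the preconditioned subspace iteration (2.1)-(2.2).

    A, B : (N, N) symmetric positive definite arrays (operator, preconditioner).
    V    : (N, s) array whose columns span the current subspace V_s^n.
    Returns the Ritz values, A-normalized Ritz vectors, and a basis for V_s^{n+1}.
    """
    # Rayleigh-Ritz on span(V): the s x s generalized eigenproblem (2.1).
    lam, C = eigh(V.T @ (A @ V), V.T @ V)
    W = V @ C                                    # Ritz vectors v_i^n
    W /= np.sqrt(np.sum(W * (A @ W), axis=0))    # normalize so (A v_i, v_i) = 1
    # Preconditioned update (2.2): v~_i = v_i - B (A v_i - lam_i v_i).
    V_next = W - B @ (A @ W - W * lam)
    return lam, W, V_next
```

Repeating this step and monitoring the Ritz values reproduces the iteration analyzed below; in practice A and B would be applied matrix-free rather than stored.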

The above method could also be thought of as the simplest truncated version of a block Davidson method, which defines

V_s^{n+1} = V_s^n + span{ṽ_1^{n+1}, . . . , ṽ_s^{n+1}}.

We shall need some additional notation to state our convergence estimates. Let (·, ·)_A denote the inner product defined by

(v, w)_A = (Av, w) for all v, w ∈ V.

The corresponding norm will be denoted by

||v||_A ≡ (v, v)_A^{1/2} for all v ∈ V.

Let Q_s and Q_s^n be the (·, ·)_A orthogonal projection operators onto V_s and V_s^n, respectively. We measure the gap between the subspaces V_s and V_s^n by considering the norm of the difference of these projectors:

(2.3) θ^n ≡ ||Q_s^n − Q_s||_A.

We also define

θ_i^n ≡ ||v_i − Q_s^n v_i||_A, for i = 1, . . . , s,

and

∆ ≡ max_{i=1,...,s} (λ_{i+1} + λ_i)/(λ_{i+1} − λ_i).
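The gap (2.3) is a standard principal-angle quantity; for reference (our own sketch, not part of the paper), it can be computed by passing to the A-inner product, in which the A-orthogonal projectors become Euclidean orthogonal projectors:

```python
import numpy as np
from scipy.linalg import cholesky, qr, svdvals

def subspace_gap(A, Vs, Vn):
    """theta^n = ||Q_s^n - Q_s||_A for subspaces given by basis matrices Vs, Vn.

    With A = L L^T, the map x -> L^T x carries the A-inner product to the
    Euclidean one; the gap is then the sine of the largest principal angle."""
    L = cholesky(A, lower=True)
    Us, _ = qr(L.T @ Vs, mode='economic')
    Un, _ = qr(L.T @ Vn, mode='economic')
    cosines = svdvals(Us.T @ Un)        # cosines of the principal angles
    return np.sqrt(max(0.0, 1.0 - cosines.min()**2))
```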

The main result of this paper is given in the following theorem.

Theorem 2.1. Suppose that the initial approximation subspace V_s^0 satisfies

(2.4) Σ_{i=1}^{s} (θ_i^0)² ≤ (1 − γ)²(λ_1)^4(1 − λ_s/λ_{s+1})^4 / (1999 ∆²(λ_s)^4)

where γ satisfies (1.1). Then the dimension of V_s^n for n > 0 is equal to s. Moreover, we have

(2.5) ||(I − P_i)v_i^n||_A² ≤ (1.03 / (1 − λ_i/λ_{s+1})) δ̄_i^{2n} (θ_i^0)²

and

(2.6) 0 ≤ 1 − λ_i/λ_i^n ≤ (1.03 / (1 − λ_i/λ_{s+1})) δ̄_i^{2n} (θ_i^0)²

where

(2.7) δ_i = γ + (1 − γ)λ_i/λ_{s+1} < 1,

(2.8) δ̄_i = δ_i + ((1 − δ_s)/2) ((λ_{s+1} − λ_s)/(λ_{s+1} − λ_i))^{1/2} (λ_i/λ_s) < 1.

In (2.5), P_i denotes the orthogonal projector onto the one dimensional eigenspace spanned by v_i.

Remark 2.1. Note that δ̄_i is less than or equal to δ_i + (1 − δ_i)/2, which is clearly less than one. In addition, the convergence rate estimate δ̄_i is closer to δ_i for smaller eigenvalues since λ_i is smaller than λ_s in that case.

Remark 2.2. Note that ||v_i^n||_A = 1 and hence (2.5) implies that the eigenvector v_i^n converges to the eigenvector v_i exponentially with n.
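To make the rate bounds concrete, here is a small numerical illustration of (2.7) and (2.8) (our own sketch; the function name and the sample spectrum are invented for this example).

```python
import numpy as np

def convergence_rates(lams, lam_s1, gamma):
    """Evaluate the rate bounds (2.7)-(2.8) for eigenvalues lams = [lam_1..lam_s],
    next eigenvalue lam_s1 = lam_{s+1}, and preconditioner quality gamma."""
    lams = np.asarray(lams, dtype=float)
    lam_s = lams[-1]
    delta = gamma + (1.0 - gamma) * lams / lam_s1             # (2.7)
    delta_bar = delta + 0.5 * (1.0 - delta[-1]) * np.sqrt(
        (lam_s1 - lam_s) / (lam_s1 - lams)) * lams / lam_s    # (2.8)
    return delta, delta_bar

# Example: gamma = 0.5 and eigenvalues 1, 2, 3 with lam_4 = 10 give
# rates well below one, smaller (faster) for the smaller eigenvalues.
print(convergence_rates([1.0, 2.0, 3.0], 10.0, 0.5))
```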

Remark 2.3. The above theorem can be applied to the problem of approximating relatively few of the lowest eigenvalues and their corresponding eigenvectors of an elliptic boundary value problem defined on a bounded domain, in the case when the corresponding lower eigenvalues are simple. It is well known that these eigenvalues are distinct. Moreover, these eigenvalues and eigenvectors can be approximated by those which result from a finite element discretization. The eigenvalues of the discrete system exhibit behavior similar to those of the continuous problem. They are well separated provided that the mesh parameter h is suitably small, since the discrete eigenvalues converge to those of the continuous problem. If A = A_h is the system corresponding to the discretization with mesh size h, then the parameters λ_s, λ_1, ∆ can be bounded independently of the mesh parameter h. If one uses a preconditioner B = B_h which satisfies (1.1) with γ < 1 (not depending on h), then the above theorem provides asymptotic rates of convergence which do not depend on h and hence on the dimension of the underlying system.

Remark 2.4. With a slightly more complicated proof, it is possible to reduce the constant 1999 to 500 in the above theorem. Of course, even the latter constant is still too large to convince someone theoretically that our method is useful. However, the method appears to be much more robust in practice. In our experience, the method converges with initial subspaces consisting of vectors with random entries (see Section 5). Such an initial subspace fails to satisfy the accuracy condition (2.4).


The following corollary shows that the rates of convergence for a given eigenvalue/eigenvector pair tend asymptotically to δ_i as V_s^n converges to V_s.

Corollary 2.1. Assume that the hypotheses of Theorem 2.1 hold. Then for any ε ∈ (0, 1] there exists an m = m(ε) such that for n ≥ m,

||(I − P_i)v_i^n||_A² ≤ (1.03/(1 − λ_i/λ_{s+1})) (δ_i + ε(δ̄_i − δ_i))^{2n−2m} δ̄_i^{2m} (θ_i^0)²

and

0 ≤ 1 − λ_i/λ_i^n ≤ (1.03/(1 − λ_i/λ_{s+1})) (δ_i + ε(δ̄_i − δ_i))^{2n−2m} δ̄_i^{2m} (θ_i^0)²

where δ_i and δ̄_i are as in (2.7) and (2.8).

3. Properties of the Ritz approximation.

In this section, we give some lemmas which describe the approximation properties of the eigenvalues and eigenvectors resulting from the Ritz subspace method. These lemmas only depend on the distribution of the eigenvalues of A and the approximation properties of the subspace. Thus, we shall state them in terms of an arbitrary approximation subspace Ṽ ⊂ V of dimension less than or equal to s.

The Ritz eigenvectors {ṽ_i} ⊂ Ṽ along with their corresponding eigenvalues λ̃_i satisfy

(Aṽ_i, w) = λ̃_i(ṽ_i, w) for all w ∈ Ṽ.

Here λ̃_1 ≤ λ̃_2 ≤ · · · and the vectors in {ṽ_i} are mutually orthogonal and normalized so that (Aṽ_i, ṽ_i) = 1 for each i.

Let Q̃ be the (·, ·)_A orthogonal projection operator onto Ṽ and set

θ̃ ≡ ||Q̃ − Q_s||_A.

It will also be important to measure how well the actual eigenvectors can be approximated in the subspace Ṽ. To this end, we define the quantities

θ̃_i ≡ ||v_i − Q̃v_i||_A, for i = 1, . . . , s.

The eigenvalue accuracy can be estimated in terms of the above parameters by the following lemma.

Lemma 3.1. If

(3.1) θ̃ < 1

then the dimension of Ṽ is s and, for i = 1, . . . , s,

0 ≤ 1 − λ_i/λ̃_i ≤ θ̃².

The proof of the above lemma follows [33] and uses two additional lemmas. The first is essentially given in [31] (see Theorem 6.34).


Lemma 3.2. Let P and Q be two orthogonal projectors with M = Range(P), N = Range(Q) such that dim(N) ≤ dim(M) and

‖(I − Q)P‖ = δ < 1.

Then dim(N) = dim(M) and

‖P − Q‖ = ‖(I − P)Q‖ = δ.

Lemma 3.3. Let V_i be the space spanned by the eigenvectors v_j, for j = 1, . . . , i. We define the additional operators Q_i and Q̃_i to be the (·, ·)_A orthogonal projectors onto the spaces V_i and Ṽ_i ≡ Range(Q̃V_i). Assume that θ̃ < 1. Then for i ≤ s, the maps

Q_i : Ṽ_i → V_i, Q̃_i : V_i → Ṽ_i

are isomorphisms. Moreover,

(3.2) ||Q̃_i − Q_i||_A = ||(I − Q̃_i)Q_i||_A = ||(I − Q_i)Q̃_i||_A ≤ θ̃.

Proof. By definition of Q̃_i,

Q̃_i v = Q̃v for all v ∈ V_i.

Thus,

||(I − Q̃_i)Q_i||_A = max_{v∈V_i, ||v||_A=1} ||v − Q̃_i v||_A = max_{v∈V_i, ||v||_A=1} ||v − Q̃v||_A ≤ max_{v∈V_s, ||v||_A=1} ||v − Q̃v||_A = θ̃.

This proves the last inequality in (3.2). The lemma now follows from Lemma 3.2.

Proof (of Lemma 3.1). Let i be less than or equal to s and let ṽ∗ ∈ Ṽ_i be the eigenvector with maximal eigenvalue λ̃∗ of the eigenproblem

(Av, w) = λ(v, w) for all w ∈ Ṽ_i.

By Lemma 3.3, the dimension of Ṽ_i is equal to i. By the Courant–Fischer Theorem,

(3.3) λ_i ≤ λ̃_i ≤ λ̃∗

from which the lower bound of the lemma follows. For the upper bound, it suffices to bound the quantity 1 − λ_i/λ̃∗. We may assume that ||ṽ∗||_A = 1. We decompose

ṽ∗ = Q_i ṽ∗ + Q_i^⊥ ṽ∗.

Let Rq(w) = (Aw, w)/‖w‖² denote the Rayleigh quotient. Then

1 − λ_i/λ̃∗ = (λ̃∗ − λ_i)‖ṽ∗‖².

Now it follows from (3.2) that Q_i ṽ∗ ≠ 0. Clearly,

(3.4) 1 = Rq(Q_i ṽ∗)‖Q_i ṽ∗‖² + ||Q_i^⊥ ṽ∗||_A² ≤ λ_i ‖Q_i ṽ∗‖² + ||Q_i^⊥ ṽ∗||_A².

If Q_i^⊥ ṽ∗ = 0 then ṽ∗ ∈ V_i. In this case, it follows that λ̃∗ = λ_i and there is nothing more to prove. If Q_i^⊥ ṽ∗ ≠ 0 then (3.4) implies

1 ≤ λ_i ‖Q_i ṽ∗‖² + Rq(Q_i^⊥ ṽ∗)‖Q_i^⊥ ṽ∗‖² = λ_i ‖ṽ∗‖² + (Rq(Q_i^⊥ ṽ∗) − λ_i)‖Q_i^⊥ ṽ∗‖².

Consequently,

1 − λ_i/λ̃∗ ≤ (Rq(Q_i^⊥ ṽ∗) − λ_i)‖Q_i^⊥ ṽ∗‖² = [1 − λ_i/Rq(Q_i^⊥ ṽ∗)] ||Q_i^⊥ ṽ∗||_A² ≤ ||Q_i^⊥ ṽ∗||_A² ≤ ||(I − Q_i)Q̃_i||_A² ≤ θ̃²

where we used Lemma 3.3 for the last inequality. This completes the proof of Lemma 3.1.

The next lemma provides a relationship between θ̃ and the θ̃_i.

Lemma 3.4.

||(I − Q̃)Q_s||_A ≤ (Σ_{j=1}^s θ̃_j²)^{1/2}.

Moreover, if

(3.5) ||(I − Q̃)Q_s||_A < 1

then dim(Ṽ) = s and, for i = 1, . . . , s,

(3.6) θ̃_i ≤ θ̃ = ||Q̃ − Q_s||_A = ||(I − Q̃)Q_s||_A = ||(I − Q_s)Q̃||_A.

Proof. Let v be in V_s and write

v = Σ_{j=1}^s c_j v_j.

Clearly

||(I − Q̃)v||_A = ||Σ_{j=1}^s c_j(I − Q̃)v_j||_A ≤ Σ_{j=1}^s |c_j| ||(I − Q̃)v_j||_A ≤ (Σ_{j=1}^s c_j²)^{1/2} (Σ_{j=1}^s θ̃_j²)^{1/2} = ||v||_A (Σ_{j=1}^s θ̃_j²)^{1/2}.

It follows that

||(I − Q̃)Q_s||_A ≤ (Σ_{j=1}^s θ̃_j²)^{1/2}.

The lemma follows from Lemma 3.2.

Lemma 3.1 implies that for sufficiently small θ̃, λ̃_j ≠ λ_i whenever i, j ∈ {1, . . . , s} and i ≠ j. Thus, we can define the quantities

σ_{ij} = λ_i/|1 − λ_i/λ̃_j| for i ≠ j, i, j = 1, . . . , s.

Let

(3.7) σ_i = max_{j=1,...,s, j≠i}(σ_{ij}).

The next lemma provides bounds for the eigenvector error in terms of the approximation parameter θ̃_i. Its proof is essentially based on the analysis given in [48] and is included for completeness.

Lemma 3.5. Assume that (3.5) holds and that, for a fixed index i ∈ [1, . . . , s], λ̃_j ≠ λ_i for j ≠ i and j ∈ [1, . . . , s]. Then,

||(I − P_i)ṽ_i||_A² ≤ [1 + (σ_i²/(λ_1)²) θ̃²] θ̃_i² ≡ C_1 θ̃_i².

Here P_i is the projector onto the i-th eigenvector v_i.

Proof. Let P̃_j denote the (·, ·)_A orthogonal projector onto the subspace generated by ṽ_j. Note that

||(I − P_i)ṽ_i||_A = ||(I − P_i)P̃_i||_A = ||(I − P̃_i)P_i||_A = ||(I − P̃_i)v_i||_A

and that

(3.8) ||(I − P̃_i)v_i||_A² = ||(I − Q̃)v_i||_A² + ||(Q̃ − P̃_i)v_i||_A² = θ̃_i² + ||(Q̃ − P̃_i)v_i||_A².

We estimate the last term above as follows. Note that

Q̃ = Σ_{j=1}^s P̃_j and Q̃A^{−1}Q̃ = Σ_{j=1}^s (1/λ̃_j) P̃_j.

Thus,

(3.9) (1/λ_i − 1/λ̃_i)P̃_i v_i + Σ_{j=1, j≠i}^s (1/λ_i − 1/λ̃_j)P̃_j v_i = (1/λ_i − Q̃A^{−1}Q̃)Q̃v_i = Q̃A^{−1}(I − Q̃)v_i.

The two terms on the left of (3.9) are (·, ·)_A orthogonal and hence

(3.10) (1/σ_i)||(Q̃ − P̃_i)v_i||_A ≤ ||Σ_{j=1, j≠i}^s (1/λ_i − 1/λ̃_j)P̃_j v_i||_A ≤ ||Q̃A^{−1}(I − Q̃)v_i||_A ≤ ||Q̃A^{−1}(I − Q̃)||_A ||(I − Q̃)v_i||_A.

In the last product the second term is θ̃_i by definition, and it remains to estimate the first term. Since the projector Q_s commutes with A^{−1}, Lemma 3.4 and the triangle inequality give

(3.11) ||Q̃A^{−1}(I − Q̃)||_A = ||Q̃(A^{−1} − (1/(2λ_1))I)(I − Q̃)||_A ≤ ||Q̃(I − Q_s)(A^{−1} − (1/(2λ_1))I)(I − Q̃)||_A + ||Q̃(A^{−1} − (1/(2λ_1))I)Q_s(I − Q̃)||_A ≤ θ̃/λ_1.

Combining (3.10) and (3.11) gives

(3.12) ||(Q̃ − P̃_i)v_i||_A² ≤ (σ_i²/λ_1²) θ̃² θ̃_i².

The lemma follows combining (3.8) and (3.12).

The error in the corresponding eigenvalue is second order in θ̃_i and is bounded by the following lemma.

Lemma 3.6. Assume that the hypotheses of Lemma 3.5 hold. Then,

0 ≤ 1 − λ_i/λ̃_i ≤ ||(I − P_i)ṽ_i||_A² ≤ C_1 θ̃_i².

Proof. Using the identity λ̃_i = ‖ṽ_i‖^{−2} gives

1 − λ_i/λ̃_i = ((A − λ_i)ṽ_i, ṽ_i).

We clearly have

((A − λ_i)ṽ_i, ṽ_i) = (A(I − P_i)ṽ_i, (I − P_i)ṽ_i) − λ_i((I − P_i)ṽ_i, (I − P_i)ṽ_i) ≤ ||(I − P_i)ṽ_i||_A².

This completes the proof of Lemma 3.6.

Both the eigenvectors {v_i} and the corresponding approximate eigenvectors {ṽ_i} are orthonormal with respect to the (·, ·)_A inner product. The following lemma, based on [34], shows that for i ≠ j, |(v_i, ṽ_j)_A| is small with θ̃ since

|(v_i, ṽ_j)_A| = |((Q_s − P_j)v_i, ṽ_j)_A| ≤ ||(Q_s − P_j)ṽ_j||_A.


Lemma 3.7. Assume that (3.5) holds and that, for a fixed index j ∈ [1, . . . , s], λ̃_j ≠ λ_i for i ≠ j and i ∈ [1, . . . , s]. Then,

||(Q_s − P_j)ṽ_j||_A² ≤ C_2 θ̃² θ̃_j²

where

C_2 = (σ*_j)² C_1/(λ_1)²

and

σ*_j = max_{i=1,...,s, i≠j}(σ_{ij}).

Proof. We clearly have

(1/σ*_j)||(Q_s − P_j)ṽ_j||_A ≤ ||(Q_s − P_j)(A^{−1} − λ̃_j^{−1}I)ṽ_j||_A.

Since

(3.13) ((A^{−1} − λ̃_j^{−1}I)ṽ_j, w)_A = 0 for all w ∈ Ṽ,

it follows that

||(Q_s − P_j)(A^{−1} − λ̃_j^{−1}I)ṽ_j||_A = ||(Q_s − P_j)(I − Q̃)(A^{−1} − λ̃_j^{−1}I)ṽ_j||_A ≤ ||(Q_s − P_j)(I − Q̃)||_A ||(A^{−1} − λ̃_j^{−1}I)ṽ_j||_A.

Now by Lemma 3.4,

||(Q_s − P_j)(I − Q̃)||_A ≤ ||Q_s(I − Q̃)||_A = ||(I − Q̃)Q_s||_A = θ̃.

In addition, (3.13) implies that

||(A^{−1} − λ̃_j^{−1}I)ṽ_j||_A ≤ ||(A^{−1} − λ_j^{−1}I)ṽ_j||_A = ||(A^{−1} − λ_j^{−1}I)(I − P_j)ṽ_j||_A ≤ ||A^{−1} − λ_j^{−1}I||_A ||(I − P_j)ṽ_j||_A.

The lemma follows combining the above estimates with Lemma 3.5.

The final lemma of this section shows that the approximation parameter θ̃_i can be bounded in terms of the orthogonal component of the approximate eigenvector.

Lemma 3.8. If C_2θ̃² < 1 then

θ̃_i² ≤ (1 − C_2θ̃²)^{−1} ||Q_s^⊥ṽ_i||_A².

Proof. It follows that

θ̃_i² = ||(I − Q̃)v_i||_A² ≤ ||(I − P̃_i)v_i||_A² = ||(I − P_i)ṽ_i||_A² = ||Q_s^⊥ṽ_i||_A² + ||(Q_s − P_i)ṽ_i||_A².

Thus, applying Lemma 3.7 gives

θ̃_i² ≤ ||Q_s^⊥ṽ_i||_A² + C_2θ̃²θ̃_i².

The lemma immediately follows.


4. Convergence analysis.

In this section, we will prove the theorem and corollary stated in Section 2. Their proofs are based on three additional lemmas as well as the lemmas for the Ritz eigenpairs.

We shall require some additional notation for this section. If θ^n is sufficiently small, Lemma 3.1 implies that λ_j^n ≠ λ_i whenever i, j ∈ {1, . . . , s} and i ≠ j. Following Section 3, we define the quantities

σ_{ij}(n) = λ_i/|1 − λ_i/λ_j^n| for i ≠ j, i, j = 1, . . . , s

and set

σ_i(n) = max_{j=1,...,s, j≠i}(σ_{ij}(n)) and σ*_j(n) = max_{i=1,...,s, i≠j}(σ_{ij}(n)).

Let V_s^⊥ denote the A orthogonal complement of V_s. For each i in [1, 2, . . . , s], let [·, ·]_i denote the inner product

[v, w]_i = ((A − λ_iI)v, w) for all v, w ∈ V_s^⊥.

The corresponding norm (on V_s^⊥) will be denoted by |||·|||_i ≡ [·, ·]_i^{1/2}. It is equivalent to the original norm:

(4.1) |||v|||_i ≤ ||v||_A ≤ (λ_{s+1}/(λ_{s+1} − λ_i))^{1/2}|||v|||_i, v ∈ V_s^⊥.

We start by considering the effect of a related iteration operator on the complement of V_s. For each i in [1, 2, . . . , s], let

(4.2) E_i = I − Q_s^⊥B(A − λ_iI).

Note that E_i is symmetric on V_s^⊥ with respect to the inner product [·, ·]_i. Moreover, it is a reducer, as guaranteed by the following lemma.

Lemma 4.1. For all w in V_s^⊥,

(4.3) |||E_iw|||_i ≤ δ_i|||w|||_i

where δ_i is given by (2.7).

Proof. From (1.2) it follows that

(1 − γ)(A^{−1}w, w) ≤ (Bw, w) ≤ (1 + γ)(A^{−1}w, w) for all w ∈ V.

For u ∈ V_s^⊥,

(1 − λ_i/λ_{s+1})((A − λ_i)u, u) ≤ (A^{−1}(A − λ_i)u, (A − λ_i)u) = ((I − λ_iA^{−1})u, (A − λ_i)u) ≤ ((A − λ_i)u, u).

Combining the above inequalities gives

(1 − γ)(1 − λ_i/λ_{s+1})((A − λ_i)u, u) ≤ (B(A − λ_i)u, (A − λ_i)u) ≤ (1 + γ)((A − λ_i)u, u) for all u ∈ V_s^⊥.

This implies that

|[(I − Q_s^⊥B(A − λ_iI))u, u]_i| ≤ δ_i|||u|||_i² for all u ∈ V_s^⊥.

Inequality (4.3) follows from the symmetry of I − Q_s^⊥B(A − λ_iI) with respect to the [·, ·]_i inner product.

The next lemma shows that θ^{n+1} can be controlled in terms of θ^n.

Lemma 4.2. Assume that

θ^n < λ_1/((1 + γ)λ_s^n).

Define

C_0 = [1 + λ_s^n(1 + γ)/λ_1]².

Then dim V_s^{n+1} = s and

(4.4) (θ^{n+1})² ≤ C_0(θ^n)².

Proof. By the triangle inequality,

(4.5) θ^{n+1} = ||Q_s^{n+1} − Q_s||_A ≤ ||Q_s^{n+1} − Q_s^n||_A + ||Q_s^n − Q_s||_A = ||Q_s^{n+1} − Q_s^n||_A + θ^n.

By construction, dim V_s^{n+1} ≤ dim V_s^n. We now estimate

||(I − Q_s^{n+1})Q_s^n||_A = max_{v^n ∈ V_s^n, ||v^n||_A = 1} ||(I − Q_s^{n+1})v^n||_A.

Let v^n be an arbitrary element of V_s^n with ||v^n||_A = 1, v^n = Σ_{j=1}^s α_j v_j^n, and consider w = Σ_{j=1}^s α_j ṽ_j^{n+1}. We clearly have

||(I − Q_s^{n+1})v^n||_A ≤ ||v^n − w||_A = ||Σ_{j=1}^s α_j(v_j^n − ṽ_j^{n+1})||_A.

By definition,

Σ_{j=1}^s α_j(v_j^n − ṽ_j^{n+1}) = B Σ_{j=1}^s α_j(Av_j^n − λ_j^n v_j^n).

It follows by (1.2) and the identity

((A^{−1} − (λ_j^n)^{−1}I)v_j^n, w)_A = 0 for all w ∈ V_s^n

that

||Σ_{j=1}^s α_j(v_j^n − ṽ_j^{n+1})||_A ≤ (1 + γ)||Σ_{j=1}^s α_j(v_j^n − λ_j^n A^{−1}v_j^n)||_A = (1 + γ)||Σ_{j=1}^s α_j(I − Q_s^n)(v_j^n − λ_j^n A^{−1}v_j^n)||_A = (1 + γ)||(I − Q_s^n)A^{−1}Q_s^n Σ_{j=1}^s λ_j^n α_j v_j^n||_A.

Applying (3.11) gives

||Σ_{j=1}^s α_j(v_j^n − ṽ_j^{n+1})||_A ≤ (1 + γ)λ_s^n ||(I − Q_s^n)A^{−1}Q_s^n||_A ≤ (1 + γ)(λ_s^n/λ_1)θ^n.

Finally, by the assumptions of the lemma, the last value is less than one; therefore, by Lemma 3.2, we have dim V_s^{n+1} = s and

||Q_s^{n+1} − Q_s^n||_A = ||(I − Q_s^{n+1})Q_s^n||_A ≤ (1 + γ)(λ_s^n/λ_1)θ^n.

The lemma follows combining (4.5) and the last estimate.

We will need the following technical lemma.

Lemma 4.3. Let ∆ be as in Section 2 and suppose that

(θ^n)² ≤ α/∆

for some positive parameter α ≤ 1. Then Lemmas 3.5–3.8 apply with Ṽ = V_s^n. Moreover, the quantities σ_{ij} = σ_{ij}(n) are well defined, and

(4.6) λ_i^n ≤ λ_i(1 − α)^{−1},

(4.7) λ_1 ≤ σ_i(n) ≤ λ_s∆/(2 − α),

(4.8) λ_1 ≤ σ*_i(n) ≤ λ_s∆/(2 − α).

Proof. Lemma 3.1 shows that for i = 1, . . . , s,

1 − λ_i/λ_i^n ≤ α(λ_{i+1} − λ_i)/(λ_{i+1} + λ_i).

A simple manipulation gives

(4.9) 1/λ_i^n ≥ (1/λ_i)(1 − α(λ_{i+1} − λ_i)/(λ_{i+1} + λ_i)).

Inequality (4.6) immediately follows.

The left inequalities in (4.7) and (4.8) are based on the simple estimate λ_1 ≤ σ_{ij}(n). We next prove the right inequalities. Suppose that j is greater than i. Then λ_i ≤ λ_{j−1} < λ_j ≤ λ_j^n and hence

(4.10) λ_i/|1 − λ_i/λ_j^n| = λ_i/(1 − λ_i/λ_j^n) ≤ λ_{j−1}/(1 − λ_{j−1}/λ_j) = λ_{j−1}λ_j/(λ_j − λ_{j−1}) ≤ λ_j(λ_{j−1} + λ_j)/(2(λ_j − λ_{j−1})).

If j < i ≤ s then λ_i ≥ λ_{j+1} > λ_j^n and (4.9) imply

(4.11) λ_i/|1 − λ_i/λ_j^n| ≤ λ_{j+1}/(λ_{j+1}/λ_j^n − 1) ≤ λ_jλ_{j+1}/(λ_{j+1}(1 − α(λ_{j+1} − λ_j)/(λ_{j+1} + λ_j)) − λ_j) ≤ λ_{j+1}(λ_j + λ_{j+1})/((2 − α)(λ_{j+1} − λ_j)).

The inequality (4.7) follows from (4.10) and (4.11). We have also shown (4.8).

The next lemma provides a perturbation estimate which will be used to develop bounds for convergence of the preconditioned subspace method.

Lemma 4.4. Assume that θ^n ≤ ∆^{−1/2}, θ^{n+1} ≤ ∆^{−1/2} and define

C_1 = 1 + (σ_i²(n)/(λ_1)²)(θ^n)²,

C_2 = (σ*_i(n))²C_1/(λ_1)²,

C_3 = [(1 − C_2(θ^n)²)(1 − λ_i/λ_{s+1})]^{−1},

C_4 = 1 + (σ_i²(n + 1)/(λ_1)²)(θ^{n+1})²,

C_5 = (C_3C_1)^{1/2}(1 + λ_i^n(1 + γ)/λ_1)(1 − C_1(θ_i^n)²)^{−1/2}(√C_4 + 1),

C_6 = (C_3/(1 − C_1(θ_i^n)²))^{1/2}((1 + γ)/λ_1)(λ_i^nC_1 + λ_i√C_2).

If C_2(θ^n)² < 1 then

(4.12) |||Q_s^⊥v_i^{n+1}|||_i ≤ δ̂_i|||Q_s^⊥v_i^n|||_i

where

δ̂_i = (1 − C_1(θ_i^n)²)^{−1/2}δ_i + θ^{n+1}C_5 + θ^nC_6.

Proof. Let us first notice that Lemma 4.3 guarantees that Lemmas 3.5–3.8 can be applied with Ṽ = V_s^n and with Ṽ = V_s^{n+1}; the quantities σ_i(n), σ*_i(n), and σ_i²(n + 1) are well defined; and the assumption C_2(θ^n)² < 1 of the lemma assures C_1(θ_i^n)² < 1 because C_2 ≥ C_1 by (4.8). Therefore, all constants above are well defined.

We start the proof with two technical estimates. By (1.2), it follows that

||B(Av_i^n − λ_i^nv_i^n)||_A ≤ (1 + γ)||v_i^n − λ_i^nA^{−1}v_i^n||_A.

For i = 1, . . . , s,

(v_i^n − λ_i^nA^{−1}v_i^n, v_i^n)_A = 0

and hence

(4.13) ||v_i^n − λ_i^nA^{−1}v_i^n||_A ≤ (λ_i^n/λ_i)||v_i^n − λ_iA^{−1}v_i^n||_A = (λ_i^n/λ_i)||(I − λ_iA^{−1})(I − P_i)v_i^n||_A ≤ (λ_i^n/λ_i)||I − λ_iA^{−1}||_A ||(I − P_i)v_i^n||_A ≤ (λ_i^n/λ_1)||(I − P_i)v_i^n||_A.

Consequently, by Lemma 3.5,

(4.14) ||B(Av_i^n − λ_i^nv_i^n)||_A ≤ ((1 + γ)λ_i^n/λ_1)√C_1 θ_i^n.

Using the estimate above, we now prove that θ_i^n controls θ_i^{n+1}. Indeed,

θ_i^{n+1} ≤ ||v_i − (v_i^n, v_i)_A^{−1}ṽ_i^{n+1}||_A = (1/|(v_i^n, v_i)_A|)||ṽ_i^{n+1} − P_iv_i^n||_A.

Applying Lemma 3.5, we can estimate the first factor by

(4.15) |(v_i^n, v_i)_A|^{−1} = ||P_iv_i^n||_A^{−1} ≤ (1 − C_1(θ_i^n)²)^{−1/2}.

We clearly have

(4.16) ṽ_i^{n+1} − P_iv_i^n = (I − P_i)v_i^n + ṽ_i^{n+1} − v_i^n = (I − P_i)v_i^n − B(Av_i^n − λ_i^nv_i^n)

and thus (4.15), the triangle inequality, and (4.14) give

(4.17) θ_i^{n+1} ≤ (1 − C_1(θ_i^n)²)^{−1/2}√C_1(1 + λ_i^n(1 + γ)/λ_1)θ_i^n.

We are now ready to prove the lemma. Define

u_i^n = v_i^n/(v_i^n, v_i)_A,

u_i^{n+1} = v_i^{n+1}/(v_i^{n+1}, v_i)_A,

ū_i^{n+1} = ṽ_i^{n+1}/(v_i^n, v_i)_A,

w_i^{n+1} = E_iQ_s^⊥u_i^n.

Then

|||Q_s^⊥v_i^{n+1}|||_i = |||Q_s^⊥u_i^{n+1}|||_i |(v_i^{n+1}, v_i)_A|.

i , vi)A|.Applying the triangle inequality to

Q⊥s u

n+1

i = wn+1

i +Q⊥s (u

n+1

i − un+1

i ) + (Q⊥s u

n+1

i − wn+1

i )

and Lemma 4.1 with w = Q⊥s u

ni gives

|||Q⊥s u

n+1

i ||| i ≤ δi|||Q⊥s u

ni ||| i + |||Q⊥

s (un+1

i − un+1

i )||| i+ |||Q⊥

s un+1

i − wn+1

i ||| i.

By multiplying both sides by |(vn+1

i , vi)A| and using the estimate |(vn+1

i , vi)A| ≤ 1for all terms on the right but the second one, it follows that

(4.18)|||Q⊥

s vn+1

i ||| i ≤ δi|||Q⊥s u

ni ||| i + |||Q⊥

s (un+1

i − un+1

i )||| i|(vn+1

i , vi)A|+ |||Q⊥

s un+1

i − wn+1

i ||| i.

We estimate the three terms in (4.18) separately. For the first, by (4.15),

(4.19) |||Q_s^⊥u_i^n|||_i = |||Q_s^⊥v_i^n|||_i/|(v_i^n, v_i)_A| ≤ (1 − C_1(θ_i^n)²)^{−1/2}|||Q_s^⊥v_i^n|||_i.

By (4.1), the norm part of the second term of (4.18) can be bounded by

(4.20) |||Q_s^⊥(u_i^{n+1} − ū_i^{n+1})|||_i ≤ ||Q_s^⊥(u_i^{n+1} − ū_i^{n+1})||_A = ||Q_s^⊥Q_s^{n+1}(u_i^{n+1} − ū_i^{n+1})||_A ≤ θ^{n+1}||u_i^{n+1} − ū_i^{n+1}||_A.

By (4.16) and the fact that P_iu_i^n = v_i,

u_i^{n+1} − ū_i^{n+1} = u_i^{n+1} − v_i − (I − P_i)u_i^n + B(Au_i^n − λ_i^nu_i^n)

and the triangle inequality gives

||u_i^{n+1} − ū_i^{n+1}||_A ≤ ||(I − P_i)u_i^{n+1}||_A + ||(I − P_i)u_i^n||_A + ||B(Au_i^n − λ_i^nu_i^n)||_A = ||(I − P_i)v_i^{n+1}||_A/|(v_i^{n+1}, v_i)_A| + (||(I − P_i)v_i^n||_A + ||B(Av_i^n − λ_i^nv_i^n)||_A)/|(v_i^n, v_i)_A|.

Multiplying both sides by |(v_i^{n+1}, v_i)_A| and using the estimates |(v_i^{n+1}, v_i)_A| ≤ 1 and (4.15) for the last term gives

||u_i^{n+1} − ū_i^{n+1}||_A |(v_i^{n+1}, v_i)_A| ≤ ||(I − P_i)v_i^{n+1}||_A + (||(I − P_i)v_i^n||_A + ||B(Av_i^n − λ_i^nv_i^n)||_A)(1 − C_1(θ_i^n)²)^{−1/2}.

Applying Lemma 3.5 with Ṽ = V_s^{n+1} and Ṽ = V_s^n and (4.14) gives

(4.21) ||u_i^{n+1} − ū_i^{n+1}||_A |(v_i^{n+1}, v_i)_A| ≤ √C_4 θ_i^{n+1} + [1 + (1 + γ)λ_i^n/λ_1]√C_1 (1 − C_1(θ_i^n)²)^{−1/2} θ_i^n.

By Lemma 3.8 and (4.1),

(4.22) (θ_i^n)² ≤ [1 − C_2(θ^n)²]^{−1} ||Q_s^⊥v_i^n||_A² ≤ C_3|||Q_s^⊥v_i^n|||_i².

Combining (4.20), (4.21), (4.17) and (4.22) gives

(4.23) |||Q_s^⊥(u_i^{n+1} − ū_i^{n+1})|||_i |(v_i^{n+1}, v_i)_A| ≤ C_5θ^{n+1}|||Q_s^⊥v_i^n|||_i.

For the last term in (4.18), we use the fact that

Q_s^⊥ū_i^{n+1} − w_i^{n+1} = (λ_i^n − λ_i)Q_s^⊥Bu_i^n − Q_s^⊥B(A − λ_iI)(Q_s − P_i)u_i^n

and hence, by (4.1), (1.2), Lemmas 3.6, 3.4, and 3.7, and (4.15),

(4.24) |||Q_s^⊥ū_i^{n+1} − w_i^{n+1}|||_i ≤ (1 + γ)|(v_i^n, v_i)_A|^{−1}{λ_i^n(1 − λ_i/λ_i^n)||A^{−1}v_i^n||_A + ||(I − λ_iA^{−1})(Q_s − P_i)v_i^n||_A} ≤ ((1 + γ)/λ_1)(λ_i^nC_1 + λ_i√C_2)(1 − C_1(θ_i^n)²)^{−1/2}θ^nθ_i^n ≤ C_6θ^n|||Q_s^⊥v_i^n|||_i.

We used (4.22) for the last inequality above. Combining (4.18), (4.19), (4.23) and (4.24) verifies (4.12). This completes the proof of the lemma.

Proof of Theorem 2.1. Assume that (2.4) holds. Let β be defined by

(4.25) β = (1 − γ)²(λ_1)²(1 − λ_s/λ_{s+1})³/(1941(λ_s)²).

We use mathematical induction to show that for k ≥ 0,

(4.26) (θ^k)² ≤ β(λ_1)²/(∆²(λ_s)²)

and

(4.27) |||Q_s^⊥v_i^k|||_i ≤ δ̄_i^k|||Q_s^⊥v_i^0|||_i

where δ̄_i is given by (2.8). It follows from (2.4) and Lemma 3.4 that (4.26) holds for k = 0. The inequality (4.27) holds trivially for k = 0.

Assume that (4.26) and (4.27) hold for k = n; we show that they also hold for k = n + 1. The induction assumption (4.26) with k = n, (4.6), (4.7), γ < 1, (4.25) and elementary manipulations give

(4.28) C_0 ≤ 9.0062(λ_s)²/(λ_1)², C_1 ≤ 1 + β/(2 − β)² ≤ 1.00013.

Thus by (4.4) and (4.26) with k = n,

(4.29) (θ^{n+1})² ≤ 9.0062(λ_s)²/(λ_1)² (θ^n)² ≤ 9.0062∆^{−2}β.

By (4.8),

σ_i²(n + 1) ≤ (λ_s)²∆²/(2 − 9.0062β)².

We also have

C_1(θ^n)² ≤ β(1 + β/(2 − β)²) ≤ .00052,

C_2(θ^n)² ≤ β(1 + β/(2 − β)²)/(2 − β)² ≤ .00013,

(1 − C_2(θ^n)²)^{−1} ≤ 1.00013,

C_3 ≤ 1.00013/(1 − λ_i/λ_{s+1}),

C_4 ≤ 1 + 9.0062β(λ_s)²/((2 − 9.0062β)²(λ_1)²) ≤ 1.00117,

θ^{n+1}C_5 ≤ 18.025√β λ_i/λ_1,

θ^nC_6 ≤ 3.0031(β/(1 − λ_i/λ_{s+1}))^{1/2} λ_i/λ_1.

We used the inequality (1 − λ_i/λ_{s+1})^{−1} ≤ ∆ for the next to last inequality above and (4.6) in the last two inequalities above.

Applying Lemma 4.4 gives that (4.12) holds with

(4.30) δ̂_i ≤ (1 + 1.00026√β)δ_i + 18.025√β λ_i/λ_1 + 3.0031(β/(1 − λ_i/λ_{s+1}))^{1/2} λ_i/λ_1 ≤ δ_i + 22.028(β/(1 − λ_i/λ_{s+1}))^{1/2} λ_i/λ_1.

It then follows from (4.25) that

δ̂_i ≤ δ_i + ((1 − γ)(1 − λ_s/λ_{s+1})/2)((λ_{s+1} − λ_s)/(λ_{s+1} − λ_i))^{1/2}(λ_i/λ_s) = δ̄_i

and hence (4.27) holds for k = n + 1.

The bound (4.29) is not strong enough to imply (4.26) for k = n + 1. We get a better bound as follows. The above inequalities imply that

(θ^{n+1})² ≤ 9.0062∆^{−2}β ≤ .00464(λ_1)²/((λ_s)²∆²),

((σ*_i(n + 1))²/(λ_1)²)C_4(θ^{n+1})² ≤ .00117.

Applying Lemma 3.4, (4.27) and the inequality analogous to (4.22) gives

(θ^{n+1})² ≤ Σ_{i=1}^s (θ_i^{n+1})² ≤ (1.00117/(1 − λ_s/λ_{s+1})) Σ_{i=1}^s |||Q_s^⊥v_i^{n+1}|||_i² ≤ (1.00117/(1 − λ_s/λ_{s+1})) Σ_{i=1}^s ||Q_s^⊥v_i^0||_A².

Now

(4.31) Q_s^⊥v_i^0 = (I − P_i)v_i^0 − (Q_s − P_i)v_i^0

and hence applying Lemmas 3.5 and 3.7 gives

(θ^{n+1})² ≤ (1.00117/(1 − λ_s/λ_{s+1})) Σ_{i=1}^s ((1 + √β)C_1 + (1 + 1/√β)C_2(θ^0)²)(θ_i^0)² ≤ (1.02984/(1 − λ_s/λ_{s+1})) Σ_{i=1}^s (θ_i^0)².

In the above inequality, the constants C_1 and C_2 correspond to n = 0. The inequality (4.26) for k = n + 1 then follows from (2.4). This completes the induction part of the theorem.

We now show that (4.26) and (4.27) imply (2.5) and (2.6). Lemmas 3.5 and 3.7, (4.22) and (4.31) give

||(I − P_i)v_i^n||_A² ≤ C_1C_3|||Q_s^⊥v_i^n|||_i² ≤ C_1C_3δ̄_i^{2n}|||Q_s^⊥v_i^0|||_i² ≤ C_1C_3((1 + √β)C_1 + (1 + 1/√β)C_2(θ^0)²)δ̄_i^{2n}(θ_i^0)² ≤ (1.03/(1 − λ_i/λ_{s+1}))δ̄_i^{2n}(θ_i^0)².

A similar argument using Lemma 3.6 proves (2.6). This completes the proof of the theorem.


Proof of Corollary 2.1. It follows from (3.6), (4.27) and Lemma 3.8 that for any m > 0,

(θ^m)² ≤ C_3δ̄_s^{2m} Σ_{i=1}^s |||Q_s^⊥v_i^0|||_i².

Let M = ε^{−2} and choose m so large that

Σ_{i=1}^s (θ_i^m)² ≤ (1 − γ)²(λ_1)^4(1 − λ_s/λ_{s+1})^4/(1999M∆²(λ_s)^4).

Following the proof of the theorem, we prove by mathematical induction that for k ≥ m,

(θ^k)² ≤ β(λ_1)²/(M∆²(λ_s)²)

and

|||Q_s^⊥v_i^k|||_i ≤ (δ_i + ε(δ̄_i − δ_i))^{k−m}|||Q_s^⊥v_i^m|||_i.

It then follows that

||(I − P_i)v_i^n||_A² ≤ (1.03/(1 − λ_i/λ_{s+1}))(δ_i + ε(δ̄_i − δ_i))^{2n−2m}δ̄_i^{2m}(θ_i^0)²

and

0 ≤ 1 − λ_i/λ_i^n ≤ (1.03/(1 − λ_i/λ_{s+1}))(δ_i + ε(δ̄_i − δ_i))^{2n−2m}δ̄_i^{2m}(θ_i^0)².

This completes the proof of the corollary.

5. Numerical Experiments.

The results of numerical experiments involving the preconditioned subspace iteration technique are given in this section. We first give results for the case when the eigenvalues are well separated. We also consider an example where a group of eigenvalues are distinct but much closer together. For comparison, we include the results of numerical experiments involving a block preconditioned method analyzed in [23], [24], [22]. All of our experiments involve eigenvalues of multiplicity greater than one.

The model problem which we shall consider is the one dimensional eigenvalue problem

(5.1) u − ∂²u/∂x² = λu on (0, 1), u(0) = u(1) and (∂u/∂x)(0) = (∂u/∂x)(1).

The smallest eigenvalue for (5.1) is one and its eigenspace is equal to the space of constants. The larger eigenvalues are given by λ_j = 1 + 4π²j² for j = 1, 2, . . . . The corresponding eigenspace has dimension two and is spanned by the vectors

v_j^+ = cos(2πjx), v_j^- = sin(2πjx).

Figure 1. Eigenvalue error: The case of well separated eigenvalues. [The plot itself is not reproduced here; it shows the eigenvalue errors for the curves labeled λ_0 through λ_5, decreasing with the iteration number, on a logarithmic vertical scale from 10^{-6} to 10^2.]

We approximate (5.1) by using a spectral approximation. Let n be given and define h = (2n)^{-1} and x_i = ih. V is the space of 2n dimensional vectors with inner product

(v, w) = h Σ_{i=1}^{2n} v_i w_i.

The i-th component of a vector v ∈ V corresponds to the nodal value at x_i. Consider the functions

(5.2) {θ_i} = {1, v_1^+, v_1^-, . . . , v_n^+, v_n^-, v_{n+1}^+}

and let Ṽ be the corresponding linear span. Given a vector v ∈ V there exists a unique function ṽ ∈ Ṽ such that

ṽ(x_i) = v_i.

In fact, the transformation from v ∈ V to the coefficients of ṽ in the basis (5.2) can be rapidly computed by use of the fast Fourier transform. The operator A : V → V is defined by

(Av, w) = ∫_0^1 (ṽw̃ + (∂ṽ/∂x)(∂w̃/∂x)) dx.
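Since A is diagonalized by the discrete Fourier basis, its action can be applied in O(N log N) operations. The sketch below is our own illustration (it scales every mode j by the eigenvalue 1 + 4π²j² and glosses over the special handling of the highest mode):

```python
import numpy as np

def apply_A(v):
    """Apply the spectral operator A of Section 5 via the FFT.

    A is diagonal in the discrete Fourier basis; mode j is scaled by
    the eigenvalue 1 + 4*pi^2*j^2 of the model problem (5.1)."""
    N = v.shape[0]                             # N = 2n grid points
    j = np.abs(np.fft.fftfreq(N, d=1.0 / N))   # integer mode numbers |j|
    vhat = np.fft.fft(v) * (1.0 + 4.0 * np.pi**2 * j**2)
    return np.fft.ifft(vhat).real
```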

The eigenvalues of A coincide with those of (5.1). In addition, the eigenvectors of A are of the form (1, . . . , 1), (v_j^+)_i = v_j^+(x_i) and (v_j^-)_i = v_j^-(x_i).

It is well known that different numerical approximations can often be used as preconditioners for each other. Accordingly, we consider the finite difference approximation

(5.3) (Âv)_i = v_i + h^{-2}(2v_i − v_{i−1} − v_{i+1}).

We interpret i − 1 and i + 1 modulo 2n above. The use of Â^{-1} as a preconditioner for A would be a bit too trivial since A and Â share the same eigenvectors. Instead, we define B by applying an additive overlapping domain decomposition preconditioner for (5.3) [21], [12]. This is a more realistic choice since the resulting preconditioner B does not commute with A.
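For completeness, a minimal assembly of the periodic finite difference operator (5.3) looks as follows (again our own sketch, not the authors' code; the domain decomposition preconditioner built on top of it is omitted):

```python
import numpy as np

def finite_difference_operator(n):
    """Assemble the periodic finite difference analog (5.3) of (5.1)
    on N = 2n grid points with mesh size h = 1/N."""
    N = 2 * n
    h = 1.0 / N
    Ahat = np.zeros((N, N))
    for i in range(N):
        Ahat[i, i] = 1.0 + 2.0 / h**2
        Ahat[i, (i - 1) % N] -= 1.0 / h**2   # periodic wrap-around
        Ahat[i, (i + 1) % N] -= 1.0 / h**2
    return Ahat
```

Its lowest eigenvalues 1 + (4/h²)sin²(πjh) approach 1 + 4π²j² as h → 0, which is what makes it a plausible preconditioner for A.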

[Figure 2. Eigenvector error: The case of well separated eigenvalues. Semilog plot of the eigenvector errors for v0 through v5 (vertical axis from 10^-6 to 10^0) versus iteration number (ticks at 0, 2, 4, 6, 8).]

For all of the numerical results reported, we use eight overlapping subdomains (of size 1/4) and a coarse grid of size H = 1/8. For this problem, the additive preconditioner was scaled so that (1.1) holds with γ = 2/3 (independently of n). Such a value of γ is not unusual in many applications. For example, a well designed multigrid V-cycle gives rise to γ ≈ 1/10. In all of our runs, we took n = 256. The results for other n were qualitatively the same.

Figure 1 illustrates the eigenvalue convergence for the eigenvalues

λ0 = 1

λ1 = 40.48

λ2 = 158.9

λ3 = 356.3

λ4 = 632.7

λ5 = 988.0.


We applied the above algorithm with an eleven dimensional subspace. The initial subspace was chosen to be the space spanned by 11 vectors with entries consisting of random numbers uniformly distributed in the interval [0,1]. No attempt was made to generate a sufficiently accurate starting subspace as required by the theory. Nevertheless, the method converged extremely well. Note the faster rate of convergence for the smaller eigenvalues. Figure 1 reports the actual error. Thus, after eight iterations the error in the fifth eigenvalue was less than 1.5 percent.
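For concreteness, the following NumPy sketch shows one step of a preconditioned subspace iteration of this general type: a Rayleigh-Ritz step on the current subspace, then a correction of each Ritz vector with its own Ritz value as shift. It is a rough illustration under stated assumptions, not a transcription of the algorithm as specified earlier in the paper; the function names and the dense linear algebra are for exposition only.

```python
import numpy as np

def preconditioned_subspace_step(A, B_apply, V):
    """Sketch of one step: Rayleigh-Ritz on span(V), then correct each Ritz
    vector w_i with its own Ritz value mu_i as shift, w_i - B(A w_i - mu_i w_i).
    A: symmetric (N, N) array; B_apply: callable applying the preconditioner B;
    V: (N, s) array of current basis vectors.  All names are illustrative."""
    Q, _ = np.linalg.qr(V)                   # orthonormal basis of the subspace
    mu, Y = np.linalg.eigh(Q.T @ A @ Q)      # Ritz values (ascending) and coefficients
    W = Q @ Y                                # Ritz vectors in the full space
    V_next = np.column_stack(
        [w - B_apply(A @ w - m * w) for m, w in zip(mu, W.T)]
    )
    return V_next, mu
```

Starting from an (N, 11) array of uniform random entries and repeating this step mirrors the experiment above; the Ritz values mu serve as the eigenvalue approximations whose errors Figure 1 reports.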

[Figure 3. Eigenvalue error: The case of clustered eigenvalues. Semilog plot of the eigenvalue errors for λ0 through λ5 (vertical axis from 10^-6 to 10^2) versus iteration number (ticks at 2, 4, 6, 8).]

We did not report the eigenvalue error for the initial subspace. The reason for this is that these errors are very large and depend on the largest eigenvalue of the system. For the reported example (n = 256), the eigenvalue error in the initial subspace was on the order of 10^6. This is to be expected since the initial subspace is chosen at random and is dominated by high frequency components. It is somewhat surprising that the method reduces these errors so rapidly. The eigenvector convergence behavior is illustrated in Figure 2. Here we report the eigenvector error as measured by

$$ e^n_i = \frac{\| v_i - Q^n_i v_i \|}{\| v_i \|}, $$

where $Q^n_i$ is the $(\cdot,\cdot)$-orthogonal projector onto the space $V^n_i$ spanned by the approximate eigenvectors. The first eigenvalue is of multiplicity one and hence


$V^n_0$ is one dimensional. The remaining eigenvalues are of multiplicity two and $V^n_i$ involves the span of subsequent pairs of approximate eigenvectors. For the multiplicity two case, we report the larger of the two values of $e^n_i$. As predicted by the theory, more rapid convergence is observed for the eigenvectors associated with the smaller eigenvalues.
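A small NumPy sketch of this error measure, assuming the approximate eigenvectors spanning $V^n_i$ are supplied as the columns of an array (names hypothetical); QR is used only to obtain an orthonormal basis for the projector.

```python
import numpy as np

def eigenvector_error(v, V_n_i):
    """e^n_i = ||v - Q^n_i v|| / ||v||, with Q^n_i the orthogonal projector onto
    span(columns of V_n_i).  For a multiplicity-two eigenvalue, V_n_i holds the
    corresponding pair of approximate eigenvectors, and the larger of the two
    values of e^n_i (over the two exact eigenvectors) is reported."""
    Q, _ = np.linalg.qr(V_n_i)       # orthonormal basis for the approximating space
    r = v - Q @ (Q.T @ v)            # component of v orthogonal to that space
    return np.linalg.norm(r) / np.linalg.norm(v)
```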

[Figure 4. Eigenvector error: The case of clustered eigenvalues. Semilog plot of the eigenvector errors for v0, v1, the clustered pair (v2, v3), v4, and v5 (vertical axis from 10^-6 to 10^0) versus iteration number (ticks at 0, 2, 4, 6, 8).]

The estimates developed earlier in this manuscript deteriorate in the case of eigenvalues with little separation. The next example suggests that the preconditioned subspace method still works well even when the eigenvalues cluster. For this example, we no longer use (5.1) but still preserve the same eigenspaces. We simply move the second and fourth eigenvalues so that they are significantly closer to the third. Specifically, we apply the preconditioned subspace method to the problem with eigenvalues

λ0 = 1

λ1 = 157.7

λ2 = 158.9

λ3 = 160.9

λ4 = 632.7

λ5 = 988.0.

For this problem, there is a six dimensional invariant subspace whose eigenvalues are separated by less than two percent. We used the same preconditioner as in the first example, but in this case (since A has changed) we have δ ≤ 0.7.
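One way to realize such a spectrum modification, sketched under the assumption that the full spectral decomposition of the (moderate-sized) model operator is available: rebuild the operator from its eigenvectors with the selected eigenvalues replaced. The function and parameter names are hypothetical.

```python
import numpy as np

def move_eigenvalues(A, replacements):
    """Sketch: change selected eigenvalues of the symmetric matrix A while keeping
    every eigenvector fixed, via A' = V diag(lam') V^T.  `replacements` maps a
    position in the ascending list of eigenvalues to its new value; an eigenvalue
    of multiplicity two occupies two consecutive positions."""
    lam, V = np.linalg.eigh(A)       # ascending eigenvalues, eigenvectors in columns
    for k, value in replacements.items():
        lam[k] = value               # move this eigenvalue, eigenvector unchanged
    return V @ np.diag(lam) @ V.T
```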

Figure 3 illustrates that we still get rapid eigenvalue convergence. In fact, there is not a lot of difference in the performance when compared to the well separated case in Figure 1. Figure 4 gives the eigenvector convergence measured the same way as


reported in the first example (Figure 2). Note that the eigenspaces corresponding to the three clustered eigenvalues still converge quite rapidly.

As our final example, we consider the DK block iterative method studied in [23], [24], [22]. We chose this method because it was the only other method available with a fully developed convergence theory. This method uses the shift corresponding to the largest eigenvalue in the subspace for all vectors in the space. Thus,

(5.4) $V^{n+1}_s = \mathrm{span}\{v^{n+1}_1, \ldots, v^{n+1}_s\},$

where

(5.5) $v^{n+1}_i = v^n_i - B(A v^n_i - \lambda^n_s v^n_i), \qquad i = 1, \ldots, s.$

The above scheme is somewhat easier to implement than that studied in this paper. This is because the Ritz eigenvectors $\{v^n_i\}$ in (5.5) can be replaced by any spanning set for the subspace $V^n_s$.

It is shown in [23], [24], [22] that convergence is achieved for the eigenspace corresponding to the largest eigenvalue in the subspace Vs. In contrast to the theory of the present paper, the convergence estimates for the DK method do not depend on the clustering of eigenvalues.

[Figure 5. Eigenvalue error: DK method, clustered eigenvalues. Semilog plot of the eigenvalue errors for λ0, the clustered pair (λ1, λ2), λ3, λ4, and λ5 (vertical axis from 10^-6 to 10^2) versus iteration number (ticks at 6, 12, 18, 24).]


To compare the method of the paper with the DK method, we ran the DK method for the clustered eigenvalue example. The eigenvalue convergence is illustrated in Figure 5. The figure shows that the DK method does exhibit a steady rate of convergence for the largest eigenvalue λ5. However, it should be noted that the accuracy achieved by the DK method for this eigenvalue using 18 iterations was not as good as that achieved by the method of this paper using 8 iterations. Note that the approximations for the lower eigenvalues are not improving. This is what one would expect, as the theory does not suggest any convergence for these eigenvalues.

The initial steps of the DK method show good convergence on the smaller eigenvalues and eigenvectors. The eigenvector convergence is illustrated in Figure 6. Although a uniform convergence rate is achieved for the eigenspace corresponding to the largest eigenvalue, convergence to the remaining eigenspaces stalls. The method of this paper gives a more rapidly convergent approximation to the eigenspace corresponding to the largest eigenvalue λ5 while converging to the smaller eigenspaces at much better rates.

[Figure 6. Eigenvector error: DK method, clustered eigenvalues. Semilog plot of the eigenvector errors for v0 through v5 (vertical axis from 10^-6 to 10^0) versus iteration number (ticks at 0, 6, 12, 18, 24).]

References

1. O. Axelsson, A generalized conjugate gradient, least squares method, Numer. Math. 51 (1987), 209–228.
2. G. Birkhoff and A. Schoenstadt, Eds., Elliptic Problem Solvers II, Academic Press, New York, 1984.
3. P.E. Bjørstad and O.B. Widlund, Solving elliptic problems on regions partitioned into substructures, Elliptic Problem Solvers II (G. Birkhoff and A. Schoenstadt, eds.), Academic Press, New York, 1984, pp. 245–256.
4. J.H. Bramble, Multigrid Methods, Pitman Research Notes in Mathematics Series, Longman Scientific & Technical, London; copublished with John Wiley & Sons, Inc., New York, 1993.
5. J.H. Bramble and J.E. Pasciak, New convergence estimates for multigrid algorithms, Math. Comp. 49 (1987), 311–329.


6. J.H. Bramble and J.E. Pasciak, New estimates for multigrid algorithms including the V-cycle, Math. Comp. 60 (1993), 447–471.
7. J.H. Bramble, J.E. Pasciak and A.H. Schatz, An iterative method for elliptic problems on regions partitioned into substructures, Math. Comp. 46 (1986), 361–369.
8. J.H. Bramble, J.E. Pasciak and A.H. Schatz, The construction of preconditioners for elliptic problems by substructuring, I, Math. Comp. 47 (1986), 103–134.
9. J.H. Bramble, J.E. Pasciak and A.H. Schatz, The construction of preconditioners for elliptic problems by substructuring, II, Math. Comp. 49 (1987), 1–16.
10. J.H. Bramble, J.E. Pasciak and A.H. Schatz, The construction of preconditioners for elliptic problems by substructuring, III, Math. Comp. 51 (1988), 415–430.
11. J.H. Bramble, J.E. Pasciak and A.H. Schatz, The construction of preconditioners for elliptic problems by substructuring, IV, Math. Comp. 53 (1989), 1–24.
12. J.H. Bramble, J.E. Pasciak, J. Wang, and J. Xu, Convergence estimates for product iterative methods with applications to domain decomposition, Math. Comp. 57 (1991), 1–21.
13. J.H. Bramble, J.E. Pasciak and J. Xu, Parallel multilevel preconditioners, Math. Comp. 55 (1990), 1–22.
14. Z. Cao, Generalized Rayleigh quotient matrix and block algorithm for solving large sparse symmetric generalized eigenvalue problems, Numerical Math., J. of China Univ. 5 (1983), 342–348.
15. T.F. Chan, R. Glowinski, J. Periaux, and O.B. Widlund, Eds., Domain Decomposition Methods, SIAM, Philadelphia, PA, 1989.
16. T. Chan, R. Glowinski, J. Periaux and O.B. Widlund, Eds., Third Inter. Symp. on Domain Decomposition Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1990.
17. N. Chetty, M. Weinert, T.S. Rahman, and J.W. Davenport, Vacancies and impurities in aluminum and magnesium, Phys. Rev. B, 6313–6326.
18. E.R. Davidson, The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real symmetric matrices, J. Comp. Phys. 17 (1975), 87–94.
19. E.R. Davidson, Matrix eigenvector methods, Methods in Computational Molecular Physics, Reidel, Boston, 1983, 95–113.
20. A. Stathopoulos, Y. Saad and C.F. Fischer, Robust preconditioning of large, sparse, symmetric eigenvalue problems, Journal of Computational and Applied Mathematics (to appear; also Report TR-93-093, AHPCRC, University of Minnesota).
21. M. Dryja and O. Widlund, An additive variant of the Schwarz alternating method for the case of many subregions, Technical Report 339, Courant Institute of Mathematical Sciences, 1987.
22. E.G. D'yakonov, Optimization in Solving Elliptic Problems, CRC, Boca Raton, 1995.
23. E.G. D'yakonov and A.V. Knyazev, Group iterative method for finding lower-order eigenvalues, Moscow University, Ser. 15, Computational Math. and Cybernetics 2 (1982), 32–40.
24. E.G. D'yakonov and A.V. Knyazev, On an iterative method for finding lower eigenvalues, Russian J. of Numerical Analysis and Math. Modelling 7 (1992), 473–486.
25. E.G. D'yakonov and M.Yu. Orekhov, Minimization of the computational labor in determining the first eigenvalues of differential operators, Math. Notes 27 (1980), 382–391.
26. E.G. D'yakonov and M.Yu. Orekhov, On some iterative methods for eigenvalue problems, Math. Notes 34 (1983), 879–912.
27. R. Glowinski, Y.A. Kuznetzov, G. Meurant, J. Periaux and O.B. Widlund, Eds., Fourth Inter. Symp. on Domain Decomposition Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1991, pp. 263–289.
28. R. Glowinski, G.H. Golub, G.A. Meurant, and J. Periaux, Eds., First Inter. Symp. on Domain Decomposition Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1988, pp. 144–172.
29. S.K. Godunov, V.V. Ogneva and G.P. Prokopov, On the convergence of the modified steepest descent method in application to eigenvalue problems, Am. Math. Soc. Trans. 2 (1976), 105.
30. V.P. Il'in, Numerical Methods of Solving Electrophysical Problems, Nauka, Moscow, 1985 (in Russian).
31. T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, New York, 1976.


32. A.V. Knyazev, Convergence rate estimates for iterative methods for a mesh symmetric eigenvalue problem, Sov. J. Num. Anal. Math. Modeling 2 (1987), 371–396.
33. A.V. Knyazev, Computation of Eigenvalues and Eigenvectors for Mesh Problems: The Algorithms and Error Estimates, Dept. Numer. Math., USSR Acad. Sci., Moscow, USSR, 1986 (in Russian).
34. A.V. Knyazev, New estimates for Ritz vectors, Technical Report 677, CIMS NYU, New York, 1994 (to appear in Math. Comp.).
35. A.V. Knyazev, A preconditioned conjugate gradient method for eigenvalue problems and its implementation in a subspace, International Ser. Numerical Mathematics, v. 96: Eigenwertaufgaben in Natur- und Ingenieurwissenschaften und ihre numerische Behandlung (Oberwolfach, 1990), Birkhäuser, Basel, 1991, 143–154.
36. A.V. Knyazev and A.L. Skorokhodov, The preconditioned gradient-type iterative methods in a subspace for partial generalized symmetric eigenvalue problem, SIAM J. Numerical Analysis 31 (1994), 1226.
37. N. Kosugi, Modification of the Liu-Davidson method for obtaining one or simultaneously several eigensolutions of a large real symmetric matrix, J. Comp. Phys. 55 (1984), 426–436.
38. D.E. Longsine and S.F. McCormick, Simultaneous Rayleigh-quotient minimization methods for Ax = λBx, Linear Algebra and Applications 34 (1980), 195–234.
39. S.F. McCormick and T. Noe, Simultaneous iteration for the matrix eigenvalue problem, Linear Algebra and Applications 16 (1977), 43–56.
40. R.B. Morgan, Davidson's method and preconditioning for generalized eigenvalue problems, J. Comp. Phys. 89 (1990), 241–245.
41. R.B. Morgan and D.S. Scott, Generalizations of Davidson's method for computing eigenvalues of sparse symmetric matrices, SIAM J. Sci. Statist. Comput. 7 (1986), 817–825.
42. R.B. Morgan and D.S. Scott, Preconditioning the Lanczos algorithm for sparse symmetric eigenvalue problems, SIAM J. Sci. Comput. 14 (1993), 585–593.
43. C.W. Murray, S.C. Racine and E.R. Davidson, Improved algorithms for the lowest few eigenvalues and associated eigenvectors of large matrices, J. Comp. Phys. 103 (1992), 382–389.
44. B.N. Parlett, The Symmetric Eigenvalue Problem, Prentice-Hall, Inc., Englewood Cliffs, N.J., 1980.
45. W.V. Petryshyn, On the eigenvalue problem Tu − λSu = 0 with unbounded and non-symmetric operators T and S, Philos. Trans. R. Soc. Math. Phys. Sci. 262 (1968), 413–458.
46. A. Ruhe, SOR-methods for the eigenvalue problem with large sparse matrices, Math. Comput. 28 (1974), 695–710.
47. A. Ruhe, Iterative eigenvalue algorithm based on convergent splittings, J. Comp. Phys. 19 (1975), 110–120.
48. Y. Saad, Numerical Methods for Large Eigenvalue Problems, Halsted Press, New York, 1992.
49. M. Sadkane, Block-Arnoldi and Davidson methods for unsymmetric large eigenvalue problems, Numerische Mathematik 64 (1993), 195–211.
50. B.A. Samokish, The steepest descent method for an eigenvalue problem with semi-bounded operators, Izvestiya Vuzov, Math. 5 (1958), 105–114 (in Russian).
51. D.S. Scott, Solving sparse symmetric generalized eigenvalue problems without factorization, SIAM J. Numer. Analysis 18 (1981), 102–110.
52. P. Vassilevski, Hybrid V-cycle algebraic multilevel preconditioners, preprint, Bulgarian Academy of Sciences, Sofia, Bulgaria, 1987.