
1946 IEEE TRANSACTIONS ON MAGNETICS, VOL. 29, NO. 2, MARCH 1993

Preconditioned Iterative Solution of IML/Moment Method Problems

Francis X. Canning Rockwell International Science Center

1049 Camino Dos Rios, Thousand Oaks, CA 91360

Manuscript received May 30, 1992. This work was partially supported by Office of Naval Research contract N00014-91-C-0031.

Abstract - The Impedance Matrix Localization (IML) method replaces the usual Method of Moments matrix Z by a sparse matrix T. For example, when both Z and T are N x N, T generally has about 50N nonzero elements. Although this allows each iteration of an iterative method to take only 50N operations (rather than N^2), the number of iterations still must be decreased to truly have a "fast" method. For example, often more than N iterations are necessary for standard methods. Several standard and fast iterative methods are compared. These methods all converge to the exact solution of the matrix equation involving T, and the fast ones do so by using an approximate solution to this matrix equation. This approximate solution is derived from a sparse, approximate factorization of T. The approximate factorization is accurate enough to allow a solution for general problems in five iterations.

I. INTRODUCTION

Previous papers [1-7] have shown how a standard moment method problem with a full matrix Z may be replaced by an equivalent matrix problem using a sparse matrix T. This method is called the Impedance Matrix Localization (IML) method since it localizes all physically important effects within the impedance matrix to a small number of clumps of large matrix elements. The IML method then requires the solution of the sparse matrix problem

T I = V (1)

where I and V are vectors of length N. A tolerance z is specified, and T is replaced by a new T , with all elements set equal to zero which are smaller than z times the magnitude of the largest element of T.

If we can find an approximate inverse B ≈ T^-1, then we can consider the preconditioned equation

(BT) I = BV (2)

where I and (BV) are vectors of length N. A variety of iterative methods (e.g., conjugate gradients, biconjugate gradients ....) will converge more quickly on (2) than on

Manuscript received May 30, 1992. This work was partially supported by Office of Naval Research contract N00014-91-C-0031.

(1) provided that B is a good enough approximation (in some sense) to T-1. However, if B is such a good approximate inverse that all eigenvalues of I-BT (where I is the identity matrix) have a magnitude less than one, then we may use iterative refinement which is defined by:

This is the same method that is often used when one has an explicit inverse to T , but wants to improve the accuracy of the answer to remove the effects of round off error.
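As an illustration of (3), the sketch below (a minimal example, not the paper's code) carries out the refinement loop I_(n+1) = I_n + B (V - T I_n). It assumes T is a SciPy sparse matrix and that apply_B is a routine that multiplies a vector by the sparse approximate inverse B.

    import numpy as np

    def iterative_refinement(T, V, apply_B, n_iter=5):
        """Iterative refinement: converges when all eigenvalues of (I - B T) lie inside the unit circle."""
        I_n = apply_B(V)                  # start from the approximate solution B V
        for _ in range(n_iter):
            r = V - T @ I_n               # residual of the original equation (1)
            I_n = I_n + apply_B(r)        # correct with the approximate inverse
        return I_n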

The sparse IML matrix T has a very special block structure, which may be used to find an "approximate inverse" to it. Each block has a group of large numbers within it which corresponds to a "GTD-like" diffraction term. This group of large numbers will be confined to an increasingly small region near the center of the block as more unknowns per wavelength are used. Thus the sparseness of the resulting matrix, and therefore the efficiency of IML, will increase as there is more subwavelength structure.

We know that typically only a few diffraction terms participate in any scattering event (meaning radiation scattering from one part of an object to another). Thus, we choose a mathematical solution method which uses this to generate a sparse approximate direct solution to define B. This method will also be more efficient as the size of the body involved (measured in wavelengths) increases. The matrix will become more sparse since the number of large elements within each block will be constant, while the block size will increase. Furthermore, very few blocks will interact with any given block, further decreasing compute time.

Let L be block lower triangular (its diagonal blocks are identity matrices and it is zero above these blocks), let M similarly be block upper triangular with identity diagonal blocks, and let D be block diagonal. Then we may factor T as

T = (LD) D^-1 (DM) (4)

Nearly all of the elements of the matrices (LD) and (DM) are very small. If one uses a tolerance for approximating small elements of (LD) and (DM) by zero, then the resulting factorization is that of some matrix B^-1, where B^-1 ≈ T. The tolerance used here will be called τ_f and is defined similarly to τ as described above. In performing calculations, having the factored form allows one to multiply a vector by B. Since the factored form is sparse, this can be done in about 50N operations by storing only the sparse factored form. Details of this factorization were given in [7], as were iterative results for the use of this factorization in conjunction with iterative refinement. In this paper, the rates of convergence of a variety of iterative techniques are compared.
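As an aside on how such a factored approximate inverse might be applied, the sketch below (illustrative only, not the paper's code) computes B v = (DM)^-1 D (LD)^-1 v from the thresholded factors. It assumes the factors are stored as SciPy sparse matrices; a generic sparse solve stands in for the block forward and back substitutions that a production code would use to reach the quoted 50N operation count.

    import scipy.sparse.linalg as spla

    def apply_B(LD, D, DM, v):
        """Apply B = (DM)^-1 D (LD)^-1 to the vector v using the sparse factors."""
        y = spla.spsolve(LD.tocsc(), v)      # block forward solve with (LD)
        y = D @ y                            # multiply by the block diagonal D
        return spla.spsolve(DM.tocsc(), y)   # block back solve with (DM)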

II. NUMERICAL METHODS USED

A variety of iterative methods will all be applied to the same scattering problem. That problem is TM (E-pol) scattering by a perfectly conducting ogive with a blade attached at one end. The sides of the ogive have a radius of curvature of 229.5 and the chord length (end to end) of the ogive is 216, while the length of the blade is 54. Thus, the overall length of this body is 270; these measurements, like all others here, are given in wavelengths. The sparseness of the IML matrix and the accuracy of scattering calculations for this body were discussed in a previous paper [5]. All iterative calculations will be for a plane wave incident from the blade end. Single precision on a VAX computer was used, giving nearly seven decimal digits of precision.

The underlying moment method matrix Z will be found using the Electric Field Integral Equation (EFIE), e.g., as in [4,5]. Two methods will be used to calculate the transformed matrix T, and they will be called the Traveling Wave (TW) and Standing Wave (SW) methods. The TW method was defined in each of [3-5], and uses complex exponential basis functions along with a cos^2 taper function. As in [5], the basis functions will be equal to the complex conjugate of the testing functions (in [3,4] the complex conjugate was not taken). The SW method was introduced in [6], and uses standing wave (i.e., a sum of two complex exponentials) basis functions. The SW method will be used here with the taper function of the form sin[(π/2) cos^2(·)], as given in (29) of [6].

The iterative methods of conjugate gradients applied to the normal equations and Orthomin(20) will be used here. When they are applied to (1) we will say there is no preconditioning, and when they are applied to (2) there is left preconditioning by the matrix B. When B is defined using the Incomplete LU (ILU) decomposition of T as

T ≈ LU;  B^-1 = LU (5)

then we will say ILU preconditioning is used. The definition of this L and U is as given in [4,5,8]. In each case where ILU preconditioning is used, prior to its use the rows and columns of T will be permuted as described in [4,5,8]. Another form of preconditioner results from the (LD) D^-1 (DM) factorization given above and in [7]. When the tolerance used is τ_f, the resulting matrix B is defined by

B^-1 = (LD)_τf D^-1 (DM)_τf (6)

where the subscripts indicate that elements of the factors below the tolerance τ_f have been set to zero, as described above.
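For readers who want to experiment, an ILU preconditioner of this general kind can be mocked up with standard sparse tools. The fragment below is a hedged sketch, not the paper's implementation: SciPy's spilu (a drop-tolerance incomplete LU) plays the role of (5), and GMRES stands in for the CG-on-normal-equations and Orthomin(20) solvers used in the paper, which SciPy does not provide.

    import scipy.sparse as sp
    import scipy.sparse.linalg as spla

    def ilu_preconditioned_solve(T, V, drop_tol=1e-4):
        """Solve T I = V with a Krylov method, preconditioned by an incomplete LU of T (sketch)."""
        ilu = spla.spilu(sp.csc_matrix(T), drop_tol=drop_tol)   # sparse incomplete factors L, U
        B = spla.LinearOperator(T.shape, matvec=ilu.solve)      # B x = U^-1 L^-1 x, i.e. B^-1 = LU
        I, info = spla.gmres(T, V, M=B)                         # info == 0 signals convergence
        return I, info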

III. NUMERICAL RESULTS

The first comparison is for Conjugate Gradient (CG) and Orthomin(20) (OR) iteration on the matrix T resulting from the Traveling Wave (TW) and Standing Wave (SW) methods. For this and all succeeding calculations T was found using a tolerance of τ = 10^-4. As Fig. 1 shows, none of these methods converges quickly. As might be expected, the SW form results in faster convergence due to its smaller condition number. Although Orthomin (OR) initially has an advantage in both cases, this small initial advantage does not persist after many iterations.

Fig. 1. Residual reduction as a function of iteration number without using a preconditioner.
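A minimal version of the unpreconditioned runs can be written as CG applied to the normal equations (CGNR). The sketch below is illustrative, not the paper's code, and only covers the CG variant (Orthomin(20) is not available in SciPy); it applies T^H T through a LinearOperator so the normal-equations matrix is never formed, and it assumes T is a SciPy sparse matrix.

    import scipy.sparse.linalg as spla

    def cgnr_solve(T, V, maxiter=1000):
        """CG applied to the normal equations T^H T I = T^H V (sketch of an unpreconditioned run)."""
        Th = T.conj().T                                         # Hermitian transpose of the sparse matrix
        A = spla.LinearOperator(T.shape,
                                matvec=lambda x: Th @ (T @ x))  # apply T^H T without ever forming it
        I, info = spla.cg(A, Th @ V, maxiter=maxiter)           # info == 0 signals convergence
        return I, info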

The second comparison is for CG and OR iteration with ILU preconditioning; the results are shown in Fig. 2. For OR, the permutation of the rows and columns of T apparently destroys a matrix property (all eigenvalues have positive real part) [8] needed by OR. On the whole, CG performs about as well with preconditioning as without for this case. This agrees with the earlier conclusion that although ILU preconditioning is often quite effective, its performance varies greatly.

Fig. 2. Residual reduction as a function of iteration number using ILU preconditioning.

The third comparison is for CG preconditioned with the (LD) D^-1 (DM) factorization of the matrix T resulting from the TW and SW methods. Although CG iteration is guaranteed to reduce the residual of the preconditioned equation (2) at each iteration, this is not necessarily true of the residual of the original equation (1) as defined in [3], which is plotted in Fig. 3. In fact, when the initial value for the current, I_0, is taken to be BV, the residual increases at first, as Fig. 3 shows. The preconditioner was computed using τ_f = 3.3 x 10^-4, and with this tolerance it is clearly superior to ILU, as shown in Fig. 2. For the SW form the matrix T and its ILU factorization both had about 92 nonzero elements per row, while the factorization of Fig. 3 had only about 63 nonzero elements per row. While the TW form had a matrix T which was sparser than this (having 82 per row), its (LD) D^-1 (DM) factorization was much less sparse (having 116 elements per row).

Fig. 3. Residual reduction as a function of iteration number using (LD) D^-1 (DM) preconditioning and I_0 = 0 or I_0 = BV. Only CG iteration is used here.

The fourth comparison is for iterative refinement using the (LD) D^-1 (DM) factorization of the matrix T which results from the TW method. As Fig. 4 shows, the iteration diverges for the values of τ_f considered, though for a slightly smaller value it would be expected to converge.


Fig. 4. Residual reduction as a function of iteration number using (LD) D^-1 (DM) and iterative refinement for the TW formulation.

The last comparison is for iterative refinement using the (LD) D^-1 (DM) factorization of the matrix T which results from the SW formulation. As Fig. 5 shows, reducing τ_f changes the situation from rapid divergence to rapid convergence. In fact, for τ_f = 10^-4 the approximate inverse alone is good enough to give a relative residual of 10^-4, while it requires only 104 nonzero elements per row in the factored form.


Fig. 5. Residual reduction as a function of iteration number using (LD) D^-1 (DM) and iterative refinement for the SW formulation.


IV. CONCLUSION


Several iterative methods have been considered for use with the IML form of moment method problems. Standard iterative methods, and these methods preconditioned by ILU, were found to be too slow and too unreliable, respectively, in their convergence rates. An approximate inverse given by the block factored form (LD) D^-1 (DM) was then considered, and it was found to be effective with conjugate gradients applied to the normal equations. However, additional improvements were achieved by using iterative refinement rather than conjugate gradients. Furthermore, the new standing wave (SW) formulation of [6] achieved a residual reduction of four orders of magnitude in only one iteration. This required a storage of 82 elements per row for the original matrix and 104 elements per row for the "approximate inverse." The original moment method problem considered here required 5103 elements per row to be stored. The reductions in execution time per iteration (see [7]) were as great as the reductions in storage, while the reductions in preparation time were even greater (see [7]).

These conclusions may be restated in a more general context. These iterative methods are generally applicable to the IML form of moment method problems. A sparse approximate inverse was found which in general should allow an iterative solution in only a few (say five or fewer) iterations. This general method depends only on the structure imposed by IML and not on the geometry of the problem. Using sparse matrix techniques, each iteration requires only about 200N operations, in contrast to the N^2 solve time for each right-hand side for direct methods. The storage required is also only 200N, in contrast to N^2.

Francis X. Canning was born in Summit, N.J., in September 1950. After a freshman year at SUNY at Stony Brook, he transferred to Dartmouth College. There he completed both the mathematics and physics majors, receiving the A.B. degree in March of 1971. This dual interest continued into graduate school at the University of Massachusetts at Amherst, where he ultimately studied phase transitions, receiving the M.S. degree in 1976 and the Ph.D. in 1982. He then worked on electromagnetic wave propagation and scattering at the Naval Weapons Center at China Lake until 1987, primarily using high frequency methods. Since 1987 he has worked at the Rockwell Science Center on combining the accuracy of fully numerical methods with the efficiency of high frequency methods.

REFERENCES

[1] F. X. Canning, Reducing Moment Method Storage from Order N^2 to Order N, Electronics Letters, 25, pp 1274-1275, September 14, 1989. Reprinted in R. C. Hansen, Moment Methods in Antennas and Scattering, Artech House, Norwood, MA, 1990.

[2] F. X. Canning, Transformations that Produce a Sparse Moment Method Matrix, Journal of Electromagnetic Waves and Applications, 4, No. 9, pp 893-913, 1990.

[3] F. X. Canning, Sparse matrix approximation to an integral equation of scattering, Communications on Applied Numerical Methods, 7, pp 543-548, October 1990.

[4] F. X. Canning, The Impedance Matrix Localization Method (IML) for Moment-Method Calculations, IEEE Antennas and Propagation Society Magazine, 32, pp 18-30, October 1990.

[5] F. X. Canning, Sparse Approximation to Integral Equations with Oscillatory Kernels, SIAM J. on Scientific and Statistical Computing, 13, pp 71-87, January 1992.

[6] F. X. Canning, Improved Impedance Matrix Localization Method, Submitted to IEEE Transactions on Antennas and Propagation.

[7] F. X. Canning, Solution of IML Form of Moment Method Problems in 5 Iterations, Submitted to IEEE Transactions on Antennas and Propagation.

[8] F. X. Canning, Interaction Matrix Localization (IML) Permits Solution of Larger Scattering Problems, IEEE Transactions on Magnetics, 27, pp 4275-4277, September 1991.