
Iterative Methods for Solving Systems of Equations

By

Javed Iqbal

CIIT/FA08-PMT-006/ISB

PhD Thesis

In

Mathematics

COMSATS Institute of Information Technology

Islamabad-Pakistan

Spring, 2012


COMSATS Institute of Information Technology

Iterative Methods for Solving Systems of Equations

A Thesis Presented to

COMSATS Institute of Information Technology, Islamabad

In partial fulfillment

of the requirements for the degree of

PhD Mathematics

By

Javed Iqbal

CIIT/FA08-PMT-006/ISB

Spring, 2012


Iterative Methods for Solving Systems of Equations

__________________________________________

A postgraduate thesis submitted to the Department of Mathematics in partial fulfillment of the requirements for the award of the degree of Ph.D. in Mathematics

Name: Javed Iqbal

Registration Number: FA08-PMT-006/ISB

Supervisor

Dr. Muhammad Aslam Noor

Professor Department of Mathematics

Islamabad Campus.

COMSATS Institute of Information Technology (CIIT)

Islamabad.

May, 2012


Final Approval __________________________________________________________

This thesis titled

Iterative Methods for Solving Systems of Equations

By

Javed Iqbal

FA08-PMT-006/ISB

has been approved

For the COMSATS Institute of Information Technology, Islamabad

External Examiner 1: ______________________________________

Prof. Dr. Shahid Siddiqi

Chairman, Department of Mathematics

Punjab University, Lahore

External Examiner 2: ______________________________________

Prof. Dr. Siraj-ul-Islam

Department of Basic Sciences,

KPK University of Engineering and Technology,

Peshawar

Supervisor: _______________________________________

Prof. Dr. Muhammad Aslam Noor

Department of Mathematics, Islamabad

HoD: ________________________________________

Dr. Moiz-Ud-Din Khan

Department of Mathematics, Islamabad

Dean, Faculty of Sciences: _________________________________________

Prof. Dr. Arshad Saleem Bhatti


Declaration

I, Javed Iqbal, bearing registration number FA08-PMT-006/ISB, hereby declare that I have produced the work presented in this thesis during the scheduled period of study. I also declare that I have not taken any material from any source except where due reference is made, and that the amount of plagiarism is within the acceptable range. If a violation of HEC rules on research has occurred in this thesis, I shall be liable to punishable action under the plagiarism rules of the HEC.

Date: ________________ Signature of the student:

___________________

Javed Iqbal

FA08-PMT-006/ISB


Certificate

It is certified that Javed Iqbal, registration number FA08-PMT-006/ISB, has carried out all the work related to this thesis under my supervision at the Department of Mathematics, COMSATS Institute of Information Technology, Islamabad, and that the work fulfills the requirements for the award of a PhD degree.

Date: _________________

Supervisor:

Prof. Dr. Muhammad Aslam Noor

Professor of Mathematics

Head of Department:

___________________________

Dr. Moiz-Ud-Din Khan, Associate Professor

Department of Mathematics


DEDICATED

To

My Dear Parents

whose prayers have always been a source of

great inspiration to me


ACKNOWLEDGMENTS

First of all, I express my thankfulness to ALMIGHTY ALLAH (above all and first of all) for enabling me to complete this work.

Words of gratitude and appreciation do not always convey the depth of one's feelings, yet I wish to record my thanks to my most respected supervisor, Professor Dr. Muhammad Aslam Noor, for his wisdom and kindness during this project. I further wish to acknowledge Prof. Dr. Khalida Inayat Noor for the invaluable, intellectual suggestions and constructive criticism that she rendered to me during the course of this research work.

I have strong feelings of appreciation for the Higher Education Commission of Pakistan for financial support and other facilities. I am also indebted to the Honorable Rector of COMSATS Institute of Information Technology, Islamabad, Dr. S. M. Junaid Zaidi. I am grateful to the Head of the Department of Mathematics for providing me all necessary facilities and a good research environment.

I wish to express heartfelt thanks and deep gratitude to my brother Asif Iqbal for his sincere encouragement and financial support. I would like to thank my mother for her prayers, which helped me during my studies. I am also thankful to the rest of my family for their support and help.

This acknowledgment would remain incomplete without thanks to my friends and seniors. I am grateful to my seniors, especially Mohammad Arif, for their sincere encouragement, and to all my friends for their support during my stay as a student.

Javed Iqbal

CIIT/FA08-PMT-006/ISB


ABSTRACT

Iterative Methods for Solving Systems of Equations

It is well known that a wide class of problems arising in the pure and applied sciences can be studied in the unified framework of the system of absolute value equations of the type

$$Ax - |x| = b, \qquad A \in R^{n\times n}, \quad b \in R^n.$$

Here $|x|$ denotes the vector in $R^n$ whose components are the absolute values of the components of $x$. In this thesis, several iterative methods, including a minimization technique, a residual method and the homotopy perturbation method, are suggested and analyzed. Convergence analysis of these new iterative methods is carried out under suitable conditions. Several special cases are discussed. Numerical examples are given to illustrate the implementation and efficiency of these methods. Comparison with other methods shows that these new methods perform better.

A new class of complementarity problems, known as the absolute complementarity problem, is introduced and investigated. The existence of a unique solution of the absolute complementarity problem is proved. A generalized AOR (GAOR) method is proposed and its convergence is studied. It is shown that the absolute complementarity problem includes the system of absolute value equations and related optimization problems as special cases.


TABLE OF CONTENTS

1. Introduction
2. Preliminaries
   2.1 Linear Systems
      2.1.1 Stationary Iterative Methods
      2.1.2 Nonstationary Iterative Methods
   2.2 Linear Complementarity Problems
   2.3 System of Absolute Value Equations
      2.3.1 Existence of Solution
3. One-Step Gauss-Seidel Method
   3.1 Iterative Method
   3.2 Convergence Analysis
   3.3 Numerical Results
4. Two-Step Gauss-Seidel Method
   4.1 Two-Step Iterative Method
   4.2 Convergence Analysis
   4.3 Numerical Results
5. Residual Iterative Method
   5.1 Residual Iterative Method
   5.2 Numerical Results
6. Quasi-Newton Method
   6.1 Quasi-Newton Method
   6.2 Numerical Results
7. Homotopy Perturbation Method
   7.1 Homotopy Perturbation Method
   7.2 Convergence Analysis
   7.3 Iterative Methods
   7.4 Numerical Results
8. Absolute Value Complementarity Problems
   8.1 Absolute Value Complementarity Problems
   8.2 Generalized AOR Method
   8.3 Numerical Results
9. Conclusion
10. References

LIST OF FIGURES

___________________________________________

Fig 2.1 Comparison among basic iterative methods

Fig 2.2 Efficiency of the conjugate gradient method

Fig 3.1 Comparison between Algorithm 3.1 and Algorithm 3.2

Fig 4.1 Comparison graph

Fig 5.1 Efficiency of the residual iterative method

Fig 6.1 Comparison of the quasi-Newton method with other methods

Fig 7.1 Efficiency of the homotopy SOR method

LIST OF TABLES

___________________________________________

Tables 3.1-3.5 (Chapter 3), Tables 4.1-4.5 (Chapter 4), Tables 5.1-5.5 (Chapter 5), Tables 6.1-6.4 (Chapter 6), Tables 7.1-7.3 (Chapter 7), Tables 8.1-8.3 (Chapter 8).

LIST OF ABBREVIATIONS

AOR Accelerated over-relaxation

BFGS Broyden-Fletcher-Goldfarb-Shanno

BVP Boundary value problem

CM Concave minimization

GAOR Generalized accelerated over-relaxation

GCRES Generalized conjugate residual

GMRES Generalized minimal residual

GQC Globally and quadratically convergent

HPM Homotopy perturbation method

PDBP Primal-dual bilinear programming

PSO Particle swarm optimization

SNM Smoothing Newton method

SOR Successive over-relaxation

SSOR Symmetric successive over-relaxation

TOC Time of computation (seconds)

Error 2-norm of the residual of the approximate solution

$A^T$ Transpose of the matrix $A$

$R^n$ Finite-dimensional Euclidean space

$\|\cdot\|$ Euclidean norm


Chapter 1

Introduction


Many practical problems can be reduced to a system of linear equations $Ax = b$, where $A$ is a known matrix, $b$ is a known vector and $x$ is the vector of unknowns. Equations of this type play a prominent role in finance, industry, economics, engineering, physics, chemistry, computer science and other fields of the pure and applied sciences. Systems of nonlinear equations may also be solved by way of systems of linear equations.

The Babylonians [8] solved systems of linear equations involving two unknowns about 4000 years ago. In 200 BC the Chinese [45] solved systems of linear equations of order $3\times 3$ by working with the coefficients of the system; this is the first known example of a matrix representation of a linear system. Cramer [8] used determinants to solve systems of linear equations; this method is called Cramer's rule. Gauss solved systems of linear equations using matrix-like arrangements, a procedure now known as the Gauss elimination method. Sylvester [8] introduced the term "matrix" for such arrangements.

Systems of linear equations can be solved using both direct and iterative methods. The best known direct method is Gauss elimination, see [26, 94]. Turing [96] introduced the LU decomposition of a matrix for solving systems of linear equations. Cholesky [13] decomposed the matrix $A$ into the product of a lower triangular matrix and its transpose. The Cholesky method is more efficient than the LU decomposition method for solving symmetric positive definite linear systems.

Direct methods produce new matrices at each step and are therefore sensitive to rounding errors. Direct methods are also not efficient in terms of computer storage, so they are prohibitively expensive for large systems. Iterative methods are very efficient when applied to the large and sparse systems of equations that arise in practical problems.

An iterative method starts with an initial guess and generates a sequence of approximations that improves the solution of a problem at each step. We divide iterative methods into two categories: stationary and nonstationary iterative methods. Stationary iterative methods are simpler but not as effective as nonstationary iterative methods. Nonstationary iterative methods are a relatively recent development; they generate sequences involving parameters that change at each iteration, and they do not have an iteration matrix.

The Jacobi method, Gauss-Seidel method and successive over relaxation (SOR) method


are examples of stationary iterative methods. Young [102] and Frankel [24] simultaneously suggested the SOR method for solving systems of linear equations. Effective preconditioners can increase the rate of convergence of stationary iterative methods by reducing the condition number of the problem. It is also possible that in some cases the original method diverges while the preconditioned method converges rapidly to the solution. Hadjidimos [27] proposed the accelerated over-relaxation (AOR) method to improve the convergence of the relaxation methods.

The conjugate gradient method, the GCRES method and the GMRES method are examples of nonstationary iterative methods. The conjugate gradient method was introduced by Hestenes and Stiefel [34]. If the matrix $A$ is symmetric and positive definite, then one can show that the minimization of the function $f(x) = \langle Ax, x\rangle - 2\langle b, x\rangle$ over the whole space is characterized by the system of linear equations. Axelsson [4] and Jea et al. [37] modified the conjugate gradient method for solving non-symmetric linear systems.

Paige and Saunders [72] suggested minimal residual methods for large and sparse indefinite problems. Saad and Schultz [87] presented the generalized minimal residual algorithm, which minimizes the residual norm more efficiently than the method of Paige and Saunders [72]. For recent developments see [13, 85].

The homotopy perturbation method is a popular technique for deriving iterative methods. Keramati [43] and Yusufoglu [104] used the homotopy perturbation method for solving linear systems. Liu [46] used the homotopy perturbation method to propose different iterative methods for solving linear systems.

In recent years much attention has been given to the study of the generalized system of absolute value equations of the form $Ax + B|x| = b$, where $A, B \in R^{n\times n}$ and $b \in R^n$. Here $|x|$ denotes the componentwise absolute value of $x \in R^n$. If $B = -I$, where $I$ is the identity matrix, then the generalized system of absolute value equations reduces to $Ax - |x| = b$. If $B = 0$ (the null matrix), then the generalized system of absolute value equations is equivalent to the system of linear equations $Ax = b$.

The generalized system of absolute value equations $Ax + B|x| = b$ was introduced by Rohn [79], who used a theorem of the alternatives to solve it. There is no direct method for solving systems of absolute value equations because these systems are nonlinear. Rohn [79] proved the equivalence between systems of absolute value equations and linear complementarity problems.

The problem of checking whether a system of absolute value equations has a unique solution is NP-hard [48, 76]. If the system of absolute value equations is solvable, then it has either a unique solution or multiple solutions (exponentially many); the exact number of solutions of a system of absolute value equations is not known in general. The importance of systems of absolute value equations arises from the fact that several mathematical problems, including linear programming and bimatrix games, can be formulated as systems of absolute value equations.

Systems of absolute value equations can be solved iteratively. Several iterative methods have been proposed for solving systems of absolute value equations, for example, the generalized Newton method [50], minimization-based iterative methods [53, 66, 67, 68] and methods based on linear complementarity problems [53, 76].

Complementarity problems were introduced by Lemke [41], who showed that the two-person game problem can be studied via linear complementarity problems. Lemke [41] and Cottle and Dantzig [17] developed direct methods for solving linear complementarity problems. The direct methods are not useful for solving large problems, and therefore several iterative methods have been proposed for solving complementarity problems [2, 39, 44, 58, 61].

The following conditions play an important role in the solvability of the system of absolute value equations:

(i) The system of absolute value equations has a unique solution when $\|A^{-1}\| < 1$.

(ii) The system of absolute value equations has $2^n$ distinct solutions, each of which has a different sign pattern with no zero entries, when $b < 0$ and $\|A\|_\infty < \sigma/2$, where $\sigma = \min_i |b_i| / \max_i |b_i|$.

For more details see [53].

In this thesis, several iterative methods, including minimization techniques, a residual method and the homotopy perturbation method, are suggested and analyzed. The absolute value complementarity problem is introduced and investigated. Convergence analysis of these new methods is carried out under suitable conditions. Comparison with other methods shows that these new methods perform better.

In Chapter 2, we discuss those iterative methods for solving systems of linear and absolute value equations that we use in the upcoming chapters. We examine some related problems concerning the solutions of systems of absolute value equations. Some numerical examples are considered for comparison.

In Chapter 3, we suggest an iterative method for solving systems of absolute value equations based on minimization techniques. We suggest two algorithms with different search directions. The convergence criteria of this method are proved for symmetric and positive definite absolute value systems. A numerical comparison with different iterative methods is given. The contents of Chapter 3 have been accepted for publication in Optimization Letters, (2011), DOI: 10.1007/s11590-011-0332-0.

In Chapter 4, we modify the iterative method to use double search directions for solving systems of absolute value equations. This method is based on minimization techniques. We prove that the modified method is better than the previous iterative method both theoretically and numerically. Some numerical examples are considered. This work is published in the International Journal of the Physical Sciences, (2011), 6(7), 1793-1797.

In Chapter 5, we propose a residual method for solving systems of absolute value equations based on projection techniques. In this method, non-symmetric positive definite systems are considered. We minimize the norm of the residual using the Petrov-Galerkin method over a Krylov subspace. For different choices of search direction, the method converges in a different number of iterations for the same problem. The convergence of the proposed method is discussed under certain conditions. This work is published in Abstract and Applied Analysis, (2012), DOI: 10.1155/2012/406232.

In Chapter 6, we deal with the generalized system of absolute value equations $Ax + B|x| = b$. We suggest a quasi-Newton method for computing multiple solutions of the generalized system of absolute value equations and compare our method with other iterative methods. The quasi-Newton method is basically a minimization method with a single search direction. It is also applicable to the special case $B = -I$. We consider numerical examples of both types of systems of absolute value equations.


In Chapter 7, we relax the positive definiteness requirement and suggest the homotopy perturbation method for solving systems of absolute value equations. We use the homotopy perturbation method to derive iterative methods for solving the system of absolute value equations and discuss their convergence. We consider several examples to illustrate the implementation and efficiency of the proposed method.

In Chapter 8, we introduce a class of complementarity problems known as absolute value complementarity problems. We propose and analyze a generalized AOR (GAOR) algorithm for the absolute value complementarity problem. The convergence criteria of the GAOR method are discussed. Using the GAOR method, we can also solve systems of absolute value equations. The contents of this chapter have been accepted for publication in the Journal of Applied Mathematics, (2012), DOI: 10.1155/2012/743861.


Chapter 2

Preliminaries


In this chapter, we give a short introduction to iterative methods for solving systems of linear equations and systems of absolute value equations. The convergence of existing iterative methods is examined for both types of systems. We also discuss some problems related to the solution of systems of absolute value equations.

2.1 Linear Systems

One of the problems encountered most frequently in scientific computation is the solution of the system of linear equations
$$Ax = b, \tag{2.1}$$
where $A \in R^{n\times n}$, $x$ is the vector of unknowns and $b$ is a constant vector. Systems of linear equations play an important role in transformation theory, finance, industry, economics, engineering, physics, chemistry, computer science and other fields of the pure and applied sciences. There are several methods for solving systems of linear equations. These methods can be classified into two categories: direct methods and indirect (iterative) methods. The best known direct method is Gauss elimination.

The system (2.1) is consistent when $b \in R^n$ is in the range of the matrix $A$; otherwise the system (2.1) is inconsistent. The system (2.1) is always consistent and has a unique solution when the matrix $A$ is nonsingular. For a singular matrix $A$, the system (2.1) has either infinitely many solutions or no solution. When the system (2.1) is consistent it can be solved using direct or iterative methods. We have two types of iterative methods for solving (2.1):

2.1.1 stationary iterative methods,

2.1.2 nonstationary iterative methods.

2.1.1 Stationary Iterative Methods

A stationary iterative method for solving a system of linear equations can be expressed as
$$x^k = Tx^{k-1} + d, \qquad k = 1, 2, \ldots, \tag{2.2}$$
where the iteration matrix $T$ and the constant vector $d$ do not depend on the iteration count $k$. Methods of this form are called stationary iterative methods. In these methods we perform the same process at every iteration, namely multiplying the iterate by the operator $T$ and adding the constant vector $d$. Different choices of the iteration matrix $T$ give different iterative methods. These methods are simple to derive and easy to implement. Different types of preconditioners have been suggested to increase their rate of convergence. To prove the convergence of stationary iterative methods we need the following definitions.

Definition 2.1 [13]. The spectral radius $\rho(A)$ of a matrix $A$ is defined as
$$\rho(A) = \max |\lambda|,$$
where $\lambda$ ranges over the eigenvalues of $A$ and $|\cdot|$ denotes the absolute value.

Definition 2.2 [13]. The inner product, denoted by $\langle \cdot, \cdot\rangle$, has the following properties:

i. $\langle x, x\rangle \ge 0$ for all $x \in R^n$, and $\langle x, x\rangle = 0$ if and only if $x = 0$;

ii. $\langle x + y, z\rangle = \langle x, z\rangle + \langle y, z\rangle$ for all $x, y, z \in R^n$;

iii. $\langle \alpha x, y\rangle = \alpha\langle x, y\rangle$ for all $x, y \in R^n$ and $\alpha \in R$;

iv. $\langle x, y\rangle = \langle y, x\rangle$ for all $x, y \in R^n$.

Definition 2.3. A matrix $A \in R^{n\times n}$ is positive definite if and only if $\langle Ax, x\rangle > 0$ for all $0 \neq x \in R^n$.

Definition 2.4. If $A \in R^{n\times n}$ is positive definite, then the following conditions hold:

i. there exists a constant $\beta > 0$ such that $\langle Ax, x\rangle \ge \beta\|x\|^2$ for all $x \in R^n$;

ii. there exists a constant $\gamma > 0$ such that $\|Ax\| \le \gamma\|x\|$ for all $x \in R^n$.

The following result is needed for the convergence of stationary iterative methods.

Theorem 2.1 [13]. For any initial guess $x^0 \in R^n$, the sequence $\{x^k\}$ defined by
$$x^k = Tx^{k-1} + d, \qquad k = 1, 2, \ldots,$$
converges to the unique solution of $x = Tx + d$ if and only if $\rho(T) < 1$.

Proof. Suppose first that $\rho(T) < 1$. From (2.2), we have
$$x^k = Tx^{k-1} + d = T(Tx^{k-2} + d) + d = T^2x^{k-2} + (T + I)d = \cdots = T^kx^0 + (T^{k-1} + \cdots + T + I)d.$$
Since $\rho(T) < 1$, the matrix $T$ is convergent and $\lim_{k\to\infty} T^kx^0 = 0$. Thus we have
$$\lim_{k\to\infty} x^k = \lim_{k\to\infty} T^kx^0 + \Big(\sum_{i=0}^{\infty} T^i\Big)d = (I - T)^{-1}d. \tag{2.3}$$
From (2.3), the sequence $\{x^k\}$ converges to the vector $x = (I - T)^{-1}d$, that is, $x = Tx + d$.

Conversely, suppose that the sequence defined by (2.2) converges to the unique solution of $x = Tx + d$. We have to show that $\rho(T) < 1$, or equivalently that $\lim_{k\to\infty} T^ky = 0$ for any $y \in R^n$. Let $x^0 = x - y$ and $x^k = Tx^{k-1} + d$ for $k \ge 1$. Then $\{x^k\}$ converges to $x$ and
$$x - x^k = (Tx + d) - (Tx^{k-1} + d) = T(x - x^{k-1}) = T^2(x - x^{k-2}) = \cdots = T^k(x - x^0) = T^ky.$$
Now $\lim_{k\to\infty} T^ky = \lim_{k\to\infty}(x - x^k) = 0$. Hence $T$ is a convergent matrix for arbitrary $y \in R^n$, which implies that $\rho(T) < 1$.

We decompose the matrix $A$ as
$$A = D - L - U, \tag{2.4}$$
where $D$ is a nonsingular diagonal matrix and $L$ and $U$ are strictly lower and strictly upper triangular matrices, respectively. The Jacobi method and the Gauss-Seidel method can then be written as
$$x^k = Tx^{k-1} + d, \qquad k = 1, 2, \ldots,$$
where
$$T_J = D^{-1}(L + U) \quad \text{(Jacobi method)}, \qquad T_G = (D - L)^{-1}U \quad \text{(Gauss-Seidel method)}.$$
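The two splittings translate directly into code. The following NumPy sketch is only illustrative (it is not part of the thesis; the function name and interface are assumptions):

```python
import numpy as np

def stationary_solve(A, b, method="jacobi", tol=1e-10, maxit=500):
    """Illustrative sketch of the Jacobi and Gauss-Seidel iterations
    built from the splitting A = D - L - U of Eq. (2.4)."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)                  # strictly lower triangular part
    U = -np.triu(A, 1)                   # strictly upper triangular part
    if method == "jacobi":
        M, N = D, L + U                  # T_J = D^{-1}(L + U)
    else:
        M, N = D - L, U                  # T_G = (D - L)^{-1} U
    T = np.linalg.solve(M, N)
    d = np.linalg.solve(M, b)
    x = np.zeros_like(b, dtype=float)
    for k in range(maxit):
        x = T @ x + d                    # x^k = T x^{k-1} + d, Eq. (2.2)
        if np.linalg.norm(b - A @ x) < tol:
            return x, k + 1
    return x, maxit
```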

The Jacobi method approximates the solution of (2.1) using only the previous iterate. The Gauss-Seidel method uses the previous values together with the newly available values to solve (2.1), and it converges to the solution of (2.1) faster than the Jacobi method. The relation between the spectral radii of the Jacobi method and the Gauss-Seidel method is given by Stein and Rosenberg as follows:

Theorem 2.2 [92]. If $a_{ii} > 0$ and $a_{ij} \le 0$ for each $i \neq j$, $i, j = 1, 2, \ldots, n$, then one and only one of the following holds:

(i) $0 \le \rho(T_G) < \rho(T_J) < 1$;

(ii) $1 < \rho(T_J) < \rho(T_G)$;

(iii) $\rho(T_J) = \rho(T_G) = 0$;

(iv) $\rho(T_J) = \rho(T_G) = 1$;

where $\rho(T_J)$ and $\rho(T_G)$ denote the spectral radii of the Jacobi method and the Gauss-Seidel method respectively. We see from Theorem 2.2 that when one method converges the other also converges. There are, however, examples for which the Jacobi method converges and the Gauss-Seidel method diverges, and vice versa; see [13].

The successive over-relaxation (SOR) method is a more efficient stationary iterative method. This modification of the Gauss-Seidel method can be expressed as
$$x^k = (D - \omega L)^{-1}\big((1-\omega)D + \omega U\big)x^{k-1} + \omega(D - \omega L)^{-1}b. \tag{2.5}$$
For $0 < \omega < 1$, the method (2.5) is called the under-relaxation method. For $0 < \omega < 1$, it is possible that the sequence defined by (2.5) converges while the Gauss-Seidel method does not; for example, there are systems for which the Gauss-Seidel method diverges but the under-relaxation method converges, with spectral radius $\rho(S) = 0.92 < 1$ for $\omega = 0.3$. If $\omega > 1$, then (2.5) is called the over-relaxation method, which accelerates the rate of convergence for systems that are convergent under the Gauss-Seidel method [13, 84]. These two relaxation-type methods are together called successive over-relaxation methods. Hadjidimos [27] proposed the AOR method to accelerate the convergence of the relaxation methods. The iteration matrix of the AOR method is given by
$$T_{r,\omega} = (D - rL)^{-1}\big((1-\omega)D + (\omega - r)L + \omega U\big),$$
where $r, \omega \in R$ and $\omega \neq 0$.
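For illustration, the relaxation step (2.5) admits the same kind of sketch (hypothetical code, not from the thesis; note that choosing $r = \omega$ in the AOR iteration matrix recovers SOR):

```python
import numpy as np

def sor(A, b, omega, tol=1e-10, maxit=500):
    """Illustrative sketch of the SOR iteration (2.5):
    x^k = (D - wL)^{-1}((1-w)D + wU) x^{k-1} + w(D - wL)^{-1} b."""
    D = np.diag(np.diag(A))
    L = -np.tril(A, -1)
    U = -np.triu(A, 1)
    M = D - omega * L
    T = np.linalg.solve(M, (1.0 - omega) * D + omega * U)
    d = omega * np.linalg.solve(M, b)
    x = np.zeros_like(b, dtype=float)
    for k in range(maxit):
        x = T @ x + d                    # one relaxation sweep
        if np.linalg.norm(b - A @ x) < tol:
            return x, k + 1              # 0 < omega < 1: under-relaxation
    return x, maxit                      # omega > 1: over-relaxation
```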

To compare the basic iterative methods, we consider the following example.

Example 2.1. Consider the system of linear equations
$$\begin{aligned} 5x_1 + 4x_2 &= 31,\\ 4x_1 + 5x_2 - x_3 &= 37,\\ -x_2 + 5x_3 &= -29. \end{aligned}$$
The above system has the solution $x = (3, 4, -5)^T$. Let the initial guess be $x^0 = (0, 0, 0)^T$. We denote the Gauss-Seidel method and the Jacobi method by GSM and JM respectively. The iterates are accurate to ten decimal places. The comparison is given in Figure 2.1.

Figure 2.1. Comparison among basic iterative methods (2-norm of the residual versus the number of iterations for JM, GSM and SORM).

In Figure 2.1, we observe that the Jacobi method, the Gauss-Seidel method and the SOR method reach the approximate solution of (2.1) in 174, 82 and 38 iterations respectively. Thus we conclude that, for this example, the SOR method is almost twice as fast as the Gauss-Seidel method and four times faster than the Jacobi method.
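Theorem 2.1 can be checked numerically on this example. The sketch below (illustrative code, not part of the thesis) forms the two iteration matrices and confirms that their spectral radii lie below 1, which is why both methods converge:

```python
import numpy as np

A = np.array([[5., 4., 0.], [4., 5., -1.], [0., -1., 5.]])
b = np.array([31., 37., -29.])

D = np.diag(np.diag(A))
L = -np.tril(A, -1)
U = -np.triu(A, 1)

T_J = np.linalg.solve(D, L + U)      # Jacobi iteration matrix
T_G = np.linalg.solve(D - L, U)      # Gauss-Seidel iteration matrix

rho = lambda T: max(abs(np.linalg.eigvals(T)))
print(rho(T_J))                      # about 0.825 < 1, so JM converges
print(rho(T_G))                      # about 0.680, smaller, so GSM is faster
print(np.linalg.solve(A, b))         # exact solution (3, 4, -5)
```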

In the next section, we discuss the nonstationary iterative methods, which are a relatively recent approach compared to the stationary iterative methods.

2.1.2 Nonstationary Iterative Methods

Nonstationary iterative methods involve information that changes at every iteration. These methods are harder to understand but more effective than the classical stationary iterative methods. Generally, nonstationary iterative methods are based on the idea of orthogonal vectors and subspace projections. Examples of nonstationary iterative methods are the conjugate gradient method, the biconjugate gradient method, quasi-Newton methods and Chebyshev iteration.

The nonstationary iterative methods, or Krylov subspace methods, are a powerful tool for solving linear systems. These methods are named after the Russian mathematician and engineer Krylov [85, 86]. The significance of Krylov subspace methods arises from the fact that they require low memory and give good approximations to the solution. Krylov used the sequence $u, Au, A^2u, \ldots$ to determine the minimal polynomial of the system matrix $A$ associated with a vector $u \in R^n$. Later, around 1970, the Krylov subspace
$$\kappa_m(A, u) = \mathrm{span}\{u, Au, A^2u, \ldots, A^{m-1}u\} \tag{2.6}$$
was introduced and several Krylov subspace methods were proposed to solve large and sparse linear systems [86]. Different choices of the vector $u$ give different Krylov subspaces; the constant vector $b$ and the residual vector $r = b - Ax$ are commonly used. The conjugate gradient method is a well known Krylov subspace method.

The conjugate gradient method was proposed by Hestenes and Stiefel [34]. They minimized the functional
$$f(x) = \langle Ax, x\rangle - 2\langle x, b\rangle \tag{2.7}$$
in the directions $u^k \in R^n$ using the sequence
$$x^k = x^{k-1} + \alpha_k u^k, \qquad k = 1, 2, \ldots,$$
where $A \in R^{n\times n}$ is a symmetric and positive definite matrix, $x^k, b \in R^n$ and $\alpha_k \in R$. The conjugate gradient method chooses the search directions from the set $\{u^1, u^2, \ldots\}$, whose elements are $A$-orthogonal to each other and mutually orthogonal to the residual vectors $r^k$. The conjugate gradient method uses a single search direction to minimize the functional (2.7). The conjugate gradient method can be stated as:


Algorithm 2.1

Choose an initial guess $x^0 \in R^n$;
$r^0 = b - Ax^0$, $u^1 = r^0$;
For $k = 1, 2, \ldots, n$
&nbsp;&nbsp;$\alpha_k = \langle r^{k-1}, r^{k-1}\rangle / \langle u^k, Au^k\rangle$;
&nbsp;&nbsp;$x^k = x^{k-1} + \alpha_k u^k$;
&nbsp;&nbsp;$r^k = r^{k-1} - \alpha_k Au^k$;
&nbsp;&nbsp;$t_k = \langle r^k, r^k\rangle / \langle r^{k-1}, r^{k-1}\rangle$;
&nbsp;&nbsp;$u^{k+1} = r^k + t_k u^k$;
&nbsp;&nbsp;stopping criteria;
End for $k$.
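A compact transcription of Algorithm 2.1 might look as follows; this is a sketch under the assumption that $A$ is symmetric positive definite, and the names are illustrative:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-12):
    """Sketch of Algorithm 2.1 (Hestenes-Stiefel conjugate gradient)."""
    n = len(b)
    x = np.zeros(n)
    r = b - A @ x                    # initial residual r^0
    u = r.copy()                     # first search direction u^1
    for k in range(n):
        rr = r @ r
        alpha = rr / (u @ (A @ u))   # alpha_k = <r, r> / <u, Au>
        x = x + alpha * u
        r = r - alpha * (A @ u)
        if np.linalg.norm(r) < tol:  # stopping criterion
            break
        t = (r @ r) / rr             # t_k = <r^k, r^k> / <r^{k-1}, r^{k-1}>
        u = r + t * u                # next A-conjugate direction u^{k+1}
    return x
```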

The conjugate gradient method converges to the solution of (2.1) in $m \le n$ steps. The preconditioned conjugate gradient method obtains the approximate solution in about $n$ iterations, where $n$ is the problem size. Several generalizations [4, 37] of the conjugate gradient method have been proposed for non-symmetric matrices; the conjugate gradient method itself is applicable only when the system matrix is symmetric positive definite.

Another class of effective iterative methods is the quasi-Newton methods. Using these methods, one can solve systems of linear and nonlinear equations. The first quasi-Newton method was suggested by Davidon [20]. Different updating formulas for the Hessian matrix have been proposed; a well known updating formula was developed by Fletcher and Powell [24].

Paige and Saunders [71] proposed a Lanczos-based iterative method for solving (2.1). They minimized the norm of the residual over a Krylov subspace; this method is called the minimal residual method, and it can be used to solve symmetric indefinite systems. In this method one minimizes the functional
$$f(x) = \frac{1}{2}\|b - Ax\|^2,$$
where $A \in R^{n\times n}$ is a symmetric indefinite system matrix. It is possible to improve the minimal residual method by choosing a suitable search direction. The minimal residual method can be stated as:

Algorithm 2.2

Choose an initial guess $x^0 \in R^n$;
For $k = 0, 1, 2, \ldots$
&nbsp;&nbsp;$r^k = b - Ax^k$;
&nbsp;&nbsp;$t_k = \langle Ar^k, r^k\rangle / \langle Ar^k, Ar^k\rangle$;
&nbsp;&nbsp;$x^{k+1} = x^k + t_k r^k$;
&nbsp;&nbsp;stopping criteria;
End for $k$.
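Algorithm 2.2 admits an equally short sketch (illustrative code; it assumes $\langle Ar, r\rangle$ remains positive along the iteration, as is the case for a positive definite $A$):

```python
import numpy as np

def minimal_residual(A, b, tol=1e-12, maxit=1000):
    """Sketch of Algorithm 2.2: each step minimizes ||b - Ax||
    along the current residual direction r^k."""
    x = np.zeros_like(b, dtype=float)
    for k in range(maxit):
        r = b - A @ x
        if np.linalg.norm(r) < tol:       # stopping criterion
            break
        Ar = A @ r
        t = (Ar @ r) / (Ar @ Ar)          # t_k = <Ar, r> / <Ar, Ar>
        x = x + t * r                     # x^{k+1} = x^k + t_k r^k
    return x
```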

A modification of the minimal residual method with two search directions was proposed by Sheng et al. [89]; the modified method converges to the solution of (2.1) faster than the minimal residual method.

We now solve Example 2.1 using Algorithm 2.1 and Algorithm 2.2. The comparison between the conjugate gradient method and the minimal residual method, with respect to the number of iterations against the 2-norm of the residual, is given in Figure 2.2.

Figure 2.2. Efficiency of the conjugate gradient method (2-norm of the residual versus the number of iterations for Algorithm 2.1 and Algorithm 2.2).

The numbers of iterations for Algorithm 2.1 and Algorithm 2.2 are 3 and 116 respectively. From Figure 2.2, we see that the conjugate gradient method converges to the solution of (2.1) faster than the minimal residual method.

In Section 2.3, we discuss a general form of the system of linear equations. These general equations are called absolute value equations; they are nonlinear and involve the componentwise absolute value of the unknown vector $x \in R^n$. First, before discussing systems of absolute value equations, we define linear complementarity problems.

2.2 Linear Complementarity Problems

The linear complementarity problem consists of finding two vectors $w$ and $z$ which satisfy the following conditions:
$$z \ge 0, \qquad w = Mz + q \ge 0, \qquad \langle w, z\rangle = 0, \tag{2.8}$$
where $M \in R^{n\times n}$ and $q \in R^n$. Complementarity problems have been generalized and extended to study a wide class of problems which arise in the pure and applied sciences; see [51-53, 58-62] and the references therein. Cottle and Dantzig [17] and Lemke [41] introduced the complementarity problems, which play a prominent role in engineering, economics, industry, optimization, linear programming and the physical sciences in a unified framework. Several methods have been proposed to solve linear complementarity problems. These methods can be classified into two categories: direct methods and indirect (iterative) methods. Lemke [41] and Cottle and Dantzig [17] developed direct methods for solving linear complementarity problems. The direct methods are expensive for large problems, and therefore several iterative methods have been proposed for solving linear complementarity problems. These iterative methods make it possible to handle very large scale linear programs which cannot be solved using the well known simplex method. The complementarity problem is equivalent to a fixed point problem, which can be proved using the following result.

Lemma 2.1 [62]. Let $K$ be a nonempty, closed and convex set in $R^n$. For a given $z \in R^n$, $u \in K$ satisfies the inequality
$$\langle u - z, v - u\rangle \ge 0, \qquad \forall v \in K,$$
if and only if
$$u = P_K z,$$
where $P_K$ is the projection of $R^n$ onto the closed convex set $K$.
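With $K = R^n_+$, the projection in Lemma 2.1 is simply the componentwise maximum with zero, and the fixed-point characterization suggests the iteration $z \leftarrow P_K(z - \rho(Mz + q))$ for (2.8). The sketch below is illustrative only; the step size $\rho$ and the stopping rule are assumptions, not part of the thesis:

```python
import numpy as np

def lcp_projection(M, q, rho=0.1, tol=1e-10, maxit=10000):
    """Sketch of a projection fixed-point iteration for the LCP (2.8):
    z >= 0, w = Mz + q >= 0, <w, z> = 0, with P_K(v) = max(v, 0)."""
    z = np.zeros_like(q, dtype=float)
    for k in range(maxit):
        z_new = np.maximum(z - rho * (M @ z + q), 0.0)  # P_K step
        if np.linalg.norm(z_new - z) < tol:             # fixed point reached
            break
        z = z_new
    return z
```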

Another effective mathematical tool for solving different types of equations is the homotopy perturbation method (HPM), introduced by He [29]. Keramati [43] and Yusufoglu [104] used the HPM for solving linear systems. Liu [46] proposed a homotopy iterative method for solving linear systems and proved that homotopy iterative methods converge rapidly as compared to stationary iterative methods. Noor [64] introduced an auxiliary parameter to accelerate the convergence of the solution series. Generally, a homotopy can be defined by
$$H(x, p) = (1 - p)F(x) + p\big(L(x) + N(x)\big) = 0, \tag{2.9}$$
where $p \in [0, 1]$, and $L(x)$ and $N(x)$ denote a linear operator and a nonlinear operator respectively.


From (2.9), we have
$$H(x, 0) = F(x) = 0, \qquad H(x, 1) = L(x) + N(x) = 0.$$
The approximate solution is obtained as a series:
$$y = x_0 + px_1 + p^2x_2 + \cdots, \qquad x = \lim_{p\to 1} y = x_0 + x_1 + x_2 + \cdots.$$
In the next section, we discuss the system of absolute value equations.

In the next section, we discuss system of absolute value equations.

2.3 System of Absolute Value Equations

In this section, we deal with the generalized system of absolute value equations of the form
$$Ax + B|x| = b, \tag{2.10}$$
where $A, B \in R^{n\times n}$, $b \in R^n$, and $|x|$ denotes the vector whose components are the absolute values of the components of $x \in R^n$. The generalized system of absolute value equations (2.10) was introduced by Rohn [79] and further studied in [14, 46-50, 66, 76, 104]. Rohn [79] proposed a theorem of the alternatives for solving the system of absolute value equations (2.10). He proved the equivalence between the system of absolute value equations (2.10) and the linear complementarity problem as
$$x^+ = (A + B)^{-1}(A - B)x^- + (A + B)^{-1}b, \tag{2.11}$$
where $x^+ = (|x| + x)/2$ and $x^- = (|x| - x)/2$. Clearly (2.11) is a complementarity problem with $M = (A + B)^{-1}(A - B)$ and $q = (A + B)^{-1}b$.

Hence the system of absolute value equations (2.10) suggests another way to formulate linear complementarity problems. The theorem of the alternatives describes the relation between the unique solution and the nontrivial solution of the generalized system of absolute value equations (2.10) as:


Theorem 2.3 [78]. Let $A, E \in R^{n\times n}$ with $E \ge 0$. Then exactly one of the following alternatives holds:

(i) for each $B \in R^{n\times n}$ with $|B| \le E$ and for each $b \in R^n$, the system
$$Ax + B|x| = b$$
has a unique solution;

(ii) there exist $\delta \in [0, 1]$ and a $\pm 1$-vector $y$ such that the system
$$Ax + \delta\,\mathrm{diag}(y)E|x| = 0 \tag{2.12}$$
has a nontrivial solution.

The problem of checking whether the system of absolute value equations (2.10) has a unique solution is NP-hard for rational square matrices $A, E$ with $E \ge 0$ (see the theorem of the alternatives). Mangasarian [53] has shown that solving the system of absolute value equations (2.10) is NP-hard. If the system of absolute value equations (2.10) is solvable, then it has either a unique solution or multiple solutions (exponentially many). The exact number of solutions of (2.10) is not known in general, as the following examples show.

Example 2.2. Consider the matrices
$$A = \begin{pmatrix} 0.31 & 0.55\\ 0.14 & 0.52 \end{pmatrix}, \qquad B = \begin{pmatrix} 0.59 & 0.34\\ 0.37 & 0.62 \end{pmatrix}, \qquad b = (0.8,\ 0.17)^T.$$
For different initial guesses (vectors with different sign patterns), we obtain the following solutions of (2.10), one per column:
$$x = \begin{pmatrix} 1.5164 & 1.3297 & 0.5870 & 6.6437\\ 1.7877 & 0.4457 & 1.2938 & 1.1913 \end{pmatrix}.$$
For the above example the total number of possible solutions is $2^n = 2^2 = 4$, where $n$ is the problem size (matrices of order $2\times 2$).


Example 2.3. Consider the matrices
$$A = \begin{pmatrix} 0.3 & 0.5 & 0.7\\ 0.4 & 0.5 & 0.3\\ 0.5 & 0.3 & 0.8 \end{pmatrix}, \qquad B = \begin{pmatrix} 0.5 & 0.3 & 0.6\\ 0.3 & 0.6 & 0.5\\ 0.5 & 0.6 & 0.9 \end{pmatrix}, \qquad b = (0.18,\ 0.66,\ -0.49)^T.$$
For different initial guesses (vectors with different sign patterns), we obtain the following solutions of (2.10), one per column:
$$x = \begin{pmatrix} 0.7273 & 1.1785 & 118.7571 & 1.0536\\ 0.3304 & 0.3948 & 180.7000 & 0.8007\\ 0.2299 & 1.3467 & 38.2571 & 8.2305 \end{pmatrix}.$$
The maximum number of solutions is $2^3 = 8$, but all possible solutions for this problem are the 4 given above.

Rohn [81] has suggested an algorithm for computing all solutions of the system of absolute value equations (2.10). He considered randomly generated square matrices of order 7 which have 10 solutions instead of $2^7 = 128$. Several iterative methods have been proposed for solving the system of absolute value equations (2.10) using the idea of linear complementarity problems.

It is proved in [48] that the system of absolute value equations (2.10) is equivalent to the linear complementarity problem
$$z = (I - M)^{-1}(x + q), \qquad M = 2(I + B^{-1}A)^{-1} - I, \qquad q = (I + B^{-1}A)^{-1}B^{-1}b, \tag{2.13}$$
where $B$ and $I + B^{-1}A$ are invertible; moreover no eigenvalue of $M$ is 1, and hence $(I - M)$ is nonsingular. Mangasarian [48] imposed several conditions on $B$ and $M$ to prove the equivalence between the generalized system of absolute value equations and the linear complementarity problem.

Now we discuss the system of absolute value equations with $B = -I$, that is,
$$Ax - |x| = b, \tag{2.14}$$
where $A \in R^{n\times n}$, $b \in R^n$ and $I$ is the identity matrix of order $n$. The system of absolute value equations (2.14) can be solved using different iterative methods, based on minimization techniques, the methods of linear complementarity problems, etc. In [36] the system of absolute value equations (2.14) was studied and reduced to the linear complementarity problem
$$Mu + Pv = d, \qquad u \ge 0, \quad v \ge 0, \quad u^Tv = 0,$$
where $M, P \in R^{m\times n}$ and $d \in R^m$. Mangasarian and Meyer [53] have proved the equivalence between the system of absolute value equations (2.14) and linear complementarity problems. If a solution of (2.14) exists, then (2.14) has a unique solution, multiple solutions ($2^n$ solutions) or infinitely many solutions [80]. Here we discuss the solvability of the system of absolute value equations (2.14).

2.3.1 Existence of Solution

(i) If the singular values of the matrix $A$ are greater than 1, then the system of absolute value equations (2.14) has a unique solution for any $b \in R^n$.

(ii) The system of absolute value equations (2.14) has a nonnegative solution when $A \ge 0$, $\|A\| < 1$ and $b \le 0$.

The above two conditions were discussed in [53]. From (i), the system of absolute value equations (2.14) has a unique solution for matrices such as the following diagonally dominant matrix:

Example 2.5. Let the matrix $A$ be given by
$$A = \begin{pmatrix} 3 & -1 & 0 & 0\\ -1 & 3 & -1 & 0\\ 0 & -1 & 3 & -1\\ 0 & 0 & -1 & 3 \end{pmatrix},$$
and let $b = (1, 0, 0, 1)^T$. The singular values of $A$ are
$$\mathrm{svd}(A) = [4.6180,\ 3.6180,\ 2.3820,\ 1.3820].$$
Clearly $\mathrm{svd}(A) > 1$, and therefore the system of absolute value equations (2.14) has a unique solution, $x = (1, 1, 1, 1)^T$.
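For illustration, the generalized Newton method of [50] mentioned in Chapter 1 solves (2.14) by the iteration $x^{k+1} = (A - D(x^k))^{-1}b$. The following sketch (illustrative code, not from the thesis) applies it to Example 2.5:

```python
import numpy as np

def generalized_newton(A, b, x0, tol=1e-12, maxit=100):
    """Sketch of the generalized Newton iteration for Ax - |x| = b:
    x^{k+1} = (A - D(x^k))^{-1} b with D(x) = diag(sign(x))."""
    x = x0.astype(float)
    for k in range(maxit):
        x = np.linalg.solve(A - np.diag(np.sign(x)), b)
        if np.linalg.norm(A @ x - np.abs(x) - b) < tol:
            break
    return x

A = np.array([[3., -1., 0., 0.],
              [-1., 3., -1., 0.],
              [0., -1., 3., -1.],
              [0., 0., -1., 3.]])
b = np.array([1., 0., 0., 1.])
print(np.linalg.svd(A, compute_uv=False))    # all singular values exceed 1
print(generalized_newton(A, b, np.ones(4)))  # returns (1, 1, 1, 1)
```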

Mangasarian [53] has shown that the system of absolute value equations (2.14) has $2^n$ distinct solutions, each of which has a different sign pattern with no zero entries, when $b < 0$ and $\|A\|_\infty < \sigma/2$, where $\sigma = \min_i |b_i| / \max_i |b_i|$.

Example 2.6. Consider the matrices
$$A = \begin{pmatrix} 0.2 & 0.1 & 0\\ 0.1 & 0.2 & 0.2\\ 0 & 0.1 & 0.2 \end{pmatrix}, \qquad b = (-1,\ -1,\ -2)^T.$$
Here $\sigma = \frac{1}{2}$ and $\|A\|_\infty = 0.4 < 0.5$, so the system of absolute value equations has the following $2^3 = 8$ solutions (one per column, each with a different sign pattern):
$$x = \begin{pmatrix} 1.1492 & 0.7461 & 1.3138 & 1.0864 & 0.8910 & 1.3581 & 0.7058 & 0.9214\\ 0.8065 & 1.0471 & 0.5102 & 1.3089 & 0.6920 & 0.8651 & 1.5306 & 1.0563\\ 2.3990 & 2.3691 & 2.5638 & 1.5576 & 2.5865 & 1.7388 & 1.5391 & 1.7547 \end{pmatrix}.$$

Note that if the following conditions hold, then a solution of the system of absolute value equations (2.14) does not exist:

(i) if $\|A\| < 1$ and $b \ge 0$ with not all elements of $b$ zero, then the system of absolute value equations (2.14) has no solution;

(ii) if $\|A\|_\infty < \sigma/2$, where $\sigma = \max_{b_i > 0} b_i / \max_i |b_i|$, and $b$ has at least one positive element, then the system of absolute value equations (2.14) has no solution.

The proofs of the above two statements can be found in [48]. There are several examples which satisfy these nonexistence conditions. There are also examples which do not satisfy the above two conditions but for which (2.14) still has no solution; for example, for
$$A = \begin{pmatrix} 5 & 2\\ 2 & 1 \end{pmatrix}, \qquad b = \begin{pmatrix} 3\\ 2 \end{pmatrix},$$
we have $\|A\|_\infty = 7$ and $b \ge 0$, so this example does not satisfy the above two conditions, yet the system of absolute value equations (2.14) has no solution. Hence conditions (i) and (ii) are sufficient but not necessary.


Chapter 3

One-Step Gauss-Seidel Method


In this chapter, we suggest and analyze an iterative method for solving the system of absolute value equations (2.14) using a minimization technique. We use a sequence with a single search direction. Our method is simple and easy to implement as compared to the methods based on linear complementarity problems. In Section 3.1, we present the proposed method for solving system (2.14). To highlight the role of the search direction we consider two different directions.

We also discuss the convergence of the method under a suitable condition on $C$. Let $D(x)$ be the diagonal matrix defined as
$$D(x) = \partial|x| = \mathrm{diag}(\mathrm{sign}(x)), \tag{3.1}$$
where $D(x)$ is the diagonal matrix corresponding to $\mathrm{sign}(x)$ and $\partial|x|$ is the subgradient of $|x|$. Here $\mathrm{sign}(x)$ denotes the vector with components equal to $1, 0, -1$ depending on whether the corresponding component of $x$ is positive, zero or negative.

We consider the symmetric and positive definite matrix $C$ given by
$$C = A - D(x).$$
In Section 3.3, we give some numerical examples. Comparison with other methods shows that this method performs better.

3.1 Iterative Method

In this section, we suggest an iterative method for solving the system of absolute value equations (2.14) using a minimization technique. For a given matrix $A \in R^{n\times n}$ and vector $b \in R^n$, we consider the functional
$$h(x) = \langle Ax, x\rangle - \langle |x|, x\rangle - 2\langle b, x\rangle, \qquad x \in R^n. \tag{3.2}$$
We prove that the minimum of the functional $h(x)$ defined by (3.2) is equivalent to the solution of (2.14).

Theorem 3.1. If $C = A - D(x)$ is a symmetric and positive definite matrix, then $x \in R^n$ is the solution of the system of absolute value equations (2.14) if and only if $x \in R^n$ is the minimum of the functional $h(x)$ defined by (3.2).


Proof. Let $x, v \in R^n$ and let $\alpha$ be a real number. Using Taylor's series, we have
$$h(x + \alpha v) = h(x) + \alpha\langle h'(x), v\rangle + \frac{\alpha^2}{2}\langle h''(x)v, v\rangle. \tag{3.3}$$
Using (3.2), we have
$$h'(x) = 2Ax - 2\,\partial|x|\,x - 2b = 2(Ax - |x| - b), \tag{3.4}$$
$$h''(x) = 2(A - D(x)) = 2C. \tag{3.5}$$
We also note that $\partial|x|\,x = |x|$, where $\partial|x|$ denotes the subgradient of $|x|$; see Mangasarian [50].

From (3.3), (3.4) and (3.5), we have
$$h(x + \alpha v) = h(x) + 2\alpha\langle Ax - |x| - b, v\rangle + \alpha^2\langle Cv, v\rangle.$$
For fixed $x$ and $v$, we consider the auxiliary function $g$ given by
$$g(\alpha) = h(x + \alpha v). \tag{3.6}$$
It is clear that $g(\alpha)$ has a minimum at
$$\alpha = -\frac{\langle Ax - |x| - b, v\rangle}{\langle Cv, v\rangle}, \tag{3.7}$$
where $\langle Cv, v\rangle$ is positive. From (3.6) and (3.7), we have
$$g(\alpha) = h(x) - 2\frac{\langle Ax - |x| - b, v\rangle^2}{\langle Cv, v\rangle} + \frac{\langle Ax - |x| - b, v\rangle^2}{\langle Cv, v\rangle} = h(x) - \frac{\langle Ax - |x| - b, v\rangle^2}{\langle Cv, v\rangle} \le h(x). \tag{3.8}$$
Suppose $x^* \in R^n$ satisfies $Ax^* - |x^*| = b$. Then, for any vector $0 \neq v \in R^n$, we have
$$\langle Ax^* - |x^*| - b, v\rangle = 0,$$
so by (3.8), $h(x)$ cannot be made smaller than $h(x^*)$. Thus $x^*$ minimizes $h(x)$.

On the other hand, suppose that $x^*$ minimizes $h(x)$. Then for every vector $v \neq 0$ we have $h(x^* + \alpha v) \ge h(x^*)$, and hence, again by (3.8),
$$\langle Ax^* - |x^*| - b, v\rangle = 0,$$
which implies that
$$Ax^* - |x^*| = b.$$
This shows that $x^* \in R^n$ is the solution of (2.14).

Theorem 3.1 enables us to suggest the following iterative scheme for solving the system of absolute value equations (2.14). Let
$$x^{k+1} = x^k + \alpha_k v^k, \tag{3.9}$$
where
$$\alpha_k = -\frac{\langle Ax^k - |x^k| - b, v^k\rangle}{\langle Cv^k, v^k\rangle}, \qquad k = 0, 1, 2, \ldots.$$


The vector $v^k \in R^n$ may be chosen in different ways. For the sake of simplicity, we consider $v^k = e_k$, the $k$th column of the identity matrix. The resulting method is called the one-step Gauss-Seidel method for solving the system of absolute value equations (2.14). We present the one-step Gauss-Seidel method as follows:

Algorithm 3.1

Choose an initial guess $x^0 \in R^n$;
For $k = 0, 1, 2, \ldots$
&nbsp;&nbsp;$y^1 = x^k$;
&nbsp;&nbsp;For $i = 1, 2, \ldots, n$
&nbsp;&nbsp;&nbsp;&nbsp;$\alpha_i = -\langle Ay^i - |y^i| - b, e_i\rangle / \langle Ce_i, e_i\rangle$;
&nbsp;&nbsp;&nbsp;&nbsp;$y^{i+1} = y^i + \alpha_i e_i$;
&nbsp;&nbsp;End for $i$;
&nbsp;&nbsp;$x^{k+1} = y^{n+1}$;
&nbsp;&nbsp;stopping criteria;
End for $k$.

In Algorithm 3.1, $e_i$ is defined as
$$e_1 = (1, 0, 0, \ldots, 0), \quad e_2 = (0, 1, 0, \ldots, 0), \quad \ldots, \quad e_n = (0, 0, 0, \ldots, 1).$$
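In code, the coordinate sweep of Algorithm 3.1 can be sketched as follows (illustrative; the residual-norm stopping test is an assumption):

```python
import numpy as np

def one_step_gauss_seidel(A, b, x0, tol=1e-12, maxit=1000):
    """Sketch of Algorithm 3.1: one coordinate sweep per outer
    iteration, with C = A - D(y) evaluated at the current iterate."""
    n = len(b)
    x = x0.astype(float)
    for k in range(maxit):
        y = x.copy()
        for i in range(n):
            C_ii = A[i, i] - np.sign(y[i])        # <C e_i, e_i>
            res_i = (A @ y - np.abs(y) - b)[i]    # <Ay - |y| - b, e_i>
            y[i] += -res_i / C_ii                 # y^{i+1} = y^i + alpha_i e_i
        x = y                                     # x^{k+1} = y^{n+1}
        if np.linalg.norm(A @ x - np.abs(x) - b) < tol:
            return x, k + 1                       # stopping criterion
    return x, maxit
```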

The proposed method depends on the search direction. Well known directions are the columns of the identity matrix (Algorithm 3.1), the Broyden family [23], residual vectors and the directions discussed by Saad [85]. To show the importance of the search direction we consider $v^k = s^k$ as follows:

Algorithm 3.2

Choose an initial guess $x^0 \in R^n$;
For $k = 0, 1, 2, \ldots$
&nbsp;&nbsp;$g(x^k) = (A - D(x^k))^T(Ax^k - |x^k| - b)$;
&nbsp;&nbsp;$H_k = \big((A - D(x^k))^T(A - D(x^k))\big)^{-1}$;
&nbsp;&nbsp;$s^k = -H_k\,g(x^k)$;
&nbsp;&nbsp;$\alpha_k = -\langle Ax^k - |x^k| - b, s^k\rangle / \langle Cs^k, s^k\rangle$;
&nbsp;&nbsp;$x^{k+1} = x^k + \alpha_k s^k$;
&nbsp;&nbsp;stopping criteria;
End for $k$.

In Algorithm 3.2, $H_k$ is well defined because $C$ is a symmetric and positive definite matrix.
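A corresponding sketch of Algorithm 3.2 (again illustrative; it assumes $A - D(x^k)$ is invertible at every iterate):

```python
import numpy as np

def newton_direction_method(A, b, x0, tol=1e-12, maxit=1000):
    """Sketch of Algorithm 3.2: search direction s^k = -H_k g(x^k)
    combined with the exact step length alpha_k of (3.7)."""
    x = x0.astype(float)
    for k in range(maxit):
        C = A - np.diag(np.sign(x))          # C = A - D(x^k)
        res = A @ x - np.abs(x) - b          # Ax^k - |x^k| - b
        if np.linalg.norm(res) < tol:
            return x, k
        g = C.T @ res                        # g(x^k)
        s = -np.linalg.solve(C.T @ C, g)     # s^k = -H_k g(x^k)
        alpha = -(res @ s) / (s @ (C @ s))   # alpha_k = -<res, s^k>/<Cs^k, s^k>
        x = x + alpha * s
    return x, maxit
```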

We now prove that the sequence $\{x^k\}$ defined by (3.9) converges to the solution of the system of absolute value equations (2.14).

3.2 Convergence Analysis

We assume that $D(x^{k+1}) = D(x^k)$, that is, two consecutive iterates have the same sign pattern; see [50].

Theorem 3.2. Let $h$ be of the form (3.2). Then the reduction from $h(x^k)$ to $h(x^{k+1})$ equals the reduction of the error in the $C$-norm, and the sequence defined by (3.9) converges linearly to the solution $x^* \in R^n$ of (2.14) in the $C$-norm, provided the components of $x^{k+1}$ and $x^k$ have the same sign.

Proof. Consider

\|x_k - x^*\|_C^2 - \|x_{k+1} - x^*\|_C^2
  = \langle C(x_k - x^*), x_k - x^* \rangle - \langle C(x_{k+1} - x^*), x_{k+1} - x^* \rangle
  = \langle Cx_k, x_k \rangle - 2\langle Cx_k, x^* \rangle - \langle Cx_{k+1}, x_{k+1} \rangle + 2\langle Cx_{k+1}, x^* \rangle,

as C is symmetric. Since Cx^* = Ax^* - |x^*| = b and D(x_{k+1}) = D(x_k), it follows that

\|x_k - x^*\|_C^2 - \|x_{k+1} - x^*\|_C^2
  = \langle Ax_k - |x_k|, x_k \rangle - 2\langle b, x_k \rangle - \langle Ax_{k+1} - |x_{k+1}|, x_{k+1} \rangle + 2\langle b, x_{k+1} \rangle
  = h(x_k) - h(x_{k+1}).   (3.10)

This proves the first part of the theorem. The convergence of the sequence (3.9) now follows from (3.8) and (3.10):

\|x_{k+1} - x^*\|_C^2 - \|x_k - x^*\|_C^2 = h(x_{k+1}) - h(x_k) \le 0,

which implies that

\|x_{k+1} - x^*\|_C \le \|x_k - x^*\|_C.   (3.11)

From this, it follows that

\|x_{k+1} - x^*\|_C \le \|x_k - x^*\|_C \le \ldots \le \|x_0 - x^*\|_C.   (3.12)

Thus from (3.12), we conclude that {x_k} is a Fejér sequence [98] and converges linearly to x^* \in R^n.

3.3 Numerical Results

In this section, we consider several examples to show the efficiency of the proposed

method. We also consider systems that are not positive definite. The comparison with

different methods is given. All the computations are done using the Matlab 7.

Example 3.1. Consider the second-order BVP of the type

\frac{d^2 x}{dt^2} - |x| = 1 - t^2,  0 \le t \le 1,  x(0) = -1,  x(1) = 0.   (3.13)


We discretize equation (3.13) using the finite difference method to obtain the system of absolute value equations (2.14). The matrix A \in R^{10 \times 10} is defined as

a_{i,j} = 242 for j = i;  a_{i,j} = -121 for j = i + 1, i = 1, 2, \ldots, n - 1 and for j = i - 1, i = 2, 3, \ldots, n;  a_{i,j} = 0 otherwise.

The exact solution is

x(t) = 0.1915802528 \sin t - 4\cos t + 3 - t^2,  x < 0,
x(t) = -1.462117157 e^t - 0.5378828428 e^{-t} + t^2 + 1,  x > 0.

The constant vector b is given by

b = (121.9917, 0.9669, 0.9256, 0.8678, 0.7934, 0.7025, 0.5950, 0.4710, 0.3306, 0.1736)^T.
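The data above can be reproduced in Matlab along the following lines; this is a sketch assuming the standard three-point finite difference scheme with n = 10 interior nodes and step h = 1/11, the sign of the boundary correction being chosen to match the printed b.

n = 10;  h = 1/(n + 1);  t = (1:n)'*h;
A = (2/h^2)*eye(n) - (1/h^2)*(diag(ones(n-1,1),1) + diag(ones(n-1,1),-1));
b = 1 - t.^2;            % values of (1 - t^2) at the interior nodes
b(1) = b(1) + 1/h^2;     % boundary correction, giving b(1) = 121.9917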

Let the initial guess be x_0 = (1, 1, \ldots, 1)^T. The comparison is given in Figure 3.1.

Figure 3.1
Comparison between Algorithm 3.1 and Algorithm 3.2 (number of iterations versus the 2-norm of the residual, on a logarithmic scale from 10^2 down to 10^{-14}).


The numbers of iterations for Algorithm 3.1 and Algorithm 3.2 are 456 and 12, respectively, with accuracy 10^{-13}. Algorithm 3.2 is more efficient than Algorithm 3.1.

Example 3.2. Let a random matrix A \in R^{n \times n} be chosen from a uniform distribution on [-10, 10], with diagonal elements equal to 1000 and n ranging from 10 to 1000. A random x \in R^n is chosen from [-1, 1]. The constant vector is computed as b = Ax - |x|.
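A sketch of this random problem generator in Matlab is given below; the size n = 100 is only an illustrative value.

n = 100;                  % problem size (10 to 1000 in this example)
A = -10 + 20*rand(n);     % entries uniform on [-10, 10]
A(1:n+1:end) = 1000;      % diagonal elements equal to 1000
x = -1 + 2*rand(n,1);     % random solution chosen from [-1, 1]
b = A*x - abs(x);         % constant vector b = Ax - |x|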

We take m consecutively generated solvable random problems. The stopping criterion is \|x_{k+1} - x_k\| < 10^{-6}. Here the matrix A is non-symmetric. The computational results are given in Table 3.1:

Table 3.1

n       m     No. of iterations    TOC (seconds)    Error
10      1     5                    0.125            9.8808 x 10^{-9}
10      10    50                   0.253            1.1951 x 10^{-7}
50      1     6                    0.116            1.8099 x 10^{-7}
50      10    60                   0.320            2.6796 x 10^{-7}
100     1     7                    0.128            5.7908 x 10^{-8}
100     10    71                   0.456            7.1340 x 10^{-8}
500     1     9                    27.425           1.5518 x 10^{-7}
500     10    91                   272.938          2.2952 x 10^{-7}
1000    1     10                   331.33           1.3688 x 10^{-7}
1000    10    100                  2560.23          1.3051 x 10^{-7}

In Table 3.1, n and m denote the problem size and the total number of problems solved, respectively. For the sets of 10 problems, the last column of Table 3.1 reports the average error. Algorithm 3.1 is very effective for solving positive definite systems of absolute value equations (2.14).


Now we compare Algorithm 3.1 with Algorithm 3.2 to highlight the role of the search direction.

Example 3.3. Let the matrix A be given by

a_{i,j} = 4n for j = i;  a_{i,j} = n for j = i + 1, i = 1, 2, \ldots, n - 1 and for j = i - 1, i = 2, 3, \ldots, n;  a_{i,j} = 1 otherwise.

Let b = (A - I)e, where I is the identity matrix of order n and e is the n \times 1 vector whose elements are all equal to unity. The stopping criterion is \|x_{k+1} - x_k\| < 10^{-10}. We choose the initial guess x_0 = (0, 0, \ldots, 0)^T. The computational results are given in Table 3.2.

Table 3.2

         Algorithm 3.1                  Algorithm 3.2
Order    No. of iterations    TOC       No. of iterations    TOC
10       12                   0.0168    3                    0.011
50       12                   0.0187    3                    0.012
100      13                   0.197     3                    0.047
500      13                   43.39     3                    1.059
1000     14                   351.191   3                    7.279
1500     14                   451.154   3                    10.731

It is clear from Table 3.2 that the one-step Gauss-Seidel method converges rapidly with v_k = s_k as compared to v_k = e_k. Algorithm 3.2 gives the solution of the system of absolute value equations (2.14) in a few iterations. Similar results are obtained for other examples too.


In the next example, we compare Algorithm 3.2 with the interval algorithm for absolute value equations [99].

Example 3.4 [99]. Consider the matrix A in Matlab code as

n = input('dimension of matrix A = ');
A = round(100*(eye(n,n) - 0.02*(2*rand(n,n) - 1)));

We computed b = Ax - |x|, where the vector x is chosen as

x = rand(n,1) - rand(n,1).

The computational results are given in Table 3.3.

Table 3.3

n       Interval Algorithm    Algorithm 3.2
10      2                     2
50      3                     2
500     5                     2
1000    6                     3
2000    6                     4

From Table 3.3, we conclude that for large problems Algorithm 3.2 is more efficient than the Interval Algorithm [99]. Algorithm 3.2 converges to the exact solution of the system of absolute value equations (2.14) in most cases.

In the next example, we compare Algorithm 3.2 with the smoothing Newton method (SNM) [101].

Example 3.5 [101]. Consider A and b in Matlab code as:

n=input('dimension of matrix A=')

rand('state',0);

A1=zeros(n,n);


for i=1:n

for j=1:n

if i==j

A1(i,j)=500;

elseif i>j

A1(i,j)=1+rand;

else

A1(i,j)=0;

end

end

end

A=A1+(tril(A1,-1))';

b=(A-eye(n))*ones(n,1);

with a random initial guess. The stopping criterion is \|x_{k+1} - x_k\|_2 < 10^{-6}. The computational results are given in Table 3.4.

Table 3.4 (No. of iterations)

Order    SNM    Algorithm 3.2        Order    SNM    Algorithm 3.2
4        2      2                    64       3      2
8        2      2                    128      3      2
16       2      2                    256      3      2
32       2      2                    512      3      2

The proposed method converges to the exact solution of the system of absolute value equations (2.14). From Table 3.4, we conclude that our method converges faster than the smoothing Newton method [101] to the solution of the system of absolute value equations (2.14) for large systems.


In the next example, we compare Algorithm 3.2 with the particle swarm optimization (PSO) method [100].

Example 3.6 [100]. Consider random A and b in Matlab code as:

n=input('dimension of matrix A=');

rand('state',0);

R=rand(n, n);

b=rand(n, 1);

A=R'*R+n*eye(n);

with a random initial guess. The stopping criterion is \|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-12}. The comparison between Algorithm 3.2 and the PSO method [100] is presented in Table 3.5.

Table 3.5

         PSO method                   Algorithm 3.2
Order    No. of iterations    TOC     No. of iterations    TOC
4        2                    2.230   2                    0.006
8        2                    3.340   2                    0.022
16       3                    3.790   2                    0.025
32       2                    4.120   2                    0.053
64       3                    6.690   2                    0.075
128      3                    12.450  2                    0.142
256      3                    34.670  2                    0.201
512      5                    76.570  2                    1.436
1024     5                    157.12  3                    6.604

From Table 3.5, we conclude that for large problems Algorithm 3.2 converges faster than the PSO method [100]. The proposed method is simple and easy to implement. The last column of Table 3.5 shows that Algorithm 3.2 solves the system of absolute value equations (2.14) in a few seconds.

In this chapter, we have discussed only these two search directions. Algorithm 3.2 performed better than the other iterative methods. Future work is to choose a good search direction to further improve the proposed method for solving the system of absolute value equations (2.14).


Chapter 4

Two-Step Gauss-Seidel Method


We suggest and analyze an iterative method for solving the system of absolute value equations (2.14) using a minimization technique, extending the idea of minimization to double search directions. This method is faster than the one-step Gauss-Seidel method discussed in the previous chapter; the two-step Gauss-Seidel method is very effective and performs better. In this method, we consider a sequence which updates two components of the approximate solution at the same time. This technique enables us to suggest the two-step Gauss-Seidel method for solving the system of absolute value equations (2.14).

The two-step Gauss-Seidel method also depends upon the choice of search directions, and a comparison with the one-step Gauss-Seidel method is given. For x \in R^n, sign(x) will denote a vector with components equal to 1, 0, -1 depending on whether the corresponding component of x is positive, zero or negative. The diagonal matrix D(x) is defined as

D(x) = \partial|x| = diag(sign(x)),

where D(x) is the diagonal matrix corresponding to sign(x) and \partial|x| denotes the subgradient of |x|. We consider A such that E = A - D(x_k) is symmetric and positive definite for all k = 0, 1, 2, \ldots. For simplicity, we denote the following:

a = \langle Ev_1, v_1 \rangle,   (4.1)
c = \langle Ev_1, v_2 \rangle = \langle Ev_2, v_1 \rangle,   (4.2)
d = \langle Ev_2, v_2 \rangle,   (4.3)
p = \langle Ax_k - |x_k| - b, v_1 \rangle,   (4.4)
q = \langle Ax_k - |x_k| - b, v_2 \rangle,   (4.5)

where 0 \neq v_1, v_2 \in R^n are linearly independent vectors and D(x_k) = diag(sign(x_k)); note that D(x_k)x_k = |x_k|, k = 0, 1, 2, \ldots, see Mangasarian [50].

We need the following result, which gives the relation among a, c and d that we use in the development of the two-step Gauss-Seidel method. Using the technique of [38], we have the following result.

Lemma 4.1 [38]. Let a, c, d be defined by (4.1), (4.2) and (4.3) such that

a = \langle Ev_1, v_1 \rangle > 0,   (4.6)
d = \langle Ev_2, v_2 \rangle > 0.   (4.7)

Then

ad - c^2 > 0.   (4.8)

Proof. The inequalities (4.6) and (4.7) hold by the positive definiteness of E and the properties of the inner product. To prove (4.8), we consider

\langle E(v_1 - tv_2), v_1 - tv_2 \rangle = \langle Ev_1, v_1 \rangle - 2t\langle Ev_1, v_2 \rangle + t^2 \langle Ev_2, v_2 \rangle,   (4.9)

where t \in R and 0 \neq v_1, v_2 \in R^n are linearly independent vectors. Taking

t = \frac{\langle Ev_1, v_2 \rangle}{\langle Ev_2, v_2 \rangle}

in (4.9), we have

0 < \langle E(v_1 - tv_2), v_1 - tv_2 \rangle
  = \langle Ev_1, v_1 \rangle - 2\frac{\langle Ev_1, v_2 \rangle^2}{\langle Ev_2, v_2 \rangle} + \frac{\langle Ev_1, v_2 \rangle^2}{\langle Ev_2, v_2 \rangle}
  = \langle Ev_1, v_1 \rangle - \frac{\langle Ev_1, v_2 \rangle^2}{\langle Ev_2, v_2 \rangle}
  = a - \frac{c^2}{d}
  = \frac{ad - c^2}{d}.

Thus the required result follows.

4.1 Two-Step Iterative Method

In this section, we use two search directions to suggest and analyze an iterative method

for solving the system of absolute value equations (2.14). Consider the functional of the

type:

h(x) = \langle Ax, x \rangle - \langle |x|, x \rangle - 2\langle b, x \rangle.


For 0 \neq v_1, v_2 \in R^n, we consider

x_{k+1} = x_k + \alpha v_1 + \beta v_2,  k = 0, 1, 2, \ldots,  \alpha, \beta \in R.   (4.10)

We want to show that the minimum of h(x) defined by (3.1) occurs at the point (4.10), that is, we have to show that h(x_{k+1}) \le h(x_k). Using the Taylor series, we have

h(x_{k+1}) = h(x_k + \alpha v_1 + \beta v_2)
           = h(x_k) + \langle h'(x_k), \alpha v_1 + \beta v_2 \rangle + \frac{1}{2}\langle h''(x_k)(\alpha v_1 + \beta v_2), \alpha v_1 + \beta v_2 \rangle,   (4.11)

where

h'(x_k) = 2(Ax_k - |x_k| - b),  h''(x_k) = 2(A - D(x_k)) = 2E.   (4.12)

We also note that \partial|x| x = |x|, where \partial|x| denotes the subgradient of |x|, see Mangasarian [50].

From (4.11) and (4.12), we have

1 2 1 2 1 2 1 2

2

1 2 1 1

2

2 1 1 2 2 2

1 2

2 21 2 1 2 2 2

( ) ( ) 2 , ,

( ) 2 , 2 , ,

, , , .

( ) 2 , 2 ,

, 2 , , , (4.13)

k k k k

k k k k k

k k k k k

h x v v h x Ax x b v v E v v v v

h x Ax x b v Ax x b v E v v

E v v E v v Ev v

h x Ax x b v Ax x b v

Ev v Ev v Ev v

α β α β α β α β

α β α

αβ αβ β

α β

α αβ β

+ + = + − − + + + +

= + − − + − − +

+ + +

= + − − + − −

+ + +

where we have used the fact that ( )kE A D x= − is symmetric for each k . Now from (4.1)

-(4.5) and (4.13), we have

2 2

1 2( ) ( ) 2 2 2 .k kh x v v h x p q a c dα β α β α αβ β+ + = + + + + +

Clearly h is continuous, to minimize ,h we need the following partial derivatives

Page 57: Iterative Methods for Solving Systems of Equationsprr.hec.gov.pk/jspui/bitstream/123456789/1632/2/1493S.pdf · The Jacobi method, Gauss-Seidel method and successive over relaxation

43

with respect to ,α β :

2 2 2 , (4.14)

2 2 2 , (4.15)

hp c a

hq c d

β αα

α ββ

∂= + +

∂= + +

2

2

2

2

2

2 ,

2 ,

2 .

ha

hd

hc

α

β

α β

∂=

∂=

∂=

∂ ∂

Using Lemma 4.1 and Theorem 11 of [95], it is clear that h assumes its minimum, since

\frac{\partial^2 h}{\partial \alpha^2} = 2a > 0,

and

\frac{\partial^2 h}{\partial \alpha^2}\frac{\partial^2 h}{\partial \beta^2} - \left(\frac{\partial^2 h}{\partial \alpha \partial \beta}\right)^2 = 4(ad - c^2) > 0.

To find the minimum, we equate (4.14) and (4.15) to zero, that is,

2p + 2\beta c + 2\alpha a = 0,
2q + 2\alpha c + 2\beta d = 0.

Solving the above equations, we have

\alpha = \frac{cq - dp}{ad - c^2},   (4.16)
\beta = \frac{cp - aq}{ad - c^2}.   (4.17)

From (4.16), (4.17) and (4.13), we have

h(x_k) - h(x_{k+1}) = \frac{dp^2 + aq^2 - 2cpq}{ad - c^2}.   (4.18)


Since a > 0, we can write

h(x_k) - h(x_{k+1}) = \frac{a(dp^2 + aq^2 - 2cpq)}{a(ad - c^2)}
                    = \frac{(ad - c^2)p^2 + (cp - aq)^2}{a(ad - c^2)}
                    \ge \frac{p^2}{a} \ge 0,

so that

h(x_{k+1}) \le h(x_k).   (4.19)

So for vectors v_1, v_2 \neq 0 we would have

h(x_{k+1}) < h(x_k),

which is impossible if x_k already minimizes h. Consequently, from (4.18), we have

p = q = 0.

Suppose x^* satisfies

Ax^* - |x^*| = b.

Then for nonzero vectors v_1, v_2 \in R^n, we have p = q = 0, and h(x) cannot be made any smaller than h(x^*). Thus x^* minimizes h(x).

On the other hand, suppose that x^* minimizes h(x). Then for vectors v_1, v_2 \neq 0, we have

h(x^* + \alpha v_1 + \beta v_2) \ge h(x^*).

Thus from (4.18), we have

\langle Ax^* - |x^*| - b, v_1 \rangle = 0  and  \langle Ax^* - |x^*| - b, v_2 \rangle = 0,

which implies that

Ax^* - |x^*| = b.

This shows that x^* \in R^n is the solution of the system of absolute value equations (2.14).


The above result enables us to suggest the next algorithm.

Algorithm 4.1:

Choose an initial guess x_0 \in R^n to (2.14)
For k = 0, 1, \ldots do
    y^1 = x_k
    For i = 1, 2, \ldots, n do
        j = i - 1
        if i = 1 then j = n end if
        p = \langle Ay^i - |y^i| - b, e_i \rangle
        q = \langle Ay^i - |y^i| - b, e_j \rangle
        \alpha_i = \frac{cq - dp}{ad - c^2}
        \beta_i = \frac{cp - aq}{ad - c^2}
        y^{i+1} = y^i + \alpha_i e_i + \beta_i e_j
    End do for i
    x_{k+1} = y^{n+1}
    Stopping criteria
End do for k.

The pair of vectors v_1, v_2 may be chosen in different ways, and the efficiency of the proposed method depends on this choice. Here we consider v_1 = e_i, v_2 = e_j, where j depends on i, with i \neq j, i = 1, 2, \ldots, n: j = i - 1 for i > 1, and j = n when i = 1. Here e_i, e_j denote the ith and jth columns of the identity matrix, respectively.

Remark. If cp = aq, then \beta = 0 and Algorithm 4.1 reduces to Algorithm 3.1 with a single search direction.
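A Matlab sketch of Algorithm 4.1 with this pairing of e_i and e_j may be written as follows; the function name two_step_gs, the tolerance tol and the iteration cap maxit are illustrative assumptions.

function x = two_step_gs(A, b, x0, tol, maxit)
% Two-step Gauss-Seidel sketch for A*x - |x| = b (Algorithm 4.1),
% with v1 = e_i and v2 = e_j, j = i-1 for i > 1 and j = n for i = 1.
n = length(b);
x = x0;
for k = 1:maxit
    y = x;
    for i = 1:n
        if i == 1, j = n; else, j = i - 1; end
        E = A - diag(sign(y));                % E = A - D(y), assumed symmetric
        r = A*y - abs(y) - b;
        a = E(i,i);  d = E(j,j);  c = E(i,j); % (4.1)-(4.3) for v1 = e_i, v2 = e_j
        p = r(i);    q = r(j);                % (4.4) and (4.5)
        alpha = (c*q - d*p)/(a*d - c^2);      % (4.16)
        beta  = (c*p - a*q)/(a*d - c^2);      % (4.17)
        y(i) = y(i) + alpha;
        y(j) = y(j) + beta;
    end
    x = y;
    if norm(A*x - abs(x) - b) < tol, return, end
end
end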


4.2 Convergence Analysis

The sequence {x_k} defined by (4.10) converges under the condition that D(x_{k+1}) = D(x_k), where D(x_{k+1}) = diag(sign(x_{k+1})), k = 0, 1, 2, \ldots.

Theorem 4.1. If D(x_{k+1}) = D(x_k) for each k, then (4.10) converges linearly to a solution x^* of (2.14) in the E-norm when h is of the form (3.1).

Proof. Consider

\|x_k - x^*\|_E^2 - \|x_{k+1} - x^*\|_E^2
  = \langle E(x_k - x^*), x_k - x^* \rangle - \langle E(x_{k+1} - x^*), x_{k+1} - x^* \rangle
  = \langle Ex_k, x_k \rangle - 2\langle Ex_k, x^* \rangle - \langle Ex_{k+1}, x_{k+1} \rangle + 2\langle Ex_{k+1}, x^* \rangle
  = \langle Ex_k, x_k \rangle - 2\langle b, x_k \rangle - \langle Ex_{k+1}, x_{k+1} \rangle + 2\langle b, x_{k+1} \rangle
  = \langle Ax_k - |x_k|, x_k \rangle - 2\langle b, x_k \rangle - \langle Ax_{k+1} - |x_{k+1}|, x_{k+1} \rangle + 2\langle b, x_{k+1} \rangle
  = h(x_k) - h(x_{k+1}),

since D(x_{k+1}) = D(x_k), E is symmetric and Ex^* = b. Using (4.19), we have

\|x_{k+1} - x^*\|_E^2 \le \|x_k - x^*\|_E^2.   (4.20)

Thus from (4.20) we have

\|x_{k+1} - x^*\|_E \le \|x_k - x^*\|_E \le \ldots \le \|x_0 - x^*\|_E.   (4.21)


Thus from (4.21) we conclude that {x_k} is a Fejér sequence [98]; therefore it converges linearly to x^* \in R^n in the E-norm.

In the next result, we compare our method theoretically with Algorithm 3.1.

Theorem 4.2. The rate of convergence of the two-step Gauss-Seidel method is better than (or at least equal to) that of the one-step Gauss-Seidel method.

Proof. The one-step Gauss-Seidel method gives the reduction of (3.1) as

h(x_k) - h(x_{k+1}) = \frac{p^2}{a}.   (4.22)

To compare (4.18) and (4.22), subtract (4.22) from (4.18):

\frac{dp^2 + aq^2 - 2cpq}{ad - c^2} - \frac{p^2}{a} = \frac{(cp - aq)^2}{a(ad - c^2)} \ge 0.

Hence the two-step Gauss-Seidel method gives a better reduction of the function h(x) defined by (3.1) than the one-step Gauss-Seidel method.

4.3 Numerical Results

In this section, we consider several examples to illustrate the implementation and efficiency of the proposed method. The convergence of the two-step Gauss-Seidel method is guaranteed for positive definite systems only. A comparison with the one-step Gauss-Seidel method is given. All computations are done using Matlab 7.

Example 4.1. Consider the second-order BVP of the type

\frac{d^2 x}{dt^2} - |x| = 1 - t^2,  0 \le t \le 1,  x(0) = -1,  x(1) = 0.   (4.23)

We discretize (4.23) using the finite difference method to obtain the system of absolute value equations of the type

Ax - |x| = b,


where the matrix A \in R^{10 \times 10} is given by

a_{i,j} = 242 for j = i;  a_{i,j} = -121 for j = i + 1, i = 1, 2, \ldots, n - 1 and for j = i - 1, i = 2, 3, \ldots, n;  a_{i,j} = 0 otherwise.

The constant vector b is given by

b = (121.9917, 0.9669, 0.9256, 0.8678, 0.7934, 0.7025, 0.5950, 0.4710, 0.3306, 0.1736)^T.

The exact solution is

x(t) = 0.1915802528 \sin t - 4\cos t + 3 - t^2,  x < 0,
x(t) = -1.462117157 e^t - 0.5378828428 e^{-t} + t^2 + 1,  x > 0.

The initial guess x_0 is chosen as x_0 = (1, 1, \ldots, 1)^T. The comparison between Algorithm 3.1 and Algorithm 4.1 is given in Figure 4.1, where we plot the 2-norm of the residual against the number of iterations. The stopping criterion is \|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-13}. Figure 4.1 shows the comparison as follows.

Figure 4.1
Comparison graph of Algorithm 3.1 and Algorithm 4.1 (number of iterations versus the 2-norm of the residual, on a logarithmic scale from 10^4 down to 10^{-14}).


In Figure 4.1, the solid line represents the iterations of Algorithm 4.1 and the dashed line the approximate solution of (2.14) using Algorithm 3.1. The numbers of iterations are 143 and 456, respectively. The accuracy index is 10^{-13}.

Example 4.2. Let the matrix A be given by

a_{i,j} = 4n for j = i;  a_{i,j} = n for j = i + 1, i = 1, 2, \ldots, n - 1 and for j = i - 1, i = 2, 3, \ldots, n;  a_{i,j} = 1 otherwise.

Let b = (A - I)e, where I is the identity matrix of order n and e is the n \times 1 vector whose elements are all equal to unity, so that x = (1, 1, \ldots, 1)^T is the exact solution. The stopping criterion is \|x_{k+1} - x_k\| < 10^{-10}. We choose the initial guess x_0 = (0, 0, \ldots, 0)^T and let A be of size ranging from 10 to 1000. The computational results are shown in Table 4.1:

Table 4.1

         Algorithm 3.1                            Algorithm 4.1
Order    No. of iterations    Error               No. of iterations    Error
5        15                   3.212 x 10^{-11}    6                    9.151 x 10^{-12}
10       20                   2.602 x 10^{-11}    7                    2.432 x 10^{-12}
50       24                   4.135 x 10^{-11}    10                   1.134 x 10^{-11}
100      24                   9.103 x 10^{-11}    10                   2.581 x 10^{-11}
200      25                   7.012 x 10^{-11}    10                   3.259 x 10^{-11}
500      26                   5.393 x 10^{-11}    10                   8.820 x 10^{-11}
1000     27                   3.588 x 10^{-11}    11                   1.115 x 10^{-11}


The computational results show that Algorithm 4.1 is about two times faster than Algorithm 3.1. The proposed method is very efficient for solving systems of absolute value equations (2.14) of large size.

In the next example, we consider a full dense non-symmetric matrix A and a different number of problems.

Example 4.3. Let a random matrix A \in R^{n \times n} be chosen from [-10, 10], with diagonal elements all equal to 1000. A random x \in R^n is chosen from [-1, 1], with n ranging from 10 to 1000. The constant vector is computed as b = Ax - |x|. We take m consecutively generated solvable random problems and use \|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-6} as the stopping criterion. Clearly the matrix A defined in this example is non-symmetric. The computational results are given in Table 4.2.

Table 4.2

              Algorithm 3.1                           Algorithm 4.1
n      m      No. of iterations    Error              No. of iterations    Error
10     1      5                    2.301 x 10^{-7}    4                    1.412 x 10^{-9}
10     10     50                   5.339 x 10^{-7}    36                   4.351 x 10^{-8}
50     1      6                    6.272 x 10^{-7}    5                    2.941 x 10^{-8}
50     10     60                   1.258 x 10^{-7}    55                   3.280 x 10^{-8}
100    1      7                    2.131 x 10^{-7}    6                    5.314 x 10^{-8}
100    10     71                   1.421 x 10^{-7}    60                   2.476 x 10^{-8}
500    1      9                    2.143 x 10^{-7}    8                    5.122 x 10^{-8}
500    10     91                   8.602 x 10^{-7}    80                   1.013 x 10^{-7}
1000   1      10                   7.254 x 10^{-7}    9                    7.250 x 10^{-7}
1000   10     100                  9.651 x 10^{-7}    90                   3.394 x 10^{-7}


In Table 4.2, n and m denote the problem size and the total number of problems solved. From the table, we see that Algorithm 4.1 requires fewer iterations than Algorithm 3.1.

For different pairs of vectors e_i, e_j, the two-step Gauss-Seidel method converges to the solution of the system of absolute value equations (2.14) in different numbers of iterations. We consider Example 4.3 with different combinations of e_i, e_j, columns of the identity matrix. Let k denote the gap between i and j. We consider the relation between i and j as follows:

j = i - k,  for i > k,
j = i - k + n,  for i \le k.
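A small Matlab helper expressing one reading of this pairing rule is sketched below; the name pick_j is hypothetical.

function j = pick_j(i, k, n)
% Second index j for a gap k between i and j (wrap-around modulo n).
if i > k
    j = i - k;         % plain gap when the index stays in range
else
    j = i - k + n;     % wrap around when i <= k
end
end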

The number of iterations for solving the system of absolute value equations (2.14) with different values of k is given in Table 4.3.

Table 4.3

n       k = 2    k = n/2    k = n - 1
5       4        3          5
10      4        3          5
50      6        3          6
100     7        4          7
200     7        4          8
400     9        5          9
800     10       5          10
1000    10       5          10

Table 4.3 shows that for the same vectors (columns of the identity matrix) in different combinations, Algorithm 4.1 converges to the solution of the system of absolute value equations (2.14) in different numbers of iterations.


Example 4.4. Let the matrix A be given by

a_{i,j} = 1000 + i for i = j, i = 1, 2, \ldots, n;  a_{i,j} = -1 for i > j, i = 2, 3, \ldots, n;  a_{i,j} = 1 for i < j, j = 2, 3, \ldots, n.
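A sketch constructing this matrix and the data of the example in Matlab; the size n = 1024 is only an illustrative value.

n = 1024;
A = triu(ones(n), 1) - tril(ones(n), -1) + diag(1000 + (1:n));
b = (A - eye(n)) * ones(n, 1);     % b = (A - I)e
x0 = 0.001 * (1:n)';               % initial guess used in the text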

Let b = (A - I)e, where I is the identity matrix of order n and e is the n \times 1 vector whose elements are all equal to unity, so that x = (1, 1, \ldots, 1)^T is the exact solution. The stopping criterion is \|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-6}. We choose the initial guess x_0 = (x_1, x_2, \ldots, x_n)^T with x_i = 0.001 i. The numerical results are shown in Table 4.4.

Table 4.4

        Algorithm 3.1                           Algorithm 4.1
n       No. of iterations    Error              No. of iterations    Error
4       3                    1.801 x 10^{-9}    2                    1.364 x 10^{-11}
8       3                    6.654 x 10^{-9}    3                    7.238 x 10^{-10}
16      3                    4.310 x 10^{-7}    3                    5.935 x 10^{-8}
32      4                    3.145 x 10^{-7}    3                    3.456 x 10^{-7}
64      5                    1.445 x 10^{-7}    3                    5.378 x 10^{-7}
128     6                    3.691 x 10^{-7}    4                    2.638 x 10^{-7}
256     7                    6.175 x 10^{-7}    5                    7.852 x 10^{-7}
512     11                   4.369 x 10^{-7}    6                    2.161 x 10^{-7}
1024    26                   6.205 x 10^{-7}    12                   3.241 x 10^{-7}

From Table 4.4, we conclude that the two-step Gauss-Seidel method is more efficient than the one-step Gauss-Seidel method. For large problem sizes, Algorithm 4.1 is about two times faster than Algorithm 3.1.


Chapter 5

Residual Iterative Method


Residual methods minimize the residual norm at each step and do not need matrix factorization. Paige and Saunders [72] proposed the minimal residual method for solving symmetric indefinite linear systems. Saad and Schultz [87] generalized the minimal residual method to nonsymmetric linear systems. Saad [85] discussed several residual iterative methods for solving systems of linear equations using a single search direction.

In this chapter, we suggest a residual iterative method for solving the system of absolute value equations (2.14). The residual iterative method is based on projection techniques. In the previous two chapters, we used the idea of minimization techniques for symmetric positive definite systems; here the symmetry condition is relaxed. To solve nonsymmetric positive definite systems of absolute value equations, we propose the residual iterative method. The residual method considers double search directions and minimizes the norm of the residual

\|b + |x| - Ax\|,

where A \in R^{n \times n} and b, x \in R^n. The convergence of the residual iterative method is considered under suitable conditions. The rate of convergence at each step depends on the choice of the search directions. We compare the residual iterative method with other methods.

Let M and N be the search subspace and the constraints subspace, respectively; let m be their dimension and x_0 \in R^n an initial guess. To find an approximate solution x \in R^n to (2.14), we use the projection method onto the subspace M and orthogonal to N: x belongs to the affine space x_0 + M and the new residual vector is orthogonal to N, that is:

find x \in x_0 + M such that b - (A - D(x))x \perp N,   (5.1)

where D(x) is the diagonal matrix corresponding to sign(x). For different choices of the subspace N, we obtain different iterative methods. Here we use the constraint space N = (A - D(x))M. The residual method approximates the solution of (2.14) by the vector x \in x_0 + M that minimizes the norm of the residual. Let sign(x) denote a vector with components equal to 1, 0, -1 depending on whether the corresponding component of x is positive, zero or negative. The diagonal matrix D(x) corresponding to sign(x) is defined as D(x) = \partial|x| = diag(sign(x)). We denote the following inner products:

a = \langle Cv_1, Cv_1 \rangle,
c = \langle Cv_1, Cv_2 \rangle,
d = \langle Cv_2, Cv_2 \rangle,
p_1 = \langle b - Ax_k + |x_k|, Cv_1 \rangle = \langle b - Cx_k, Cv_1 \rangle,
p_2 = \langle b - Ax_k + |x_k|, Cv_2 \rangle = \langle b - Cx_k, Cv_2 \rangle,   (5.2)

where 0 \neq v_1, v_2 \in R^n and C = A - D(x_k); we consider A such that C is a positive definite matrix, and note that D(x_k)x_k = |x_k|. We denote the kth residual by r_k, that is,

r_k = r(x_k) = b + |x_k| - Ax_k.

5.1 Residual Iterative Method

Consider the following sequence

x_{k+1} = x_k + \alpha v_1 + \beta v_2,  0 \neq v_1, v_2 \in R^n,  k = 0, 1, 2, \ldots   (5.3)

where v_1, v_2 \in R^n are arbitrary; these vectors can be chosen in different ways. To derive the residual method for solving the system of absolute value equations, in the first step we choose the subspace M_1 = span{v_1}, N_1 = span{Cv_1}, x_0 = x_k, and take D(x_{k+1}) = D(x_k), so that the residual can be written as

b - Ax_{k+1} + |x_{k+1}| = b - (A - D(x_{k+1}))x_{k+1}
                         = b - (A - D(x_k))x_{k+1}
                         = b - Cx_{k+1}.   (5.4)

Combining (5.4) and (5.1), we have


Find x_{k+1} \in x_k + M_1 such that b - Cx_{k+1} \perp N_1,   (5.5)

where x_{k+1} = x_k + \alpha v_1.

Equation (5.5) in terms of the inner product can be written as

\langle b - Cx_{k+1}, Cv_1 \rangle = 0,   (5.6)

that is,

\langle b - Cx_k - \alpha Cv_1, Cv_1 \rangle = \langle b - Cx_k, Cv_1 \rangle - \alpha \langle Cv_1, Cv_1 \rangle = p_1 - \alpha a = 0.   (5.7)

From (5.7) we have

\alpha = \frac{p_1}{a}.   (5.8)

The next step is to choose the subspace M_2 = span{v_2}, N_2 = span{Cv_2}, x_0 = x_{k+1}, and (5.1) can be written as

Find x_{k+1} \in x_{k+1} + M_2 such that b - Cx_{k+1} \perp N_2,   (5.9)

where x_{k+1} = x_{k+1} + \beta v_2 and b - Ax_{k+1} + |x_{k+1}| = b - Cx_{k+1} when D(x_{k+1}) = D(x_k).

Equation (5.9) can be written as

\langle b - Cx_{k+1}, Cv_2 \rangle = 0,

that is,

\langle b - Cx_k - \alpha Cv_1 - \beta Cv_2, Cv_2 \rangle = \langle b - Cx_k, Cv_2 \rangle - \alpha \langle Cv_1, Cv_2 \rangle - \beta \langle Cv_2, Cv_2 \rangle = p_2 - \alpha c - \beta d = 0.   (5.10)

From (5.8) and (5.10) we have

\beta = \frac{ap_2 - cp_1}{ad}.   (5.11)

We take v_1 = r_k, while v_2 \neq 0 may be chosen in different ways. The residual iterative method can be described as follows:


Algorithm 5.1.

Choose an initial guess x_0 \in R^n
For k = 0, 1, 2, \ldots
    r_k = b - Ax_k + |x_k|
    If r_k = 0, then stop; else
        \alpha_k = \frac{p_1}{a}
        \beta_k = \frac{ap_2 - cp_1}{ad}
        Set x_{k+1} = x_k + \alpha_k r_k + \beta_k v_2
        r_{k+1} = b - Ax_{k+1} + |x_{k+1}|
        if \|r_{k+1}\| < 10^{-6} then stop
    End if
End for k.
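A Matlab sketch of Algorithm 5.1 with v_1 = r_k and v_2 = x_{k-1} is given below; it assumes the initial guess is nonzero (for example x_0 with entries 0.001 i, as in Example 5.2) so that v_2 is admissible on the first step.

function x = residual_iter(A, b, x0, tol, maxit)
% Residual iterative method sketch (Algorithm 5.1) for A*x - |x| = b.
x = x0;
xprev = x0;                                % v2 = x_{k-1}
for k = 1:maxit
    r = b - A*x + abs(x);                  % r_k = b + |x_k| - A*x_k
    if norm(r) < tol, return, end
    C = A - diag(sign(x));                 % C = A - D(x_k)
    v1 = r;   v2 = xprev;
    Cv1 = C*v1;  Cv2 = C*v2;
    a = Cv1'*Cv1;  c = Cv1'*Cv2;  d = Cv2'*Cv2;   % inner products (5.2)
    p1 = r'*Cv1;   p2 = r'*Cv2;
    alpha = p1/a;                          % (5.8)
    beta  = (a*p2 - c*p1)/(a*d);           % (5.11)
    xprev = x;
    x = x + alpha*v1 + beta*v2;            % (5.3)
end
end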

If \beta = 0, then Algorithm 5.1 reduces to the minimal residual method. In section 5.2, we consider v_2 = x_{k-1} in the first two examples and v_2 = s_k (s_k is defined in section 5.2) in the last examples. The following result is needed for the convergence of Algorithm 5.1.

Theorem 5.1. Let x_k and r_k be generated by Algorithm 5.1. If D(x_{k+1}) = D(x_k), then we have

\|r_k\|^2 - \|r_{k+1}\|^2 = \frac{p_1^2}{a} + \frac{(ap_2 - cp_1)^2}{a^2 d},   (5.12)

where r_{k+1} = b - Ax_{k+1} + |x_{k+1}| and D(x_{k+1}) = diag(sign(x_{k+1})), k = 0, 1, 2, \ldots.

Proof. Using (5.3) in r_{k+1}, we have

r_{k+1} = b - Ax_{k+1} + |x_{k+1}|
        = b - (A - D(x_{k+1}))x_{k+1}
        = b - (A - D(x_k))x_{k+1}
        = b - (A - D(x_k))x_k - \alpha(A - D(x_k))v_1 - \beta(A - D(x_k))v_2
        = b - Ax_k + |x_k| - \alpha Cv_1 - \beta Cv_2
        = r_k - \alpha Cv_1 - \beta Cv_2.   (5.13)

Now consider

\|r_{k+1}\|^2 = \langle r_{k+1}, r_{k+1} \rangle
             = \langle r_k - \alpha Cv_1 - \beta Cv_2, r_k - \alpha Cv_1 - \beta Cv_2 \rangle
             = \langle r_k, r_k \rangle - 2\alpha \langle r_k, Cv_1 \rangle - 2\beta \langle r_k, Cv_2 \rangle + 2\alpha\beta \langle Cv_1, Cv_2 \rangle + \alpha^2 \langle Cv_1, Cv_1 \rangle + \beta^2 \langle Cv_2, Cv_2 \rangle
             = \|r_k\|^2 - 2\alpha p_1 - 2\beta p_2 + 2\alpha\beta c + \alpha^2 a + \beta^2 d.   (5.14)

From (5.8), (5.11) and (5.14) we have

\|r_{k+1}\|^2 = \|r_k\|^2 - \frac{p_1^2}{a} - \frac{(ap_2 - cp_1)^2}{a^2 d}.   (5.15)

Equation (5.15) can be written as

\|r_k\|^2 - \|r_{k+1}\|^2 = \frac{p_1^2}{a} + \frac{(ap_2 - cp_1)^2}{a^2 d}.   (5.16)

From (5.16) we have \|r_{k+1}\|^2 \le \|r_k\|^2, because

\frac{p_1^2}{a} + \frac{(ap_2 - cp_1)^2}{a^2 d} \ge 0

for arbitrary nonzero vectors v_1, v_2 \in R^n; therefore \alpha, \beta defined by (5.8) and (5.11) minimize the norm of the residual.

The iteration converges under the condition that C is positive definite, as stated in the next result.


Theorem 5.2. If C is a positive definite matrix, then the sequence defined by (5.3) converges to the solution of the system of absolute value equations (2.14).

Proof: From (5.16), with v_1 = r_k, we have

\|r_k\|^2 - \|r_{k+1}\|^2 \ge \frac{p_1^2}{a} = \frac{\langle r_k, Cr_k \rangle^2}{\langle Cr_k, Cr_k \rangle} \ge \frac{\lambda_{min}^2 \|r_k\|^4}{\lambda_{max}^2 \|r_k\|^2} = \frac{\lambda_{min}^2}{\lambda_{max}^2}\|r_k\|^2.

Clearly the sequence \|r_k\|^2 is decreasing and bounded, so it is convergent; the successive differences then tend to zero, which by the above inequality forces \|r_k\|^2 to tend to zero, and hence the result.

5.2 Numerical Results

In this section, we consider several examples and a comparison is given. The convergence of the residual method is guaranteed for positive definite systems; in most cases, however, the residual method is also applicable to systems which are not positive definite.

Example 5.1. Consider the second-order BVP of the type

\frac{d^2 x}{dt^2} - |x| = 1 - t^2,  0 \le t \le 1,  x(0) = -1,  x(1) = 0.   (5.17)

We discretize the above equation using the finite difference method to obtain the system of absolute value equations of the type (2.14). The matrix A \in R^{10 \times 10} is given by

a_{i,j} = 242 for j = i;  a_{i,j} = -121 for j = i + 1, i = 1, 2, \ldots, n - 1 and for j = i - 1, i = 2, 3, \ldots, n;  a_{i,j} = 0 otherwise.

The constant vector b is given by

b = (121.9917, 0.9669, 0.9256, 0.8678, 0.7934, 0.7025, 0.5950, 0.4710, 0.3306, 0.1736)^T.


The exact solution is

x(t) = 0.1915802528 \sin t - 4\cos t + 3 - t^2,  x < 0,
x(t) = -1.462117157 e^t - 0.5378828428 e^{-t} + t^2 + 1,  x > 0.

The initial guess is chosen as x_0 = (1, 1, \ldots, 1)^T. The accuracy index is 10^{-12}. The comparison is given in Figure 5.1.

Figure 5.1
Efficiency of the residual iterative method: comparison of Algorithm 3.1, Algorithm 4.1 and Algorithm 5.1 (number of iterations versus the 2-norm of the residual).

The numbers of iterations for solving the system of absolute value equations (2.14) using Algorithm 5.1, Algorithm 4.1 and Algorithm 3.1 are 51, 143 and 456, respectively. This result shows that the residual iterative method converges faster than Algorithm 3.1 and Algorithm 4.1.

Example 5.2. Let the matrix A be given by

a_{i,j} = 1000 + i for i = j, i = 1, 2, \ldots, n;  a_{i,j} = -1 for i > j, i = 2, 3, \ldots, n;  a_{i,j} = 1 for i < j, j = 2, 3, \ldots, n.


Let b = (A - I)e, where I is the identity matrix of order n and e is the n \times 1 vector whose elements are all equal to unity, so that x = (1, 1, \ldots, 1)^T is the exact solution. The stopping criterion is \|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-10}. We choose the initial guess x_0 = (x_1, x_2, \ldots, x_n)^T with x_i = 0.001 i. The number of iterations for each method is given in Table 5.1.

Table 5.1

Order    Algorithm 3.1    Algorithm 4.1    Algorithm 5.1
4        4                3                3
8        4                4                3
16       4                4                4
32       5                5                4
64       6                6                5
128      8                7                6
256      10               9                8
512      16               14               12

From Table 5.1, we see that Algorithm 5.1 solves the system of absolute value equations (2.14) in a few iterations. The residual iterative method converges rapidly to the approximate solution of the system of absolute value equations (2.14).

In the next examples, we consider v_2 = s_k, where s_k is defined as:

g(x_k) = (A - D(x_k))^T (Ax_k - |x_k| - b),
H_k = \left((A - D(x_k))^T (A - D(x_k))\right)^{-1},
s_k = -H_k g(x_k).
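In Matlab, this direction can be computed in a few lines, mirroring the direction of Algorithm 3.2; here x denotes the current iterate x_k.

D  = diag(sign(x));               % D(x_k)
C  = A - D;                       % A - D(x_k)
g  = C' * (A*x - abs(x) - b);     % g(x_k)
sk = -((C' * C) \ g);             % s_k = -H_k*g(x_k) via a linear solve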

We compare Algorithm 5.1 with the particle swarm optimization (PSO) method by Yong [100] in the next example.


Example 5.3 [100]. Consider the random matrix A and vector b in Matlab code as:

n=input('dimension of matrix A=');

rand('state',0);

R=rand(n, n);

b=rand(n, 1);

A=R'*R+n*eye(n);

with a random initial guess. The stopping criterion is \|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-12}. The comparison between the residual iterative method and the PSO method [100] is presented in Table 5.2:

Table 5.2

         PSO method                   Algorithm 5.1
Order    No. of iterations    TOC     No. of iterations    TOC
4        2                    2.230   2                    0.006
8        2                    3.340   2                    0.022
16       3                    3.790   2                    0.025
32       2                    4.120   2                    0.053
64       3                    6.690   2                    0.075
128      3                    12.450  2                    0.142
256      3                    34.670  2                    0.201
512      5                    76.570  3                    1.436
1024     5                    157.12  2                    6.604

From Table 5.2, we conclude that for large problems Algorithm 5.1 is faster than the PSO method [100]. Algorithm 5.1 converges to the exact solution of the system of absolute value equations (2.14) in most cases.


Caccetta et al. [14] proposed a globally and quadratically convergent (GQC) method for solving the system of absolute value equations (2.14). In the next example, we consider 100 problems and compare Algorithm 5.1 with the GQC method [14].

Example 5.4 [14]. Let a random matrix A \in R^{1000 \times 1000} be chosen from a uniform distribution on [-10, 10]. A random x \in R^n is chosen from a uniform distribution on [-1, 1]. The constant vector is computed as b = Ax - |x|. We divide the 100 problems into 10 groups, each containing an equal number of problems (instances). The stopping criterion is \|Ax_{k+1} - |x_{k+1}| - b\|_2 \le 10^{-6}. We observed the following:

(i) 99 instances are solved to the accuracy 10^{-6}.

(ii) The average number of iterations per instance is 5.04.

(iii) The average time for solving each problem is 8.27 seconds.

Computational results are given in Table 5.3.

Table 5.3

Instances    Tnve    Nvei    No. of iterations    TOC (seconds)
1-10         0       0       45                   78.26
11-20        0       0       46                   77.58
21-30        0       0       41                   85.50
31-40        0       0       49                   83.56
41-50        1       1       60                   78.70
51-60        0       0       50                   81.62
61-70        0       0       46                   90.25
71-80        0       0       61                   83.78
81-90        0       0       55                   89.68
91-100       0       0       51                   78.53


In Table 5.3, Tnve and Nvei denote the total number of violated equations in each group of 10 problems and per individual problem, respectively. The time taken for each group is denoted by TOC. We now compare the residual iterative method with the GQC method [14] in Table 5.4:

Table 5.4

Problems                      GQC method    Algorithm 5.1
Problem size                  1000          1000
Number of problems solved     97            99
Total number of iterations    532           504
Accuracy                      10^{-6}       10^{-6}

In Table 5.4, we summarize the comparison of Algorithm 5.1 with the GQC method [14]. Our method solved 99 problems out of 100, while the GQC method solved 97. The number of iterations needed by Algorithm 5.1 for solving the systems of absolute value equations (2.14) is less than that of the GQC method [14]. Hence Algorithm 5.1 is more efficient than the GQC method [14].

In the next example, we compare Algorithm 5.1 with the interval algorithm for absolute value equations by Wang et al. [99].

Example 5.5 [99]. Consider the matrix A in Matlab code as:

A = round(100*(eye(n,n) - 0.02*(2*rand(n,n) - 1)));

We computed b = Ax - |x|, where the vector x is chosen as

x = rand(n,1) - rand(n,1).

The computational results are given in Table 5.5.


Table 5.5

n       Interval Algorithm    Algorithm 5.1
10      2                     2
50      3                     2
500     5                     2
1000    6                     3
2000    6                     3

From Table 5.5, Algorithm 5.1 requires fewer iterations to approximate the solution of (2.14) than the interval algorithm. We conclude that for large problems Algorithm 5.1 is more efficient than the Interval Algorithm [99].


Chapter 6

Quasi Newton Method


Quasi-Newton methods have been used to solve nonlinear equations and systems of linear and nonlinear equations. In these methods, one uses a positive definite matrix to approximate the Hessian (or its inverse), which saves the work of computing exact second derivatives. The first quasi-Newton method was suggested by W. C. Davidon [20] in 1959. Different updating formulas for the Hessian matrix have been proposed; the well-known updating formula was developed by Fletcher and Powell [24]. Shi [90] modified the quasi-Newton method for solving systems of linear equations with double and triple search directions.

In this chapter, we develop and analyze a quasi-Newton method for solving the generalized system of absolute value equations Ax + B|x| = b. This method is also based on minimization techniques. The quasi-Newton method can be used for solving the system of absolute value equations (2.14) by taking B = -I, where I denotes the identity matrix. In chapters 3, 4 and 5, we used a positive definite matrix to propose the methods; here we consider a full rank matrix instead. The quasi-Newton method can solve a wide range of systems of absolute value equations.

6.1 Iterative Method

In this section, we discuss our main result, which is based on minimization techniques. Let x \in R^n; sign(x) will denote a vector with components equal to 1, 0, -1 depending on whether the corresponding component of x is positive, zero or negative. The diagonal matrix D corresponding to sign(x) and the square matrix C are defined as

D = \partial|x| = diag(sign(x)),

where \partial|x| represents the generalized Jacobian of |x| based on a subgradient [74, 77], and C = A + BD. We consider matrices A and B such that Rank(C) = Rank(A + BD) = n for each D, and A^T denotes the transpose of the matrix A. We use a full rank matrix, which is a more general notion than a positive definite matrix: every positive definite matrix has full rank, but the converse is not true. For A, B \in R^{n \times n} and b \in R^n, we consider the function

f(x) = \frac{1}{2}\langle Ax + B|x| - b, Ax + B|x| - b \rangle,  x \in R^n.   (6.1)

In the next result, we minimize (6.1) to find the exact line search. This result is the basic tool in the development of the quasi-Newton method.

Theorem 6.1. If Rank(C) = Rank(A + BD) = n, then x \in R^n is the solution of the system of absolute value equations

Ax + B|x| = b,

if and only if x \in R^n is the minimum of the function f(x), where f(x) is defined by (6.1).

Proof. Let x, v \in R^n and let \alpha be a real variable. Using the Taylor series, we have

f(x + \alpha v) = f(x) + \alpha \langle f'(x), v \rangle + \frac{\alpha^2}{2}\langle f''(x)v, v \rangle.   (6.2)

From (6.1), we have

f'(x) = (A + BD)^T (Ax + B|x| - b),
f''(x) = (A + BD)^T (A + BD).   (6.3)

We also note that \partial|x| x = |x|, where \partial|x| denotes the subgradient of |x|, see Mangasarian [50].

From (6.2) and (6.3), we get

f(x + \alpha v) = f(x) + \alpha \langle Ax + B|x| - b, Cv \rangle + \frac{\alpha^2}{2}\langle Cv, Cv \rangle.   (6.4)

It is clear that f(x + \alpha v) has its minimum at

\alpha = -\frac{\langle Ax + B|x| - b, Cv \rangle}{\langle Cv, Cv \rangle}.   (6.5)

Putting the value of \alpha in (6.4), we have

f(x + \alpha v) = f(x) - \frac{\langle Ax + B|x| - b, Cv \rangle^2}{\langle Cv, Cv \rangle} + \frac{1}{2}\frac{\langle Ax + B|x| - b, Cv \rangle^2}{\langle Cv, Cv \rangle}
               = f(x) - \frac{1}{2}\frac{\langle Ax + B|x| - b, Cv \rangle^2}{\langle Cv, Cv \rangle} \le f(x).   (6.6)

Thus x minimizes f. On the other hand, suppose that x is a vector that minimizes f. Then for any vector v, we have f(x + \alpha v) \ge f(x). Thus \langle Ax + B|x| - b, Cv \rangle = 0. This implies that Ax + B|x| = b, which shows that x is the solution of (2.10).

The above result suggests the following algorithm.

Algorithm 6.1.

Choose an initial guess x_1 \in R^n, a symmetric and positive definite matrix H_1 \in R^{n \times n} and A, B \in R^{n \times n}.
For k = 1, 2, \ldots until convergence do
    g_k = f'(x_k) and direction s_k = -H_k g_k
    D(x_k)x_k = |x_k|,  C = A + BD(x_k)
    \alpha_k = -\frac{\langle Ax_k + B|x_k| - b, Cs_k \rangle}{\langle Cs_k, Cs_k \rangle}
    x_{k+1} = x_k + \alpha_k s_k
    \delta_k = x_{k+1} - x_k
    \gamma_k = (A + BD(x_{k+1}))^T (Ax_{k+1} + B|x_{k+1}| - b) - (A + BD(x_k))^T (Ax_k + B|x_k| - b)
    H_{k+1} = H_k + \left(1 + \frac{\gamma_k^T H_k \gamma_k}{\delta_k^T \gamma_k}\right)\frac{\delta_k \delta_k^T}{\delta_k^T \gamma_k} - \frac{\delta_k \gamma_k^T H_k + H_k \gamma_k \delta_k^T}{\delta_k^T \gamma_k}
    Stopping criteria
End for k.

Algorithm 6.1 is called the quasi-Newton method. We remark that in [22, 23] different formulas for updating H_k are discussed, and for different H_k we have different quasi-Newton methods. In Algorithm 6.1, we use the BFGS formula for updating H_k, a member of the Broyden family:

H_{k+1} = H_k + \left(1 + \frac{\gamma_k^T H_k \gamma_k}{\delta_k^T \gamma_k}\right)\frac{\delta_k \delta_k^T}{\delta_k^T \gamma_k} - \frac{\delta_k \gamma_k^T H_k + H_k \gamma_k \delta_k^T}{\delta_k^T \gamma_k},  k = 1, 2, \ldots   (6.7)

We take the initial matrix H_1 as the n \times n identity matrix. The Broyden methods are quite effective, as stated in the next result.

Theorem 6.2 [22]. A Broyden method with exact line searches terminates after m \le n iterations on a quadratic function.

In Theorem 6.2, exact line search means that \alpha_k is the exact minimum of f(x + \alpha v) defined by (6.4).
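A Matlab sketch of Algorithm 6.1 with the BFGS update (6.7) is given below; the function name qn_ave and the stopping tolerance are illustrative assumptions, and the sketch assumes \delta_k^T \gamma_k does not vanish.

function x = qn_ave(A, B, b, x0, tol, maxit)
% Quasi-Newton (BFGS) sketch for the generalized AVE A*x + B*|x| = b.
n = length(b);
x = x0;
H = eye(n);                                   % H_1 = identity
C = A + B*diag(sign(x));                      % C = A + B*D(x)
g = C' * (A*x + B*abs(x) - b);                % gradient f'(x)
for k = 1:maxit
    s = -H * g;                               % direction s_k = -H_k*g_k
    F = A*x + B*abs(x) - b;
    alpha = -(F' * (C*s)) / ((C*s)' * (C*s)); % exact line search (6.5)
    xn = x + alpha * s;
    Cn = A + B*diag(sign(xn));
    gn = Cn' * (A*xn + B*abs(xn) - b);
    delta = xn - x;  gamma = gn - g;
    Hg = H * gamma;  dg = delta' * gamma;
    H = H + (1 + (gamma'*Hg)/dg) * (delta*delta')/dg ...
          - (delta*Hg' + Hg*delta')/dg;       % BFGS update (6.7)
    x = xn;  g = gn;  C = Cn;
    if norm(A*x + B*abs(x) - b) < tol, return, end
end
end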

6.2 Numerical Results

We consider several numerical examples to illustrate the implementation and efficiency of the proposed method, and we compare our method with existing methods. We suggest the quasi-Newton method for solving the generalized system of absolute value equations, but it also works efficiently for solving the system of absolute value equations of the type (2.14). All computations are done using Matlab 7.


Example 6.1. Consider the matrices

A = (4.45  5.12  1.34;  3.56  5.10  9.51;  8.76  5.23  7.43),
B = (3.12  9.17  8.09;  -1.66  -6.30  -5.13;  5.07  4.23  1.30),
b = (21.78, -18.12, 14.58)^T.

For different initial guesses (only changing the sign of the components of the initial guess) we have the following solutions of the generalized system of absolute value equations:

x = (2.7942, 0.4625, 2.7982)^T  and  x = (-0.5340, 1.0713, -1.9851)^T.

These are all the possible solutions, computed with accuracy 10^{-10}.

Example 6.2 [80]. Consider the matrices

A

− − − − − −

− − − −

− − − −

= −

0.1479 0.5985 0.2265 0.2292 0.2426 0.4978 0.4772

0.3503 0.7914 0.8554 0.2560 0.4149 0.3221 0.5674

0.8144 0.8176 0.9111 0.9181 0.1953 0.9376 0.0201

0.1143 − − − −

− −

− − − −

0.8706 0.1203 0.5198 0.6242 0.7633 0.1536

0.7850 0.7964 0.6195 0.5218 0.9041 0.7736 0.9708

0.4198 0.5983 0.9180 0.5057 0.6677 0.1967 0.0734

0.1962 0.62

− − − 55 0.3860 0.1035 0.4396 0.7893 0.9860

0.8464 0.5703 0.9208 0.0867 0.2831 0.9318 0.8203

0.7984 0.3861 0.1074 0.1288 0.8478 0.8475 0.8466

0.3445 0.4156 0.7606 0.4585 0.9195 0.0428

B

− − − −

− − −

=

0.0485

0.1394 0.8962 0.2990 0.2622 0.6214 0.5709 0.1978

0.8221 0.1798 0.2713 0.9308 0.9663 0.9149 0.0731

0.8508 0.2720 0.7906 0.8783 0.5006 0.9402 0.6437

0.7

− − − − − −

− − −

− − − −

253 0.0865 0.5792 0.1374 0.0348 0.4932 0.2036

− − −


( ) .0.6525, 0.3719, 0.6019, 0.3199, 0.2327, 0.3168, 0.5135Tb = − − −

For different initial guesses (only changing the sign of the components of the initial guess) we have the following solutions of the generalized system of absolute value equations:

. . .

. . .

. . .

. . .

. . .

. . .

. . .

x =

− −

− −

− − −

− −

0.2842 1 9010 0 1483 0 6611

0.2852 0 3674 0 4042 0 5319

0.0841 0 7374 0 7864 0 5818

0.0106 2 2564 0 1355 0 9468

0.2235 1 0900 0 1988 0 3509

0.0125 0 4787 0 3220 0 2792

0.0045 0 8550 0 2678 0 6273

. . .

. . .

. . .

. . .

. . .

. . .

. . .

.

.

.

.

.

− −

− −

− −

− − −

4 3204 1 8890 0 2118

0 2405 0 4361 0 3700

1 2114 0 3517 0 1697

6 0074 2 5070 0 0233

2 2160 0 7772 0 2024

0 1360 0 0890 0 0745

2 8807 1 2930 0 0600

0.1583

0.3712

0 1643

0 1678

0 1048

0 0477

0 0

. .

. .

. .

. .

. .

. .

. .

− −

− −

0 1048 0 2798

0 3815 0 2885

0 2708 0 0792

0 2813 0 0201

0 2570 0 2208

0 0703 0 0114

900 0 1583 0 0032

The quasi-Newton method gave the same solutions of the generalized system of absolute value equations as computed by Rohn in [80].

In the next example, we consider the system of absolute value equations of the type

Ax - |x| = b,

which is a special case of the generalized system of absolute value equations.

Example 6.3. Consider the second-order BVP of the type

\frac{d^2 x}{dt^2} - |x| = 1 - t^2,  0 \le t \le 1,  x(0) = -1,  x(1) = 0.   (6.8)

In this example we try to solve


Ax - |x| = b

using the quasi-Newton method, taking B = -I, the negative of the identity matrix of order n. The matrix A \in R^{10 \times 10} is given by

a_{i,j} = 242 when j = i;  a_{i,j} = -121 when j = i + 1, i = 1, 2, \ldots, n - 1 and when j = i - 1, i = 2, 3, \ldots, n;  a_{i,j} = 0 otherwise.

The constant vector b is given by

b = (121.9917, 0.9669, 0.9256, 0.8678, 0.7934, 0.7025, 0.5950, 0.4710, 0.3306, 0.1736)^T.

The exact solution of (6.8) is

x(t) = 0.1915802528 \sin t - 4\cos t + 3 - t^2,  x < 0,
x(t) = -1.462117157 e^t - 0.5378828428 e^{-t} + t^2 + 1,  x > 0.

We solve the system of absolute value equations (2.14) using the quasi-Newton method and compare it with Algorithm 3.1, Algorithm 4.1 and Algorithm 5.1. Algorithm 3.1 solves the system of absolute value equations (2.14) in 456 iterations, Algorithm 4.1 in 143 iterations, Algorithm 5.1 in 51 iterations and Algorithm 6.1 in just 10 iterations. In Figure 6.1, we take the number of iterations on the x-axis and the 2-norm of the residual

\|b + |x| - Ax\|_2

on the y-axis. The accuracy index is 10^{-13}.


Figure 6.1
Comparison of the quasi-Newton method with the other methods (Algorithm 3.1, Algorithm 4.1, Algorithm 5.1 and Algorithm 6.1; number of iterations versus the 2-norm of the residual).

From Figure 6.1, we see that the quasi-Newton method is quite effective for solving the system of absolute value equations as compared to the other methods.

In the next example, we compare Algorithm 6.1 with the concave minimization (CM) method [49] and the globally and quadratically convergent (GQC) method [14].

Example 6.4 [49]. Let a random matrix A \in R^{1000 \times 1000} be chosen from a uniform distribution on [-10, 10]. A random x \in R^{1000} is chosen from [-1, 1]. The constant vector is computed as b = Ax - |x|. We take m consecutively generated solvable random problems and divide the 100 problems into 10 groups, each containing an equal number of problems (instances), allowing 50 iterations per instance. The stopping criterion is \|Ax_{k+1} - |x_{k+1}| - b\| \le 10^{-6}. We observe the following:

(i) All 100 problems are solved to the accuracy 10^{-8}.
(ii) The average number of iterations per problem is 4.58.
(iii) The average time taken to solve each problem is 44.65 seconds.


Computational results are given in Table 6.1.

Table 6.1

Instances    Tnve    Nvei    No. of iterations    TOC (seconds)
1-10         0       0       45                   474.06
11-20        0       0       44                   477.32
21-30        0       0       43                   374.03
31-40        0       0       49                   508.85
41-50        0       0       50                   477.73
51-60        0       0       46                   505.45
61-70        0       0       45                   385.40
71-80        0       0       46                   383.78
81-90        0       0       41                   380.68
91-100       0       0       49                   497.83

In Table 6.1, Tnve and Nvei denote the total number of violated equations in each group of 10 problems and per individual problem, respectively.

We now compare the quasi-Newton method with the CM method by Mangasarian [49], which solved 95 instances out of 100, and with the GQC method [14], in Table 6.2:

Table 6.2

Problems with svd(A) > 1      CM         GQC        Algorithm 6.1
Problem size                  1000       1000       1000
Number of problems solved     95         97         100
Total number of iterations    481        506        458
Accuracy                      10^{-6}    10^{-6}    10^{-8}


From Table 6.2, we see that the numbers of problems solved by the quasi-Newton method and the GQC method [14] are 100 and 97, respectively, while Mangasarian's CM method [49] solved 95 out of 100 problems. The numbers of iterations to achieve the given accuracy for the quasi-Newton method, the GQC method [14] and the CM method [49] are 458, 506 and 481, respectively. Hence the quasi-Newton method is more efficient than both methods for solving the system of absolute value equations (2.14).

In the next example, we compare Algorithm 6.1 with the primal-dual bilinear programming (PDBP) method [52].

Example 6.5 [52]. Let a random matrix A \in R^{1000 \times 1000} be chosen from a uniform distribution on [-5, 5]. A random x \in R^{1000} is chosen from [-0.5, 0.5]. The constant vector is computed as b = Ax - |x|. The computational results are given in Table 6.3.

Table 6.3

         PDBP method                           Quasi-Newton method
Order    No. of problems    Average No. of     No. of problems    Average No. of
         solved             iterations         solved             iterations
10       92                 6.29               94                 5.6
50       93                 9.15               96                 6.58
100      91                 9.38               99                 6.77
500      85                 8.68               99                 7.00
1000     90                 7.91               99                 7.45

Table 6.3 illustrates the efficiency of the quasi-Newton method. Our method solves more problems than the PDBP method [52] in fewer iterations.

Example 6.6 [100]. Consider the random matrix A and vector b generated in Matlab as:

n = input('dimension of matrix A = ');
rand('state', 0);
R = rand(n, n);
b = rand(n, 1);
A = R'*R + n*eye(n);

with a random initial guess. We use $\|Ax - |x| - b\|_2 < 10^{-13}$ as the stopping criterion. The comparison between the quasi-Newton method and the particle swarm optimization (PSO) method of Yong [100] is presented in Table 6.4.

Table 6.4

        PSO method         Algorithm 6.1
Order   NI      TOC        NI      TOC
4       2       2.230      2       0.011
8       2       3.340      2       0.016
16      3       3.790      2       0.072
32      2       4.120      2       0.092
64      3       6.690      2       0.095
128     3       12.450     2       0.388
256     3       34.670     2       0.401
512     5       76.570     3       1.590
1024    5       157.12     2       7.851

Algorithm 6.1 and the PSO method [100] converge after the same number of iterations for small problems, while Algorithm 6.1 performs better than the PSO method [100] for large problems.


Chapter 7

Homotopy Perturbation Method


The homotopy perturbation method (HPM) was first proposed by He [29]. Keramati [40] and Yusufoglu [104] used the HPM for solving linear systems, and Liu [46] presented the HPM as an iterative method for solving systems of linear equations and proved that homotopy iterative methods converge more rapidly than stationary iterative methods for solving linear systems. Noor [64] introduced an auxiliary parameter for rapid convergence of the solution series.

In this chapter, we suggest the HPM for solving the system of absolute value equations defined by (2.14). In the previous chapters, we assumed $D(x_{k+1}) = D(x_k)$ for the convergence of the algorithms. Here we relax this condition and develop the method in Section 7.2. Using the HPM, we suggest the Jacobi method, the Gauss-Seidel method and the SOR method for solving the system of absolute value equations (2.14).

For $x \in R^n$, $\mathrm{sign}(x)$ will denote the vector with components equal to $1$, $0$ or $-1$, depending on whether the corresponding component of $x$ is positive, zero or negative. The diagonal matrix $D(x)$ is defined as
$$D(x) = \partial |x| = \mathrm{diag}(\mathrm{sign}(x)),$$
where $\partial |x|$ represents the generalized Jacobian of $|x|$ based on a subgradient, see [74, 77].

7.1 Homotopy Perturbation Method

We apply the HPM to the system of absolute value equations of the form
$$Ax - |x| = b,$$
where $A \in R^{n \times n}$, $x \in R^n$ is the unknown and $b \in R^n$ is a constant vector. We can rewrite (2.14) in the form
$$L(x) - N(x) = b, \qquad (7.1)$$
where $L(x)$ and $N(x)$ are the linear and nonlinear operators, respectively, that is,
$$L(x) = Ax, \qquad N(x) = |x|, \qquad N'(x) = \partial |x| = D(x). \qquad (7.2)$$


Since $D(x) = \mathrm{diag}(\mathrm{sign}(x))$ is piecewise constant, we have $N''(x) = 0$.

We define the homotopy by
$$H(x, p) = (1 - p)F(x) + p\,(L(x) - N(x) - b) = 0, \qquad (7.3)$$
where $p \in [0, 1]$ and $F(x) = Ex - w_0$, with $E \in R^{n \times n}$ nonsingular and $w_0$ the initial approximation. From (7.3) we have
$$H(x, 0) = F(x) = 0, \qquad H(x, 1) = L(x) - N(x) - b = 0.$$
Here we are free to choose the auxiliary operator $F(x)$, see [46]. If the parameter $p$ tends to one, then (7.3) converges to the original problem $L(x) - N(x) - b = 0$. The basic assumption is that the solution of (7.3) can be expressed as
$$y = x_0 + p x_1 + p^2 x_2 + \cdots. \qquad (7.4)$$

The approximate solution of (2.14) is obtained as
$$x = \lim_{p \to 1} y = x_0 + x_1 + x_2 + \cdots. \qquad (7.5)$$

Using the Taylor series, we have
$$N(x) = N(x_0) + N'(x_0)(x - x_0). \qquad (7.6)$$

From (7.4), (7.6) and (7.3) we have
$$(1 - p)\big(E(x_0 + p x_1 + p^2 x_2 + \cdots) - w_0\big) + p\big(A(x_0 + p x_1 + p^2 x_2 + \cdots) - N(x_0) - N'(x_0)(x_0 + p x_1 + p^2 x_2 + \cdots - x_0) - b\big) = 0. \qquad (7.7)$$
Equating the terms with identical powers of $p$, we have


$$\begin{aligned}
p^0 &: \; E x_0 = w_0,\\
p^1 &: \; E x_1 = -A x_0 + N(x_0) + b + E x_0 - w_0,\\
p^2 &: \; E x_2 = -A x_1 + E x_1 + N'(x_0)\,x_1,\\
p^3 &: \; E x_3 = -A x_2 + E x_2 + N'(x_0)\,x_2,\\
&\;\;\vdots\\
p^{n+1} &: \; E x_{n+1} = -A x_n + E x_n + N'(x_0)\,x_n.
\end{aligned} \qquad (7.8)$$

From (7.8) we have

$$\begin{aligned}
p^0 &: \; x_0 = E^{-1} w_0,\\
p^1 &: \; x_1 = E^{-1}\big(D(x_0) - A\big)x_0 + E^{-1} b,\\
p^2 &: \; x_2 = E^{-1}\big(-A x_1 + E x_1 + N'(x_0)\,x_1\big) = \big(I - E^{-1}(A - D(x_0))\big)x_1,\\
p^3 &: \; x_3 = \big(I - E^{-1}(A - D(x_0))\big)x_2,\\
&\;\;\vdots\\
p^{n+1} &: \; x_{n+1} = \big(I - E^{-1}(A - D(x_0))\big)x_n.
\end{aligned} \qquad (7.9)$$

Taking $w_0 = b$ in (7.9), we have
$$x_0 = E^{-1} b, \qquad x_k = \big(I - E^{-1}(A - D(x_0))\big)^k x_0, \quad k = 1, 2, 3, \ldots, \qquad (7.10)$$
where $I$ is the identity matrix of order $n$.

From (7.10), the solution $y$ of (2.14) can be written as
$$y = x_0 + x_1 + x_2 + \cdots = \sum_{k=0}^{\infty} \big(I - E^{-1}(A - D(x_0))\big)^k x_0. \qquad (7.11)$$


Using the technique of Keramati [40] one can prove the convergence of (7.11). However,

we include all the details to convey the main idea and the significant modifications.
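To make the scheme concrete, the following Matlab sketch accumulates the partial sums of the series (7.11) using the recurrence in (7.10). It is our own illustration under the stated assumptions: the function name hpm_ave, the tolerance and the iteration limit are choices of ours, and E is any nonsingular splitting matrix.

function y = hpm_ave(A, b, E, tol, maxit)
% Homotopy perturbation iteration (7.10)-(7.11) for Ax - |x| = b.
x = E \ b;                        % x_0 = E^{-1} b  (taking w_0 = b)
D0 = diag(sign(x));               % D(x_0), kept fixed during the iteration
y = x;
for k = 1:maxit
    x = x - E \ ((A - D0) * x);   % x_k = (I - E^{-1}(A - D(x_0))) x_{k-1}
    y = y + x;                    % partial sum of the series (7.11)
    if norm(x) < tol, break; end  % terms decay when Theorem 7.1 applies
end
end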

7.2 Convergence Analysis

Theorem 7.1. The sequence
$$x_m = \sum_{k=0}^{m} \big(I - E^{-1}(A - D(x_0))\big)^k x_0$$
is a Cauchy sequence if $\big\|I - E^{-1}(A - D(x_0))\big\| < 1$.

Proof. We have to show that
$$\lim_{m \to \infty} \|x_{m+p} - x_m\| = 0.$$

Consider
$$\|x_{m+p} - x_m\| = \Big\|\sum_{k=0}^{m+p} \big(I - E^{-1}(A - D(x_0))\big)^k x_0 - \sum_{k=0}^{m} \big(I - E^{-1}(A - D(x_0))\big)^k x_0\Big\| \le \sum_{k=m+1}^{m+p} \big\|I - E^{-1}(A - D(x_0))\big\|^k \,\|x_0\|.$$

Let $\alpha = \big\|I - E^{-1}(A - D(x_0))\big\|$. Then
$$\|x_{m+p} - x_m\| \le \|x_0\| \sum_{k=m+1}^{m+p} \alpha^k = \alpha^{m+1}\,\frac{1 - \alpha^p}{1 - \alpha}\,\|x_0\|.$$

If $\alpha < 1$, then we have
$$\lim_{m \to \infty} \|x_{m+p} - x_m\| \le \lim_{m \to \infty} \alpha^{m+1}\,\frac{1 - \alpha^p}{1 - \alpha}\,\|x_0\| = 0,$$
hence we obtain $\lim_{m \to \infty} \|x_{m+p} - x_m\| = 0$.

This completes the proof.
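Before iterating, the sufficient condition of Theorem 7.1 can be checked numerically; a short sketch of ours, using the matrix 2-norm computed by Matlab's norm:

x0 = E \ b;
alpha = norm(eye(length(b)) - E \ (A - diag(sign(x0))));
if alpha < 1
    disp('Theorem 7.1 guarantees convergence of the series (7.11).');
end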


7.3 Iterative Methods

The splitting matrix $E$ may be chosen in different ways. We decompose the matrix $A$ as
$$A = D - L - U,$$
where $D$ is the diagonal matrix and $L$ and $U$ are strictly lower and strictly upper triangular matrices, respectively. Now consider the following three cases:

(i) If $E = D$, then the sequence (7.10) can be written as
$$x_0 = D^{-1} b, \qquad x_k = \big(I - D^{-1}(A - D(x_0))\big)^k x_0, \quad k = 1, 2, 3, \ldots.$$
This method is called the Jacobi method.

(ii) If $E = D - L$, then the sequence (7.10) is called the Gauss-Seidel method and has the form
$$x_0 = (D - L)^{-1} b, \qquad x_k = \big(I - (D - L)^{-1}(A - D(x_0))\big)^k x_0, \quad k = 1, 2, 3, \ldots.$$

(iii) If $E = D - \omega L$, then the sequence (7.10) is of the form
$$x_0 = (D - \omega L)^{-1} b, \qquad x_k = \big(I - (D - \omega L)^{-1}(A - D(x_0))\big)^k x_0, \quad k = 1, 2, 3, \ldots.$$
This method is known as the SOR method.
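All three splittings can be run through the hpm_ave sketch given in Section 7.1 (again an illustration of ours; the value of omega is an assumed example):

D = diag(diag(A));
L = -tril(A, -1);                        % A = D - L - U, so L = -tril(A,-1)
omega = 1.1;                             % example relaxation parameter
xJ   = hpm_ave(A, b, D,           1e-6, 100);   % Jacobi
xGS  = hpm_ave(A, b, D - L,       1e-6, 100);   % Gauss-Seidel
xSOR = hpm_ave(A, b, D - omega*L, 1e-6, 100);   % SOR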

7.4 Numerical Results

In this section, we consider several examples to illustrate the implementation and efficiency of the proposed methods. We find that the homotopy iterative methods are more efficient for solving the system of absolute value equations (2.14). We also compare our method with Algorithm 3.1 and Algorithm 4.1. All the computations are done using Matlab 7.

Example 7.1. Let the matrix $A$ be given by
$$a_{ij} = \begin{cases} 4n, & \text{for } j = i,\\ 1, & \text{for } j = i + 1, \ i = 1, 2, \ldots, n - 1, \text{ and } j = i - 1, \ i = 2, 3, \ldots, n,\\ 0, & \text{otherwise.} \end{cases}$$
Let the constant vector $b$ be chosen randomly. The stopping criterion is $\|x_{k+1} - x_k\| < 10^{-6}$. The comparison is given in Table 7.1.

Table 7.1

Order   Jacobi method   Gauss-Seidel method   SOR method
2       8               7                     5
4       8               7                     5
8       9               7                     6
16      10              8                     6
32      10              8                     6
64      12              9                     7
128     12              9                     7
256     13              11                    8
512     15              12                    10

From Table 7.1, we conclude that the SOR method converges to the solution of the system of absolute value equations (2.14) in fewer iterations than the Jacobi and Gauss-Seidel methods.
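For reproducibility, the tridiagonal test matrix of Example 7.1 can be generated as follows (a sketch of ours; the value of n is an example):

n = 512;
A = 4*n*eye(n) + diag(ones(n-1,1), 1) + diag(ones(n-1,1), -1);
b = rand(n, 1);   % random constant vector, as in the example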


Example 7.2. Let the matrix $A$ be given by
$$a_{ij} = \begin{cases} 1000 + i, & \text{for } i = j, \ i = 1, 2, \ldots, n,\\ -1, & \text{for } i > j, \ i = 2, 3, \ldots, n,\\ 1, & \text{for } i < j, \ j = 2, 3, \ldots, n. \end{cases}$$

Let $b = (A - I)e$, where $I$ is the identity matrix of order $n$ and $e$ is the $n \times 1$ vector whose elements are all equal to unity, so that $x = (1, 1, \ldots, 1)^T$ is the exact solution. The stopping criterion is $\mathrm{Error} = \|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-6}$.

Table 7.2

        Algorithm 3.1                     Gauss-Seidel method
n       No. of iterations   Error         No. of iterations   Error
4       3                   1.801×10^-9   3                   2.361×10^-9
8       3                   6.654×10^-9   3                   1.419×10^-9
16      3                   4.310×10^-7   3                   5.283×10^-8
32      4                   3.145×10^-7   4                   3.456×10^-7
64      5                   1.445×10^-7   4                   5.378×10^-7
128     6                   3.691×10^-7   5                   8.328×10^-8
256     7                   6.175×10^-7   7                   7.852×10^-7
512     11                  4.369×10^-7   10                  9.352×10^-7
1024    26                  6.205×10^-7   20                  3.441×10^-7

In Table 7.2, we compare the Gauss-Seidel method with Algorithm 3.1. We conclude that the Gauss-Seidel method is better than Algorithm 3.1 for solving the system of absolute value equations (2.14) in terms of the number of iterations. In the next example, we compare the SOR method with Algorithm 3.1 and Algorithm 4.1.


Example 7.3. Let a random matrix $A \in R^{n \times n}$ be chosen from a uniform distribution on $[-1, 1]$, with all main diagonal elements set to 1000. Let the constant vector $b$ be chosen randomly. The stopping criterion is $\|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-6}$. The computational results are given in Table 7.3.

Table 7.3

        Algorithm 3.1       Algorithm 4.1       SOR method
n       NI      TOC         NI      TOC         NI      TOC
4       3       0.061       3       0.005       2       0.001
8       4       0.074       3       0.008       2       0.001
16      4       0.096       4       0.052       2       0.007
32      4       0.145       4       0.087       2       0.013
64      4       0.459       4       0.153       2       0.025
128     5       0.691       4       0.197       2       0.041
256     5       5.630       5       4.262       3       2.213
512     5       50.347      5       48.301      3       12.341
1024    6       390.52      5       376.45      4       218.60

In this example, we consider a full dense matrix $A$ and compare the proposed method with Algorithm 3.1 and Algorithm 4.1. The homotopy SOR method converges to the solution of (2.14) very quickly.

Example 7.4. Consider the second-order BVP of the type
$$\frac{d^2 x}{d t^2} - |x| = 1 - t^2, \quad 0 \le t \le 1, \qquad x(0) = -1, \quad x(1) = 0. \qquad (7.12)$$
We discretize (7.12) using the finite difference method to obtain a system of absolute value equations of the type
$$Ax - |x| = b,$$


where the matrix $A \in R^{10 \times 10}$ is given by
$$a_{ij} = \begin{cases} 242, & \text{for } j = i,\\ -121, & \text{for } j = i + 1, \ i = 1, 2, \ldots, n - 1, \text{ and } j = i - 1, \ i = 2, 3, \ldots, n,\\ 0, & \text{otherwise.} \end{cases}$$

The constant vector $b$ is given by
$$b = (121.9917,\ 0.9669,\ 0.9256,\ 0.8678,\ 0.7934,\ 0.7025,\ 0.5950,\ 0.4710,\ 0.3306,\ 0.1736)^T.$$

The exact solution is
$$x(t) = \begin{cases} 0.1915802528\,\sin t - 4\cos t + 3 - t^2, & x < 0,\\ -0.5378828428\,e^{t} - 1.462117157\,e^{-t} + t^2 + 1, & x > 0. \end{cases}$$

[Figure 7.1: Efficiency of the homotopy SOR method. The semilog plot shows the 2-norm of the residual against the number of iterations for Algorithm 3.1, Algorithm 4.1 and the SOR method.]


In Figure 7.1, we compare the SOR method with Algorithm 3.1 and Algorithm 4.1. To solve (7.12) to the accuracy $10^{-12}$, the numbers of iterations for the SOR method, Algorithm 4.1 and Algorithm 3.1 are 117, 136 and 456, respectively.
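The finite difference system of Example 7.4 can be assembled as follows (a sketch of ours reproducing the stated A and b; h denotes the mesh size):

n = 10; h = 1/(n + 1); t = (1:n)'*h;
A = (2*eye(n) - diag(ones(n-1,1), 1) - diag(ones(n-1,1), -1)) / h^2;
b = 1 - t.^2;          % values of 1 - t^2 at the interior nodes
b(1) = b(1) + 1/h^2;   % contribution of the boundary value x(0) = -1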


Chapter 8

Absolute Value Complementarity Problems


Complementarity theory was introduced and studied by Lemke [41] and Cottle and Dantzig [17]. Complementarity problems have been generalized and extended to study a wide class of problems arising in the pure and applied sciences; see [51-53, 58-62] and the references therein. Equally important is the variational inequality problem, which was introduced and studied in the early sixties. For recent applications, formulations, numerical results and other aspects of variational inequalities, see [58-62].

Motivated and inspired by the research going on in these areas, we introduce and consider a new class of complementarity problems, called absolute value complementarity problems. Related to the absolute value complementarity problem, we consider the problem of solving the absolute value variational inequality. If the underlying set is the whole space, then the absolute value complementarity problem is equivalent to solving the system of absolute value equations. We use the projection technique to show that the absolute value complementarity problem is equivalent to a fixed point problem. This alternative equivalent formulation is used to study the existence of a unique solution of the absolute value complementarity problem under suitable conditions. We suggest and analyze a generalized AOR (GAOR) method for solving absolute value complementarity problems, and consider the convergence analysis of the proposed method under suitable conditions. We need the following definition.

Definition 8.1 [103]. The matrix $C \in R^{n \times n}$ is called an L-matrix if $c_{ii} > 0$ for $i = 1, 2, \ldots, n$ and $c_{ij} \le 0$ for $i \ne j$, $i, j = 1, 2, \ldots, n$.

8.1 Absolute Value Complementarity Problems

For a given matrix $A \in R^{n \times n}$ and a vector $b \in R^n$, we consider the problem of finding $x \in K$ such that
$$x \in K, \qquad Ax - |x| - b \in K^*, \qquad \langle Ax - |x| - b, x \rangle = 0, \qquad (8.1)$$
where $K^* = \{x \in R^n : \langle x, y \rangle \ge 0, \ \forall y \in K\}$ is the polar cone of a closed convex cone $K$ in $R^n$, and $|x|$ denotes the vector in $R^n$ whose components are the absolute values of the components of $x \in R^n$. We remark that the absolute value complementarity problem (8.1) can be viewed as an


extension of the complementarity problem considered by Lemke [41].

Let $K$ be a closed convex set in the inner product space $R^n$. We consider the problem of finding $x \in K$ such that
$$\langle Ax - |x| - b, y - x \rangle \ge 0, \quad \forall y \in K. \qquad (8.2)$$

The problem (8.2) is called the absolute value variational inequality, which is a special form of the mildly nonlinear variational inequalities [63]. If $K = R^n$, then problem (8.2) is equivalent to finding $x \in R^n$ such that
$$Ax - |x| - b = 0. \qquad (8.3)$$

Using this equivalent formulation, it is possible to suggest a number of iterative methods for absolute value complementarity problems. To propose and analyze an algorithm for absolute value complementarity problems, we need the following results.

Lemma 8.1. Let $K$ be a cone in $R^n$. Then $x \in K$ is a solution of the absolute value variational inequality (8.2) if and only if $x \in K$ is a solution of the absolute value complementarity problem (8.1).

Proof. Let $x \in K$ be a solution of (8.2). Then
$$\langle Ax - |x| - b, y - x \rangle \ge 0, \quad \forall y \in K.$$
Since $K$ is a convex cone, taking $y = 0 \in K$ and $y = 2x \in K$, we have
$$\langle Ax - |x| - b, x \rangle = 0. \qquad (8.4)$$
From (8.2) and (8.4), we have
$$\langle Ax - |x| - b, y \rangle \ge 0, \quad \forall y \in K. \qquad (8.5)$$

This implies that


$$Ax - |x| - b \in K^*. \qquad (8.6)$$

Thus we conclude that $x \in K$ is a solution of the absolute value complementarity problem (8.1).

Conversely, let $x \in K$ satisfy
$$\langle Ax - |x| - b, y \rangle \ge 0, \quad \forall y \in K.$$
From (8.1) and (8.5), it follows that
$$\langle Ax - |x| - b, y - x \rangle \ge 0, \quad \forall y \in K.$$

Hence $x \in K$ satisfies the absolute value variational inequality (8.2). In Lemma 8.1, we have proved that the absolute value complementarity problem (8.1) is equivalent to the variational inequality (8.2).

Lemma 8.2. If $K$ is a closed convex cone in $R^n$, then, for $\rho > 0$, $x \in K$ satisfies (8.2) if and only if $x \in K$ satisfies the relation
$$x = P_K\big(x - \rho(Ax - |x| - b)\big), \qquad (8.7)$$
where $P_K$ is the projection of $R^n$ onto the closed convex cone $K$.

Proof. Let $x \in K$ be the solution of (8.2). Then, for a constant $\rho > 0$,
$$\langle x - (x - \rho(Ax - |x| - b)), y - x \rangle \ge 0, \quad \forall y \in K,$$
which, by Lemma 2.1, is equivalent to
$$x = P_K\big(x - \rho(Ax - |x| - b)\big).$$


Thus the variational inequality (8.2) is equivalent to a fixed point problem. Now, using Lemma 8.1 and Lemma 8.2, the absolute value complementarity problem (8.1) can be transformed into the fixed point problem
$$x = P_K\big(x - \rho(Ax - |x| - b)\big).$$
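This fixed point formulation suggests the simple projection iteration $x_{k+1} = P_K(x_k - \rho(Ax_k - |x_k| - b))$. For the box case $K = [0, c]$ considered later in (8.14), a minimal Matlab sketch (ours; the step size rho, the bound c and maxit are assumptions) reads:

rho = 1e-3;   % step size; Theorem 8.1 below restricts the admissible range
for k = 1:maxit
    x = min(max(0, x - rho*(A*x - abs(x) - b)), c);   % x = P_K( ... )
end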

Theorem 8.1. Let $K$ be a closed convex cone in $R^n$. Let $A \in R^{n \times n}$ be positive definite with constant $\beta$ and Lipschitz continuous with constant $\gamma$. If $0 < \rho < \frac{2(\beta - 1)}{\gamma^2 - 1}$, $\beta > 1$ and $\gamma > 1$, then there exists a unique solution $x \in K$ such that
$$\langle Ax - |x| - b, y - x \rangle \ge 0, \quad \forall y \in K.$$

Proof. Uniqueness: let $x_1 \ne x_2 \in K$ be two solutions of (8.2). Then
$$\langle Ax_1 - |x_1| - b, y - x_1 \rangle \ge 0, \quad \forall y \in K, \qquad (8.8)$$
$$\langle Ax_2 - |x_2| - b, y - x_2 \rangle \ge 0, \quad \forall y \in K. \qquad (8.9)$$
Taking $y = x_2 \in K$ in (8.8), $y = x_1 \in K$ in (8.9) and adding the resultants, we have
$$\langle A(x_1 - x_2) - (|x_1| - |x_2|), x_1 - x_2 \rangle \le 0.$$
This implies that
$$\langle A(x_1 - x_2), x_1 - x_2 \rangle \le \|x_1 - x_2\|^2. \qquad (8.10)$$

Since $A$ is positive definite, using Definition 2.4 and (8.10), we have
$$(\beta - 1)\,\|x_1 - x_2\|^2 \le 0. \qquad (8.11)$$
As $\beta > 1$, it follows that
$$\|x_1 - x_2\|^2 \le 0,$$


which is impossible. Thus $x_1 = x_2$, which proves the uniqueness of the solution.

Existence: let $x \in K$ be a solution of (8.2). Then, from Lemma 8.2, we have
$$F(x) = x = P_K\big(x - \rho(Ax - |x| - b)\big). \qquad (8.12)$$
To show the existence of a solution of (8.2), it is enough to prove that $F(x)$ is a contraction mapping. For $x_1 \ne x_2 \in K$, consider

$$\begin{aligned}
\|F(x_1) - F(x_2)\| &= \big\|P_K\big(x_1 - \rho(Ax_1 - |x_1| - b)\big) - P_K\big(x_2 - \rho(Ax_2 - |x_2| - b)\big)\big\|\\
&\le \big\|\big(x_1 - \rho(Ax_1 - |x_1| - b)\big) - \big(x_2 - \rho(Ax_2 - |x_2| - b)\big)\big\|\\
&\le \|x_1 - x_2 - \rho(Ax_1 - Ax_2)\| + \rho\,\big\||x_1| - |x_2|\big\|\\
&\le \|x_1 - x_2 - \rho A(x_1 - x_2)\| + \rho\,\|x_1 - x_2\|, \qquad (8.13)
\end{aligned}$$
where we have used the fact that $P_K$ is nonexpansive. Now, using the positive definiteness of $A$, we have

$$\|F(x_1) - F(x_2)\| \le \Big(\rho + \sqrt{1 - 2\beta\rho + \gamma^2\rho^2}\Big)\|x_1 - x_2\| = \theta\,\|x_1 - x_2\|,$$
where $\theta = \rho + \sqrt{1 - 2\beta\rho + \gamma^2\rho^2}$. From $0 < \rho < \frac{2(\beta - 1)}{\gamma^2 - 1}$ and $\rho < 1$, we have $\theta < 1$, which shows that $F(x)$ is a contraction mapping and has a fixed point $x \in K$ satisfying the inequality (8.2).

For the sake of simplicity, we consider the special case where $K = [0, c]$ is a closed convex set in $R^n$. We define the projection operator $P_K x$ componentwise as
$$(P_K x)_i = \min\big(\max(0, x_i), c_i\big), \quad i = 1, 2, \ldots, n. \qquad (8.14)$$
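In Matlab, (8.14) is a one-liner (our own sketch; the function name projK is an assumption):

function p = projK(x, c)
% Componentwise projection onto K = [0, c], see (8.14).
p = min(max(0, x), c);
end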


Definition 8.2 [1]. For any $x$ and $y$ in $R^n$, the projection $P_K$ has the following properties:

(i) $P_K(x + y) \le P_K x + P_K y$;
(ii) $P_K x - P_K y \le P_K(x - y)$;
(iii) $x = P_K x - P_K(-x)$;
(iv) $x \le y \ \Rightarrow\ P_K x \le P_K y$.

8.2 Generalized AOR Method

Now we suggest an iterative method for solving the absolute value complementarity problem (8.1). For this purpose, we decompose the matrix $A$ as
$$A = D - L - U, \qquad (8.15)$$
where $D$ is the diagonal matrix and $L$ and $U$ are strictly lower and strictly upper triangular matrices, respectively. Let $\Omega = \mathrm{diag}(\omega_1, \omega_2, \ldots, \omega_n)$ with $0 < \omega_i \le 1$ and $0 \le \alpha \le 1$. Using (8.15), we suggest the following iterative scheme for solving (8.2).

Algorithm 8.1.
Step 1: Choose an initial guess $x_0 \in R^n$ and parameters $\omega_i \in (0, 1]$, $\alpha \in [0, 1]$; set $k = 0$.
Step 2: Calculate
$$x_{k+1} = P_K\Big(x_k - D^{-1}\big(\alpha \Omega L (x_k - x_{k+1}) + \Omega (A x_k - |x_k| - b)\big)\Big).$$
Step 3: If $x_{k+1} = x_k$, then stop; else, set $k = k + 1$ and go to Step 2.
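Because $L$ is strictly lower triangular, the components of $x_{k+1}$ in Step 2 can be computed by a forward sweep. The following Matlab sketch is our own illustration (the function name gaor_avcp, the tolerance and maxit are assumptions; K = [0, c]):

function x = gaor_avcp(A, b, c, omega, alpha, x, maxit)
% GAOR sweep of Algorithm 8.1 for the problem (8.1) over K = [0, c].
n = length(b);
d = diag(A);                         % D = diag(A), with A = D - L - U
for k = 1:maxit
    xold = x;
    r = A*xold - abs(xold) - b;      % residual at the current iterate
    for i = 1:n                      % forward sweep: x(1:i-1) already updated
        s = -A(i, 1:i-1) * (xold(1:i-1) - x(1:i-1));  % L(i,j) = -A(i,j), j < i
        t = xold(i) - (alpha*omega(i)*s + omega(i)*r(i)) / d(i);
        x(i) = min(max(0, t), c(i)); % projection (8.14)
    end
    if norm(x - xold) < 1e-10, break; end
end
end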

Now we define an operator $g: R^n \to R^n$ such that $g(x) = \xi$, where $\xi$ is the fixed point of the system
$$\xi = P_K\Big(x - D^{-1}\big(\alpha \Omega L (x - \xi) + \Omega (Ax - |x| - b)\big)\Big). \qquad (8.16)$$

We also assume that the set
$$\varphi = \{x \in R^n : x \ge 0, \ Ax - |x| - b \ge 0\}$$


of the absolute value complementarity problem is nonempty. The following result is needed for the convergence of Algorithm 8.1.

Theorem 8.2. Consider the operator $g: R^n \to R^n$ as defined in (8.16). Assume that $A \in R^{n \times n}$ is an L-matrix, $0 < \omega_i \le 1$ and $0 \le \alpha \le 1$. Then, for any $x \in \varphi$, it holds that:

(i) $g(x) \le x$;
(ii) $x \le y \ \Rightarrow\ g(x) \le g(y)$;
(iii) $\xi = g(x) \in \varphi$.

Proof. To prove (i), we need to verify that
$$\xi_i \le x_i, \quad i = 1, 2, \ldots, n,$$
holds with $\xi_i$ satisfying
$$\xi_i = P_K\Big(x_i - a_{ii}^{-1}\Big(\alpha \omega_i \sum_{j=1}^{i-1} L_{ij}(x_j - \xi_j) + \omega_i (Ax - |x| - b)_i\Big)\Big). \qquad (8.17)$$

To prove the required result, we use mathematical induction. For $i = 1$,
$$\xi_1 = P_K\big(x_1 - a_{11}^{-1}\omega_1 (Ax - |x| - b)_1\big).$$
Since $Ax - |x| - b \ge 0$ and $\omega_i > 0$, it follows that $\xi_1 \le x_1$.

For $i = 2$, we have
$$\xi_2 = P_K\big(x_2 - a_{22}^{-1}\big(\alpha \omega_2 L_{21}(x_1 - \xi_1) + \omega_2 (Ax - |x| - b)_2\big)\big).$$
Here $Ax - |x| - b \ge 0$, $\omega_i > 0$, $L_{21} \ge 0$ and $\xi_1 - x_1 \le 0$. This implies that $\xi_2 \le x_2$.

Suppose that $\xi_i \le x_i$ for $i = 1, 2, \ldots, k - 1$. We have to prove that the statement is true for $i = k$, that is,


$$\xi_k \le x_k.$$

Consider
$$\begin{aligned}
\xi_k &= P_K\Big(x_k - a_{kk}^{-1}\Big(\alpha \omega_k \sum_{j=1}^{k-1} L_{kj}(x_j - \xi_j) + \omega_k (Ax - |x| - b)_k\Big)\Big)\\
&= P_K\Big(x_k - a_{kk}^{-1}\Big(\alpha \omega_k \big(L_{k1}(x_1 - \xi_1) + L_{k2}(x_2 - \xi_2) + \cdots + L_{k,k-1}(x_{k-1} - \xi_{k-1})\big) + \omega_k (Ax - |x| - b)_k\Big)\Big). \qquad (8.18)
\end{aligned}$$

Since $Ax - |x| - b \ge 0$, $\omega_k > 0$, $L_{k1}, L_{k2}, \ldots, L_{k,k-1} \ge 0$ and $\xi_i \le x_i$ for $i = 1, 2, \ldots, k - 1$, from (8.18) we can write
$$\xi_k \le x_k.$$
Hence (i) is proved.

Now we prove (ii). For this, let us suppose that $\xi = g(x)$ and $\phi = g(y)$; we will prove that $x \le y \Rightarrow \xi \le \phi$. As
$$\xi = P_K\Big(x - D^{-1}\big(\alpha \Omega L (x - \xi) + \Omega (Ax - |x| - b)\big)\Big),$$
$\xi_i$ can be written as

$$\xi_i = P_K\Big((1 - \omega_i)x_i + a_{ii}^{-1}\Big(\alpha \omega_i \sum_{j=1}^{i-1} L_{ij}\xi_j + (1 - \alpha)\omega_i \sum_{j=1}^{i-1} L_{ij}x_j + \omega_i \sum_{j=i+1}^{n} U_{ij}x_j + \omega_i |x_i| + \omega_i b_i\Big)\Big).$$

Similarly, for $\phi_i$ we have


$$\phi_i = P_K\Big((1 - \omega_i)y_i + a_{ii}^{-1}\Big(\alpha \omega_i \sum_{j=1}^{i-1} L_{ij}\phi_j + (1 - \alpha)\omega_i \sum_{j=1}^{i-1} L_{ij}y_j + \omega_i \sum_{j=i+1}^{n} U_{ij}y_j + \omega_i |y_i| + \omega_i b_i\Big)\Big).$$

For $i = 1$,
$$\begin{aligned}
\phi_1 &= P_K\Big((1 - \omega_1)y_1 + a_{11}^{-1}\Big(\omega_1 \sum_{j=2}^{n} U_{1j}y_j + \omega_1 |y_1| + \omega_1 b_1\Big)\Big)\\
&\ge P_K\Big((1 - \omega_1)x_1 + a_{11}^{-1}\Big(\omega_1 \sum_{j=2}^{n} U_{1j}x_j + \omega_1 |x_1| + \omega_1 b_1\Big)\Big)\\
&= \xi_1.
\end{aligned}$$

Since $y_1 \ge x_1 \ge 0$, we have $|y_1| \ge |x_1|$, so the inequality above holds and the claim is true for $i = 1$. Suppose it is true for $i = 1, 2, \ldots, k - 1$; we will prove it for $i = k$. For this, consider

$$\begin{aligned}
\phi_k &= P_K\Big((1 - \omega_k)y_k + a_{kk}^{-1}\Big(\alpha \omega_k \sum_{j=1}^{k-1} L_{kj}\phi_j + (1 - \alpha)\omega_k \sum_{j=1}^{k-1} L_{kj}y_j + \omega_k \sum_{j=k+1}^{n} U_{kj}y_j + \omega_k |y_k| + \omega_k b_k\Big)\Big)\\
&\ge P_K\Big((1 - \omega_k)x_k + a_{kk}^{-1}\Big(\alpha \omega_k \sum_{j=1}^{k-1} L_{kj}\xi_j + (1 - \alpha)\omega_k \sum_{j=1}^{k-1} L_{kj}x_j + \omega_k \sum_{j=k+1}^{n} U_{kj}x_j + \omega_k |x_k| + \omega_k b_k\Big)\Big)\\
&= \xi_k,
\end{aligned}$$

since $x \le y$ and $\xi_i \le \phi_i$ for $i = 1, 2, \ldots, k - 1$. Hence the claim is true for $k$, and (ii) is verified.

Next we prove (iii), that is,
$$\xi = g(x) \in \varphi. \qquad (8.19)$$

Let $\lambda = g(\xi) = P_K\big(\xi - D^{-1}(\alpha \Omega L(\xi - \lambda) + \Omega(A\xi - |\xi| - b))\big)$. From (i), $\lambda = g(\xi) \le \xi$. Also, by the definition of $g$, $\xi = g(x) \ge 0$ and $\lambda = g(\xi) \ge 0$.

Now


$$\lambda_i = P_K\Big(\xi_i - a_{ii}^{-1}\Big(\alpha \omega_i \sum_{j=1}^{i-1} L_{ij}(\xi_j - \lambda_j) + \omega_i (A\xi - |\xi| - b)_i\Big)\Big).$$

For $i = 1$, $\xi_1 \ge 0$ by the definition of $g$. Suppose that $(A\xi - |\xi| - b)_1 < 0$; then

$$\lambda_1 = P_K\big(\xi_1 - a_{11}^{-1}\omega_1 (A\xi - |\xi| - b)_1\big) > P_K(\xi_1) = \xi_1,$$
which contradicts the fact that $\lambda \le \xi$. Therefore $(A\xi - |\xi| - b)_1 \ge 0$.

Now we prove it for any $k$ in $\{1, 2, \ldots, n\}$. Suppose, on the contrary, that $(A\xi - |\xi| - b)_k < 0$; then

$$\lambda_k = P_K\Big(\xi_k - a_{kk}^{-1}\Big(\alpha \omega_k \sum_{j=1}^{k-1} L_{kj}(\xi_j - \lambda_j) + \omega_k (A\xi - |\xi| - b)_k\Big)\Big).$$

Since this holds for all $\alpha \in [0, 1]$, it should in particular be true for $\alpha = 0$, that is,

$$\lambda_k = P_K\big(\xi_k - a_{kk}^{-1}\omega_k (A\xi - |\xi| - b)_k\big) > P_K(\xi_k) = \xi_k,$$
which contradicts the fact that $\lambda \le \xi$. So $(A\xi - |\xi| - b)_k \ge 0$ for any $k$ in $\{1, 2, \ldots, n\}$.

Hence $\xi = g(x) \in \varphi$.

Now we prove the convergence of Algorithm 8.1 when the matrix $A$ is an L-matrix, as stated in the next result.

Theorem 8.3. Assume that $A \in R^{n \times n}$ is an L-matrix, and assume that $0 < \omega_i \le 1$ and $0 \le \alpha \le 1$. Then, for any initial vector $x_0 \in \varphi$, the sequence $\{x_k\}$, $k = 0, 1, 2, \ldots$, defined by Algorithm 8.1 has the following properties:

(i) $0 \le x_{k+1} \le x_k \le x_0$, $k = 0, 1, 2, \ldots$;
(ii) $\lim_{k \to \infty} x_k = x^*$ is a solution of the absolute value complementarity problem (8.1).


Proof. Since $x_0 \in \varphi$, by (i) of Theorem 8.2 we have $x_1 \le x_0$ and $x_1 \in \varphi$. Recursively using Theorem 8.2, we obtain
$$0 \le x_{k+1} \le x_k \le x_0, \quad k = 0, 1, 2, \ldots. \qquad (8.20)$$
From (i) we observe that the sequence $\{x_k\}$ is monotone and bounded; therefore it converges to some $x^* \in R_+^n$ satisfying
$$x^* = P_K\Big(x^* - D^{-1}\big(\alpha \Omega L (x^* - x^*) + \Omega (A x^* - |x^*| - b)\big)\Big) = P_K\big(x^* - D^{-1}\Omega (A x^* - |x^*| - b)\big).$$
Hence $x^*$ is a solution of (8.1).

8.3 Numerical Results

In this section, we consider several examples to show the implementation and efficiency of the proposed method. The convergence of the GAOR method is guaranteed for L-matrices only, but it is also possible to solve systems of other types. The value of $\alpha$ varies from 0.7 to 0.99, and the elements of the diagonal matrix $\Omega$ are chosen from the interval $[c, d]$ as
$$\omega_i = c + \frac{(d - c)\,i}{n}, \quad i = 1, 2, \ldots, n,$$
where $\omega_i$ is the $i$th diagonal element of $\Omega$. The GAOR method converges more quickly for greater values of $\alpha$ and $\omega_i$.

Example 8.1. We test Algorithm 8.1 on $m$ consecutively generated solvable random problems with $A \in R^{n \times n}$ and $n$ ranging from 10 to 1000. We choose a random matrix $A$ from a uniform distribution on $[0, 1]$, with all diagonal elements set equal to 1000, and a random $x \in R^n$ is chosen from $[0, 1]$. The constant vector is computed as $b = Ax - |x|$. Let $x_0 \in R^n$ be a random initial guess. We take $\mathrm{Error} = \|Ax_{k+1} - |x_{k+1}| - b\|_2$. The computational results are shown in Table 8.1.

Table 8.1

n       m    No. of iterations   TOC      Error
10      1    4                   0.001    1.8204×10^-8
10      10   40                  0.011    7.4875×10^-9
50      1    4                   0.003    1.2595×10^-7
50      10   40                  0.016    1.3834×10^-7
100     1    4                   0.014    9.5625×10^-7
100     10   41                  0.031    6.6982×10^-7
500     1    5                   0.203    1.3142×10^-7
500     10   49                  2.075    1.5168×10^-7
1000    1    6                   1.076    7.2231×10^-9
1000    10   60                  11.591   8.3961×10^-9

In Table 8.1, n and m denote the problem size and the total number of problems solved, respectively. We see that Algorithm 8.1 converges to the solution of (2.14) in a few iterations. From the last two columns of Table 8.1, we conclude that Algorithm 8.1 is a good choice for solving the system (2.14).

Example 8.2. Let the matrix $A$ be given by
$$a_{ij} = \begin{cases} 8, & \text{for } j = i,\\ -1, & \text{for } j = i + 1, \ i = 1, 2, \ldots, n - 1, \text{ and } j = i - 1, \ i = 2, 3, \ldots, n,\\ 0, & \text{otherwise.} \end{cases}$$


Let $b = (6, 5, 5, \ldots, 5, 6)^T$, with the problem size $n$ ranging from 4 to 1024. The stopping criterion is $\|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-6}$, and we choose the initial guess $x_0 = (0, 0, \ldots, 0)^T$. The computational results are shown in Table 8.2.

Table 8.2

        Algorithm 3.1                 Algorithm 8.1
n       No. of iterations   TOC       No. of iterations   TOC
4       10                  0.0168    10                  0.001
8       11                  0.018     11                  0.001
16      11                  0.143     11                  0.002
32      12                  3.319     11                  0.008
64      12                  7.145     11                  0.082
128     12                  11.342    11                  0.330
256     12                  25.014    11                  2.298
512     12                  98.317    11                  19.230
1024    13                  534.903   11                  158.649

In Table 8.2, TOC denotes the total time taken by the CPU in seconds. Algorithm 8.1 is better than Algorithm 3.1 for solving the system of absolute value equations (2.14) with respect to time, and it requires fewer iterations.

Example 8.3. Let the matrix $A$ be given by
$$a_{ij} = \begin{cases} 1000 + i, & \text{for } j = i,\\ -1, & \text{for } j = i + 1, \ i = 1, 2, \ldots, n - 1, \text{ and } j = i - 1, \ i = 2, 3, \ldots, n,\\ 0, & \text{otherwise.} \end{cases}$$


Let $b = Ax - |x|$, where a random $x \in R^n$ is chosen from $[1, 2]$. The problem size $n$ ranges from 4 to 1024. The stopping criterion is $\|Ax_{k+1} - |x_{k+1}| - b\|_2 < 10^{-6}$, and we choose a random initial guess $x_0 \in R^n$. The computational results are shown in Table 8.3.

Table 8.3

        Algorithm 3.1                 Algorithm 8.1
n       No. of iterations   TOC       No. of iterations   TOC
4       4                   0.017     4                   0.001
8       4                   0.019     4                   0.001
16      4                   0.263     4                   0.003
32      4                   2.414     4                   0.005
64      4                   6.132     4                   0.061
128     4                   12.152    4                   0.231
256     4                   16.014    4                   2.518
512     5                   50.214    4                   17.231
1024    5                   102.023   4                   92.144

In Table 8.3, TOC denotes the total time taken by the CPU in seconds. Algorithm 8.1 requires less time to approximate the solution of (2.14) and is better than Algorithm 3.1 for solving large systems of absolute value equations (2.14).


Chapter 9

Conclusion


In this thesis, we have discussed the solution of systems of absolute value equations and suggested different iterative methods for solving them. The system of absolute value equations has been solved by making use of linear complementarity problems. We have used the ideas of minimization methods, projection techniques, the homotopy perturbation method and absolute value complementarity problems for solving systems of absolute value equations. We now conclude our work chapter by chapter and discuss some open problems.

In Chapter 3, we suggested an iterative method based on minimization techniques for solving the system of absolute value equations (2.14); the convergence of the method is guaranteed for symmetric positive definite systems only. We proposed two algorithms for solving the system of absolute value equations (2.14). In this method, we considered a sequence of approximations with a single search direction, and Algorithm 3.2 performed better than the other methods. Future work is to choose a good search direction and to modify this method for the generalized system of absolute value equations.

In Chapter 4, we again used minimization techniques and proposed an iterative method for solving the system of absolute value equations. We generated a sequence of approximations with two search directions and proved, both theoretically and numerically, that this method is faster than the method discussed in Chapter 3. Future work is required to generalize these methods using the best possible search directions. It is also required to develop such methods for the generalized system of absolute value equations of the type $Ax + B|x| = b$, relaxing the condition of symmetric positive definiteness.

In Chapter 5, we used projection techniques and proposed the residual iterative method, which minimizes the norm of the residual over a Krylov subspace. In this method, we relaxed the condition of symmetry, but we still required a positive definite system.

In Chapter 6, we dealt with the generalized system of absolute value equations $Ax + B|x| = b$, where $A, B \in R^{n \times n}$, and suggested the quasi-Newton method for solving it. The quasi-Newton method is based on minimization techniques with a single search direction, using a full rank matrix instead of a positive definite matrix. We computed all solutions of the generalized system of absolute value equations using the quasi-Newton method. Further work is required to modify the quasi-Newton method with more than one search direction for solving the generalized system of absolute value equations.

In Chapter 7, we used the homotopy perturbation method for the first time for solving the system of absolute value equations, removing the conditions of symmetry and positive definiteness, and gave a numerical comparison with Algorithm 3.1. A further study is to derive the homotopy perturbation method for solving the generalized system of absolute value equations.

In Chapter 8, we introduced the absolute value complementarity problem. We suggested an algorithm for solving the absolute value complementarity problem, and the convergence of the algorithm was given under the condition that the system matrix is an L-matrix. Further work is to derive other algorithms, for example the SSOR method, the AOR method, etc., for solving systems of absolute value equations with the help of the absolute value complementarity problem.


Chapter 10

References


1. Abbasbandy, S., (2007). Application of He's homotopy perturbation method to

functional integral equations. Chaos Solitons Fractals, 31, 1243-1247.

2. Ahn, B. H. (1983). Iterative methods for linear complementarity problems with

upper bounds on primary variables, Math. Prog., 26, 295-315.

3. Ascher, U. M., Mattheij, R. M. M. & Russell, R. B. (1988). Numerical solution of

boundary value problems for ordinary differential equations. Prentice-Hall,

Englewood Cliffs, NJ.

4. Axelsson, O. (1980). Conjugate gradient type methods for unsymmetric and inconsistent systems of linear equations. Lin. Alg. Appl., 29, 1-16.

5. Axelsson, O. (1994). Iterative solution methods. Cambridge University Press, New York.

6. Bansal, P .P. & Jacobsen, S. E. (1975). Characterization of basic solutions for a

class of nonconvex programs. J. Optim. Theory Appl., 15, 549-564.

7. Bennett, K. P. & Mangasarian, O. L. (1993). Bilinear separation of two sets in n-space. Comput. Optim. Appl., 2, 207-227.

8. Bourbaki, N. (1994). Elements of the history of mathematics, Springer-Verlag.

9. Bramley, R. & Sameh, A. (1992). Row projection methods for large nonsymmetric

linear systems. SIAM J. Sci. Stat. Comput., 13, 168–193.

10. Brezinski, C. & Zaglia, M. R. (1991). Extrapolation methods: Theory and Practice,

North-Holland, Amsterdam.

11. Brown, P. N. (1991). A theoretical comparison of the Arnoldi and GMRES

algorithms. SIAM J. Sci. Stat. Comput., 12, 58-78.

12. Broyden, C. G. (1965). A class of methods for solving nonlinear simultaneous

equations. Math. Comput., 19, 577-593.


13. Burden, R. L. & Faires, J. D. (2006). Numerical analysis (7th edition). The PWS Publishing Company, Boston.

14. Caccetta, L., Qu, B. & Zhou, G. (2011). A globally and quadratically convergent

method for absolute value equations. Comput. Optim. Appl., 48, 45-58.

15. Chan, T. F., Gallopoulos, E., Simoncini, V., Szeto, T. & Tong, C. (1994). A quasi

minimal residual variant of the Bi-CGSTAB algorithm for non-symmetric systems.

SIAM J. Sci. Comput., 15, 338-347.

16. Chung, S. J. (1989). NP-completeness of the linear complementarity problem. J.

Optim. Theory Appl., 60, 393-399.

17. Cottle, R. W. & Dantzig, G. (1968). Complementary pivot theory of mathematical

programming. Lin. Alg. Appl., 1, 103-125.

18. Cottle, R. W., Pang, J. S. & Stone, R. E. (1992). The Linear complementarity

problem, Academic Press, New York.

19. Datta, B. N. (2010). Numerical linear algebra and applications (2nd edition). SIAM, Philadelphia, PA.

20. Davidon, W. C. (1991). Variable metric method for minimization. SIAM J. Optim.,

1, 1-17.

21. Fiedler, M., Nedoma, J., Ramík, J., Rohn, J. & Zimmermann, K. (2006). Linear

optimization problems with inexact data. Springer-Verlag, New York.

22. Fletcher, R. (2000). Practical methods of optimization (2nd edition). John Wiley & Sons, New York.

23. Fletcher, R. & Powell, M. J. D. (1963). A rapidly convergent descent method for

minimization. Comp. J., 6, 163-168.


24. Frankel, S. (1950). Convergence rates of iterative treatments of partial differential

equations. Math. Tab. Other Aids Comput., 4, 65-75.

25. Golub, G. H. & Van Loan C. F. (1996). Matrix computation. The Johns Hopkins

University Press, Baltimore.

26. Grcar, J. F. (2011). How ordinary elimination became Gaussian elimination.

Historia Mathematica, 38, 163-218.

27. Hadjidimos, A. (1978). Accelerated overrelaxation method. Math. Comp., 32, 149-157.

28. Hadjidimos, A. (2000). Successive over relaxation (SOR) and related methods. J.

Comput. Appl. Math., 123, 177-199.

29. He, J. H. (1999). Homotopy perturbation technique. Comput. Meth. Appl.

Mech. Eng., 178, 257-262.

30. He, J. H. (2000). A coupling method of a homotopy technique and a perturbation

technique for nonlinear problems. Int. J. Nonlin. Mech., 35, 37-43.

31. He, J. H. (2004). Comparison of homotopy perturbation method and homotopy

analysis method. Appl. Math. Comput., 156, 527-539.

32. He, J. H. (2003). Homotopy perturbation method, a new nonlinear analytical

technique. Appl. Math. Comput., 135, 73-79.

33. He, J. H. (2006). Homotopy perturbation method for solving boundary value

problems. Phys. Lett., 350, 87-88.

34. Hestenes, M. R. & Stiefel, E. L. (1952). Methods of conjugate gradients for

solving linear systems. J. Res. Natl. Bur. Stand., 49, 409-436.

35. Hu, S. L., Huang, Z. H. & Zhang, Q. (2011). A generalized Newton method for absolute value equations associated with second order cones. J. Comput. Appl. Math.,

235, 1490-1501.


36. Hu, S. L. & Huang, Z. H. (2010). A note on absolute value equations. Optim. Lett.,

4, 417-423.

37. Jea, K. C. & Young, D. M. (1980). Generalized conjugate gradient acceleration of

nonsymmetrizable iterative methods. Lin. Alg. Appl., 34, 159-194.

38. Jing, Y. F. & Huang, T. Z. (2008). On a new iterative method for solving linear

systems and comparison results. J. Comput. Appl. Math., 220, 74-84.

39. Karamardian, S. (1971). Generalized complementarity problem. J. Optim. Theory

Appl., 8, 161-168.

40. Keramati, B. (2009). An approach to the solution of linear system of equations by

He's homotopy perturbation method. Chaos Solitons Fractals, 41, 152-156.

41. Lemke, C. E. (1965). Bimatrix equilibrium points and mathematical programming.

Manag. Sci., 11, 681-689.

42. Lim, T. C. (2010). Nonexpansive matrices with applications to solutions of linear

systems by fixed point iterations. Fixed Point Theory Appl., DOI: 10.1155/2010/821928.

43. Li, W. & Sun, W. (2000). Modified Gauss-Seidel type methods and Jacobi type methods for Z-matrices. Lin. Alg. Appl., 317, 227-240.

44. Li, Y. & Dai, P. (2007). Generalized AOR methods for complementarity problem.

Appl. Math. Comput., 188, 7-18.

45. Libbrecht, U. (1973). Chinese mathematics in the thirteenth century. MIT Press,

Cambridge.

46. Liu, H. K. (2011). Application of homotopy perturbation methods for solving

systems of linear equations. Appl. Math. Comput., 217, 5259-5264.


47. Luenberger, D. G. (1984). Linear and nonlinear programming. Addison-Wesley

Publishing, Boston.

48. Mangasarian, O. L. (2007). Absolute value programming. Comput. Optim. Appl., 36,

43-53.

49. Mangasarian, O. L. (2007). Absolute value equation solution via concave minimi-

zation. Optim. Lett., 1, 3-8.

50. Mangasarian, O. L. (2009) A generalized Newton method for absolute value

equations. Optim. Lett., 3, 101-108.

51. Mangasarian, O. L. (1977). Solution of symmetric linear complementarity problems

by iterative methods. J. Optim. Theory Appl., 22, 465-485.

52. Mangasarian, O. L. (2011). Primal-dual bilinear programming solution of the

absolute value equations. Optim. Lett., DOI: 10.1007/s11590-011-0347-6.

53. Mangasarian, O. L. & Meyer, R. R. (2006). Absolute value equations. Lin. Alg.

Appl., 419, 359–367.

54. Mohyud-Din, S. T., Noor, M. A. & Noor, K. I. (2009). Parametric expansion techniques for strongly nonlinear oscillators. Int. J. Nonlin. Sci. Numer. Simul., 10(5), 581-583.

55. Martinez, J. M. (2000). Practical quasi Newton methods for solving nonlinear

systems. J. Comp. Appl. Math., 124, 97-122.

56. Moore, G. H. (1995). An axiomatization of linear algebra: 1875–1940. Hist. Math.

22, 262-303.

57. Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming.

Heldermann Verlag, Berlin.


58. Noor, M. A. (1988). Fixed point approach for complementarity problems. J. Math.

Anal. Appl., 133, 437- 448.

59. Noor, M. A. (1988). General variational inequalities. Appl. Math. Lett., 1, 119-121.

60. Noor, M. A. (2010). Iterative methods for nonlinear equations using homotopy

perturbation method. Appl. Math. Info. Sci., 4(2), 227-235.

61. Noor, M. A. (1988). Iterative methods for a class of complementarity problems. J.

Math. Anal. Appl., 133, 366-382.

62. Noor, M. A. (2007). On merit functions for quasivariational inequalities. J. Math.

Ineq., 1, 259-268.

63. Noor, M. A. (1975). On variational inequalities. PhD Thesis, Brunel University, London, UK.

64. Noor, M. A. (2010). Some iterative methods for solving nonlinear equations using

homotopy perturbation method. Int. J. Comput. Math., 87, 141-149.

65. Noor, M. A. (2004). Some developments in general variational inequalities. Appl.

Math. Comput., 152, 199-277.

66. Noor, M. A., Iqbal, J., Khattri S. & Al-Said, E. (2011). A new iterative method for

solving absolute value equations. Int. J. Phy. Sci., 6(7), 1793-1797.

67. Noor, M. A., Iqbal, J., Noor, K. I. & Al-Said, E. (2011). On an iterative method for

solving absolute value equations. Optim. Lett., DOI: 10.1007/s11590-011-0332-0.

68. Noor, M. A., Iqbal, J. & Al-Said, E. (2012). Residual iterative method for solving

absolute value equations. Abst. Appl. Anal., DOI:10.1155/2012/406232.

69. Noor, M. A., Iqbal, J., Noor, K. I. & Al-Said, E. (2012). Generalized AOR method

for solving absolute complementarity problems. J. Appl. Math., DOI:10.1155/2012/

743861.


70. Noor, M. A., Noor, K. I. & Rassias T. M. (1993). Some aspects of variational

inequalities. J. Comput. Appl. Math., 47, 285-312.

71. Ortega, J. M. (1972). Numerical analysis; a second course, Academic Press, New

York.

72. Paige, C. C. & Saunders, M. A. (1975). Solution of sparse indefinite systems of

linear equations. SIAM J. Numer. Anal., 12, 617-624.

73. Pardalos, P. M., Rassias, T. M. & Khan, A. A. (2010). Nonlinear analysis and

variational analysis. Springer, Berlin.

74. Pardalos, P. M. & Rosen, J. B. (1988). Global optimization approach to the linear

complementarity problem. SIAM J. Sci. Stat. Comput., 9, 341-353.

75. Polyak, B. T. (1987). Introduction to optimization, Optimization Software, Inc.,

Publications Division, New York.

76. Prokopyev, O. (2009). On equivalent reformulations for absolute value equations.

Comput. Optim. Appl., 44, 363–372.

77. Rex, G. & Rohn, J. (1999). Sufficient conditions for regularity and singularity of

interval matrices. SIAM J. Matrix Anal. Appl., 20, 437-445.

78. Rockafellar, R. T. (1971). New applications of duality in convex programming. In

Proceedings Fourth Conference on Probability. Brasov, Romania.

79. Rohn, J. (2004). A theorem of the alternatives for the equation $Ax + B|x| = b$.

Lin. Multilin. Alg., 52, 421-426.

80. Rohn, J. (2009). An algorithm for solving the absolute value equation. Elec. J. Lin.

Alg., 18, 589-599.

81. Rohn, J. (2011). An algorithm for computing all solutions of an absolute value

equation. Optim. Lett., DOI 10.1007/s11590-011-0305-3


82. Rohn, J. (2009). On unique solvability of the absolute value equation. Optim. Lett.,

3, 603-606

83. Rohn, J. (2009). Description of all solutions of a linear complementarity problem.

Elec. J. Lin. Alg., 18, 246-252.

84. Rohn, J. (2010). An algorithm for solving the absolute value equation: An

improvement. Technical Report 1063, Institute of Computer Science, Academy of

Sciences of the Czech Republic, Prague.

85. Saad, Y. (2003). Iterative methods for sparse linear systems (2nd edition). SIAM, Philadelphia, PA.

86. Saad, Y. (1981). Krylov subspace methods for solving large unsymmetric linear

systems. Math. Comput., 37, 105-126.

87. Saad, Y. & Schultz, M. H. (1983). GMRES: a generalized minimal residual

algorithm for solving nonsymmetric linear systems. Technical Report 254, Yale

University New Haven, USA.

88. Saad, Y. (1984). Practical use of some Krylov subspace methods for solving

indefinite and unsymmetric linear systems. SIAM J. Sci. Stat. Comput., 5, 203-228.

89. Sheng, X., Su, Y. & Chen, G. (2009). A modification of minimal residual iterative

method to solve linear systems. Math. Prob. Engg., DOI:10.1155/2009/794589.

90. Shi, Y. (2007). Modified quasi-Newton methods for solving systems of linear

equations. Int. J. Contemp. Math. Sci., 15, 737-744.

91. Shi, Y. (1995). Solving linear systems involved in constrained optimization. Lin.

Alg. Appl., 229, 175-189.

92. Stein, P. & Rosenberg, R.L. (1948). On the solution of linear simultaneous equations

by iteration, J. London Math. Soc., 23, 111-118.


93. Stetter, H. I. (1973). Analysis of discretization methods for ordinary differential

equations: From tracts in natural philosophy. Springer-Verlag, New York.

94. Strassen, V. (1969). Gaussian elimination is not optimal. Numer. Math., 13(4), 354-

356.

95. Thomas, G. B., Weir, M. D., Hass, J. & Giordano, F. R. (2004). Thomas' calculus (11th edition). Addison-Wesley, Boston.

96. Turing, A. M. (1948). Rounding-off errors in matrix processes. Quart. J. Mech.

Appl. Math., 1, 287–308.

97. Varga, R. S. (2000). Matrix iterative analysis, (Second edition), Springer, New York.

98. Vasin, V. V. & Eremin, I. I. (2009). Operators and iterative processes of Fejér type:

theory and applications. Walter de Gruyter, Berlin.

99. Wang, A., Wang, H. & Deng, Y. (2011). Interval algorithm for absolute value

equations. Cent. Eur. J. Math., 9(5), 1171-1184.

100. Yong, L. (2010). Particle swarm optimization for absolute value equations. J.

Comput. Inf. Syst., 7, 2359-2366.

101. Yong, L., Liu, S., Zheng S. & Deng, F. (2011). Smoothing Newton method for

absolute value equations based on aggregate function. Int. J. Phy. Sci., 6 (23),

5399-5405.

102. Young, D. M. (1950). Iterative methods for solving partial differential equations of

elliptic type. PhD Thesis, Harvard University, Cambridge, MA.

103. Yuan, D. & Song, Y. (2003). Modified AOR method for linear complementarity

problem. Appl. Math. Comput., 140, 53-67.

104. Yusufoglu, E. (2009). An improvement to the homotopy perturbation method for solving

system of linear equations. Comput. Math. Appl., 58, 2231-2235.