
A PRIMER ON MATRICES

Definition

A rectangular table of m rows and n columns of numbers,

\[
A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & & & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix}
\]

is called a rectangular matrix of order m × n. The matrix itself is written in bold; each element carries two subscripts, the first (i) for its row and the second (j) for its column. If m = n, the matrix is called a square matrix. Square matrices are more common and thus are the subject of this module. A vector is a special matrix of a single column.

Special Matrices

1. Diagonal matrix (D)

2. Identity (unit) matrix (I)

3. Symmetric matrix

4. Anti-symmetric (skew-symmetric) matrix

Simple Matrix Operations

H Ouyang, University of Liverpool, 2003



Two matrices are said to be equal if they have the same order and identical corresponding elements:

\(A = B\) if and only if \(a_{ij} = b_{ij}\) for all i and j.

1. Addition and Subtraction

2. Scalar Multiplication

3. Matrix Multiplication

Two matrices can be multiplied only when the number of columns of the 1st matrix equals the number of rows of the 2nd matrix. In general, \(AB \neq BA\). If \(AB = BA\), the two matrices are said to commute. Diagonal matrices commute with each other.

4. Division: there is no matrix division as such; the inverse matrix is explained later.

5. Transpose

Commutative Law: \(A + B = B + A\)


Associative Law: \((AB)C = A(BC)\)
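The operation rules above can be checked numerically. A minimal sketch using NumPy; the matrices A, B and C are arbitrary examples of my own, not from the notes:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])
C = np.array([[2.0, 0.0], [0.0, 5.0]])

# Addition commutes: A + B = B + A
assert np.allclose(A + B, B + A)

# Matrix multiplication in general does not: AB != BA
assert not np.allclose(A @ B, B @ A)

# Diagonal matrices commute with each other
D1 = np.diag([2.0, 3.0])
D2 = np.diag([4.0, 5.0])
assert np.allclose(D1 @ D2, D2 @ D1)

# Associative law: (AB)C = A(BC)
assert np.allclose((A @ B) @ C, A @ (B @ C))
print("all checks passed")
```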

Determinant of a square matrix

The determinant of a 1 × 1 matrix \([a_{11}]\) is \(a_{11}\).

The determinant of a 2 × 2 matrix is

\[\det A = a_{11}a_{22} - a_{12}a_{21}\]

The determinant of a 3 × 3 matrix is

\[\det A = a_{11}(a_{22}a_{33} - a_{23}a_{32}) - a_{12}(a_{21}a_{33} - a_{23}a_{31}) + a_{13}(a_{21}a_{32} - a_{22}a_{31})\]

The determinant of an n × n matrix is defined as (information only)

\[\det A = \sum_{k=1}^{n} a_{lk} C_{lk} \quad \text{for any row } l\]

(l, k)-cofactor: \(C_{lk} = (-1)^{l+k} M_{lk}\); the signs \((-1)^{l+k}\) alternate in a checkerboard pattern of + and −.

(l, k)-minor: \(M_{lk}\) is the determinant of the matrix obtained by taking away the lth row and kth column.

Inverse (for information only)
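The cofactor expansion can be sketched as a short recursive routine. This is an illustration of the definition only, not an efficient method; the function name is my own:

```python
# Sketch of the cofactor expansion along the first row:
# det(A) = sum_k a_1k * (-1)^(1+k) * M_1k
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for k in range(n):
        # (1, k)-minor: delete row 0 and column k
        minor = [row[:k] + row[k + 1:] for row in A[1:]]
        total += ((-1) ** k) * A[0][k] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # a11*a22 - a12*a21 = 1*4 - 2*3 = -2
```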

If there exists a square matrix B such that \(AB = BA = I\), then B is the inverse of the square matrix A, denoted as \(A^{-1}\).

Example 1: (try it out)


If det(A) = 0, A is said to be singular and \(A^{-1}\) does not exist.

Computation of det(A) and \(A^{-1}\) is complicated, the latter in particular. Computer programs implementing suitable algorithms must be used for high-order matrices.
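A sketch of how such library routines are used in practice, here with NumPy's built-in functions; the matrices are arbitrary examples of my own:

```python
import numpy as np

A = np.array([[4.0, 7.0], [2.0, 6.0]])
print(np.linalg.det(A))                 # 4*6 - 7*2 = 10
Ainv = np.linalg.inv(A)
print(np.allclose(A @ Ainv, np.eye(2))) # A * A^-1 = I

# A singular matrix has det(A) = 0 and no inverse
S = np.array([[1.0, 2.0], [2.0, 4.0]])  # second row = 2 * first row
print(abs(np.linalg.det(S)) < 1e-12)
```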

Determinant and inverse arise in simultaneous equations.

SIMULTANEOUS LINEAR EQUATIONS

\[A x = b\]

where A is a known square matrix, x the vector of the unknowns and b a known vector.


The formal solution is (information only)

\[x = A^{-1} b\]

or (Cramer's rule)

\[x_i = \frac{\det A_i}{\det A}\]

where \(A_i\) is formed by replacing the ith column of A with b.

The formal solutions are useful only in mathematical manipulation. In practice, solving a system of linear equations is carried out using computer programs, which do not actually use the above formal solutions.
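A minimal sketch comparing Cramer's rule with a library solver, which is what one actually uses; A and b are arbitrary examples of my own:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])

# Cramer's rule: x_i = det(A_i) / det(A), A_i = A with column i replaced by b
x_cramer = np.empty(2)
for i in range(2):
    Ai = A.copy()
    Ai[:, i] = b
    x_cramer[i] = np.linalg.det(Ai) / np.linalg.det(A)

x_solve = np.linalg.solve(A, b)  # what a program actually calls in practice
assert np.allclose(x_cramer, x_solve)
print(x_solve)                   # solution x = [0.8, 1.4]
```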

Methods for Solving Linear Equations

1. Substitution

Substitute any one equation into all the others; this reduces the number of unknowns by one and leads to a system with one fewer equation. Repeat this process until only one unknown is left in an equation. Find the value of this unknown, then back-substitute and gradually find the values of all the other unknowns.

This is the simplest way and is used to solve low-order (maximum 4) systems of linear equations.

2. Gaussian elimination (for information only)

One equation can be multiplied by a known factor on both sides and added to both sides of any other equation. This operation does not change the solution of the simultaneous equations. This


property is exploited to eliminate some coefficients of A and reduce it to a triangular form, which can then be solved very easily.
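The elimination and back-substitution steps can be sketched as follows. This is a minimal illustration (with partial pivoting included); the function name and test system are my own:

```python
import numpy as np

def gauss_solve(A, b):
    A = A.astype(float).copy()
    b = b.astype(float).copy()
    n = len(b)
    # Forward elimination: reduce A to upper-triangular form
    for k in range(n - 1):
        # Pivoting: bring the largest-magnitude entry of column k to the diagonal
        p = k + np.argmax(np.abs(A[k:, k]))
        A[[k, p]] = A[[p, k]]
        b[[k, p]] = b[[p, k]]
        for i in range(k + 1, n):
            f = A[i, k] / A[k, k]
            A[i, k:] -= f * A[k, k:]
            b[i] -= f * b[k]
    # Back-substitution on the triangular system
    x = np.empty(n)
    for i in range(n - 1, -1, -1):
        x[i] = (b[i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

A = np.array([[2.0, 1.0, -1.0], [-3.0, -1.0, 2.0], [-2.0, 1.0, 2.0]])
b = np.array([8.0, -11.0, -3.0])
print(gauss_solve(A, b))  # solution of this 3x3 system: [2, 3, -1]
```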

To achieve an accurate solution, a pivot element (the available coefficient of largest magnitude) may be chosen and the rows of A rearranged.

3. Iteration (for information only)

Re-cast the original equation into the form

\[x = Cx + d\]

Iterate

\[x^{(k+1)} = C x^{(k)} + d\]

Sometimes the iteration does not converge. Pivot elements should be chosen and the equations rearranged. Gaussian elimination is the most commonly used method for solving simultaneous linear equations.
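The re-cast iteration can be sketched as follows. The splitting used here (the Jacobi choice \(C = -D^{-1}R\), \(d = D^{-1}b\), with D the diagonal of A and R the rest) is one common choice, and the diagonally dominant test matrix is my own:

```python
import numpy as np

def jacobi(A, b, iters=50):
    D = np.diag(A)         # diagonal part of A
    R = A - np.diag(D)     # off-diagonal remainder
    x = np.zeros_like(b)
    for _ in range(iters):
        # x_(k+1) = D^-1 (b - R x_k), i.e. x = Cx + d with C = -D^-1 R
        x = (b - R @ x) / D
    return x

# A diagonally dominant matrix, for which the iteration converges
A = np.array([[4.0, 1.0], [2.0, 5.0]])
b = np.array([6.0, 12.0])
print(jacobi(A, b))        # converges to the solution [1, 2]
```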

4. Matrix decomposition (for information only)

A method similar to Gaussian elimination is to decompose A as the product of a lower triangular matrix L, whose diagonal elements are all one, and an upper triangular matrix U:

\[A = LU\]


Then first solve \(Ly = b\) and then solve \(Ux = y\). Both equations can be solved readily because the coefficient matrices involved are triangular.

If A is symmetric, A can be decomposed as \(A = LDL^{T}\), where L is a lower triangular matrix with all diagonal elements equal to one and D a diagonal matrix. Symmetry allows a faster solution.
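The two-stage solve can be sketched as follows. This is a minimal LU decomposition without pivoting, written for illustration; the function name and the 2 × 2 example are my own, and library triangular solvers stand in for the forward and back substitutions:

```python
import numpy as np

def lu(A):
    n = A.shape[0]
    L = np.eye(n)                 # unit diagonal, as in the text
    U = A.astype(float).copy()
    for k in range(n - 1):
        for i in range(k + 1, n):
            L[i, k] = U[i, k] / U[k, k]
            U[i, k:] -= L[i, k] * U[k, k:]
    return L, U

A = np.array([[4.0, 3.0], [6.0, 3.0]])
b = np.array([10.0, 12.0])
L, U = lu(A)
y = np.linalg.solve(L, b)  # forward substitution: L y = b
x = np.linalg.solve(U, y)  # back substitution:   U x = y
assert np.allclose(A @ x, b)
print(x)                   # solution [1, 2]
```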

Some useful conclusions:

- If det(A) ≠ 0 and b ≠ 0, the solution is unique.
- If det(A) ≠ 0 and b = 0, the only solution is the zero vector.
- If det(A) = 0 and b = 0, there are an infinite number of solutions, including zero.
- If det(A) = 0 and b ≠ 0, there is no solution or an infinite number of solutions.

This chart is important and should be understood and preferably remembered.

Gaussian elimination, LU decomposition and LDLT decomposition are normally available in a linear algebra software package. Users need to know the input and output parameters of


these subroutines (or procedures) and call them in an appropriate way.

Some more convenient software packages, such as MATLAB, MATHEMATICA and MAPLE, provide the inverse of a matrix through built-in code. In this case, a solution can be found with little programming effort on the user's part.

Students must know how to (1) calculate the determinant of a matrix of order three or less and (2) solve a system of simultaneous equations of no more than three unknowns.


EIGENVALUE PROBLEMS

A linear equation in the scalar λ and the vector x of the form

\[A x = \lambda x \quad \text{or} \quad (A - \lambda I)x = 0 \tag{1}\]

apparently has the zero solution x = 0 (check the flowchart). This zero solution is called a trivial solution, for it is obvious and needs no mathematical effort. Only when \(A - \lambda I\) is a singular matrix can the above equation have a non-zero (non-trivial) solution \(x \neq 0\).

For \(A - \lambda I\) to be singular, that is, \(\det(A - \lambda I) = 0\), λ cannot be arbitrary. A value of λ which permits a non-trivial solution \(x \neq 0\) of equation (1) is called an eigenvalue of matrix A. The corresponding solution is called an eigenvector. Together a pair (λ, x) is called an eigen-solution of A. There can be as many eigenvalues (eigenvectors) as the order of the square matrix.
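A small numerical illustration of equation (1), sketched with NumPy; the symmetric 2 × 2 matrix here is an arbitrary example of my own, not one from the notes:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])
vals, vecs = np.linalg.eig(A)
print(np.sort(vals))  # the eigenvalues of [[2,1],[1,2]] are 1 and 3

# Each column of vecs is an eigenvector: A x = lambda x
for i in range(2):
    assert np.allclose(A @ vecs[:, i], vals[i] * vecs[:, i])
```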

A generalised eigenvalue problem is defined as solving

\[A x = \lambda B x \quad \text{or} \quad (A - \lambda B)x = 0 \tag{2}\]

for (λ, x) with \(x \neq 0\), where neither A nor B is a diagonal matrix. For an eigen-solution to exist, the following equation must hold:

\[\det(A - \lambda B) = 0 \tag{3}\]


The straightforward way of solving an eigenvalue problem is to expand the determinant (the determinant or characteristic polynomial method)

\[\det(A - \lambda I) = 0\]

or

\[\det(A - \lambda B) = 0\]

into a nonlinear equation (the characteristic equation) in λ as

\[\lambda^{n} + c_{n-1}\lambda^{n-1} + \dots + c_{1}\lambda + c_{0} = 0 \tag{4}\]

where, for a conventional eigenvalue problem (B = I), the coefficients \(c_i\) are determined by the elements of A.

A polynomial equation, such as (4), can always be factorised as

\[(\lambda - \lambda_{1})(\lambda - \lambda_{2})\cdots(\lambda - \lambda_{n}) = 0 \tag{5}\]

where \(\lambda_{1}, \dots, \lambda_{n}\) are the roots of the characteristic equation (4) (the eigenvalues). By comparing equations (4) and (5), one can see

\[\lambda_{1} + \lambda_{2} + \dots + \lambda_{n} = -c_{n-1} = \mathrm{trace}(A), \qquad \lambda_{1}\lambda_{2}\cdots\lambda_{n} = (-1)^{n} c_{0} = \det(A)\]

These properties can be used to check whether the eigenvalues obtained are correct, or to find the remaining one or two eigenvalues. When a λ is found, its corresponding eigenvector can be found by analytically solving equation (1) after substituting the λ back into


this homogeneous equation. Like computing a determinant, this way works well only for matrices of very low order (maximum 4). Using the methods for solving nonlinear equations, for example the binary search, is inefficient for finding all the eigenvalues, and is very difficult for finding complex eigenvalues.

Some useful properties of eigenvalue problems

Uniqueness: If x is an eigenvector, then cx is also an eigenvector, where c is a non-zero constant. For this reason, x is usually normalised (scaled according to a rule), and a normalised eigenvector is unique.

Normalisation: (1) . (2) . (3) .

Orthogonality: If \(x_i\) and \(x_j\) are eigenvectors of two distinct eigenvalues, then \(x_i^{T} x_j = 0\) for a symmetric A.

Example 2: find the eigen-solutions of .

Solution:

eigenvalues: . For \(x_1\): substituting \(\lambda_1\) into


this results in , where is an arbitrary non-zero number. After normalization (1), . The other two eigenvectors can be found similarly.

All the normalized eigenvectors form a so-called eigenvector matrix as

Eigenvalue problems are very common in science and engineering. The natural frequencies of a system in undamped free vibration appear as the square roots of the eigenvalues λ of a (generalised) eigenvalue problem. The stability of an aircraft can be assessed by solving an eigenvalue problem.


Free Vibration

A mass-spring system:

Free-body diagram:

Equations of motion:

\[m_1\ddot{x}_1 = -k_1 x_1 - k_2(x_1 - x_2), \qquad m_2\ddot{x}_2 = -k_3 x_2 + k_2(x_1 - x_2)\]

or

\[m_1\ddot{x}_1 + (k_1 + k_2)x_1 - k_2 x_2 = 0, \qquad m_2\ddot{x}_2 - k_2 x_1 + (k_2 + k_3)x_2 = 0\]

In matrix form as

\[\begin{pmatrix} m_1 & 0 \\ 0 & m_2 \end{pmatrix}\begin{pmatrix} \ddot{x}_1 \\ \ddot{x}_2 \end{pmatrix} + \begin{pmatrix} k_1 + k_2 & -k_2 \\ -k_2 & k_2 + k_3 \end{pmatrix}\begin{pmatrix} x_1 \\ x_2 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \end{pmatrix}\]

This leads to the following generalised eigenvalue problem when substituting \(x = a\sin\omega t\) into the above matrix equation of motion:

\[K a = \omega^{2} M a\]

The characteristic equation is

\[\det(K - \omega^{2} M) = 0\]
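As a numerical check of this formulation, the generalised problem can be reduced to a standard one and solved with a library routine. The mass and stiffness values used here (all equal to one) are my own illustrative choice, not values from the notes:

```python
import numpy as np

# Illustrative parameter values (assumed, not from the handout)
m1 = m2 = 1.0
k1 = k2 = k3 = 1.0

M = np.diag([m1, m2])
K = np.array([[k1 + k2, -k2], [-k2, k2 + k3]])

# Reduce K a = omega^2 M a to the standard form (M^-1 K) a = omega^2 a
w2, modes = np.linalg.eig(np.linalg.inv(M) @ K)
print(np.sort(np.sqrt(w2)))  # the two natural frequencies omega_1, omega_2
```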


[Figures: the mass-spring system of two masses m1 and m2 with displacements x1 and x2, connected in a chain by springs k1, k2 and k3; the free-body diagram shows the spring forces k1x1, k2(x1 − x2) and k3x2 acting on the masses.]


or

This quadratic equation has two roots of . Suppose and . It follows

The two analytic solutions are and . Equation (2) allows the ratio of the two amplitudes to be found as

\[\frac{a_1}{a_2} = \frac{k_2}{k_1 + k_2 - \omega^{2} m_1} \tag{6}\]

Substitution of one eigenvalue at a time into equation (6) yields

For : For :

There are many eigenvalue methods or algorithms. One simple algorithm is iteration. For an eigenvalue problem defined as \(Ax = \lambda x\):

(1) Iteration (the power method)

\[x^{(k+1)} = A x^{(k)} \quad (\text{rescaled at each step})\]


[Figure: the two mode shapes, plotted in terms of the amplitudes a1 and a2; the amplitude ratios a1 : a2 are 0.73 : 1 for the first mode and −2.73 : 1 for the second.]


allows the eigenvalue of greatest absolute (numerical) value to be found upon convergence.
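A minimal sketch of this iteration. Rescaling by the largest element at each step and estimating λ by the Rayleigh quotient are my own implementation choices; the test matrix is also my own:

```python
import numpy as np

def power_method(A, iters=100):
    x = np.ones(A.shape[0])
    for _ in range(iters):
        x = A @ x
        x = x / np.max(np.abs(x))   # rescale to avoid overflow
    lam = (x @ A @ x) / (x @ x)     # Rayleigh quotient estimate of lambda
    return lam, x

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam, x = power_method(A)
print(lam)  # converges to 3, the largest eigenvalue of [[2,1],[1,2]]
```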

(2) Iteration

\[x^{(k+1)} = A^{-1} x^{(k)}\]

allows the eigenvalue of smallest absolute value to be found upon convergence. However, since a matrix inverse is difficult to compute, the power method with shift is normally used instead.

(3) Define

\[B = A - \lambda_{\max} I \quad (\text{shift})\]

The shifted iteration

\[x^{(k+1)} = B x^{(k)}\]

allows the eigenvalue μ of B of greatest absolute value to be found, giving another (likely to be the minimum) eigenvalue of matrix A as \(\lambda = \mu + \lambda_{\max}\).
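The shift can be sketched as follows, reusing the power method from above; the matrix and starting vector are my own illustrative choices:

```python
import numpy as np

def power_method(A, iters=200):
    x = np.ones(A.shape[0]) + np.arange(A.shape[0])  # generic starting vector
    for _ in range(iters):
        x = A @ x
        x = x / np.max(np.abs(x))
    return (x @ A @ x) / (x @ x)    # Rayleigh quotient estimate

A = np.array([[2.0, 1.0], [1.0, 2.0]])
lam_max = power_method(A)           # largest eigenvalue of A (here 3)
B = A - lam_max * np.eye(2)         # shift
mu = power_method(B)                # largest-magnitude eigenvalue of B
lam_min = mu + lam_max              # recovers the smallest eigenvalue of A
print(lam_max, lam_min)
```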

Example 3: try the matrix shown in Example 2. Using the power method allows one to find the maximum eigenvalue as


Construct a new matrix as

\[B = A - \lambda_{\max} I\]

This new matrix has the following eigenvalues:

Using the power method allows one to find the maximum (numerical) eigenvalue of this matrix as

Therefore the minimum eigenvalue of A is

It is also known that \(\lambda_1 + \lambda_2 + \lambda_3 = \mathrm{trace}(A)\).

Therefore the intermediate eigenvalue can be found as

which is of course the same as that determined by the determinant method.



Students must know how to calculate the eigen-solutions of a matrix with an order of three or lower.
