
Inverting Matrices

Determinants and Matrix Multiplication

Determinants

• Square matrices have determinants, which are useful in other matrix operations, especially inversion.

• For a second-order square matrix, A, the determinant is

  A = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix}, \qquad
  |A| = a_{11}a_{22} - a_{21}a_{12}
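The formula can be checked against PROC IML's built-in DET function; a minimal sketch (the matrix values here are arbitrary, chosen only for illustration):

  proc iml;
    A = {4 2,
         1 3};                                  /* any 2 x 2 matrix */
    byHand  = A[1,1]*A[2,2] - A[2,1]*A[1,2];    /* a11*a22 - a21*a12 */
    builtIn = det(A);
    print byHand builtIn;                       /* both print 10 */
  quit;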

Consider the following bivariate raw data matrix

Subject # 1 2 3 4 5

X 12 18 32 44 49

Y 1 3 2 4 5

from which the following XY variance-covariance matrix is obtained:

        X      Y
  X   256   21.5
  Y  21.5    2.5

  r_{XY} = \frac{COV_{XY}}{S_X S_Y} = \frac{21.5}{\sqrt{256(2.5)}} \approx .85

  |A| = 256(2.5) - 21.5(21.5) = 177.75

Think of the variance-covariance matrix as containing information about the two variables – the more variable X and Y are, the more information you have.

Any redundancy between X and Y reduces the total amount of information you have -- to the extent that you have covariance between X and Y, you have less total information.
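As a sketch of where those numbers come from, the variance-covariance matrix can be reproduced from the raw data in PROC IML (the matrix names here are illustrative, not from the slides):

  proc iml;
    X = {12, 18, 32, 44, 49};
    Y = {1, 3, 2, 4, 5};
    raw = X || Y;                           /* 5 x 2 raw data matrix */
    n = nrow(raw);
    dev = raw - j(n, 1, 1) * raw[:,];       /* deviations from the column means */
    S = t(dev) * dev / (n - 1);             /* variance-covariance matrix: 256 21.5, 21.5 2.5 */
    print S, (det(S));                      /* determinant is 177.75 */
  quit;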

Generalized Variance

• The determinant tells you how much information the matrix has about the variance in the variables (the generalized variance), after removing redundancy among the variables.

• We took the product of the variances and then subtracted the product of the covariances (redundancy).

Imagine a Rectangle

• Its width represents information on X.

• Its height represents information on Y.

• X is perpendicular to Y (orthogonal), thus r_XY = 0.

• The area of the rectangle represents the total information on X and Y.

• With covariance = 0, the determinant = the product of the two variances minus 0.

Imagine a Parallelogram

• Allowing X and Y to be correlated with one another moves the angle between height and width away from 90 degrees.

• As the angle moves further and further away from 90 degrees, the area of the parallelogram is also reduced, eventually reaching zero when X and Y are perfectly correlated (see the arithmetic sketch after this list).

• See the Generalized Variance video clip in BlackBoard.
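Putting numbers to the picture, a sketch of the arithmetic using the variance-covariance matrix from the earlier slide:

  cov = 0:     |A| = 256(2.5) - 0 = 640 (the full rectangle)
  cov = 21.5:  |A| = 256(2.5) - 21.5(21.5) = 177.75
  r_{XY} = 1:  |A| = 0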

Consider This Data Matrix

Subject # 1 2 3 4 5

X 10 20 30 40 50

Y 1 2 3 4 5

Variance-Covariance Matrix

        X     Y
  X   250    25
  Y    25   2.5

  r_{XY} = \frac{COV_{XY}}{S_X S_Y} = \frac{25}{\sqrt{250(2.5)}} = 1

  |A| = 250(2.5) - 25(25) = 0

Since X and Y are perfectly correlated, the generalized variance is nil.
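A quick PROC IML check of this singular case (a minimal sketch):

  proc iml;
    XY = {250 25,
          25  2.5};        /* variance-covariance matrix of the perfectly correlated data */
    print (det(XY));       /* 0: the generalized variance is nil, so the matrix cannot be inverted */
  quit;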

Identity Matrix

• An identity matrix has 1’s on its main diagonal, 0’s elsewhere.

1 0 0

0 1 0

0 0 1
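In PROC IML an identity matrix of any order comes from the I function, and multiplying a conformable matrix by it leaves the matrix unchanged (a minimal sketch):

  proc iml;
    ident3 = I(3);          /* the 3 x 3 identity matrix shown above */
    A = {256 21.5,
         21.5 2.5};
    print (A * I(2));       /* A is unchanged: AI = A */
  quit;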

Inversion

• The inverse of A, written A^{-1}, is the matrix which, when multiplied by A, yields the identity matrix. That is, AA^{-1} = A^{-1}A = I.

• With scalars, multiplication by the inverse yields the scalar identity:

  a \cdot \frac{1}{a} = 1

• Multiplication by an inverse is like division with scalars:

  a \cdot \frac{1}{b} = \frac{a}{b}

Inverting a 2x2 Matrix

• For our original variance/covariance matrix:

  A^{-1} = \frac{1}{|A|} \begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}
         = \frac{1}{177.75} \begin{bmatrix} 2.5 & -21.5 \\ -21.5 & 256 \end{bmatrix}

Multiplying a Scalar by a Matrix

• Simply multiply each matrix element by the scalar (1/177.75 in this case).

• The resulting inverse matrix is:

  A^{-1} = \begin{bmatrix} .014064698 & -.120956399 \\ -.120956399 & 1.440225035 \end{bmatrix}

AA^{-1} = A^{-1}A = I

In general, for 2 × 2 matrices each element of the product is a row of the first matrix times a column of the second (row 1 · col 1, row 1 · col 2, row 2 · col 1, row 2 · col 2):

  \begin{bmatrix} a & b \\ c & d \end{bmatrix}
  \begin{bmatrix} w & x \\ y & z \end{bmatrix}
  =
  \begin{bmatrix} aw + by & ax + bz \\ cw + dy & cx + dz \end{bmatrix}

For our matrices:

  \begin{bmatrix} .014064698 & -.120956399 \\ -.120956399 & 1.440225035 \end{bmatrix}
  \begin{bmatrix} 256 & 21.5 \\ 21.5 & 2.5 \end{bmatrix}
  =
  \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}
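The row-by-column rule can be checked directly in PROC IML, where a row of one matrix times a column of another gives a single element of the product (a sketch, using the inverse values from the previous slide):

  proc iml;
    Ainv = {0.014064698 -0.120956399,
           -0.120956399  1.440225035};
    A    = {256  21.5,
            21.5  2.5};
    elem11 = Ainv[1,] * A[,1];       /* row 1 times column 1 = 1, within rounding */
    print elem11, (Ainv * A);        /* the full product is the identity matrix */
  quit;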

The Determinant of a Third-Order Square Matrix

  |A| = \left| \begin{matrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{matrix} \right|
      = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32}
        - a_{31}a_{22}a_{13} - a_{32}a_{23}a_{11} - a_{33}a_{21}a_{12}
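A worked numeric example, with the hand expansion checked against DET (the matrix values are arbitrary, chosen only for illustration):

  proc iml;
    B = {1 2 3,
         4 5 6,
         7 8 10};
    byHand = 1*5*10 + 2*6*7 + 3*4*8 - 7*5*3 - 8*6*1 - 10*4*2;   /* = -3 */
    print byHand (det(B));                                       /* both are -3 */
  quit;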

Matrix Multiplication for a 3 x 3

  \begin{bmatrix} a & b & c \\ d & e & f \\ g & h & i \end{bmatrix}
  \begin{bmatrix} r & s & t \\ u & v & w \\ x & y & z \end{bmatrix}
  =
  \begin{bmatrix}
  ar + bu + cx & as + bv + cy & at + bw + cz \\
  dr + eu + fx & ds + ev + fy & dt + ew + fz \\
  gr + hu + ix & gs + hv + iy & gt + hw + iz
  \end{bmatrix}

As before, each element of the product is a row of the first matrix times the corresponding column of the second (row 1 · col 1 through row 3 · col 3).
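In PROC IML the * operator applies the same row-by-column rule to matrices of any conformable size (a minimal sketch with arbitrary values):

  proc iml;
    A = {1 0 2,
         0 1 0,
         3 0 1};
    B = {1 2 0,
         0 1 0,
         4 0 1};
    C = A * B;          /* row i of A times column j of B gives C[i,j] */
    print C;            /* e.g. C[1,1] = 1*1 + 0*0 + 2*4 = 9 */
  quit;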

SAS Will Do It For You

  proc iml;
    reset print;                 /* display each matrix when it is created */
    XY = {256 21.5,              /* enter the matrix XY; a comma ends each row */
          21.5 2.5};             /* the whole matrix goes inside { } */
    determinant = det(XY);       /* find the determinant */
    inverse = inv(XY);           /* find the inverse */
    identity = XY * inverse;     /* multiply by the inverse */
  quit;

  XY           2 rows   2 cols     256         21.5
                                   21.5         2.5

  DETERMINANT  1 row    1 col      177.75

  INVERSE      2 rows   2 cols     0.0140647   -0.120956
                                  -0.120956     1.440225

  IDENTITY     2 rows   2 cols     1           -2.22E-16
                                  -2.08E-17     1

The off-diagonal entries of IDENTITY (-2.22E-16 and -2.08E-17) are floating-point round-off; for practical purposes they are zero, so XY*inverse is the identity matrix.