Transcript

Lecture 4: πΏπ‘ˆ Decomposition and Matrix Inverse

Thang Huynh, UC San Diego, 1/17/2018

Gaussian elimination revisited

β–Ά Example. Keeping track of the elementary matrices during Gaussian elimination on 𝐴:

$$A = \begin{bmatrix} 2 & 1 \\ 4 & -6 \end{bmatrix}$$

$$EA = \begin{bmatrix} 1 & 0 \\ -2 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 4 & -6 \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 0 & -8 \end{bmatrix}.$$

Note that

$$A = E^{-1} \begin{bmatrix} 2 & 1 \\ 0 & -8 \end{bmatrix} = \begin{bmatrix} 1 & 0 \\ 2 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 \\ 0 & -8 \end{bmatrix}.$$

We factored 𝐴 as the product of a lower and upper triangular matrix! We say that 𝐴 has a triangular factorization.
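This factorization is easy to check numerically. Below is a minimal NumPy sketch (illustrative, not part of the original lecture) that forms the elementary matrix 𝐸, verifies that 𝐸𝐴 is upper triangular, and confirms 𝐴 = πΏπ‘ˆ:

```python
import numpy as np

A = np.array([[2., 1.],
              [4., -6.]])
E = np.array([[1., 0.],
              [-2., 1.]])        # subtracts 2 * (row 1) from row 2

U = E @ A                        # [[2, 1], [0, -8]] -- upper triangular
L = np.linalg.inv(E)             # [[1, 0], [2, 1]]  -- lower triangular

print(U)
print(L)
print(np.allclose(L @ U, A))     # True: A = L U
```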


Gaussian elimination revisited

𝐴 = πΏπ‘ˆ is known as the LU decomposition of 𝐴.

β–Ά Definition. lower triangular:

$$\begin{bmatrix} * & 0 & 0 & 0 & 0 \\ * & * & 0 & 0 & 0 \\ * & * & * & 0 & 0 \\ * & * & * & * & 0 \\ * & * & * & * & * \end{bmatrix}$$

upper triangular:

$$\begin{bmatrix} * & * & * & * & * \\ 0 & * & * & * & * \\ 0 & 0 & * & * & * \\ 0 & 0 & 0 & * & * \\ 0 & 0 & 0 & 0 & * \end{bmatrix}$$
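For reference, NumPy's `np.tril` and `np.triu` produce exactly these zero patterns; a small illustrative sketch (not from the slides):

```python
import numpy as np

M = np.arange(1, 26, dtype=float).reshape(5, 5)

print(np.tril(M))   # keeps the lower triangle, zeros above the diagonal
print(np.triu(M))   # keeps the upper triangle, zeros below the diagonal
```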


πΏπ‘ˆ decompostion

β–Ά Example. Factor
$$A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -2 & 7 & 2 \end{bmatrix}$$
as 𝐴 = πΏπ‘ˆ.

β–Ά Solution.

$$E_1 A = \begin{bmatrix} 1 & 0 & 0 \\ -2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -2 & 7 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -8 & -2 \\ -2 & 7 & 2 \end{bmatrix}$$

$$E_2(E_1 A) = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 & 1 \\ 0 & -8 & -2 \\ -2 & 7 & 2 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -8 & -2 \\ 0 & 8 & 3 \end{bmatrix}$$

$$E_3 E_2 E_1 A = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 & 1 \\ 0 & -8 & -2 \\ 0 & 8 & 3 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 1 \\ 0 & -8 & -2 \\ 0 & 0 & 1 \end{bmatrix} = U$$


πΏπ‘ˆ decompostion

$$E_3 E_2 E_1 A = U \;\Longrightarrow\; A = E_1^{-1} E_2^{-1} E_3^{-1} U$$

The factor 𝐿 is given by

$$L = E_1^{-1} E_2^{-1} E_3^{-1} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ -1 & 0 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & -1 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -1 & 1 \end{bmatrix}$$
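Continuing the illustrative NumPy sketch (same matrices as before, not from the slides), 𝐿 can be obtained by multiplying the inverses of the elementary matrices; each inverse simply flips the sign of the off-diagonal multiplier:

```python
import numpy as np

E1 = np.array([[1., 0., 0.], [-2., 1., 0.], [0., 0., 1.]])
E2 = np.array([[1., 0., 0.], [0., 1., 0.], [1., 0., 1.]])
E3 = np.array([[1., 0., 0.], [0., 1., 0.], [0., 1., 1.]])

L = np.linalg.inv(E1) @ np.linalg.inv(E2) @ np.linalg.inv(E3)
print(L)   # [[1, 0, 0], [2, 1, 0], [-1, -1, 1]]
```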


πΏπ‘ˆ decompostion

We found the following πΏπ‘ˆ decomposition of 𝐴:

$$A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -2 & 7 & 2 \end{bmatrix} = LU = \begin{bmatrix} 1 & 0 & 0 \\ 2 & 1 & 0 \\ -1 & -1 & 1 \end{bmatrix} \begin{bmatrix} 2 & 1 & 1 \\ 0 & -8 & -2 \\ 0 & 0 & 1 \end{bmatrix}.$$
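A quick numerical check of this factorization (a sketch, not from the slides). Note that library routines such as `scipy.linalg.lu` use partial pivoting, so their 𝐿 and π‘ˆ may differ from the ones derived by hand here:

```python
import numpy as np

A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])
L = np.array([[ 1.,  0., 0.],
              [ 2.,  1., 0.],
              [-1., -1., 1.]])
U = np.array([[ 2.,  1.,  1.],
              [ 0., -8., -2.],
              [ 0.,  0.,  1.]])

print(np.allclose(L @ U, A))   # True: A = L U
```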


Why πΏπ‘ˆ decomposition?Once we have 𝐴 = πΏπ‘ˆ, it is simple to solve 𝐴π‘₯π‘₯π‘₯ = 𝑏𝑏𝑏.

𝐴π‘₯π‘₯π‘₯ = 𝑏𝑏𝑏𝐿(π‘ˆπ‘₯π‘₯π‘₯) = 𝑏𝑏𝑏

𝐿𝑐𝑐𝑐 = 𝑏𝑏𝑏 and π‘ˆπ‘₯π‘₯π‘₯ = 𝑐𝑐𝑐.Both of the final systems are triangular and hence easily solved:

β€’ 𝐿𝑐𝑐𝑐 = 𝑏𝑏𝑏 by forward substitution to find 𝑐𝑐𝑐, and thenβ€’ π‘ˆπ‘₯π‘₯π‘₯ = 𝑐𝑐𝑐 by backward substitution to find π‘₯π‘₯π‘₯.

β–Ά Example. Solve

⎑⎒⎒⎣

2 1 14 βˆ’6 0

βˆ’2 7 2

⎀βŽ₯βŽ₯⎦

π‘₯π‘₯π‘₯ =⎑⎒⎒⎣

410βˆ’3

⎀βŽ₯βŽ₯⎦

.
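A sketch of the two triangular solves for this example (illustrative code, not from the lecture), using the 𝐿 and π‘ˆ found above; the hand-rolled forward/backward substitution should agree with `np.linalg.solve`:

```python
import numpy as np

L = np.array([[ 1.,  0., 0.],
              [ 2.,  1., 0.],
              [-1., -1., 1.]])
U = np.array([[ 2.,  1.,  1.],
              [ 0., -8., -2.],
              [ 0.,  0.,  1.]])
b = np.array([4., 10., -3.])

# Forward substitution: solve L c = b, top row first.
c = np.zeros(3)
for i in range(3):
    c[i] = (b[i] - L[i, :i] @ c[:i]) / L[i, i]

# Backward substitution: solve U x = c, bottom row first.
x = np.zeros(3)
for i in range(2, -1, -1):
    x[i] = (c[i] - U[i, i+1:] @ x[i+1:]) / U[i, i]

print(x)
print(np.allclose(x, np.linalg.solve(L @ U, b)))   # True
```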


The inverse of a matrix

β–Ά Definition. An 𝑛 Γ— 𝑛 matrix 𝐴 is invertible if there is a matrix 𝐡 such that

𝐴𝐡 = 𝐡𝐴 = 𝐼𝑛×𝑛.

In that case, 𝐡 is the inverse of 𝐴 and is denoted by π΄βˆ’1.

β–Ά Remark.

β€’ The inverse of a matrix is unique. (Why?)
β€’ Do not write 𝐴/𝐡.


The inverse of a matrix

β–Ά Example. Let
$$A = \begin{bmatrix} a & b \\ c & d \end{bmatrix}.$$
If π‘Žπ‘‘ βˆ’ 𝑏𝑐 β‰  0, then
$$A^{-1} = \frac{1}{ad - bc} \begin{bmatrix} d & -b \\ -c & a \end{bmatrix}.$$

β–Ά Example. The matrix
$$A = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}$$
is not invertible.

β–Ά Example. A 2 Γ— 2 matrix
$$\begin{bmatrix} a & b \\ c & d \end{bmatrix}$$
is invertible if and only if π‘Žπ‘‘ βˆ’ 𝑏𝑐 β‰  0.


The inverse of a matrix

Suppose 𝐴 and 𝐡 are invertible. Then

β€’ π΄βˆ’1 is invertible and (π΄βˆ’1)βˆ’1 = 𝐴.

β€’ 𝐴𝑇 is invertible and (𝐴𝑇)βˆ’1 = (π΄βˆ’1)𝑇.
β€’ 𝐴𝐡 is invertible and (𝐴𝐡)βˆ’1 = π΅βˆ’1π΄βˆ’1. (Why?)
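These identities can be sanity-checked numerically; a small sketch (not from the slides) using random matrices, which are invertible with probability 1:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))

print(np.allclose(np.linalg.inv(np.linalg.inv(A)), A))        # (A^-1)^-1 = A
print(np.allclose(np.linalg.inv(A.T), np.linalg.inv(A).T))    # (A^T)^-1 = (A^-1)^T
print(np.allclose(np.linalg.inv(A @ B),
                  np.linalg.inv(B) @ np.linalg.inv(A)))       # (AB)^-1 = B^-1 A^-1
```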


Solving systems using matrix inverse

Theorem. Let 𝐴 be invertible. Then the system 𝐴π‘₯ = 𝑏 has the unique solution π‘₯ = π΄βˆ’1𝑏.
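As a quick illustration (not from the lecture), the solution of the earlier example system can be written as π΄βˆ’1𝑏 and checked by substituting back:

```python
import numpy as np

A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])
b = np.array([4., 10., -3.])

x = np.linalg.inv(A) @ b        # x = A^{-1} b
print(x)
print(np.allclose(A @ x, b))    # True: x solves A x = b
```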


Computing the inverse

β–Ά To solve 𝐴π‘₯ = 𝑏, we do row reduction on [ 𝐴 ∣ 𝑏 ].
β–Ά To solve 𝐴𝑋 = 𝐼, we do row reduction on [ 𝐴 ∣ 𝐼 ].
β–Ά To compute π΄βˆ’1 (the Gauss-Jordan method):

β€’ Form the augmented matrix [ 𝐴 ∣ 𝐼 ].
β€’ Compute the reduced echelon form.
β€’ If 𝐴 is invertible, the result is of the form [ 𝐼 ∣ π΄βˆ’1 ].
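A compact sketch of the Gauss-Jordan procedure just described, using a hypothetical helper `gauss_jordan_inverse` (illustrative code; it adds row swaps for numerical stability, which the slide's recipe does not mention):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Row-reduce [A | I] to [I | A^{-1}]; raises if A is singular."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])    # the augmented matrix [A | I]
    for col in range(n):
        # Pick the largest available pivot in this column (partial pivoting).
        pivot = col + np.argmax(np.abs(M[col:, col]))
        if np.isclose(M[pivot, col], 0.0):
            raise ValueError("matrix is not invertible")
        M[[col, pivot]] = M[[pivot, col]]          # swap rows
        M[col] /= M[col, col]                      # scale pivot row so the pivot is 1
        for row in range(n):                       # eliminate the column elsewhere
            if row != col:
                M[row] -= M[row, col] * M[col]
    return M[:, n:]                                # the right half is A^{-1}

A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])
print(gauss_jordan_inverse(A))
print(np.allclose(gauss_jordan_inverse(A) @ A, np.eye(3)))   # True
```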


Computing the inverse

β–Ά Example. Find the inverse of
$$A = \begin{bmatrix} 2 & 1 & 1 \\ 4 & -6 & 0 \\ -2 & 7 & 2 \end{bmatrix},$$
if it exists.

β–Ά Solution.

$$\left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 4 & -6 & 0 & 0 & 1 & 0 \\ -2 & 7 & 2 & 0 & 0 & 1 \end{array}\right] \xrightarrow[R_3 \to R_3 + R_1]{R_2 \to R_2 - 2R_1} \left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 0 & -8 & -2 & -2 & 1 & 0 \\ 0 & 8 & 3 & 1 & 0 & 1 \end{array}\right]$$

$$\xrightarrow{R_3 \to R_3 + R_2} \left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 0 & -8 & -2 & -2 & 1 & 0 \\ 0 & 0 & 1 & -1 & 1 & 1 \end{array}\right] \longrightarrow \dots$$


Computing the inverse

$$\longrightarrow \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{12}{16} & -\tfrac{5}{16} & -\tfrac{6}{16} \\ 0 & 1 & 0 & \tfrac{4}{8} & -\tfrac{3}{8} & -\tfrac{2}{8} \\ 0 & 0 & 1 & -1 & 1 & 1 \end{array}\right]$$

$$A^{-1} = \begin{bmatrix} \tfrac{12}{16} & -\tfrac{5}{16} & -\tfrac{6}{16} \\ \tfrac{4}{8} & -\tfrac{3}{8} & -\tfrac{2}{8} \\ -1 & 1 & 1 \end{bmatrix}$$
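The result can be cross-checked against NumPy (a sketch, not part of the slides):

```python
import numpy as np

A = np.array([[ 2.,  1., 1.],
              [ 4., -6., 0.],
              [-2.,  7., 2.]])

A_inv = np.array([[12/16, -5/16, -6/16],
                  [ 4/8,  -3/8,  -2/8 ],
                  [-1.,    1.,    1.  ]])

print(np.allclose(A_inv, np.linalg.inv(A)))   # True
print(np.allclose(A @ A_inv, np.eye(3)))      # True
```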


Why does it work?

β€’ Each row reduction corresponds to multiplying with an elementary matrix 𝐸:

[ 𝐴 ∣ 𝐼 ] β†’ [ 𝐸1𝐴 ∣ 𝐸1𝐼 ] β†’ [ 𝐸2𝐸1𝐴 ∣ 𝐸2𝐸1 ] β†’ … β†’ [ 𝐹𝐴 ∣ 𝐹 ], where 𝐹 = πΈπ‘Ÿ … 𝐸2𝐸1.

β€’ If we manage to reduce [ 𝐴 ∣ 𝐼 ] to [ 𝐼 ∣ 𝐹 ], this means

𝐹𝐴 = 𝐼 ⟹ π΄βˆ’1 = 𝐹.


