


Linear Algebra for Wireless Communications

Lecture 2: Gauss elimination continued and vector spaces

Ove Edfors
Department of Electrical and Information Technology
Lund University

2009-12-16

A few more results using Gauss elimination


Gauss-Jordan method

The Gauss-Jordan method can be seen as a "total Gaussian elimination", where elements on both sides of the diagonal are eliminated and the diagonal is set to ones.

Using this method, we can calculate the inverse of an N×N matrix A by simultaneously solving the N equations (i = 1, 2, ..., N)

$$A\mathbf{x}_i = \mathbf{e}_i \quad\Longleftrightarrow\quad AX = I$$

where $\mathbf{e}_i$ is the i-th basis vector, with a single 1 in position i.

Example (the three right-hand columns are $\mathbf{e}_1$, $\mathbf{e}_2$, $\mathbf{e}_3$):

$$\left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 4 & 1 & 0 & 0 & 1 & 0 \\ -2 & 2 & 1 & 0 & 0 & 1 \end{array}\right]$$

Gauss-Jordan method (cont.)

Eliminate below the diagonal:

$$\left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 4 & 1 & 0 & 0 & 1 & 0 \\ -2 & 2 & 1 & 0 & 0 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 0 & -1 & -2 & -2 & 1 & 0 \\ 0 & 3 & 2 & 1 & 0 & 1 \end{array}\right]$$

using Row2 := Row2 − 2 × Row1 and Row3 := Row3 + Row1. Then

$$\longrightarrow \left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 0 & -1 & -2 & -2 & 1 & 0 \\ 0 & 0 & -4 & -5 & 3 & 1 \end{array}\right]$$

using Row3 := Row3 + 3 × Row2. Lower part done! Upper part remains.


Gauss-Jordan method (cont.)

Eliminate above the diagonal:

$$\left[\begin{array}{ccc|ccc} 2 & 1 & 1 & 1 & 0 & 0 \\ 0 & -1 & -2 & -2 & 1 & 0 \\ 0 & 0 & -4 & -5 & 3 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|ccc} 2 & 1 & 0 & -\tfrac{1}{4} & \tfrac{3}{4} & \tfrac{1}{4} \\ 0 & -1 & 0 & \tfrac{1}{2} & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & 0 & -4 & -5 & 3 & 1 \end{array}\right]$$

using Row2 := Row2 − (1/2) × Row3 and Row1 := Row1 + (1/4) × Row3. Then

$$\longrightarrow \left[\begin{array}{ccc|ccc} 2 & 0 & 0 & \tfrac{1}{4} & \tfrac{1}{4} & -\tfrac{1}{4} \\ 0 & -1 & 0 & \tfrac{1}{2} & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & 0 & -4 & -5 & 3 & 1 \end{array}\right]$$

using Row1 := Row1 + Row2. The only thing left is to make the diagonal elements equal to one!

Gauss-Jordan method (cont.)

Divide Row1 by 2, Row2 by −1, and Row3 by −4:

$$\left[\begin{array}{ccc|ccc} 2 & 0 & 0 & \tfrac{1}{4} & \tfrac{1}{4} & -\tfrac{1}{4} \\ 0 & -1 & 0 & \tfrac{1}{2} & -\tfrac{1}{2} & -\tfrac{1}{2} \\ 0 & 0 & -4 & -5 & 3 & 1 \end{array}\right] \longrightarrow \left[\begin{array}{ccc|ccc} 1 & 0 & 0 & \tfrac{1}{8} & \tfrac{1}{8} & -\tfrac{1}{8} \\ 0 & 1 & 0 & -\tfrac{1}{2} & \tfrac{1}{2} & \tfrac{1}{2} \\ 0 & 0 & 1 & \tfrac{5}{4} & -\tfrac{3}{4} & -\tfrac{1}{4} \end{array}\right]$$

The right half is now $X = A^{-1}$.

We know that an inverse to A must yield the identity matrix when we multiply from both left and right. Can we trust this result?
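None of the following code is on the original slides; as a quick check, here is a minimal NumPy sketch of the Gauss-Jordan inversion above (the helper name `gauss_jordan_inverse` is mine, and it assumes nonzero pivots on the diagonal, so no row exchanges):

```python
import numpy as np

def gauss_jordan_inverse(A):
    """Invert A by Gauss-Jordan elimination on the augmented matrix [A | I]."""
    n = A.shape[0]
    M = np.hstack([A.astype(float), np.eye(n)])
    for i in range(n):
        M[i] /= M[i, i]                    # normalize the pivot to 1
        for j in range(n):
            if j != i:
                M[j] -= M[j, i] * M[i]     # eliminate column i in every other row
    return M[:, n:]

A = np.array([[2, 1, 1],
              [4, 1, 0],
              [-2, 2, 1]])
Ainv = gauss_jordan_inverse(A)
print(Ainv)   # [[0.125 0.125 -0.125], [-0.5 0.5 0.5], [1.25 -0.75 -0.25]]
print(np.allclose(A @ Ainv, np.eye(3)), np.allclose(Ainv @ A, np.eye(3)))  # True True
```

Both products give the identity, so we can indeed trust the result above.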


Complexity of solving Ax = b

We have seen two basic approaches:

1. LU decomposition A = LU, and back substitution through the triangular matrices (first Lc = b, then Ux = c).
2. Computation of the inverse A⁻¹ and calculation of x = A⁻¹b.

Which one should we use? COUNT THE NUMBER OF OPERATIONS:

    USING LU DECOMPOSITION               USING MATRIX INVERSE
    Perform A = LU dec.:      n^3/3      Calculate inverse A^-1:  n^3     -> LU wins!
    Solve Lc = b and Ux = c:  n^2        Calculate A^-1 b:        n^2     -> Tie!
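A rough sketch of this comparison in Python, assuming SciPy is available (`lu_factor`/`lu_solve` implement the LU approach; actual timings depend on the machine and BLAS, but the ordering should match the operation counts):

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve, inv
from time import perf_counter

rng = np.random.default_rng(0)
n = 2000
A = rng.standard_normal((n, n))
b = rng.standard_normal(n)

t0 = perf_counter()
lu, piv = lu_factor(A)            # ~ n^3/3 operations
x_lu = lu_solve((lu, piv), b)     # ~ n^2 operations
t1 = perf_counter()

Ainv = inv(A)                     # ~ n^3 operations
x_inv = Ainv @ b                  # ~ n^2 operations
t2 = perf_counter()

print(f"LU: {t1 - t0:.3f} s, inverse: {t2 - t1:.3f} s")
print(np.allclose(x_lu, x_inv))   # same solution, up to rounding
```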


Vector spaces


Page 3: Gauss elimination continuedGauss elimination … · A return to Gauss elimination (cont.) Similar to the GaussSimilar to the Gauss-Jordan method we can use elementary row operations

Vector spaces and subspaces

In a vector space, all linear combinations of elements stay in the vector space.

The most important vector spaces are $\mathbb{R}^1, \mathbb{R}^2, \mathbb{R}^3, \ldots$, which contain vectors with 1, 2, 3, ... components, respectively.

A subspace of a vector space is a nonempty subset that satisfies the requirements for a vector space, i.e., all linear combinations of elements stay in the subspace.

Note that the zero vector belongs to all subspaces.


Column space

The column space C(A) of a real M×N matrix A is a subspace of $\mathbb{R}^M$ and contains all linear combinations of the columns of A, i.e.,

$$C(A) = \{\, \mathbf{b} \mid \mathbf{b} = A\mathbf{x} \,\}$$

Example: write a 3×3 matrix in terms of its columns,

$$A_{3\times 3} = \begin{bmatrix} | & | & | \\ \mathbf{a}_1 & \mathbf{a}_2 & \mathbf{a}_3 \\ | & | & | \end{bmatrix}$$

In this example, the three columns of A all lie in a 2D plane in $\mathbb{R}^3$. Hence, C(A) is a 2D subspace. [Figure: the plane C(A) spanned by a1, a2, a3.]

A system of linear equations Ax = b is solvable if and only if b is in the column space of A, i.e., in C(A).
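A minimal sketch of this solvability test, using the equivalent rank criterion rank([A | b]) = rank(A): appending b raises the rank exactly when b sticks out of C(A). The helper name and the rank-2 matrix below are hypothetical illustrations, not from the slides:

```python
import numpy as np

def is_solvable(A, b):
    """Ax = b is solvable iff b lies in C(A)."""
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

# Columns lie in a 2D plane (rank 2), mirroring the slide's picture:
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0]])
print(is_solvable(A, np.array([1.0, 2.0, 0.0])))  # True:  b is in the plane C(A)
print(is_solvable(A, np.array([1.0, 2.0, 3.0])))  # False: b sticks out of the plane
```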

Nullspace

The nullspace N(A) of a real M×N matrix A is a subspace of $\mathbb{R}^N$ and contains all vectors x such that Ax = 0, i.e.,

$$N(A) = \{\, \mathbf{x} \mid A\mathbf{x} = \mathbf{0} \,\}$$

If $\mathbf{x}_p$ is a solution to Ax = b, then so is every $\mathbf{x} = \mathbf{x}_p + \mathbf{x}_n$, where $\mathbf{x}_n$ is any vector in the nullspace N(A).

The nullspace N(A) always contains the zero vector 0.


A return to Gauss elimination

What happens if we apply Gauss elimination to non-square matrices?

Brute force application (possibly with row exchanges) leads to something like this:

$$A = \begin{bmatrix} * & * & * & * & * & * \\ * & * & * & * & * & * \\ * & * & * & * & * & * \\ * & * & * & * & * & * \end{bmatrix} \longrightarrow U = \begin{bmatrix} * & * & * & * & * & * \\ 0 & 0 & * & * & * & * \\ 0 & 0 & 0 & * & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

1) Pivot elements are not necessarily on the "main diagonal".
2) The non-zero elements are confined to a type of staircase pattern.

This upper triangular matrix U is said to be on echelon form.

Property: For all M×N matrices A we can find a permutation matrix P, an M×M lower triangular matrix L and an M×N echelon matrix U such that PA = LU.
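A sketch of this factorization with SciPy on a hypothetical 3×4 example. Note that `scipy.linalg.lu` uses the convention A = PLU, so the slide's PA = LU holds with SciPy's permutation transposed:

```python
import numpy as np
from scipy.linalg import lu

# Column 2 is twice column 1, so a pivot is skipped and U gets a staircase shape.
A = np.array([[2.0, 4.0, 1.0, 4.0],
              [1.0, 2.0, 0.0, 2.0],
              [2.0, 4.0, 0.0, 4.0]])
P, L, U = lu(A)
print(U)                             # echelon form: pivots off the main diagonal, zero last row
print(np.allclose(P.T @ A, L @ U))   # True: the slide's PA = LU with P := P.T
```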

Page 4: Gauss elimination continuedGauss elimination … · A return to Gauss elimination (cont.) Similar to the GaussSimilar to the Gauss-Jordan method we can use elementary row operations

A return to Gauss elimination (cont.)

Similar to the Gauss-Jordan method, we can use elementary row operations to remove the non-zero elements above each pivot element and normalize the pivot elements to unity:

$$U = \begin{bmatrix} * & * & * & * & * & * \\ 0 & 0 & * & * & * & * \\ 0 & 0 & 0 & * & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \longrightarrow R = \begin{bmatrix} 1 & * & 0 & 0 & * & * \\ 0 & 0 & 1 & 0 & * & * \\ 0 & 0 & 0 & 1 & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

This form is called the reduced row echelon form.

The three equations Ax = 0, Ux = 0, and Rx = 0 all have the same solutions x. Hence, their nullspaces are identical, i.e., N(A) = N(U) = N(R).

The nullspace of R is easily identified!
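A sketch using SymPy's `rref()`, which performs exactly this reduction and reports the pivot columns. The matrix U below is a hypothetical example with the staircase pattern shown above:

```python
from sympy import Matrix

U = Matrix([[2, 4, 1, 0, 2, 6],
            [0, 0, 3, 1, 0, 3],
            [0, 0, 0, 2, 4, 2],
            [0, 0, 0, 0, 0, 0]])
R, pivot_cols = U.rref()
print(R)            # 1s in the pivot positions, 0s above and below them
print(pivot_cols)   # (0, 2, 3): pivots in columns 1, 3, 4, as in the pattern above
```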


Nullspace of R

Identifying the nullspace N(R) of a reduced row echelon matrix R.

$$R\mathbf{x} = \begin{bmatrix} 1 & * & 0 & 0 & * & * \\ 0 & 0 & 1 & 0 & * & * \\ 0 & 0 & 0 & 1 & * & * \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$

The pivot variables $x_1, x_3, x_4$ sit in the pivot columns; the free variables $x_2, x_5, x_6$ sit in the remaining columns.

The pivot variables are completely determined by the free variables.

By finding the solutions where each free variable in turn is set to 1 and the others to 0, we identify vectors (special solutions) that span the nullspace N(R) (and N(A)).


Nullspace of R (cont.)

A numerical example:

$$\begin{bmatrix} 1 & 2 & 0 & 0 & 3 & 1 \\ 0 & 0 & 1 & 0 & 4 & 2 \\ 0 & 0 & 0 & 1 & 5 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \\ x_5 \\ x_6 \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}$$

With $x_2 = 1,\ x_5 = 0,\ x_6 = 0$: $\quad \mathbf{x} = \begin{bmatrix} -2 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}^T$

With $x_2 = 0,\ x_5 = 1,\ x_6 = 0$: $\quad \mathbf{x} = \begin{bmatrix} -3 & 0 & -4 & -5 & 1 & 0 \end{bmatrix}^T$

With $x_2 = 0,\ x_5 = 0,\ x_6 = 1$: $\quad \mathbf{x} = \begin{bmatrix} -1 & 0 & -2 & -1 & 0 & 1 \end{bmatrix}^T$

Nullspace of R (cont.)

The nullspace N(R) of R, in this particular example, is a 3-dimensional subspace of $\mathbb{R}^6$, spanned by the three vectors (special solutions):

$$\begin{bmatrix} -2 \\ 1 \\ 0 \\ 0 \\ 0 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} -3 \\ 0 \\ -4 \\ -5 \\ 1 \\ 0 \end{bmatrix}, \qquad \begin{bmatrix} -1 \\ 0 \\ -2 \\ -1 \\ 0 \\ 1 \end{bmatrix}$$

This procedure leads us to the conclusion that: the dimension of the nullspace of an M×N matrix A equals the number of free variables in its reduced row echelon form R.



Solutions to Ax = b and matrix rank

Suppose elimination reduces Ax = b to Ux = c and Rx = d, where A is of size M×N, U is upper triangular, and R is on reduced row echelon form, with r pivot rows and r pivot columns.

The number r is called the rank of those matrices:

$$\operatorname{rank}(A) = \operatorname{rank}(U) = \operatorname{rank}(R) = r$$

The last M − r rows of U and R are zero, so there is a solution only if the last M − r entries of c and d are zero.

The complete solution is $\mathbf{x} = \mathbf{x}_p + \mathbf{x}_n$, where:

The particular solution $\mathbf{x}_p$ has all free variables set to zero. Its pivot variables are the first r entries of d, so $R\mathbf{x}_p = \mathbf{d}$.

The nullspace solutions $\mathbf{x}_n$ are combinations of the N − r special solutions, with one free variable set to 1 and the rest to 0.
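A sketch of the complete solution for the earlier example R, with a hypothetical right-hand side d whose last M − r entries are zero. SymPy's `linsolve` keeps the free variables symbolic; setting them to zero leaves the particular solution:

```python
from sympy import Matrix, linsolve, symbols

R = Matrix([[1, 2, 0, 0, 3, 1],
            [0, 0, 1, 0, 4, 2],
            [0, 0, 0, 1, 5, 1],
            [0, 0, 0, 0, 0, 0]])
d = Matrix([1, 2, 3, 0])          # hypothetical d; last M - r = 1 entry is zero
x = symbols('x1:7')
sol, = linsolve((R, d), list(x))
print(sol)
# (1 - 2*x2 - 3*x5 - x6, x2, 2 - 4*x5 - 2*x6, 3 - 5*x5 - x6, x5, x6):
# with x2 = x5 = x6 = 0 the particular solution is xp = (1, 0, 2, 3, 0, 0),
# and the free-variable terms are exactly the combinations of special solutions.
```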


Linear independence

Suppose that

$$\alpha_1 \mathbf{a}_1 + \alpha_2 \mathbf{a}_2 + \cdots + \alpha_N \mathbf{a}_N = \mathbf{0}$$

only happens when

$$\alpha_1 = \alpha_2 = \cdots = \alpha_N = 0.$$

Then the vectors $\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_N$ are linearly independent.

If any $\alpha_k$'s are nonzero, the vectors $\mathbf{a}_1, \mathbf{a}_2, \ldots, \mathbf{a}_N$ are linearly dependent; at least one vector is then a combination of the others.

Example: [Figure: three vectors in $\mathbb{R}^3$.] These three vectors are linearly independent if the determinant of the matrix they form as columns is nonzero.
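A minimal numerical sketch of this determinant test; the three vectors are hypothetical:

```python
import numpy as np

a1, a2, a3 = [1.0, 0.0, 0.0], [1.0, 1.0, 0.0], [1.0, 1.0, 1.0]
A = np.column_stack([a1, a2, a3])
print(np.linalg.det(A))   # 1.0 != 0: a1, a2, a3 are linearly independent

# Replace a3 by a1 + a2 = [2, 1, 0] and the determinant drops to zero (dependent):
print(np.linalg.det(np.column_stack([a1, a2, [2.0, 1.0, 0.0]])))  # 0.0
```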


Linear independence (cont.)

N vectors in $\mathbb{R}^M$ are always linearly dependent if N > M.

The columns of A are linearly independent if and only if N(A) contains only the zero vector.

The r pivot rows of an echelon matrix U and a reduced row echelon matrix R are linearly independent.

The r pivot columns of an echelon matrix U and a reduced row echelon matrix R are linearly independent.


Basis of a vector space

If a vector space V consists of all linear combinations of the vectors $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N$, we say that these vectors span the space.

Every vector x in V comes from a linear combination

$$\mathbf{x} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_N \mathbf{v}_N$$

for some coefficients $c_1, c_2, \ldots, c_N$. These coefficients are not necessarily unique for a given x.

A basis for V is a sequence of vectors having two properties:

1. The vectors are linearly independent. (Not too many vectors.)
2. The vectors span the space V. (Not too few vectors.)

For a given basis for V, there is one and only one way to write each vector in V as a combination of the basis vectors.

A vector space V does not have a unique basis; there are infinitely many different bases. All different bases for V have the same number of vectors. This number is the dimension of V and expresses the degrees of freedom.


Example: Span and basis

The six columns of the following (reduced row) echelon matrix span the column space C(R) of the matrix R:

$$R = \begin{bmatrix} 1 & 2 & 0 & 0 & 3 & 1 \\ 0 & 0 & 1 & 0 & 4 & 2 \\ 0 & 0 & 0 & 1 & 5 & 1 \\ 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}$$

There are six vectors in four dimensions, so they must be linearly dependent.

A basis for the column space C(R) can be formed by selecting the three pivot columns.
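A sketch selecting that basis programmatically; SymPy's `rref()` reports the pivot columns (R is already reduced, so only the pivot indices are of interest here):

```python
from sympy import Matrix

R = Matrix([[1, 2, 0, 0, 3, 1],
            [0, 0, 1, 0, 4, 2],
            [0, 0, 0, 1, 5, 1],
            [0, 0, 0, 0, 0, 0]])
_, pivots = R.rref()
basis = [R.col(j) for j in pivots]   # the pivot columns form a basis for C(R)
print(pivots)                        # (0, 2, 3)
print([v.T for v in basis])          # three independent columns spanning C(R)
```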


The four fundamental spaces

Given an M×N matrix A, we have the following four fundamental spaces (the "fundamental theorem of linear algebra, part I"):

The column space of A is denoted C(A). Its dimension is the rank r.

The nullspace of A is denoted N(A). Its dimension is N − r.

The row space of A is spanned by the rows of A and equals the column space C(A^T) of A^T. Its dimension is r.

The left nullspace of A is the nullspace of A^T and equals N(A^T). Its dimension is M − r.

The row space C(A^T) and the nullspace N(A) are subspaces of $\mathbb{R}^N$; the column space C(A) and the left nullspace N(A^T) are subspaces of $\mathbb{R}^M$.
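A numerical sketch of the four dimensions; the matrix is a hypothetical 4×6 example constructed (as a product of thin factors) to have rank r = 2:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N, r = 4, 6, 2
A = rng.standard_normal((M, r)) @ rng.standard_normal((r, N))  # rank-r M x N matrix

rank = np.linalg.matrix_rank(A)
print("dim C(A)   =", rank)                        # r     = 2
print("dim N(A)   =", N - rank)                    # N - r = 4
print("dim C(A^T) =", np.linalg.matrix_rank(A.T))  # r     = 2
print("dim N(A^T) =", M - rank)                    # M - r = 2
```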


Illustration of fundamental spaces

An example with linearly dependent columns, rank r = 1:

$$A = \begin{bmatrix} 1 & 3 \\ 2 & 6 \end{bmatrix}, \qquad A^T = \begin{bmatrix} 1 & 2 \\ 3 & 6 \end{bmatrix}$$

[Figure: the row space C(A^T) and the nullspace N(A) on the input side, the column space C(A) on the output side. A solution splits as x = x_r + x_n, with A x_r = b and A x_n = 0, so Ax = b.]


Existence of inverses

An inverse (left/right/both) to an M×N matrix A only exists when the rank is as large as possible, i.e., when rank(A) = min(M, N).

When A has full column rank, rank(A) = N: there exists a left inverse $B = (A^T A)^{-1} A^T$ of size N×M such that $BA = I_{N\times N}$. Ax = b then has at most one solution x for each b.

When A has full row rank, rank(A) = M: there exists a right inverse $C = A^T (A A^T)^{-1}$ of size N×M such that $AC = I_{M\times M}$. Ax = b then has at least one solution x for each b.

Combining the two, when A is square (M = N) and has full rank, rank(A) = M = N: there exist a left inverse B and a right inverse C (and they are equal). Ax = b then has exactly one solution x for each b.
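A sketch of both formulas on random (hypothetical) matrices, which almost surely have full column/row rank:

```python
import numpy as np

rng = np.random.default_rng(2)

A = rng.standard_normal((5, 3))        # full column rank: rank(A) = N = 3
B = np.linalg.inv(A.T @ A) @ A.T       # left inverse, size N x M
print(np.allclose(B @ A, np.eye(3)))   # True: BA = I (3x3)

A = rng.standard_normal((3, 5))        # full row rank: rank(A) = M = 3
C = A.T @ np.linalg.inv(A @ A.T)       # right inverse, size N x M
print(np.allclose(A @ C, np.eye(3)))   # True: AC = I (3x3)
```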



Linear transformations

If we know the result of Ax for each vector in a basis, then we know Ax for all vectors in the entire space.

Let $\mathbf{v}_1, \mathbf{v}_2, \ldots, \mathbf{v}_N$ be a basis. Then every x can be expressed uniquely as

$$\mathbf{x} = c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_N \mathbf{v}_N$$

The transformation Ax becomes

$$A\mathbf{x} = A\left(c_1 \mathbf{v}_1 + c_2 \mathbf{v}_2 + \cdots + c_N \mathbf{v}_N\right) = c_1 A\mathbf{v}_1 + c_2 A\mathbf{v}_2 + \cdots + c_N A\mathbf{v}_N$$

This often comes in handy when we want to determine a matrix A that performs a certain transformation. If we know these $A\mathbf{v}_k$, all we need to do is multiply them by the unique coefficients $c_k$ and add them up.


Rotation

Example: We want to create a matrix Aα that performs a rotation by an angle α.

First determine what happens to the basis vectors:

$$A_\alpha \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} a_{1,1} \\ a_{2,1} \end{bmatrix} = \begin{bmatrix} \cos\alpha \\ \sin\alpha \end{bmatrix}, \qquad A_\alpha \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} a_{1,2} \\ a_{2,2} \end{bmatrix} = \begin{bmatrix} -\sin\alpha \\ \cos\alpha \end{bmatrix}$$

This gives us the two columns of $A_\alpha$ directly:

$$A_\alpha = \begin{bmatrix} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{bmatrix}$$

Sanity checks: $A_{-\alpha} A_\alpha = I$ and $A_\alpha A_\beta = A_{\alpha+\beta}$.
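A minimal sketch of the rotation matrix and its two sanity checks (the helper name `rotation` is mine):

```python
import numpy as np

def rotation(alpha):
    """2x2 rotation matrix built from the images of the two basis vectors."""
    return np.array([[np.cos(alpha), -np.sin(alpha)],
                     [np.sin(alpha),  np.cos(alpha)]])

a, b = 0.7, 1.1
print(np.allclose(rotation(-a) @ rotation(a), np.eye(2)))       # A_{-a} A_a = I
print(np.allclose(rotation(a) @ rotation(b), rotation(a + b)))  # A_a A_b = A_{a+b}
```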


Projection

Example: We want to create a matrix A that projects vectors onto the space (line) spanned by the vector $\mathbf{u} = [\,1/2 \;\; -1\,]^T$.

First determine what happens to the basis vectors. Each basis vector e is projected as a "length" of projection times a unit vector along the line:

$$A\mathbf{e} = \underbrace{\frac{\mathbf{u}^T \mathbf{e}}{\|\mathbf{u}\|}}_{\text{"length" of projection}} \, \underbrace{\frac{\mathbf{u}}{\|\mathbf{u}\|}}_{\text{unit vector}} = \frac{\mathbf{u}^T \mathbf{e}}{\mathbf{u}^T \mathbf{u}}\,\mathbf{u}$$

so that, with $\mathbf{u}^T\mathbf{u} = 5/4$,

$$A \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \frac{1/2}{5/4} \begin{bmatrix} 1/2 \\ -1 \end{bmatrix} = \begin{bmatrix} 1/5 \\ -2/5 \end{bmatrix}, \qquad A \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \frac{-1}{5/4} \begin{bmatrix} 1/2 \\ -1 \end{bmatrix} = \begin{bmatrix} -2/5 \\ 4/5 \end{bmatrix}$$

This gives us the two columns of A directly:

$$A = \begin{bmatrix} 1/5 & -2/5 \\ -2/5 & 4/5 \end{bmatrix}$$

Sanity check: $A^k = A$ (projecting a second time changes nothing).
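A minimal sketch of the projection matrix, written in the equivalent outer-product form $A = \mathbf{u}\mathbf{u}^T / (\mathbf{u}^T\mathbf{u})$, with the idempotence check:

```python
import numpy as np

u = np.array([[0.5], [-1.0]])    # direction of the line, as a column vector
A = (u @ u.T) / (u.T @ u)        # projection matrix u u^T / (u^T u)
print(A)                         # [[0.2 -0.4], [-0.4 0.8]] = [[1/5 -2/5], [-2/5 4/5]]
print(np.allclose(A @ A, A))     # True: projecting twice changes nothing (A^k = A)
```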


Reflection

Example: We want to create a matrix A that reflects vectors in the space (line) spanned by the vector $[\,1 \;\; -1\,]^T$.

First determine what happens to the basis vectors:

$$A \begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ -1 \end{bmatrix}, \qquad A \begin{bmatrix} 0 \\ 1 \end{bmatrix} = \begin{bmatrix} -1 \\ 0 \end{bmatrix}$$

This gives us the two columns of A directly:

$$A = \begin{bmatrix} 0 & -1 \\ -1 & 0 \end{bmatrix}$$

Sanity check: $A^2 = I$ (reflecting twice restores the original vector).
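A minimal sketch of the reflection and its sanity check:

```python
import numpy as np

A = np.array([[0.0, -1.0],
              [-1.0, 0.0]])             # reflection in the line spanned by [1, -1]^T
print(A @ np.array([1.0, 0.0]))         # [ 0. -1.]: e1 mirrored across the line
print(np.allclose(A @ A, np.eye(2)))    # True: reflecting twice gives the identity
```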
