
Linear Algebra: Graduate Level Problems and Solutions

Igor Yanovsky


Disclaimer: This handbook is intended to assist graduate students with qualifying examination preparation. Please be aware, however, that the handbook might contain, and almost certainly contains, typos as well as incorrect or inaccurate solutions. I can not be made responsible for any inaccuracies contained in this handbook.


Contents

1 Basic Theory
  1.1 Linear Maps
  1.2 Linear Maps as Matrices
  1.3 Dimension and Isomorphism
  1.4 Matrix Representations Redux
  1.5 Subspaces
  1.6 Linear Maps and Subspaces
  1.7 Dimension Formula
  1.8 Matrix Calculations
  1.9 Diagonalizability

2 Inner Product Spaces
  2.1 Inner Products
  2.2 Orthonormal Bases
    2.2.1 Gram-Schmidt procedure
    2.2.2 QR Factorization
  2.3 Orthogonal Complements and Projections

3 Linear Maps on Inner Product Spaces
  3.1 Adjoint Maps
  3.2 Self-Adjoint Maps
  3.3 Polarization and Isometries
  3.4 Unitary and Orthogonal Operators
  3.5 Spectral Theorem
  3.6 Normal Operators
  3.7 Unitary Equivalence
  3.8 Triangulability

4 Determinants
  4.1 Characteristic Polynomial

5 Linear Operators
  5.1 Dual Spaces
  5.2 Dual Maps

6 Problems


1 Basic Theory

1.1 Linear Maps

Lemma. If A ∈ Mat_{m×n}(F) and B ∈ Mat_{n×m}(F), then

tr(AB) = tr(BA).

Proof. Note that the (i, i) entry in AB is ∑_{j=1}^n α_ij β_ji, while the (j, j) entry in BA is ∑_{i=1}^m β_ji α_ij. Thus

tr(AB) = ∑_{i=1}^m ∑_{j=1}^n α_ij β_ji = ∑_{j=1}^n ∑_{i=1}^m β_ji α_ij = tr(BA).
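A quick numerical sanity check of the lemma (an illustrative numpy sketch):

```python
import numpy as np

# tr(AB) = tr(BA) even for rectangular A (m x n) and B (n x m):
# AB is m x m while BA is n x n, yet the traces coincide.
rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
```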

1.2 Linear Maps as Matrices

Example. Let Pn = {α_0 + α_1 t + ··· + α_n tⁿ : α_0, α_1, . . . , α_n ∈ F} be the space of polynomials of degree ≤ n, and let D : V → V be the differentiation map on V = Pn,

D(α_0 + α_1 t + ··· + α_n tⁿ) = α_1 + 2α_2 t + ··· + n α_n t^{n−1}.

If we use the basis 1, t, . . . , tⁿ for V, then we see that D(t^k) = k t^{k−1}, and thus the (n + 1) × (n + 1) matrix representation M is computed via

[D(1) D(t) D(t²) ··· D(tⁿ)] = [0 1 2t ··· n t^{n−1}] = [1 t t² ··· tⁿ] M,

M = ⎡ 0 1 0 ··· 0 ⎤
    ⎢ 0 0 2 ··· 0 ⎥
    ⎢ 0 0 0 ⋱  ⋮ ⎥
    ⎢ ⋮ ⋮ ⋮ ⋱  n ⎥
    ⎣ 0 0 0 ··· 0 ⎦.
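The matrix above is easy to generate for any n (an illustrative numpy sketch; diff_matrix is a hypothetical helper name):

```python
import numpy as np

def diff_matrix(n):
    """Matrix of D on P_n in the basis 1, t, ..., t^n: column k is D(t^k) = k t^(k-1)."""
    M = np.zeros((n + 1, n + 1))
    for k in range(1, n + 1):
        M[k - 1, k] = k
    return M

# p(t) = 1 + 2t + 3t^2 + 4t^3  =>  p'(t) = 2 + 6t + 12t^2.
print(diff_matrix(3) @ np.array([1, 2, 3, 4]))   # [ 2.  6. 12.  0.]
```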

1.3 Dimension and Isomorphism

A linear map L : V → W is an isomorphism if we can find K : W → V such that LK = I_W and KL = I_V; i.e., the diagram

    V --L--> W
    |I_V     |I_W
    V <--K-- W

commutes.


Theorem. V and W are isomorphic ⇔ there is a bijective linear map L : V → W.

Proof. ⇒ If V and W are isomorphic, we can find linear maps L : V → W and K : W → V so that LK = I_W and KL = I_V. Then any y ∈ W satisfies y = I_W(y) = L(K(y)), so we can let x = K(y), which means L is onto. If L(x_1) = L(x_2), then x_1 = I_V(x_1) = KL(x_1) = KL(x_2) = I_V(x_2) = x_2, which means L is 1−1.
⇐ Assume L : V → W is linear and a bijection. Then we have an inverse map L^{−1} which satisfies L ◦ L^{−1} = I_W and L^{−1} ◦ L = I_V. In order for this inverse map to be allowable as K, we need to check that it is linear. Select α_1, α_2 ∈ F and y_1, y_2 ∈ W. Let x_i = L^{−1}(y_i), so that L(x_i) = y_i. Then we have

L^{−1}(α_1 y_1 + α_2 y_2) = L^{−1}(α_1 L(x_1) + α_2 L(x_2)) = L^{−1}(L(α_1 x_1 + α_2 x_2)) = I_V(α_1 x_1 + α_2 x_2) = α_1 x_1 + α_2 x_2 = α_1 L^{−1}(y_1) + α_2 L^{−1}(y_2).

Theorem. If F^m and F^n are isomorphic over F, then n = m.

Proof. Suppose we have L : F^m → F^n and K : F^n → F^m such that LK = I_{F^n} and KL = I_{F^m}. Then L ∈ Mat_{n×m}(F) and K ∈ Mat_{m×n}(F). Thus

n = tr(I_{F^n}) = tr(LK) = tr(KL) = tr(I_{F^m}) = m.

Define the dimension of a vector space V over F as dim_F V = n if V is isomorphic to F^n.

Remark. dim_C C = 1, dim_R C = 2, dim_Q R = ∞.

The set of all linear maps {L : V → W} over F is the space of homomorphisms, denoted hom_F(V, W).

Corollary. If V and W are finite dimensional vector spaces over F, then hom_F(V, W) is also finite dimensional and

dim_F hom_F(V, W) = (dim_F W) · (dim_F V).

Proof. By choosing bases for V and W there is a natural mapping

hom_F(V, W) → Mat_{(dim_F W)×(dim_F V)}(F) ≅ F^{(dim_F W)·(dim_F V)}.

This map is both 1−1 and onto, as the matrix representation uniquely determines the linear map and every matrix yields a linear map.


1.4 Matrix Representations Redux

L : V → W, bases x_1, . . . , x_m for V and y_1, . . . , y_n for W. The matrix for L interpreted as a linear map is [L] : F^m → F^n. The basis isomorphisms defined by the choices of basis for V and W are

[x_1 ··· x_m] : F^m → V, ¹
[y_1 ··· y_n] : F^n → W,

and the diagram

    V  --L-->  W
    ↑          ↑
[x_1···x_m] [y_1···y_n]
    |          |
   F^m --[L]--> F^n

commutes:

L ◦ [x_1 ··· x_m] = [y_1 ··· y_n] [L].

1.5 Subspaces

A nonempty subset M ⊂ V is a subspace if α, β ∈ F and x, y ∈ M imply αx + βy ∈ M. In particular, 0 ∈ M.
If M, N ⊂ V are subspaces, then we can form two new subspaces, the sum and the intersection:

M + N = {x + y : x ∈ M, y ∈ N},   M ∩ N = {x : x ∈ M, x ∈ N}.

M and N have trivial intersection if M ∩ N = {0}. M and N are transversal if M + N = V. Two subspaces are complementary if they are transversal and have trivial intersection. M, N form a direct sum decomposition of V if M ∩ N = {0} and M + N = V; write V = M ⊕ N.

Example. V = R². M = {(x, 0) : x ∈ R}, the x-axis, and N = {(0, y) : y ∈ R}, the y-axis.

Example. V = R². M = {(x, 0) : x ∈ R}, the x-axis, and N = {(y, y) : y ∈ R}, a diagonal. Note (x, y) = (x − y, 0) + (y, y), which gives V = M ⊕ N.

If we have a direct sum decomposition V = M ⊕ N, then we can construct the projection of V onto M along N. The map E : V → V is defined by writing each z ∈ V uniquely as z = x + y with x ∈ M, y ∈ N, and mapping z to x: E(z) = E(x + y) = x. Thus im(E) = M, ker(E) = N, and E² = E.

Definition. If V is a vector space, a projection of V is a linear operator E on V such that E² = E.

¹ [x_1 ··· x_m] : F^m → V means [x_1 ··· x_m] (α_1, . . . , α_m)^T = α_1 x_1 + ··· + α_m x_m.


1.6 Linear Maps and Subspaces

L : V → W is a linear map over F. The kernel or nullspace of L is

ker(L) = N(L) = {x ∈ V : L(x) = 0}.

The image or range of L is

im(L) = R(L) = L(V) = {L(x) ∈ W : x ∈ V}.

Lemma. ker(L) is a subspace of V and im(L) is a subspace of W.

Proof. Assume that α_1, α_2 ∈ F and that x_1, x_2 ∈ ker(L); then L(α_1 x_1 + α_2 x_2) = α_1 L(x_1) + α_2 L(x_2) = 0 ⇒ α_1 x_1 + α_2 x_2 ∈ ker(L).
Assume α_1, α_2 ∈ F and x_1, x_2 ∈ V; then α_1 L(x_1) + α_2 L(x_2) = L(α_1 x_1 + α_2 x_2) ∈ im(L).

Lemma. L is 1−1 ⇔ ker(L) = {0}.

Proof. ⇒ We know that L(0) = L(0 · 0) = 0 · L(0) = 0, so if L is 1−1 we have that L(x) = 0 = L(0) implies x = 0. Hence ker(L) = {0}.
⇐ Assume that ker(L) = {0}. If L(x_1) = L(x_2), then linearity of L tells us that L(x_1 − x_2) = 0. Then ker(L) = {0} implies x_1 − x_2 = 0, which shows that x_1 = x_2, as desired.

Lemma. L : V → W, and dim V = dim W. Then: L is 1−1 ⇔ L is onto ⇔ dim im(L) = dim V.

Proof. From the dimension formula, we have

dim V = dim ker(L) + dim im(L).

L is 1−1 ⇔ ker(L) = {0} ⇔ dim ker(L) = 0 ⇔ dim im(L) = dim V ⇔ dim im(L) = dim W ⇔ im(L) = W, that is, L is onto.

1.7 Dimension Formula

Theorem. Let V be finite dimensional and L : V → W a linear map, all over F. Then im(L) is finite dimensional and

dim_F V = dim_F ker(L) + dim_F im(L).

Proof. We know that dim ker(L) ≤ dim V and that ker(L) has a complement M of dimension k = dim V − dim ker(L). Since M ∩ ker(L) = {0}, the linear map L must be 1−1 when restricted to M. Thus L|_M : M → im(L) is an isomorphism, i.e. dim im(L) = dim M = k.

1.8 Matrix Calculations

Change of Basis Matrix. Given two bases of R², β_1 = {x_1 = (1, 1), x_2 = (1, 0)} and β_2 = {y_1 = (4, 3), y_2 = (3, 2)}, we find the change-of-basis matrix P from β_1 to β_2.
Write y_1 as a linear combination of x_1 and x_2: y_1 = a x_1 + b x_2. (4, 3) = a(1, 1) + b(1, 0) ⇒ a = 3, b = 1 ⇒ y_1 = 3x_1 + x_2.
Write y_2 as a linear combination of x_1 and x_2: y_2 = a x_1 + b x_2. (3, 2) = a(1, 1) + b(1, 0) ⇒ a = 2, b = 1 ⇒ y_2 = 2x_1 + x_2.
Write the coordinates of y_1 and y_2 as the columns of P:

P = ⎡ 3 2 ⎤
    ⎣ 1 1 ⎦.
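In matrix form, with the basis vectors as the columns of X and Y, the relation X P = Y determines P (an illustrative numpy sketch):

```python
import numpy as np

X = np.array([[1, 1], [1, 0]])        # columns x1, x2 of beta_1
Y = np.array([[4, 3], [3, 2]])        # columns y1, y2 of beta_2
P = np.linalg.solve(X, Y)             # column j of P solves X p = y_j
print(P)                              # [[3. 2.], [1. 1.]]
```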


1.9 Diagonalizability

Definition. Let T be a linear operator on the finite-dimensional space V. T is diagonalizable if there is a basis for V consisting of eigenvectors of T.

Theorem. Let v_1, . . . , v_n be nonzero eigenvectors of distinct eigenvalues λ_1, . . . , λ_n. Then {v_1, . . . , v_n} is linearly independent.

Alternative Statement. If L has n distinct eigenvalues λ_1, . . . , λ_n, then L is diagonalizable. (Proof is in the exercises.)

Definition. Let L be a linear operator on a finite-dimensional vector space V, and let λ be an eigenvalue of L. Define E_λ = {x ∈ V : L(x) = λx} = ker(L − λI_V). The set E_λ is called the eigenspace of L corresponding to the eigenvalue λ.
The algebraic multiplicity m of λ is defined to be the multiplicity of λ as a root of the characteristic polynomial of L, while the geometric multiplicity of λ is defined to be the dimension of its eigenspace, dim E_λ = dim(ker(L − λI_V)). The geometric multiplicity never exceeds the algebraic one:

dim(ker(L − λI_V)) ≤ m.

Eigenspaces. A vector v with (A − λI)v = 0 is an eigenvector for λ.

Generalized Eigenspaces. Let λ be an eigenvalue of A with algebraic multiplicity m. A vector v with (A − λI)^m v = 0 is a generalized eigenvector for λ.
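A short numerical illustration of the gap between algebraic and geometric multiplicity (an illustrative numpy sketch):

```python
import numpy as np

A = np.array([[1., 1.],
              [0., 1.]])               # algebraic multiplicity 2 for lambda = 1
lam, V = np.linalg.eig(A)
print(lam)                             # [1. 1.]
print(np.linalg.matrix_rank(V))        # 1: the eigenvectors span only a line,
                                       # so A is not diagonalizable
```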

2 Inner Product Spaces

2.1 Inner Products

The three important properties of a complex inner product are:
1) (x|x) = ||x||² > 0 unless x = 0;
2) (x|y) = \overline{(y|x)};
3) for each y ∈ V the map x → (x|y) is linear.
The inner product on Cⁿ is defined by

(x|y) = x^T ȳ = x_1 ȳ_1 + ··· + x_n ȳ_n.

Consequences: (α_1 x_1 + α_2 x_2 | y) = α_1 (x_1|y) + α_2 (x_2|y),
(x | β_1 y_1 + β_2 y_2) = \overline{β_1} (x|y_1) + \overline{β_2} (x|y_2),
(αx|αx) = α \overline{α} (x|x) = |α|² (x|x).

2.2 Orthonormal Bases

Lemma. Let e_1, . . . , e_n be orthonormal. Then e_1, . . . , e_n are linearly independent, and any element x ∈ span{e_1, . . . , e_n} has the expansion

x = (x|e_1)e_1 + ··· + (x|e_n)e_n.

Proof. Note that if x = α_1 e_1 + ··· + α_n e_n, then

(x|e_i) = (α_1 e_1 + ··· + α_n e_n | e_i) = α_1 (e_1|e_i) + ··· + α_n (e_n|e_i) = α_1 δ_{1i} + ··· + α_n δ_{ni} = α_i.


2.2.1 Gram-Schmidt procedure

Given a linearly independent set x_1, . . . , x_m in an inner product space V, it is possible to construct an orthonormal collection e_1, . . . , e_m such that span{x_1, . . . , x_m} = span{e_1, . . . , e_m}:

e_1 = x_1 / ||x_1||,

z_2 = x_2 − proj_{x_1}(x_2) = x_2 − proj_{e_1}(x_2) = x_2 − (x_2|e_1)e_1,   e_2 = z_2 / ||z_2||,

z_{k+1} = x_{k+1} − (x_{k+1}|e_1)e_1 − ··· − (x_{k+1}|e_k)e_k,   e_{k+1} = z_{k+1} / ||z_{k+1}||.
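A minimal implementation of the procedure (an illustrative numpy sketch of classical Gram-Schmidt; it assumes the inputs are linearly independent):

```python
import numpy as np

def gram_schmidt(xs):
    """Classical Gram-Schmidt: orthonormalize linearly independent real vectors."""
    es = []
    for x in xs:
        z = np.asarray(x, dtype=float).copy()
        for e in es:
            z -= np.dot(x, e) * e          # subtract the projection (x|e) e
        es.append(z / np.linalg.norm(z))
    return es

# The example below: x1 = (1,1,0), x2 = (1,0,1), x3 = (0,1,1).
e1, e2, e3 = gram_schmidt([[1, 1, 0], [1, 0, 1], [0, 1, 1]])
print(e2)   # approx ( 0.408, -0.408, 0.816) = (1/sqrt(6), -1/sqrt(6), 2/sqrt(6))
```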

2.2.2 QR Factorization

A = [x_1 ··· x_m] = [e_1 ··· e_m] ·

⎡ (x_1|e_1) (x_2|e_1) ··· (x_m|e_1) ⎤
⎢     0     (x_2|e_2) ··· (x_m|e_2) ⎥
⎢     ⋮         ⋮      ⋱      ⋮     ⎥
⎣     0         0     ··· (x_m|e_m) ⎦

= QR.

Example. Consider the vectors x_1 = (1, 1, 0), x_2 = (1, 0, 1), x_3 = (0, 1, 1) in R³. Perform Gram-Schmidt:

e_1 = x_1 / ||x_1|| = (1, 1, 0)/√2 = (1/√2, 1/√2, 0).

z_2 = (1, 0, 1) − (1/√2)(1/√2, 1/√2, 0) = (1/2, −1/2, 1),   e_2 = z_2 / ||z_2|| = (1/2, −1/2, 1)/√(3/2) = (1/√6, −1/√6, 2/√6).

z_3 = x_3 − (x_3|e_1)e_1 − (x_3|e_2)e_2 = (0, 1, 1) − (1/√2)(1/√2, 1/√2, 0) − (1/√6)(1/√6, −1/√6, 2/√6) = (−2/3, 2/3, 2/3),   e_3 = z_3 / ||z_3|| = (−1/√3, 1/√3, 1/√3).

2.3 Orthogonal Complements and Projections

The orthogonal projection of a vector x onto a nonzero vector y is defined by

proj_y(x) = (x | y/||y||) · y/||y|| = ((x|y)/(y|y)) y.

(The length of this projection is ||proj_y(x)|| = |(x|y)| / ||y||.)

The definition of proj_y(x) immediately implies that it is linear, from the linearity of the inner product in its first argument.

The map x → proj_y(x) is a projection.

Proof. We need to show proj_y(proj_y(x)) = proj_y(x):

proj_y(proj_y(x)) = proj_y( ((x|y)/(y|y)) y ) = ((x|y)/(y|y)) proj_y(y) = ((x|y)/(y|y)) · ((y|y)/(y|y)) y = ((x|y)/(y|y)) y = proj_y(x).


Cauchy-Schwarz Inequality. V a complex inner product space:

|(x|y)| ≤ ||x|| ||y||,   x, y ∈ V.

Proof. First show proj_y(x) ⊥ x − proj_y(x):

(proj_y(x) | x − proj_y(x)) = ( ((x|y)/||y||²) y | x − ((x|y)/||y||²) y )
= ( ((x|y)/||y||²) y | x ) − ( ((x|y)/||y||²) y | ((x|y)/||y||²) y )
= ((x|y)/||y||²)(y|x) − ((x|y)/||y||²) (\overline{(x|y)}/||y||²)(y|y)
= ((x|y)/||y||²)(y|x) − ((x|y)/||y||²) \overline{(x|y)} = 0,

since (y|x) = \overline{(x|y)}. Hence, by the Pythagorean theorem,

||x|| ≥ ||proj_y(x)|| = || ((x|y)/(y|y)) y || = |(x|y)/(y|y)| ||y|| = |(x|y)| / ||y||.

Triangle Inequality. V a complex inner product space:

||x + y|| ≤ ||x|| + ||y||.

Proof. ||x + y||² = (x + y | x + y) = ||x||² + 2 Re(x|y) + ||y||² ≤ ||x||² + 2|(x|y)| + ||y||² ≤ ||x||² + 2||x|| ||y|| + ||y||² = (||x|| + ||y||)².

Let M ⊂ V be a finite dimensional subspace of an inner product space, and e_1, . . . , e_m an orthonormal basis for M. Using that basis, define E : V → V by

E(x) = (x|e_1)e_1 + ··· + (x|e_m)e_m.

Note that E(x) ∈ M and that if x ∈ M, then E(x) = x. Thus E²(x) = E(x), implying that E is a projection whose image is M. If x ∈ ker(E), then

0 = E(x) = (x|e_1)e_1 + ··· + (x|e_m)e_m ⇒ (x|e_1) = ··· = (x|e_m) = 0.

This is equivalent to the condition (x|z) = 0 for all z ∈ M. The set of all such vectors is the orthogonal complement to M in V, denoted

M⊥ = {x ∈ V : (x|z) = 0 for all z ∈ M}.

Theorem. Let V be an inner product space. Assume V = M ⊕ M⊥; then im(proj_M) = M and ker(proj_M) = M⊥. If M ⊂ V is finite dimensional, then V = M ⊕ M⊥ and

proj_M(x) = (x|e_1)e_1 + ··· + (x|e_m)e_m

for any orthonormal basis e_1, . . . , e_m for M.

Proof. For E defined as above, ker(E) = M⊥: x = E(x) + (I − E)(x) with (I − E)(x) ∈ ker(E) = M⊥. Choose z ∈ M. Then

||x − proj_M(x)||² ≤ ||x − proj_M(x)||² + ||proj_M(x) − z||² = ||x − z||²,

where equality holds exactly when ||proj_M(x) − z||² = 0; i.e., proj_M(x) is the unique closest point to x among the points in M.
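The closest-point property is easy to test numerically (an illustrative numpy sketch):

```python
import numpy as np

def proj(es, x):
    """Orthogonal projection onto span(es), es an orthonormal list."""
    return sum(np.dot(x, e) * e for e in es)

# Project (1, 2, 3) onto the xy-plane; the projection is the closest point.
e1, e2 = np.array([1., 0., 0.]), np.array([0., 1., 0.])
x = np.array([1., 2., 3.])
p = proj([e1, e2], x)
print(p)                                    # [1. 2. 0.]
assert np.isclose(np.dot(x - p, p), 0.0)    # x - proj(x) is orthogonal to M
```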


Theorem. Let E : V → V be a projection onto M ⊂ V with the property that V = ker(E) ⊕ ker(E)⊥. Then the following conditions are equivalent:
1) E = proj_M;
2) im(E)⊥ = ker(E);
3) ||E(x)|| ≤ ||x|| for all x ∈ V.

Proof. We have already seen that (1) ⇔ (2). Also (1),(2) ⇒ (3), as x = E(x) + (I − E)(x) is an orthogonal decomposition, so ||x||² = ||E(x)||² + ||(I − E)(x)||² ≥ ||E(x)||².
Thus, we only need to show that (3) implies that E is orthogonal. Choose x ∈ ker(E)⊥ and observe that E(x) = x − (I − E)(x) is an orthogonal decomposition. Thus

||x||² ≥ ||E(x)||² = ||x − (I − E)(x)||² = ||x||² + ||(I − E)(x)||² ≥ ||x||².

This means that (I − E)(x) = 0 and hence x = E(x) ∈ im(E) ⇒ ker(E)⊥ ⊂ im(E).
Conversely, if z ∈ im(E) = M, then we can write z = x + y ∈ ker(E) ⊕ ker(E)⊥. This implies that z = E(z) = E(y) = y, where the last equality follows from ker(E)⊥ ⊂ im(E). This means that x = 0 and hence z = y ∈ ker(E)⊥.

3 Linear Maps on Inner Product Spaces

3.1 Adjoint Maps

The adjoint of A is the matrix A* with entries a*_{ij} = \overline{a_{ji}}, i.e. A* = \overline{A}^T. If A : F^m → F^n, then A* : F^n → F^m, and

(Ax|y) = (Ax)^T ȳ = x^T A^T ȳ = x^T \overline{(A* y)} = (x|A* y).

dim(M) + dim(M⊥) = dim(V) = dim(V′)

Theorem. Suppose S = {v_1, v_2, . . . , v_k} is an orthonormal set in an n-dimensional inner product space V. Then
a) S can be extended to an orthonormal basis {v_1, v_2, . . . , v_k, v_{k+1}, . . . , v_n} for V.
b) If M = span(S), then S_1 = {v_{k+1}, . . . , v_n} is an orthonormal basis for M⊥.
c) If M is any subspace of V, then dim(V) = dim(M) + dim(M⊥).

Proof. a) Extend S to a basis S′ = {v_1, v_2, . . . , v_k, w_{k+1}, . . . , w_n} for V and apply the Gram-Schmidt process to S′. The first k vectors resulting from this process are the vectors in S, and the resulting set spans V. Normalizing the last n − k vectors of this set produces an orthonormal set that spans V.
b) Because S_1 is a subset of a basis, it is linearly independent. Since S_1 is clearly a subset of M⊥, we need only show that it spans M⊥. For any x ∈ V, we have

x = ∑_{i=1}^n (x|v_i) v_i.

If x ∈ M⊥, then (x|v_i) = 0 for 1 ≤ i ≤ k. Therefore,

x = ∑_{i=k+1}^n (x|v_i) v_i ∈ span(S_1).

c) Let M be a subspace of V. It is a finite-dimensional inner product space because V is, and so it has an orthonormal basis {v_1, v_2, . . . , v_k}. By (a) and (b), we have

dim(V) = n = k + (n − k) = dim(M) + dim(M⊥).


Theorem. Let M be a subspace of V. Then V = M ⊕ M⊥.

Proof. By the Gram-Schmidt process, we can obtain an orthonormal basis {v_1, v_2, . . . , v_k} of M, and by the theorem above, we can extend it to an orthonormal basis {v_1, v_2, . . . , v_n} of V. Hence v_{k+1}, . . . , v_n ∈ M⊥. If x ∈ V, then

x = a_1 v_1 + ··· + a_n v_n, where a_1 v_1 + ··· + a_k v_k ∈ M and a_{k+1} v_{k+1} + ··· + a_n v_n ∈ M⊥.

Accordingly, V = M + M⊥. On the other hand, if x′ ∈ M ∩ M⊥, then (x′|x′) = 0. This yields x′ = 0. Hence M ∩ M⊥ = {0}.

Theorem. a) M ⊆ M⊥⊥.
b) If M is a subspace of a finite-dimensional space V, then M = M⊥⊥.

Proof. a) Let x ∈ M. Then (x|z) = 0 ∀z ∈ M⊥; hence x ∈ M⊥⊥.
b) V = M ⊕ M⊥ and also V = M⊥ ⊕ M⊥⊥. Hence dim M = dim V − dim M⊥ and dim M⊥⊥ = dim V − dim M⊥. This yields dim M = dim M⊥⊥. Since M ⊆ M⊥⊥ by (a), we have M = M⊥⊥.

Fredholm Alternative. Let L : V → W be a linear map between finite dimensional inner product spaces. Then

ker(L) = im(L*)⊥,
ker(L*) = im(L)⊥,
ker(L)⊥ = im(L*),
ker(L*)⊥ = im(L).

Proof. Using L** = L and M⊥⊥ = M, the four statements are equivalent, so we prove the first.

ker L = {x ∈ V : Lx = 0},          V →^L W,
im(L) = {Lx : x ∈ V},              W →^{L*} V,
im(L*) = {L*y : y ∈ W},
im(L*)⊥ = {x ∈ V : (x|L*y) = 0 for all y ∈ W} = {x ∈ V : (Lx|y) = 0 for all y ∈ W}.

If x ∈ ker L, then x ∈ im(L*)⊥. Conversely, if (Lx|y) = 0 for all y ∈ W, then Lx = 0, so x ∈ ker L.

Rank Theorem. Let L : V → W be a linear map between finite dimensional inner product spaces. Then

rank(L) = rank(L*).

Proof. dim V = dim(ker(L)) + dim(im(L)) = dim(im(L*)⊥) + dim(im(L)) = dim V − dim(im(L*)) + dim(im(L)), so dim(im(L*)) = dim(im(L)).

Corollary. For an n × m matrix A, the column rank equals the row rank.

Proof. Conjugation does not change the rank. rank(A) is the column rank of A, and rank(A*) is the row rank of the conjugate of A.

Corollary. Let L : V → V be a linear operator on a finite dimensional inner product space. Then λ is an eigenvalue for L ⇔ λ̄ is an eigenvalue for L*. Moreover, these eigenvalue pairs have the same geometric multiplicity:

dim(ker(L − λI_V)) = dim(ker(L* − λ̄I_V)).

Proof. Note that (L − λI_V)* = L* − λ̄I_V. Thus we only need to show dim(ker(L)) = dim(ker(L*)):

dim(ker(L)) = dim V − dim(im(L)) = dim V − dim(im(L*)) = dim(ker(L*)).


3.2 Self-Adjoint Maps

A linear operator L : V → V is self-adjoint (Hermitian) if L* = L, and skew-adjoint if L* = −L.

Theorem. If L is a self-adjoint operator on a finite-dimensional inner product space V, then every eigenvalue of L is real.

Proof. Method I: Suppose L is a self-adjoint operator on V. Let λ be an eigenvalue of L, and let x be a nonzero vector in V such that Lx = λx. Then

λ(x|x) = (λx|x) = (Lx|x) = (x|L*x) = (x|Lx) = (x|λx) = λ̄(x|x).

Thus λ = λ̄, which means that λ is real.

Proof. Method II: Suppose that L(x) = λx for x ≠ 0. Because a self-adjoint operator is normal, if x is an eigenvector of L with eigenvalue λ, then x is also an eigenvector of L* with eigenvalue λ̄. Thus, λx = L(x) = L*(x) = λ̄x, so λ = λ̄.

Proposition. If L is self- or skew-adjoint, then for each invariant subspace M ⊂ V the orthogonal complement is also invariant; i.e., if L(M) ⊂ M, then also L(M⊥) ⊂ M⊥.

Proof. Assume that L(M) ⊂ M. If x ∈ M and z ∈ M⊥, then since L(x) ∈ M we have

0 = (z|L(x)) = (L*(z)|x) = ±(L(z)|x).

Since this holds ∀x ∈ M, it follows that L(z) ∈ M⊥.

3.3 Polarization and Isometries

Real inner product on V:

(x + y|x + y) = (x|x) + 2(x|y) + (y|y)
⇒ (x|y) = ½((x + y|x + y) − (x|x) − (y|y)) = ½(||x + y||² − ||x||² − ||y||²).

Complex inner products (which are only conjugate symmetric) on V:

(x + y|x + y) = (x|x) + 2 Re(x|y) + (y|y)
⇒ Re(x|y) = ½(||x + y||² − ||x||² − ||y||²).

Re(x|iy) = Re(−i(x|y)) = Im(x|y). In particular, we have

Im(x|y) = ½(||x + iy||² − ||x||² − ||iy||²).

We can use these ideas to check when a linear operator L : V → V is 0. First note that L = 0 ⇔ (L(x)|y) = 0 ∀x, y ∈ V. To check the ⇐ part, let y = L(x) to see that ||L(x)||² = 0 ∀x ∈ V.

Theorem. Let L : V → V be self-adjoint. Then L = 0 ⇔ (L(x)|x) = 0 ∀x ∈ V.

Proof. ⇒ If L = 0, then (L(x)|x) = 0 ∀x ∈ V.
⇐ Assume that (L(x)|x) = 0 ∀x ∈ V. Then

0 = (L(x + y)|x + y) = (L(x)|x) + (L(x)|y) + (L(y)|x) + (L(y)|y)
= (L(x)|y) + (y|L*(x)) = (L(x)|y) + (y|L(x)) = (L(x)|y) + \overline{(L(x)|y)} = 2 Re(L(x)|y).

Now insert y = L(x) to see that 0 = Re(L(x)|L(x)) = ||L(x)||².


Theorem. Let L : V → V be a linear map on a complex inner product space. Then L = 0 ⇔ (L(x)|x) = 0 ∀x ∈ V.

Proof. ⇒ If L = 0, then (L(x)|x) = 0 ∀x ∈ V.
⇐ Assume that (L(x)|x) = 0 ∀x ∈ V. Then

0 = (L(x + y)|x + y) = (L(x)|x) + (L(x)|y) + (L(y)|x) + (L(y)|y) = (L(x)|y) + (L(y)|x),
0 = (L(x + iy)|x + iy) = (L(x)|x) + (L(x)|iy) + (L(iy)|x) + (L(iy)|iy) = −i(L(x)|y) + i(L(y)|x).

⇒ ⎡ 1  1 ⎤ ⎡ (L(x)|y) ⎤ = ⎡ 0 ⎤
  ⎣ −i i ⎦ ⎣ (L(y)|x) ⎦   ⎣ 0 ⎦.

Since the columns of the matrix on the left are linearly independent, the only solution is the trivial one. In particular (L(x)|y) = 0.

3.4 Unitary and Orthogonal Operators

A linear transformation A is orthogonal if AA^T = I, and unitary if AA* = I, i.e. A* = A^{−1}.

Theorem. L : V → W is a linear map between inner product spaces. TFAE:
1) L*L = I_V (L is unitary);
2) (L(x)|L(y)) = (x|y) ∀x, y ∈ V (L preserves inner products);
3) ||L(x)|| = ||x|| ∀x ∈ V (L preserves lengths).

Proof. (1) ⇒ (2): L*L = I_V ⇒ (L(x)|L(y)) = (x|L*L(y)) = (x|Iy) = (x|y) ∀x, y ∈ V. Also note: L takes orthonormal sets of vectors to orthonormal sets of vectors.
(2) ⇒ (3): (L(x)|L(y)) = (x|y) ∀x, y ∈ V ⇒ ||L(x)|| = √(L(x)|L(x)) = √(x|x) = ||x||.
(3) ⇒ (1): ||L(x)|| = ||x|| ∀x ∈ V ⇒ (L*L(x)|x) = (L(x)|L(x)) = (x|x) = (Ix|x) ⇒ ((L*L − I)(x)|x) = 0 ∀x ∈ V. Since L*L − I is self-adjoint (check), L*L = I.

Two inner product spaces V and W over F are isometric if we can find an isometry L : V → W, i.e. an isomorphism such that (L(x)|L(y)) = (x|y).

Theorem. Suppose L is unitary; then L is an isometry on V.

Proof. An isometry on V is a mapping which preserves distances. Since L is unitary, ||L(x) − L(y)|| = ||L(x − y)|| = ||x − y||. Thus L is an isometry.


3.5 Spectral Theorem

Theorem. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then we can find a real eigenvalue λ for L.

Spectral Theorem. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then there exists an orthonormal basis e_1, . . . , e_n of eigenvectors, i.e. L(e_1) = λ_1 e_1, . . . , L(e_n) = λ_n e_n. Moreover, all eigenvalues λ_1, . . . , λ_n are real.

Proof. Prove this by induction on dim V.
Since L = L*, we can find v ∈ V, λ ∈ R such that L(v) = λv (Lagrange multipliers). Let v⊥ = {x ∈ V : (x|v) = 0}, the orthogonal complement of v; dim v⊥ = dim V − 1. We show L leaves v⊥ invariant, i.e. L(v⊥) ⊂ v⊥. Let x ∈ v⊥; then

(L(x)|v) = (x|L*(v)) = (x|L(v)) = (x|λv) = λ(x|v) = 0   (using λ ∈ R).

Thus L|_{v⊥} : v⊥ → v⊥ is again self-adjoint, because (L(x)|y) = (x|L(y)) ∀x, y ∈ V, in particular for x, y ∈ v⊥. Let e_1 = v/||v||, and by induction let e_2, . . . , e_n be an orthonormal basis for v⊥ with L(e_i) = λ_i e_i, i = 2, . . . , n. Check: (e_1|e_i) = 0 ∀i ≥ 2, since e_i ∈ v⊥ = e_1⊥, i = 2, . . . , n.

Corollary. Let L : V → V be a self-adjoint operator on a finite dimensional inner product space. Then there exists an orthonormal basis e_1, . . . , e_n of eigenvectors and a real n × n diagonal matrix D such that

L = [e_1 ··· e_n] D [e_1 ··· e_n]*,   D = diag(λ_1, . . . , λ_n).
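This corollary is exactly what numpy.linalg.eigh computes for symmetric/Hermitian input (an illustrative sketch):

```python
import numpy as np

A = np.array([[2., 1., 0.],
              [1., 3., 1.],
              [0., 1., 2.]])          # real symmetric, hence self-adjoint
lam, U = np.linalg.eigh(A)            # real eigenvalues, orthonormal eigenvectors
assert np.allclose(U @ np.diag(lam) @ U.T, A)   # A = U D U*
assert np.allclose(U.T @ U, np.eye(3))          # columns are orthonormal
```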

3.6 Normal Operators

An operator L : V → V on an inner product space is normal if LL* = L*L. Self-adjoint, skew-adjoint and isometric operators are normal. On complex spaces, these are precisely the operators that admit an orthonormal basis that diagonalizes them.

Proposition. LL* = L*L ⇔ ||L(x)|| = ||L*(x)|| for all x ∈ V.

Proof. ||L(x)|| = ||L*(x)|| ⇔ ||L(x)||² = ||L*(x)||² ⇔ (L(x)|L(x)) = (L*(x)|L*(x)) ⇔ (x|L*L(x)) = (x|LL*(x)) ⇔ (x|(L*L − LL*)(x)) = 0 ⇔ L*L − LL* = 0, since L*L − LL* is self-adjoint.

Theorem. If V is a complex inner product space and L : V → V is normal, then ker(L − λI_V) = ker(L* − λ̄I_V) for all λ ∈ C.

Proof. Observe that L − λI_V is normal and use the previous proposition to conclude that ||(L − λI_V)(x)|| = ||(L* − λ̄I_V)(x)||.


Spectral Theorem for Normal Operators. Let L : V → V be a normal operator on a complex inner product space. Then there exists an orthonormal basis e_1, . . . , e_n such that L(e_1) = λ_1 e_1, . . . , L(e_n) = λ_n e_n.

Proof. Prove this by induction on dim V.
Since L is complex linear, we can use the Fundamental Theorem of Algebra to find λ ∈ C and x ∈ V \ {0} so that L(x) = λx ⇒ L*(x) = λ̄x (ker(L − λI_V) = ker(L* − λ̄I_V)).
Let x⊥ = {z ∈ V : (z|x) = 0}, the orthogonal complement of x. To get the induction, we need to show that x⊥ is invariant under L, i.e. L(x⊥) ⊂ x⊥. Let z ∈ x⊥ and show Lz ∈ x⊥:

(L(z)|x) = (z|L*(x)) = (z|λ̄x) = λ(z|x) = 0.

Check that L|_{x⊥} is normal. Similarly, x⊥ is invariant under L*, i.e. L* : x⊥ → x⊥, since

(L*(z)|x) = (z|L(x)) = (z|λx) = λ̄(z|x) = 0.

⇒ L*|_{x⊥} = (L|_{x⊥})*, since (L(z)|y) = (z|L*y) for z, y ∈ x⊥.

3.7 Unitary Equivalence

Two n × n matrices A and B are unitarily equivalent if A = UBU*, where U is an n × n matrix such that U*U = UU* = I_{F^n}.

Corollary. (n × n matrices)
1. A normal matrix is unitarily equivalent to a diagonal matrix.
2. A self-adjoint matrix is unitarily (or, in the real case, orthogonally) equivalent to a real diagonal matrix.
3. A skew-adjoint matrix is unitarily equivalent to a purely imaginary diagonal matrix.
4. A unitary matrix is unitarily equivalent to a diagonal matrix whose diagonal elements are unit scalars.

3.8 Triangulability

Schur's Theorem. Let L : V → V be a linear operator on a finite dimensional complex inner product space. Then we can find an orthonormal basis e_1, . . . , e_n such that the matrix representation [L] is upper triangular in this basis, i.e.,

L = [e_1 ··· e_n] [L] [e_1 ··· e_n]*, where

[L] = ⎡ α_11 α_12 ··· α_1n ⎤
      ⎢  0   α_22 ··· α_2n ⎥
      ⎢  ⋮    ⋮    ⋱    ⋮  ⎥
      ⎣  0    0   ··· α_nn ⎦.
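If scipy is available, scipy.linalg.schur computes this factorization directly (an illustrative sketch; output='complex' requests the triangular complex form):

```python
import numpy as np
from scipy.linalg import schur

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
T, Q = schur(A, output='complex')     # A = Q T Q*, Q unitary, T upper triangular
assert np.allclose(Q @ T @ Q.conj().T, A)
assert np.allclose(np.tril(T, -1), 0) # strictly lower part vanishes
```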


Generalized Schur's Theorem. Let L : V → V be a linear operator on an n-dimensional vector space over F. Assume that χ_L(t) = (t − λ_1) ··· (t − λ_n) for λ_1, . . . , λ_n ∈ F. Then V admits a basis x_1, . . . , x_n such that the matrix representation with respect to x_1, . . . , x_n is upper triangular.

Proof. The proof is by induction on the dimension n of V. The result is immediate if n = 1. So suppose that the result is true for linear operators on (n − 1)-dimensional inner product spaces whose characteristic polynomials split. We can assume that L* has a unit eigenvector z. Suppose that L*(z) = λz and that W = span({z}). We show that W⊥ is L-invariant. If y ∈ W⊥ and x = cz ∈ W, then

(L(y)|x) = (L(y)|cz) = (y|L*(cz)) = (y|cL*(z)) = (y|cλz) = \overline{cλ}(y|z) = 0.

So L(y) ∈ W⊥. It is easy to show that the characteristic polynomial of L|_{W⊥} divides the characteristic polynomial of L and hence splits. Since dim(W⊥) = n − 1, we may apply the induction hypothesis to L|_{W⊥} and obtain an orthonormal basis γ of W⊥ such that [L|_{W⊥}]_γ is upper triangular. Clearly, β = γ ∪ {z} is an orthonormal basis for V such that [L]_β is upper triangular.

4 Determinants

4.1 Characteristic Polynomial

The characteristic polynomial of A is defined as χ_A(t) = tⁿ + α_{n−1}t^{n−1} + ··· + α_1 t + α_0.

The characteristic polynomial of L : V → V can be defined by

χ_L(t) = det(L − tI_V).

Facts: det L^{−1} = 1/det L; det A = det A^T.
If A is orthogonal, det A = ±1, since det(I) = det(AA^T) = det(A) det(A^T) = (det A)².
If U is unitary, |det U| = 1, and all |λ_i| = 1.


5 Linear Operators

5.1 Dual Spaces

For a vector space V over F, we define the dual space V′ = hom(V, F) as the set of linear functions on V, i.e. V′ = {f : V → F | f is linear}.
Let x_1, . . . , x_n be a basis for V. For each i, there is a unique linear functional f_i on V s.t.

f_i(x_j) = δ_ij.

In this way we obtain from x_1, . . . , x_n a set of n distinct linear functionals f_1, . . . , f_n on V. These functionals are also linearly independent. For, suppose

f = ∑_{i=1}^n c_i f_i.

Then

f(x_j) = ∑_{i=1}^n c_i f_i(x_j) = ∑_{i=1}^n c_i δ_ij = c_j.

In particular, if f is the zero functional, then f(x_j) = 0 for each j, and hence the scalars c_j are all 0. Now f_1, . . . , f_n are n linearly independent functionals, and since we know that V′ has dimension n, it must be that f_1, . . . , f_n is a basis for V′. This basis is called the dual basis.
We have shown that ∃! dual basis {f_1, . . . , f_n} for V′. If f is a linear functional on V, then f is some linear combination of the f_i, and the scalars c_j must be given by c_j = f(x_j):

f = ∑_{i=1}^n f(x_i) f_i.

Similarly, if

x = ∑_{i=1}^n α_i x_i

is a vector in V, then

f_j(x) = ∑_{i=1}^n α_i f_j(x_i) = ∑_{i=1}^n α_i δ_ij = α_j,

so that the unique expression for x as a linear combination of the x_i is

x = ∑_{i=1}^n f_i(x) x_i,

and f_i(x) = α_i = the i-th coordinate of x.
Let M ⊂ V be a subspace and define the annihilator² of M in V as

M° = {f ∈ V′ : f(x) = 0 for all x ∈ M} = {f ∈ V′ : f(M) = {0}} = {f ∈ V′ : f|_M = 0}.

² The annihilator is the counterpart of the orthogonal complement.


Example. Let β = {x_1, x_2} = {(2, 1), (3, 1)} be a basis for R² (write vectors as (ξ_1, ξ_2)). We find the dual basis of β, given by β′ = {f_1, f_2}. To determine formulas for f_1 and f_2, we seek functionals f_1(ξ_1, ξ_2) = a_1 ξ_1 + a_2 ξ_2 and f_2(ξ_1, ξ_2) = b_1 ξ_1 + b_2 ξ_2 such that f_1(x_1) = 1, f_1(x_2) = 0, f_2(x_1) = 0, f_2(x_2) = 1. Thus

1 = f_1(x_1) = f_1(2, 1) = 2a_1 + a_2        0 = f_2(x_1) = f_2(2, 1) = 2b_1 + b_2
0 = f_1(x_2) = f_1(3, 1) = 3a_1 + a_2        1 = f_2(x_2) = f_2(3, 1) = 3b_1 + b_2

The solutions yield a_1 = −1, a_2 = 3 and b_1 = 1, b_2 = −2. Hence f_1(ξ_1, ξ_2) = −ξ_1 + 3ξ_2 and f_2(ξ_1, ξ_2) = ξ_1 − 2ξ_2, or f_1 = (−1, 3), f_2 = (1, −2), form the dual basis.
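In matrix terms, if the basis vectors are the columns of X, the dual basis functionals are the rows of X^{−1}, since X^{−1}X = I encodes f_i(x_j) = δ_ij (an illustrative numpy sketch):

```python
import numpy as np

X = np.array([[2., 3.],
              [1., 1.]])              # columns: x1 = (2,1), x2 = (3,1)
F = np.linalg.inv(X)                  # rows: f1, f2 with f_i(x_j) = delta_ij
print(F)                              # [[-1.  3.], [ 1. -2.]]
assert np.allclose(F @ X, np.eye(2))
```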

Example. Let β = {x_1, x_2, x_3} = {(1, 0, 1), (0, 2, 0), (−1, 0, 2)} be a basis for R³ (write vectors as (ξ_1, ξ_2, ξ_3)). We find the dual basis of β, given by β′ = {f_1, f_2, f_3}. To determine formulas for f_1, f_2, f_3, we seek functionals f_1(ξ_1, ξ_2, ξ_3) = a_1 ξ_1 + a_2 ξ_2 + a_3 ξ_3, f_2(ξ_1, ξ_2, ξ_3) = b_1 ξ_1 + b_2 ξ_2 + b_3 ξ_3, and f_3(ξ_1, ξ_2, ξ_3) = c_1 ξ_1 + c_2 ξ_2 + c_3 ξ_3 such that f_i(x_j) = δ_ij:

1 = f_1(x_1) = f_1(1, 0, 1) = a_1 + a_3
0 = f_1(x_2) = f_1(0, 2, 0) = 2a_2
0 = f_1(x_3) = f_1(−1, 0, 2) = −a_1 + 2a_3

0 = f_2(x_1) = f_2(1, 0, 1) = b_1 + b_3
1 = f_2(x_2) = f_2(0, 2, 0) = 2b_2
0 = f_2(x_3) = f_2(−1, 0, 2) = −b_1 + 2b_3

0 = f_3(x_1) = f_3(1, 0, 1) = c_1 + c_3
0 = f_3(x_2) = f_3(0, 2, 0) = 2c_2
1 = f_3(x_3) = f_3(−1, 0, 2) = −c_1 + 2c_3

Thus a_1 = 2/3, a_2 = 0, a_3 = 1/3; b_1 = 0, b_2 = 1/2, b_3 = 0; c_1 = −1/3, c_2 = 0, c_3 = 1/3. Hence

f_1(ξ_1, ξ_2, ξ_3) = (2/3)ξ_1 + (1/3)ξ_3,   f_2(ξ_1, ξ_2, ξ_3) = (1/2)ξ_2,   f_3(ξ_1, ξ_2, ξ_3) = −(1/3)ξ_1 + (1/3)ξ_3,

or f_1 = (2/3, 0, 1/3), f_2 = (0, 1/2, 0), f_3 = (−1/3, 0, 1/3), form the dual basis.

Example. Let W be the subspace of R⁴ spanned by x_1 = (1, 2, −3, 4) and x_2 = (0, 1, 4, −1). We find a basis for W°, the annihilator of W.
It suffices to find a basis of the set of linear functionals f(ξ_1, ξ_2, ξ_3, ξ_4) = a_1 ξ_1 + a_2 ξ_2 + a_3 ξ_3 + a_4 ξ_4 for which f(x_1) = 0 and f(x_2) = 0:

f(1, 2, −3, 4) = a_1 + 2a_2 − 3a_3 + 4a_4 = 0,
f(0, 1, 4, −1) = a_2 + 4a_3 − a_4 = 0.

The system of equations in a_1, a_2, a_3, a_4 is in echelon form with free variables a_3 and a_4.
Set a_3 = 1, a_4 = 0 to obtain a_1 = 11, a_2 = −4, a_3 = 1, a_4 = 0 ⇒ f_1(ξ_1, ξ_2, ξ_3, ξ_4) = 11ξ_1 − 4ξ_2 + ξ_3.
Set a_3 = 0, a_4 = −1 to obtain a_1 = 6, a_2 = −1, a_3 = 0, a_4 = −1 ⇒ f_2(ξ_1, ξ_2, ξ_3, ξ_4) = 6ξ_1 − ξ_2 − ξ_4.
The set of linear functionals {f_1, f_2} is a basis of W°.


Example. Given the annihilator described by the three linear functionals on R⁴

f_1(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_1 + 2ξ_2 + 2ξ_3 + ξ_4,
f_2(ξ_1, ξ_2, ξ_3, ξ_4) = 2ξ_2 + ξ_4,
f_3(ξ_1, ξ_2, ξ_3, ξ_4) = −2ξ_1 − 4ξ_3 + 3ξ_4,

we find the subspace they annihilate.
After row reduction, we find that the functionals below annihilate the same subspace:

g_1(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_1 + 2ξ_3,
g_2(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_2,
g_3(ξ_1, ξ_2, ξ_3, ξ_4) = ξ_4.

The subspace annihilated consists of the vectors with ξ_1 = −2ξ_3, ξ_2 = ξ_4 = 0. Thus the subspace that is annihilated is given by span{(−2, 0, 1, 0)}.

Proposition. If M ⊂ V is a subspace of a finite dimensional space and x_1, . . . , x_n is a basis for V such that M = span{x_1, . . . , x_m}, then M° = span{f_{m+1}, . . . , f_n}, where f_1, . . . , f_n is the dual basis. In particular we have

dim(M) + dim(M°) = dim(V) = dim(V′).

Proof. Let x_1, . . . , x_m be a basis for M; M = span{x_1, . . . , x_m}. Extend it to {x_1, . . . , x_n}, a basis for V. Construct a dual basis f_1, . . . , f_n for V′, f_i(x_j) = δ_ij.
We show that f_{m+1}, . . . , f_n is a basis for M°. First, show M° = span{f_{m+1}, . . . , f_n}. Let f ∈ M° ⊂ V′. Then

f = ∑_{i=1}^n c_i f_i = ∑_{i=1}^n f(x_i) f_i = ∑_{i=1}^m f(x_i) f_i + ∑_{i=m+1}^n f(x_i) f_i = ∑_{i=m+1}^n f(x_i) f_i ∈ span{f_{m+1}, . . . , f_n}.

Second, {f_{m+1}, . . . , f_n} are linearly independent, since {f_{m+1}, . . . , f_n} is a subset of a basis for V′. Thus, dim(M°) = n − m = dim(V) − dim(M).

Theorem. W_1 and W_2 are subspaces of a finite-dimensional vector space. Then W_1 = W_2 ⇔ W_1° = W_2°.

Proof. ⇒ If W_1 = W_2, then of course W_1° = W_2°.
⇐ If W_1 ≠ W_2, then one of the two subspaces contains a vector which is not in the other. Suppose there is a vector x ∈ W_2 but x ∉ W_1. There is a linear functional f such that f(z) = 0 for all z ∈ W_1, but f(x) ≠ 0. Then f ∈ W_1° but f ∉ W_2°, and W_1° ≠ W_2°.

Theorem. Let W be a subspace of a finite-dimensional vector space V. Then W = W°°.

Proof. dim W + dim W° = dim V and dim W° + dim W°° = dim V′, and since dim V = dim V′ we have dim W = dim W°°. Since W ⊂ W°°, we see that W = W°°.


Proposition. Assume that the finite dimensional space V = M ⊕ N. Then also V′ = M° ⊕ N°, and the restriction maps V′ → M′ and V′ → N′ give isomorphisms

M° ≈ N′,   N° ≈ M′.

Proof. Select a basis x_1, . . . , x_n for V such that M = span{x_1, . . . , x_m} and N = span{x_{m+1}, . . . , x_n}. Then let f_1, . . . , f_n be the dual basis and simply observe that M° = span{f_{m+1}, . . . , f_n} and N° = span{f_1, . . . , f_m}. This proves that V′ = M° ⊕ N°.
Next we note that

dim(M°) = dim(V) − dim(M) = dim(N) = dim(N′).

So at least M° and N′ have the same dimension. Also, if we restrict f_{m+1}, . . . , f_n to N, then we still have f_i(x_j) = δ_ij for j = m + 1, . . . , n. As N = span{x_{m+1}, . . . , x_n}, this means that f_{m+1}|_N, . . . , f_n|_N form a basis for N′. The proof that N° ≈ M′ is similar.


5.2 Dual Maps

The dual space construction leads to a dual map L′ : W′ → V′ for a linear map L : V → W. This dual map is a substitute for the adjoint of L and is related to the transpose of the matrix representation of L. The definition is L′(g) = g ◦ L. Thus if g ∈ W′, we get a linear function g ◦ L : V → F, since L : V → W. The dual of L is often denoted L′ = L^t. The dual map satisfies (L(x)|g) = (x|L′(g)) for all x ∈ V and g ∈ W′, where (x|f) denotes the natural pairing f(x).

Generalized Fredholm Alternative. Let L : V → W be a linear map between finite dimensional vector spaces. Then

ker(L) = im(L′)°,
ker(L′) = im(L)°,
ker(L)° = im(L′),
ker(L′)° = im(L).

Proof. Using L′′ = L and M°° = M, the four statements are equivalent, so we prove the first.

ker L = {x ∈ V : Lx = 0},
im(L) = {Lx : x ∈ V},
im(L′) = {L′(g) : g ∈ W′},
im(L′)° = {x ∈ V : (x|L′(g)) = 0 for all g ∈ W′} = {x ∈ V : g(L(x)) = 0 for all g ∈ W′}.

If x ∈ ker L, then x ∈ im(L′)°. Conversely, if g(L(x)) = 0 for all g ∈ W′, then Lx = 0, so x ∈ ker L.

Rank Theorem. Let L : V → W be a linear map between finite dimensional vector spaces. Then

rank(L) = rank(L′).

Proof. dim V = dim(ker(L)) + dim(im(L)) = dim(im(L′)°) + dim(im(L)) = dim V − dim(im(L′)) + dim(im(L)), so dim(im(L′)) = dim(im(L)).


6 Problems

Cross Product: a = (a_1, a_2, a_3), b = (b_1, b_2, b_3):

a × b = det ⎡  i   j   k  ⎤
            ⎢ a_1 a_2 a_3 ⎥ = (a_2 b_3 − a_3 b_2) i + (a_3 b_1 − a_1 b_3) j + (a_1 b_2 − a_2 b_1) k
            ⎣ b_1 b_2 b_3 ⎦

      = ( det ⎡ a_2 a_3 ⎤ , det ⎡ a_3 a_1 ⎤ , det ⎡ a_1 a_2 ⎤ ).
              ⎣ b_2 b_3 ⎦       ⎣ b_3 b_1 ⎦       ⎣ b_1 b_2 ⎦
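numpy.cross implements exactly this formula (an illustrative sketch):

```python
import numpy as np

a, b = np.array([1., 2., 3.]), np.array([4., 5., 6.])
c = np.cross(a, b)
print(c)                              # [-3.  6. -3.] = (a2 b3 - a3 b2, ...)
assert np.isclose(np.dot(c, a), 0)    # a x b is orthogonal to both factors
assert np.isclose(np.dot(c, b), 0)
```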

Problem (F'03, #9). Consider a 3 × 3 real symmetric matrix with determinant 6. Assume (1, 2, 3) and (0, 3, −2) are eigenvectors with eigenvalues 1 and 2.
a) Give an eigenvector of the form (1, x, y), for some real x, y, which is linearly independent of the two vectors above.
b) What is the eigenvalue of this eigenvector?

Proof. a) Since A is real and symmetric, A is self-adjoint, so eigenvectors of distinct eigenvalues are orthogonal; indeed v_1 · v_2 = (1, 2, 3) · (0, 3, −2) = 0. We take the cross product of v_1 and v_2 to obtain a third, linearly independent eigenvector v_3:

v_3 = v_1 × v_2 = det ⎡ i j  k ⎤
                      ⎢ 1 2  3 ⎥ = (2·(−2) − 3·3, 3·0 − 1·(−2), 1·3 − 2·0) = (−13, 2, 3).
                      ⎣ 0 3 −2 ⎦

Thus, the required vector is e_3 = v_3 / (−13) = (1, −2/13, −3/13).
b) Since A is self-adjoint, by the spectral theorem there exists an orthonormal basis of eigenvectors e_1, e_2, e_3 and a real diagonal matrix D such that

A = ODO* = [e_1 e_2 e_3] diag(λ_1, λ_2, λ_3) [e_1 e_2 e_3]*.

Since O is orthogonal, OO* = I, i.e. O* = O^{−1}, and A = ODO^{−1}. Note det A = det(ODO^{−1}) = det(O) det(D) det(O^{−1}) = det(O) det(D) (1/det(O)) = det(D) = λ_1 λ_2 λ_3.
Thus 6 = det A = det D = λ_1 λ_2 λ_3, and since λ_1 = 1, λ_2 = 2, we get λ_3 = 3.


Problem (S'02, #9). Find the matrix representation in the standard basis for either rotation by an angle θ in the plane perpendicular to the subspace spanned by the vectors (1, 1, 1, 1) and (1, 1, 1, 0) in R⁴.

Proof. x_1 = (1, 1, 1, 1), x_2 = (1, 1, 1, 0). We orthonormalize: span{x_1, x_2} = span{e_1, e_2}, and the rotation happens in the orthogonal complement span{e_3, e_4}:

T = [e_1 e_2 e_3 e_4] ⎡ 1 0    0       0    ⎤ [e_1 e_2 e_3 e_4]*.
                      ⎢ 0 1    0       0    ⎥
                      ⎢ 0 0 cos(θ) ∓sin(θ)  ⎥
                      ⎣ 0 0 ±sin(θ)  cos(θ) ⎦

Gram-Schmidt:

e_1 = ½ (1, 1, 1, 1),

z_2 = x_2 − (x_2|e_1)e_1 = (1, 1, 1, 0) − ( (1, 1, 1, 0) · ½(1, 1, 1, 1) ) ½(1, 1, 1, 1) = ¼ (1, 1, 1, −3),

⇒ e_2 = z_2 / ||z_2|| = (1/(2√3)) (1, 1, 1, −3).

For the orthogonal complement, (1, −1, 0, 0) and (1, 1, −2, 0) form a basis of span{e_1, e_2}⊥, so

e_3 = (1/√2) (1, −1, 0, 0),   e_4 = (1/√6) (1, 1, −2, 0).
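A sketch of the construction in numpy (illustrative; θ = π/5 is an arbitrary sample angle):

```python
import numpy as np

theta = np.pi / 5                      # sample angle
e1 = np.array([1., 1., 1., 1.]) / 2
e2 = np.array([1., 1., 1., -3.]) / (2 * np.sqrt(3))
e3 = np.array([1., -1., 0., 0.]) / np.sqrt(2)
e4 = np.array([1., 1., -2., 0.]) / np.sqrt(6)
E = np.column_stack([e1, e2, e3, e4])

c, s = np.cos(theta), np.sin(theta)
R = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, c, -s],
              [0, 0, s, c]])
T = E @ R @ E.T                        # the rotation in the standard basis
assert np.allclose(T @ e1, e1)         # the plane span{e1, e2} is fixed
assert np.allclose(T @ e2, e2)
```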


Problem (F'01, #8). T : R³ → R³ is the rotation by 60° counterclockwise about the plane perpendicular to (1, 1, 1). S : R³ → R³ is the reflection about the plane perpendicular to (1, 0, 1). Determine the matrix representation of S ◦ T in the standard basis {(1, 0, 0), (0, 1, 0), (0, 0, 1)}.

Proof. Rotation:

T = [e_1 e_2 e_3] ⎡ 1    0       0    ⎤ [e_1 e_2 e_3]* = [e_1 e_2 e_3] ⎡ 1  0     0   ⎤ [e_1 e_2 e_3]*,
                  ⎢ 0 cos(θ) −sin(θ)  ⎥                                ⎢ 0 1/2  −√3/2 ⎥
                  ⎣ 0 sin(θ)  cos(θ)  ⎦                                ⎣ 0 √3/2  1/2  ⎦

with e_1, e_2, e_3 an orthonormal basis; e_1 = e_2 × e_3, e_2 = e_3 × e_1, e_3 = e_1 × e_2:

e_1 = (1/√3)(1, 1, 1),   e_2 = (1/√2)(1, −1, 0),   e_3 = ±(1/√6)(1, 1, −2).

Check: e_1 × e_2 = (1/√3)(1/√2) det[i j k; 1 1 1; 1 −1 0] = (1/√6)(1, 1, −2), so e_3 = +(1/√6)(1, 1, −2).

T = ⎡ 1/√3  1/√2  1/√6 ⎤ ⎡ 1  0     0   ⎤ ⎡ 1/√3  1/√3   1/√3 ⎤
    ⎢ 1/√3 −1/√2  1/√6 ⎥ ⎢ 0 1/2  −√3/2 ⎥ ⎢ 1/√2 −1/√2    0   ⎥ = P T_θ P^{−1}.
    ⎣ 1/√3   0   −2/√6 ⎦ ⎣ 0 √3/2  1/2  ⎦ ⎣ 1/√6  1/√6  −2/√6 ⎦

Reflection:

S = [f_1 f_2 f_3] ⎡ −1 0 0 ⎤ [f_1 f_2 f_3]*,
                  ⎢  0 1 0 ⎥
                  ⎣  0 0 1 ⎦

f_1 = (1/√2)(1, 0, 1),   f_2 = (0, 1, 0),   f_3 = ±(1/√2)(1, 0, −1).

Check: f_1 × f_2 = (1/√2) det[i j k; 1 0 1; 0 1 0] = (1/√2)(−1, 0, 1), so f_3 = −(1/√2)(1, 0, −1).

S = ⎡ 1/√2 0 −1/√2 ⎤ ⎡ −1 0 0 ⎤ ⎡  1/√2 0 1/√2 ⎤
    ⎢  0   1   0   ⎥ ⎢  0 1 0 ⎥ ⎢   0   1   0  ⎥ = O S_x O^{−1}.
    ⎣ 1/√2 0  1/√2 ⎦ ⎣  0 0 1 ⎦ ⎣ −1/√2 0 1/√2 ⎦

S ◦ T = O S_x O^{−1} P T_θ P^{−1}.
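The composite matrix can be evaluated numerically (an illustrative numpy sketch of the factorizations above):

```python
import numpy as np

c, s = 0.5, np.sqrt(3) / 2            # cos 60 deg, sin 60 deg
P = np.column_stack([np.array([1., 1., 1.]) / np.sqrt(3),
                     np.array([1., -1., 0.]) / np.sqrt(2),
                     np.array([1., 1., -2.]) / np.sqrt(6)])
T = P @ np.array([[1, 0, 0], [0, c, -s], [0, s, c]]) @ P.T

O = np.column_stack([np.array([1., 0., 1.]) / np.sqrt(2),
                     np.array([0., 1., 0.]),
                     np.array([-1., 0., 1.]) / np.sqrt(2)])
S = O @ np.diag([-1., 1., 1.]) @ O.T

assert np.allclose(T @ P[:, 0], P[:, 0])                   # rotation fixes (1,1,1)
assert np.allclose(S @ np.array([1., 0., 1.]), [-1., 0., -1.])
print(np.round(S @ T, 3))                                  # matrix of S o T
```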


Problem (F'02, #8). Let T be the rotation by an angle 60° counterclockwise about the origin in the plane perpendicular to (1, 1, 2) in R³.
i) Find the matrix representation of T in the standard basis. Find all eigenvalues and eigenspaces of T.
ii) What are the eigenvalues and eigenspaces of T if R³ is replaced by C³?

Proof. i)

T = [e_1 e_2 e_3] ⎡ 1  0     0   ⎤ [e_1 e_2 e_3]*,
                  ⎢ 0 1/2  −√3/2 ⎥
                  ⎣ 0 √3/2  1/2  ⎦

with e_1, e_2, e_3 an orthonormal basis; e_1 = e_2 × e_3, e_2 = e_3 × e_1, e_3 = e_1 × e_2:

e_1 = (1/√6)(1, 1, 2),   e_2 = (1/√2)(1, −1, 0),   e_3 = ±(1/√3)(1, 1, −1).

Check: e_1 × e_2 = (1/√6)(1/√2) det[i j k; 1 1 2; 1 −1 0] = (1/√3)(1, 1, −1), so e_3 = +(1/√3)(1, 1, −1).

We know T(e_1) = e_1, so λ = 1 is an eigenvalue with eigenspace span{e_1}. If z ∈ R³, write z = αe_1 + w with w ∈ span{e_2, e_3}. Then T(z) = αe_1 + T(w), where T(w) ∈ span{e_2, e_3} is w rotated by 60°. So if T(z) = λz, we must have λ(αe_1 + w) = αe_1 + T(w), i.e. λαe_1 = αe_1 and T(w) = λw, which is impossible unless w = 0, since a rotation by 60° in a plane moves every nonzero vector off its own line. There are no more real eigenvalues or eigenvectors.

Any 3-D rotation has 1 for an eigenvalue: any vector lying along the axis of therotation is unchanged by the rotation, and is therefore an eigenvector corresponding toeigenvalue 1. The line formed by these eigenvectors is the axis of rotation. For the spe-cial case of the null rotation, every vector in 3-D space is an eigenvector correspondingto eigenvalue 1.

Any 3-D reflection has two eigenvalues: -1 and 1. Any vector orthogonal to theplane of the mirror is reversed in direction by the reflection, without its size beingchanged; that is, the reflected vector is -1 times the original, and so the vector is aneigenvector corresponding to eigenvalue -1. The set formed by all these eigenvectors isthe line orthogonal to the plane of the mirror. On the other hand, any vector in theplane of the mirror is unchanged by the reflection: it is an eigenvector correspondingto eigenvalue 1. The set formed by all these eigenvectors is the plane of the mirror.Any vector that is neither in the plane of the mirror nor orthogonal to it is not aneigenvector of the reflection.

ii) T : C³ → C³. Relative to e_1, e_2, e_3 the matrix is

⎡ 1  0     0   ⎤
⎢ 0 1/2  −√3/2 ⎥,   χ(t) = (t − 1)(t² − t + 1),
⎣ 0 √3/2  1/2  ⎦

whose roots are t = 1, e^{iπ/3}, e^{−iπ/3}. These are then the three distinct eigenvalues, each with a one-dimensional eigenspace. Find e^{±iπ/3}-eigenvectors (α, β) and (γ, δ) ∈ C² for

⎡ 1/2  −√3/2 ⎤
⎣ √3/2  1/2  ⎦;

then, for T : C³ → C³: span{e_1} ↔ λ = 1, span{αe_2 + βe_3} ↔ e^{iπ/3}, span{γe_2 + δe_3} ↔ e^{−iπ/3}.
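The complex eigenvalues are visible numerically (an illustrative numpy sketch of the matrix of T relative to e_1, e_2, e_3):

```python
import numpy as np

c, s = 0.5, np.sqrt(3) / 2
B = np.array([[1, 0, 0],
              [0, c, -s],
              [0, s, c]])
lam = np.linalg.eigvals(B)
print(np.sort_complex(lam))   # [0.5-0.866j, 0.5+0.866j, 1.+0.j]
                              # i.e. e^{-i pi/3}, e^{i pi/3}, and 1, as claimed
```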

Problem (S'03, #8; W'02, #9; F'01, #10).
Let V be an n-dimensional complex vector space and T : V → V a linear operator such that χ_T has n distinct roots. Show T is diagonalizable.
Let V be an n-dimensional complex vector space, T : V → V a linear operator, and v_1, . . . , v_n nonzero eigenvectors of distinct eigenvalues in V. Prove that {v_1, . . . , v_n} is linearly independent.

Proof. Since F = C, any root of χ_T is also an eigenvalue, so we have λ_1, . . . , λ_n distinct eigenvalues. Induction on n = dim V.
n = 1 ⇒ trivially linearly independent.
n > 1: v_1, . . . , v_n are nonzero eigenvectors with λ_1, . . . , λ_n distinct eigenvalues. If

α_1 v_1 + ··· + α_n v_n = 0,   (6.1)

we want to show all α_i = 0. Applying T,

T(α_1 v_1 + ··· + α_n v_n) = T(0) = 0,
α_1 T v_1 + ··· + α_n T v_n = 0,
α_1 λ_1 v_1 + ··· + α_n λ_n v_n = 0.   (6.2)

Multiplying (6.1) by λ_n and subtracting off (6.2), we get

α_1(λ_n − λ_1)v_1 + ··· + α_{n−1}(λ_n − λ_{n−1})v_{n−1} = 0.

By the induction hypothesis, {v_1, . . . , v_{n−1}} are linearly independent, and λ_i ≠ λ_j for i ≠ j, so α_1 = ··· = α_{n−1} = 0. Then by (6.1), α_n v_n = 0 ⇒ α_n = 0, since v_n is nonzero.
Thus α_1 = ··· = α_n = 0, and {v_1, . . . , v_n} are linearly independent. Having shown {v_1, . . . , v_n} are linearly independent, they generate an n-dimensional subspace, which is then all of V. Hence {v_1, . . . , v_n} gives a basis of eigenvectors, so T is diagonalizable.


Problem (F'01, #9). Let A be a real symmetric matrix. Prove that there exists an invertible matrix P such that P^{−1}AP is diagonal.

Proof. Let V = Rⁿ with the standard inner product. Since A is real symmetric, A^t = A, so A is self-adjoint. Let T be the linear operator on V which is represented by A in the standard ordered basis S.

    V_S --[A]_S--> V_S
     ↑              ↑
     P              P
    V_B --[A]_B--> V_B

Since T is self-adjoint on V, there exists an orthonormal basis β = {v_1, . . . , v_n} of eigenvectors of T: Tv_i = λ_i v_i, i = 1, . . . , n, where the λ_i are the eigenvalues of T. Let D = [A]_B, and let P be the matrix with v_1, . . . , v_n as column vectors. Then

[A]_B = P^{−1}[A]_S P = P^{−1}[A]_S [v_1 ··· v_n] = P^{−1}[Av_1 ··· Av_n] = P^{−1}[λ_1 v_1 ··· λ_n v_n].

With this choice, P has orthonormal columns and real entries, so P is invertible and P^{−1} = P^t. Hence

[A]_B = P^t [λ_1 v_1 ··· λ_n v_n] = ⎡ v_1^t ⎤ [λ_1 v_1 ··· λ_n v_n] = diag(λ_1, . . . , λ_n),
                                    ⎢   ⋮   ⎥
                                    ⎣ v_n^t ⎦

since the v_i are orthonormal. Then D = P^t [A]_S P, i.e. D = P^{−1}AP.

Problem (S'03, #9). Let A ∈ M_3(R) satisfy det(A) = 1 and A^t A = AA^t = I_{R³}. Prove that the characteristic polynomial of A has 1 as a root (i.e. 1 is an eigenvalue of A).

Proof. χ_A(t) = t³ + ··· − 1 = (t − λ_1)(t − λ_2)(t − λ_3), λ_1, λ_2, λ_3 ∈ C, using the fundamental theorem of algebra. Since A is real, χ_A is a real polynomial of odd degree, so it has at least one real root; say λ_1 ∈ R.
Case 1: λ_2, λ_3 ∈ R. Case 2: λ_3 = λ̄_2.
det(A) = 1 = λ_1 λ_2 λ_3, since the determinant is the product of the eigenvalues. A^t A = AA^t = I_{R³} ⇒ A is orthogonal ⇒ A, viewed in M_3(C), is unitary, since A* = \overline{A}^T = A^T. So λ_1, λ_2, λ_3 are eigenvalues of A as a unitary transformation U = A, and if Ax_i = λ_i x_i, then

(x_i|x_i) = (U*Ux_i|x_i) = (Ux_i|Ux_i) = (λ_i x_i|λ_i x_i) = |λ_i|²(x_i|x_i)   (U unitary),

⇒ |λ_i|² = 1.
Case 1: λ_2, λ_3 ∈ R ⇒ λ_1, λ_2, λ_3 = ±1 and λ_1 λ_2 λ_3 = 1, so either one or all three of the eigenvalues equal +1.
Case 2: λ_1 λ_2 λ_3 = λ_1 λ_2 λ̄_2 = λ_1 |λ_2|² = λ_1 = 1.


Problem (S'03, #10). Let T : Rⁿ → Rⁿ be symmetric³ with tr(T²) = 0. Show that T = 0.

Proof. By the spectral theorem, T = ODO*, where O is orthogonal and D is diagonal with real entries. Then

T² = ODO*ODO* = OD²O*,   D = diag(λ_1, . . . , λ_n),

0 = tr(T²) = tr(OD²O*) = tr(O*OD²) = tr(D²) = λ_1² + ··· + λ_n²

⇒ λ_i = 0 for all i, since λ_i² ≥ 0. Hence D = 0 and T = 0.

³ Symmetric over R ⇒ self-adjoint ≡ hermitian.

Problem (W'02, #10). Let V be a finite dimensional complex inner product space and f : V → C a linear functional. Show that f(x) = (x|y) for some y.

Proof. Select an orthonormal basis e_1, . . . , e_n, and let y = \overline{f(e_1)} e_1 + ··· + \overline{f(e_n)} e_n. Then

(x|y) = (x | \overline{f(e_1)} e_1 + ··· + \overline{f(e_n)} e_n) = f(e_1)(x|e_1) + ··· + f(e_n)(x|e_n) = f((x|e_1)e_1 + ··· + (x|e_n)e_n) = f(x),

since f is linear. We can also show that y is unique. Suppose y′ is another vector in V for which f(x) = (x|y′) for every x ∈ V. Then (x|y) = (x|y′) for all x, so (y − y′|y − y′) = 0, and y = y′.
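An illustrative numpy sketch (the functional f below is an arbitrary sample, not from the problem):

```python
import numpy as np

# Sample functional f(x) = 2 x_1 + (3 - 1j) x_2 on C^2, with (x|y) = sum x_i conj(y_i).
coeffs = np.array([2.0 + 0j, 3.0 - 1.0j])
f = lambda x: coeffs @ x

y = coeffs.conj()                       # y_i = conj(f(e_i)) in the standard basis
x = np.array([1.0 + 2.0j, -1.0j])
assert np.isclose(f(x), np.sum(x * y.conj()))   # f(x) = (x|y)
```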

Problem (S'02, #11). Let V be a finite dimensional real inner product space and T, S : V → V two commuting (i.e. ST = TS) self-adjoint linear operators. Show that there exists an orthonormal basis that simultaneously diagonalizes S and T.

Proof. Since T is self-adjoint, there exists an ordered orthonormal basis {v_1, . . . , v_n} of eigenvectors corresponding to eigenvalues λ_1, . . . , λ_n of T, with v_i ∈ E_{λ_i}(T) and

V = E_{λ_1}(T) ⊕ ··· ⊕ E_{λ_n}(T).

v_i ∈ E_{λ_i}(T) ⇒ Tv_i = λ_i v_i. Since

TSv_i = STv_i = Sλ_i v_i = λ_i Sv_i ⇒ Sv_i ∈ E_{λ_i}(T),

each E_{λ_i}(T) is invariant under S, i.e. S : E_{λ_i}(T) → E_{λ_i}(T). Since S|_{E_{λ_i}(T)} is self-adjoint, there exists an ordered orthonormal basis β_i of eigenvectors of S for E_{λ_i}(T). Then β = ∪_{i=1}^n β_i is an orthonormal basis of simultaneous eigenvectors.


Problem (S'02, #10). Let V be a complex inner product space and W a finite dimensional subspace. Let v ∈ V. Prove that there exists a unique vector v_W ∈ W such that

||v − v_W|| ≤ ||v − w||   ∀w ∈ W,

with equality if and only if w = v_W.

Proof. v_W is the orthogonal projection of v onto W. Choose an orthonormal basis e_1, . . . , e_n for W and define

proj_W(x) = (x|e_1)e_1 + ··· + (x|e_n)e_n.

Claim: v_W = proj_W(v).
Show first that x − proj_W(x) ⊥ proj_W(x):

(x − (x|e_1)e_1 − ··· − (x|e_n)e_n | (x|e_1)e_1 + ··· + (x|e_n)e_n)
= (x | (x|e_1)e_1 + ··· + (x|e_n)e_n) − ((x|e_1)e_1 + ··· + (x|e_n)e_n | (x|e_1)e_1 + ··· + (x|e_n)e_n)
= ∑_{i=1}^n \overline{(x|e_i)}(x|e_i) − ∑_{i=1}^n ∑_{j=1}^n (x|e_i)\overline{(x|e_j)}(e_i|e_j)
= ∑_{i=1}^n |(x|e_i)|² − ∑_{i=1}^n |(x|e_i)|² = 0,   since (e_i|e_j) = δ_ij.

The same computation with (w|e_j) in place of (x|e_j) in the second slot shows that x − proj_W(x) ⊥ w for every w ∈ W; in particular, x − proj_W(x) ⊥ proj_W(x) − w. Hence

||x − w||² = ||x − proj_W(x) + proj_W(x) − w||² = ||x − proj_W(x)||² + ||proj_W(x) − w||² ≥ ||x − proj_W(x)||²,

so ||x − proj_W(x)|| ≤ ||x − w||, with equality exactly when ||proj_W(x) − w|| = 0 ⇔ w = proj_W(x).

Problem (F'03, #10).
a) Let t ∈ R be such that t is not an integer multiple of π. For the matrix

A = ⎡  cos(t) sin(t) ⎤
    ⎣ −sin(t) cos(t) ⎦

prove there does not exist a real valued matrix B such that BAB^{−1} is a diagonal matrix.

b) Do the same for the matrix A = ⎡ 1 λ ⎤, where λ ∈ R \ {0}.
                                  ⎣ 0 1 ⎦

Proof. a) det(A − λI) = det ⎡ cos(t) − λ    sin(t)    ⎤ = λ² − 2λ cos t + 1
                            ⎣  −sin(t)    cos(t) − λ  ⎦

⇒ λ_{1,2} = cos t ± √(cos²t − 1) = cos t ± i√(1 − cos²t). Since t is not an integer multiple of π, sin t ≠ 0, so λ_{1,2} = a ± ib with b ≠ 0, i.e. λ_{1,2} ∉ R. Hence the eigenvalues are not real, and there does not exist B ∈ M_2(R) such that BAB^{−1} is diagonal, since a real diagonal form would force real eigenvalues.

b) λ_{1,2} = 1. We find the eigenvectors:

⎡ 0 λ ⎤ ⎡ w_1 ⎤ = ⎡ 0 ⎤.
⎣ 0 0 ⎦ ⎣ w_2 ⎦   ⎣ 0 ⎦

Thus all eigenvectors are multiples of v = ⎡ 1 ⎤,
                                           ⎣ 0 ⎦

i.e. linearly dependent, so there does not exist a basis for R² consisting of eigenvectors of A. Therefore there does not exist B ∈ M_2(R) such that BAB^{−1} is diagonal.


Problem (F'02, #10) (Spectral Theorem for Normal Operators).
Let A ∈ Mat_{n×n}(C) satisfy A*A = AA*, i.e. A is normal. Show that there is an orthonormal basis of eigenvectors of A.
Rephrase: for L : V → V, V a complex finite dimensional inner product space.

Proof. Prove this by induction on dim V.
Since L is complex linear, we can use the Fundamental Theorem of Algebra to find λ ∈ C and x ∈ V \ {0} so that L(x) = λx ⇒ L*(x) = λ̄x (ker(L − λI_V) = ker(L* − λ̄I_V)).
Let x⊥ = {z ∈ V : (z|x) = 0}, the orthogonal complement of x. To get the induction, we need to show that x⊥ is invariant under L, i.e. L(x⊥) ⊂ x⊥. Let z ∈ x⊥ and show Lz ∈ x⊥:

(L(z)|x) = (z|L*(x)) = (z|λ̄x) = λ(z|x) = 0.

Check that L|_{x⊥} is normal. Similarly, x⊥ is invariant under L*, i.e. L* : x⊥ → x⊥, since

(L*(z)|x) = (z|L(x)) = (z|λx) = λ̄(z|x) = 0.

⇒ L*|_{x⊥} = (L|_{x⊥})*, since (L(z)|y) = (z|L*y) for z, y ∈ x⊥.


Problem (W'02, #11). Let V be a finite dimensional complex inner product space and L : V → V a linear transformation. Show that we can find an orthonormal basis so that [L] is upper triangular.

Proof. [L] upper triangular with respect to e_1, . . . , e_n means

[L(e_1) ··· L(e_n)] = [e_1 ··· e_n] ⎡ α_11 α_12 ··· α_1n ⎤
                                    ⎢  0   α_22 ··· α_2n ⎥
                                    ⎢  ⋮    ⋮    ⋱    ⋮  ⎥
                                    ⎣  0    0   ··· α_nn ⎦.

L(e_1) = α_11 e_1 ⇒ e_1 is an eigenvector with eigenvalue α_11.
dim V = 2: L : V → V complex ⇒ we can pick e_1 ∈ V so that Le_1 = α_11 e_1; pick e_2 ⊥ e_1. Then

[L(e_1) L(e_2)] = [e_1 e_2] ⎡ α_11 α_12 ⎤   (upper triangular),
                            ⎣  0   α_22 ⎦

that is, L(e_1) = α_11 e_1 and L(e_2) = α_12 e_1 + α_22 e_2.
Observe: upper triangularity means L(e_k) ∈ span{e_1, . . . , e_k}, so L(e_1), . . . , L(e_k) ∈ span{e_1, . . . , e_k}. So we have {0} = M_0 ⊂ M_1 ⊂ ··· ⊂ M_{n−1} ⊂ M_n = V with the properties dim M_k = k and L(M_k) ⊂ M_k = span{e_1, . . . , e_k}.
It is enough to show that any linear transformation on a k-dimensional space has an invariant (k − 1)-dimensional subspace M ⊂ V; using this result repeatedly, we can generate such an increasing sequence {0} = M_0 ⊂ M_1 ⊂ ··· ⊂ M_{n−1} ⊂ M_n = V of subspaces that are all invariant under L. Pick e_1, . . . , e_k an orthonormal basis such that span{e_1, . . . , e_k} = M_k: M_1 = span{e_1} with |e_1| = 1; e_2 ∈ M_2 ∩ M_1⊥, the orthogonal complement of e_1 in M_2; and so on. Then

L(e_1) ∈ M_1:  L(e_1) = α_11 e_1,
L(e_2) ∈ M_2:  L(e_2) = α_12 e_1 + α_22 e_2,
···
L(e_k) ∈ M_k:  L(e_k) = α_1k e_1 + α_2k e_2 + ··· + α_kk e_k,

which gives an upper triangular form for [L].
To construct M ⊂ V we select x ∈ V \ {0} such that L*(x) = λx (L* : V → V is complex and linear, so by the Fundamental Theorem of Algebra we can find x, λ so that L*(x) = λx). Then set M = x⊥, so dim M = k − 1; it remains to show M is invariant under L. Take z ⊥ x:

(L(z)|x) = (z|L*(x)) = (z|λx) = λ̄(z|x) = 0

⇒ L(z) ∈ x⊥. So L(M) ⊂ M and M has dimension k − 1.


Problem (F'02, #7; F'01, #7). Let T : V → W be a linear transformation of finite dimensional real vector spaces. Define the transpose of T and then prove both of the following:
1) im(T)° = ker(T^t);
2) rank(T) = rank(T^t) (i.e. dim im(T) = dim im(T^t)).

Proof. Transpose = dual. Let T : V → W be linear and let T^t = T′ : W′ → V′, where X′ = hom_R(X, R). T^t : W′ → V′ is linear, defined by

T′(g) = g ◦ T:   V →^T W,   V′ ←^{T^t} W′.

1) This is a proof of the Generalized Fredholm Alternative.

ker T′ = {g ∈ W′ : T′(g) = g ◦ T = 0},
im(T) = {T(x) : x ∈ V},
im(T)° = {g ∈ W′ : g(T(x)) = 0 for all x ∈ V},

and g(T(x)) = 0 for all x ∈ V ⇔ g ◦ T = 0 ⇔ g ∈ ker T′.

2) This is a proof of the Generalized Rank Theorem. rank(T) = dim(im(T)), rank(T^t) = dim(im(T^t)), with T^t : W′ → V′. By the dimension formula,

dim W′ = dim(ker(T^t)) + dim(im(T^t)) = dim(im(T)°) + dim(im(T^t)) = dim W′ − dim(im(T)) + dim(im(T^t)),

so dim(im(T)) = dim(im(T^t)).

Problem (W'02, #8). Let T : V → W and S : W → X be linear transformations of finite dimensional real vector spaces. Prove that

rank(T) + rank(S) − dim(W) ≤ rank(S ◦ T) ≤ min{rank(T), rank(S)}.

Proof. Note V →^T W →^S X, and rank(S ◦ T) = rank(T) − dim(im T ∩ ker S) (apply the dimension formula to S restricted to im T). For the lower bound,

rank(T) + rank(S) − dim(W) = rank(T) + rank(S) − dim(ker S) − rank(S)
= rank(T) − dim(ker S) = rank(S ◦ T) + [dim(im T ∩ ker S) − dim(ker S)] ≤ rank(S ◦ T),

since dim(im T ∩ ker S) − dim(ker S) ≤ 0.
For the upper bound, note that for a subspace M ⊂ V, dim(L(M)) ≤ dim M, a consequence of the dimension formula. Then

rank(S ◦ T) = dim((S ◦ T)(V)) = dim(S(T(V))) ≤ dim(T(V)) = rank(T).

Alternatively, to prove rank(S ◦ T) ≤ rank(S), note that since T(V) ⊂ W, we also have S(T(V)) ⊂ S(W), and so dim S(T(V)) ≤ dim S(W). Then

rank(S ◦ T) = dim((S ◦ T)(V)) = dim(S(T(V))) ≤ dim S(W) = rank(S).
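Both bounds can be observed numerically (an illustrative numpy sketch with random low-rank factors):

```python
import numpy as np

rng = np.random.default_rng(2)
T = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 6))   # rank 3, R^6 -> R^5
S = rng.standard_normal((4, 2)) @ rng.standard_normal((2, 5))   # rank 2, R^5 -> R^4

r = np.linalg.matrix_rank
rT, rS, rST = r(T), r(S), r(S @ T)
assert rT + rS - 5 <= rST <= min(rT, rS)   # here dim(W) = 5
print(rT, rS, rST)                         # generically: 3 2 2
```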


Problem (S'03, #7). Let V be a finite dimensional real vector space. Let W ⊂ V be a subspace and W° = {f : V → F linear | f = 0 on W}. Let W_1, W_2 ⊂ V be subspaces. Prove that

W_1° ∩ W_2° = (W_1 + W_2)°.

Proof. W° = {f ∈ V′ : f|_W = 0}. Write similar definitions for W_1°, W_2°, (W_1 + W_2)°, and W_1° ∩ W_2°, and make two observations:
1) W_1, W_2 ⊂ W_1 + W_2 ⇒ (W_1 + W_2)° ⊂ W_1°, W_2° ⇒ (W_1 + W_2)° ⊂ W_1° ∩ W_2°.
2) Suppose f ∈ W_1° ∩ W_2° ⇒ f|_{W_1} = 0, f|_{W_2} = 0 ⇒ f|_{W_1 + W_2} = 0 ⇒ f ∈ (W_1 + W_2)°. Thus W_1° ∩ W_2° ⊂ (W_1 + W_2)°.

Problem (S'02, #8). Let V be a finite dimensional real vector space. Let M ⊂ V be a subspace and M° = {f : V → F linear | f = 0 on M}. Prove that

dim(V) = dim(M) + dim(M°).

Proof. Let x_1, . . . , x_m be a basis for M; M = span{x_1, . . . , x_m}. Extend it to {x_1, . . . , x_n}, a basis for V. Construct a dual basis f_1, . . . , f_n for V′, f_i(x_j) = δ_ij.
We show that f_{m+1}, . . . , f_n is a basis for M°. First, show M° = span{f_{m+1}, . . . , f_n}. Let f ∈ M° ⊂ V′. Then

f = ∑_{i=1}^n c_i f_i = ∑_{i=1}^n f(x_i) f_i = ∑_{i=1}^m f(x_i) f_i + ∑_{i=m+1}^n f(x_i) f_i = ∑_{i=m+1}^n f(x_i) f_i ∈ span{f_{m+1}, . . . , f_n}.

Second, {f_{m+1}, . . . , f_n} are linearly independent, since {f_{m+1}, . . . , f_n} is a subset of a basis for V′. Thus, dim(M°) = n − m = dim(V) − dim(M).


Problem (F'03, #8). Prove the following three statements. You may choose an order of these statements and then use the earlier statements to prove the later statements.
a) If L : V → W is a linear transformation between two finite dimensional real vector spaces V, W, then

dim im L = dim V − dim ker(L).

b) If L : V → V is a linear transformation on a finite dimensional real inner product space and L* is its adjoint, then im(L*) is the orthogonal complement of ker(L) in V.
c) Let A be an n × n real matrix; then the maximal number of linearly independent rows (row rank) equals the maximal number of linearly independent columns (column rank).

Proof. We prove (a) and (b) separately; then (a), (b) ⇒ (c).
a) We know that dim ker(L) ≤ dim V and that ker(L) has a complement M of dimension k = dim V − dim ker(L). Since M ∩ ker(L) = {0}, the linear map L must be 1−1 when restricted to M. Thus L|_M : M → im(L) is an isomorphism, i.e. dim im(L) = dim M = k.
b) We want to show ker(L)⊥ = im(L*). Since M⊥⊥ = M, we can instead prove ker(L) = im(L*)⊥.

ker L = {x ∈ V : Lx = 0},          V →^L W,
im(L) = {Lx : x ∈ V},              W →^{L*} V,
im(L*) = {L*y : y ∈ W},
im(L*)⊥ = {x ∈ V : (x|L*y) = 0 for all y ∈ W} = {x ∈ V : (Lx|y) = 0 for all y ∈ W}.

If x ∈ ker L, then x ∈ im(L*)⊥. Conversely, if (Lx|y) = 0 ∀y ∈ W, then Lx = 0, so x ∈ ker L.
c) Using the dimension formula (a) and the Fredholm alternative (b), we have the rank theorem:

dim V = dim(ker(A)) + dim(im(A)) = dim(im(A*)⊥) + dim(im(A)) = dim V − dim(im(A*)) + dim(im(A)),

so rank(A) = rank(A*). Since A is real, A* = A^T, and therefore rank(A) = rank(A^T). rank(A) is the column rank of A, and rank(A^T) is the column rank of A^T, i.e. the row rank of A. Thus row rank(A) = column rank(A).