[Solution] Linear Algebra, 2nd ed. (Kwak, Hong), Birkhäuser


Selected Answers and Hints

Chapter 1

Problems

1.2 (1) Inconsistent. (2) (x1, x2, x3, x4) = (−1 − 4t, 6 − 2t, 2 − 3t, t) for any t ∈ R.

1.3 (1) (x, y, z) = (t, −t, t). (3) (w, x, y, z) = (2, 0, 1, 3).

1.4 (1) b1 + b2 − b3 = 0. (2) For any bi's.

1.7 a = −17/2, b = 13/2, c = 13/4, d = −4.

1.9 Consider the matrices: A = [2 4; 3 6], B = [2 1; 3 4], C = [8 7; 0 1].
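Problem 1.9 is presumably about the failure of cancellation: with these matrices AB = AC even though B ≠ C. A small NumPy sketch checking that reading (the variable names are ours):

import numpy as np

A = np.array([[2, 4], [3, 6]])
B = np.array([[2, 1], [3, 4]])
C = np.array([[8, 7], [0, 1]])

# AB and AC coincide although B != C, so AB = AC does not imply B = C.
print(A @ B)   # [[16 18] [24 27]]
print(A @ C)   # [[16 18] [24 27]]
print(np.array_equal(A @ B, A @ C), np.array_equal(B, C))   # True False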

1.10 Compare the diagonal entries of AA^T and A^T A.

1.12 (1) Infinitely many for a = 4, exactly one for a ≠ ±4, and none for a = −4. (2) Infinitely many for a = 2, none for a = −3, and exactly one otherwise.

1.14 (3) I = I^T = (AA^{-1})^T = (A^{-1})^T A^T means by definition (A^T)^{-1} = (A^{-1})^T.

1.17 Any permutation on n objects can be obtained by a finite number of interchanges of two objects.

1.21 Consider the case that some d_i is zero.

1.22 x = 2, y = 3, z = 1.

1.23 No, in general. Yes, if the system is consistent.

1.24 L = [1 0 0; −1 1 0; 0 −1 1], U = [1 −1 0; 0 1 −1; 0 0 1].

1.25 (1) Consider the (i, j)-entries of AB for i < j. (2) A can be written as a product of lower triangular elementary matrices.

1.26 L = [1 0 0; −1/2 1 0; 0 −2/3 1], D = [2 0 0; 0 3/2 0; 0 0 4/3], U = [1 −1/2 0; 0 1 −2/3; 0 0 1].
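The matrix being factored in 1.26 is not restated in this key, but multiplying the listed factors back together recovers it; a NumPy sketch (the name A below is ours) shows that LDU is the tridiagonal matrix [2 −1 0; −1 2 −1; 0 −1 2]:

import numpy as np

L = np.array([[1, 0, 0], [-1/2, 1, 0], [0, -2/3, 1]])
D = np.diag([2, 3/2, 4/3])
U = np.array([[1, -1/2, 0], [0, 1, -2/3], [0, 0, 1]])

# Multiplying the stated factors reproduces the matrix being factored.
A = L @ D @ U
print(np.round(A, 10))   # [[ 2. -1.  0.] [-1.  2. -1.] [ 0. -1.  2.]]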

1.27 There are four possibilities for P.

1.28 (1) I1 = 0.5, I2 = 6, I3 = 0.55. (2) I1 = 0, I2 = I3 = 1, I4 = I5 = 5.

1.30 x = k(0.35, 0.40, 0.25)^T for k > 0.

1.31 A = [0.0 0.1 0.8; 0.4 0.7 0.1; 0.5 0.0 0.1] with d = (90, 10, 30)^T.


Exercises 1.10

1.1 Row-echelon forms are A, B, D, F. Reduced row-echelon forms are A, B, F.

1.2 (1) [1 −3 2 1 2; 0 0 1 −1/4 3/4; 0 0 0 0 0; 0 0 0 0 0].

1.3 (1) [1 −3 0 3/2 1/2; 0 0 1 −1/4 3/4; 0 0 0 0 0; 0 0 0 0 0].

1.4 (1) x1 = 0, x2 = 1, x3 = −1, x4 = 2. (2) x = 17/2, y = 3, z = −4.

1.5 (1) and (2).

1.6 For any bi’s.

1.7 b1 − 2b2 + 5b3 ≠ 0.

1.8 (1) Take x to be the transpose of each row vector of A.

1.10 Try it with several kinds of diagonal matrices for B.

1.11 A^k = [1 2k 3k(k−1); 0 1 3k; 0 0 1].

1.13 See Problem 1.11.

1.15 (1) A^{-1}AB = B. (2) A^{-1}AC = C = A + I.

1.16 a = 0, c^{-1} = b ≠ 0.

1.17 A^{-1} = [1 −1 0 0; 0 1/2 −1/2 0; 0 0 1/3 −1/3; 0 0 0 1/4], B^{-1} = [13/8 −1/2 −1/8; −15/8 1/2 3/8; 5/4 0 −1/4].

1.18 A^{-1} = (1/15)[8 −19 2; 1 −23 4; 4 −2 1].

1.21 (1) x = A^{-1}b = [1/3 1/6 1/6; −4/3 −5/3 4/3; −1/3 −2/3 1/3](2, 5, 7)^T = (8/3, −5/3, −5/3)^T.

1.22 (1) A = [1 0; 4 1][2 0; 0 3][1 1/2; 0 1] = LDU. (2) L = A, D = U = I.

1.23 (1) A = [1 0 0; 2 1 0; 3 1 1][1 0 0; 0 2 0; 0 0 −1][1 2 3; 0 1 1; 0 0 1],
(2) [1 0; b/a 1][a 0; 0 d − b²/a][1 b/a; 0 1].

1.24 c = [2 −1 3]^T, x = [4 2 3]^T.

1.25 (2) A = [1 0 0; 1 1 0; 1 1 1][1 0 0; 0 3 0; 0 0 2][1 1 1; 0 1 4/3; 0 0 1].


1.26 (1) (A^k)^{-1} = (A^{-1})^k. (2) A^{n−1} = 0 if A ∈ M_{n×n}. (3) (I − A)(I + A + ··· + A^{k−1}) = I − A^k.

1.27 (1) A = [1 1; 0 0]. (2) A = A^{-1}A² = A^{-1}A = I.

1.28 Exactly seven of them are true.
(8) If AB has the (right) inverse C, then A^{-1} = BC.
(10) Consider a permutation matrix [0 1; 1 0].

Chapter 2

Problems

2.2 (2), (4).

2.3 (1), (2), (4).

2.5 See Problem 1.11. For any square matrix A, A = (A + A^T)/2 + (A − A^T)/2.

2.6 Note that any vector v in W is of the form a1x1 + a2x2 + ··· + amxm, which is a vector in U.

2.7 tr(AB − BA) = 0.

2.10 Linearly dependent.

2.12 Any basis for W must be a basis for V already, by Corollary 2.13.

2.13 (1) n − 1, (2) n(n+1)/2, (3) n(n−1)/2.

2.15 63a + 39b − 13c + 5d = 0.

2.17 If b1, ..., bn denote the column vectors of B, then AB = [Ab1 ··· Abn].

2.18 Consider the matrix A from Example 2.19.

2.19 (1) rank = 3, nullity = 1. (2) rank = 2, nullity = 2.

2.20 Ax = b has a solution if and only if b ∈ C(A).

2.21 A^{-1}(AB) = B implies rank B = rank A^{-1}(AB) ≤ rank(AB).

2.22 By (2) of Theorem 2.24 and Corollary 2.21, a matrix A of rank r must have an invertible submatrix C of rank r. By (1) of the same theorem, the rank of C must be the largest.

2.24 dim(V + W) = 4 and dim(V ∩ W) = 1.

2.25 A basis for V is {(1, 0, 0, 0), (0, −1, 1, 0), (0, −1, 0, 1)}, for W: {(−1, 1, 0, 0), (0, 0, 2, 1)}, and for V ∩ W: {(3, −3, 2, 1)}. Thus dim(V + W) = 4 means V + W = R⁴, and any basis for R⁴ works for V + W.

2.28 A = [1 0 0 0; 0 0 2 0; 1 1 1 1; 0 1 2 3], and (a, b, c, d)^T = A^{-1}(1, 2, 4, 4)^T = (1, 2, 1, 0)^T.

Exercises 2.11

2.1 Consider 0(1, 1).

2.5 (1), (4).


2.6 No.

2.7 (1) p(x) = −p1(x) + 3p2(x) − 2p3(x).

2.11 {(1, 1, 0), (1, 0, 1)}.

2.12 2.

2.13 Consider {e_j = {a_i}_{i=1}^∞} where a_i = 1 if i = j, and a_i = 0 otherwise.

2.14 (1) 0 = c1Ab1 + ··· + cpAbp = A(c1b1 + ··· + cpbp) implies c1b1 + ··· + cpbp = 0 since N(A) = 0, and this also implies ci = 0 for all i = 1, ..., p since the columns of B are linearly independent. (2) B has a right inverse. (3) and (4): Look at (1) and (2) above.

2.15 (1) {(−5, 3, 1)}. (2) 3.

2.16 5!, and dependent.

2.17 (1) R(A) = 〈(1, 2, 0, 3), (0, 0, 1, 2)〉, C(A) = 〈(5, 0, 1), (0, 5, 2)〉, N(A) = 〈(−2, 1, 0, 0), (−3, 0, −2, 1)〉.
(2) R(B) = 〈(1, 1, −2, 2), (0, 2, 1, −5), (0, 0, 0, 1)〉, C(B) = 〈(1, −2, 0), (0, 1, 1), (0, 0, 1)〉, N(B) = 〈(5, −1, 2, 0)〉.

2.18 rank = 2 when x = −3, rank = 3 when x ≠ −3.

2.20 Since uv^T = u[v1 ··· vn] = [v1u ··· vnu], each column vector of uv^T is of the form v_i u; that is, u spans the column space. Conversely, if A is of rank 1, then the column space is spanned by any one column of A, say the first column u of A, and the remaining columns are of the form v_i u, i = 2, ..., n. Take v = [1 v2 ··· vn]^T. Then one can easily see that A = uv^T.
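A quick numerical illustration of the rank-one factorization in 2.20; the particular u and v below are our own choices, not data from the exercise:

import numpy as np

u = np.array([1.0, 2.0, -1.0])
v = np.array([1.0, 3.0, 5.0, 2.0])
A = np.outer(u, v)                 # every column of A is a multiple of u

print(np.linalg.matrix_rank(A))    # 1
# Recover a factorization from the first column, as in the hint:
u0 = A[:, 0]                       # = v[0] * u
v0 = A[0, :] / A[0, 0]             # scaled copy of v with first entry 1
print(np.allclose(np.outer(u0, v0), A))   # True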

2.21 Four of them are true.

Chapter 3

Problems

3.1 To show W is a subspace, see Theorem 3.2. Let E_{ij} be the matrix with 1 at the (i, j)-th position and 0 elsewhere. Let F_k be the matrix with 1 at the (k, k)-th position, −1 at the (n, n)-th position and 0 elsewhere. Then the set {E_{ij}, F_k : 1 ≤ i ≠ j ≤ n, k = 1, ..., n − 1} is a basis for W. Thus dim W = n² − 1.

3.2 tr(AB) = Σ_{i=1}^m Σ_{k=1}^n a_{ik}b_{ki} = Σ_{k=1}^n Σ_{i=1}^m b_{ki}a_{ik} = tr(BA).

3.3 [0 1; 1 0], since it is simply the interchange of the coordinates x and y.

3.4 If yes, (2, 1) = T(−6, −2, 0) = −2T(3, 1, 0) = (−2, −2).

3.5 If a1v1 + a2v2 + ··· + akvk = 0, then 0 = T(a1v1 + a2v2 + ··· + akvk) = a1w1 + a2w2 + ··· + akwk implies ai = 0 for i = 1, ..., k.

3.6 (1) If T(x) = T(y), then S ∘ T(x) = S ∘ T(y) implies x = y. (4) They are invertible.


3.7 (1) T(x) = T(y) if and only if T(x − y) = 0, i.e., x − y ∈ Ker(T) = {0}.
(2) Let {v1, ..., vn} be a basis for V. If T is one-to-one, then the set {T(v1), ..., T(vn)} is linearly independent, as the proof of Theorem 3.7 shows. Corollary 2.13 shows it is a basis for V. Thus, for any y ∈ V, we can write it as y = Σ_{i=1}^n a_i T(v_i) = T(Σ_{i=1}^n a_i v_i). Set x = Σ_{i=1}^n a_i v_i ∈ V. Then clearly T(x) = y, so that T is onto. If T is onto, then for each i = 1, ..., n there exists x_i ∈ V such that T(x_i) = v_i. Then the set {x1, ..., xn} is linearly independent in V, since, if Σ_{i=1}^n a_i x_i = 0, then 0 = T(Σ_{i=1}^n a_i x_i) = Σ_{i=1}^n a_i T(x_i) = Σ_{i=1}^n a_i v_i implies a_i = 0 for all i = 1, ..., n. Thus it is a basis by Corollary 2.13 again. If T(x) = 0 for x = Σ_{i=1}^n a_i x_i ∈ V, then 0 = T(x) = Σ_{i=1}^n a_i T(x_i) = Σ_{i=1}^n a_i v_i implies a_i = 0 for all i = 1, ..., n, that is, x = 0. Thus Ker(T) = {0}.

3.8 Use the rotation R_{π/3} and the reflection [1 0; 0 −1] about the x-axis.

3.9 (1) (5, 2, 3). (2) (2, 3, 0).

3.12 (1) [T]_α = [2 −3 4; 5 −1 2; 4 7 0], [T]_β = [0 7 4; 2 −1 5; 4 −3 2].

3.13 [T]^β_α = [1 2 0 0; 1 0 −3 1; 0 2 3 4].

3.15 [S + T]_α = [3 0 0; 2 2 3; 2 3 3], [T ∘ S]_α = [3 2 0; 3 3 3; 6 5 3].

3.16 [S]^β_α = [1 −1 0; 1 1 0; 0 0 1], [T]_α = [2 3 0; 0 3 6; 0 0 4].

3.17 (2) [T]^β_α = [1 0; 1 1], [T^{-1}]^α_β = [1 0; −1 1].

3.18 [Id]^α_β = (1/2)[0 −1 5; 4 3 −1; 2 1 1], [Id]^β_α = [−2 −3 7; 3 5 −10; 1 1 −2].
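As a consistency check on 3.18, the two transition matrices must be inverses of each other; a NumPy sketch:

import numpy as np

P = 0.5 * np.array([[0, -1, 5], [4, 3, -1], [2, 1, 1]])   # [Id]^alpha_beta
Q = np.array([[-2, -3, 7], [3, 5, -10], [1, 1, -2]])       # [Id]^beta_alpha

# Transition matrices between two bases are mutually inverse.
print(np.allclose(P @ Q, np.eye(3)), np.allclose(Q @ P, np.eye(3)))   # True True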

3.19 [T]_α = [1 2 1; 0 −1 0; 1 0 4], [T]_β = [1 4 5; −1 −2 −6; 1 1 5].

3.20 Write B = Q^{-1}AQ with some invertible matrix Q. (1) det B = det(Q^{-1}AQ) = det Q^{-1} det A det Q = det A. (2) tr(B) = tr(Q^{-1}AQ) = tr(QQ^{-1}A) = tr(A) (see Problem 3.2). (3) Use Problem 2.21.

3.22 α* = {f1(x, y, z) = x − (1/2)y, f2(x, y, z) = (1/2)y, f3(x, y, z) = −x + z}.

Exercises 3.10

3.1 (2).

3.2 ax³ + bx² + ax + c.


3.5 (1) Consider the decomposition v = (v + T(v))/2 + (v − T(v))/2.

3.6 (1) {(x, (3/2)x, 2x) ∈ R³ : x ∈ R}.

3.7 (2) T^{-1}(r, s, t) = ((1/2)r, 2r − s, 7r − 3s − t).

3.8 (1) Since T ∘ S is one-to-one from V into V, T ∘ S is also onto and so T is onto. Moreover, if S(u) = S(v), then T ∘ S(u) = T ∘ S(v) implies u = v. Thus, S is one-to-one, and so onto. This implies T is one-to-one. In fact, if T(u) = T(v), then there exist x and y such that S(x) = u and S(y) = v. Thus T ∘ S(x) = T ∘ S(y) implies x = y, and so u = S(x) = S(y) = v.

3.9 Note that T cannot be one-to-one and S cannot be onto.

3.12 [5 4 −6 18; −4 −3 −2 0; 0 0 1 −12; 0 0 0 1].

3.13 (1) [−1/3 2/3; −5/3 1/3].

3.14 (1) [0 2; 3 −1], (2) [3 −4; 1 5].

3.15 (1) T (1, 0, 0) = (4, 0), T (1, 1, 0) = (1, 3), T (1, 1, 1) = (4, 3).

(2) T (x, y, z) = (4x − 2y + z, y + 2z).

3.16 (1) [1 1 1; 0 1 2; 0 0 1], (4) [0 1 0; 0 0 1; 0 0 0].

3.17 (1) P = [0 0 1; 0 1 −1; 1 −1 0], (2) Q = [1 1 1; 1 1 0; 1 0 0] = P^{-1}.

3.18 Use the trace.

3.19 (1) [−7 −33 −13; 4 19 8].

3.20 (2) [5 1; 1 2], (4) [−2/3 1/3 4/3; 2/3 −1/3 −1/3; 7/3 −2/3 −8/3].

3.25 [T]_α = [0 2 1; −1 4 1; 1 0 1] = ([T*]_{α*})^T.

3.26 (1) [1 1 0; −1 0 2]. (2) [T]^β_α = [−3 1 −1; 1 2 1].

3.27 N(T) = {0}, C(T) = 〈(2, 1, 0, 1), (1, 1, 1, 1), (4, 2, 2, 3)〉, [T]^β_α = [1 0 2; 1 0 0; −1 0 −1; 1 1 3].

3.29 p1(x) = 1 + x − (3/2)x², p2(x) = −1/6 + (1/2)x², p3(x) = −1/3 + x − (1/2)x².

3.30 Three of them are false.


Chapter 4

Problems

4.4 (1) −27, (2) 0, (3) (1 − x⁴)³.

4.8 (1) −14. (2) 0.

4.9 See Example 4.6, and use mathematical induction on n.

4.10 Find the cofactor expansion along the first row first, and then compute the cofactor expansion along the first column of each n × n submatrix (in the second step, use the proof of Cramer's rule).

4.15 If A = 0, then clearly adj A = 0. Otherwise, use A · adj A = (det A)I.

4.16 Use adj A · adj(adj A) = det(adj A) I.

4.17 (1) x1 = 4, x2 = 1, x3 = −2. (2) x = 10/23, y = 5/6, z = 5/2.

4.18 The solution of the system Id(x) = x is x_i = det C_i / det I = det A.

Exercises 4.5

4.1 k = 0 or 2.

4.2 It is not necessary to compute A² or A³.

4.3 −37.

4.4 (1) det A = (−1)^{n−1}(n − 1). (2) 0.

4.5 −2, 0, 1, 4.

4.6 Consider Σ_{σ∈S_n} a_{1σ(1)} ··· a_{nσ(n)}.

4.7 (1) 1, (2) 24.

4.8 (3) x1 = 1, x2 = −1, x3 = 2, x4 = −2.

4.9 (2) x = (3, 0, 4/11)^T.

4.10 k = 0 or ±1.

4.11 x = (−5, 1, 2, 3)^T.

4.12 x = 3, y = −1, z = 2.

4.13 (3) A11 = −2, A12 = 7, A13 = −8, A33 = 3.

4.16 A^{-1} = (1/72)[−3 5 9; 18 −6 18; 6 14 −18].

4.17 (1) adj(A) = [2 −7 −6; 1 −7 −3; −4 7 5], det(A) = −7, det(adj(A)) = 49, A^{-1} = −(1/7)adj(A). (2) adj(A) = [1 1 −1; −10 4 2; 7 −3 −1], det A = 2, det(adj(A)) = 4, A^{-1} = (1/2)adj(A).
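The matrices A of Exercise 4.17 are not reprinted here, but they can be recovered from the stated data via A = det(A)·adj(A)^{-1}, after which the identities A·adj(A) = det(A)I and det(adj A) = det(A)^{n−1} can be checked numerically. A NumPy sketch under that assumption:

import numpy as np

def check(adjA, detA):
    n = adjA.shape[0]
    A = detA * np.linalg.inv(adjA)          # from A * adj(A) = det(A) I
    assert np.allclose(A @ adjA, detA * np.eye(n))
    assert np.isclose(np.linalg.det(adjA), detA ** (n - 1))
    return A

A1 = check(np.array([[2, -7, -6], [1, -7, -3], [-4, 7, 5]]), -7.0)
A2 = check(np.array([[1, 1, -1], [-10, 4, 2], [7, -3, -1]]), 2.0)
print(np.round(A1, 6))
print(np.round(A2, 6))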

4.18 Note that (AB)^{-1} = B^{-1}A^{-1} and A^{-1} = adj(A)/det A. (The reader may also try to prove this equality for non-invertible matrices.)

4.19 If we set A = [1 3; 3 1], then the area is (1/2)|det A| = 4.


4.20 If we set A = [1 2; 1 2; 2 1], then the area is (1/2)√|det(A^T A)| = 3√2/2.

4.21 Use det A = Σ_{σ∈S_n} sgn(σ)a_{1σ(1)} ··· a_{nσ(n)}. In fact, suppose B is a k×k matrix, and a permutation σ ∈ S_n sends some number i ≤ k into {k + 1, ..., n}; then there is an ℓ ≥ k + 1 such that σ(ℓ) ≤ k. Thus a_{ℓσ(ℓ)} = 0, and so sgn(σ)a_{1σ(1)} ··· a_{ℓσ(ℓ)} ··· a_{nσ(n)} = 0. Therefore the only terms that do not vanish are those for σ: {1, ..., k} → {1, ..., k}. But then σ: {k+1, ..., n} → {k+1, ..., n}, i.e., σ = σ1σ2 with sgn(σ) = sgn(σ1)sgn(σ2). Hence,
det A = Σ_{σ1∈S_k} sgn(σ1)a_{1σ1(1)} ··· a_{kσ1(k)} · Σ_{σ2∈S_{n−k}} sgn(σ2)a_{k+1,σ2(k+1)} ··· a_{n,σ2(n)} = det B det D.

4.22 Multiply by [I 0; B I] on the right.

4.23 vol(T(B)) = |det(A)| vol(B), for the matrix representation A of T. Clearly, C = AB, so vol(P(C)) = |det(AB)| = |det A||det B| = |det A| vol(P(B)).

4.24 Exactly seven of them are true.
(4) (cI_n − A)^T = cI_n − A^T.
(10) See Exercise 2.20: det(uv^T) = v1 ··· vn det([u ··· u]) = 0.
(13) Consider [1 0 1; 1 1 0; 0 1 1].

Chapter 5

Problems

5.1 Note that 〈x, x〉 = ax1² + 2cx1x2 + bx2² = a(x1 + (c/a)x2)² + ((ab − c²)/a)x2² > 0 for all x = (x1, x2) ≠ 0. For x = (1, 0), we get a > 0. For x = (−c/a, 1), we get ab − c² > 0. The converse is easy from the equation above.

5.2 〈x, y〉² = 〈x, x〉〈y, y〉 if and only if ‖tx + y‖² = 〈x, x〉t² + 2〈x, y〉t + 〈y, y〉 = 0 has a repeated real root t0.

5.3 (4) Compute the square of both sides and use the Cauchy-Schwarz inequality.

5.5 〈f, g〉 = ∫_0^1 f(x)g(x) dx defines an inner product on C[0, 1]. Use the Cauchy-Schwarz inequality or Problem 5.3.

5.6 (1) (1/√6)(2, 1, −1), (2) (1/√61)(6, 4, −3).

5.7 (1): Orthogonal, (2) and (3): None, (4): Orthonormal.

5.10 {1, √3(2x − 1), √5(6x² − 6x + 1)}.

5.12 (1) is just the definition, and use (1) to prove (2).

5.14 Proj_W(p) = (4/3, 5/3, −1/3).

5.16 P^T = (P^T P)^T = P^T P = P, and P = P^T P = P².

5.17 For x ∈ R^m, x = 〈v1, x〉v1 + ··· + 〈vm, x〉vm = (v1v1^T)x + ··· + (vmvm^T)x.


5.18 The null space of the matrix [1 2 1 2; 0 −1 −1 1] is x = t[1 −1 1 0]^T + s[−4 1 0 1]^T for t, s ∈ R.

5.20 R(A)⊥ = N (A).

5.21 x = (1, −1, 0) + t(2, 1, −1) for any number t.

5.23 For A = [v1 v2], two columns are linearly independent.

5.24 P = (1/3)[2 1 1; 1 2 −1; 1 −1 2].

5.26 −1/6 + x.

5.27 (s0, v0, (1/2)g)^T = x = (A^T A)^{-1}A^T b = (−0.4, 0.35, 16.1)^T.

5.29 (1) r = 1/√2, s = 1/√6, a = −1/√3, b = 1/√3, c = 1/√3.

5.30 Extend {v1, ..., vm} to an orthonormal basis {v1, ..., vm, ..., vn}. Then ‖x‖² = Σ_{i=1}^m |〈x, vi〉|² + Σ_{j=m+1}^n |〈x, vj〉|².

5.31 (1) orthogonal. (2) not orthogonal.

5.32 Let A = QR = Q′R′ be two decompositions of A. Then Q^T Q′ = RR′^{-1}, which is an upper triangular and orthogonal matrix. Since (Q^T Q′)^T = (Q^T Q′)^{-1} = (RR′^{-1})^{-1} = R′R^{-1} is both upper and lower triangular, Q^T Q′ is diagonal and orthogonal, so that Q^T Q′ = D = diag[d_i] with d_i = ±1, i.e., Q′ = QD, or u_i′ = ±u_i for each i ≥ 1. Since c1 = b′11u1′ = b11u1 with b′11, b11 > 0, so that u1′ and u1 are unit vectors in the direction of c1, we have u1′ = u1. Assume that u_{j−1}′ = u_{j−1} and u_j′ = −u_j. Then u_j becomes a linear combination of u1, ..., u_{j−1}, since c_j = b_{1j}u1 + ··· + b_{jj}u_j = b′_{1j}u1′ + ··· + b′_{jj}u_j′, which is a contradiction. Thus, u_j′ = u_j, or d_j = 1, for all j ≥ 1, so that D = I. Thus we get Q = Q′, and then R = R′ follows.

Exercises 5.12

5.1 Inner products are (2), (4), (5).

5.2 For the last condition of the definition, note that 〈A, A〉 = tr(A^T A) = Σ_{i,j} a_{ij}² = 0 if and only if a_{ij} = 0 for all i, j.

5.4 (1) k = 3.

5.5 (3) ‖f‖ = ‖g‖ = √(1/2). The angle is 0 if n = m, π/2 if n ≠ m.

5.6 Use the Cauchy-Schwarz inequality and Problem 5.2 with x = (a1, ..., an) and y = (1, ..., 1) in (R^n, ·).

5.7 (1) −37/4, √19/3. (2) If 〈h, g〉 = h(a/3 + b/2 + c) = 0 with h ≠ 0 a constant and g(x) = ax² + bx + c, then (a, b, c) is on the plane a/3 + b/2 + c = 0 in R³.

5.10 (1) (3/2)v2, (2) (1/2)v2.

5.12 Orthogonal: (4). Nonorthogonal: (1), (2), (3).


5.16 Use induction on n. If n = 1, then A has only one column c1, and A^T A = det(A^T A) is simply the square of the length of c1. Assume the claim is true for n − 1. Let B (of size m×(n−1)) be the submatrix of A with the first column c1 removed, so that A = [c1 B], and let C (of size m×n) be [a B], where a = c1 − p and p = Proj_W(c1) = a2c2 + ··· + ancn ∈ W for some ai's, where W = C(B). Then a is clearly orthogonal to c2, ..., cn and to p. Claim that
det(A^T A) = det(C^T C) = ‖a‖² det(B^T B) = ‖a‖² vol(P(B))² = vol(P(A))².
In fact,
det(A^T A) = det[a^T a + p^T p, p^T B; B^T p, B^T B]
= det[a^T a, 0; B^T p, B^T B] + det[p^T p, p^T B; B^T p, B^T B]
= det[a^T a, 0; 0, B^T B] + det([p^T; B^T][p B]).
Since p is a linear combination of the columns of B, one can easily verify that det([p^T; B^T][p B]) = 0, so that det(A^T A) = det(C^T C) = ‖a‖² det(B^T B). This also shows that the volume is independent of the choice of c1 at the beginning.

5.17 Let A = [1 0 0; 0 1 0; 0 2 1; 0 1 2]. Then the volume of the tetrahedron is √det(A^T A)/3 = 1.

5.19 Ax = b has a solution for every b ∈ R^m if k = m. It has infinitely many solutions if nullity = n − k = n − m > 0.

5.20 The line is a subspace with an orthonormal basis (1/√2)(1, 1), or is the column space of A = (1/√2)[1; 1].

5.21 Find a least squares solution of [1 0; 1 1; 1 2; 1 3][a; b] = [1; 3; 4; 4] for (a, b) in y = a + bx. Then y = x + 3/2.
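A direct NumPy check of Exercise 5.21 using the system written above:

import numpy as np

A = np.array([[1, 0], [1, 1], [1, 2], [1, 3]], dtype=float)
b = np.array([1, 3, 4, 4], dtype=float)

# Least squares solution of A [a, b]^T = b, i.e. the best fit y = a + b x.
coef, *_ = np.linalg.lstsq(A, b, rcond=None)
print(coef)   # [1.5 1. ]  ->  y = 3/2 + x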

5.22 Follow Exercise 5.21 with A = [1 −1 1 −1; 1 0 0 0; 1 1 1 1; 1 2 4 8; 1 3 9 27]. Then y = 2x³ − 4x² + 3x − 5.


5.25 (1) Let h(x) = (1/2)(f(x) + f(−x)) and g(x) = (1/2)(f(x) − f(−x)). Then f = h + g.
(2) For f ∈ U and g ∈ V, 〈f, g〉 = ∫_{−1}^1 f(x)g(x) dx = −∫_1^{−1} f(−t)g(−t) dt = −∫_{−1}^1 f(t)g(t) dt = −〈f, g〉, by the change of variable x = −t.
(3) Expand the length in the inner product.

5.26 A^T A = [1, sin θ cos θ; sin θ cos θ, cos²θ] = [1 0; sin θ cos θ 1][1 0; 0 cos⁴θ][1 sin θ cos θ; 0 1].
A = QR = [sin θ cos θ; cos θ −sin θ][1 sin θ cos θ; 0 cos²θ].

5.27 A^T A = I and det A^T = det A imply det A = ±1. The matrix A = [cos θ sin θ; sin θ −cos θ] is orthogonal with det A = −1.

5.28 Six of them are true.
(1) Consider (1, 0) and (−1, 0).
(2) Consider two subspaces U and W of R³ spanned by e1 and e2, respectively.
(3) The set of column vectors of a permutation matrix P is just {e1, ..., en}, which is a set of orthonormal vectors.

Chapter 6

Problems

6.3 Zero is an eigenvalue of AB if and only if AB is singular, if and only if BA is singular, if and only if zero is an eigenvalue of BA. Let λ be a nonzero eigenvalue of AB with (AB)x = λx for a nonzero vector x. Then the vector Bx is not zero, since λ ≠ 0, but

(BA)(Bx) = B(λx) = λ(Bx).

This means that Bx is an eigenvector of BA belonging to the eigenvalue λ, and λ is an eigenvalue of BA. Similarly, any nonzero eigenvalue of BA is also an eigenvalue of AB.

6.4 Consider the matrices [1 1; 0 1] and [1 0; 0 1].

6.5 Check with A = [1 1; 0 1].

6.6 If A is invertible, then AB = A(BA)A^{-1}.

6.7 (1) Use det A = λ1 ··· λn. (2) Ax = λx if and only if x = λA^{-1}x.

6.8 (1) If Q = [x1 x2 x3] diagonalizes A, then the diagonal matrix must be λI and AQ = λQI. Expand this equation and compare the corresponding columns of the equation to find a contradiction on the invertibility of Q.


6.9 Q = [2 3; 1 2], D = [2 0; 0 3]. Then A = QDQ^{-1} = [−1 6; −2 6].
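A NumPy check of the diagonalization in 6.9:

import numpy as np

Q = np.array([[2, 3], [1, 2]], dtype=float)
D = np.diag([2.0, 3.0])
A = Q @ D @ np.linalg.inv(Q)

print(np.round(A))                      # [[-1.  6.] [-2.  6.]]
print(np.linalg.eigvals(A))             # eigenvalues 2 and 3 (in some order)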

6.10 (1) The eigenvalues of A are 1, 1, −3, and their associated eigenvectors are (1, 1, 0), (−1, 0, 1) and (1, 3, 1), respectively. (2) If f(x) = x^10 + x^7 + 5x, then f(1), f(1) and f(−3) are the eigenvalues of A^10 + A^7 + 5A.

6.11 Note that (a_{n+1}, a_n, a_{n−1})^T = [2 1 −2; 1 0 0; 0 1 0](a_n, a_{n−1}, a_{n−2})^T. The eigenvalues are 1, 2, −1 and the eigenvectors are (1, 1, 1), (4, 2, 1) and (1, −1, 1), respectively. It turns out that a_n = 2 − (2/3)(−1)^n − 2^n/3.

6.12 Write the characteristic polynomial as
f(x) = x^k − a1x^{k−1} − ··· − a_{k−1}x − a_k = (x − λ)^m g(x),
where g(λ) ≠ 0. Then clearly f(λ) = f′(λ) = ··· = f^{(m−1)}(λ) = 0. For n ≥ k, let f1(x) = x^{n−k}f(x) = x^n − a1x^{n−1} − ··· − a_k x^{n−k}. Then one can easily show that
f2(λ) = λf1′(λ) = nλ^n − a1(n − 1)λ^{n−1} − ··· − a_k(n − k)λ^{n−k} = 0,
since f1′(λ) = (n − k)λ^{n−k−1}f(λ) + λ^{n−k}f′(λ) = 0. Inductively,
f_m(λ) = λf′_{m−1}(λ) = λ²f″_{m−2}(λ) + λf′_{m−2}(λ) = 0
= n^{m−1}λ^n − a1(n − 1)^{m−2}λ^{n−1} − ··· − a_k(n − k)^{m−k−1}λ^{n−k}.
Thus, x_n = λ^n, nλ^n, ..., n^{m−1}λ^n are m solutions. It is not hard to show that they are linearly independent.

6.15 The eigenvalues are 0, 0.4, and 1, and their eigenvectors are (1, 4, −5), (1, 0, −1) and (3, 2, 5), respectively.

6.16 For (1), use (A + B)^k = Σ_{i=0}^k (k choose i) A^i B^{k−i} if AB = BA. For (2) and (3), use the definition of e^A. Use (1) for (4).

6.17 Note that e^{(A^T)} = (e^A)^T by definition (thus, if A is symmetric, so is e^A), and use (4).

6.18 Write A = 2I + N with N = [0 3 0; 0 0 3; 0 0 0]. Then N³ = 0.
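Because N³ = 0 and 2I commutes with N, e^A = e²(I + N + N²/2); the sketch below compares this closed form with a truncated power series for e^A (the 30-term cutoff is our choice):

import numpy as np
from math import factorial, exp

N = np.array([[0, 3, 0], [0, 0, 3], [0, 0, 0]], dtype=float)
A = 2 * np.eye(3) + N

closed_form = exp(2) * (np.eye(3) + N + N @ N / 2)   # series in N stops at N^2

# Compare with a truncated power series for e^A.
series = sum(np.linalg.matrix_power(A, k) / factorial(k) for k in range(30))
print(np.allclose(closed_form, series))   # True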

6.19 y1 = c1e^{2x} − (1/4)c2e^{−3x}; y2 = c1e^{2x} + c2e^{−3x}.

6.20 y1 = −c2e^{2x} + c3e^{3x}, y2 = c1e^x + 2c2e^{2x} − c3e^{3x}, y3 = 2c2e^{2x} − c3e^{3x};
and y1 = e^{2x} − 2e^{3x}, y2 = e^x − 2e^{2x} + 2e^{3x}, y3 = −2e^{2x} + 2e^{3x}.

6.21 (1) [e^{−t}; e^{−t}], (2) [3e^t − 2; 2 − e^{−t}; e^{−t}].

6.22 With the basis α = {1, x, x²}, [T]_α = A = [1 0 0; 0 2 0; 0 0 3].


6.23 With the standard basis for M_{2×2}(R), α = {E11 = [1 0; 0 0], E12 = [0 1; 0 0], E21 = [0 0; 1 0], E22 = [0 0; 0 1]}, [T]_α = A = [1 1 0 1; 1 1 1 0; 0 1 1 1; 1 0 1 1]. The eigenvalues are 3, 1, 1, −1, and their associated eigenvectors are (1, 1, 1, 1), (−1, 0, 1, 0), (0, −1, 0, 1), and (−1, 1, −1, 1), respectively.
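The matrix and eigenvectors listed for 6.23 can be verified directly:

import numpy as np

A = np.array([[1, 1, 0, 1],
              [1, 1, 1, 0],
              [0, 1, 1, 1],
              [1, 0, 1, 1]], dtype=float)

pairs = [(3, [1, 1, 1, 1]), (1, [-1, 0, 1, 0]),
         (1, [0, -1, 0, 1]), (-1, [-1, 1, -1, 1])]
for lam, v in pairs:
    v = np.array(v, dtype=float)
    print(lam, np.allclose(A @ v, lam * v))   # True for each pair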

6.24 With respect to the standard basis α, [T]_α = [4 0 1; 2 3 2; 1 0 4] with eigenvalues 3, 3, 5 and eigenvectors (0, 1, 0), (−1, 0, 1) and (1, 2, 1), respectively.

Exercises 6.6

6.1 (4) 0 of multiplicity 3, 4 of multiplicity 1. Eigenvectors are e_i − e_{i+1} for 1 ≤ i ≤ 3 and Σ_{i=1}^4 e_i.

6.2 f(λ) = (λ + 2)(λ² − 8λ + 15), λ1 = −2, λ2 = 3, λ3 = 5, x1 = (−35, 12, 19), x2 = (0, 3, 1), x3 = (0, 1, 1).

6.4 {v} is a basis for N(A), and {u, w} is a basis for C(A).

6.5 Note that the order in the product doesn't matter, and any eigenvector of A is killed by B. Since the eigenvalues are all different, the eigenvectors belonging to 1, 2, 3 form a basis. Thus B = 0, that is, B has only the zero eigenvalue, so all vectors are eigenvectors of B.

6.7 A = QDQ^{-1} = (1/2)[1 −2 −1; 1 4 −1; 1 2 7].

6.8 Note that R^n = W ⊕ Ker(P) and P(w) = w for w ∈ W and P(v) = 0 for v ∈ Ker(P). Thus, the eigenspace belonging to λ = 1 is W, and that belonging to λ = 0 is Ker(P).

6.9 For any w ∈ R^n, Aw = u(v^T w) = (v · w)u. Thus Au = (v · u)u, so u is an eigenvector belonging to the eigenvalue λ = v · u. The other eigenvectors are those in v⊥ with eigenvalue zero. Thus, A has either two eigenspaces, the 1-dimensional E(λ) spanned by u and E(0) = v⊥, if v · u ≠ 0, or just one eigenspace E(0) = R^n if v · u = 0.

6.10 λv = Av = A²v = λ²v implies λ(λ − 1) = 0.

6.12 Use tr(A) = λ1 + ··· + λn = a11 + ··· + ann.

6.13 (1) If k = 1, clearly x1 ∈ U. Suppose that the claim is true for k, and x1 + ··· + xk + x_{k+1} = u ∈ U with x_i ∈ E_{λi}(A). Then, from
A(x1 + ··· + x_{k+1}) = λ1x1 + ··· + λkxk + λ_{k+1}x_{k+1} = Au = u ∈ U,
λ_{k+1}x1 + ··· + λ_{k+1}xk + λ_{k+1}x_{k+1} = λ_{k+1}u ∈ U,


we get (λ1 − λ_{k+1})x1 + ··· + (λk − λ_{k+1})xk = u − λ_{k+1}u ∈ U. Thus by induction all the x_i's are in U.
(2) Write R^n = E_{λ1}(A) ⊕ ··· ⊕ E_{λk}(A). Then the subspaces U ∩ E_{λi}(A), for i = 1, ..., k, span U, since any basis vector u in U is of the form u = x1 + ··· + xk with x_i ∈ U ∩ E_{λi}(A) by (1). Thus U = (U ∩ E_{λ1}(A)) ⊕ ··· ⊕ (U ∩ E_{λk}(A)).

6.14 A = QD1Q^{-1} and B = QD2Q^{-1} imply AB = BA since D1D2 = D2D1. Conversely, suppose AB = BA. If Ax = λ_i x with x ∈ E_{λi}(A), then ABx = BAx = λ_i Bx implies Bx ∈ E_{λi}(A). That is, each eigenspace E_{λi}(A) is invariant under B, so that the restriction of B to E_{λi}(A) is diagonalized.

6.16 With respect to the basis α = {1, x, x²}, [T]_α = [1 0 1; 0 1 1; 1 1 0]. The eigenvalues are 2, 1, −1 and the eigenvectors are (1, 1, 1), (−1, 1, 0) and (1, 1, −2), respectively.

6.19 Eigenvalues are 1, 1, 2 and eigenvectors are (1, 0, 0), (0, 1, 2) and (1, 2, 3). A^10 x = (1025, 2050, 3076).

6.20 Clearly, a0 = 1, a1 = 2 and a2 = 3. Inductively, one can easily see that the sequence {a_n : n ≥ 1} is a Fibonacci sequence: a_{n+1} = a_n + a_{n−1}. In fact, in {1, 2, ..., n}, the number of subsets with the required property is the number of such subsets of the set without n, plus the number of such subsets of the set without n and n − 1 (to each member of this latter class, add n).

6.21 One can easily check that det A_n = det A_{n−1} − det A_{n−2}. Set a_n = det A_n, so that a_n = a_{n−1} − a_{n−2}. Together with a_{n−1} = a_{n−1}, we obtain a matrix equation:
x_n = [a_n; a_{n−1}] = [1 −1; 1 0][a_{n−1}; a_{n−2}] = Ax_{n−1} = A^{n−1}x_1,
with a1 = 1 and a2 = 0. Using the eigenvalues might make the computation a mess. Instead, one can use the Cayley-Hamilton Theorem 8.13: since the characteristic polynomial of A is λ² − λ + 1, A² − A + I = 0 holds. Thus, A³ = A² − A = −I, so A⁶ = I. One can now easily compute a_n modulo 6.

6.22 The characteristic equation is λ² − xλ − 0.18 = 0. Since λ = 1 is a solution, x = 0.82. The eigenvalues are now 1, −0.18 and the eigenvectors are (−0.3, −1) and (1, −0.6).

6.23 (1) e^A = [e, e − 1; 0, 1].

6.24 The initial status in 1985 is x0 = (x0, y0, z0) = (0.4, 0.2, 0.4), where x, y, z represent the percentages of large, medium, and small car owners. In 1995, the status is x1 = (x1, y1, z1)^T = [0.7 0.1 0; 0.3 0.7 0.1; 0 0.2 0.9](0.4, 0.2, 0.4)^T = Ax0. Thus, in 2025, the status is x4 = A⁴x0. The eigenvalues are 0.5, 0.8, and 1, whose eigenvectors are (−0.41, 0.82, −0.41), (0.47, 0.47, −0.94), and (−0.17, −0.52, −1.04), respectively.


6.27 (1) y1(x) = −2e^{2(1−x)} + 4e^{2(x−1)}, y2(x) = −e^{2(1−x)} + 2e^{2(x−1)}, y3(x) = 2e^{2(1−x)} − 2e^{2(x−1)}.
(2) y1(x) = e^{2x}(cos x − sin x), y2(x) = 2e^{2x} sin x.

6.28 y1 = 0, y2 = 2e^{2t}, y3 = e^{2t}.

6.29 (1) f(λ) = λ³ − 10λ² + 28λ − 24, eigenvalues are 6, 2, 2, and eigenvectors are (1, 2, 1), (−1, 1, 0) and (−1, 0, 1).
(2) f(λ) = (λ − 1)(λ² − 6λ + 9), eigenvalues are 1, 3, 3, and eigenvectors are (2, −1, 1), (1, 1, 0) and (1, 0, 1).

6.30 Three of them are true:
(1) For A = [1 0; 0 1], B = Q^{-1}AQ would mean that [0 1; 1 0] = [1 0; 0 1].
(2) Consider A = [1 1; 0 2] and B = [1 1/2; 0 1/2].
(3) Consider [1 1; 0 1]. (4) Consider [1 0; 0 0]. (5) Consider [0 1; 1 0].
(6) If A were similar to I + A, then they would have the same eigenvalues, so that tr(A) = tr(I + A) = n + tr(A), which cannot be equal.
(7) tr(A + B) = tr(A) + tr(B).

Chapter 7

Problems

7.1 (1) u · v = u^T v = Σ_i u_i v_i = Σ_i v_i u_i = v · u.
(3) (ku) · v = Σ_i k u_i v_i = k Σ_i u_i v_i = k(u · v).
(4) u · u = Σ_i |u_i|² ≥ 0, and u · u = 0 if and only if u_i = 0 for all i.

7.2 (1) If x = 0, clear. Suppose x ≠ 0 ≠ y. For any scalar k,
0 ≤ 〈x − ky, x − ky〉 = 〈x, x〉 − k〈x, y〉 − k〈y, x〉 + kk〈y, y〉. Let k = 〈y, x〉/〈y, y〉 to obtain 〈x, x〉〈y, y〉 − |〈x, y〉|² ≥ 0. Note that equality holds if and only if x = ky for some scalar k.
(2) Expand ‖x + y‖² = 〈x + y, x + y〉 and use (1).

7.3 Suppose that x and y are linearly independent, and consider the linear dependence a(x + y) + b(x − y) = 0 of x + y and x − y. Then 0 = (a + b)x + (a − b)y. Since x and y are linearly independent, we have a + b = 0 and a − b = 0, which are possible only for a = 0 = b. Thus x + y and x − y are linearly independent. Conversely, if x + y and x − y are linearly independent, then the linear dependence ax + by = 0 of x and y gives (1/2)(a + b)(x + y) + (1/2)(a − b)(x − y) = 0. Thus we get a = 0 = b, so x and y are linearly independent.

7.4 (1) Eigenvalues are 0, 0, 2 and their eigenvectors are (1, 0, −i) and (0, 1, 0), respectively. (2) Eigenvalues are 3, (1 + √5)/2, (1 − √5)/2, and their eigenvectors are (1, −i, (1 − i)/2), (((√5 − 3)/2)i, 1, ((1 − √5)/2)(1 + i)), and (−((√5 + 3)/2)i, 1, ((1 + √5)/2)(1 + i)), respectively.

7.5 Refer to the real case.


7.6 (AB)^H = conj(AB)^T = conj(B)^T conj(A)^T = B^H A^H.

7.7 A^H (A^{-1})^H = (A^{-1}A)^H = I.

7.8 The determinant is just the product of the eigenvalues, and a Hermitian matrix has only real eigenvalues.

7.9 See Exercise 6.9.

7.10 To prove (3) directly, show that λ(x · y) = µ(x · y) by using the fact that A^H x = −µx when Ax = µx.

7.11 A^H = B^H + (iC)^H = B^T − iC^T = −B − iC = −A.

7.12 ±AB = (AB)^H = B^H A^H = (±B)(±A) = BA, + if they are Hermitian, − if they are skew-Hermitian.

7.13 Note that det U^H = conj(det U), and 1 = det I = det(U^H U) = |det U|².

7.16 Since A^{-1} = A^H, (AB)^H AB = I.

7.17 Hermitian means the diagonal entries are real, and diagonality implies off-diagonal entries are zero. Unitary means the diagonal entries must be ±1.

7.18 (1) If U = [(1/6)i√3 + 1/2, −(1/6)i√3 + 1/2; −(1/3)√3, (1/3)√3], then U^{-1}AU = [1/2 − (1/2)i√3, 0; 0, 1/2 + (1/2)i√3].
(2) If U = [0, 0, 1; −6/25 − (8/25)i, 2/5 + (1/5)i, 6/25 + (8/25)i; 0, −1, 0], then U^{-1}AU = [−1 0 0; 0 2i 1; 0 0 2i].

7.20 (4) Normal with eigenvalues 1 ± i, so that it is unitarily diagonalizable but not orthogonally.

7.22 This is a normal matrix. From a direct computation, one can find the eigenvalues, 1 − i, 1 − i and 1 + 2i, and the corresponding eigenvectors (−1, 0, 1), (−1, 1, 0) and (1, 1, 1), respectively, which are not orthogonal. But by an orthonormalization, one can obtain a unitary transition matrix, so that A is unitarily diagonalizable.

7.23 A^H A = (H1 − H2)(H1 + H2) = (H1 + H2)(H1 − H2) = AA^H if and only if H1H2 − H2H1 = 0.

7.24 In each subproblem, one direction is already proven in the theorems. For the other direction, suppose that U^H AU = D for a unitary matrix U and a diagonal matrix D.
(1) and (2). If all the eigenvalues of A are real (or purely imaginary), then the diagonal entries of D are all real (or purely imaginary). Thus D^H = ±D, so that A is Hermitian (or skew-Hermitian).
(3) The diagonal entries of D satisfy |λ| = 1. Thus, D^H = D^{-1}, and A^H = UD^{-1}U^{-1} = A^{-1}.

7.25 Q = (1/√6)[√3 −√2 −1; 0 √2 −2; √3 √2 1].

7.26 (1) A = (1/2)[1 −1; −1 1] + (3/2)[1 1; 1 1],
(2) B = ((3 + 2√6)/6)[1, (1 + √6)(2 + i)/5; (1 + √6)(2 − i)/5, (7 + 2√6)/5] + ((3 − 2√6)/6)[1, (1 − √6)(2 + i)/5; (1 − √6)(2 − i)/5, (7 − 2√6)/5].
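For 7.26 (1), summing the two stated terms recovers the matrix being decomposed, which therefore appears to be [2 1; 1 2]; the projections can also be checked to be idempotent, mutually orthogonal, and summing to I:

import numpy as np

P1 = 0.5 * np.array([[1, -1], [-1, 1]])   # projection for eigenvalue 1
P2 = 0.5 * np.array([[1,  1], [ 1, 1]])   # projection for eigenvalue 3
A = 1 * P1 + 3 * P2

print(A)                                   # [[2. 1.] [1. 2.]]
print(np.allclose(P1 @ P1, P1), np.allclose(P2 @ P2, P2), np.allclose(P1 @ P2, 0))
print(np.allclose(P1 + P2, np.eye(2)))     # True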


7.27 Let A = λ1P1 + ··· + λkPk be the spectral decomposition of A. Then A^H = conj(λ1)P1 + ··· + conj(λk)Pk = λ1P1 + ··· + λkPk = A.

7.28 Take the Lagrange polynomials f_i such that f_i(λ_j) = δ_{ij} associated with the λ_i's, as in Section 2.10.2. Then, by Corollary 7.13,
f_i(A) = f_i(λ1)P1 + ··· + f_i(λk)Pk = δ_{i1}P1 + ··· + δ_{ik}Pk = P_i.

7.29 (1) A = (−1, 2, 2)^T = (−1/3, 2/3, 2/3)^T [3][1],
A⁺ = ((−1, 2, 2)^T)⁺ = [1][1/3][−1/3 2/3 2/3] = [−1/9 2/9 2/9].
(2) B = [−1/√2, 1/√2; 1/√2, 1/√2][√3 0; 0 1][1/√6, −2/√6, 1/√6; −1/√2, 0, 1/√2].
(3) C⁺ = [1/2 0; 1/2 0].
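The pseudoinverse in 7.29 (1) can be confirmed with NumPy:

import numpy as np

A = np.array([[-1.0], [2.0], [2.0]])      # the column vector (-1, 2, 2)^T
print(np.linalg.pinv(A))                   # [[-1/9  2/9  2/9]]

# Same result from the rank-1 decomposition written in the answer: A = u * 3 * 1.
u = A / 3.0
print((1.0 / 3.0) * u.T)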

7.31 By elementary row operations, and then column operations in the blocks,
det B = det[X −Y; Y X] = det[X + iY, −Y + iX; Y, X] = det[X + iY, 0; Y, X − iY] = det A · conj(det A) = |det A|².

Exercises 7.10

7.1 (1) √6, (2) 4.

7.4 (1) λ = i, x = t(1, −2 − i); λ = −i, x = t(1, −2 + i).
(2) λ = 1, x = t(i, 1); λ = −1, x = t(−i, 1).
(3) Eigenvalues are 2, 2 + i, 2 − i, and eigenvectors are (0, −1, 1), (1, −(2 + i)/5, 1), (1, −(2 − i)/5, 1).
(4) Eigenvalues are 0, −1, 2, and eigenvectors are (1, 0, −1), (1, −i, 1), (1, 2i, 1).

7.6 A + cI is invertible if det(A + cI) ≠ 0. However, for any matrix A, the equation det(A + cI) = 0, as a complex polynomial equation in c, always has a (complex) solution. For the real matrix [cos θ −sin θ; sin θ cos θ], A + rI is invertible for every real number r, since A has no real eigenvalues.

7.7 (1) (1/√3)[1, 1 − i; 1 + i, −1], (2) (1/2)[1, i, 1 − i; √2 i, √2, 0; 1, i, −1 + i].

7.10 (2) Q = (1/√2)[1 1; 1 −1].


7.12 (1) Unitary; the diagonal entries are {1, i}. (2) Orthogonal; {cos θ + i sin θ, cos θ − i sin θ}, where θ = cos^{-1}(0.6). (3) Hermitian; {1, 1 + √2, 1 − √2}.

7.13 (1) Since the eigenvalues of a skew-Hermitian matrix must always be purely imaginary, 1 cannot be an eigenvalue.
(2) Note that, for a skew-Hermitian matrix A, (e^A)^H = e^{A^H} = e^{−A} = (e^A)^{-1}.

7.14 det(U − λI) = det(U − λI)^T = det(U^T − λI).

7.15 U = (1/√2)[1 −1; 1 1], D = U^H AU = [2 + i, 0; 0, 2 − i].

7.17 (1) Let λ1, ..., λk be the distinct eigenvalues of A, and A = λ1P1 + ··· + λkPk the spectral decomposition of A. Then A^H = conj(λ1)P1 + ··· + conj(λk)Pk. Problem 7.28 shows that P_i = f_i(A), where the f_i's are the Lagrange polynomials associated with the λ_i's as in Section 2.10.2. Then
A^H = Σ_{i=1}^k conj(λ_i)P_i = Σ_{i=1}^k conj(λ_i)f_i(A) = g(A),
where g = Σ conj(λ_i)f_i. The converse is clear.
(2) Clear from (1) since A^H = g(A).

7.18 (See Exercise 6.14.) Since AB = BA, each E_{λi}(A) is B-invariant. Since B is normal, B^H = g(B) for some polynomial g. Thus each E_{λi}(A) is both B- and B^H-invariant. So the restriction of B to E_{λi}(A) is normal, since B is normal. That is, A and B have orthonormal eigenvectors in E_{λi}(A) ∩ E_{µj}(B).

7.19 (2) The characteristic polynomial of W is f(λ) = λ^n − 1.
(3) The eigenvalues of A are, for k = 0, 1, 2, ..., n − 1,
λ_k = Σ_{i=1}^n a_i ω^{(i−1)k} = a1 + a2ω^k + ··· + a_nω^{(n−1)k}.
(4) The characteristic polynomial of B is f(λ) = (λ − n + 1)(λ + 1)^{n−1}.

7.20 The eigenvalues are 1, 1, 4, and the orthonormal eigenvectors are (1/√2, −1/√2, 0), (−1/√6, −1/√6, √2/√3) and (1/√3, 1/√3, 1/√3). Therefore,
A = (1/3)[2 −1 −1; −1 2 −1; −1 −1 2] + (4/3)[1 1 1; 1 1 1; 1 1 1].

7.21 If λ is an eigenvalue of A, then λ^n is an eigenvalue of A^n. Thus, if A^n = 0, then λ^n = 0, so λ = 0. Conversely, by Schur's lemma, A is similar to an upper triangular matrix whose diagonal entries are the eigenvalues, which are assumed to be zero. Then it is easy to conclude that A is nilpotent.

7.22 Note that A⁺b = x_r is the unique optimal least squares solution in R(A) = R(U). Since, for any b ∈ R^m,
A^T A(U^T(UU^T)^{-1}(L^T L)^{-1}L^T)b = (U^T L^T LU)(U^T(UU^T)^{-1}(L^T L)^{-1}L^T)b = U^T(L^T L)(UU^T)(UU^T)^{-1}(L^T L)^{-1}L^T b = U^T L^T b = A^T b,


U^T(UU^T)^{-1}(L^T L)^{-1}L^T b is also a least squares solution. Moreover, it is in R(A) = R(U), since U^T times a column vector is a linear combination of the row vectors of U, which form a basis for R(A). Therefore, it is optimal, so that U^T(UU^T)^{-1}(L^T L)^{-1}L^T b = A⁺b. That is, U^T(UU^T)^{-1}(L^T L)^{-1}L^T = A⁺.

7.23 Nine of them are true.
(2) Consider [cos θ −sin θ; sin θ cos θ] with θ ≠ kπ. (3) Since rank(A) = rank(A^H A) = rank(AA^H), λ = 0 may be an eigenvalue of both A^H A and AA^H, with the same multiplicities. Now suppose λ ≠ 0 is an eigenvalue of A^H A with eigenvector x. Then Ax ≠ 0, and (AA^H)Ax = A(A^H A)x = λAx implies λ is also an eigenvalue of AA^H with eigenvector Ax. The converse is the same. Hence, A^H A and AA^H have the same eigenvalues.
(4) Consider [1 1; 0 2]. (5) If A is symmetric, then it is orthogonally diagonalizable. Moreover, if A is nilpotent, then all the eigenvalues must be zero, since otherwise it cannot be nilpotent. Thus the diagonal matrix is the zero matrix, and so is A.
(6) and (7) A permutation matrix is an orthogonal matrix, but need not be symmetric.
(8) If a nonzero nilpotent matrix N were Hermitian, then U^{-1}NU = D, where U is a unitary matrix and D is a diagonal matrix whose diagonal entries are not all zero. Thus D^k ≠ 0 for all k ≥ 1, that is, N^k ≠ 0 for all k ≥ 1.
(10) There is an invertible matrix Q such that A = Q^{-1}DQ. Thus, det(A + iI) = det(D + iI) ≠ 0.
(11) Consider A = [1 −1; 2 −1]. (12) Modify (10).

Chapter 8

Problems

8.2 (2) [4 1 0; 0 4 1; 0 0 4], (3) [2 0 0 0; 0 2 0 0; 0 0 1 1; 0 0 0 1].

8.3 (1) For λ = −1, x1 = (−2, 0, 1), x2 = (0, 1, 1), and for λ = 0, x1 = (−1, 1, 1). (2) For λ = 1, x1 = (−2, 0, 1), x2 = (5/2, 1/2, 0), and for λ = −1, x1 = (−9, −1, 1).

8.5 The eigenvalue is −1 of multiplicity 3 and has only one linearly independent eigenvector (1, 0, 3). The solution is
y(t) = (y1(t), y2(t), y3(t))^T = e^{−t}(−1 − 5t + 2t², −1 + 4t, 1 − 15t + 6t²)^T.


8.6 For any u, v ∈ C,
|u + v|² = (u + v)conj(u + v) = |u|² + 2ℜ(u conj(v)) + |v|² ≤ |u|² + 2|u conj(v)| + |v|² = |u|² + 2|u||v| + |v|² = (|u| + |v|)².
The equality holds ⇔ |u conj(v)| = ℜ(u conj(v)), i.e., u conj(v) = ℜ(u conj(v)) ∈ R ⇔ u = |u|z and v = |v|z for some z = e^{iθ}.

8.7 Consider the Jordan canonical form Q^{-1}A^T Q = J of A^T. By taking the transpose of this equation, one gets P^{-1}AP = J^T, where P = (Q^T)^{-1}. Let P′ be the matrix obtained from P by reversing the order of the column vectors in each group corresponding to a Jordan block of P. Then it is easy to see that P′^{-1}AP′ = J = Q^{-1}A^T Q. That is, A and A^T have the same Jordan canonical form J, which means that the corresponding eigenspaces have the same dimension.

8.8 See Problem 6.2.

8.9 Let λ1, ..., λn be the eigenvalues of A. Then
f(λ) = det(λI − A) = (λ − λ1) ··· (λ − λn).
Thus, f(B) = (B − λ1I_m) ··· (B − λnI_m) is non-singular if and only if B − λiI_m, i = 1, ..., n, are all non-singular. That is, none of the λi's is an eigenvalue of B.

8.10 The characteristic polynomial of A is f(λ) = (λ − 1)(λ − 2)², and the remainder is 104A² − 228A + 138I = [14 0 84; 0 98 0; 0 0 98].

Exercises 8.5

8.1 Find the Jordan canonical form of A as Q^{-1}AQ = J. Since A is nonsingular, all the diagonal entries λi of J, as the eigenvalues of A, are nonzero. Hence, each Jordan block J_j of J is invertible. Now one can easily show that (Q^{-1}AQ)^{-1} = Q^{-1}A^{-1}Q = J^{-1}, which is the Jordan form of A^{-1}, whose Jordan blocks are of the form J_j^{-1}.

8.3 (x, y) = (1/2)(4 + i, i).

8.4 (1) Use [3 1; 1 3] = [1/2 1/2; −1/2 1/2][2 0; 0 4][1 −1; 1 1].
(2) y(t) = √2 e^{4t}(1/√2, 1/√2)^T − √2 e^{2t}(−1/√2, 1/√2)^T.
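For 8.4, both the stated factorization of [3 1; 1 3] and the claim that y(t) solves y′ = Ay can be checked numerically; the finite-difference step used below is our own device, not part of the exercise:

import numpy as np

A = np.array([[3.0, 1.0], [1.0, 3.0]])
Q = np.array([[0.5, 0.5], [-0.5, 0.5]])
D = np.diag([2.0, 4.0])
R = np.array([[1.0, -1.0], [1.0, 1.0]])
print(np.allclose(Q @ D @ R, A))                 # True: the stated factorization

def y(t):   # y(t) = sqrt(2) e^{4t}(1/sqrt2, 1/sqrt2) - sqrt(2) e^{2t}(-1/sqrt2, 1/sqrt2)
    return np.exp(4*t) * np.array([1.0, 1.0]) - np.exp(2*t) * np.array([-1.0, 1.0])

t, h = 0.3, 1e-6
deriv = (y(t + h) - y(t - h)) / (2 * h)          # central-difference approximation of y'
print(np.allclose(deriv, A @ y(t), rtol=1e-4))   # True: y solves y' = A y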

8.5 (1) Use A = [−6 2 8; −3 1 2; 6 −1 −4][−2 0 0; 0 2 0; 0 0 −4][1/6 0 1/3; 0 2 1; 1/4 −1/2 0].
(2) y1(t) = −2e^{2(1−t)} + 4e^{2(t−1)}, y2(t) = −e^{2(1−t)} + 2e^{2(t−1)}, y3(t) = 2e^{2(1−t)} − 2e^{2(t−1)}.


8.6 y1(t) = 2(t − 1)e^t, y2(t) = −2te^t, y3(t) = (2t − 1)e^t.

8.8 (1) (a − d)² + 4bc ≠ 0 or A = aI.

8.9 (1) t² + t − 11, (2) t² + 2t + 13, (3) (t − 1)(t² − 2t − 5).

8.10 (3) A^{-1} = [1 0 −1; 0 1/2 −1/2; 0 0 1].

8.11 (2) [5 −22 10; 10 27 −60; 0 0 87].

8.12 See the solution of Problem 8.7.

8.14 (3) Use (2) and the proof of Theorem 8.11. (4) Use (3), Theorem 8.11, and (1) of Section 8.3.

8.15 (2) A^k = [1 0 k; 0 2^k 2^k − 1; 0 0 1].

8.16 Four of them are true.

Chapter 9

Problems

9.1 (1) [9 3 −4; 3 −1 1; −4 1 4], (2) (1/2)[0 1 1; 1 0 1; 1 1 0], (3) [1 1 0 −5; 1 1 0 0; 0 0 −1 2; −5 0 2 −1].

9.3 (1) The eigenvalues of A are 1, 2, 11. (2) The eigenvalues are 17, 0, −3, and so it is a hyperbolic cylinder. (3) A is singular and the linear form is present, thus the graph is a parabola.

9.5 B with the eigenvalues 2, 2 + √2 and 2 − √2.

9.7 The determinant is the product of the eigenvalues.

9.9 (1) is indefinite. (2) and (3) are positive definite.

9.10 (1) local minimum, (2) saddle point.

9.12 (2) b11 = b14 = b41 = b44 = 1, all others are zero.

9.14 Let D be a diagonal matrix, and let D′ be obtained from D by interchanging two diagonal entries d_{ii} and d_{jj}, i ≠ j. Let P be the permutation matrix interchanging the i-th and j-th rows. Then PDP^T = D′.

9.15 Count the number of distinct inertias (p, q, k). For n, the number of inertias with p = i is n − i + 1.

9.16 (3) index = 2, signature = 1, and rank = 3.

9.17 Note that the maximum value of R(x) is the maximum eigenvalue of A, and similarly for the minimum value.


9.18 max = 7/2 at ±(1/√2, 1/√2), min = 1/2 at ±(1/√2, −1/√2).

9.19 (1) max = 4 at ±(1/√6)(1, 1, 2), min = −2 at ±(1/√3)(−1, −1, 1);
(2) max = 3 at ±(1/√6)(2, 1, 1), min = 0 at ±(1/√3)(1, −1, −1).

9.22 If u ∈ U ∩ W, then u = αx + βy ∈ W for some scalars α and β. Since x, y ∈ U, b(u, x) = b(u, y) = 0. But b(u, x) = βb(y, x) = −β and b(u, y) = αb(x, y) = α.

9.23 Let c(x, y) = (1/2)(b(x, y) + b(y, x)) and d(x, y) = (1/2)(b(x, y) − b(y, x)). Then b = c + d.

Exercises 9.13

9.1 (1) [1 2; 2 3], (3) [1 2 3; 2 −2 −4; 3 −4 −3], (4) [3 −2 0; 5 7 −8; 0 4 −1].

9.4 (2) {(2, 1, 2), (−1, −2, 2), (1, 0, 0)}.

9.3 (i) If a = 0 = c, then λi = ±b. Thus the conic section is a hyperbola.
(ii) Since we assumed that b ≠ 0, the discriminant (a − c)² + 4b² > 0. By the symmetry of the equation in x and y, we may assume that a − c ≥ 0.
If a − c = 0, then λi = a ± b. Thus, the conic section is an ellipse if λ1λ2 = a² − b² > 0, or a hyperbola if a² − b² < 0. If λ1λ2 = a² − b² = 0, then it is a parabola when λ1 ≠ 0 and e′ ≠ 0, or a line or two lines in the other cases.
If a − c > 0, let r² = (a − c)² + 4b² > 0. Then λi = ((a + c) ± r)/2 for i = 1, 2. Hence, 4λ1λ2 = (a + c)² − r² = 4(ac − b²). Thus, the conic section is an ellipse if det A = ac − b² > 0, or a hyperbola if det A = ac − b² < 0. If det A = ac − b² = 0, it is a parabola, or a line or two lines depending on the possible values of d′, e′ and the eigenvalues.

9.6 If λ is an eigenvalue of A, then λ² and 1/λ are eigenvalues of A² and A^{-1}, respectively. Note x^T(A + B)x = x^T Ax + x^T Bx.

9.8 (1) Q = (1/√2)[1 1; 1 −1]. The form is indefinite with eigenvalues λ = 5 and λ = −1.

9.10 (1) A = [2 −1; 2 0], (2) B = [3 9; 0 6], (3) Q = [1 2; 1 −1].

9.11 (2) The signature is 1, the index is 2, and the rank is 3.

9.15 (2) The point (1, π) is a critical point, and the Hessian is [1 1; 1 −1]. Hence, f(1, π) is a local maximum.

9.18 Seven of them are true.
(5) Consider the bilinear form b(x, y) = x1y1 − x2y2 on R².
(7) The identity I is congruent to k²I for all k ∈ R. (8) See (7).
(9) Consider the bilinear form b(x, y) = x1y2. Its matrix Q = [0 1; 0 0] is not diagonalizable.