lie algebras 2-1-11

Upload: tcampso

Post on 08-Apr-2018


  • 8/7/2019 Lie Algebras 2-1-11

    1/25

    Lie Algebras

    Tim Campion

    February 1, 2011

Contents

1 Nilpotent, Solvable, Semisimple Lie Algebras

2 Cartan's Criterion

3 Complete Reducibility and Jordan decomposition

4 sl2(C)

5 Cartan subalgebras

6 Roots of a semisimple Lie algebra

7 Root Systems

8 Dynkin Diagrams

9 The Weyl Group

1 Nilpotent, Solvable, Semisimple Lie Algebras

Definition 1. An abelian Lie algebra is one where the commutator is identically 0.

The lower central series is defined by D_0 g = g and D_{k+1} g = [g, D_k g]. If the lower central series terminates at 0, then g is said to be nilpotent.


The upper central series (more standardly called the derived series) is defined by D^0 g = g and D^{k+1} g = [D^k g, D^k g]. If this series terminates at 0, then g is said to be solvable. If g has no nonzero solvable ideals, then g is said to be semisimple. If g has no nonzero ideals other than itself, then g is said to be simple.

Note D_0 g = D^0 g = g, and D_1 g = D^1 g will be called D g. A Lie algebra g is abelian if and only if D g = 0. Note that by induction D^k g ⊆ D_k g, so nilpotent Lie algebras are solvable. It is clear that D_k g is an ideal, and so contains D_{k+1} g. This is also true in the upper central series: by Jacobi, [X, [d, d′]] = [d, [X, d′]] − [d′, [X, d]]. By induction D^k g is an ideal, so if d, d′ ∈ D^k g, then [X, d], [X, d′] ∈ D^k g as well, so the overall expression is in D^{k+1} g. Then extend by linearity.
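None of the computations in these notes are mechanized, but the two series are easy to check by machine. Here is a minimal numpy sketch (my example, not the notes') for the 2x2 upper-triangular matrices, which are solvable but not nilpotent:

```python
import numpy as np

def bracket(A, B):
    return A @ B - B @ A

def span_dim(mats):
    """Dimension of the linear span of a list of matrices."""
    return int(np.linalg.matrix_rank(np.array([m.flatten() for m in mats])))

def bracket_space(xs, ys):
    """All pairwise brackets [x, y]; their span is [span xs, span ys]."""
    return [bracket(x, y) for x in xs for y in ys]

E11 = np.array([[1., 0.], [0., 0.]])
E12 = np.array([[0., 1.], [0., 0.]])
E22 = np.array([[0., 0.], [0., 1.]])
b = [E11, E12, E22]            # the upper-triangular 2x2 matrices

D1 = bracket_space(b, b)       # D^1 b spans <E12>
D2 = bracket_space(D1, D1)     # D^2 b = 0: the derived series dies, so b is solvable
L2 = bracket_space(b, D1)      # D_2 b still spans <E12>: the lower central
                               # series stalls, so b is not nilpotent
```

The derived series terminates at the second step while the lower central series never reaches 0, illustrating that solvable does not imply nilpotent.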

Proposition 2. A Lie algebra g is nilpotent if and only if there exists a sequence g = g_0 ⊇ g_1 ⊇ · · · ⊇ g_n = 0 of ideals of g with g_i/g_{i+1} in the center of g/g_{i+1}.

A Lie algebra g is solvable if and only if there exists a sequence g = g_0 ⊇ g_1 ⊇ · · · ⊇ g_n = 0 of ideals of g with g_i/g_{i+1} abelian.

A Lie algebra g is semisimple if and only if it has no nonzero abelian ideals.

Proof. In the solvable case, the upper central series fits the bill for such a sequence, since we mod out by the commutator each time. Conversely, if we have such a sequence, then at each step at least the commutator was modded out. By induction D^k g ⊆ g_k, so the upper central series terminates at 0 and g is solvable: the upper central series descends at least as fast as any such sequence.

In the nilpotent case, the lower central series fits the bill, because the center of g/D_{k+1} g is all X such that [g, X] ⊆ D_{k+1} g. Since D_{k+1} g = [g, D_k g] by definition, this is true of D_k g. Conversely, if you have such a sequence, then by induction D_k g ⊆ g_k, so the lower central series terminates; it descends at least as fast as any such sequence.

The semisimple criterion works because an abelian ideal is solvable, and conversely the last nonzero step in the u.c.s. of a solvable ideal is abelian, since its commutator is 0 (and it is again an ideal of g).

Nilpotency and solvability are preserved under quotients, which can at most collapse steps in the derived series. This is in fact true of semisimple algebras as well, but only because we can characterize them as direct sums of simple ones. However, we can at least say that a semisimple Lie algebra has trivial center, because the center is an abelian ideal: its commutator with anything


is zero, which is definitely in the center. Complete reducibility will also show that a semisimple Lie algebra has no nonzero abelian quotients; in particular, that it is its own commutator.

On the other hand, if h ⊆ g is an ideal, then g is solvable iff h and g/h are both solvable, by our criterion. This will also turn out to be true of semisimple Lie algebras, but again we have to prove complete reducibility first. It is not true of nilpotent Lie algebras: the nilpotency condition requires information from the whole algebra, which is hidden by the quotient. But note that the commutator of any solvable Lie algebra is nilpotent!

Theorem 3 (Engel). Suppose g ⊆ gl(V) is made up entirely of nilpotent endomorphisms. Then there is a nonzero v ∈ V annihilated by all of g, and inductively, g is simultaneously strictly upper-triangular in some basis.

Proof. Note that if X is a nilpotent matrix, then ad X is also nilpotent; in fact, if X^n = 0, then (ad X)^{2n} = 0, because there will be a string of n X's in any term of the expansion.
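That remark is easy to check numerically (my example, not from the notes): take a single 3x3 nilpotent Jordan block X, so X^3 = 0; then (ad X)^6 kills every matrix, though lower powers need not.

```python
import numpy as np

def ad_power(X, Y, k):
    """Apply ad(X): Y -> [X, Y] to Y, k times."""
    for _ in range(k):
        Y = X @ Y - Y @ X
    return Y

X = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [0., 0., 0.]])   # X^3 = 0
E31 = np.zeros((3, 3))
E31[2, 0] = 1.0                # matrix unit in the opposite corner

Z4 = ad_power(X, E31, 4)       # still nonzero
Z6 = ad_power(X, E31, 6)       # (ad X)^{2n} = 0 once X^n = 0
```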

First we find an ideal of codimension 1. Let h be a maximal proper subalgebra of g; we claim that h is such an ideal. Since h is a subalgebra of g, it preserves h under ad, and hence acts on the vector space g/h by ad. Now, ad(h) ∈ gl(g) is nilpotent for every h ∈ h because h ∈ g, and remains nilpotent on the quotient space gl(g/h). Moreover, since h is a proper subalgebra, the image ad|_{g/h}(h) has smaller dimension than g. So we assume inductively that the entire theorem is true for such smaller-dimensional algebras. This means that there is a nonzero v ∈ g/h such that ad(h)(v) = 0 for all h ∈ h, i.e. v ∉ h but [h, v] ⊆ h. But this means that h + ⟨v⟩ is again a subalgebra of g. Since h was a maximal proper subalgebra, it follows that h + ⟨v⟩ = g. So h is of codimension 1, and since [h, v] ⊆ h, it follows that h is an ideal.

Now we will find a vector in V killed by all of g. Let h be an ideal of codimension 1, and let Y ∈ g \ h, so that g = h + ⟨Y⟩. Let W = ker h ⊆ V be everything annihilated by h. First we show that W is invariant under Y. For given w ∈ W, we have XY w = Y Xw + [X, Y]w = 0, since X and [X, Y] ∈ h kill w. This holds for all X ∈ h, so Y w is in ker h = W, as claimed. Now, since Y acts on V nilpotently, and since W is Y-invariant, it follows that Y acts on W nilpotently, which in particular means it acts singularly on W. So there is a nonzero w ∈ W such that Y w = 0. Since hw = 0 too, this means that all of g annihilates w.

Then, of course, to get to the strictly upper-triangular matrix, you start with w ∈ ker g. Then look at g acting on V/⟨w⟩. This action is made up of


nilpotent matrices, so by induction it is strictly upper triangular. Then expanding back to all of V, it is still strictly upper triangular.

Lie's theorem is going to be somewhat analogous. For it, we will need a

Lemma 4 (Miraculous). Let g be a Lie algebra, and let h ⊆ g be an ideal. Let V be a representation of g. If λ ∈ h^* and W is a weight space for λ, then W is g-invariant.

Proof. Let X ∈ h and Y ∈ g. For w ∈ W we have XY w = Y Xw + [X, Y]w = λ(X)Y w + λ([X, Y])w. Let u = Y w. We have (X − λ(X))u ∈ W for all X ∈ h. Now, W is definitely h-invariant, so we can look at the action of h on V/W. The equation says that u is a simultaneous eigenvector of weight λ modulo W. But this means that u (indeed, any element of u + W) is actually a weight vector of weight λ, so that actually u ∈ W. This is exactly what we claimed.

Theorem 5 (Lie). Let g ⊆ gl(V) be a solvable Lie algebra. Then there is a simultaneous eigenvector of g. Inductively, g is simultaneously (non-strictly) upper-triangular in some basis.

Proof. Again we start by finding an ideal of codimension 1, with basically the same argument. Let h be a maximal proper subalgebra. Then h is also solvable, so ad|_{g/h}(h) is also solvable, and of lower dimension. Inductively we suppose that ad h has a simultaneous eigenvector v ∈ g/h with weight λ. Then ad(X)(v) = [X, v] ∈ λ(X)v + h for all X ∈ h. So h + ⟨v⟩ is again a subalgebra; by maximality h + ⟨v⟩ = g, and it follows that h is an ideal.

Now let h be an ideal of codimension 1. Since h is solvable, inductively it has a nonzero weight space W of weight λ. By the Miraculous Lemma, W is g-invariant. Let Y ∈ g \ h. Since Y|_W is an endomorphism of a complex vector space, it has an eigenvector w. Then w is a simultaneous eigenvector of ⟨Y⟩ + h = g.

So Lie's theorem guarantees that any representation of a solvable Lie algebra respects a complete flag of invariant subspaces. The converse is a result of the following characterization:

Proposition 6. If g ⊆ gl(V) is a Lie algebra respecting a complete flag V_1 ⊆ · · · ⊆ V_n = V, then D g consists of nilpotent endomorphisms strictly respecting the same flag.

Conversely, if g ⊆ gl(V) is a Lie algebra and D g strictly respects a complete flag V_1 ⊆ · · · ⊆ V_n, then g is solvable, respecting the same flag.


Proof. Choose a basis v_1, . . . , v_n adapted to the flag (with V_0 = 0). For X, Y ∈ g write Xv_i = λ_i v_i mod V_{i−1} and Y v_i = μ_i v_i mod V_{i−1}. Then XY v_i = λ_i μ_i v_i mod V_{i−1} and Y Xv_i = λ_i μ_i v_i mod V_{i−1}. So [X, Y]v_i ∈ V_{i−1}, i.e. [X, Y] strictly respects the flag.

For the converse, note that D g then consists of nilpotent endomorphisms, so D g is actually a nilpotent Lie algebra, and in particular is solvable. Now, g/D g is also solvable (in fact, abelian), so in fact g is solvable, too.

Corollary 7. A Lie algebra g ⊆ gl(V) is solvable if and only if it respects a complete flag.

2 Cartan's Criterion

Lemma 8. Let X = X_s + X_n be the Jordan decomposition of a linear endomorphism X. Then ad(X_s) = ad(X)_s and ad(X_n) = ad(X)_n.

Proof. We need to show that ad preserves diagonalizability (of X_s), nilpotency (of X_n), and commuting (of X_s, X_n). It preserves commutators because it is a representation. We observed earlier that ad preserves nilpotency, because expanding ad_X^{2n}(Y), every term has a string of n X's, so they are all zero. So we check that it preserves diagonalizability.

In fact, ad(e_i ⊗ e_i^*) is diagonal in the e_j ⊗ e_k^* basis of gl(V). Composing in either direction, it zeroes out the term unless the meeting ends agree, in which case it doesn't change that end; so e_j ⊗ e_k^* is an eigenvector, with eigenvalue δ_{ij} − δ_{ik}. Taking linear combinations, if X is diagonal in the e_i basis, then ad(X) is diagonal in the e_j ⊗ e_k^* basis. So given a diagonalizable X ∈ g, we use for e_1, . . . , e_n an eigenbasis of X to see that ad(X) is also diagonalizable. In fact, it is diagonal in the induced basis on gl(V), viewed as an element of gl(gl(V)).
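Lemma 8 can be spot-checked in coordinates (a sketch of mine, assuming numpy): for a single 2x2 Jordan block X, the semisimple part X_s is the identity, so ad(X_s) = 0; correspondingly ad(X) = ad(X_n) is nilpotent, i.e. the semisimple part of ad(X) is 0 as well.

```python
import numpy as np

def ad_mat(A):
    """Matrix of ad(A) acting on gl_n, in the E_ij basis (row-major vec)."""
    I = np.eye(A.shape[0])
    return np.kron(A, I) - np.kron(I, A.T)

X  = np.array([[1., 1.], [0., 1.]])   # Jordan decomposition X = Xs + Xn
Xs = np.eye(2)                        # semisimple part
Xn = np.array([[0., 1.], [0., 0.]])   # nilpotent part

adX, adXs, adXn = ad_mat(X), ad_mat(Xs), ad_mat(Xn)
```

Since ad(X) = ad(Xn) is nilpotent, its semisimple part is 0 = ad(Xs), as the lemma predicts.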

Observation 9. ad : gl(V) → gl(gl(V)) commutes with the conjugation automorphism gl(V) → gl(V), defined with respect to a given basis by conjugating the coefficients in the e_i ⊗ e_j^* basis (and similarly with the induced basis in gl(gl(V))). This is because multiplication by e_i ⊗ e_j^*, written as a matrix in the e_k ⊗ e_l^* basis, has integer entries. These are fixed under conjugation. So it's only the coefficients which change.

Let g be a Lie algebra and B_V the Killing form associated to a representation V; that is, B_V(X, Y) = tr_V(XY).

First note that if g is solvable, then B_V(g, D g) = 0 identically. For if V_1 ⊆ · · · ⊆ V_n is a flag respected by g, then D g strictly respects the same flag, and g ◦ d maps V_i to V_{i−1} for g ∈ g, d ∈ D g. So in a flag basis the diagonal of g ◦ d is identically 0, and it is traceless.
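A quick numerical spot check (arbitrary upper-triangular matrices of mine, not from the notes): the commutator of two upper-triangular matrices is strictly upper triangular, and its product against any upper-triangular matrix is traceless.

```python
import numpy as np

X = np.array([[1., 2., 3.],
              [0., 4., 5.],
              [0., 0., 6.]])
Y = np.array([[7., 1., 0.],
              [0., 2., 4.],
              [0., 0., 9.]])

d = X @ Y - Y @ X    # an element of D g: strictly upper triangular
```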

Cartan's criterion gives a converse.


Theorem 10 (Cartan). If g ⊆ gl(V) is a Lie algebra and B_V(g, g) is identically 0, then g is solvable.

Proof. We will show that D g consists of nilpotent endomorphisms. Let X ∈ D g, and use the Jordan decomposition X = X_s + X_n. Let λ_1, . . . , λ_n be the eigenvalues of X_s. We want to show that they are all 0, which is equivalent to showing that λ̄_1 λ_1 + · · · + λ̄_n λ_n = 0. This is tr(X̄_s X). Since X ∈ D g, we can write it as a sum of elements of the form [Y, Z] with Y, Z ∈ g. So it suffices to show that tr(X̄_s [Y, Z]) = tr([X̄_s, Y] Z) = 0 for Y, Z ∈ g. By hypothesis, B_V is 0 on g. So for this to hold it suffices that [X̄_s, Y] = ad X̄_s(Y) ∈ g for all Y ∈ g.

Combining the lemma and the observation, we have ad(X̄_s) equal to the conjugate of ad(X_s), which is the conjugate of ad(X)_s. Since we can write the projection operators onto the eigenspaces of ad(X) as polynomials in ad(X), and the conjugate of ad(X)_s is a linear combination of these projections, it follows that we can also write it as a polynomial in ad(X). Hence ad X̄_s(Y) can be obtained by repeatedly applying commutators with X to Y. So it is in g, and we are done.

(In fact, it's simple enough to observe directly that the projection operators onto the invariant subspaces of ad(X) are polynomials in ad(X). Since the conjugate of ad(X)_s is a linear combination of these, we are done. We can avoid the slight haziness surrounding that observation...)

Corollary 11. A Lie algebra g ⊆ gl(V) is solvable iff B_V(D g, D g) = 0, iff B_V(g, D g) = 0.

Proof. By Cartan's criterion, D g is solvable. Since g/D g is solvable too (in fact, abelian), it follows that g is also solvable. The converse was our initial observation: D g ratchets things down the flag while g does not raise them, so that g ◦ d ratchets down the flag and so has trace 0.

Corollary 12. A Lie algebra g is semisimple iff B_ad(g, g) is nondegenerate.

Proof. Note that if h ⊆ g is an ideal, then h^⊥ = {X : B_V(X, Y) = 0 for all Y ∈ h} is also an ideal: for X ∈ h^⊥ and Z ∈ g, B_V([X, Z], Y) = B_V(X, [Z, Y]) = 0 for all Y ∈ h.

Using B_V = B_ad, let s = h ∩ h^⊥. Then ad(s) is solvable by Cartan's criterion. Since ad(s) ≅ s/(Z(g) ∩ s), the latter being abelian, it follows that s is also solvable. So letting h = g, we see that if g is semisimple, then s = ker B is 0. Conversely, if h is an abelian ideal and X ∈ h, then ad(X) maps things outside of h into h while zeroing out h: in a basis starting with a basis for h, it is zero on the diagonal. If Y ∈ g, then the only way ad(X) ad(Y) could have nonzero trace is if ad(Y) maps something from h outside of h: but it can't do this, because h is an ideal. So B(X, Y) = 0 for all Y ∈ g, and B is degenerate.
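For sl2 this is a concrete computation (mine, not in the notes): in the standard basis H = e1 ⊗ e1^* − e2 ⊗ e2^*, X = e1 ⊗ e2^*, Y = e2 ⊗ e1^*, the Killing form matrix works out to [[8, 0, 0], [0, 0, 4], [0, 4, 0]], which is visibly nondegenerate.

```python
import numpy as np

def ad_mat(A):
    """Matrix of ad(A) on gl_2 in the E_ij basis (row-major vec)."""
    I = np.eye(A.shape[0])
    return np.kron(A, I) - np.kron(I, A.T)

H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])
basis = [H, X, Y]

# B(a, b) = tr(ad a . ad b); the center of gl_2 contributes 0 to the trace,
# so the trace over gl_2 equals the trace over sl_2
B = np.array([[np.trace(ad_mat(a) @ ad_mat(b)) for b in basis] for a in basis])
```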


with g comes from the fact that a representation g → gl(V)... well, at least the adjoint g → gl(g) ≅ g ⊗ g^* is a distinguished element of g^* ⊗ g ⊗ g^*. Ah! Even more fundamentally, the commutator [·, ·] ∈ g^* ⊗ g^* ⊗ g is a distinguished element. It gives us the Killing form by contracting the adjoint rep tensor, or rather, quite explicitly, by twice contracting the commutator tensored with itself.

Akhil Mathew's definition also has a very nice way to see that the Casimir is central. It is obtained as the image under natural maps of the identity in g ⊗ g^*, which is invariant there. Now, g ⊗ g^* is naturally a Lie algebra. So is g ⊗ g. The raising map g ⊗ g^* → g ⊗ g is a map of g-modules: it suffices to check that the lowering map B_+ : g → g^*, v ↦ B(v, ·), is. That is, we have representations (g, ad) → (g^*, ad^*) via B_+; B_+ is a map of representations iff B_+ ◦ ad = ad^* ◦ B_+, or equivalently, iff B_+(ad(X)(v)) = ad^*(X)(B_+(v)) for all v ∈ g. On the left we have

B_+(ad(X)(v)) = B_+([X, v]) = B([X, v], ·)

On the right we have

ad^*(X)(B_+(v)) = ad^*(X)(B(v, ·)) = −B(v, ad(X)(·)) = −B(v, [X, ·]) = −B([v, X], ·) = B([X, v], ·)

So indeed they agree. In order to be really canonical, we should use universal properties to show this; this demonstration wouldn't work if functions weren't determined by their values.

So we have established that the map 1 ⊗ B_+^{−1} : g ⊗ g^* → g ⊗ g is a map of representations. After that, the multiplication map g ⊗ g → U(g) is also a map of representations, essentially by the construction of U(g) (or rather, by some universal property of the map g → U(g)). Since the identity commutes with everything in g ⊗ g^*, and since B_+ is surjective, and since the quotient g ⊗ g → U(g) is graded and surjective, it follows that the overall map B̃_+ : g ⊗ g^* → U(g) is surjective onto the degree-2 elements. Since g is semisimple, it is also perfect, and so in fact B̃_+ hits both degree-1 and degree-2 elements. So if X ∈ g, then [c, X] = [B̃_+(Id), B̃_+(X̃)] = B̃_+([Id, X̃]) = 0 for X̃ a preimage of X. Now, the commutator on U(g) is generated by the commutator with degree-1 elements, so this implies that c commutes with every element of U(g).

This argument works if B is replaced by any nondegenerate associative form (i.e. B([X, Y], Z) = B(X, [Y, Z])). So in particular, it works for the Killing form coming from any faithful representation.
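For sl2 in its standard representation this is all explicit (my check, not the notes'): the trace form has B(H, H) = 2 and B(X, Y) = 1, so the basis dual to (H, X, Y) is (H/2, Y, X), and the Casimir c = H(H/2) + XY + YX is the scalar 3/2, with trace 3 = dim sl2, a fact used in the proof of Weyl's theorem below.

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])

# Casimir on C^2, built from the trace form's dual basis (H/2, Y, X)
c = H @ (H / 2) + X @ Y + Y @ X
comms = [c @ Z - Z @ c for Z in (H, X, Y)]   # c commutes with all of sl_2
```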


Note that a representation of a Lie algebra extends to a representation of the universal enveloping algebra. Invariant subspaces remain invariant (being subrepresentations); if the representation of the Lie algebra is trivial, then so is the representation of the enveloping algebra. Note that if T ∈ U(g) is central, then T is a g-invariant endomorphism of any given representation. In particular, the Casimir is an endomorphism of any representation of g.

    Now we are ready to prove

Theorem 14 (Weyl, by the unitary trick). Let g be a semisimple Lie algebra over C. Then the category of finite-dimensional g-modules has complete reducibility. That is, every finite-dimensional representation is the direct sum of irreducible representations.

Proof. First, suppose W ⊆ V is an irreducible invariant subspace of codimension 1. Since W is an irrep of g, by Schur's lemma c_V acts on it by a homothety; it is not the zero map, because tr c_V = dim g ≠ 0. Since V/W is a 1-dimensional rep of g, it is trivial. So c_V acts singularly. Its kernel meets the complement of W; being the kernel of a map of representations, it is also a subrep of V. Hence V = W ⊕ ker c_V.

Then this holds by induction even if W is not irreducible. For if W ⊆ V is invariant of codimension k and not irreducible, then let Z ⊆ W be a proper nonzero invariant submodule. Then V/Z splits as W/Z ⊕ Y/Z by induction, since W/Z is of codimension k in V/Z; here Y ⊆ V is g-invariant. Since Z is of codimension k in Y (Y/Z being k-dimensional), it follows by induction that Y splits as Y′ ⊕ Z. Since Y′ ∩ W ⊆ Z, it follows that Y′ ∩ W = 0. So we have V = Y′ ⊕ W. The case k = 1 is applicable now.

Now we would like to use the codimension 1 case to prove the entire theorem. Let W ⊆ V be an irreducible invariant subspace. If V contains distinct irreducible subspaces isomorphic to W, then their intersection with W is 0. So mod out by those, and assume for the moment that they aren't there. Consider the restriction map ρ : Hom(V, W) → Hom(W, W). This map is a map of g-representations. Consider the submodule Z = ρ^{−1}(Hom_g(W, W)). Because Hom_g(W, W) is 1-dimensional and ρ is surjective, it follows that ker ρ ∩ Z is of codimension 1 in Z. Moreover, this is a submodule. So by the codimension 1 case, Z splits as (ker ρ ∩ Z) ⊕ C·A, where A generates a 1-dimensional submodule. Because g is semisimple (hence perfect, with no nonzero 1-dimensional representations), it is in fact the case that g acts trivially on A; that is, A : V → W is g-linear. Moreover, A preserves W. Since W is irreducible, A is a homothety on W, and we can choose scaling so that it is the identity on W; that is, we have a projection map onto W. The kernel is a complementary subspace for W.


The induction we performed earlier now proves the theorem when W is not irreducible.

The next thing Fulton and Harris prove is the preservation of Jordan decomposition, which I guess is probably important in proving the existence of a Cartan subalgebra.

    decomposition, which I guess is probably important in proving the existenceof a Cartan subalgebra.

Theorem 15. Let g ⊆ gl(V) be a semisimple Lie algebra. Then for every X ∈ g, the semisimple and nilpotent parts X_s and X_n are also in g.

    Note the theorem is not at all true in the solvable case.

Proof. Since a semisimple Lie algebra is perfect, and the commutator of gl(V) is sl(V), it follows that g ⊆ sl(V).

It suffices to focus on X_s (or X_n; the argument doesn't depend on which one). Now, ad(X)_s is a polynomial in ad(X). Hence for Y ∈ g we have ad(X)_s(Y) ∈ g. But we know that this is the same as ad(X_s) in a matrix algebra (Lemma 8); i.e. [X_s, g] ⊆ g. So X_s lies in the one-step normalizer of g, consisting of everything whose bracket with g lies in g. Call this subalgebra N ⊆ gl(V).

The claim is that g is essentially characterized within N by its invariant subspaces. That is, for each invariant subspace W ⊆ V of g, let L_W ⊆ gl(V) be the space of all endomorphisms preserving W which, in addition, are traceless on W. Then g ⊆ L_W, because in the representation g → gl(W), the image is semisimple and hence contained in sl(W).1 So certainly g ⊆ N ∩ ⋂_W L_W = g′. The claim is that this inclusion is an equality.

Look at the adjoint rep of g on g′. Then g is a subrep. By Weyl's theorem,

there is a complementary subspace: g′ = g ⊕ h. Suppose Y ∈ h. Then for any irreducible invariant subspace W ⊆ V of g, the action of Y commutes with the action of g. So Y is a g-linear endomorphism of W, and by Schur's lemma Y|_W is a homothety. But since tr Y|_W = 0, this means that Y|_W = 0. Since V decomposes as the direct sum of irreducible W's, we in fact have Y = 0. So h = 0 and g = g′.

Since X_s ∈ L_W for each W (X_s preserves each W, being a polynomial in X, and is traceless on W since X is and X_n is nilpotent) and X_s ∈ N, it follows that X_s ∈ g.

In fact, since ad : g → gl(g) is an embedding for g semisimple, we can take the Jordan decomposition in gl(g) and pull it back to g to obtain the

1 Because you can mod out by sl(W) from gl(W) after representing: the result is an abelian image of g. So it had better be zero.


absolute Jordan decomposition in g. The absolute Jordan decomposition is preserved under homomorphisms, because they commute with the adjoint rep (and the induced adjoint homomorphism...). This includes representation homomorphisms.

Given a homomorphism φ : g → h: if X, Y commute in g, then φ(X), φ(Y) commute in h. If ad X is diagonalizable on g, then ad φ(X) is diagonalizable on the image of g, and similarly if it is nilpotent on g, then its image is nilpotent on the image of g. Oh well, all of this stuff only makes sense if we stay in the realm of semisimple algebras. So if we have g → h, and both are semisimple, then... well, h doesn't have to split as a direct sum with the image; only if it's an ideal!

The decomposition is not guaranteed to be preserved under arbitrary homomorphisms, only under representations.

4 sl2(C)

Recall that sl_n(C) is the Lie algebra of traceless endomorphisms C^n → C^n.

If e_1, e_2 is a basis for C^2, then define H = e_1 ⊗ e_1^* − e_2 ⊗ e_2^*, X = e_1 ⊗ e_2^*, and Y = e_2 ⊗ e_1^*. These span sl_2, and their relations are apparent: [H, X] = 2X, [H, Y] = −2Y, [X, Y] = H.
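These relations take a few lines to confirm in numpy (a sanity check, not part of the notes):

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])   # e1 (x) e1* - e2 (x) e2*
X = np.array([[0., 1.], [0., 0.]])    # e1 (x) e2*
Y = np.array([[0., 0.], [1., 0.]])    # e2 (x) e1*

def br(A, B):
    return A @ B - B @ A
```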

It's simple because the projections onto the eigenspaces ⟨X⟩, ⟨Y⟩, ⟨H⟩ of ad(H) are polynomials in ad(H). So if an ideal of sl_2(C) contains a nonzero element, then it contains an X, Y, or H. Then it contains H, by raising, lowering, or staying the same; and from H it contains everything.

Let ρ : sl_2(C) → gl(V) be an irreducible n-dimensional representation. We've just proven that the representation preserves the Jordan decomposition. Since ad(H) is diagonalizable, so is ρ(H). Decompose V = ⊕_λ V_λ, where V_λ is the λ-eigenspace of ρ(H). Then the fundamental calculation shows that ρ(X) and ρ(Y) raise and lower the eigenvalue by 2, respectively. Keep raising a weight vector until you hit zero (you're changing eigenspaces, of which there are only a finite number, so this eventually happens). Call the vector from the step before v_0 ∈ V_λ. Now lower until you hit zero again, setting v_k = ρ(Y)^k v_0. The vectors you have just passed through span an invariant subspace V_0: of course ρ(Y)v_k = v_{k+1} ∈ V_0 and ρ(H)v_k = (λ − 2k)v_k ∈ V_0. In the one case, ρ(X)v_0 = 0 ∈ V_0, while if k ≠ 0 then

ρ(X)v_k = ρ(X)ρ(Y)v_{k−1} = ρ([X, Y])v_{k−1} + ρ(Y)ρ(X)v_{k−1}


The first term is ρ(H)v_{k−1} = (λ − 2(k − 1))v_{k−1}, while by induction on k we may suppose that ρ(X)v_{k−1} = c_{k−1} v_{k−2} for some scalar c_{k−1}; then the latter term is c_{k−1} v_{k−1}, so the whole sum is a multiple of v_{k−1}. This completes the induction, and we conclude that ρ(X) preserves V_0 too. So in fact V_0 = V.

Now we could use the fact that representations are determined by their highest weight, simply construct the Sym^n representations, and be done. But that's a little unsatisfying, so let's show directly what the weights of the irreps are.

Look at our calculation of Xv_k. We have Xv_k = c_k v_{k−1}. Then c_k = c_{k−1} + λ − 2(k − 1). Since c_0 = 0, this inductively gives

c_k = kλ − 2 Σ_{i=0}^{k−1} i = kλ − k(k − 1) = k(λ − k + 1)

for k ≥ 1. Now, if (λ − k + 1) is nonzero and v_{k−1} is nonzero, then this immediately implies that v_k is nonzero. So it must be that when k = n we have λ − n + 1 = 0. Hence, an irreducible representation of dimension n has highest weight λ = n − 1, and has 1-dimensional weight spaces of weights n − 1, n − 3, . . . , −n + 3, −n + 1, because each lowering reduces the weight by 2.
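The formula c_k = k(λ − k + 1) can be verified in the adjoint representation, where λ = 2 and the lowering chain starts at the highest weight vector v_0 = X (my verification, not in the notes):

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])
X = np.array([[0., 1.], [0., 0.]])
Y = np.array([[0., 0.], [1., 0.]])

def br(A, B):
    return A @ B - B @ A

lam = 2            # highest weight of the 3-dimensional adjoint rep
v0 = X             # [X, v0] = 0 and [H, v0] = 2 v0
v1 = br(Y, v0)     # lowering: v1 = -H
v2 = br(Y, v1)     # v2 = -2Y
v3 = br(Y, v2)     # chain of length lam + 1 = 3 ends: v3 = 0

def c(k):
    return k * (lam - k + 1)
```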

Can we show directly that any two irreps with the same weights are isomorphic? Sure: pick a basis by inductively lowering highest weight vectors, then send w_i ↦ v_i. Then the map commutes with Y; it commutes with H by hypothesis; and it commutes with X because we just calculated from first principles what X does, so it does the same thing on both sides.

Since sl_2(C) is semisimple, every representation is a direct sum of irreducibles.
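As a small illustration of complete reducibility (mine, not the notes'): on C^2 ⊗ C^2, H acts by H ⊗ 1 + 1 ⊗ H, and its spectrum {2, 0, 0, −2} is exactly the weights of the 3-dimensional irrep together with the trivial rep, i.e. C^2 ⊗ C^2 ≅ V_2 ⊕ V_0.

```python
import numpy as np

H = np.array([[1., 0.], [0., -1.]])
I = np.eye(2)

H4 = np.kron(H, I) + np.kron(I, H)     # action of H on C^2 (x) C^2
weights = sorted(np.linalg.eigvalsh(H4))
```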

    5 Cartan subalgebras

Proposition 16. Let h ⊆ gl(V) be a represented Lie algebra. Then the following are equivalent:

1. h is simultaneously diagonalizable.

2. There is a collection of (necessarily linear) functionals λ : h → C such that V = ⊕_λ V_λ, where V_λ = {v ∈ V | Hv = λ(H)v for all H ∈ h}.

3. h is abelian and spanned by individually diagonalizable elements.


Furthermore, if V is a semisimple Lie algebra g, and h is the adjoint representation of an abelian subalgebra of g, then these are all equivalent to

4. every element of h is individually diagonalizable.

Proof of equivalence of 1, 2, 3. (1) and (2) are essentially restatements of each other: if (2) holds, then choose a basis e_1^λ, . . . , e_{k_λ}^λ for each V_λ separately; amalgamating, we get a basis for V. And He_i^λ = λ(H)e_i^λ, so H is diagonal. Conversely, if (1) holds, then there is a basis e_1, . . . , e_n for V such that for H ∈ h, e_i is an eigenvector of H with eigenvalue λ_i(H). Then λ_i is a linear functional, because the linear structure on h is defined pointwise:

(aH + H′)(e_i) = aHe_i + H′e_i = (aλ_i(H) + λ_i(H′))e_i = λ_i(aH + H′)e_i

(the last equality by definition). Moreover V = ⊕_i ⟨e_i⟩ = ⊕_λ ⊕_{λ_i = λ} ⟨e_i⟩ = ⊕_λ V_λ.

    Moreover V = iei = i= ei = V.(1) trivially implies both (3) and (4) (well, (3) requires the observation

    that the diagonal matrices form an abelian Lie algebra).We show that (4) implies (2). Each element H h decomposes V into

    eigenspaces Wi,i. If vi Wi, then GHvi = i

    j(Gvi)j while HGvi =j j(Gvi)j. Since the different Wi, Wj have null intersection, these two are

    equal iff (Gvi)j = 0 for i = j. So G preserves the eigenspaces, too. Soif H1, . . . , H n is a basis of semisimple elements for h, then we successivelyintersect their eigenspaces to obtain V = V where : {H1, . . . , H n} is

    the eigenvalue to Hi on V. Then if H h, write H =

    ciHi. If v V,then Hv =

    ciHiv =

    ci(Hi)v so that extending by linearity means itassigns a correct eigenvalue to H.

    In order to approach 4, we will make use of a very important fact whichwill be central to our analysis of semisimple Lie algebras later on:

Lemma 17 (Fundamental Calculation). Let g be a Lie algebra and h a subalgebra such that ad(h) is simultaneously diagonalizable, with root-space decomposition g = ⊕_α g_α. If X is a root vector, then ad(X) respects the decomposition into the g_α. More precisely, ad(g_α) maps g_β → g_{α+β} for all roots α, β.

Proof. Let H ∈ h, X ∈ g_α, Y ∈ g_β. Then

[H, [X, Y]] = [X, [H, Y]] − [Y, [H, X]] = β(H)[X, Y] − α(H)[Y, X] = (α + β)(H)[X, Y]
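The fundamental calculation is concrete in sl3, where the diagonal matrices form a Cartan subalgebra and the matrix units E_ij are root vectors for the roots L_i − L_j (a numerical sketch of mine; the diagonal test element is arbitrary):

```python
import numpy as np

def br(A, B):
    return A @ B - B @ A

def E(i, j, n=3):
    M = np.zeros((n, n))
    M[i - 1, j - 1] = 1.0
    return M

H = np.diag([1., 2., -3.])   # a traceless diagonal matrix

a12 = br(H, E(1, 2))         # [H, E12] = (h1 - h2) E12
a23 = br(H, E(2, 3))         # [H, E23] = (h2 - h3) E23
Z = br(E(1, 2), E(2, 3))     # root vectors bracket into the sum root:
a13 = br(H, Z)               # Z = E13, of root (L1 - L2) + (L2 - L3)
```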


Proof of equivalence with 4. We show that (4) implies (2). Fix a single H ∈ h: the aim is to show that H commutes with h. Simply because H itself is diagonalizable on g (in fact, we only need it to be diagonalizable on h), we can decompose g into eigenspaces g_λ (with eigenvalue λ) of ad H: g = ⊕_λ g_λ. So for X ∈ g (or X ∈ h) we can write X = Σ_λ X_λ, with X_λ ∈ g_λ. By the fundamental calculation, ad(X_λ) maps g_μ → g_{μ+λ}. Since there are only a finite number of eigenspaces, it follows that X_λ is nilpotent for λ ≠ 0.

Now, I claim that each component X_λ of X is actually in h. This is true because the projection maps g → g_λ with respect to the eigenspace decomposition are polynomials in ad(H). So they map h → h. So if X ∈ h, then each component X_λ is in h. But as we showed by the fundamental calculation, X_λ is nilpotent if λ ≠ 0. So for λ ≠ 0, X_λ is both semisimple and nilpotent, and hence zero. So X ∈ g_0, i.e. X commutes with H. This holds for all H ∈ h, so h is actually abelian, and (2) follows from (3).

Definition 18 (Cartan Subalgebra). Let g be a Lie algebra. A Cartan subalgebra is a subalgebra h ⊆ g such that ad(h) is a maximal simultaneously diagonalizable subalgebra.

Obviously a Lie algebra has a Cartan subalgebra iff ad(g) does not consist entirely of nilpotent endomorphisms.

Theorem 19. Let g be a semisimple Lie algebra and h ⊆ g a Cartan subalgebra. Then h is a maximal abelian subalgebra of g.

Proof. Let h ⊆ g be a Cartan subalgebra. By Proposition 16, we can simultaneously diagonalize ad(h) and decompose g = ⊕_α g_α, where the α are the roots.

By definition, g_0 consists of precisely those elements of g which commute with h; any abelian extension of h is contained in g_0. So it will suffice to prove that g_0 = h. Actually, this is also necessary, and in some sense is actually the point of the theorem: if X ∈ g_0, then X commutes with h, so that h + CX is an intermediate abelian subalgebra.

If X ∈ g_0 is semisimple, then X ∈ h. For h + CX is an extension of h which is semisimple on g; since h is a maximal subalgebra semisimple on g, we have X ∈ h.

If X ∈ g_0 is nilpotent, then X ∈ h^⊥ (by h^⊥ I mean the orthogonal complement of h in g_0, defined by the Killing form of the adjoint representation on all of g). This is because ad(X) ad(H) is again nilpotent for H ∈ h (ad(X) is nilpotent and commutes with ad(H), so (ad(X) ad(H))^n = ad(X)^n ad(H)^n is eventually zero), so that it is traceless.


Moreover, if X ∈ g_0, then the semisimple and nilpotent parts of X are also in g_0, since ad(X_s) and ad(X_n) are polynomials in ad(X), so they commute with anything X commutes with (freely identifying g with ad(g)). In light of this, the existence of the Jordan decomposition X = X_s + X_n shows that h + h^⊥ spans g_0. Since B is nondegenerate on g_0, it follows that B is nondegenerate on h.2 This is another way of saying that h ∩ h^⊥ = 0. So in fact, we have a direct sum decomposition g_0 = h ⊕ h^⊥, which is apparently orthogonal.

Since the Jordan decomposition writes an element as a sum of elements of h and h^⊥, it follows that the projections onto h and h^⊥ are actually given by taking the semisimple and nilpotent parts. Since the projection onto h^⊥ is surjective, this means that h^⊥ consists precisely of the nilpotents of g_0.

Hence ad_g(h^⊥) is a matrix Lie algebra that consists of nilpotent endomorphisms. By Engel's theorem, ad_g(h^⊥) can be simultaneously strictly upper-triangularized. So every product in ad_g(h^⊥) is nilpotent, and so the Killing form is uniformly zero on h^⊥. On the other hand, h^⊥ is orthogonal to everything else in g. Since the Killing form is nondegenerate, it follows that h^⊥ = 0, and so h = g_0.

    6 Roots of a semisimple Lie algebra

We know that (a) Lie algebras consisting of semisimple elements are abelian, and that (b) a semisimple Lie algebra g contains a subalgebra h of elements which are semisimple on g, and hence a maximal such one. Also, (c) for any subalgebra h of elements semisimple on g, we can decompose into root spaces g = ⊕_α g_α.

(d) If h is any subalgebra of elements semisimple on g, then we claim that the roots R = {α} span all of h^*.3 If they don't, then pick a nonzero h ∈ ker R.4 We

2 This is a basic fact about ambidextrous bilinear forms: if V is nondegenerate, then W ⊆ V is nondegenerate iff W + W^⊥ = V, iff W ∩ W^⊥ = 0. For W ∩ W^⊥ is precisely the isotropic subspace of W. And if W ∩ W^⊥ ≠ 0, then we can't have W + W^⊥ = V, because that would make V degenerate: the intersection W ∩ W^⊥ is orthogonal to everything in W + W^⊥ (you need ambidextrousness to conclude this).

3 These roots really are linear functionals, because [H + H′, X] = [H, X] + [H′, X] = α(H)X + α(H′)X for X ∈ g_α.

4 Linear algebra fact: let V be a finite-dimensional vector space. Then W ⊆ V^* is uniquely determined by ker W, by which I mean the intersection of the kernels of all elements of W. This is a special case of the Nullstellensatz... Here's an algorithmic way


have [h, X] = Σ_α [h, X_α] = Σ_α α(h)X_α = 0 for all X ∈ g. So h generates a 1-dimensional abelian ideal of g. This contradicts the semisimplicity of g.

(e) Fundamental calculation: g_α maps g_β → g_{α+β}.

Corollary 20 ( (f) ). The decomposition g = ⊕_α (g_α ⊕ g_{−α}) is orthogonal with respect to the Killing form.

Proof. If X ∈ g_α, Y ∈ g_β, then ad(X) ad(Y) maps g_γ → g_{γ+α+β}. So if α + β ≠ 0, then ad(X) ad(Y) is nilpotent, and hence B(X, Y) = 0.

(g) In particular, if g_α ≠ 0, then g_{−α} ≠ 0. This follows from (f): because g is semisimple, B is nondegenerate, and g_α's only hope of not being orthogonal to everything is g_{−α}. A semisimple Lie algebra is a club where every root vector must find a nonorthogonal partner to qualify for membership; scorned and spurned by most everybody, the lonely root vector's search for love does not go unfulfilled, and she finds her perfect match in her mirror image.

(h) Also, the Killing form is nondegenerate on g0 = h. Since the Killing form is nondegenerate on h, we can make use of the induced isomorphism between h and h∗.

All of these facts were actually used in our proof that h is Cartan. Now let's do something new. It is clear that if g = gl(V) and you have a maximal subalgebra of diagonal elements, then they should determine a complete basis of weight vectors: otherwise you could toss in a simultaneously diagonal endomorphism which has distinct eigenvalues on one of the larger weight spaces. We will show that this is also true of a general semisimple Lie algebra in the adjoint representation.

Definition 21 ((i)). Let t_α ∈ h be the dual of α ∈ h∗, i.e. α = B(t_α, ·).

Note that for nonzero c ∈ C, the vector t_{cα} is defined by cα = B(t_{cα}, ·), so that t_{cα} = c t_α.

to do it (I would like to have a more conceptual way, but perhaps that's not in the cards after all... No, I still think it might be.): choose a basis for ∩ ker W; we will extend it to one on V. Let Φ_0 = ∅ ⊆ V∗, and inductively choose v_i ∈ ker Φ_{i−1} and φ_i ∈ W such that φ_i(v_i) ≠ 0, setting Φ_i = Φ_{i−1} ∪ {φ_i}. This continues until ker Φ = ∩ ker W. Then we can apply Gram-Schmidt: normalize so that φ_i(v_i) = 1. Set ψ_1 = φ_1; we have ψ_1(v_j) = δ_{1j}. Inductively suppose that ψ_i(v_j) = δ_{ij} for i < k. Then let ψ_k = φ_k − Σ_{i<k} φ_k(v_i) ψ_i.


(j) [X, X′] = B(X, X′) t_α, for X ∈ g_α and X′ ∈ g_{−α}. In particular, the commutator [g_α, g_{−α}] is the one-dimensional subalgebra C t_α. This is because the Killing form is associative: B([X, X′], H) = B(X, [X′, H]) = α(H)B(X, X′), as desired (and there must be X ∈ g_α, X′ ∈ g_{−α} with B(X, X′) ≠ 0, by nondegeneracy of the Killing form).
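As a sanity check on (j), here is a numeric sketch of mine (not in the original notes): realize sl2C by 2×2 matrices, compute the Killing form from ad matrices in the basis H, E, F, and verify [E, F] = B(E, F) t_α directly.

```python
# Numeric check of fact (j) in sl2(C), basis H, E, F with
# [H,E] = 2E, [H,F] = -2F, [E,F] = H.

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def bracket(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(len(A))]
            for i in range(len(A))]

H = [[1, 0], [0, -1]]
E = [[0, 1], [0, 0]]
F = [[0, 0], [1, 0]]
basis = [H, E, F]

def coords(M):
    # a traceless 2x2 matrix [[a, b], [c, -a]] is a*H + b*E + c*F
    return [M[0][0], M[0][1], M[1][0]]

def ad(X):
    # matrix of ad(X) on sl2: column j is coords([X, basis_j])
    cols = [coords(bracket(X, Bj)) for Bj in basis]
    return [[cols[j][i] for j in range(3)] for i in range(3)]

def killing(X, Y):
    # Killing form B(X, Y) = tr(ad(X) ad(Y))
    P = mul(ad(X), ad(Y))
    return P[0][0] + P[1][1] + P[2][2]

assert killing(H, H) == 8 and killing(E, F) == 4
# t_alpha is defined by B(t_alpha, h) = alpha(h) on h = C*H; since
# alpha(H) = 2 and B(H, H) = 8, we get t_alpha = H/4.
t_alpha = [[0.25, 0], [0, -0.25]]
lhs = bracket(E, F)                                    # = H
rhs = [[killing(E, F) * x for x in row] for row in t_alpha]
assert lhs == rhs                                      # [E, F] = B(E, F) t_alpha
```

The value t_α = H/4 is forced by the definition in (i), and the check confirms both sides of (j) agree on the standard root vectors.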

(k) α(t_α) ≠ 0 for any (nonzero) root α. The idea is to show that if α(t_α) = 0, then t_α = 0, which is absurd since t_α is the image of α under an isomorphism.

Suppose α(t_α) = 0. Pick X ∈ g_α, X′ ∈ g_{−α} with B(X, X′) = c ≠ 0. Then Fulton and Harris claim that the algebra s generated by X, X′, t_α is solvable. This is true: we have [t_α, X] = α(t_α)X = 0 and similarly [t_α, X′] = 0, while [X, X′] = c t_α by fact (j). Everything is projected onto C t_α, which is projected to zero (in fact, it's nilpotent but that doesn't matter), so ad_s(s) is solvable; since the kernel is abelian, s itself is also solvable.

Now look at the larger representation ad_g(s). Since c t_α = [X, X′] ∈ D s, it follows from Lie's theorem that ad_g(c t_α) is a nilpotent endomorphism of g. Since c t_α lies in h, it is also semisimple, so this implies that c t_α = 0, so that t_α = 0. But this is absurd since t_α is the image of α under an isomorphism.

(l) If X ∈ g_α ≠ 0, then there is a Y ∈ g_{−α} such that X, Y, t_α span a copy of sl2C, with X, Y, H = (2/α(t_α)) t_α playing the role of the standard X, Y, H. In light of the last two facts, this is just a matter of normalizing things: since [t_α, X] = α(t_α)X and by fact (k) α(t_α) ≠ 0, we can set H = (2/α(t_α)) t_α. Then pick Y′ such that B(X, Y′) = c ≠ 0. We can normalize Y = (2/(c α(t_α))) Y′. Then [X, Y] = B(X, Y) t_α = H. This yields an explicit isomorphism to sl2C using the same names as the standard basis elements of sl2.

Another way of putting the last two steps is to realize that in (k) we showed that the subalgebra spanned by g_α and g_{−α} is not solvable. Then we look at a list of 3-dimensional Lie algebras and see that the only nonsolvable one is sl2. Or rather, we realize that we basically just did a step in the classification of small Lie algebras: we were down to two possibilities (with [X, Y] zero or nonzero) and we excluded one because it was solvable, leaving the sl2 possibility.
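The normalization in (l) can be traced concretely in sl3C. This is a worked sketch of mine, not from the notes; it uses the standard fact that the Killing form of sl_nC is B(A, B) = 2n tr(AB), so B(A, B) = 6 tr(AB) here. Starting from X = E_12 for the root α = e1 − e2, the recipe produces an honest sl2-triple.

```python
from fractions import Fraction as Fr

def mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def brk(A, B):
    AB, BA = mul(A, B), mul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

def scal(c, A):
    return [[c * A[i][j] for j in range(3)] for i in range(3)]

def killing(A, B):
    # Killing form of sl3 is 6 tr(AB) (standard fact, assumed here)
    P = mul(A, B)
    return 6 * (P[0][0] + P[1][1] + P[2][2])

E12 = [[0, 1, 0], [0, 0, 0], [0, 0, 0]]
E21 = [[0, 0, 0], [1, 0, 0], [0, 0, 0]]
X = E12                                   # root vector for alpha = e1 - e2

# t_alpha: diagonal traceless matrix with B(t_alpha, diag(h)) = h1 - h2
t_alpha = scal(Fr(1, 6), [[1, 0, 0], [0, -1, 0], [0, 0, 0]])
a_t = Fr(1, 3)                            # alpha(t_alpha) = 1/6 - (-1/6)
H = scal(2 / a_t, t_alpha)                # = diag(1, -1, 0)
c = killing(X, E21)                       # = 6, nonzero as required in (l)
Y = scal(2 / (c * a_t), E21)              # normalized Y; here just E21

assert brk(H, X) == scal(2, X)            # [H, X] = 2X
assert brk(H, Y) == scal(-2, Y)           # [H, Y] = -2Y
assert brk(X, Y) == H                     # [X, Y] = H: an sl2-triple
```

Everything stays in exact rational arithmetic, so the three bracket identities are verified on the nose.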

(m) Moreover, this s = sl2C acts on m = C t_α ⊕ ⊕_{r∈C∗} g_{rα}. By the fundamental calculation we have ad(g_{±α}) : g_{rα} → g_{(r±1)α}, so we only need to check that we've included all of the g0 elements which are produced by commutators. This is true by (j).


(v) B(H_α, H_β) ∈ Q for all α, β. This is a straightforward calculation now:

B(H_α, H_β) = tr(ad(H_α) ad(H_β)) = Σ_{γ∈R} γ(H_α)γ(H_β).

Since each γ(H_α) and each γ(H_β) is rational, it follows that the whole sum is rational.

(w) The Q-vector space T spanned by the {t_α} is n-dimensional (where n is the complex dimension of h), and B is positive-definite on T. Choose a C-basis of h consisting of t_α's, say t_1, . . . , t_n. In this basis, B is rational (in fact, an integer matrix). Since each B(t_α, t_α) > 0 lies in Q, we can perform Gram-Schmidt over Q to obtain an orthogonal basis s_1, . . . , s_n. Then B(t_α, s_i) is rational for any α, i. Since t_α = Σ_i (B(t_α, s_i)/B(s_i, s_i)) s_i (it has the right product with any s_j), we see that the t_α are Q-linear combinations of the s_i, and hence of the t_i.
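Here is a sketch of step (w) in coordinates (my own example, not from the notes): for sl3, the duals of the simple roots α = e1 − e2 and β = e2 − e3 are the rational vectors t_α = (1/6, −1/6, 0) and t_β = (0, 1/6, −1/6), with the Killing form on the diagonal being B(u, v) = 6 Σ u_i v_i. Gram-Schmidt then runs entirely inside Q.

```python
from fractions import Fraction as F

# Killing form of sl3 restricted to the diagonal: B(u, v) = 6 * sum(u_i v_i)
def B(u, v):
    return 6 * sum(a * b for a, b in zip(u, v))

def gram_schmidt(vs):
    # plain Gram-Schmidt; every coefficient B(v,s)/B(s,s) stays rational
    out = []
    for v in vs:
        w = list(v)
        for s in out:
            c = B(v, s) / B(s, s)
            w = [wi - c * si for wi, si in zip(w, s)]
        out.append(w)
    return out

# duals t_alpha, t_beta of the simple roots alpha = e1-e2, beta = e2-e3
t_a = [F(1, 6), F(-1, 6), F(0)]
t_b = [F(0), F(1, 6), F(-1, 6)]
s1, s2 = gram_schmidt([t_a, t_b])
assert B(s1, s2) == 0                     # orthogonal, still rational vectors

# t_{alpha+beta} = t_a + t_b is recovered as a Q-combination of s1, s2
t_ab = [a + b for a, b in zip(t_a, t_b)]
coeffs = [B(t_ab, s) / B(s, s) for s in (s1, s2)]
assert [coeffs[0] * s1[i] + coeffs[1] * s2[i] for i in range(3)] == t_ab
```

The point of (w) is visible in the asserts: no step of the orthogonalization or of the final expansion ever leaves Q.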

    7 Root Systems

Let V be an n-dimensional real vector space equipped with an inner product B. A root system in (V, B) is a set R of vectors in V such that

    1. R spans V.

2. R is centrally symmetric, i.e. −R = R. Moreover, ±α are the only multiples of α in R.

    3. The automorphism group of R (i.e. the isometries of (V, B) which fixR set-wise) contains all reflections in elements of R (with respect toB).

4. If α, β ∈ R, then n_{αβ} ∈ Z, where n_{αβ} = 2B(α, β)/B(β, β).

It is a fact that the roots of a semisimple Lie algebra form a root system in the real span of the roots inside h∗, with the Killing form.
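For concreteness (my example, not in the text): the roots of sl3C, realized as the vectors e_i − e_j inside the plane x1 + x2 + x3 = 0 of R^3 with the ordinary dot product, satisfy all four axioms, and conditions 2-4 can be checked mechanically.

```python
from fractions import Fraction

# The six roots e_i - e_j of sl3, with B the ordinary dot product.
R = {(1, -1, 0), (-1, 1, 0), (0, 1, -1), (0, -1, 1), (1, 0, -1), (-1, 0, 1)}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def n(a, b):
    # the quantity of condition 4: n_ab = 2 B(a, b) / B(b, b)
    return Fraction(2 * dot(a, b), dot(b, b))

def reflect(a, b):
    # reflection of a in the hyperplane orthogonal to b
    return tuple(ai - n(a, b) * bi for ai, bi in zip(a, b))

# condition 1 (spanning the plane) holds since (1,-1,0), (0,1,-1) are
# linearly independent; conditions 2-4 below:
assert all(tuple(-x for x in a) in R for a in R)            # central symmetry
assert all(reflect(a, b) in R for a in R for b in R)        # reflections fix R
assert all(n(a, b).denominator == 1 for a in R for b in R)  # n_ab integral
```

This A2 example recurs below; exact rationals avoid any floating-point fuzz in the membership tests.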

We showed at the end of the previous section that the coroots span an n-dimensional real vector space V, and that the Killing form is positive definite on V; of course it is symmetric. The roots are centrally symmetric, as we saw way back, and moreover we also saw at the end that the integrality condition holds.


This leaves the reflections. Reflecting h∗ in a root β sends α ↦ α − (2B(α, β)/B(β, β)) β. The fixed hyperplane is given by those α such that B(α, β) = 0. Since α(H_β) = (2/β(t_β)) α(t_β) = 2B(α, β)/B(β, β), this is precisely the condition that α be a zero-weight vector for the sl2 corresponding to β.

So under a reflection in β, each β-string is flipped around the point where its β-weights are zero (whether this point is a root or not). This is precisely the one symmetry of the β-string roots! Since every root lies on a β-string, this means that reflection in β is actually an automorphism of the set of roots.

If S is a set in a vector space over an infinite field of larger cardinality than S, we can choose a vector whose orthogonal hyperplane meets the additive subgroup generated by S only at the origin. Take that hyperplane, and call the points on the same side as the vector positive, and those on the other side negative. Then we obtain a partial ordering on the vector space, given by v > w if v − w is positive. One important fact is that there will always be a largest element of any finite subset of S under this partial ordering.

So pick such an incommensurate vector; call the set of positive roots R+ and the set of negative roots R−. Call a positive root simple if it can't be written as the sum of two other positive roots. We will find an equivalence between the root system and the simple roots.
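Here is a sketch of this positive/simple business for the roots e_i − e_j of sl3 (my running example, not from the notes), with a hypothetical choice of functional ell; any ell that is nonzero on every root works.

```python
# Positive and simple roots for the root system of sl3, cut out by a
# generic linear functional ell (chosen so that no root lies on ker ell).
R = [(1, -1, 0), (-1, 1, 0), (0, 1, -1), (0, -1, 1), (1, 0, -1), (-1, 0, 1)]

def ell(v):
    return 3 * v[0] + 2 * v[1] + v[2]   # nonzero on every root here

assert all(ell(a) != 0 for a in R)
pos = [a for a in R if ell(a) > 0]

def add(u, v):
    return tuple(x + y for x, y in zip(u, v))

# simple = positive but not a sum of two positive roots
simple = [a for a in pos if not any(add(b, c) == a for b in pos for c in pos)]

assert set(pos) == {(1, -1, 0), (0, 1, -1), (1, 0, -1)}
assert set(simple) == {(1, -1, 0), (0, 1, -1)}
# the remaining positive root is a sum: (1,0,-1) = (1,-1,0) + (0,1,-1)
```

Three of the six roots come out positive, and exactly two of those are simple, matching fact (iii) below: the third positive root is the sum of the two simple ones.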

(i) Let θ(α, β) be the angle between α, β. Then cos² θ(α, β) = (1/4) n_{αβ} n_{βα}. Since this number is between 0 and 1 (inclusive), it follows from condition 4 that cos² θ(α, β) ∈ {0, 1/4, 1/2, 3/4, 1}. Each possible angle allows only a few possible values of n_{αβ} and n_{βα}, since they have to be integers multiplying to an integer between 0 and 4.
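The finite list of possibilities in (i) can be enumerated by brute force. A quick sketch of mine: n_{αβ} and n_{βα} are integers with the same sign whose product is 4 cos² θ, so the products and the resulting angles fall out mechanically.

```python
import math

# n_ab and n_ba are integers with the same sign, and their product is
# 4 cos^2(theta), an integer in {0, 1, 2, 3} (4 would force beta = +-alpha).
products = set()
for nab in range(-3, 4):
    for nba in range(-3, 4):
        p = nab * nba
        if 0 <= p <= 3 and (nab == 0) == (nba == 0):
            products.add(p)
assert products == {0, 1, 2, 3}

# each product p gives cos(theta) = +- sqrt(p)/2
angles = sorted({round(math.degrees(math.acos(s * math.sqrt(p) / 2)))
                 for p in products for s in (1, -1)})
assert angles == [30, 45, 60, 90, 120, 135, 150]
```

So the only possible angles between roots are 90°, 60°/120°, 45°/135°, and 30°/150°, which is exactly what the Dynkin diagram bookkeeping below exploits.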

(ii) Let α, β be roots, α ≠ ±β. Then if B(α, β) > 0 we have α − β ∈ R, while if B(α, β) < 0 we have α + β ∈ R. The two statements are equivalent, swapping β for −β. So let's show the former: if the angle between two roots is acute, then their difference is again a root. To show this, we reflect α in β, obtaining α − n_{αβ} β. Also consider reflecting β in α, obtaining β − n_{βα} α. Between these two cases and using central symmetry, it suffices that either n_{αβ} = 1 or n_{βα} = 1. Since the product of the two is an integer between 1 and 4, the only way that neither of them is 1 is if they are both 2. In that case we have B(α, α) = B(β, β) = B(α, β). This implies that α = β, contrary to hypothesis.

(iii) Every positive root can be written as a sum of simple roots. For if it is not itself simple, then it can be written as the sum of two positive roots.


These roots are less positive, so by induction they in fact can be written as the sum of simple roots, and then we aggregate the sum. In particular, the simple roots span V.

(iv) The angle between two simple roots is not acute. For if B(α, β) > 0, then by (ii) α − β is a root, so either α − β or β − α is positive; if it is the former, α = (α − β) + β contradicts simplicity of α, and similarly for the latter.

(v) The simple roots of a root system are linearly independent. This is a general linear-algebra-with-inner-product fact: if you have a bunch of vectors, none of which form an acute angle with another, all on the same side of a hyperplane, then they are linearly independent. For if Σ_{i∈I} a_i α_i = 0, then we can re-index to write v = Σ_{i∈J} a_i α_i = Σ_{i∈I∖J} (−a_i) α_i, where the coefficients on both sides are now all nonnegative. Assuming this relation used a minimal number of terms, we now have fewer terms on each side, and so v ≠ 0 (and the terms were not all positive to begin with, because then the sum would be even more positive and hence nonzero). Since our inner product is positive definite, we have

0 < (v, v) = Σ_{i∈J, j∈I∖J} |a_i a_j| (α_i, α_j).

This is a contradiction, because each term in the sum is nonpositive.

(vi) Reflections in the simple roots generate the Weyl group, and the simple roots generate the root system under the Weyl group. I will hold off on proving this.

(vii) Any two choices of division into positive and negative roots are isomorphic, in the sense that there is an automorphism of R carrying the positive roots of one to the positive roots of the other.

    8 Dynkin Diagrams

The Dynkin diagram of a system of roots is given by the directed weighted graph with a node for each simple root and n_{αβ} n_{βα} edges between nodes α, β, pointing from the larger root to the smaller one; the Coxeter graph is the undirected version. An n-node Coxeter graph is called admissible if there exist n independent unit vectors e_1, . . . , e_n ∈ R^n with angles as indicated by the edges of the graph (with cosines 0, −1/2, −√2/2, or −√3/2: all obtuse in particular); this is certainly a necessary condition in order for there to be normalizations which form the simple roots of a root system.


Note that a subdiagram of an admissible graph is also admissible, by looking at subspaces (since the simple roots are linearly independent). There are between 0 and 3 connections between any two nodes (an integer number). It would be between 0 and 4, but the case of 4 just says that the two nodes coincide.

Between any m nodes there are at most m − 1 connections, not counting multiplicity. For if S consists of m elements of {e_1, . . . , e_n}, then 0