
Linear algebra basics

Teo Banica

Linear algebra, Matrix groups

08/20

Foreword

These are slides written in the Fall of 2020, on:

1. Linear algebra

2. Matrix groups

Presentations are available at my YouTube channel.

Contents 1/2

Introduction to linear algebra

1. Real matrices and their properties

2. The determinant of real matrices

3. Complex matrices and diagonalization

4. Linear algebra and calculus questions

5. Infinite matrices and spectral theory

6. Special matrices and matrix tricks

Contents 2/2

Introduction to matrix groups

1. Groups of unitary matrices

2. Symmetry and reflection groups

3. Symmetric groups and Poisson laws

4. Complex reflections and Bessel laws

5. Representations of compact groups

6. Probability on compact groups

Real matrices and their properties

Teo Banica

"Introduction to linear algebra", 1/6

08/20

Rotations 1/3

Problem: what’s the formula of the rotation of angle t?

Rotations 2/3

The points in the plane $\mathbb R^2$ can be represented as vectors $\binom{x}{y}$. The $2\times2$ matrices "act" on such vectors, as follows:

$$\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}ax+by\\ cx+dy\end{pmatrix}$$

Many simple transformations (symmetries, projections, ...) can be written in this form. What about the rotation of angle $t$?

Rotations 3/3

A quick picture shows that we must have:

$$\begin{pmatrix}*&*\\ *&*\end{pmatrix}\begin{pmatrix}1\\ 0\end{pmatrix}=\begin{pmatrix}\cos t\\ \sin t\end{pmatrix}$$

Also, by paying attention to positives and negatives:

$$\begin{pmatrix}*&*\\ *&*\end{pmatrix}\begin{pmatrix}0\\ 1\end{pmatrix}=\begin{pmatrix}-\sin t\\ \cos t\end{pmatrix}$$

Thus, the matrix of our rotation can only be:

$$R_t=\begin{pmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{pmatrix}$$

By "linear algebra", this is the correct answer.
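As a quick numerical sanity check, here is a minimal NumPy sketch, with an arbitrary angle, verifying that $R_t$ acts on the basis vectors as claimed:

```python
import numpy as np

t = 0.7  # an arbitrary angle
R = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])

# R sends (1,0) to (cos t, sin t), and (0,1) to (-sin t, cos t)
assert np.allclose(R @ [1, 0], [np.cos(t), np.sin(t)])
assert np.allclose(R @ [0, 1], [-np.sin(t), np.cos(t)])
```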

Linear maps 1/4

Theorem. The maps $f:\mathbb R^2\to\mathbb R^2$ which are linear, in the sense that they map lines through 0 to lines through 0, are:

$$f\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}ax+by\\ cx+dy\end{pmatrix}$$

Remark. If we make the multiplication convention

$$\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}ax+by\\ cx+dy\end{pmatrix}$$

the theorem says $f(v)=Av$, with $A$ being a $2\times2$ matrix.

Linear maps 2/4

Examples. The identity and null maps are given by:

$$\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}x\\ y\end{pmatrix}\ ,\quad\begin{pmatrix}0&0\\ 0&0\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}$$

The projections on the horizontal and vertical axes are given by:

$$\begin{pmatrix}1&0\\ 0&0\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}x\\ 0\end{pmatrix}\ ,\quad\begin{pmatrix}0&0\\ 0&1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}0\\ y\end{pmatrix}$$

The symmetry with respect to the $x=y$ diagonal is given by:

$$\begin{pmatrix}0&1\\ 1&0\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}y\\ x\end{pmatrix}$$

We have as well the rotation of angle $t\in\mathbb R$, studied before.

Linear maps 3/4

Theorem. The maps $f:\mathbb R^N\to\mathbb R^N$ which are linear, in the sense that they map lines through 0 to lines through 0, are:

$$f\begin{pmatrix}x_1\\ \vdots\\ x_N\end{pmatrix}=\begin{pmatrix}a_{11}x_1+\ldots+a_{1N}x_N\\ \vdots\\ a_{N1}x_1+\ldots+a_{NN}x_N\end{pmatrix}$$

Remark. With the matrix multiplication convention

$$\begin{pmatrix}a_{11}&\ldots&a_{1N}\\ \vdots&&\vdots\\ a_{N1}&\ldots&a_{NN}\end{pmatrix}\begin{pmatrix}x_1\\ \vdots\\ x_N\end{pmatrix}=\begin{pmatrix}a_{11}x_1+\ldots+a_{1N}x_N\\ \vdots\\ a_{N1}x_1+\ldots+a_{NN}x_N\end{pmatrix}$$

the theorem says $f(v)=Av$, with $A$ being an $N\times N$ matrix.

Linear maps 4/4

Example. Consider the all-1 matrix. This acts as follows:

$$\begin{pmatrix}1&\ldots&1\\ \vdots&\ddots&\vdots\\ 1&\ldots&1\end{pmatrix}\begin{pmatrix}x_1\\ \vdots\\ x_N\end{pmatrix}=\begin{pmatrix}x_1+\ldots+x_N\\ \vdots\\ x_1+\ldots+x_N\end{pmatrix}$$

But this formula can be written as follows:

$$\frac1N\begin{pmatrix}1&\ldots&1\\ \vdots&\ddots&\vdots\\ 1&\ldots&1\end{pmatrix}\begin{pmatrix}x_1\\ \vdots\\ x_N\end{pmatrix}=\frac{x_1+\ldots+x_N}N\begin{pmatrix}1\\ \vdots\\ 1\end{pmatrix}$$

And this latter map is the projection on the all-1 vector.

Theory 1/4

Definition. We can multiply $M\times N$ matrices with $N\times K$ matrices,

$$\begin{pmatrix}a_{11}&\ldots&a_{1N}\\ \vdots&&\vdots\\ a_{M1}&\ldots&a_{MN}\end{pmatrix}\begin{pmatrix}b_{11}&\ldots&b_{1K}\\ \vdots&&\vdots\\ b_{N1}&\ldots&b_{NK}\end{pmatrix}$$

the product being an $M\times K$ matrix, given by the formula

$$\begin{pmatrix}a_{11}b_{11}+\ldots+a_{1N}b_{N1}&\ldots&a_{11}b_{1K}+\ldots+a_{1N}b_{NK}\\ \vdots&&\vdots\\ a_{M1}b_{11}+\ldots+a_{MN}b_{N1}&\ldots&a_{M1}b_{1K}+\ldots+a_{MN}b_{NK}\end{pmatrix}$$

obtained via the rule "multiply rows by columns".

Theory 2/4

Better definition. The matrix multiplication is given by

$$(AB)_{ij}=\sum_kA_{ik}B_{kj}$$

with $A_{ij}$ being the entry on the $i$-th row and $j$-th column.

Theorem. The linear maps $f:\mathbb R^N\to\mathbb R^M$ are those of the form

$$f_A(v)=Av$$

with $A$ being an $M\times N$ matrix.

Remark. Size check: $(M\times1)=(M\times N)(N\times1)$, ok.
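As an illustration, here is a minimal NumPy sketch of this multiplication rule, written naively from the formula $(AB)_{ij}=\sum_kA_{ik}B_{kj}$, and checked against the library's built-in product:

```python
import numpy as np

def matmul(A, B):
    """Product of an M x N matrix with an N x K matrix, entry by entry:
    (AB)_ij = sum_k A_ik B_kj -- the "rows times columns" rule."""
    M, N = A.shape
    N2, K = B.shape
    assert N == N2, "inner dimensions must match"
    C = np.zeros((M, K))
    for i in range(M):
        for j in range(K):
            C[i, j] = sum(A[i, k] * B[k, j] for k in range(N))
    return C

A = np.random.rand(3, 4)
B = np.random.rand(4, 2)
assert np.allclose(matmul(A, B), A @ B)
```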

Theory 3/4

Theorem. With the above convention $f_A(v)=Av$, we have

$$f_Af_B=f_{AB}$$

that is, "the product of matrices corresponds to the composition of maps".

Theorem. A linear map $f:\mathbb R^N\to\mathbb R^N$ is invertible precisely when the matrix $A\in M_N(\mathbb R)$ which produces it is invertible, and in this case we have:

$$(f_A)^{-1}=f_{A^{-1}}$$

Theory 4/4

Theorem. The inverses of the $2\times2$ matrices are given by:

$$\begin{pmatrix}a&b\\ c&d\end{pmatrix}^{-1}=\frac1{ad-bc}\begin{pmatrix}d&-b\\ -c&a\end{pmatrix}$$

Proof. When $ad=bc$ the columns are proportional, so the matrix cannot be invertible. When $ad-bc\neq0$, let us look for the inverse in the following form:

$$\begin{pmatrix}a&b\\ c&d\end{pmatrix}^{-1}=\frac1{ad-bc}\begin{pmatrix}*&*\\ *&*\end{pmatrix}$$

We must solve the following equations:

$$\begin{pmatrix}a&b\\ c&d\end{pmatrix}\begin{pmatrix}*&*\\ *&*\end{pmatrix}=\begin{pmatrix}ad-bc&0\\ 0&ad-bc\end{pmatrix}$$

But this leads to the formula in the statement.
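A minimal NumPy sketch of this inverse formula, on an arbitrary invertible example:

```python
import numpy as np

def inv2x2(A):
    """Inverse of a 2x2 matrix, via the (ad - bc) formula from the slide."""
    (a, b), (c, d) = A
    det = a * d - b * c
    assert det != 0, "matrix is not invertible"
    return np.array([[d, -b], [-c, a]]) / det

A = np.array([[1.0, 2.0], [3.0, 4.0]])
assert np.allclose(inv2x2(A) @ A, np.eye(2))
```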

Eigenvectors 1/4

Definition. Let A ∈ MN(R) be a square matrix, and assume that Amultiplies by λ ∈ R in the direction of a vector v ∈ RN :

Av = λv

In this case, we say that:

(1) v ∈ RN is an eigenvector of A.

(2) λ ∈ R is the corresponding eigenvalue.

Eigenvectors 2/4

Examples. The identity has all vectors as eigenvectors, with $\lambda=1$:

$$\begin{pmatrix}1&0\\ 0&1\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}x\\ y\end{pmatrix}$$

The same goes for the null matrix, with $\lambda=0$ this time:

$$\begin{pmatrix}0&0\\ 0&0\end{pmatrix}\begin{pmatrix}x\\ y\end{pmatrix}=\begin{pmatrix}0\\ 0\end{pmatrix}$$

For the projection on the horizontal axis, $P\binom{x}{y}=\binom{x}{0}$, we have:

$$Pv=\lambda v\iff v=\begin{pmatrix}0\\ y\end{pmatrix},\ \lambda=0\quad\text{or}\quad v=\begin{pmatrix}x\\ 0\end{pmatrix},\ \lambda=1$$

A similar result holds for the projection on the vertical axis.

Eigenvectors 3/4

More examples. For the symmetry $S\binom{x}{y}=\binom{y}{x}$, we have:

$$Sv=\lambda v\iff v=\begin{pmatrix}x\\ x\end{pmatrix},\ \lambda=1\quad\text{or}\quad v=\begin{pmatrix}x\\ -x\end{pmatrix},\ \lambda=-1$$

For the transformation $T\binom{x}{y}=\begin{pmatrix}0&1\\ 0&0\end{pmatrix}\binom{x}{y}=\binom{y}{0}$ we have:

$$Tv=\lambda v\iff v=\begin{pmatrix}x\\ 0\end{pmatrix},\ \lambda=0$$

For the rotation of angle $t\neq0,\pi$, we must have $v=0$: there are no eigenvectors over $\mathbb R$.

Eigenvectors 4/4

Definition. We say that a matrix $A\in M_N(\mathbb R)$ is diagonalizable if it has $N$ eigenvectors $v_1,\ldots,v_N$ which form a basis of $\mathbb R^N$.

Remark. When $A$ is diagonalizable, in that basis we can write:

$$A=\begin{pmatrix}\lambda_1\\ &\ddots\\ &&\lambda_N\end{pmatrix}$$

This means that we have $A=PDP^{-1}$, with $D$ diagonal.

Problems. Which matrices are diagonalizable? And, how to diagonalize them?

The determinant of real matrices

Teo Banica

"Introduction to linear algebra", 2/6

08/20

Definition 1/3

Definition. Associated to any vectors $v_1,\ldots,v_N\in\mathbb R^N$ is the volume

$$\det{}^+(v_1,\ldots,v_N)={\rm vol}\langle v_1,\ldots,v_N\rangle$$

of the parallelepiped made by these vectors.

Remark. This notion is useful, for instance because $v_1,\ldots,v_N$ are linearly dependent precisely when $\det^+(v_1,\ldots,v_N)=0$.

Definition 2/3

Theorem. In 2 dimensions we have the formula

$$\det{}^+\begin{pmatrix}a&b\\ c&d\end{pmatrix}=|ad-bc|$$

valid for any two vectors $\binom{a}{c},\binom{b}{d}\in\mathbb R^2$.

Proof. We must show that the area of the parallelogram formed by the vectors $\binom{a}{c},\binom{b}{d}$ equals the quantity $|ad-bc|$. But this latter quantity is a difference of areas of two rectangles, and the verification can be done in "puzzle" style.

Comment. This is nice, but with $ad-bc$ itself as "answer", which is linear in $a,b,c,d$, it would be even nicer.

Definition 3/3

Convention. A system of vectors $v_1,\ldots,v_N\in\mathbb R^N$ is called:

(1) Oriented (+), if one can pass continuously from the standard basis to it.

(2) Unoriented (−), otherwise.

Definition. Associated to $v_1,\ldots,v_N\in\mathbb R^N$ is the signed volume

$$\det(v_1,\ldots,v_N)={\rm vol}^\pm\langle v_1,\ldots,v_N\rangle$$

of the parallelepiped made by these vectors.

Remark. We have $\det\begin{pmatrix}a&b\\ c&d\end{pmatrix}=ad-bc$, which is nice.

Properties 1/4

Notation. Given a matrix $A\in M_N(\mathbb R)$, we write $\det A$, or just $|A|$, for the determinant of the system of column vectors of $A$.

Notation. Given a linear map, written as $f(v)=Av$, we call the number $\det A$ the "inflation coefficient" of $f$.

Remark. The inflation coefficient of $f$ is the signed volume of the image $f(\square_N)$ of the unit cube $\square_N\subset\mathbb R^N$.

Properties 2/4

Theorem. The determinant $\det A$ of the matrices $A\in M_N(\mathbb R)$ has the following properties:

(1) It is a linear function of each of the columns of $A$.

(2) We have $\det(AB)=\det A\cdot\det B$.

(3) We have $\det(AB)=\det(BA)$.

Proof. (1) By doing some geometry, we obtain indeed:

$$\det(u+v,\{w_i\})=\det(u,\{w_i\})+\det(v,\{w_i\})$$

$$\det(\lambda u,\{w_i\})=\lambda\det(u,\{w_i\})$$

(2) This follows from $f_{AB}=f_Af_B$, by looking at "inflation".

(3) Follows from (2), both quantities being $\det A\cdot\det B$.

Properties 3/4

Theorem. Assuming that a matrix $A\in M_N(\mathbb R)$ is diagonalizable, with eigenvalues $\lambda_1,\ldots,\lambda_N$, we have:

$$\det A=\lambda_1\ldots\lambda_N$$

Proof. This is clear from the "inflation" viewpoint, because in the basis formed by the eigenvectors $v_1,\ldots,v_N$, we have:

$$f_A(v_i)=\lambda_iv_i$$

Alternatively, $A=PDP^{-1}$ with $D={\rm diag}(\lambda_1,\ldots,\lambda_N)$, so

$$\det(A)=\det(PDP^{-1})=\det(DP^{-1}\cdot P)=\det(D)$$

and by linearity $\det(D)=\lambda_1\ldots\lambda_N\cdot\det(1_N)=\lambda_1\ldots\lambda_N$.

Properties 4/4

Theorem. We have the following formula, for any $\lambda\in\mathbb R$:

$$\det(u,v,\{w_i\}_i)=\det(u-\lambda v,v,\{w_i\}_i)$$

Theorem. For an upper triangular matrix we have

$$\begin{vmatrix}\lambda_1&&*\\ &\ddots&\\ &&\lambda_N\end{vmatrix}=\lambda_1\ldots\lambda_N$$

and a similar result holds for the lower triangular matrices.

Proofs. The first theorem follows from linearity, because we have $\det(v,v,\{w_i\}_i)=0$, and the second theorem follows from it.

Examples 1/4

Theorem. In 2 dimensions, the determinant is given by:

$$\begin{vmatrix}a&b\\ c&d\end{vmatrix}=ad-bc$$

Proof. This is something that we already know, but that we can recover by using the general theory developed above, assuming $d\neq0$:

$$\begin{vmatrix}a&b\\ c&d\end{vmatrix}=\begin{vmatrix}a-b\cdot c/d&b\\ c-d\cdot c/d&d\end{vmatrix}=\begin{vmatrix}a-bc/d&b\\ 0&d\end{vmatrix}=(a-bc/d)d$$

Thus, we obtain the formula in the statement.

Examples 2/4

Theorem. In 3 dimensions, the determinant is given by

$$\begin{vmatrix}a&b&c\\ d&e&f\\ g&h&i\end{vmatrix}=aei+bfg+cdh-ceg-bdi-afh$$

and this can be memorized by using Sarrus' triangle method.

Proof. This follows a bit as in 2 dimensions, by using the "Gauss method". We will be back later with a more conceptual proof.

Examples 3/4

Theorem. The determinant of a projection is always 0, unless the projection is the identity, in which case the determinant is 1.

Proof. This is clear with the "inflation" viewpoint. Alternatively, $P$ is diagonalizable, with eigenvalues 1 on the image, and 0 outside:

$$P\sim\begin{pmatrix}1\\ &\ddots\\ &&0\end{pmatrix}$$

By making the product we obtain $\det P=1\ldots1\cdot0\ldots0$, with at least one 0 in the case $P\neq1_N$, as claimed.

Examples 4/4

Example. For the symmetry with respect to $x=y$, we have:

$$\begin{vmatrix}0&1\\ 1&0\end{vmatrix}=0\cdot0-1\cdot1=-1$$

Example. For the rotation of angle $t\in\mathbb R$, we have:

$$\begin{vmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{vmatrix}=\cos^2t+\sin^2t=1$$

These formulae follow as well without computations, by "inflation".

Remark. The "basic" matrices tend to have determinant $-1,0,1$.

Theory 1/4

Theorem. The determinant can be fully computed by using the Gauss method, namely:

(1) Multiplying rows by scalars.

(2) Subtracting rows from one another.

Theorem. The determinant function

$$\det:\mathbb R^N\times\ldots\times\mathbb R^N\to\mathbb R$$

is multilinear, alternating and unital, and is the unique function with these properties.

Proofs. The first theorem is something that we already know, and the second theorem follows from it, by uniqueness.

Theory 2/4

Definition. A permutation of $\{1,\ldots,N\}$ is a bijection, as follows:

$$\sigma:\{1,\ldots,N\}\to\{1,\ldots,N\}$$

The set of such permutations is denoted $S_N$.

Theorem. There are $N!=1\cdot2\cdot3\cdots N$ such permutations.

Proof. We have $N$ choices for $\sigma(1)$, then $N-1$ choices for $\sigma(2)$, and so on, up to 1 choice for $\sigma(N)$.

Definition. The signature of a permutation is $\varepsilon(\sigma)=(-1)^c\in\{\pm1\}$, where $c$ is the number of inversions, i.e. of pairs $i<j$ with $\sigma(i)>\sigma(j)$.

Theory 3/4

Theorem. The determinant is given by the formula

$$\det A=\sum_{\sigma\in S_N}\varepsilon(\sigma)A_{1\sigma(1)}\ldots A_{N\sigma(N)}$$

with the signature function being the one introduced above.

Proof. This follows either by using the Gauss method, or by using the abstract characterization of the determinant.

Remark. At $N=3$ we obtain in this way the Sarrus formula.
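As an illustration, here is a minimal Python sketch computing the determinant from this very formula, inversions and all, checked against NumPy (feasible only for small $N$, the sum having $N!$ terms):

```python
import numpy as np
from itertools import permutations

def sign(sigma):
    """Signature (-1)^c, with c the number of inversions i < j, sigma(i) > sigma(j)."""
    c = sum(1 for i in range(len(sigma)) for j in range(i + 1, len(sigma))
            if sigma[i] > sigma[j])
    return (-1) ** c

def det(A):
    """det A = sum over sigma of eps(sigma) * A[0,sigma(0)] * ... * A[N-1,sigma(N-1)]."""
    N = A.shape[0]
    return sum(sign(s) * np.prod([A[i, s[i]] for i in range(N)])
               for s in permutations(range(N)))

A = np.random.rand(4, 4)
assert np.isclose(det(A), np.linalg.det(A))
```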

Theory 4/4

Theorem. The eigenvalues of a matrix $A\in M_N(\mathbb R)$ must satisfy

$$P_A(\lambda)=0$$

where $P_A=\det(A-\lambda1_N)$ is the characteristic polynomial.

Proof. Given a vector $v\in\mathbb R^N$ and a number $\lambda\in\mathbb R$, we have:

$$Av=\lambda v\iff(A-\lambda1_N)v=0$$

But this latter equation has nonzero solutions precisely when the matrix

$$B=A-\lambda1_N$$

is not invertible, and so when $\det B=0$.

Complex matrices and diagonalization

Teo Banica

"Introduction to linear algebra", 3/6

08/20

Complex numbers 1/3

The complex numbers are $z=a+ib$, with $i^2=-1$.

They can be represented in the plane, with $z$ being the vector $\binom{a}{b}$.

We have $z=re^{it}$, with $r=\sqrt{a^2+b^2}$, and $\tan t=b/a$.

The equation $x^2=-1$ has two solutions, $x=\pm i$. In fact, over $\mathbb C$, any equation $P(x)=0$ has $N=\deg P$ solutions, counted with multiplicities.

Also, complex numbers are important in quantum physics.

Complex numbers 2/3

Consider the rotation of angle $t\in\mathbb R$:

$$R_t=\begin{pmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{pmatrix}$$

This rotation has 2 complex eigenvectors (!), because:

$$R_t\begin{pmatrix}1\\ i\end{pmatrix}=\begin{pmatrix}\cos t-i\sin t\\ \sin t+i\cos t\end{pmatrix}=e^{-it}\begin{pmatrix}1\\ i\end{pmatrix}$$

$$R_t\begin{pmatrix}1\\ -i\end{pmatrix}=\begin{pmatrix}\cos t+i\sin t\\ \sin t-i\cos t\end{pmatrix}=e^{it}\begin{pmatrix}1\\ -i\end{pmatrix}$$

Thus, good news, $R_t$ is diagonalizable over $\mathbb C$.

Complex numbers 3/3

More magic. When identifying $\mathbb R^2$ with the complex plane $\mathbb C$, the rotation of angle $t\in\mathbb R$ becomes a $1\times1$ matrix (!):

$$R_t=(e^{it})$$

Thus, with complex numbers, this rotation $R_t$ of angle $t\in\mathbb R$ in the plane is something completely trivial. Very nice.

Theory 1/4

The theory from the real case extends to this setting:

Theorem. Any linear map $f:\mathbb C^N\to\mathbb C^N$ is of the form $f(v)=Av$, with $A\in M_N(\mathbb C)$.

Theorem. More generally, any linear map $f:\mathbb C^N\to\mathbb C^M$ is of the form $f(v)=Av$, with $A\in M_{M\times N}(\mathbb C)$.

Theorem. With $f_A(v)=Av$, we have $f_{AB}=f_Af_B$. In particular $f_A$ is invertible when $A$ is invertible, and $f_A^{-1}=f_{A^{-1}}$.

Theory 2/4

The theory of the determinant extends as well:

Definition. The determinant of a matrix $A\in M_N(\mathbb C)$ is

$$\det A=\sum_{\sigma\in S_N}\varepsilon(\sigma)A_{1\sigma(1)}\ldots A_{N\sigma(N)}$$

where $\varepsilon(\sigma)=(-1)^c$, with $c$ being the number of inversions.

Theorem. The determinant is subject to the following rules:

(1) $\det(\lambda u,\{w_i\})=\lambda\det(u,\{w_i\})$.

(2) $\det(u,v,\{w_i\})=\det(u-v,v,\{w_i\})$.

Also, we have $\det(AB)=\det A\cdot\det B$ and $\det(A^t)=\det A$.

Theory 3/4

The theory of the eigenvalues extends as well:

Definition. Given $A\in M_N(\mathbb C)$, if $v\in\mathbb C^N$ and $\lambda\in\mathbb C$ satisfy

$$Av=\lambda v$$

we say that $v$ is an eigenvector of $A$, with eigenvalue $\lambda$.

Theorem. The eigenvalues are the roots of the polynomial

$$P(\lambda)=\det(A-\lambda1_N)$$

called characteristic polynomial of the matrix.

Theory 4/4

Theorem. Consider a $2\times2$ real or complex matrix:

$$A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$$

(1) The characteristic polynomial is $P(\lambda)=\lambda^2-S\lambda+P$, with:

$$S=a+d\ ,\quad P=ad-bc$$

(2) We have two complex eigenvalues, given by:

$$\lambda_1+\lambda_2=S\ ,\quad\lambda_1\lambda_2=P$$

(3) Equivalently, we have the following formula:

$$\lambda_{1,2}=\frac{S\pm\sqrt{S^2-4P}}2$$

Diagonalization 1/4

Theorem. Given $A\in M_N(\mathbb C)$, consider its characteristic polynomial $P(x)=\det(A-x1_N)$, and decompose it into factors:

$$P(x)=(-1)^N(x-\lambda_1)\ldots(x-\lambda_N)$$

For $\lambda\in\{\lambda_1,\ldots,\lambda_N\}$ consider the corresponding eigenspace:

$$E_\lambda=\left\{v\in\mathbb C^N\Big|Av=\lambda v\right\}$$

We have then dimension inequalities as follows, for any $\lambda$,

$$1\leq\dim(E_\lambda)\leq\#\{i|\lambda_i=\lambda\}$$

and $A$ is diagonalizable precisely when we have equalities at right.

Diagonalization 2/4

In practice, the above result can be used as follows:

(1) Compute the characteristic polynomial $P(x)=\det(A-x1_N)$, and factorize it as $P(x)=(-1)^N(x-\lambda_1)\ldots(x-\lambda_N)$.

(2) Remark: if the $\lambda_i$ are distinct, $A$ is certainly diagonalizable. Also, if $\lambda_i\notin\mathbb R$ for some $i$, $A$ is not diagonalizable over $\mathbb R$.

(3) Solve $Av=\lambda_iv$ for any $i$. If a space of solutions $E_{\lambda_i}$ satisfies $\dim(E_{\lambda_i})<\#\{j|\lambda_j=\lambda_i\}$, then $A$ is not diagonalizable.

(4) Otherwise, find a basis of each of these spaces $E_{\lambda_i}$, and put all the eigenvectors found into a matrix $P$ (the "passage matrix").

(5) Put as well all the eigenvalues found on the diagonal of a matrix $D$. Compute $P^{-1}$. We have then $A=PDP^{-1}$.
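As a numerical illustration of this recipe, here is a minimal NumPy sketch, on a small matrix with distinct eigenvalues (so certainly diagonalizable, by step (2)); numpy.linalg.eig performs steps (1)-(4) at once:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, P = np.linalg.eig(A)   # eigenvalues, and eigenvectors as columns of P
D = np.diag(lam)

# A = P D P^{-1}, as in step (5)
assert np.allclose(A, P @ D @ np.linalg.inv(P))
```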

Diagonalization 3/4

Some tricks and tips:

(1) In 2 dimensions, where $A=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$, the eigenvalues $x,y$ are best computed by using $x+y=a+d$, $xy=ad-bc$.

(2) In fact, in $N$ dimensions, it is known that the eigenvalues satisfy $\lambda_1+\ldots+\lambda_N={\rm Tr}(A)$ and $\lambda_1\ldots\lambda_N=\det A$.

(3) If $P$ has integer coefficients, $P\in\mathbb Z[X]$, look first for integer roots, $\lambda\in\mathbb Z$. These must divide the coefficient of $X^0$.

Diagonalization 4/4

More tricks and tips:

(1) When computing eigenspaces $E_{\lambda_i}$, start with the eigenvalues having big multiplicity, because the computation there might lead to the conclusion that $A$ is not diagonalizable, and so you're done.

(2) Always check and double-check your computations. If your matrix depends on a parameter $t$, plug in $t=0$ or so from time to time, in order to double-check. Good luck!

Advanced 1/4

Theorem. With respect to $\langle x,y\rangle=\sum_ix_i\bar y_i$ we have

$$\langle Ax,y\rangle=\langle x,A^*y\rangle$$

with $A^*$ being the adjoint matrix, given by $(A^*)_{ij}=\bar A_{ji}$.

Theorem. For a matrix $U\in M_N(\mathbb C)$, the following are equivalent:

(1) $U$ is a unitary, $\langle Ux,Uy\rangle=\langle x,y\rangle$.

(2) $U$ satisfies the equation $U^*=U^{-1}$.

Proof. We have indeed $\langle Ux,Uy\rangle=\langle x,U^*Uy\rangle$, as desired.

Advanced 2/4

Theorem. The matrices which are normal, in the sense that

AA∗ = A∗A

are diagonalizable.

Theorem. The matrices which are self-adjoint, in the sense that

A = A∗

are diagonalizable. Moreover, their eigenvalues are real.

Theorem. The matrices which are unitary, in the sense that

U∗ = U−1

are diagonalizable. Their eigenvalues are on the unit circle.

Advanced 3/4

Theorem. The following happen, inside MN(C):

(1) The matrices having distinct eigenvalues are dense.

(2) The diagonalizable matrices are dense.

Proof. Here (1) follows by using the resultant R(P,P ′), becausethe equation R = 0 defines a hypersurface in MN(C), having densecomplement. As for (2), this follows from (1).

Comment. This is interesting, because it tells us that "any formulawhich is true for diagonalizable matrices is true in general".

Advanced 4/4

Theorem. Any matrix $A\in M_N(\mathbb C)$ can be put in Jordan form

$$A\sim\begin{pmatrix}J_1\\ &\ddots\\ &&J_k\end{pmatrix}$$

with each Jordan block being of the following type,

$$J=\begin{pmatrix}\lambda&1\\ &\ddots&\ddots\\ &&\ddots&1\\ &&&\lambda\end{pmatrix}$$

with the numbers $\lambda$ ranging over the eigenvalues of $A$.

Linear algebra and calculus questions

Teo Banica

"Introduction to linear algebra", 4/6

08/20

Systems 1/3

Theorem. Any linear system of equations

$$\begin{cases}a_{11}x_1+a_{12}x_2+\ldots+a_{1N}x_N=v_1\\ a_{21}x_1+a_{22}x_2+\ldots+a_{2N}x_N=v_2\\ \ \ \vdots\\ a_{N1}x_1+a_{N2}x_2+\ldots+a_{NN}x_N=v_N\end{cases}$$

can be written in matrix form, as follows,

$$\begin{pmatrix}a_{11}&a_{12}&\ldots&a_{1N}\\ a_{21}&a_{22}&\ldots&a_{2N}\\ \vdots&&&\vdots\\ a_{N1}&a_{N2}&\ldots&a_{NN}\end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ \vdots\\ x_N\end{pmatrix}=\begin{pmatrix}v_1\\ v_2\\ \vdots\\ v_N\end{pmatrix}$$

and when $A$ is invertible, its solution is given by $x=A^{-1}v$.

Systems 2/3

Theorem. Any linear recurrence system

$$\begin{cases}x_{k+1}=a_{11}x_k+a_{12}y_k+a_{13}z_k+\ldots\\ y_{k+1}=a_{21}x_k+a_{22}y_k+a_{23}z_k+\ldots\\ z_{k+1}=a_{31}x_k+a_{32}y_k+a_{33}z_k+\ldots\\ \ \ \vdots\end{cases}$$

can be written in matrix form, as follows,

$$\begin{pmatrix}x_{k+1}\\ y_{k+1}\\ z_{k+1}\\ \vdots\end{pmatrix}=\begin{pmatrix}a_{11}&a_{12}&a_{13}&\ldots\\ a_{21}&a_{22}&a_{23}&\ldots\\ a_{31}&a_{32}&a_{33}&\ldots\\ \vdots&\vdots&\vdots\end{pmatrix}\begin{pmatrix}x_k\\ y_k\\ z_k\\ \vdots\end{pmatrix}$$

and the solution is obtained by applying $A^k$ to the initial data.

Systems 3/3

In order to compute $A^k$, we must diagonalize the matrix,

$$A=PDP^{-1}$$

and then the powers are given by the following formula:

$$A^k=PD^kP^{-1}$$

This formula holds in fact for any $k\in\mathbb Z$, or even $k\in\mathbb R$.
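As a worked illustration (our example, the Fibonacci recurrence $x_{k+1}=x_k+y_k$, $y_{k+1}=x_k$), here is a minimal NumPy sketch of the $A^k=PD^kP^{-1}$ trick:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])   # F_{k+1} = F_k + F_{k-1}, in matrix form

lam, P = np.linalg.eig(A)
k = 10
Ak = P @ np.diag(lam ** k) @ np.linalg.inv(P)   # A^k = P D^k P^{-1}

# applying A^k to the initial data (F_1, F_0) = (1, 0) gives (F_11, F_10)
print(Ak @ [1, 0])   # approximately (89, 55)
```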

Calculus 1/4

Theorem. Any differentiable function can be locally approximated as

$$f(x+t)\simeq f(x)+at$$

where $a=f'(x)$ is the derivative of $f$ at the point $x$.

Proof. Let us recall indeed the definition of the derivative:

$$f'(x)=\lim_{t\to0}\frac{f(x+t)-f(x)}t$$

But this gives the formula in the statement.

Calculus 2/4

Theorem. Any differentiable function of several variables, written as

$$f=(f_1,\ldots,f_N)$$

can be locally approximated as follows,

$$f(x+t)\simeq f(x)+At$$

with $A$ being the matrix of partial derivatives at $x$,

$$A=\left(\frac{\partial f_i}{\partial x_j}(x)\right)_{ij}$$

acting on the vectors $t$ by usual multiplication.

Calculus 3/4

Theorem. We have the change of variable formula

$$\int_a^bf(x)dx=\int_c^df(\varphi(t))\varphi'(t)dt$$

where $c=\varphi^{-1}(a)$ and $d=\varphi^{-1}(b)$.

Proof. This follows with $f=F'$ from the rule

$$(F\circ\varphi)'(t)=F'(\varphi(t))\varphi'(t)$$

by integrating between $c$ and $d$.

Calculus 4/4

Theorem. Given a transformation in several variables,

$$\varphi=(\varphi_1,\ldots,\varphi_N)$$

we have the following change of variable formula,

$$\int_Ef(x)dx=\int_{\varphi^{-1}(E)}f(\varphi(t))J_\varphi(t)dt$$

with the quantity $J_\varphi$, called Jacobian, being given by:

$$J_\varphi(t)=\left|\det\left[\left(\frac{\partial\varphi_i}{\partial x_j}(t)\right)_{ij}\right]\right|$$

Polar coordinates 1/4

Theorem. We have polar coordinates in 2 dimensions,

$$\begin{cases}x=r\cos t\\ y=r\sin t\end{cases}$$

and the corresponding Jacobian is $J(r,t)=r$.

Proof. The Jacobian is by definition given by:

$$\begin{vmatrix}\cos t&-r\sin t\\ \sin t&r\cos t\end{vmatrix}=r$$

Thus, we have indeed the formula in the statement.

Polar coordinates 2/4

$$\int_{\mathbb R}e^{-x^2}dx=\ ?$$

Polar coordinates 3/4

Theorem. We have the following formula:

$$\int_{\mathbb R}e^{-x^2}dx=\sqrt\pi$$

Proof. The square of the integral is given by:

$$I^2=\int_{\mathbb R}\int_{\mathbb R}e^{-x^2-y^2}dxdy=\int_0^{2\pi}\int_0^\infty re^{-r^2}drdt=\int_0^{2\pi}\left[\frac{-e^{-r^2}}2\right]_0^\infty dt$$

We obtain $I^2=(2\pi)\times\frac12=\pi$, and so $I=\sqrt\pi$.
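A quick numerical check of this formula, via a crude Riemann sum over a large window:

```python
import numpy as np

# e^{-x^2} is negligible outside [-10, 10], so a finite window suffices
dx = 1e-4
x = np.arange(-10, 10, dx)
I = np.sum(np.exp(-x**2)) * dx

assert np.isclose(I, np.sqrt(np.pi))
```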

Polar coordinates 4/4

Definition. The normal law of parameter 1 is:

$$g_1=\frac1{\sqrt{2\pi}}e^{-x^2/2}dx$$

More generally, the normal law of parameter $t>0$ is:

$$g_t=\frac1{\sqrt{2\pi t}}e^{-x^2/2t}dx$$

Remark. The Gauss formula gives, with $x=y/\sqrt{2t}$,

$$\int_{\mathbb R}e^{-y^2/2t}dy=\sqrt{2\pi t}$$

so these laws have indeed mass 1.

Spheres 1/4

Theorem. We have spherical coordinates in 3 dimensions,

$$\begin{cases}x=r\cos s\\ y=r\sin s\cos t\\ z=r\sin s\sin t\end{cases}$$

and the corresponding Jacobian is $J(r,s,t)=r^2\sin s$.

Proof. The Jacobian is given by:

$$\begin{vmatrix}\cos s&-r\sin s&0\\ \sin s\cos t&r\cos s\cos t&-r\sin s\sin t\\ \sin s\sin t&r\cos s\sin t&r\sin s\cos t\end{vmatrix}=r^2\sin s$$

Thus, we have indeed the formula in the statement.

Spheres 2/4

Theorem. We have spherical coordinates in $N$ dimensions,

$$\begin{cases}x_1=r\cos t_1\\ x_2=r\sin t_1\cos t_2\\ \ \ \vdots\\ x_{N-1}=r\sin t_1\ldots\sin t_{N-2}\cos t_{N-1}\\ x_N=r\sin t_1\ldots\sin t_{N-2}\sin t_{N-1}\end{cases}$$

and the corresponding Jacobian is:

$$J(r,t)=r^{N-1}\sin^{N-2}t_1\sin^{N-3}t_2\ldots\sin^2t_{N-3}\sin t_{N-2}$$

Remark. This generalizes the previous coordinates, at $N=2,3$.

Spheres 3/4

Theorem. The volume of the unit sphere in $\mathbb R^N$ is given by

$$\frac V{2^N}=\left(\frac\pi2\right)^{[N/2]}\frac1{(N+1)!!}$$

with the convention $N!!=(N-1)(N-3)(N-5)\ldots$, stopping at 1 or 2.

(1) At $N=1$ we obtain $V/2=1$, so $V=2$.

(2) At $N=2$ we obtain $V/4=\pi/2\cdot1/2$, so $V=\pi$.

(3) At $N=3$ we obtain $V/8=\pi/2\cdot1/3$, so $V=4\pi/3$.

(4) At $N=4$ we obtain $V/16=\pi^2/4\cdot1/8$, so $V=\pi^2/2$.

Spheres 4/4

Proof. By using spherical coordinates, and Fubini, we are left with computing integrals over the circle. But these are given by

$$\int_0^{\pi/2}\cos^pt\sin^qt\,dt=\left(\frac\pi2\right)^{\delta(p,q)}\frac{p!!q!!}{(p+q+1)!!}$$

where $\delta(a,b)=1$ if both $a,b$ are even, and $\delta(a,b)=0$ otherwise, and by plugging in these quantities, we obtain the result.

Infinite matrices and spectral theory

Teo Banica

"Introduction to linear algebra", 5/6

08/20

Linear spaces 1/3

Definition. A complex vector space is a set $V$ with operations

$$(u,v)\to u+v\ ,\quad(\lambda,u)\to\lambda u$$

having the following properties:

(1) $u+v=v+u$.

(2) $(u+v)+w=u+(v+w)$.

(3) $(\lambda+\mu)u=\lambda u+\mu u$.

(4) $(\lambda\mu)u=\lambda(\mu u)$.

(5) $\lambda(u+v)=\lambda u+\lambda v$.

Examples. $\mathbb C^N$, $\mathbb C^\infty$, $M_N(\mathbb C)$, $C[0,1]$ and many others.

Linear spaces 2/3

Definition. A map f : V →W is called linear when:

(1) f (u + v) = f (u) + f (v).

(2) f (λu) = λf (u).

Theorem. Let f : V →W be a linear map.

(1) ker f = {v ∈ V |f (v) = 0} is a linear space.

(2) Im f = {f (v)|v ∈ V } is a linear space.

(3) dim ker f + dim Im f = dimV .

Linear spaces 3/3

Theorem. In finite dimensions, any vector space $V$ has a basis $\{e_i\}$, which is such that any $v\in V$ can be written, uniquely, as:

$$v=v_1e_1+\ldots+v_Ne_N$$

Thus we have $V=\mathbb C^N$, the identification being given by:

$$v=\begin{pmatrix}v_1\\ \vdots\\ v_N\end{pmatrix}$$

As a consequence, any linear map $f:V\to W$ between finite dimensional vector spaces corresponds to a matrix.

Hilbert spaces 1/4

Definition. A scalar product on a complex vector space $H$ is an operation $(x,y)\to\langle x,y\rangle$, satisfying:

(1) $\langle x,y\rangle$ is linear in $x$, and antilinear in $y$.

(2) $\overline{\langle x,y\rangle}=\langle y,x\rangle$, for any $x,y$.

(3) $\langle x,x\rangle>0$, for any $x\neq0$.

Theorem. If we set $||x||=\sqrt{\langle x,x\rangle}$ then:

(1) $|\langle x,y\rangle|\leq||x||\cdot||y||$.

(2) $||x+y||\leq||x||+||y||$.

(3) $d(x,y)=||x-y||$ is a distance.

Proof. (1) follows from the fact that the degree 2 polynomial $f(t)=||tx+y||^2$ is positive, and (1) $\implies$ (2) $\implies$ (3).

Hilbert spaces 2/4

Definition. A Hilbert space is a complex vector space $H$ with a scalar product $\langle x,y\rangle$, which is complete with respect to

$$||x||=\sqrt{\langle x,x\rangle}$$

in the sense that the Cauchy sequences with respect to the associated distance $d(x,y)=||x-y||$ converge.

Examples.

(1) $H=\mathbb C^N$, with $\langle x,y\rangle=\sum_ix_i\bar y_i$.

(2) $H=l^2(\mathbb N)$, with $\langle x,y\rangle=\sum_ix_i\bar y_i$.

(3) $H=L^2(X)$, with $\langle f,g\rangle=\int_Xf(x)\overline{g(x)}dx$.

Hilbert spaces 3/4

Theorem. Any Hilbert space $H$ has an orthonormal basis $\{e_i\}_{i\in I}$, and so we have an identification $H=l^2(I)$.

Proof. The basis can be constructed by starting with an "algebraic" basis, and using the Gram-Schmidt method.

Warning. For spaces like $H=L^2[0,1]$, this is something non-trivial.

Theorem. Let $H$ be a Hilbert space, with basis $\{e_i\}_{i\in I}$. We have

$$L(H)\subset M_I(\mathbb C)$$

with $T:H\to H$ linear corresponding to the following matrix:

$$M_{ij}=\langle Te_j,e_i\rangle$$

In particular, when $\dim(H)=N<\infty$, we obtain $L(H)\simeq M_N(\mathbb C)$.

Hilbert spaces 4/4

Theorem. Given a Hilbert space $H$, the linear operators $T:H\to H$ which are bounded, in the sense that

$$||T||=\sup_{||x||\leq1}||Tx||$$

is finite, form a complex algebra with unit $B(H)$, which:

(1) is complete with respect to $||.||$ (Banach algebra).

(2) has an involution $T\to T^*$, given by $\langle Tx,y\rangle=\langle x,T^*y\rangle$.

(3) has norm and involution related by $||TT^*||=||T||^2$.

Proof. Here "complex algebra" is elementary, (1) follows by setting $Tx=\lim_{n\to\infty}T_nx$, (2) comes from the fact that $\varphi(x)=\langle Tx,y\rangle$ is linear, and (3) can be proved by double inequality.

Spectral theory 1/4

Definition. A $C^*$-algebra is a complex algebra with unit $A$, with:

(1) A norm $a\to||a||$, making it a Banach algebra.

(2) An involution $a\to a^*$, such that $||aa^*||=||a||^2$, $\forall a\in A$.

Definition. The spectrum of an element $a\in A$ is the set:

$$\sigma(a)=\left\{\lambda\in\mathbb C\Big|a-\lambda\notin A^{-1}\right\}$$

Theorem. $\sigma(ab)=\sigma(ba)$ outside $\{0\}$.

Proof. Indeed, $c=(1-ab)^{-1}\implies1+bca=(1-ba)^{-1}$.

Remark. In infinite dimensions we can have $S^*S=1$, $SS^*\neq1$ (shift).

Spectral theory 2/4

Theorem. We have the following formula, for any rational function $f\in\mathbb C(X)$ having its poles outside $\sigma(a)$:

$$\sigma(f(a))=f(\sigma(a))$$

Proof. In the polynomial case, $f\in\mathbb C[X]$, we can factorize,

$$f(X)-\lambda=c(X-r_1)\ldots(X-r_n)$$

and the result can be proved as follows:

$$\lambda\notin\sigma(f(a))\iff a-r_1,\ldots,a-r_n\in A^{-1}\iff r_1,\ldots,r_n\notin\sigma(a)\iff\lambda\notin f(\sigma(a))$$

In the general case, $f=P/Q$, we can use $F=P-\lambda Q$.

Spectral theory 3/4

Definition. Given an element a ∈ A, its spectral radius ρ(a) is theradius of the smallest disk centered at 0 containing σ(a).

Theorem. Let A be a C ∗-algebra.

(1) The spectrum of a norm 1 element is in the unit disk.

(2) The spectrum of a unitary (a∗ = a−1) is on the unit circle.

(3) The spectrum of a self-adjoint element (a = a∗) is real.

(4) ρ of a normal element (aa∗ = a∗a) equals its norm.

Spectral theory 4/4

Proof. (1) Clear from $(1-a)^{-1}=1+a+a^2+\ldots$, for $||a||<1$.

(2) Follows by using $f(z)=z^{-1}$. Indeed, we have:

$$\sigma(a)^{-1}=\sigma(a^{-1})=\sigma(a^*)=\overline{\sigma(a)}$$

(3) Follows from (2), by using $f(z)=(z+it)/(z-it)$.

(4) By (1) we have $\rho(a)\leq||a||$. Conversely, given $\rho>\rho(a)$, we have:

$$\int_{|z|=\rho}\frac{z^n}{z-a}\,dz=\sum_{k=0}^\infty\left(\int_{|z|=\rho}z^{n-k-1}dz\right)a^k=2\pi i\cdot a^n$$

By applying the norm and taking $n$-th roots we obtain:

$$\rho\geq\lim_{n\to\infty}||a^n||^{1/n}$$

If $a=a^*$ we are done. In general, we can use $||aa^*||=||a||^2$.
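In the matrix algebra $M_N(\mathbb C)$, which is a $C^*$-algebra, the spectral radius formula $\rho(a)=\lim_{n\to\infty}||a^n||^{1/n}$ can be observed numerically. Here is a minimal NumPy sketch, on a non-normal example of our own where $\rho(a)$ and $||a||$ differ:

```python
import numpy as np

# a non-normal matrix, with spectral radius 0.9 but operator norm about 5
A = np.array([[0.9, 5.0],
              [0.0, 0.8]])

for n in [10, 100, 1000]:
    print(n, np.linalg.norm(np.linalg.matrix_power(A, n), 2) ** (1 / n))
# the printed values decrease towards rho(A) = 0.9, illustrating the
# spectral radius formula rho(a) = lim ||a^n||^(1/n)
```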

Advanced 1/4

Theorem. Given a compact space $X$, the complex algebra

$$C(X)=\left\{f:X\to\mathbb C\ {\rm continuous}\right\}$$

is a $C^*$-algebra, with norm and involution given by:

$$||f||=\sup_{x\in X}|f(x)|\ ,\quad f^*(x)=\overline{f(x)}$$

This algebra is commutative, in the sense that $fg=gf$.

Proof. It is well-known that $C(X)$ is complete with respect to the sup norm, and the other conditions are trivially satisfied.

Advanced 2/4

Theorem. Any commutative $C^*$-algebra $A$ is of the form $C(X)$, with its "spectrum" $X={\rm Spec}(A)$ consisting of the characters:

$$\chi:A\to\mathbb C$$

Proof. Set $X={\rm Spec}(A)$, with the topology making continuous all the evaluation maps $ev_a:\chi\to\chi(a)$. Then $X$ is a compact space, and $a\to ev_a$ is a morphism of algebras $ev:A\to C(X)$.

(1) $ev$ involutive. Using real + imaginary parts, we must prove that $ev_{a^*}=(ev_a)^*$ when $a=a^*$. But this follows from $\sigma(a)\subset\mathbb R$.

(2) $ev$ isometric. Follows from $||ev_a||=\rho(a)=||a||$.

(3) $ev$ surjective. Follows from Stone-Weierstrass.

Advanced 3/4

Theorem. Assume that $a\in A$ is normal, and let $f\in C(\sigma(a))$.

(1) We can define $f(a)\in A$, with $f\to f(a)$ being a morphism.

(2) We have the formula $\sigma(f(a))=f(\sigma(a))$.

Proof. Since $a$ is normal, the algebra $B=\langle a\rangle$ that it generates is commutative, and the Gelfand theorem gives $B=C(X)$, with $X={\rm Spec}(B)$.

The map $X\to\sigma(a)$ given by evaluation at $a$ being bijective, we have $X=\sigma(a)$. Thus $B=C(\sigma(a))$, and we are done.

Advanced 4/4

Definition. Given an arbitrary C ∗-algebra A, we can write

A = C (X )

and call X a "noncommutative compact space".

Special matrices and matrix tricks

Teo Banica

"Introduction to linear algebra", 6/6

08/20

Fourier 1/3

Theorem. We have the Vandermonde formula:

$$\begin{vmatrix}1&1&\ldots&1\\ x_1&x_2&\ldots&x_N\\ \vdots&\vdots&&\vdots\\ x_1^{N-1}&x_2^{N-1}&\ldots&x_N^{N-1}\end{vmatrix}=\prod_{i>j}(x_i-x_j)$$

Proof. The determinant $D$ is a polynomial in $x_1,\ldots,x_N$, of degree $N-1$ in each variable. Since $x_i=x_j$ makes $D=0$, we obtain:

$$D=c\prod_{i>j}(x_i-x_j)$$

The constant $c\in\mathbb R$ can be computed by recurrence, and we get $c=1$.

Fourier 2/3

Definition. The Fourier matrix $F_N$ is given by:

$$F_N=(w^{ij})_{ij}\ ,\quad w=e^{2\pi i/N}$$

With matrix indices $i,j=0,1,\ldots,N-1$, we have:

$$F_N=\begin{pmatrix}1&1&1&\ldots&1\\ 1&w&w^2&\ldots&w^{N-1}\\ 1&w^2&w^4&\ldots&w^{2(N-1)}\\ \vdots&\vdots&\vdots&&\vdots\\ 1&w^{N-1}&w^{2(N-1)}&\ldots&w^{(N-1)^2}\end{pmatrix}$$

This is a Vandermonde matrix, with $x_i=w^i$.

Fourier 3/3

Theorem. The rescaled matrix $F_N/\sqrt N$ is unitary.

Proof. We have the following computation:

$$(F_NF_N^*)_{ij}=\sum_k(F_N)_{ik}\overline{(F_N)_{jk}}=\sum_kw^{ik}\cdot w^{-jk}=\sum_k(w^{i-j})^k=N\delta_{ij}$$

Thus the rescaled matrix $F_N/\sqrt N$ is indeed unitary.
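A minimal NumPy sketch checking this unitarity, at $N=8$:

```python
import numpy as np

N = 8
w = np.exp(2j * np.pi / N)
F = np.array([[w ** (i * j) for j in range(N)] for i in range(N)]) / np.sqrt(N)

# the rescaled Fourier matrix is unitary: F F* = 1_N
assert np.allclose(F @ F.conj().T, np.eye(N))
```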

Special matrices 1/4

Theorem. For a matrix $H\in M_N(\mathbb C)$, the following are equivalent,

(1) $H$ is circulant, $H_{ij}=\xi_{j-i}$ for some $\xi\in\mathbb C^N$,

(2) $H$ is Fourier-diagonal, $H=FQF^*$ with $Q$ diagonal,

where $F=F_N/\sqrt N$ is the rescaled (unitary) Fourier matrix. In addition, the first row vector of $H$ is

$$\xi=Fq/\sqrt N$$

where $q_i=Q_{ii}$ is the vector formed by the diagonal entries of $Q$.

Special matrices 2/4

Proof. If $H_{ij}=\xi_{j-i}$ is circulant, then $Q=F^*HF$ is diagonal:

$$Q_{ij}=\frac1N\sum_{kl}w^{jl-ik}\xi_{l-k}=\delta_{ij}\sum_rw^{jr}\xi_r$$

Also, if $Q={\rm diag}(q)$ is diagonal, then $H=FQF^*$ is circulant:

$$H_{ij}=\sum_kF_{ik}Q_{kk}\bar F_{jk}=\frac1N\sum_kw^{(i-j)k}q_k$$

This formula proves as well the last assertion, $\xi=Fq/\sqrt N$.
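As an illustration, here is a minimal NumPy sketch of the implication (2) $\implies$ (1): starting from an arbitrary diagonal matrix $Q$, the matrix $H=FQF^*$ comes out circulant:

```python
import numpy as np

N = 6
w = np.exp(2j * np.pi / N)
F = np.array([[w ** (i * j) for j in range(N)] for i in range(N)]) / np.sqrt(N)

q = np.random.rand(N) + 1j * np.random.rand(N)   # arbitrary diagonal entries
H = F @ np.diag(q) @ F.conj().T                  # H = F Q F*

# H should be circulant: H_ij depends only on j - i (mod N)
xi = H[0]                                        # first row, so H_ij = xi_{j-i}
for i in range(N):
    for j in range(N):
        assert np.isclose(H[i, j], xi[(j - i) % N])
```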

Special matrices 3/4

Theorem. The various sets of circulant matrices are as follows,

(1) $M_N(\mathbb C)^{circ}=\{FQF^*|q\in\mathbb C^N\}$.

(2) $U_N^{circ}=\{FQF^*|q\in\mathbb T^N\}$.

(3) $O_N^{circ}=\{FQF^*|q\in\mathbb T^N,\bar q_i=q_{-i},\forall i\}$.

with the convention $Q={\rm diag}(q)$, for $q\in\mathbb C^N$.

Proof. (1) This is something that we already know.

(2) This is because the eigenvalues must be on the unit circle $\mathbb T$.

(3) For $q\in\mathbb C^N$ we have $\overline{Fq}=F\tilde q$, with $\tilde q_i=\bar q_{-i}$, and so $\xi=Fq/\sqrt N$ is real if and only if $\bar q_i=q_{-i}$ for any $i$. This gives the result.

Special matrices 4/4

Theorem. The groups $B_N\subset O_N$ and $C_N\subset U_N$ of bistochastic matrices (sum 1 on each row and column) are given by:

$$B_N\simeq O_{N-1}\ ,\quad C_N\simeq U_{N-1}$$

Proof. The all-1 vector $\xi$ being equal to $\sqrt NFe_0$, we have:

$$U\xi=\xi\iff UFe_0=Fe_0\iff F^*UFe_0=e_0\iff F^*UF={\rm diag}(1,W)$$

for some $W\in U_{N-1}$. Thus we have isomorphisms as in the statement.

Hadamard matrices 1/4

Definition. A complex Hadamard matrix is a square matrix

$$H\in M_N(\mathbb C)$$

whose entries are on the unit circle, $H_{ij}\in\mathbb T$, and whose rows are pairwise orthogonal, with respect to the scalar product of $\mathbb C^N$.

Example. For the Fourier matrix, $F_N=(w^{ij})$ with $w=e^{2\pi i/N}$, the scalar products between rows are:

$$\langle R_a,R_b\rangle=\sum_jw^{aj}w^{-bj}=\sum_jw^{(a-b)j}=N\delta_{ab}$$

Thus the Fourier matrix $F_N$ is Hadamard.

Hadamard matrices 2/4

Theorem. Given a finite abelian group $G$, with group dual

$$\widehat G=\{\chi:G\to\mathbb T\}$$

consider the Fourier coupling $G\times\widehat G\to\mathbb T$:

$$(i,\chi)\to\chi(i)$$

(1) Via the standard isomorphism $G\simeq\widehat G$, this Fourier coupling is a square matrix, $F_G\in M_G(\mathbb T)$, which is complex Hadamard.

(2) For a cyclic group $G=\mathbb Z_N$ we obtain in this way, via the standard identification $\mathbb Z_N=\{1,\ldots,N\}$, the Fourier matrix $F_N$.

(3) In general, when using a decomposition $G=\mathbb Z_{N_1}\times\ldots\times\mathbb Z_{N_k}$, the corresponding Fourier matrix is $F_G=F_{N_1}\otimes\ldots\otimes F_{N_k}$.

Hadamard matrices 3/4

Examples. (1) For the cyclic group $\mathbb Z_2$ we obtain the Fourier matrix $F_2$, also denoted $W_2$, and called first Walsh matrix:

$$W_2=\begin{pmatrix}1&1\\ 1&-1\end{pmatrix}$$

(2) For the Klein group $\mathbb Z_2\times\mathbb Z_2$ we obtain the tensor product $W_4=W_2\otimes W_2$, called second Walsh matrix:

$$W_4=\begin{pmatrix}1&1&1&1\\ 1&-1&1&-1\\ 1&1&-1&-1\\ 1&-1&-1&1\end{pmatrix}$$

(3) In general, for the group $\mathbb Z_2^n$ we obtain the $n$-th Walsh matrix $W_N=W_2^{\otimes n}$, having size $N=2^n$. Useful in radio, and in coding.

Hadamard matrices 4/4

Hadamard Conjecture. There is at least one real Hadamard matrix

$$H\in M_N(\pm1)$$

for any integer $N\in4\mathbb N$.

Comment. Verified so far up to $N=666$.

Rotations 1/4

Theorem. For a matrix $U\in M_N(\mathbb C)$, the following are equivalent:

(1) $U$ preserves the scalar product, $\langle Ux,Uy\rangle=\langle x,y\rangle$.

(2) $U$ preserves the norm, $||Ux||=||x||$, where $||x||=\sqrt{\langle x,x\rangle}$.

(3) $U$ is unitary, in the sense that $U^*=U^{-1}$, where $(U^*)_{ij}=\bar U_{ji}$.

(4) $U$ is normal, with all its eigenvalues on the unit circle $\mathbb T$.

Proof. The equivalences (1) $\iff$ (2) $\iff$ (3) follow by using $\langle Mx,y\rangle=\langle x,M^*y\rangle$, and (4) is something that we know.

Rotations 2/4

Theorem. The unitaries in $M_2(\mathbb C)$ of determinant 1 are

$$U=\begin{pmatrix}a&b\\ -\bar b&\bar a\end{pmatrix}$$

with $a,b\in\mathbb C$ satisfying $|a|^2+|b|^2=1$.

Proof. For $U=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$ of determinant 1, $U^*=U^{-1}$ reads:

$$\begin{pmatrix}\bar a&\bar c\\ \bar b&\bar d\end{pmatrix}=\begin{pmatrix}d&-b\\ -c&a\end{pmatrix}$$

Thus $c=-\bar b$, $d=\bar a$. Finally, $\det U=1$ gives $|a|^2+|b|^2=1$.

Rotations 3/4

Theorem. The orthogonal matrices in $M_3(\mathbb R)$ of determinant 1 are

$$O=\begin{pmatrix}x^2+y^2-z^2-t^2&2(yz-xt)&2(xz+yt)\\ 2(xt+yz)&x^2+z^2-y^2-t^2&2(zt-xy)\\ 2(yt-xz)&2(xy+zt)&x^2+t^2-y^2-z^2\end{pmatrix}$$

with $x,y,z,t\in\mathbb R$ satisfying $x^2+y^2+z^2+t^2=1$.

Proof. With $a=x+iy$, $b=z+it$, the previous formula reads:

$$U=\begin{pmatrix}x+iy&z+it\\ -z+it&x-iy\end{pmatrix}$$

But we must have "$SO_3={\rm ad}(SU_2)$", with the rotation $O$ appearing as the adjoint action of the unitary $U$, and this gives the result.

Rotations 4/4

Conclusion. We can now:

• do some serious engineering

• or write 3D games software.

Groups of unitary matrices

Teo Banica

"Introduction to matrix groups", 1/6

08/20

Groups 1/3

Definition. A group is a set G with a multiplication operation

(g , h)→ gh

satisfying the following conditions:

(1) Associativity: (gh)k = g(hk).

(2) Unit: ∃1 ∈ G , g1 = 1g = g .

(3) Inverses: ∀g ,∃g−1, gg−1 = g−1g = 1.

Groups 2/3

Examples.

(1) R with the addition operation x + y . Here the unit is 0 (!) andthe inverses are −x .

(2) R∗ with the multiplication operation xy . Here the unit is 1 andthe inverses are x−1.

(3) Z,Q,C with the addition operation x + y , and Q∗,C∗ with themultiplication operation xy .

Note that (N,+) and (N, ·) and (Z∗, ·) are not groups, becausehere we have no inverses.

Groups 3/3

More examples.

(1) The group SN of permutations σ : {1, . . . ,N} → {1, . . . ,N}.Note that we have στ 6= τσ in general, in this group.

(2) The groups GLN(Q), GLN(R), GLN(C) of invertible N × Nmatrices over Q,R,C. Here gh = hg fails too, in general.

Conventions.

– When ab = ba we say that the group is abelian.

– We usually denote the operation of an abelian group by a sum,g + h, the unit element by 0, and the inverses by −g .

– This is not a general rule. What is true, however, is that if agroup is denoted (G ,+), then the group must be abelian.

Orthogonal groups 1/4

Notations. We use the usual scalar product and norm on $\mathbb R^N$:

$$\langle x,y\rangle=\sum_ix_iy_i\ ,\quad||x||=\sqrt{\langle x,x\rangle}$$

Theorem. For a matrix $U\in M_N(\mathbb R)$, the following are equivalent, and if they are satisfied, we say that $U$ is orthogonal:

(1) $\langle Ux,Uy\rangle=\langle x,y\rangle$.

(2) $||Ux||=||x||$.

(3) $U^t=U^{-1}$, where $(U^t)_{ij}=U_{ji}$.

(4) The rows of $U$ form an orthonormal basis of $\mathbb R^N$.

(5) The columns of $U$ form an orthonormal basis of $\mathbb R^N$.

Proof. All this follows from $\langle Ux,y\rangle=\langle x,U^ty\rangle$.

Orthogonal groups 2/4

Theorem. The set formed by the orthogonal matrices

$$O_N=\left\{U\in M_N(\mathbb R)\Big|U^t=U^{-1}\right\}$$

is a group, with the usual multiplication of the matrices.

Proof. Assuming $U,V\in O_N$, we have $UV\in O_N$, because:

$$(UV)^t=V^tU^t=V^{-1}U^{-1}=(UV)^{-1}$$

Also, $1_N\in O_N$, and $U\in O_N\implies U^{-1}\in O_N$.

Orthogonal groups 3/4

Theorem. The elements of $O_2$ fall into two classes:

(1) Rotations. The rotation of angle $t\in\mathbb R$ is given by the following formula:

$$R_t=\begin{pmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{pmatrix}$$

The rotations are exactly the elements of $O_2$ having determinant 1, and they form a group, denoted $SO_2$.

(2) Symmetries. The symmetry with respect to the $Ox$ axis rotated by $t/2\in\mathbb R$ is given by the following formula:

$$S_t=\begin{pmatrix}\cos t&\sin t\\ \sin t&-\cos t\end{pmatrix}$$

The symmetries are exactly the elements of $O_2$ having determinant $-1$, and they do not form a group.

Orthogonal groups 4/4

Theorem. The elements of $O_N$ fall into two classes:

(1) Those of determinant 1, which form a group, denoted $SO_N$:

$$SO_N=\left\{U\in O_N\Big|\det U=1\right\}$$

(2) Those of determinant $-1$, which do not form a group.

Proofs. For $U\in O_N$ we have $\det(UU^t)=1$, so $\det U=\pm1$.

The set $SO_N$ is a group, because $\det(UV)=\det U\det V$, and its complement is not a group, because it does not contain the unit, $\det(1_N)$ being 1.

Finally, the various 2D formulae above are well-known, and elementary.

Unitary groups 1/4

Notations. We use the usual scalar product and norm on $\mathbb C^N$:

$$\langle x,y\rangle=\sum_ix_i\bar y_i\ ,\quad||x||=\sqrt{\langle x,x\rangle}$$

Theorem. For a matrix $U\in M_N(\mathbb C)$, the following are equivalent, and if they are satisfied, we say that $U$ is unitary:

(1) $\langle Ux,Uy\rangle=\langle x,y\rangle$.

(2) $||Ux||=||x||$.

(3) $U^*=U^{-1}$, where $(U^*)_{ij}=\bar U_{ji}$.

(4) The rows of $U$ form an orthonormal basis of $\mathbb C^N$.

(5) The columns of $U$ form an orthonormal basis of $\mathbb C^N$.

Proof. All this follows from $\langle Ux,y\rangle=\langle x,U^*y\rangle$.

Unitary groups 2/4

Theorem. The set formed by the unitary matrices

$$U_N=\left\{U\in M_N(\mathbb C)\Big|U^*=U^{-1}\right\}$$

is a group, with the usual multiplication of the matrices.

Proof. Assuming $U,V\in U_N$, we have $UV\in U_N$, because:

$$(UV)^*=V^*U^*=V^{-1}U^{-1}=(UV)^{-1}$$

Also, $1_N\in U_N$, and $U\in U_N\implies U^{-1}\in U_N$.

Unitary groups 3/4

Theorem. The determinant of a unitary matrix $U\in U_N$ must be a number on the unit circle:

$$\det U\in\mathbb T$$

The $N\times N$ unitary matrices having determinant 1 form a group, denoted $SU_N$:

$$SU_N=\left\{U\in U_N\Big|\det U=1\right\}$$

Any matrix $U\in U_N$ is proportional to a matrix in $SU_N$, the proportionality factor being a number $d\in\mathbb T$.

Proof. For $U\in U_N$ we have $\det(UU^*)=1$, so $|\det U|=1$. The second assertion is clear from $\det(UV)=\det U\det V$. The third assertion follows by dividing by $d=(\det U)^{1/N}$.

Unitary groups 4/4

Theorem. We have the following formula,

$$SU_2=\left\{\begin{pmatrix}a&b\\ -\bar b&\bar a\end{pmatrix}\ \Big|\ |a|^2+|b|^2=1\right\}$$

as well as the following formula:

$$U_2=\left\{d\begin{pmatrix}a&b\\ -\bar b&\bar a\end{pmatrix}\ \Big|\ |a|^2+|b|^2=1,\ |d|=1\right\}$$

Proof. For $U=\begin{pmatrix}a&b\\ c&d\end{pmatrix}$ of determinant 1, $U^*=U^{-1}$ reads:

$$\begin{pmatrix}\bar a&\bar c\\ \bar b&\bar d\end{pmatrix}=\begin{pmatrix}d&-b\\ -c&a\end{pmatrix}$$

Thus $c=-\bar b$, $d=\bar a$. Finally, $\det U=1$ gives $|a|^2+|b|^2=1$.

Subgroups 1/4

The groups that we considered so far are as follows:

$$\begin{matrix}O_N&\subset&U_N\\ \cup&&\cup\\ SO_N&\subset&SU_N\end{matrix}$$

It is possible to construct more groups along these lines:

(1) By multiplying by $\mathbb Z_r=\{w\in\mathbb C|w^r=1\}$.

(2) By imposing the condition $(\det U)^s=1$.

We can equally talk about the symplectic groups $Sp_N\subset U_N$.

Subgroups 2/4

Another big class of groups of matrices comes by looking at

$$U_N^{diag}=\mathbb T^N$$

and at its subgroups. We have for instance the groups

$$\mathbb Z_{r_1}\times\ldots\times\mathbb Z_{r_N}$$

for any choice of numbers $r_1,\ldots,r_N\in\mathbb N\cup\{\infty\}$.

Subgroups 3/4

Importantly, the permutation groups SN appear as well as groups ofunitary matrices,

SN ⊂ ON ⊂ UN

by making each σ ∈ SN act on the coordinate axes of RN . Indeed,this action is clearly isometric, so SN ⊂ ON .

Subgroups 4/4

In fact, any finite group $G$ appears as a group of unitary matrices. Indeed, we can make $G$ act on itself, by left multiplication,

$$G\subset S_G\ ,\quad\sigma_g(h)=gh$$

and so with $N=|G|$ we have embeddings as follows:

$$G\subset S_N\subset O_N\subset U_N$$

However, examples such as $D_N\subset O_2$ show that each finite group $G$ has its own "privileged" embedding $G\subset U_N$.

Symmetry and reflection groups

Teo Banica

"Introduction to matrix groups", 2/6

08/20

Finite groups 1/3

Theorem. Any finite group is a permutation group.

Proof. Given a finite group G , we have an embedding as follows:

G ⊂ SG , σg (h) = gh

In other words, we have G ⊂ SN , with N = |G |.

Finite groups 2/3

Theorem. Any finite group appears as a group of orthogonal matrices.

Proof. This is true for SN , which can be regarded as being thepermutation group of the N coordinate axes of RN :

SN ⊂ ON

Thus, given a group G of finite order N <∞, we have:

G ⊂ SN ⊂ ON

Finite groups 3/3

Conclusion. The following are the same thing:

(1) The finite groups.

(2) The subgroups G ⊂ SN .

(3) The finite subgroups G ⊂ ON .

(4) The finite subgroups G ⊂ UN .

Problem. Given a finite group G , what is the "best" embedding oftype G ⊂ UN , say with N ∈ N being smallest possible?

Comment. This is a "representation theory" problem.

Dihedral groups 1/4

Theorem. Consider the cyclic group $\mathbb Z_N$.

(1) We have an embedding $\mathbb Z_N\subset O_2$, given by:

$$k\to\begin{pmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{pmatrix}\ ,\quad t=\frac{2k\pi}N$$

(2) We have an embedding $\mathbb Z_N\subset O_N$, given by:

$$k\to[e_i\to e_{i+k}]$$

(3) We have an embedding $\mathbb Z_N\subset U_1$, given by:

$$k\to(w^k)\ ,\quad w=e^{2\pi i/N}$$

Comment. (2) is nicer than (1), and (3) beats everything.

Dihedral groups 2/4

Definition. The dihedral group $D_N$ is the group of symmetries of a regular $N$-gon.

Examples.

(1) At $N=3$ we have 3 symmetries, with respect to the 3 medians of the triangle $\triangle$, as well as 3 rotations, of angles $0^\circ,120^\circ,240^\circ$.

(2) At $N=4$ we have 4 symmetries, with respect to $Ox,Oy$ and the diagonals of the square $\square$, and 4 rotations, of angles $0^\circ,90^\circ,180^\circ,270^\circ$.

Dihedral groups 3/4

Theorem. The dihedral group $D_N$ has $2N$ elements, as follows:

(1) $N$ rotations, of angles $2k\pi/N$, with $k=0,1,\ldots,N-1$. These form a copy $\mathbb Z_N\subset D_N$ of the cyclic group $\mathbb Z_N$.

(2) $N$ symmetries, with respect to the $N$ medians when $N$ is odd, and to the $N/2+N/2$ symmetry axes, when $N$ is even.

In addition, we have a crossed product formula of type $D_N=\mathbb Z_N\rtimes\mathbb Z_2$.

Proof. (1) and (2) are clear. Regarding the last part, $D_N$ has the same number of elements as $\mathbb Z_N\times\mathbb Z_2$, but is not abelian. Thus, we must "twist" the product of $\mathbb Z_N\times\mathbb Z_2$ in order to obtain $D_N$.

Dihedral groups 4/4

Theorem. Consider the dihedral group $D_N$.

(1) We have an embedding $D_N\subset O_2$, given by the usual rotation and symmetry matrices:

$$R_k\to\begin{pmatrix}\cos t&-\sin t\\ \sin t&\cos t\end{pmatrix}\ ,\quad t=\frac{2k\pi}N$$

$$S_k\to\begin{pmatrix}\cos t&\sin t\\ \sin t&-\cos t\end{pmatrix}\ ,\quad t=\frac{2k\pi}N$$

(2) We have an embedding $D_N\subset O_N$, obtained by placing the vertices of the $N$-gon on the coordinate axes of $\mathbb R^N$, at distance 1 from 0:

$$\sigma\to[e_i\to e_{\sigma(i)}]$$

(3) We cannot have an embedding $D_N\subset U_1$, because the group $U_1$ is abelian, and $D_N$ is not abelian.

Symmetric groups 1/4

Theorem. The permutation group $S_N$ has $N!$ elements.

Proof. In order to construct a permutation $\sigma\in S_N$, we must:

(1) Choose $\sigma(1)$, and there are $N$ choices here.

(2) Choose $\sigma(2)$, and there are $N-1$ choices left.

$\vdots$

(N) Choose $\sigma(N)$, and there is 1 choice left.

Thus, we have a total of $N(N-1)\ldots1=N!$ choices.

Symmetric groups 2/4

Theorem. We have an embedding $S_N\subset O_N$, given by:

$$\sigma\to[e_i\to e_{\sigma(i)}]$$

By using the standard $e_{ij}:e_j\to e_i$ notation, the formula is:

$$\sigma\to\sum_ie_{\sigma(i)i}$$

In matrix notation, and with Kronecker symbols, the formula is:

$$\sigma\to[\delta_{i\sigma(j)}]_{ij}$$

Proof. The first assertion is clear, because the transformations $e_i\to e_{\sigma(i)}$ are isometries of $\mathbb R^N$, and the rest is clear too.
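A minimal NumPy sketch of these permutation matrices, using the $[\delta_{i\sigma(j)}]_{ij}$ formula, with permutations written with indices $0,\ldots,N-1$:

```python
import numpy as np

def perm_matrix(sigma):
    """Orthogonal matrix of a permutation: entries delta_{i, sigma(j)}."""
    N = len(sigma)
    M = np.zeros((N, N))
    for j in range(N):
        M[sigma[j], j] = 1   # column j is e_{sigma(j)}, i.e. e_j -> e_{sigma(j)}
    return M

sigma = [1, 2, 0]            # the 3-cycle 0 -> 1 -> 2 -> 0
M = perm_matrix(sigma)
assert np.allclose(M.T @ M, np.eye(3))          # orthogonal
assert np.allclose(M @ [1, 0, 0], [0, 1, 0])    # e_0 -> e_1
```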

Symmetric groups 3/4

Theorem. The permutation matrices SN ⊂ ON are precisely the 0-1matrices having a 1 entry on each row and column.

Theorem. The trace of a permutation matrix σ ∈ SN ⊂ ON is thenumber of its fixed points.

Proofs. Both these results are clear from definitions.

Symmetric groups 4/4

Theorem. The determinant of the permutation matrices

$$\det(\sigma)\in\{\pm1\}$$

coincides with the signature of the permutations,

$$\varepsilon(\sigma)=(-1)^c$$

where $c$ is the number of inversions.

Proof. This is clear with any of the definitions of $\det$.

Comment. Thus, $S_N\cap SO_N=A_N$, the alternating group.

Reflection groups 1/4

Definition. The hyperoctahedral group $H_N$ is the symmetry group of the hypercube $\square_N\subset\mathbb R^N$.

Comment. Thus, we have by definition $H_N\subset S_{2^N}$, coming from the action on the $2^N$ vertices of the hypercube.

Example. We have $H_2=D_4$.

Problem. $|H_N|=?$

Reflection groups 2/4

Theorem. The group $H_N$ appears as well as the group of signed permutations of the coordinate axes of $\mathbb R^N$, so we have

$$H_N\subset O_N$$

with the image consisting of the $-1,0,1$ matrices having exactly one $\pm1$ entry on each row and each column. Thus we have:

$$|H_N|=2^NN!$$

Comment. One can prove that $H_N=S_N\ltimes\mathbb Z_2^N$, which is also written as $H_N=\mathbb Z_2\wr S_N$, a wreath product.

Reflection groups 3/4

Definition. The reflection group $H_N^s$, depending on parameters

$$N\in\mathbb N\ ,\quad s\in\mathbb N\cup\{\infty\}$$

is the group of $N\times N$ matrices having entries in

$$\mathbb Z_s\cup\{0\}$$

and having exactly one nonzero entry on each row and each column.

Examples. At $s=1$ we obtain $S_N$, and at $s=2$ we obtain $H_N$. In general, at $s<\infty$, we have a certain finite group $H_N^s\subset U_N$. At $s=\infty$ we have a group $K_N\subset U_N$, which is no longer finite.

Reflection groups 4/4

One can prove that the "complex reflection groups" are:

– The above groups $H_N^s=\mathbb Z_s\wr S_N$.

– Their subgroups $H_N^{sd}$ given by the condition $\det^d=1$.

– And some exceptional examples.

Symmetric groups and Poisson laws

Teo Banica

"Introduction to matrix groups", 3/6

08/20

Characters 1/3

Definition. A representation of a finite group G is a morphism

π : G → UN

and the character of this representation is the map

χ : G → C

obtained by taking the trace of the images of group elements:

χ(g) = Tr(π(g))

When G comes as G ⊂π UN , we call χ the "main character".

Characters 2/3

Remark. The characters are central functions on the group, in thesense that they satisfy the following condition:

χ(gh) = χ(hg)

We will see later that any central function on the group is a linearcombination of characters. This is something non-trivial.

Remark. We can talk, more generally, about representations andcharacters of compact groups, with the representations

π : G → UN

being assumed to be continuous. We will do this later on.

Characters 3/3

Problem. Given $\pi:G\to U_N$, we want to compute the law of:

$$\chi={\rm Tr}\circ\pi:G\to\mathbb C$$

That is, we would like to compute the following probabilities,

$$P(\chi=k)\in[0,1]\ ,\quad k\in\mathbb C$$

and then the complex discrete measure encoding them:

$$\mu=\sum_{k\in\mathbb C}P(\chi=k)\delta_k$$

There are many motivations for this question. Details later.

Fixed points 1/4

Theorem. For the symmetric group $S_N$, regarded as subgroup

$$S_N\subset O_N$$

permuting the coordinate axes of $\mathbb R^N$, the main character is

$$\chi(\sigma)=\#\left\{i\Big|\sigma(i)=i\right\}$$

and its law is a discrete probability measure, supported on $\mathbb N$.

Proof. Each $\sigma\in S_N\subset O_N$ is a 0-1 matrix, whose trace ${\rm Tr}(\sigma)$ counts the 1 entries on the diagonal, which correspond to the fixed points.

Fixed points 2/4

Theorem. The probability for a permutation $\sigma\in S_N$ to be a derangement (no fixed points) is, in the $N\to\infty$ limit:

$$P_0\simeq\frac1e$$

Proof. We must be outside the union $F=\bigcup_iF_i$, where:

$$F_i=\left\{\sigma\in S_N\Big|\sigma(i)=i\right\}$$

The inclusion-exclusion principle gives:

$$|F^c|=N!-\sum_i|F_i|+\sum_{i<j}|F_i\cap F_j|-\sum_{i<j<k}|F_i\cap F_j\cap F_k|+\ldots$$

We obtain $P_0=1-\frac1{1!}+\frac1{2!}-\frac1{3!}+\ldots\simeq\frac1e$, as claimed.
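This limit can be observed numerically; here is a minimal NumPy sketch, estimating the derangement probability by sampling random permutations:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 100, 100_000
fixed = np.zeros(trials, dtype=int)
for k in range(trials):
    sigma = rng.permutation(N)
    fixed[k] = np.sum(sigma == np.arange(N))   # number of fixed points

print(np.mean(fixed == 0), 1 / np.e)   # both close to 0.3679
```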

Fixed points 3/4

Theorem. The probability for a permutation $\sigma\in S_N$ to have exactly $k\in\mathbb N$ fixed points is

$$P_k\simeq\frac1e\cdot\frac1{k!}$$

once again in the $N\to\infty$ limit.

Proof. We already know that the result holds at $k=0$. In general the proof is similar, by using the inclusion-exclusion principle.

Fixed points 4/4

Theorem. The character of the standard representation

$$S_N\subset O_N$$

obtained by permuting the coordinate axes of $\mathbb R^N$ is given by

$$\chi(\sigma)=\#\left\{i\Big|\sigma(i)=i\right\}$$

and follows with $N\to\infty$ the following law:

$$p_1=\frac1e\sum_k\frac{\delta_k}{k!}$$

Proof. This follows by putting together the above results.

Poisson laws 1/4

Definition. The Poisson law of parameter 1 is:

$$p_1=\frac1e\sum_k\frac{\delta_k}{k!}$$

More generally, the Poisson law of parameter $t>0$ is:

$$p_t=e^{-t}\sum_k\frac{t^k}{k!}\delta_k$$

Remark. These laws have indeed mass 1.

Poisson laws 2/4

Theorem. We have the following formula, for any $s,t>0$:

$$p_s*p_t=p_{s+t}$$

Proof. By using $\delta_k*\delta_l=\delta_{k+l}$ and the binomial formula:

$$p_s*p_t=e^{-s}\sum_k\frac{s^k}{k!}\delta_k*e^{-t}\sum_l\frac{t^l}{l!}\delta_l=e^{-s-t}\sum_n\delta_n\sum_{k+l=n}\frac{s^kt^l}{k!l!}=e^{-s-t}\sum_n\frac{(s+t)^n}{n!}\delta_n$$

Thus, we obtain the Poisson law $p_{s+t}$, as claimed.

Poisson laws 3/4

Theorem. The Fourier transform of $p_t$ is given by:

$$F_{p_t}(x)=\exp\left((e^{ix}-1)t\right)$$

Proof. By using $F_f(x)=E(e^{ixf})$, we obtain:

$$F_{p_t}(x)=e^{-t}\sum_k\frac{t^k}{k!}e^{ikx}=e^{-t}\sum_k\frac{(e^{ix}t)^k}{k!}=\exp(-t)\exp(e^{ix}t)$$

Thus, we obtain the formula in the statement.

Poisson laws 4/4

Theorem. We have the following convergence, in moments:

$$\left(\left(1-\frac tn\right)\delta_0+\frac tn\delta_1\right)^{*n}\to p_t$$

Proof. With $\mu_n=(1-\frac tn)\delta_0+\frac tn\delta_1$, we have the following computation:

$$F_{\delta_s}(x)=e^{isx}\implies F_{\mu_n}(x)=\left(1-\frac tn\right)+\frac tne^{ix}$$

$$\implies F_{\mu_n^{*n}}(x)=\left(\left(1-\frac tn\right)+\frac tne^{ix}\right)^n=\left(1+\frac{(e^{ix}-1)t}n\right)^n$$

$$\implies F(x)=\exp\left((e^{ix}-1)t\right)$$

Thus, we obtain the Fourier transform of $p_t$.
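This Poisson Limit Theorem can be observed numerically as well; here is a minimal NumPy sketch, with the $n$-fold convolution sampled as a sum of $n$ independent Bernoulli($t/n$) variables:

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
t, n, trials = 2.0, 1000, 100_000

# sum of n independent Bernoulli(t/n) variables, i.e. the n-fold convolution
samples = rng.binomial(n, t / n, size=trials)

# compare empirical frequencies with the Poisson law p_t
for k in range(6):
    print(k, np.mean(samples == k), exp(-t) * t**k / factorial(k))
```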

Truncation 1/4

Problem. We know that for SN ⊂ ON with N →∞, the maincharacter follows the Poisson law p1.

What about the general Poisson law pt , of parameter t > 0? Canwe obtain this law in the representation theory context?

Truncation 2/4

Definition. Given a group representation $\pi:G\to U_N$, its truncated character with respect to a parameter $t\in(0,1]$,

$$\chi_t:G\to\mathbb C$$

is the map given by the following formula:

$$\chi_t(g)=\sum_{i=1}^{[tN]}\pi(g)_{ii}$$

When $G$ comes as a group of matrices, $G\subset_\pi U_N$, we call this map $\chi_t$ the "main truncated character" of the group.

Truncation 3/4

Theorem. The main truncated character of the symmetric group

$$S_N\subset O_N$$

which permutes the coordinate axes of $\mathbb R^N$, is given by

$$\chi_t(\sigma)=\#\left\{i\in\{1,\ldots,[tN]\}\Big|\sigma(i)=i\right\}$$

and follows with $N\to\infty$ the Poisson law of parameter $t$,

$$p_t=e^{-t}\sum_k\frac{t^k}{k!}\delta_k$$

for any value of the parameter $t\in(0,1]$.

Truncation 4/4

Proof. We already know that the formula holds at $t=1$. The same method, inclusion-exclusion, gives, more generally:

$$\lim_{N\to\infty}P(\chi_t=k)=e^{-t}\cdot\frac{t^k}{k!}$$

Thus, we obtain with $N\to\infty$ the Poisson law $p_t$, as claimed.

Comment. We will see later extensions and interpretations of all this, in the advanced representation theory context.
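As a numerical illustration, here is a minimal NumPy sketch of the truncated character of $S_N$, whose law is indeed approximately Poisson($t$):

```python
import numpy as np
from math import exp, factorial

rng = np.random.default_rng(0)
N, t, trials = 200, 0.5, 50_000
cut = int(t * N)

# chi_t(sigma) = number of fixed points of sigma among {1, ..., [tN]}
chi = np.array([np.sum(rng.permutation(N)[:cut] == np.arange(cut))
                for _ in range(trials)])

for k in range(5):
    print(k, np.mean(chi == k), exp(-t) * t**k / factorial(k))   # Poisson(t)
```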

Complex reflections and Bessel laws

Teo Banica

"Introduction to matrix groups", 4/6

08/20

Reflection groups 1/3

Definition. The reflection group $H_N^s$, depending on parameters

$$N\in\mathbb N\ ,\quad s\in\mathbb N\cup\{\infty\}$$

is the group of $N\times N$ matrices with entries in

$$\mathbb Z_s\cup\{0\}$$

having one nonzero entry on each row and each column.

Examples. At $s=1$ we have the symmetric group $S_N\subset O_N$. At $s=2$ we have the hyperoctahedral group $H_N\subset O_N$. At $s=3,4,\ldots$ we have a certain finite subgroup $H_N^s\subset U_N$. At $s=\infty$ we have a certain infinite subgroup $K_N\subset U_N$.

Reflection groups 2/3

Theorem. We have $H_N^s=\mathbb Z_s\wr S_N$, a wreath product decomposition.

Proof. This basically says that the elements $g\in H_N^s$ appear as permutations $\sigma\in S_N$ "decorated" with signs $\varepsilon\in\mathbb Z_s^N$, which is something that we already know, from the matrix picture.

Theorem. The irreducible complex reflection groups are

$$H_N^{sd}=\left\{U\in H_N^s\Big|\det U\in\mathbb Z_d\right\}$$

along with 34 exceptional examples.

Proof. This is something complicated, due to Shephard and Todd.

Reflection groups 3/3

Theorem. The groups $H_N^s$ are easy, in the sense that the spaces

$$C_{kl}={\rm Hom}(\pi^{\otimes k},\pi^{\otimes l})$$

are Brauer type algebras, spanned by diagrams.

Proof. This holds indeed, with the categories $D(k,l)\subset P(k,l)$ being defined by the condition $\#\circ=\#\bullet\ (s)$, as a weighted count modulo $s$, in each block.

Problem. What is the law of the main character $\chi$ for $H_N^s$? And, what about the laws of the truncated characters $\chi_t$?

Comment. At $s=1$, where the group is $S_N$, we have $\chi\sim p_1$, and more generally $\chi_t\sim p_t$, Poisson laws, with $N\to\infty$.

Real reflections 1/4

Definition. The hyperoctahedral group $H_N\subset O_N$ is:

(1) The symmetry group of the unit hypercube $\square_N\subset\mathbb R^N$.

(2) The group of symmetries of the $N$ coordinate axes of $\mathbb R^N$.

(3) The group of permutation-like matrices with $\pm1$ entries.

Theory. We have $H_N=\mathbb Z_2\wr S_N$, the reflection subgroups reduce to $SH_N=H_N\cap SO_N$, and we have easiness, with $D=P_{even}$.

Real reflections 2/4

Theorem. The laws of truncated characters for $H_N$ are

$$law(\chi_t)\simeq e^{-t}\sum_{k=-\infty}^\infty\delta_k\sum_{p=0}^\infty\frac{(t/2)^{|k|+2p}}{(|k|+p)!\,p!}$$

for any $t\in(0,1]$, in the $N\to\infty$ limit.

Proof. Inclusion-exclusion principle, exactly as for $S_N$, but this time with the permutations $\sigma\in S_N$ being decorated by signs $\varepsilon\in\mathbb Z_2^N$.

Real reflections 3/4

Remark. The limiting truncated character law for $H_N$ is

$$b_t=e^{-t}\sum_{k\in\mathbb Z}\delta_kf_k(t/2)$$

where $f_k$ is the Bessel function of the first kind:

$$f_k(t)=\sum_{p=0}^\infty\frac{t^{|k|+2p}}{(|k|+p)!\,p!}$$

Due to this fact, we call $b_t$ the Bessel law, of parameter $t$.

Real reflections 4/4

Theorem. The Bessel laws $b_t$ have the semigroup property

$$b_s*b_t=b_{s+t}$$

with respect to the usual convolution of real measures.

Theorem. The Bessel laws are compound Poisson laws,

$$b_t=law(a-b)$$

with $a,b$ being independent, both following the Poisson law $p_{t/2}$.

Proofs. Similar to the proofs for $S_N$, using the Fourier transform.

Bessel laws 1/4

Theorem. Given a compactly supported positive measure $\nu$ on $\mathbb R$, having mass $t=mass(\nu)$, the following limit converges,

$$p_\nu=\lim_{n\to\infty}\left(\left(1-\frac tn\right)\delta_0+\frac1n\nu\right)^{*n}$$

and the measure $p_\nu$ is called compound Poisson law. For

$$\nu=\sum_{i=1}^st_i\delta_{z_i}$$

with $t_i>0$ and $z_i\in\mathbb R$, we have the formula

$$p_\nu=law\left(\sum_{i=1}^sz_i\alpha_i\right)$$

whenever the variables $\alpha_i$ are Poisson($t_i$), independent.

Bessel laws 2/4

Definition. The higher Bessel laws are the compound Poisson laws

$$b_t^s=p_{t\varepsilon_s}$$

with $\varepsilon_s$ being the uniform measure on the $s$-th roots of unity.

Comments. By the above, this means that we have:

$$b_t^s=\lim_{n\to\infty}\left(\left(1-\frac tn\right)\delta_0+\frac tn\varepsilon_s\right)^{*n}$$

Equivalently, we have the following formula,

$$b_t^s=law\left(\sum_{r=1}^sw^r\alpha_r\right)$$

where $w=e^{2\pi i/s}$, and where $\alpha_r\sim p_{t/s}$, independent.

Bessel laws 3/4

Examples.

(1) At s = 1 we obtain the Poisson laws pt .

(2) At s = 2 we obtain the Bessel laws bt .

(3) At s = 3, 4, . . . we obtain certain discrete complex measures.

(4) At s =∞ we obtain certain complex measures Bt .

Bessel laws 4/4

Theorem. The Fourier transform of $b_t^s$ is given by:

$$F_{b_t^s}(y)=\exp\left(\frac ts\sum_{r=1}^s(e^{iw^ry}-1)\right)$$

Theorem. The Bessel laws form a convolution semigroup:

$$b_t^s*b_{t'}^s=b_{t+t'}^s$$

Proofs. The first formula is clear from the $b_t^s=law(\sum_{r=1}^sw^r\alpha_r)$ interpretation, and the second formula follows from it.

Complex reflections 1/4

Definition. The reflection group $H_N^s$, depending on parameters

$$N\in\mathbb N\ ,\quad s\in\mathbb N\cup\{\infty\}$$

is the group of $N\times N$ matrices with entries in

$$\mathbb Z_s\cup\{0\}$$

having one nonzero entry on each row and each column.

Examples. At $s=1$ we have the symmetric group $S_N\subset O_N$. At $s=2$ we have the hyperoctahedral group $H_N\subset O_N$. At $s=3,4,\ldots$ we have a certain finite subgroup $H_N^s\subset U_N$. At $s=\infty$ we have a certain infinite subgroup $K_N\subset U_N$.

Complex reflections 2/4

Theorem. The laws of truncated characters for $H_N^s$ are

$$law(\chi_t)\simeq b_t^s$$

for any $t\in(0,1]$, in the $N\to\infty$ limit.

Proof. Inclusion-exclusion principle, exactly as for $S_N$, but this time with the permutations $\sigma\in S_N$ being decorated by signs $\varepsilon\in\mathbb Z_s^N$.

Remark. This extends and unifies all our previous results.

Complex reflections 3/4

In order to further extend all this, a first idea would be to look at the general series of complex reflection groups:

$$H_N^{sd}=\left\{U\in H_N^s\Big|\det U\in\mathbb Z_d\right\}$$

However, this does not seem to bring new laws, at least at order 0. The study of the fluctuations is an interesting problem.

Complex reflections 4/4

Another type of extension comes by staying with $H_N^s$, but looking at the fluctuations of the characters

$$g\to{\rm Tr}(g)$$

or of the truncated characters

$$g\to\sum_{i=1}^{[tN]}g_{ii}\ ,\quad t\in(0,1]$$

or of the Diaconis-Shahshahani variables

$$g\to{\rm Tr}(g^k)\ ,\quad k\in\mathbb N$$

and so on. Things here are quite well understood at $s=1,2$.

Representations of compact groups

Teo Banica

"Introduction to matrix groups", 5/6

08/20

Representations 1/3

Definition. Given a closed subgroup $G\subset U_N$, its representations are the continuous morphisms into unitary groups:

$$\rho:G\to U_n$$

As a basic example, we have the embedding $G\subset U_N$, called fundamental representation, and denoted $\pi$.

Comment. We will assume that our representations are "smooth", in the sense that their coefficients are polynomials in the entries $g_{ij}$ and their conjugates.

Definition. The representations of G are subject to:(1) Making sums: ρ+ ν : g → diag(ρ(g), ν(g)).(2) Making products: ρ⊗ ν : g → ρ(g)⊗ ν(g).(3) Taking conjugates: ρ : g → ρ(g).

Definition. Given G ⊂π UN , its Peter-Weyl representations

π⊗k , k = ◦ • • ◦ . . .

are the representations obtained by tensoring π, π.

Representations 3/3

Definition. Given $\rho:G\to U_n$ and $\nu:G\to U_m$, we set:

$$Hom(\rho,\nu)=\left\{T\in M_{m\times n}(\mathbb C)\Big|T\rho(g)=\nu(g)T\right\}$$

and we use the following conventions:

(1) $Fix(\rho)=Hom(1,\rho)$ and $End(\rho)=Hom(\rho,\rho)$.

(2) $\rho\sim\nu$ when $Hom(\rho,\nu)$ contains an invertible element.

(3) $\rho$ is called irreducible, $\rho\in Irr(G)$, when $End(\rho)=\mathbb C1$.

Definition. Given $G\subset_\pi U_N$, the collection of vector spaces

$$C_{kl}=Hom(\pi^{\otimes k},\pi^{\otimes l})$$

with $k,l=\circ\bullet\bullet\circ\ldots$ is called the Tannakian category of $G$.

Peter-Weyl 1/7

Theorem (PW1). Any representation $\rho:G\to U_n$ decomposes as

$$\rho=\rho_1+\ldots+\rho_k$$

a direct sum of irreducible representations.

Proof. Consider the intertwiner algebra of our representation:

$$A=End(\rho)\subset M_n(\mathbb C)$$

By writing its unit as $1=q_1+\ldots+q_k$, with the $q_i$ being minimal projections, we obtain a decomposition as follows:

$$A=M_{n_1}(\mathbb C)\oplus\ldots\oplus M_{n_k}(\mathbb C)$$

We can now define a subrepresentation $\rho_i$ by restricting $\rho$ to the space $Im(q_i)$, which is invariant, and the result follows.

Peter-Weyl 2/7

Theorem (PW2). Any irreducible representation $\rho:G\to U_n$ appears inside a certain Peter-Weyl representation $\pi^{\otimes k}$.

Proof. Given a representation $\rho:G\to U_n$, consider its space of coefficients, $C_\rho=span(g\to\rho(g)_{ij})$. Then $\rho\to C_\rho$ is functorial, mapping subrepresentations into subspaces. We have:

$$\langle C_\pi\rangle=\sum_kC_{\pi^{\otimes k}}$$

By smoothness, $C_\rho\subset\langle C_\pi\rangle$, so for certain exponents $k_1,\ldots,k_p$:

$$C_\rho\subset C_{\pi^{\otimes k_1}\oplus\ldots\oplus\pi^{\otimes k_p}}$$

Thus we have $\rho\subset\pi^{\otimes k_1}\oplus\ldots\oplus\pi^{\otimes k_p}$, and PW1 gives the result.

Peter-Weyl 3/7

Theorem. Any closed subgroup G ⊂ UN has a Haar measure,

µ(gE) = µ(Eg) = µ(E)

which can be constructed by starting with any strictly positive probability measure ν, and taking the following Cesàro limit:

µ = lim_{r→∞} (1/r) ∑_{k=1}^{r} ν^{∗k}

Moreover, for any representation ρ : G → Un, the matrix

P = ( ∫_G ρ(g)_ij dg )_ij

is the projection onto Fix(ρ) = {ξ ∈ C^n | ρ(g)ξ = ξ, ∀g ∈ G}.

Peter-Weyl 4/7

Proof. Our first claim is that given any positive, mass 1 measure ν on our group G, not necessarily strictly positive, the limit

∫^ν_G f = lim_{r→∞} (1/r) ∑_{k=1}^{r} ∫_G f(g) dν^{∗k}(g)

exists, and that for any representation ρ : G → Un, the matrix

P = ( ∫^ν_G ρ(g)_ij )_ij

is the projection onto the 1-eigenspace of the matrix:

M = ( ∫_G ρ(g)_ij dν(g) )_ij

This is indeed standard algebra, on the coefficient space Cρ.

Peter-Weyl 5/7

End of proof. Assuming now that ν is strictly positive, we must prove that Mξ = ξ implies ξ ∈ Fix(ρ). Let us set:

f(g) = ∑_i | ∑_j ρ(g)_ij ξ_j − ξ_i |² = ||ρ(g)ξ − ξ||²

We must prove that f = 0. Since ρ(g) is unitary, we obtain:

f(g) = 2( ||ξ||² − Re⟨ρ(g)ξ, ξ⟩ )

By using now Mξ = ξ, we obtain from this, by integrating:

∫_G f(g) dν(g) = 0

Since f ≥ 0 is continuous, and ν is strictly positive, we have f = 0, and so ξ ∈ Fix(ρ), as desired.
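Comment. The Cesàro construction can be tested directly on a finite group, where convolution is a finite sum. Here is a minimal sketch (assuming Python; the choice of G = S3 and of the starting measure ν is illustrative):

```python
from itertools import permutations

def convolve(mu, nu):
    """Convolution of two measures on S_3, stored as {permutation: mass}."""
    out = {}
    for g, p in mu.items():
        for h, q in nu.items():
            gh = tuple(g[h[i]] for i in range(len(h)))  # composition g.h
            out[gh] = out.get(gh, 0.0) + p * q
    return out

elements = list(permutations(range(3)))
# a strictly positive, non-uniform starting measure nu
nu = {g: (3.0 if i < 3 else 0.5) for i, g in enumerate(elements)}
total = sum(nu.values())
nu = {g: p / total for g, p in nu.items()}

# Cesaro average (1/r) * sum_{k=1..r} nu^{*k}  ->  Haar = uniform measure
r, power = 200, dict(nu)
avg = {g: 0.0 for g in elements}
for _ in range(r):
    for g in elements:
        avg[g] += power[g] / r
    power = convolve(power, nu)

print({g: round(p, 3) for g, p in avg.items()})  # each mass ≈ 1/6 ≈ 0.167
```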

Peter-Weyl 6/7

Theorem (PW3). The space C(G) = ⟨Cπ⟩ decomposes as

C(G) = ⊕_{ρ∈Irr(G)} M_{dim(ρ)}(C)

the summands being pairwise orthogonal with respect to ∫_G.

Proof. We must prove that for ρ, ν ∈ Irr(G) we have:

ρ ≁ ν =⇒ Cρ ⊥ Cν

The matrix P given by P_{ia,jb} = ∫_G ρ_ij ν̄_ab is the projection onto:

Fix(ρ ⊗ ν̄) ≃ Hom(ρ, ν) = {0}

Thus we have P = 0, and this gives the result.

Peter-Weyl 7/7

Theorem (PW4). The characters of irreducible representations

χρ : G → C , g → Tr(ρ(g))

belong to the algebra of "smooth central functions"

C(G)_central = { f ∈ C(G) | f(gh) = f(hg) }

and form an orthonormal basis of it.

Proof. The only tricky assertion is the norm 1 one. But, with n = dim(ρ), we have:

∫_G χρ χ̄ρ = ∑_{ij} ∫_G ρ_ii ρ̄_jj = ∑_i 1/n = 1

Here we have used the fact that the integrals ∫_G ρ_ij ρ̄_kl form the orthogonal projection onto Fix(ρ ⊗ ρ̄) ≃ End(ρ) = C1.
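Comment. For a finite group the Haar integral is just the average over the group, so the norm 1 property can be verified directly. Here is a minimal sketch (assuming Python with numpy), for the standard 2-dimensional irreducible representation of S3, whose character is χperm − 1:

```python
import numpy as np
from itertools import permutations

# Permutation matrices of S_3; the permutation representation splits
# as trivial (+) standard, so the standard character is trace - 1.
chi = []
for s in permutations(range(3)):
    M = np.zeros((3, 3))
    M[list(s), range(3)] = 1  # column i has its 1 in row s(i)
    chi.append(np.trace(M) - 1)

chi = np.array(chi)
print(np.mean(chi * chi))  # integral of |chi|^2 over S_3: prints 1.0
```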

Easiness 1/3

Theorem. The closed subgroups G ⊂ UN are in correspondence with the Tannakian categories C = (Ckl), via the construction

Ckl = Hom(π⊗k, π⊗l)

in one sense, and via the construction

G = { g ∈ UN | Tg^{⊗k} = g^{⊗l}T , ∀k, l, ∀T ∈ Ckl }

in the other sense.

Proof. This is something quite technical, basically due to Tannakaand Krein, and heavily using the Peter-Weyl theory.

Easiness 2/3

Definition. A collection of subsets D(k, l) ⊂ P(k, l) is called a category of partitions when it satisfies:

(1) Stability under horizontal concatenation, (π, σ) → [πσ].

(2) Stability under vertical concatenation, (π, σ) → [σπ], with matching middle symbols.

(3) Stability under the upside-down turning ∗, with ◦ ↔ •.

(4) Each D(k, k) contains the identity partition || . . . ||.

(5) Both D(∅, ◦•) and D(∅, •◦) contain the semicircle ∩.

Easiness 3/3

Definition. A closed subgroup G ⊂ UN is called easy when

Hom(π⊗k, π⊗l) = span( Tπ | π ∈ D(k, l) )

for a certain category of partitions D ⊂ P, where

Tπ(e_{i1} ⊗ . . . ⊗ e_{ik}) = ∑_{j1...jl} δπ(i1 . . . ik / j1 . . . jl) e_{j1} ⊗ . . . ⊗ e_{jl}

with δπ ∈ {0, 1} being 1 precisely when the indices fit, i.e. are constant on each block of π.
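Comment. The maps Tπ are concrete linear-algebra objects, and can be implemented directly. A minimal sketch (assuming Python with numpy; the block encoding of partitions is an illustrative convention, not from the text):

```python
import numpy as np
from itertools import product

def flat(idx, dims):
    return int(np.ravel_multi_index(idx, dims)) if dims else 0

def T_pi(blocks, k, l, N):
    """Matrix of T_pi : (C^N)^{tensor k} -> (C^N)^{tensor l}, for a partition
    of the k upper and l lower points, encoded as blocks of {0,...,k+l-1},
    with upper points 0..k-1 and lower points k..k+l-1."""
    T = np.zeros((N ** l, N ** k))
    for i in product(range(N), repeat=k):
        for j in product(range(N), repeat=l):
            idx = i + j
            # delta_pi(i/j) = 1 iff the indices are constant on each block
            if all(len({idx[p] for p in b}) == 1 for b in blocks):
                T[flat(j, (N,) * l), flat(i, (N,) * k)] = 1.0
    return T

N = 3
cap = T_pi([[0, 1]], k=0, l=2, N=N)           # semicircle: sum_j e_j (x) e_j
flip = T_pi([[0, 3], [1, 2]], k=2, l=2, N=N)  # crossing: e_i (x) e_j -> e_j (x) e_i
assert np.allclose(flip @ flip, np.eye(N * N))  # the flip is an involution
assert np.allclose(cap.T @ cap, [[float(N)]])   # <T_cap, T_cap> = N^{|blocks|}
```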

Examples 1/2

Theorem. The basic unitary and reflection groups, namely

ON ⊂ UN
∪         ∪
HN ⊂ KN

are all easy, coming from the following categories of partitions, with 𝒫 standing for the matching (two-colored) versions:

P2 ⊃ 𝒫2
∩         ∩
Peven ⊃ 𝒫even

Proof. This result, due to Brauer, and also known as Schur-Weyl duality, comes from Tannaka, by working out the details.

Examples 2/2

In addition to the above, it is known that:

(1) In the continuous case, the bistochastic groups BN ⊂ ON and CN ⊂ UN are easy as well, coming from P12, 𝒫12 (singletons and pairings).

(2) In the discrete case, SN is easy as well, coming from P itself. In fact, the reflection groups H^s_N are all easy, coming from P^s.

(3) Back to the continuous case, SU2, SO3 and SpN ⊂ UN are not easy. However, they are "super-easy", in a suitable sense.

(4) On the other hand, the general SON, SUN, and other groups constructed using det, such as H^{sd}_N, are definitely not easy.

Probability on compact groups

Teo Banica

"Introduction to matrix groups", 6/6

08/20

Characters 1/3

Problem. Given a closed subgroup G ⊂ UN , what is the law of

χ : G → C , g → Tr(g)

with respect to the uniform integration over G?

Characters 2/3

Motivation. The moments of χ are the dimensions

Mk = dim(Fix(π⊗k))

of the fixed point spaces of tensor powers of π : G ⊂ UN .

Comment. We are mostly interested in the Tannakian category

Ckl = Hom(π⊗k, π⊗l)

and by Frobenius duality, we have identifications as follows:

Hom(π⊗k, π⊗l) ≃ Fix(π⊗k̄l)

with k̄ being the word k reversed, with colors switched. Thus, the moments of χ count the dimensions dim(Ckl).

Characters 3/3

Version. More generally, we are interested in the truncations

χt : G → C , g → ∑_{i=1}^{[tN]} g_ii

with t ∈ (0, 1], of the main character χ = χ1.

Example. For the symmetric group SN ⊂ ON we have χ ∼ p1, Poisson, and more generally χt ∼ pt for any t, in the N → ∞ limit.

Finite groups 1/4

Theorem. For the cyclic group ZN ⊂ ON we have

χ(g) = N δ_{g,0}

and the corresponding distribution is a Bernoulli law:

law(χ) = (1 − 1/N) δ_0 + (1/N) δ_N

Proof. The nontrivial cyclic matrices have 0 on the diagonal, and so trace 0, except for the identity, which has 1 everywhere on the diagonal, and so trace N.

Remark. The truncated characters and the asymptotics are not interesting. We do not have convolution semigroups.

Finite groups 2/4

Theorem. For the dihedral group DN ⊂ SN we have:

law(χ) = (3/4 − 1/(2N)) δ_0 + (1/4) δ_2 + (1/(2N)) δ_N   (N even)

law(χ) = (1/2 − 1/(2N)) δ_0 + (1/2) δ_1 + (1/(2N)) δ_N   (N odd)

Proof. The dihedral group DN consists of:
(1) N symmetries, having 1 fixed point when N is odd, and having 0 or 2 fixed points, 50–50, when N is even.
(2) N rotations, having 0 fixed points, except for the identity, which has N fixed points.

Remark. The truncations and asymptotics are not interesting.
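Comment. Since DN is finite, the law of χ can also be computed exactly by enumeration, and compared with the above formulae. A minimal sketch (assuming Python; the presentation of DN via rotations i → i + k and reflections i → k − i is the standard one):

```python
from collections import Counter
from fractions import Fraction

def dihedral_character_law(N):
    """Exact law of chi = number of fixed points, for D_N acting on Z_N."""
    counts = Counter()
    for k in range(N):
        counts[sum((i + k) % N == i for i in range(N))] += 1  # rotation by k
        counts[sum((k - i) % N == i for i in range(N))] += 1  # reflection
    return {v: Fraction(c, 2 * N) for v, c in sorted(counts.items())}

print(dihedral_character_law(6))  # {0: 2/3, 2: 1/4, 6: 1/12}: the N even formula
print(dihedral_character_law(7))  # {0: 3/7, 1: 1/2, 7: 1/14}: the N odd formula
```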

Finite groups 3/4

Theorem. For the symmetric group SN ⊂ ON we have

χt(σ) = # { i ∈ {1, . . . , [tN]} | σ(i) = i }

and we have law(χt) ≃ pt, Poisson laws, in the N → ∞ limit.

Proof. By using the inclusion-exclusion principle, we have:

P(χ = 0) = 1 − 1/1! + 1/2! − . . . + (−1)^N/N! ≃ 1/e

The same method gives successively, by generalizing:

P(χ = k) ≃ (1/e) · (1/k!) , P(χt = k) ≃ (1/e^t) · (t^k/k!)

Thus we obtain in the N → ∞ limit the Poisson laws pt.
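Comment. The Poisson behavior is easy to observe numerically. A minimal sketch (assuming Python with numpy; all parameters illustrative): we histogram the number of fixed points of a random permutation among the first [tN] points, against pt:

```python
import numpy as np
from math import exp, factorial
from collections import Counter

rng = np.random.default_rng(1)
N, t, trials = 500, 0.4, 50_000
m = int(t * N)

counts = Counter()
for _ in range(trials):
    sigma = rng.permutation(N)
    counts[int(np.sum(sigma[:m] == np.arange(m)))] += 1  # chi_t(sigma)

for k in range(5):  # empirical law vs Poisson(t) mass at k
    print(k, round(counts[k] / trials, 4), round(exp(-t) * t**k / factorial(k), 4))
```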

Finite groups 4/4

Theorem. For the complex reflection groups

H^s_N = Zs ≀ SN

we have law(χt) ≃ b^s_t, Bessel laws, in the N → ∞ limit.

Proof. The elements of H^s_N being usual permutations σ ∈ SN "decorated" with signs ε ∈ Z^N_s, we can use the same method as before, namely inclusion-exclusion, and with N → ∞ we are led to the compound Poisson laws

b^s_t = π_{t·εs}

with εs being the uniform measure on Zs. These are called Bessel laws, due to the fact that at s = 2 the density is the Bessel function of the first kind.

Lie groups 1/4

Definition. The normal law of parameter 1 is:

g_1 = (1/√(2π)) e^{−x²/2} dx

More generally, the normal law of parameter t > 0 is:

g_t = (1/√(2πt)) e^{−x²/2t} dx

These laws appear via the Central Limit Theorem (CLT).

Lie groups 2/4

Theorem. The moments of the normal laws are

M_k(g_t) = t^{k/2} × k!!

where k!! = 1 · 3 · 5 · · · (k − 1), with the convention k!! = 0 when k is odd.

Proof. We have the following computation, by partial integration:

M_k = (1/√(2πt)) ∫_R x^k e^{−x²/2t} dx
    = (1/√(2πt)) ∫_R (t x^{k−1}) (−e^{−x²/2t})′ dx
    = (1/√(2πt)) ∫_R t(k − 1) x^{k−2} e^{−x²/2t} dx

We obtain M_k = t(k − 1) M_{k−2}, which gives the result.
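Comment. The moment formula can be confirmed numerically. A minimal sketch (assuming Python with numpy; sample size illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
t = 0.7
x = rng.normal(0.0, np.sqrt(t), size=2_000_000)  # samples of g_t

def double_fact(k):
    """k!! = 1.3.5...(k-1) for k even, with the convention 0 for k odd."""
    return 0 if k % 2 else int(np.prod(np.arange(1, k, 2)))

for k in range(1, 7):  # empirical moments vs t^{k/2} k!!
    print(k, round(float(np.mean(x ** k)), 3), t ** (k / 2) * double_fact(k))
```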

Lie groups 3/4

Theorem. For the orthogonal group ON we have

law(χ) ≃ g_1

in the N → ∞ limit.

Proof. By using the Brauer easiness result, we have:

M_k(χ) = dim(Fix(π⊗k)) = dim span( Tπ | π ∈ P2(k) ) ≃ |P2(k)| = k!!

the middle estimate coming from the fact that the vectors Tπ become linearly independent with N → ∞. Thus, the main character χ has asymptotically the same moments as g_1.
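Comment. Haar-distributed orthogonal matrices can be sampled via the QR decomposition of a Gaussian matrix, which makes the asymptotic Gaussianity of χ easy to observe. A minimal sketch (assuming Python with numpy; the sign correction on R is the standard way of making QR-based sampling Haar-uniform):

```python
import numpy as np

rng = np.random.default_rng(3)

def haar_orthogonal(N):
    """Haar-uniform O(N) matrix: QR of a Gaussian matrix, with the
    columns of Q rescaled by the signs of the diagonal of R."""
    Q, R = np.linalg.qr(rng.normal(size=(N, N)))
    return Q * np.sign(np.diag(R))

N, trials = 50, 20_000
traces = np.array([np.trace(haar_orthogonal(N)) for _ in range(trials)])
print(round(traces.mean(), 3), round(traces.var(), 3))  # ≈ 0 and 1, as for g_1
```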

Lie groups 4/4

The other classical Lie groups can be investigated by using the same method, and the asymptotic law of χ is as follows:

(1) For UN we obtain the complex Gaussian law G1. The proof is similar, by using M_k(G1) = |𝒫2(k)|, with k a colored exponent.

(2) For the bistochastic groups BN ⊂ ON and CN ⊂ UN we obtain shifted versions of g1, G1.

(3) The symplectic group SpN ⊂ UN is not exactly easy, but rather "super-easy", and here too we obtain the Gaussian law g1.

Truncation 1/4

Theorem. The Haar integration over G ⊂π UN is given by

∫_G g^{s1}_{i1j1} . . . g^{sk}_{ikjk} dg = ∑_{σ,τ∈Dk} δ_σ(i) δ_τ(j) W_k(σ, τ)

where Dk is a basis of Fix(π⊗k), δ_σ(i) = ⟨σ, e_{i1} ⊗ . . . ⊗ e_{ik}⟩, and W_k = G_k^{−1} is the inverse of G_k(σ, τ) = ⟨σ, τ⟩.

Proof. The integrals in the statement form the projection P onto Fix(π⊗k) = span(Dk). Consider the following linear map:

E(x) = ∑_{σ∈Dk} ⟨x, σ⟩ σ

By linear algebra we have P = WE, where W is the inverse on span(Dk) of the restriction of E, and this gives the result.

Truncation 2/4

Theorem. For an easy group GN ⊂ UN, coming from a category of partitions D = (D(k, l)), we have

∫_{GN} g^{s1}_{i1j1} . . . g^{sk}_{ikjk} dg = ∑_{σ,τ∈D(k)} δ_σ(i) δ_τ(j) W_{kN}(σ, τ)

where D(k) = D(∅, k), the δ are the usual Kronecker symbols, and W_{kN} = G_{kN}^{−1} is the inverse of G_{kN}(σ, τ) = N^{|σ∨τ|}.

Proof. The vectors associated to the partitions σ ∈ D(∅, k) are given by:

Tσ = ∑_{j1...jk} δσ(j1 . . . jk) e_{j1} ⊗ . . . ⊗ e_{jk}

Thus the Gram matrix and Kronecker symbols are those above.
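Comment. For G = ON this is the orthogonal Weingarten formula, over the pairings D(k) = P2(k), and it fits in a few lines of code. A minimal sketch (assuming Python with numpy; function names are illustrative). As a test, at k = 4 the formula gives ∫ g_11^4 dg = 3/(N(N+2)):

```python
import numpy as np

def pairings(points):
    """All pair partitions of the given list of points."""
    if not points:
        return [[]]
    a, rest = points[0], points[1:]
    out = []
    for i, b in enumerate(rest):
        rem = rest[:i] + rest[i + 1:]
        out += [[(a, b)] + p for p in pairings(rem)]
    return out

def blocks_of_join(p, q, k):
    """|p v q|: number of blocks of the join, via union-find on k points."""
    parent = list(range(k))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in p + q:
        parent[find(a)] = find(b)
    return len({find(x) for x in range(k)})

def haar_integral_ON(i, j, N):
    """Weingarten formula: integral of g_{i1 j1} ... g_{ik jk} over O_N."""
    k = len(i)
    P = pairings(list(range(k)))
    if not P:  # odd k: no pairings, the integral vanishes
        return 0.0
    G = np.array([[float(N) ** blocks_of_join(p, q, k) for q in P] for p in P])
    W = np.linalg.inv(G)  # the Weingarten matrix W_kN = G_kN^{-1}
    fits = lambda p, idx: all(idx[a] == idx[b] for a, b in p)
    return sum(W[s, t] for s, p in enumerate(P) if fits(p, i)
                       for t, q in enumerate(P) if fits(q, j))

N = 5
print(haar_integral_ON((0, 0, 0, 0), (0, 0, 0, 0), N))  # = 3/(N(N+2))
print(3 / (N * (N + 2)))
```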

Truncation 3/4

Application. We have the following computation:

∫_{GN} (g_11 + . . . + g_ss)^k dg
= ∑_{i1=1}^{s} . . . ∑_{ik=1}^{s} ∫_{GN} g_{i1i1} . . . g_{ikik} dg
= ∑_{σ,τ∈D(k)} W_{kN}(σ, τ) ∑_{i1=1}^{s} . . . ∑_{ik=1}^{s} δ_σ(i) δ_τ(i)
= ∑_{σ,τ∈D(k)} W_{kN}(σ, τ) G_{ks}(τ, σ)
= Tr(W_{kN} G_{ks})

and the s = [tN] → ∞ asymptotics can be worked out.
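Comment. At k = 4 for ON this trace formula can be evaluated directly, and compared with the 4th moment of gt, namely t² · 3!! = 3t². A minimal sketch (assuming Python with numpy; at k = 4 there are exactly three pairings, and the Gram matrix G_m(σ, τ) = m^{|σ∨τ|} has entries m² on the diagonal and m elsewhere):

```python
import numpy as np

def gram4(m):
    """Gram matrix of the three pairings of 4 points: m^2 on the
    diagonal (the join has 2 blocks), m off it (the join has 1 block)."""
    return m * (np.ones((3, 3)) + (m - 1) * np.eye(3))

N, t = 1000, 0.3
s = int(t * N)
M4 = np.trace(np.linalg.inv(gram4(N)) @ gram4(s))  # Tr(W_{4N} G_{4s})
print(round(float(M4), 4), 3 * t ** 2)  # ≈ 0.27 = 3t^2
```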

Truncation 4/4

Theorem. The truncated characters χt for the main unitary and reflection groups

ON ⊂ UN
∪         ∪
HN ⊂ KN

follow, in the N → ∞ limit, the laws

gt   Gt
bt   Bt

in the same square arrangement, and we have asymptotic independence results as well.

Proof. In the discrete case, this is something that we already know. In general, this follows by using the above results.

References 1/4

[1] M.F. Atiyah and I.G. MacDonald, Introduction to commutative algebra, Addison-Wesley (1969).

[2] R. Brauer, On algebras which are connected with the semisimple continuous groups, Ann. of Math. 38 (1937), 857–872.

[3] P. Deligne, Catégories tannakiennes, in "Grothendieck Festschrift", Birkhäuser (1990), 111–195.

[4] P. Diaconis and M. Shahshahani, On the eigenvalues of random matrices, J. Appl. Probab. 31 (1994), 49–62.

References 2/4

[5] S. Doplicher and J. Roberts, A new duality theory for compact groups, Invent. Math. 98 (1989), 157–218.

[6] V.G. Drinfeld, Quantum groups, Proc. ICM Berkeley (1986), 798–820.

[7] R. Hartshorne, Algebraic geometry, Springer (1977).

[8] F. Klein, Vergleichende Betrachtungen über neuere geometrische Forschungen, Math. Ann. 43 (1893), 63–100.

References 3/4

[9] S. Lang, Algebra, Addison-Wesley (1993).

[10] W. Rudin, Real and complex analysis, McGraw-Hill (1966).

[11] J.P. Serre, Linear representations of finite groups, Springer (1977).

[12] I.R. Shafarevich, Basic algebraic geometry, Springer (1974).

References 4/4

[13] G.C. Shephard and J.A. Todd, Finite unitary reflection groups, Canad. J. Math. 6 (1954), 274–304.

[14] T. Tannaka, Über den Dualitätssatz der nichtkommutativen topologischen Gruppen, Tôhoku Math. J. 45 (1939), 1–12.

[15] H. Weyl, The classical groups: their invariants and representations, Princeton (1939).

[16] S.L. Woronowicz, Compact matrix pseudogroups, Comm. Math. Phys. 111 (1987), 613–665.
