TRANSCRIPT
-
8/3/2019 Lectures on Quadratic Forms
1/170
Lectures on Quadratic Forms
By
C.L. Siegel
Tata Institute of Fundamental Research, Bombay
1957
(Reissued 1967)
Lectures on Quadratic Forms
By
C. L. Siegel
Notes by
K. G. Ramanathan
No part of this book may be reproduced in any
form by print, microfilm or any other means with-
out written permission from the Tata Institute of
Fundamental Research, Colaba, Bombay 5
Tata Institute of Fundamental Research, Bombay
1955-56
(Reissued 1967)
Contents

1 Vector groups and linear inequalities
1 Vector groups
2 Lattices
3 Characters
4 Diophantine approximations

2 Reduction of positive quadratic forms
1 Quadratic forms
2 Minima of definite forms
3 Half reduced positive forms
4 Two auxiliary regions
5 Space of reduced matrices
6 Binary forms
7 Reduction of lattices

3 Indefinite quadratic forms
1 Discontinuous groups
2 The H-space of a symmetric matrix
3 Geometry of the H-space
4 Reduction of indefinite quadratic forms
5 Binary forms

4 Analytic theory of indefinite quadratic forms
1 The theta series
2 Proof of a lemma
3 Transformation formulae
4 Convergence of an integral
5 A theorem in integral calculus
6 Measure of unit group and measure of representation
7 Integration of the theta series
8 Eisenstein series
9 Main theorem
10 Remarks
Chapter 1

Vector groups and linear inequalities
1 Vector groups
Let K be the field of real numbers and V a vector space of dimension n
over K. We denote elements of V by small Greek letters and elements
of K by small Latin letters. The identity element of V will be denoted
by 0 and will be called the zero element of V. We shall also denote by 0
the zero element of K.

Let $\xi_1, \dots, \xi_n$ be a base of V, so that every $\xi \in V$ can be written
\[ \xi = \sum_i \xi_i x_i, \qquad x_i \in K. \]
We call $x_1, \dots, x_n$ the coordinates of $\xi$. Suppose $\eta_1, \dots, \eta_n$ is another
basis of V; then
\[ \eta_i = \sum_j \xi_j a_{ji}, \qquad i = 1, \dots, n, \]
where $a_{ji} \in K$ and the matrix $M = (a_{ji})$ is non-singular. If in terms of
$\eta_1, \dots, \eta_n$
\[ \xi = \sum_i \eta_i y_i, \qquad y_i \in K, \]
then it is easy to see that
\[ \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix} = M \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}. \tag{1} \]

Suppose $\alpha_1, \dots, \alpha_m$ is any finite set of elements of V. We denote by
$L(\alpha_1, \dots, \alpha_m)$ the linear subspace generated in V by $\alpha_1, \dots, \alpha_m$. This
means that $L(\alpha_1, \dots, \alpha_m)$ is the set of elements of the form
\[ \alpha_1 x_1 + \dots + \alpha_m x_m, \qquad x_i \in K. \]
It is clear that $L(\alpha_1, \dots, \alpha_m)$ has dimension $\leq \operatorname{Min}(n, m)$.

Let $R_n$ denote the Euclidean space of n dimensions, so that every
point P in $R_n$ has coordinates $x_1, \dots, x_n$, $x_i \in K$. Let $\xi_1, \dots, \xi_n$ be a
basis of V and let $x_1, \dots, x_n$ be the coordinates of $\xi$ in V with regard
to this basis. Make correspond to $\xi$ the point in $R_n$ with coordinates
$x_1, \dots, x_n$. It is then easily seen that this correspondence is (1, 1). For
any $\xi \in V$ define the absolute value $|\xi|$ by
\[ |\xi|^2 = \sum_{i=1}^n x_i^2, \]
where $x_1, \dots, x_n$ are the coordinates of $\xi$. Then $|\xi - \eta|$ satisfies the
axioms of a distance function in a metric space. We introduce a topology
in V by prescribing as a fundamental system of neighbourhoods of $\alpha$ the
sets $S_d$, where $S_d$ is the set of $\xi$ in V with
\[ |\xi - \alpha| < d. \tag{2} \]
$S_d$ is called a sphere of radius d and centre $\alpha$. The topology above makes
V a locally compact abelian group. The closure $\bar{S}_d$ of $S_d$ is a compact
set. From (1) it follows that the topologies defined by different bases of
V are equivalent.

A subgroup G of V is called a vector group. The closure $\bar{G}$ of G in
V is again a vector group. We say that G is discrete if G has no limit
points in V. Clearly, therefore, a discrete vector group is closed.
Suppose G is discrete; then there is a neighbourhood of zero which
contains no element of G other than zero. For if every neighbourhood
of zero contained an element of G, then zero would be a limit point of G
in V, contradicting the discreteness of G. Since G is a group, it follows
that all elements of G are isolated in V. As a consequence we see that
every compact subset of V has only finitely many elements of G in it.

We now investigate the structure of discrete vector groups. We shall
omit the completely trivial case when the vector group G consists only
of the zero element.

Let $G \neq \{0\}$ be a discrete vector group and let $\alpha \neq 0$ be an element
of G. Consider the intersection
\[ G_1 = G \cap L(\alpha). \]
Let $d > 0$ be a large real number and consider all the $y > 0$ for
which $\alpha y$ is in $G_1$ and $y \leq d$. If d is large, this set is not empty.
Because G is discrete, it follows that there are only finitely many y with
this property. Let $q > 0$ therefore be the smallest real number such that
$\alpha_1 = \alpha q \in G_1$. Let $\beta = \alpha x$ be any element of $G_1$. Put $x = hq + k$ where
h is an integer and $0 \leq k < q$. Then $\alpha x$ and $\alpha_1 h$ are in $G_1$ and so $\alpha k$
is in $G_1$. But from the definition of q it follows that $k = 0$, that is,
\[ \beta = \alpha_1 h, \qquad h \text{ an integer}. \]
This proves that
\[ G_1 = \{\alpha_1\}, \]
the infinite cyclic group generated by $\alpha_1$.

If in G there are no elements other than those in $G_1$, then $G = G_1$.
Otherwise let us assume as induction hypothesis that in G we have found
$m\,(\leq n)$ elements $\alpha_1, \dots, \alpha_m$ which are linearly independent over K and
such that $G \cap L(\alpha_1, \dots, \alpha_m)$ consists precisely of the elements
$\alpha_1 g_1 + \dots + \alpha_m g_m$ where $g_1, \dots, g_m$ are integers. This means that
\[ G_m = G \cap L(\alpha_1, \dots, \alpha_m) = \{\alpha_1\} + \dots + \{\alpha_m\} \]
is the direct sum of m infinite cyclic groups. If in G there exist no other
elements than those in $G_m$, then $G = G_m$. Otherwise let $\alpha \in G$, $\alpha \notin G_m$. Put
\[ G_{m+1} = G \cap L(\alpha_1, \dots, \alpha_m, \alpha). \]
Consider the elements of $G_{m+1}$ of the form
\[ \beta = \alpha_1 x_1 + \dots + \alpha_m x_m + \alpha y, \qquad x_i \in K, \]
where $y \neq 0$ and $|y| \leq d$, with d a large positive real number. This set C of
elements is not empty, since it contains $\alpha$. Put now $x_i = g_i + k_i$ where
$g_i$ is an integer and $0 \leq k_i < 1$, $i = 1, \dots, m$. Let $\gamma = \alpha_1 g_1 + \dots + \alpha_m g_m$;
then $\gamma \in G_m$ and so
\[ \beta - \gamma = \alpha_1 k_1 + \dots + \alpha_m k_m + \alpha y \]
is an element of $G_{m+1}$. Thus for every such $\beta$ there exists a $\delta = \beta - \gamma \in G$ with the property
\[ \delta = \alpha_1 k_1 + \dots + \alpha_m k_m + \alpha y, \qquad 0 \leq k_i < 1, \ |y| \leq d. \]
Thus all those $\delta$'s lie in a closed sphere of radius $(m + d^2)^{1/2}$. Since G
is discrete, this point set has to be finite. Thus for the $\delta$'s in G the y
can take only finitely many values.

Therefore let $q > 0$ be the smallest value of y for which $\alpha_{m+1} =
\alpha_1 t_1 + \dots + \alpha_m t_m + \alpha q$ is in G. Let
\[ \beta = \alpha_1 x_1 + \dots + \alpha_m x_m + \alpha y \]
be in $G_{m+1}$. Put $y = qh + k$ where h is an integer and $0 \leq k < q$. Then
\[ \beta - \alpha_{m+1} h = \alpha_1 (x_1 - t_1 h) + \dots + \alpha_m (x_m - t_m h) + \alpha k \]
is in $G_{m+1}$. By the definition of q, $k = 0$. But in that case, by the induction
hypothesis, $x_i - t_i h = h_i$ is an integer. Thus
\[ \beta = \alpha_1 h_1 + \dots + \alpha_m h_m + \alpha_{m+1} h, \]
$h_1, \dots, h_m, h$ integers. This proves that
\[ G_{m+1} = \{\alpha_1\} + \dots + \{\alpha_{m+1}\} \]
is a direct sum of $m + 1$ infinite cyclic groups.

We can continue this process, but not indefinitely, since $\alpha_1, \dots,
\alpha_{m+1}, \dots$ are linearly independent. Thus after $r \leq n$ steps the process
ends. We have hence the
Theorem 1. Every discrete vector group $G \neq \{0\}$ in V is a direct sum of
r infinite cyclic groups, $0 < r \leq n$.

Conversely, the direct sum of infinite cyclic groups generated by
linearly independent elements is a discrete vector group. We have thus
obtained the structure of all discrete vector groups.
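The argument behind Theorem 1 — the smallest positive element of a discrete subgroup of a line generates it — is the same one that shows every subgroup of the integers is cyclic. A minimal sketch of that one-dimensional integer case (the helper name is ours, not from the text):

```python
from math import gcd
from functools import reduce

def cyclic_generator(multiples):
    """Generator of the discrete subgroup of Z spanned by the given
    integers: its smallest positive element, i.e. their gcd."""
    return reduce(gcd, multiples)

# The subgroup generated by 6, 10 and 14 is the infinite cyclic group {2}:
g = cyclic_generator([6, 10, 14])
assert g == 2
# every integral combination of the generators is a multiple of g
assert (6 + 10 - 14) % g == 0
```

In the vector-group setting the gcd step is replaced by choosing the smallest positive coefficient q with $\alpha q \in G_1$, exactly as in the proof above.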
We shall now study the structure of all closed vector groups.

Let G be a closed vector group. Let $S_d$ be a sphere of radius d
with the zero element of G as centre. Let $r(d)$ be the maximum number
of elements of G which are linearly independent and which lie in $S_d$.
Clearly $r(d)$ satisfies
\[ 0 \leq r(d) \leq n. \]
Also $r(d)$ is an increasing function of d, and since it is integral valued it
tends to a limit as $d \to 0$. So let
\[ r = \lim_{d \to 0} r(d). \]
This means that there exists a $d_0 > 0$ such that for $d \leq d_0$
\[ r = r(d). \]
We call r the rank of G.

Clearly $0 \leq r \leq n$. Suppose $r = 0$; then we maintain that G is
discrete. For if not, there exists a sequence $\beta_1, \beta_2, \dots$ of elements of
G with a limit point in V. Then the differences $\beta_k - \beta_l$, $k \neq l$, form
a set of elements of G with zero as a limit point, and so in every
neighbourhood of zero there will be elements of G other than zero, which
means that $r > 0$.

Conversely, if G is discrete there exists a sphere $S_d$, $d > 0$, which
contains zero but no other point of G. This means $r = 0$. Hence
\[ r = 0 \iff G \text{ is discrete}. \]

Let therefore $r > 0$, so that G is not discrete. Let d be a real number,
$0 < d \leq d_0$, so that $r(d) = r$. Let $S_d$ be a sphere around the zero element
of G and of radius d. Let $\alpha_1, \dots, \alpha_r$ be elements of G in $S_d$ which are
linearly independent. Let $t > 0$ be any real number and let $d_1 > 0$
be chosen so that $d_1 < \operatorname{Min}(d, t/n)$. Then $r(d_1) = r$. If $\beta_1, \dots, \beta_r$ are
elements of G which are linearly independent and which are contained
in the sphere $S_{d_1}$ around the zero element of G, then $L(\beta_1, \dots, \beta_r) \subseteq
L(\alpha_1, \dots, \alpha_r)$, since $S_{d_1} \subseteq S_d$. But since both have dimension r,
\[ L(\beta_1, \dots, \beta_r) = L(\alpha_1, \dots, \alpha_r). \]
Since $\beta_1, \dots, \beta_r$ are in $S_{d_1}$ we have
\[ |\beta_i| \leq d_1 \leq t/n, \qquad i = 1, \dots, r. \]
Let $\alpha \in L(\beta_1, \dots, \beta_r)$. Then by the above
\[ \alpha = \beta_1 x_1 + \dots + \beta_r x_r. \]
Put $x_i = g_i + k_i$ where $g_i$ is an integer and $0 \leq k_i < 1$. Put $\gamma = \beta_1 g_1 +
\dots + \beta_r g_r$. Since $\beta_1, \dots, \beta_r \in G$, $\gamma$ will also be in G. Now
\[ |\alpha - \gamma| = |\beta_1 k_1 + \dots + \beta_r k_r| \leq |\beta_1 k_1| + \dots + |\beta_r k_r| \leq rt/n \leq t. \]

3 Characters

Let $\alpha_1, \dots, \alpha_n$ be a basis of V such that
\[ \bar{G} = L(\alpha_1, \dots, \alpha_r) + \{\alpha_{r+1}\} + \dots + \{\alpha_{r+s}\}, \]
r and s being the integers determined by Theorem 2. If $\chi$ is a character
of G, then for $\xi = \sum_i \alpha_i x_i \in V$
\[ \chi(\xi) = x_1 \chi(\alpha_1) + \dots + x_n \chi(\alpha_n). \]
If $\xi \in G$ then $\chi(\xi)$ is integral. Therefore
\[ \chi(\alpha_i) = \begin{cases} 0 & i \leq r \\ \text{an integer} & r < i \leq r+s \\ \text{an arbitrary real number} & i > r+s. \end{cases} \]
Thus for $\xi \in \bar{G}$
\[ \chi(\xi) = \sum_{i=r+1}^{r+s} \chi(\alpha_i) x_i. \]
If $\xi \notin \bar{G}$, then because of the definition of $\alpha_1, \dots, \alpha_n$ it follows that
either at least one of $x_{r+1}, \dots, x_{r+s}$ is not an integer or at least one of
$x_{r+s+1}, \dots, x_n$ is not zero. Suppose that $\xi = \sum_i \alpha_i x_i$, $x_{r+1} \not\equiv 0 \pmod 1$.
Define the linear function $\chi$ on V by
\[ \chi(\alpha_i) = \begin{cases} 1 & \text{if } i = r+1 \\ 0 & \text{if } i \neq r+1. \end{cases} \]
Then $\chi$ is a character of G and
\[ \chi(\xi) = \chi(\alpha_{r+1})\, x_{r+1} = x_{r+1} \not\equiv 0 \pmod 1. \]
The same thing is true if $x_{r+i} \not\equiv 0 \pmod 1$, $1 \leq i \leq s$. Suppose now that
$\xi = \sum_i \alpha_i x_i$ and one of $x_{r+s+1}, \dots, x_n$, say $x_n$, is $\neq 0$. Define the linear
function $\chi$ on V by
\[ \chi(\alpha_i) = \begin{cases} 0 & \text{if } i \neq n \\ \dfrac{1}{2 x_n} & \text{if } i = n. \end{cases} \]
Then $\chi$ is a character of G and $\chi(\xi) = \frac{1}{2} \not\equiv 0 \pmod 1$. Hence if $\xi \notin \bar{G}$
there is a character $\chi$ of G which is not integral at $\xi$. We have thus
proved

Theorem 4. Let $\xi \in V$. Then $\xi \in \bar{G}$ if and only if for every character $\chi$
of G, $\chi(\xi)$ is integral.

Let us fix a basis $\alpha_1, \dots, \alpha_n$ of V so that $\alpha_1, \dots, \alpha_{r+s}$ is a basis of
$\bar{G}$. If $\chi$ is a character of G then $\chi(\alpha_i) = c_i$ where
\[ c_i = \begin{cases} 0 & i \leq r \\ \text{an integer} & r < i \leq r+s \\ \text{an arbitrary real number} & i > r+s. \end{cases} \]
If $(c_1, \dots, c_n)$ is any set of n real numbers satisfying the above condi-
tions, then the linear function $\chi$ defined on V by
\[ \chi(\xi) = \sum_{i=1}^n c_i x_i, \]
where $\xi = \sum_i \alpha_i x_i$, is a character of G. If $R_n$ denotes the space of real
n-tuples $(x_1, \dots, x_n)$, then the mapping $\chi \to (c_1, \dots, c_n)$ is seen to be
an isomorphism of $\hat{G}$, the group of characters of G, into $R_n$. Thus $\hat{G}$ is
a closed vector group of rank $n - r - s$.

It can be proved easily that the character group of $\hat{G}$ is isomorphic
to $\bar{G}$.

4 Diophantine approximations

We shall study an application of the considerations in §3 to a problem
in linear inequalities.

Let
\[ L_i(h) = \sum_{j=1}^m a_{ij} h_j, \qquad (i = 1, \dots, n) \]
be n linear forms in the m variables $h_1, \dots, h_m$ with real coefficients $a_{ij}$.
Let $b_1, \dots, b_n$ be n arbitrarily given real numbers. We consider the problem
of ascertaining necessary and sufficient conditions on the $a_{ij}$ so that,
given $t > 0$, there exist integers $h_1, \dots, h_m$ such that
\[ |L_i(h) - b_i| < t, \qquad (i = 1, \dots, n). \]

In order to study this problem, let us introduce the vector space V of
all n-rowed real columns
\[ \xi = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}, \qquad a_i \in K. \]
V has then dimension n over K. Let $\alpha_1, \dots, \alpha_m$ be the elements of V
defined by
\[ \alpha_i = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}, \qquad i = 1, \dots, m, \]
and let G be the vector group consisting of all sums $\sum_{i=1}^m \alpha_i g_i$ where
the $g_i$ are integers. Let $\beta$ be the vector
\[ \beta = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}. \]
Then our problem on linear forms is seen to be equivalent to that of
obtaining necessary and sufficient conditions that there be elements of
G as close to $\beta$ as one wishes; in other words, the condition that $\beta$ be in
$\bar{G}$. Theorem 4 now gives the answer, namely that
\[ \chi(\beta) \equiv 0 \pmod 1 \]
for every character $\chi$ of G.

Let us choose the basis $\varepsilon_1, \dots, \varepsilon_n$ of V where
\[ \varepsilon_i = \begin{pmatrix} 0 \\ \vdots \\ 1 \\ \vdots \\ 0 \end{pmatrix}, \qquad i = 1, \dots, n, \]
with zero everywhere except at the i-th place. Now in terms of this basis
\[ \alpha_k = \varepsilon_1 a_{1k} + \dots + \varepsilon_n a_{nk}, \qquad k = 1, \dots, m. \]
Therefore if $\chi$ is a character of G,
\[ \chi(\alpha_k) = \sum_{i=1}^n a_{ik} c_i \]
where $\chi(\varepsilon_i) = c_i$, $i = 1, \dots, n$. Also $\chi(\alpha_k) \equiv 0 \pmod 1$. Furthermore, if
$c_1, \dots, c_n$ are any real numbers satisfying
\[ \sum_{i=1}^n c_i a_{ik} \equiv 0 \pmod 1, \qquad k = 1, \dots, m, \]
then the linear function $\chi$ defined on V by $\chi(\varepsilon_i) = c_i$ is a character of G.
By Theorem 4, therefore,
\[ \sum_{i=1}^n c_i b_i \equiv 0 \pmod 1. \]
We have therefore the following theorem, due to Kronecker.

Theorem 5. A necessary and sufficient condition that for every $t > 0$
there exist integers $h_1, \dots, h_m$ satisfying
\[ |L_i(h) - b_i| < t, \qquad i = 1, \dots, n, \]
is that for every set $c_1, \dots, c_n$ of real numbers satisfying
\[ \sum_{i=1}^n c_i a_{ik} \equiv 0 \pmod 1, \qquad k = 1, \dots, m, \]
we should have
\[ \sum_{i=1}^n c_i b_i \equiv 0 \pmod 1. \]

We now consider the special case $m > n$. Let $m = n + q$, $q \geq 1$. Let
the linear forms be
\[ \sum_{j=1}^q a_{ij} h_j + g_i, \qquad i = 1, \dots, n, \]
in the m variables $h_1, \dots, h_q$, $g_1, \dots, g_n$. Then the vectors $\alpha_1, \dots, \alpha_m$
above are such that
\[ \alpha_{q+i} = \varepsilon_i, \qquad i = 1, \dots, n. \]
This means that if $\chi$ is a character of G, then $c_i = \chi(\varepsilon_i)$ is an integer. Thus

Corollary 1. A necessary and sufficient condition that for every $t > 0$
there exist integers $h_1, \dots, h_q$, $g_1, \dots, g_n$ satisfying
\[ \Bigl| \sum_{j=1}^q a_{ij} h_j + g_i - b_i \Bigr| < t, \qquad i = 1, \dots, n, \]
is that for every set $c_1, \dots, c_n$ of integers satisfying
\[ \sum_i c_i a_{ij} \equiv 0 \pmod 1, \qquad j = 1, \dots, q, \]
we have
\[ \sum_i c_i b_i \equiv 0 \pmod 1. \]

We now consider another special case, $q = 1$. The linear forms are
of the type
\[ a_i h + g_i - b_i, \qquad i = 1, \dots, n, \]
$a_1, \dots, a_n$, $b_1, \dots, b_n$ being real numbers. Suppose now we insist that
the condition on $b_1, \dots, b_n$ be true whatever $b_1, \dots, b_n$ are. By the above
corollary this will mean that $c_1 = c_2 = \dots = c_n = 0$; in other words,
$a_1, \dots, a_n$ have to satisfy the condition that
\[ \sum_i c_i a_i \equiv 0 \pmod 1, \qquad c_i \text{ integral}, \]
if and only if $c_i = 0$, $i = 1, \dots, n$. This is equivalent to saying that
the real numbers $1, a_1, \dots, a_n$ are linearly independent over the field of
rational numbers.

Let us denote by $R_n$ the Euclidean space of n dimensions and by $F_n$
the unit cube consisting of the points $(x_1, \dots, x_n)$ with
\[ 0 \leq x_i < 1, \qquad i = 1, \dots, n. \]
For any real number x, let $((x))$ denote the fractional part of x, i.e.
$((x)) = x - [x]$. Then

Corollary 2. If $1, a_1, \dots, a_n$ are real numbers linearly independent over
the field of rational numbers, then the points $(x_1, \dots, x_n)$, where
\[ x_i = ((h a_i)), \qquad i = 1, \dots, n, \]
are dense in the unit cube $F_n$ as h runs through all integers.
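Corollary 2 is easy to check numerically for n = 1. A sketch (the function name is ours): with $a_1 = \sqrt{2}$, which together with 1 is linearly independent over the rationals, the fractional parts $((h a_1))$ come arbitrarily close to any prescribed target in [0, 1):

```python
import math

def closest_h(a, target, h_max):
    """Return the h in 1..h_max minimizing |((h*a)) - target|."""
    return min(range(1, h_max + 1),
               key=lambda h: abs((h * a) % 1 - target))

a = math.sqrt(2)              # 1 and sqrt(2) are independent over Q
h = closest_h(a, 0.3, 1000)
assert abs((h * a) % 1 - 0.3) < 0.01   # ((h*a)) approximates 0.3
```

With a rational a the fractional parts take only finitely many values, so no such approximation scheme could work; the irrationality hypothesis is essential.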
We consider now the homogeneous problem, namely that of obtaining
integral solutions of the inequalities
\[ |L_i(h)| < t, \qquad i = 1, \dots, n, \]
$t > 0$ being arbitrary. Here we have to insist that $h_1, \dots, h_m$ should not
all be zero.

We study only the case $m > n$. As before, introduce the vector space
V of n-tuples. Let $\alpha_1, \dots, \alpha_m$ and G have the same meaning as before.
If the group G is not discrete, the inequalities have solutions
for every t, however small. If however G is discrete, then since
$m > n$ the elements $\alpha_1, \dots, \alpha_m$ have to be linearly dependent over the
integers. Hence we have integers $h_1, \dots, h_m$, not all zero, such that
\[ \alpha_1 h_1 + \dots + \alpha_m h_m = 0. \]
We have hence the

Theorem 6. If $m > n$, the linear inequalities
\[ |L_i(h)| < t, \qquad i = 1, \dots, n, \]
have for every $t > 0$ a non-trivial integral solution.
Chapter 2

Reduction of positive quadratic forms

1 Quadratic forms

Let V be a vector space of dimension n over the field K of real numbers.
Define an inner product $\xi\eta$ between vectors $\xi, \eta$ of V with the properties

i) $\xi\eta \in K$

ii) $\xi\eta = \eta\xi$

iii) $(\xi + \eta)\zeta = \xi\zeta + \eta\zeta$

iv) $(\xi a)\eta = (\xi\eta)\, a$, $a \in K$.

Obviously if $\alpha_1, \dots, \alpha_n$ is a base of V and $\xi, \eta$ have the expressions
$\xi = \sum_i \alpha_i a_i$, $\eta = \sum_i \alpha_i b_i$, then
\[ \xi\eta = \sum_{i,j=1}^n a_i b_j (\alpha_i \alpha_j). \]
If we denote by S the n-rowed real matrix $S = (s_{ij})$, $s_{ij} = \alpha_i \alpha_j$, then S is
symmetric and
\[ \xi\eta = a' S b \tag{1} \]
where $a = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$, $b = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$, and $a'$ denotes the transpose of the column
vector a. (1) is a bilinear form in the 2n quantities $a_1, \dots, a_n$, $b_1, \dots, b_n$.
In particular
\[ \xi^2 = a' S a \]
is a quadratic form in $a_1, \dots, a_n$.

Suppose that $\beta_1, \dots, \beta_n$ is another base of V. Then
\[ \beta_i = \sum_j \alpha_j a_{ji}, \qquad i = 1, \dots, n, \]
and the matrix $A = (a_{ji})$ is non-singular. If $S_1 = (\beta_i \beta_j)$, then one sees
easily that
\[ S_1 = S[A] = A' S A. \]
Thus if the S with regard to one base is non-singular, then the S corre-
sponding to any other base is also non-singular.

Conversely, let S be any real n-rowed symmetric matrix and $\alpha_1, \dots,
\alpha_n$ a base of V over K. Put
\[ \alpha_i \alpha_j = s_{ij} \qquad (i, j = 1, \dots, n) \]
and extend it by linearity to any two vectors of V. Then we have an
inner product defined in V.

If $\xi = \sum_i \alpha_i x_i$ is a generic vector of V over K,
\[ \xi^2 = x' S x = S[x] = \sum_{i,j} x_i x_j s_{ij}. \]
The expression on the right is a quadratic form in the n variables $x_1, \dots,
x_n$, and we call S its matrix. The quadratic form is degenerate or non-
degenerate according as its matrix S is or is not singular.

Let $x' S x = \sum_{k,l=1}^n s_{kl} x_k x_l$ be a quadratic form in the n variables
$x_1, \dots, x_n$ and let $s_1 = s_{11} \neq 0$. We may write
\[ x' S x = s_1 x_1^2 + 2 s_{12} x_1 x_2 + \dots + 2 s_{1n} x_1 x_n + Q(x_2, \dots, x_n) \]
so that $Q(x_2, \dots, x_n)$ is a quadratic form in the $n - 1$ variables $x_2, \dots, x_n$.
We now write, since $s_1 \neq 0$,
\[ x' S x = s_1 \Bigl( x_1 + \frac{s_{12}}{s_1} x_2 + \dots + \frac{s_{1n}}{s_1} x_n \Bigr)^2
- \frac{1}{s_1} ( s_{12} x_2 + \dots + s_{1n} x_n )^2 + Q(x_2, \dots, x_n). \]
We have thus finally
\[ x' S x = s_1 y_1^2 + R(x_2, \dots, x_n) \]
where $y_1 = x_1 + \frac{s_{12}}{s_1} x_2 + \dots + \frac{s_{1n}}{s_1} x_n$ and $R(x_2, \dots, x_n)$ is a quadratic
form in the $n - 1$ variables $x_2, \dots, x_n$. If we make the change of variables
\[ y_1 = x_1 + \frac{s_{12}}{s_1} x_2 + \dots + \frac{s_{1n}}{s_1} x_n, \qquad y_i = x_i, \ i > 1, \tag{2} \]
then we may write
\[ x' S x = \begin{pmatrix} s_1 & 0 \\ 0 & S_1 \end{pmatrix} [y], \qquad y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix}, \]
where $S_1$ is the matrix of the quadratic form $R(x_2, \dots, x_n)$. Using matrix
notation we have
\[ S = \begin{pmatrix} s_1 & q' \\ q & S_2 \end{pmatrix}
= \begin{pmatrix} s_1 & 0 \\ 0 & S_1 \end{pmatrix} \Bigl[ \begin{pmatrix} 1 & s_1^{-1} q' \\ 0 & E \end{pmatrix} \Bigr] \tag{3} \]
where E is the unit matrix of order $n - 1$, q is a column of $n - 1$ rows, and
\[ S_1 = S_2 - s_1^{-1} q q', \]
which, incidentally, gives an expression for the matrix of R.

More generally, suppose $S = \begin{pmatrix} S_1 & Q \\ Q' & S_2 \end{pmatrix}$ where $S_1$ is a k-rowed matrix
and is non-singular. Put $x = \begin{pmatrix} y \\ z \end{pmatrix}$ where y is a column of k rows and z has
$n - k$ rows. Then
\[ S[x] = S_1[y] + y' Q z + z' Q' y + S_2[z], \]
which can be written in the form
\[ S[x] = S_1[y + S_1^{-1} Q z] + W[z] \tag{4} \]
where $W = S_2 - Q' S_1^{-1} Q$. In matrix notation we have
\[ S = \begin{pmatrix} S_1 & 0 \\ 0 & W \end{pmatrix} \Bigl[ \begin{pmatrix} E & S_1^{-1} Q \\ 0 & E \end{pmatrix} \Bigr] \tag{5} \]
the orders of the two unit matrices being evident. In particular we have
\[ |S| = |S_1| \cdot |W|. \]

Let S be a real, non-singular, n-rowed symmetric matrix. It is well
known that there exists an orthogonal matrix V such that
\[ S[V] = V' S V = D \]
where $D = [d_1, \dots, d_n]$ is a real diagonal matrix. The elements $d_1, \dots,
d_n$ of D are called the eigenvalues of S. Let L denote the unit sphere
\[ L : x' x = 1, \]
so that a generic point x on L is an n-tuple $x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ of real numbers.
Let m and M denote the smallest and largest of the eigenvalues of S.
Then for any x on L,
\[ m \leq S[x] \leq M. \]
For if we put $y = \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} = V^{-1} x$, then $y' y = 1$ and
\[ S[x] = D[V^{-1} x] = D[y] = d_1 y_1^2 + \dots + d_n y_n^2. \]
But then
\[ S[x] = (d_1 - M) y_1^2 + \dots + (d_n - M) y_n^2 + M \leq M. \]
The other inequality is obtained by changing S into $-S$.
More generally we have, for any arbitrary real vector x,
\[ m\, x' x \leq S[x] \leq M\, x' x. \tag{6} \]
If $x = 0$, the statement is obvious. Let $x \neq 0$. Then $t^2 = x' x \neq 0$. Put
$y = t^{-1} x$. Then $y' y = 1$ and so $m \leq S[y] \leq M$. Multiplying throughout
by $t^2$ we get the result in (6).

We now define a quadratic form $x' S x$ to be positive definite (or sim-
ply positive) if $S[x] > 0$ for all vectors $x \neq 0$. It is positive semi-definite
if $S[x] \geq 0$ for all real x. We shall denote these by $S > 0$ and $S \geq 0$
respectively. If $S > 0$, then obviously $|S| \neq 0$. For if $|S| = 0$, then there
exists $x \neq 0$ such that $S x = 0$; but then
\[ 0 = x' S x > 0, \]
which is absurd.

If $S > 0$, A is a real matrix and $|A| \neq 0$, then $T = S[A]$ is again
positive. For if $x \neq 0$, then $A x = y \neq 0$ and so
\[ T[x] = S[A x] = S[y] > 0. \]

We now prove two lemmas for later use.

Lemma 1. A symmetric matrix S is positive definite if and only if $|S_r| > 0$
for $r = 1, \dots, n$, where $S_r$ is the matrix formed by the first r rows and
columns of S.

Proof. We shall use induction on n. If $n = 1$, the lemma is trivial. Let
therefore the lemma be proved for matrices of order $n - 1$ instead of n. Let
\[ S = \begin{pmatrix} S_{n-1} & q \\ q' & a \end{pmatrix}. \]
If $S > 0$ then $S_{n-1} > 0$ and so $|S_{n-1}| \neq 0$. We can therefore write
\[ S = \begin{pmatrix} S_{n-1} & 0 \\ 0 & l \end{pmatrix} \Bigl[ \begin{pmatrix} E & S_{n-1}^{-1} q \\ 0 & 1 \end{pmatrix} \Bigr] \tag{7} \]
so that $|S| = |S_{n-1}|\, l$. The induction hypothesis shows that $|S_{n-1}| > 0$,
and $l > 0$ since $S > 0$; so that $|S| > 0$ and $|S_r| > 0$ for all r.
The converse also follows: by hypothesis $|S| > 0$ and $|S_{n-1}| > 0$, so
$l > 0$; and by the induction hypothesis $S_{n-1} > 0$, so that (7) gives $S > 0$.

Lemma 2. If $S > 0$ and $S = (s_{kl})$, then
\[ |S| \leq s_1 \cdots s_n \]
where $s_k = s_{kk}$, $k = 1, \dots, n$.

Proof. We again use induction on n. From equation (7) we have
\[ |S| = |S_{n-1}|\, l. \]
But $l = s_n - q' S_{n-1}^{-1} q \leq s_n$, since $S_{n-1}^{-1} > 0$. If we assume the lemma
proved for $n - 1$ instead of n, we get
\[ |S| \leq s_1 \cdots s_{n-1}\, l \leq s_1 \cdots s_n. \]

More generally we can prove that if $S > 0$ and $S = \begin{pmatrix} S_1 & S_{12} \\ S_{12}' & S_2 \end{pmatrix}$, then
\[ |S| \leq |S_1| \cdot |S_2|. \tag{8} \]
It is easy to see that equality holds in (8) if and only if $S_{12} = 0$.
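Both lemmas are easy to verify on a concrete matrix with exact integer arithmetic. A minimal sketch (the naive determinant routine is ours, adequate only for small n):

```python
def det(M):
    """Determinant by expansion along the first row (fine for small n)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j]
               * det([row[:j] + row[j + 1:] for row in M[1:]])
               for j in range(len(M)))

S = [[2, 1, 0],
     [1, 2, 1],
     [0, 1, 2]]
# determinants |S_1|, |S_2|, |S_3| of the leading principal submatrices
minors = [det([row[:k] for row in S[:k]]) for k in (1, 2, 3)]
assert minors == [2, 3, 4]        # all positive, so S > 0 by Lemma 1
assert minors[-1] <= 2 * 2 * 2    # Lemma 2: |S| <= s_1 s_2 s_3
```

Note that positivity of the diagonal alone is not enough; all the leading minors must be checked, as Lemma 1 demands.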
Let $S > 0$; then $s_1, \dots, s_n$ are all positive. We can write, as in (3),
\[ S = \begin{pmatrix} s_1 & 0 \\ 0 & W \end{pmatrix} \Bigl[ \begin{pmatrix} 1 & s_1^{-1} q' \\ 0 & E \end{pmatrix} \Bigr]. \]
But since now $W > 0$, its first diagonal element is different from zero,
and we can write W also in the form (3). In this way we get
\[ S = \begin{pmatrix} d_1 & & 0 \\ & \ddots & \\ 0 & & d_n \end{pmatrix}
\Bigl[ \begin{pmatrix} 1 & d_{12} & \dots & d_{1n} \\ 0 & 1 & d_{23} & \dots \\ & & \ddots & \\ 0 & \dots & 0 & 1 \end{pmatrix} \Bigr] = D[V] \tag{9} \]
where $D = [d_1, \dots, d_n]$ is a diagonal matrix and $V = (d_{kl})$ is a triangular
matrix with $d_{kk} = 1$, $k = 1, \dots, n$, and $d_{kl} = 0$ for $k > l$. We can therefore
write
\[ S[x] = \sum_{k=1}^n d_k \bigl( x_k + d_{k,k+1} x_{k+1} + \dots + d_{kn} x_n \bigr)^2. \]
The expression $S = D[V]$ is unique. For if $S = D_1[V_1]$ where $D_1$ is
a diagonal matrix and $V_1$ is triangular, then
\[ D[W] = D_1 \]
where $W = V V_1^{-1}$ is also a triangular matrix. In this case it readily
follows that $W = E$ and $D = D_1$.

In general we have the fact that if
\[ S = \begin{pmatrix} S_1 & 0 \\ 0 & S_2 \end{pmatrix} \Bigl[ \begin{pmatrix} E & T \\ 0 & E \end{pmatrix} \Bigr] \tag{10} \]
where $S_1$ has order k, then $S_1$, $S_2$ and T are unique.

We call the decomposition (9) of S the Jacobi transformation of S.
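The derivation of (9) is constructive: split off the first diagonal element as in (3) and recurse on W. A minimal sketch in exact rational arithmetic (the function name is ours; this is what is known elsewhere as an LDL-type decomposition):

```python
from fractions import Fraction

def jacobi(S):
    """Jacobi transformation S = D[V] = V' D V of a positive matrix S:
    D diagonal, V upper triangular with unit diagonal.
    Each pass splits off the current first diagonal element as in (3)."""
    n = len(S)
    S = [[Fraction(x) for x in row] for row in S]
    D = [Fraction(0)] * n
    V = [[Fraction(int(i == j)) for j in range(n)] for i in range(n)]
    for k in range(n):
        D[k] = S[k][k]
        for j in range(k + 1, n):
            V[k][j] = S[k][j] / S[k][k]
        # replace the trailing block S_2 by S_2 - s_1^{-1} q q'
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                S[i][j] -= S[k][i] * S[k][j] / S[k][k]
    return D, V

S = [[2, 1, 0], [1, 2, 1], [0, 1, 2]]
D, V = jacobi(S)
assert D == [2, Fraction(3, 2), Fraction(4, 3)]
# reconstruct S = V' D V entry by entry
assert all(sum(D[k] * V[k][i] * V[k][j] for k in range(3)) == S[i][j]
           for i in range(3) for j in range(3))
```

The product $d_1 \cdots d_n$ of the diagonal entries equals $|S|$, in accordance with $|S| = |D|\,|V|^2 = |D|$; here $2 \cdot \frac{3}{2} \cdot \frac{4}{3} = 4$.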
2 Minima of definite forms

Let S and T be two real, non-singular, n-rowed symmetric matrices.
They are said to be equivalent (denoted $S \sim T$) if there exists a uni-
modular matrix U such that
\[ S[U] = T. \]
Since the unimodular matrices form a group, the above relation is an
equivalence relation. We can therefore put the n-rowed real symmetric
matrices into classes of equivalent matrices. Evidently, two matrices in
a class have the same determinant.

If $S = S'$ is real and t is a real number, we say that S represents t
integrally if there is an integral vector x such that
\[ S[x] = t. \]
In case $t = 0$, we insist that $x \neq 0$. The representation is said to be
primitive if x is a primitive vector. Obviously if $S \sim T$ then S and T
both represent the same set of real numbers.

If $S > 0$, then all the eigenvalues of S are positive. Let $m > 0$ be
the smallest eigenvalue of S. Let $t > 0$ be a large real number. Then if
$S[x] < t$, then $m\, x' x < t$ and so the elements of x are bounded. Therefore
there exist only finitely many integral vectors x satisfying
\[ S[x] < t. \]
This means that if x runs through all non-zero integral vectors, $S[x]$ has
a minimum. We denote this minimum by $\lambda(S)$. There is therefore an
integral $x \neq 0$ such that
\[ S[x] = \lambda(S). \]
Moreover this x is a primitive vector. For if x is not primitive, then $x = q y$
where $q > 1$ is an integer and y is a primitive vector. Then
\[ \lambda(S) = S[x] = q^2 S[y] > S[y], \]
which is impossible. Furthermore, if $S \sim T$ then $\lambda(S) = \lambda(T)$. For let
$S = T[U]$ where U is unimodular. If x is a primitive vector such that
$\lambda(S) = S[x]$, then
\[ \lambda(S) = S[x] = T[U x] \geq \lambda(T). \]
Also if $\lambda(T) = T[y]$, then
\[ \lambda(T) = T[y] = S[U^{-1} y] \geq \lambda(S). \]
This proves the contention.
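For small matrices both the minimum and its invariance under $S \to S[U]$ can be checked by brute force. A sketch (the helper names are ours; the search box is large enough for these particular examples):

```python
from itertools import product

def minimum(S, box=5):
    """Least value of S[x] over nonzero integral x with |x_i| <= box."""
    n = len(S)
    return min(sum(S[i][j] * x[i] * x[j] for i in range(n) for j in range(n))
               for x in product(range(-box, box + 1), repeat=n) if any(x))

def congruent(S, U):
    """T = S[U] = U' S U."""
    n = len(S)
    SU = [[sum(S[i][k] * U[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
    return [[sum(U[k][i] * SU[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

S = [[2, 1], [1, 2]]
U = [[1, 1], [0, 1]]               # unimodular: integral with |U| = 1
T = congruent(S, U)
assert T == [[2, 3], [3, 6]]
assert minimum(S) == minimum(T) == 2   # equivalent forms, equal minima
```

The determinants also agree, $|T| = |S| = 3$, illustrating that both the determinant and the minimum are class invariants.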
If $S > 0$ and $t > 0$ is a real number, then $\lambda(t S) = t \lambda(S)$. But $|t S| = t^n |S|$,
so that it seems reasonable to compare $\lambda(S)$ with $|S|^{1/n}$.

We now prove the following important theorem due to Hermite.

Theorem 1. If $\lambda(S)$ is the minimum of the positive matrix S of n rows,
there exists a constant $c_n$, depending only on n, such that
\[ \lambda(S) \leq c_n |S|^{1/n}. \]

Proof. We use induction on n.
If $n = 1$, then S is a positive real number s. If x is integral and $\neq 0$,
then $s x^2 \geq s$, with equality for $x = \pm 1$, so that $c_1 = 1$.

Let us assume the theorem proved for $n - 1$ instead of n. Let x be the
primitive integral vector such that $\lambda(S) = S[x]$. Complete x into a unimodular
matrix U. Then $T = S[U]$ has first diagonal element equal to $\lambda(S)$. Also
$\lambda(S) = \lambda(T)$ by our remarks above. Furthermore $|S| = |T|$. Therefore,
in order to prove the theorem, we may assume that the first diagonal
element $s_1$ of S is equal to $\lambda(S)$.

Let $S = \begin{pmatrix} s_1 & q' \\ q & S_1 \end{pmatrix}$. Then
\[ S = \begin{pmatrix} s_1 & 0 \\ 0 & W \end{pmatrix} \Bigl[ \begin{pmatrix} 1 & s_1^{-1} q' \\ 0 & E \end{pmatrix} \Bigr] \]
where $W = S_1 - q s_1^{-1} q'$. Also $|S| = s_1 |W|$.

Let $x = \begin{pmatrix} x_1 \\ y \end{pmatrix}$ be an integral vector, where y has $n - 1$ rows, so that
\[ S[x] = s_1 \bigl( x_1 + s_1^{-1} q' y \bigr)^2 + W[y]. \tag{11} \]
Since $W > 0$, we can choose an integral y so that $W[y]$ is the minimum
$\lambda(W)$. $x_1$ can now be chosen integral in such a manner that
\[ -\frac{1}{2} \leq x_1 + s_1^{-1} q' y \leq \frac{1}{2}. \tag{12} \]
Using (11), (12) and the induction hypothesis we get
\[ \lambda(S) \leq S[x] \leq \frac{\lambda(S)}{4} + c_{n-1} |W|^{1/(n-1)}. \]
Substituting $|W| = |S| / \lambda(S)$ and solving for $\lambda(S)$ we get
\[ \lambda(S) \leq \Bigl( \frac{4}{3}\, c_{n-1} \Bigr)^{\frac{n-1}{n}} |S|^{\frac{1}{n}}, \]
which proves the theorem.
Using $c_1 = 1$ and computing successively from the recurrence for-
mula $c_n = \bigl( \frac{4}{3}\, c_{n-1} \bigr)^{\frac{n-1}{n}}$ we see that
\[ c_n = \Bigl( \frac{4}{3} \Bigr)^{\frac{n-1}{2}} \tag{13} \]
is a possible value of $c_n$. This estimate is due to Hermite.

The best possible value of $c_n$ is unknown except in a few cases. We
shall show that $c_2 = \sqrt{4/3}$ and that it is the best possible for $n = 2$. From
Hermite's estimate (13), we see that for a positive binary matrix S,
\[ \lambda(S) \leq \Bigl( \frac{4}{3} \Bigr)^{\frac{1}{2}} |S|^{\frac{1}{2}}. \]
Consider now the positive quadratic form $x^2 + x y + y^2$, whose matrix is
\[ S = \begin{pmatrix} 1 & \frac{1}{2} \\ \frac{1}{2} & 1 \end{pmatrix}. \]
For integral x, y not both zero, $x^2 + x y + y^2 \geq 1$, so that $\lambda(S) = 1$. Also
$|S| = \frac{3}{4}$. We have
\[ 1 = \Bigl( \frac{4}{3} \Bigr)^{\frac{1}{2}} |S|^{\frac{1}{2}}, \]
which proves that $\sqrt{4/3}$ is the best possible value of $c_2$.
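The extremality of $x^2 + xy + y^2$ is easy to confirm by direct search over a box of integer pairs (a sketch; the box size is our choice and is ample here):

```python
import math
from itertools import product

# lambda(S) for the form x^2 + x*y + y^2, S = [[1, 1/2], [1/2, 1]]
lam = min(x * x + x * y + y * y
          for x, y in product(range(-10, 11), repeat=2) if (x, y) != (0, 0))
det_S = 1 * 1 - 0.5 ** 2          # |S| = 3/4
assert lam == 1
# Hermite's bound (13) holds here with equality: 1 = sqrt(4/3)*sqrt(3/4)
assert math.isclose(lam, math.sqrt(4 / 3) * math.sqrt(det_S))
```

Any other positive binary form can only satisfy the bound with slack or equality, which is exactly what "best possible value of $c_2$" asserts.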
We shall now obtain a finer estimate for $c_n$, due to Minkowski. This
estimate is better than Hermite's for large values of n. To this end we
make the following considerations.

Let $R_n$ denote the Euclidean space of n dimensions regarded as a
vector space of ordered n-tuples $(x_1, \dots, x_n)$. A point set $\mathscr{L}$ in $R_n$ is
said to be convex if whenever A and B are two points of it, $\frac{A + B}{2}$, the
mid-point of the line joining A and B, is also a point of $\mathscr{L}$. It is said
to be symmetric about the origin if whenever x belongs to it, $-x$ also
belongs to it. Obviously if $\mathscr{L}$ is both convex and symmetric, it contains
the origin.

If $\mathscr{L}$ is a point set in $R_n$ and h is any point in $R_n$, we denote by $\mathscr{L}_h$
the set of points x such that $x \in \mathscr{L}_h$ if and only if $x - h$ is a point of $\mathscr{L}$.
With this notation $\mathscr{L} = \mathscr{L}_0$.

If $\mathscr{L}$ is an open, bounded, symmetric convex set, then $\mathscr{L}$ has a mea-
sure $v(\mathscr{L})$ in the Jordan sense, and for $h \in R_n$
\[ v(\mathscr{L}) = v(\mathscr{L}_h). \]

We call a point $P = (x_1, \dots, x_n)$ in $R_n$ a lattice point if $x_1, \dots, x_n$ are
all integers. The lattice points form a lattice in $R_n$ considered as a vector
group. We shall denote points of this lattice by the letters $g, g', \dots$.
The following lemma, due to Minkowski, shows the relationship be-
tween convex sets and lattices.

Lemma 3. If $\mathscr{L}$ is an open, bounded, symmetric and convex set of vol-
ume $> 2^n$, then $\mathscr{L}$ contains a lattice point other than the origin.

Proof. We shall assume that $\mathscr{L}$ has no lattice point in it other than the
origin and then prove that $v(\mathscr{L}) \leq 2^n$.

So let $\mathscr{L}$ have no lattice point in it other than the origin. Define the
point set $\mathscr{M}$ by: $x \in \mathscr{M}$ if and only if $2x \in \mathscr{L}$. Then $\mathscr{M}$ is an open,
symmetric, bounded and convex set. Also
\[ v(\mathscr{L}) = 2^n v(\mathscr{M}). \tag{14} \]

Consider now the translates $\mathscr{M}_g$ of $\mathscr{M}$ by the lattice points. If $g \neq g'$,
then $\mathscr{M}_g$ and $\mathscr{M}_{g'}$ are disjoint sets. For if $x \in \mathscr{M}_g \cap \mathscr{M}_{g'}$, then $x - g$ and
$x - g'$ are points of $\mathscr{M}$. Since $\mathscr{M}$ is symmetric and convex,
\[ \frac{g' - g}{2} = \frac{(x - g) + (g' - x)}{2} \]
is a point of $\mathscr{M}$. By the definition of $\mathscr{M}$, $g' - g$ is a point of $\mathscr{L}$. But $g' \neq g$;
thus $\mathscr{L}$ has a lattice point other than the origin. This contradicts our
assumption. Thus the $\mathscr{M}_g$ for all g are mutually disjoint.
Let $\Pi$ denote the unit cube, that is, the set of points $x = (x_1, \dots, x_n)$
with $0 \leq x_i < 1$, $i = 1, \dots, n$. By the disjointness of the $\mathscr{M}_g$ shown above,
\[ \sum_g v(\mathscr{M}_g \cap \Pi) = v \Bigl( \bigcup_g (\mathscr{M}_g \cap \Pi) \Bigr) \leq v(\Pi) = 1. \tag{15} \]
But $v(\mathscr{M}_g \cap \Pi) = v(\mathscr{M} \cap \Pi_{-g})$, so that by (15)
\[ 1 \geq \sum_g v(\Pi_{-g} \cap \mathscr{M}) = v \Bigl( \bigcup_g (\Pi_{-g} \cap \mathscr{M}) \Bigr). \]
But the cubes $\Pi_g$ cover $R_n$ completely, without gaps or overlapping, when
g runs over all lattice points. Hence
\[ v(\mathscr{M}) \leq 1. \]
Using (14), our lemma follows.
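For n = 2 the lemma is easy to visualize: any open symmetric convex region of area greater than $2^2 = 4$ traps a nonzero lattice point. A numeric sketch with an open ellipse (the particular semi-axes are our choice):

```python
import math

a, b = 2.0, 0.7                     # semi-axes; area = pi*a*b ~ 4.40 > 4
assert math.pi * a * b > 4          # volume hypothesis of Lemma 3

def inside(x, y):
    """Open ellipse x^2/a^2 + y^2/b^2 < 1: bounded, symmetric, convex."""
    return x * x / a ** 2 + y * y / b ** 2 < 1

hits = [(x, y) for x in range(-3, 4) for y in range(-3, 4)
        if (x, y) != (0, 0) and inside(x, y)]
assert (1, 0) in hits               # a nonzero lattice point, as guaranteed
```

The bound $2^n$ is sharp: the open cube $|x_i| < 1$ has volume exactly $2^n$ and contains no lattice point other than the origin.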
We can now prove the following theorem due to Minkowski.

Theorem 2. If $S > 0$ and $\lambda(S)$ is its minimum, then
\[ \lambda(S) \leq \frac{4}{\pi} \, \Gamma \Bigl( \frac{n}{2} + 1 \Bigr)^{2/n} |S|^{1/n}. \]

Proof. In $R_n$ let us consider the point set $\mathscr{L}$ defined by the set of
$x = \begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}$ with
\[ S[x] < \rho. \]
It is trivially seen to be open and symmetric. Also, since $S > 0$, $\mathscr{L}$ is
bounded. To see that it is convex, write $S = A' A$ and put $A x_1 = y_1$,
$A x_2 = y_2$. Then a simple calculation proves that
\[ S \Bigl[ \frac{x_1 + x_2}{2} \Bigr] = \Bigl( \frac{y_1 + y_2}{2} \Bigr)' \Bigl( \frac{y_1 + y_2}{2} \Bigr)
\leq \frac{y_1' y_1 + y_2' y_2}{2}. \]
This shows that $\mathscr{L}$ is a convex set. The volume of $\mathscr{L}$ is
\[ v(\mathscr{L}) = \frac{\pi^{n/2} \rho^{n/2}}{\Gamma(\frac{n}{2} + 1)} \, |S|^{-1/2}. \]
If we put $\rho = \lambda(S)$, then $\mathscr{L}$ contains no lattice point other than the
origin. Minkowski's lemma then gives $v(\mathscr{L}) \leq 2^n$, which proves Theorem 2.
Denote the constants in Hermite's and Minkowski's theorems by $c_n$
and $\bar{c}_n$ respectively. If we use Stirling's formula for the $\Gamma$-function in
the form
\[ \log \Gamma(x) \sim x \log x, \]
we get
\[ \log \bar{c}_n = \log \frac{4}{\pi} + \frac{2}{n} \log \Gamma \Bigl( \frac{n}{2} + 1 \Bigr) \sim \log n, \]
whereas
\[ \log c_n = \frac{n - 1}{2} \log \frac{4}{3} \sim \gamma n, \]
where $\gamma$ is an absolute constant. This shows that for large n Minkowski's
estimate is better than Hermite's.
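The crossover between the two estimates is easy to tabulate. A sketch comparing the constants (the function names are ours):

```python
import math

def hermite(n):
    """Hermite's constant c_n = (4/3)^((n-1)/2), estimate (13)."""
    return (4 / 3) ** ((n - 1) / 2)

def minkowski(n):
    """Minkowski's constant (4/pi) * Gamma(n/2 + 1)^(2/n) of Theorem 2."""
    return (4 / math.pi) * math.exp((2 / n) * math.lgamma(n / 2 + 1))

# Hermite's constant is smaller for small n, but grows exponentially,
# while Minkowski's grows only like n:
assert hermite(2) < minkowski(2)
assert minkowski(30) < hermite(30)
```

At n = 30, for instance, Minkowski's constant is roughly 8 against roughly 65 for Hermite's, in line with the logarithmic comparison above.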
3 Half reduced positive forms

We now consider the space $R_h$, $h = \frac{n(n+1)}{2}$, of real symmetric n-rowed
matrices and impose on it the topology of the h-dimensional real Eu-
clidean space. Let P denote the subspace of positive matrices. If
$S \in P$, then all the principal minors of S have positive determinant.
This shows that P is the intersection of a finite number of open subsets
of $R_h$ and hence is open.

Let S be a matrix in the frontier of P in $R_h$. Let $S_1, S_2, \dots$ be a
sequence of matrices in P converging to S. Let $x \neq 0$ be any real
column vector. Then $S_k[x] > 0$ and hence, by continuity, $S[x] \geq 0$. From
the arbitrariness of x it follows that $S \geq 0$. On the other hand, let S be
any positive semi-definite matrix in $R_h$. Let E denote the unit matrix of
order n. Then for $\varepsilon > 0$, $S + \varepsilon E$ is a positive matrix, which shows that
in every neighbourhood of S there are points of P. This proves that the
frontier of P in $R_h$ consists precisely of the positive semi-definite matrices
which are not positive.

Let $\Gamma$ denote the group of unimodular matrices. We represent $\Gamma$ in
$R_h$ as a group of transformations $S \to S[U]$, $S \in R_h$. Note that U and $-U$
lead to the same transformation of $R_h$. It is easy to see that the only
elements in $\Gamma$ which keep every element of $R_h$ fixed are $\pm E$. Thus if we
identify in $\Gamma$ the matrices U and $-U$, then $S \to S[U]$ gives a faithful
representation of $\Gamma_0 = \Gamma / \{\pm E\}$ in $R_h$. If U runs over all elements
of $\Gamma$ and S is fixed, $S[U]$ runs through all matrices in the class of S. We
shall now find, in each class of positive matrices, a matrix having certain
nice properties.
Let $T \in P$ and let $u$ run over the first columns of all the matrices in $\Gamma$. These $u$ are precisely all the primitive vectors. Consider the values $T[u]$ as $u$ runs over these first columns. Then $T[u]$ has a minimum, which is none other than $\mu(T)$. Let this be attained for $u = u_1$. It is obvious that $u_1$ is not unique, for $-u_1$ also satisfies this condition. In any case, since $T > 0$, there are only finitely many $u$'s with the property $T[u] = T[u_1]$. Let $u_1$ be fixed and let $u$ run over the second columns of all unimodular matrices whose first column is $u_1$. The $u$'s now are not all the primitive vectors (for instance $u \ne \pm u_1$). $T[u]$ again has a minimum, say for $u = u_2$, and by our remark above
$$T[u_1] \le T[u_2].$$
Also, there are only finitely many $u$ with $T[u] = T[u_2]$. Consider now all unimodular matrices whose first two columns are $u_1, u_2$ and determine a $u_3$ such that $T[u_3]$ is minimum. Continuing in this way one finally obtains a unimodular matrix
$$U = (u_1, \ldots, u_n)$$
and a positive matrix $S = T[U]$.

$S \sim T$ and, by our construction, it is obvious that $S$ is not unique in the class of $T$. We shall study the matrices $S$ and $U$ more closely.
Suppose we have constructed the columns $u_1, \ldots, u_{k-1}$. In order to construct the $k$-th column we consider all unimodular matrices $V$ whose first $k-1$ columns are $u_1, \ldots, u_{k-1}$, in that order. Using the matrix $U$ above, which has this property,
$$U^{-1}V = \begin{pmatrix} E_{k-1} & A \\ 0 & B \end{pmatrix} \qquad (16)$$
where $E_{k-1}$ is the unit matrix of order $k-1$ and $A$ and $B$ are integral matrices. Since $U$ and $V$ are unimodular, $B$ is unimodular. If $w = (w_1, \ldots, w_n)'$ denotes the first column of the matrix $\binom{A}{B}$ then, since $B$ is unimodular,
$$(w_k, w_{k+1}, \ldots, w_n) = 1. \qquad (17)$$
The $k$-th column of $V$ is $Uw$. Conversely, let $w$ be any integral column satisfying (17). Then $w_k, \ldots, w_n$ can be made the first column of a unimodular matrix $B$ of order $n-k+1$. Choosing any integral matrix $A$ of $k-1$ rows and $n-k+1$ columns whose first column is $w_1, \ldots, w_{k-1}$, we get, by means of equation (16), a matrix $V$ whose first $k-1$ columns are $u_1, \ldots, u_{k-1}$. Thus the $k$-th columns of the unimodular matrices with first $k-1$ columns equal to $u_1, \ldots, u_{k-1}$ are precisely the vectors $Uw$, where $w$ is an arbitrary integral vector with $(w_k, \ldots, w_n) = 1$.

Consider the matrix $S = T[U]$. By the choice of $u_k$ we have: if $w$ satisfies (17), then
$$S[w] = T[Uw] \ge T[u_k] = s_k,$$
where $S = (s_{kl})$. We have thus proved that in each class of $T$ there exists a matrix $S$ satisfying

I) $s_1 > 0$;

II) $S[w] \ge s_k$, $k = 1, \ldots, n$, for every integral column $w = (w_1, \ldots, w_n)'$ with $(w_k, \ldots, w_n) = 1$.
Matrices which satisfy (I) and (II) shall be called half reduced, and the subset of $P$ of half reduced matrices $S$ shall be denoted $R_0$.

In the sequel we shall denote by $e_1, \ldots, e_n$ the $n$ columns, in order, of the unit matrix of order $n$, and by an admissible $k$-vector $w$ we shall understand an integral vector $w$ of $n$ rows satisfying (17). $e_k$ is clearly an admissible $k$-vector.

Since $e_{k+1}$ is an admissible $(k+1)$-vector, we have $s_{k+1} = S[e_{k+1}] \ge s_k$, which shows that
$$s_1 \le s_2 \le \cdots \le s_n. \qquad (18)$$

Let $u = (x_1, \ldots, x_n)'$ be an integral vector with $x_k = 1$, $x_l = \pm 1$, $x_i = 0$ for $i \ne k$, $i \ne l$ and $k < l$. Then $u$ is an admissible $l$-vector and so
$$s_k \pm 2s_{kl} + s_l = S[u] \ge s_l.$$
This means that $2s_{kl} \ge -s_k$. Changing the sign of $x_k$ we get $2s_{kl} \le s_k$. Hence
$$-s_k \le 2s_{kl} \le s_k, \qquad 1 \le k < l \le n. \qquad (19)$$

Remark. Suppose $S$ is a real symmetric matrix satisfying (II). Let $S_1$ be the matrix obtained from $S$ by deleting the $h_1$-th, $h_2$-th, \ldots, $h_l$-th rows and columns of $S$. Then $S_1$ also has properties similar to $S$, since we have only to consider those admissible vectors $w$ whose $h_1$-th, \ldots, $h_l$-th elements are zero.
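For small matrices, conditions (I) and (II) can be tested mechanically over a finite box of integral vectors. The following sketch (an illustration added here, not part of the original notes; unless the search bound is taken large enough it is only a necessary test) checks the half-reduction conditions exactly as stated above.

```python
from itertools import product
from math import gcd
from functools import reduce

def is_half_reduced(S, bound=3):
    # Check (I): s_1 > 0, and (II): S[w] >= s_k for every integral w with
    # entries in [-bound, bound] and gcd(w_k, ..., w_n) = 1 (condition (17)).
    n = len(S)
    if S[0][0] <= 0:
        return False
    for w in product(range(-bound, bound + 1), repeat=n):
        val = sum(S[i][j] * w[i] * w[j] for i in range(n) for j in range(n))
        for k in range(n):
            # w is an admissible (k+1)-vector iff gcd(w_k, ..., w_n) = 1
            if reduce(gcd, (abs(x) for x in w[k:]), 0) == 1 and val < S[k][k]:
                return False
    return True

# x^2 + xy + y^2 has matrix (1, 1/2; 1/2, 1) and is half reduced
assert is_half_reduced([[1, 0.5], [0.5, 1]])
# s_1 > s_2 violates (18), hence (II) fails
assert not is_half_reduced([[2, 0], [0, 1]])
```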
We now prove the

Theorem 3. Let $S$ be a real, symmetric $n$-rowed matrix with the property (II). Then $S \ge 0$. If, in addition, it satisfies (I), then $S > 0$.

Proof. Suppose $s_1 = 0$. Then by (19) we have
$$0 = s_1 \ge \pm 2s_{1l} \ge -s_1 = 0,$$
which shows that $S$ has the form
$$S = \begin{pmatrix} 0 & 0 \\ 0 & S_1 \end{pmatrix}.$$
If $s_2 = 0$, we again have a similar decomposition for $S_1$, since $S_1$, by our remark above, also satisfies (II). Thus either $S = 0$ or else there is a first diagonal element $s_k$ such that $s_k \ne 0$. Then
$$S = \begin{pmatrix} 0 & 0 \\ 0 & S_k \end{pmatrix},$$
$S_k$ having $s_k$ for its first diagonal element. We shall now show that $S_k > 0$. Observe that $S_k$ satisfies both (I) and (II), and therefore for proving the theorem it is enough to show that if $S$ satisfies (I) and (II), then $S > 0$.

If $n = 1$, the theorem is trivially true. Let the theorem therefore be proved for $n-1$ instead of $n$. Put
$$S = \begin{pmatrix} S_1 & q \\ q' & s_n \end{pmatrix}$$
where $q$ is a column of $n-1$ rows. $S_1$ satisfies (I) and (II), and so by the induction hypothesis $S_1 > 0$. Also, since $s_n \ge s_1$, we have $s_n > 0$.

Let $x = \binom{y}{z}$ be a column of $n$ rows, $y$ having $n-1$ rows, and let $z$ be a real number. Then
$$S[x] = S_1[y + S_1^{-1}qz] + (s_n - q'S_1^{-1}q)z^2.$$
We assert that $s_n - q'S_1^{-1}q > 0$. For, let $s_n - q'S_1^{-1}q \le \epsilon$. Then for $\epsilon > 0$ and every $x \ne 0$
$$S[x] \le S_1[y + S_1^{-1}qz] + \epsilon z^2. \qquad (20)$$
Consider the quadratic form on the right side of the inequality above. It is of order $n$, positive, and has determinant $\epsilon|S_1|$. Therefore we may find an integral column vector $x = \binom{y}{z}$ such that the value of the right side is a minimum, and so by Hermite's theorem
$$S_1[y + S_1^{-1}qz] + \epsilon z^2 \le c_n|S_1|^{1/n}\epsilon^{1/n}.$$
Using (20) and observing that $s_1$ is the minimum of $S[x]$, we get, for this $x$,
$$0 < s_1 \le S[x] \le c_n|S_1|^{1/n}\epsilon^{1/n}. \qquad (21)$$
Since $\epsilon$ can be chosen arbitrarily small, we get a contradiction from (21). Thus $s_n - q'S_1^{-1}q > 0$. This means that $S > 0$.
We have thus shown that all matrices satisfying (I) and (II) are in $P$.

We prove now the following important theorem due to Minkowski.

Theorem 4. If $S$ is a positive half-reduced matrix, then
$$1 \le \frac{s_1 \cdots s_n}{|S|} \le b_n,$$
where $b_n$ is a constant depending only on $n$.

Proof. The left hand side inequality has already been proved in Lemma 2, even for all matrices in $P$. In order to prove the right hand side inequality we use induction.
Consider now the ratios
$$\frac{s_n}{s_{n-1}},\ \frac{s_{n-1}}{s_{n-2}},\ \ldots,\ \frac{s_2}{s_1}.$$
Since $S$ is half-reduced, all these ratios are $\ge 1$. Let $\lambda = \frac{n(n-1)}{4}$. For the above ratios, therefore, one of two possibilities can happen. Either there exists a $k$, $2 \le k \le n$, such that
$$\frac{s_n}{s_{n-1}} < \lambda,\ \frac{s_{n-1}}{s_{n-2}} < \lambda,\ \ldots,\ \frac{s_{k+1}}{s_k} < \lambda \le \frac{s_k}{s_{k-1}} \qquad (22)$$
or else
$$\frac{s_n}{s_{n-1}},\ \ldots,\ \frac{s_2}{s_1} < \lambda. \qquad (23)$$
Note that in the case $n = 2$ the second possibility cannot occur, since then $\lambda = \frac{1}{2}$ and $\frac{s_2}{s_1} \ge 1$.
Consider (23) first. We have
$$\frac{s_1 \cdots s_n}{s_1^n} < \lambda^{\frac{n(n-1)}{2}},$$
and since
$$\frac{s_1 \cdots s_n}{|S|} = \frac{s_1 \cdots s_n}{s_1^n} \cdot \frac{s_1^n}{|S|},$$
we get, using Hermite's inequality $s_1^n \le c_n^n|S|$,
$$\frac{s_1 \cdots s_n}{|S|} < c_n^n\,\lambda^{\frac{n(n-1)}{2}},$$
which proves the theorem in this case.

Suppose now that (22) is true, so that $k \ge 2$. Write
$$S = \begin{pmatrix} S_{k-1} & Q \\ Q' & R \end{pmatrix}$$
where $S_{k-1}$ has $k-1$ rows. Let $x = \binom{y}{z}$, where $y$ is a column with $k-1$ rows. We have, by completion of squares,
$$S[x] = S_{k-1}[y + S_{k-1}^{-1}Qz] + (R - Q'S_{k-1}^{-1}Q)[z]. \qquad (24)$$
Also $|R - Q'S_{k-1}^{-1}Q| = |S|/|S_{k-1}|$. Choose $z$ to be an integral primitive vector such that $(R - Q'S_{k-1}^{-1}Q)[z]$ is minimum. By Hermite's theorem, therefore,
$$(R - Q'S_{k-1}^{-1}Q)[z] \le c_{n-k+1}\big(|S|/|S_{k-1}|\big)^{1/(n-k+1)}. \qquad (25)$$
Put $y + S_{k-1}^{-1}Qz = w$, so that $w = (w_1, \ldots, w_{k-1})'$. Choose now $y$ to be an integral vector such that
$$-\frac{1}{2} \le w_i \le \frac{1}{2}, \qquad i = 1, \ldots, k-1. \qquad (26)$$
By the choice of $z$, it follows that $x = \binom{y}{z}$ is an admissible $k$-vector. Hence
$$s_k \le S[x]. \qquad (27)$$
Also, since $S_{k-1}$ is half-reduced, we get
$$S_{k-1}[w] = \sum_{p,q=1}^{k-1} s_{pq}w_pw_q \le \frac{k(k-1)}{8}\,s_{k-1}.$$
Using (22) we get
$$S_{k-1}[w] \le \frac{s_k}{2}. \qquad (28)$$
From (24), (25), (27) and (28) we get
$$s_k \le 2c_{n-k+1}\big(|S|/|S_{k-1}|\big)^{1/(n-k+1)}. \qquad (29)$$
Since
$$\frac{s_1 \cdots s_n}{|S|} = \frac{s_1 \cdots s_{k-1}}{|S_{k-1}|} \cdot \frac{|S_{k-1}|}{|S|}\,s_k^{n-k+1} \cdot \frac{s_k \cdots s_n}{s_k^{n-k+1}},$$
we get, by the induction hypothesis applied to $S_{k-1}$, together with (22) and (29), that
$$\frac{s_1 \cdots s_n}{|S|} \le b_{k-1}\,(2c_{n-k+1})^{n-k+1}\,\lambda^{\frac{(n-k)(n-k+1)}{2}},$$
which proves the theorem completely.
The best possible value of $b_n$ is again unknown, except in a few simple cases. We shall prove that
$$b_2 = \frac{4}{3} \qquad (30)$$
and that it is the best possible value.

Let $ax^2 + 2bxy + cy^2$ be a half-reduced positive form. Then $-a \le 2b \le a \le c$. The determinant of the form is $d = ac - b^2$. Thus
$$ac = ac - b^2 + b^2 \le d + \frac{a^2}{4} \le d + \frac{ac}{4},$$
which gives
$$ac \le \frac{4}{3}\,d. \qquad (31)$$
Consider the binary quadratic form $x^2 + xy + y^2$. It is half-reduced, because if $x$ and $y$ are two integers, not both zero, then $x^2 + xy + y^2 \ge 1$. The determinant of the form is $3/4$, and the product of the diagonal elements is unity. Hence
$$1 = \frac{4}{3}\,d,$$
and this shows that $4/3$ is the best possible value.
4 Two auxiliary regions

Let $R_0$ denote the space of half-reduced matrices. Define the point set $R_t$, for $t > b_n \ge 1$, as the set of $S$ satisfying
$$0 < s_k < t\,s_{k+1},\ k = 1, \ldots, n-1; \qquad -t < \frac{s_{kl}}{s_k} < t,\ 1 \le k < l \le n; \qquad \frac{s_1 \cdots s_n}{|S|} < t. \qquad (32)$$
Because of (18), (19) and Theorem 4, it follows that
$$R_0 \subset R_t. \qquad (33)$$
But what is more important is that
$$\lim_{t \to \infty} R_t = P. \qquad (34)$$
This is easy to see. For, if $S \in P$, let $t$ be chosen larger than the maximum of the finite number of ratios
$$\frac{s_k}{s_{k+1}},\ k = 1, \ldots, n-1; \qquad \Big|\frac{s_{kl}}{s_k}\Big|,\ 1 \le k < l \le n; \qquad \frac{s_1 \cdots s_n}{|S|},$$
and larger than $b_n$. Then $S \in R_t$ for this value of $t$.
Let $S \in R_t$ and consider the Jacobi transformation of $S$, namely
$$S = \begin{pmatrix} d_1 & & 0 \\ & \ddots & \\ 0 & & d_n \end{pmatrix}\left[\begin{pmatrix} 1 & t_{12} & \ldots & t_{1n} \\ & \ddots & & \vdots \\ 0 & & \ldots & 1 \end{pmatrix}\right] = D[T]. \qquad (35)$$
Then
$$s_{kl} = d_kt_{kl} + \sum_{h=1}^{k-1} d_ht_{hk}t_{hl}, \qquad 1 \le k \le l \le n.$$
In particular, putting $k = l$ and using the fact that $d_1, \ldots, d_n$ are all positive, we get
$$\frac{s_k}{d_k} \ge 1. \qquad (36)$$
Also, since $|S| = d_1 \cdots d_n$, we have $\prod_{k=1}^{n} \frac{s_k}{d_k} = \frac{s_1 \cdots s_n}{|S|} < t$. Since each factor is $\ge 1$ and $t > 1$, we have
$$\frac{s_k}{d_k} < t, \qquad k = 1, \ldots, n.$$
Using (32) we get
$$\frac{d_k}{d_{k+1}} = \frac{d_k}{s_k} \cdot \frac{s_k}{s_{k+1}} \cdot \frac{s_{k+1}}{d_{k+1}} < t^2. \qquad (37)$$
Now $s_{1l} = d_1t_{1l}$, so that
$$|t_{1l}| = \frac{|s_{1l}|}{d_1} = \frac{|s_{1l}|}{s_1} \cdot \frac{s_1}{d_1} < t^2.$$
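The recursion for the $d_k$ and $t_{kl}$ can be carried out directly. The following sketch (an illustration added here, with a function name of our own choosing) computes the Jacobi transformation $S = D[T]$ of (35) and lets one verify (36) numerically for a given positive matrix.

```python
def jacobi_transformation(S):
    # Compute S = D[T] = T' D T as in (35): D diagonal with positive
    # entries d_k, T unit upper triangular with entries t_kl, using the
    # recursion s_kl = d_k t_kl + sum_{h<k} d_h t_hk t_hl.
    n = len(S)
    d = [0.0] * n
    T = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    for k in range(n):
        # diagonal case k = l gives d_k = s_k - sum_{h<k} d_h t_hk^2
        d[k] = S[k][k] - sum(d[h] * T[h][k] ** 2 for h in range(k))
        for l in range(k + 1, n):
            T[k][l] = (S[k][l] - sum(d[h] * T[h][k] * T[h][l]
                                     for h in range(k))) / d[k]
    return d, T
```

For $S = \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix}$ this gives $d_1 = 2$, $t_{12} = \frac{1}{2}$, $d_2 = \frac{3}{2}$, in accordance with (36): $s_k \ge d_k$.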
Let us assume that we have proved that
$$|t_{gl}| < u_0, \qquad 1 \le g \le k-1,\ g < l \le n, \qquad (38)$$
for a constant $u_0$ depending on $t$ and $n$. Then
$$|t_{kl}| \le \frac{|s_{kl}|}{d_k} + \sum_{h=1}^{k-1} \frac{d_h}{d_k}\,|t_{hk}|\,|t_{hl}| < u_1,$$
because of (37) and (38), $u_1$ depending only on $t$ and $n$. It therefore follows that if $u$ is the maximum of $u_0$, $u_1$, $t^2$, then for the elements of $D$ and $T$ in (35) we have
$$0 < d_k < u\,d_{k+1},\ k = 1, \ldots, n-1; \qquad |t_{kl}| < u,\ k < l. \qquad (39)$$
We now define $\bar R_u$ to be the set of points $S \in P$ such that, if $S = D[T]$, where $D = [d_1, \ldots, d_n]$ is a diagonal matrix and $T = (t_{kl})$ is a triangle matrix, then $D$ and $T$ satisfy (39) for this $u$. Since the Jacobi transformation is unique, this point set is well defined.

From what we have seen above, it follows that given $R_t$ there exists a $u = u(t, n)$ such that
$$R_t \subset \bar R_u.$$
Conversely, one sees easily that given $\bar R_u$ there exists a $t = t(u, n)$ such that
$$\bar R_u \subset R_t.$$
In virtue of (34), it follows that
$$\lim_{u \to \infty} \bar R_u = P. \qquad (40)$$
We now prove two lemmas, useful later.

Let $S \in P$ and let $t$ be a real number such that $S \in R_t$. Let $S_0$ denote the matrix
$$S_0 = \begin{pmatrix} s_1 & & 0 \\ & \ddots & \\ 0 & & s_n \end{pmatrix}. \qquad (41)$$
We prove
Lemma 4. There exists a constant $c = c(t, n)$ such that, whatever be the vector $x$,
$$\frac{1}{c}\,S_0[x] \le S[x] \le c\,S_0[x].$$

Proof. Let $P_1$ denote the diagonal matrix $P_1 = [\sqrt{s_1}, \ldots, \sqrt{s_n}]$ and put $W = S[P_1^{-1}]$. In order to prove the lemma, it is enough to show that if $x'x = 1$ then
$$\frac{1}{c} \le W[x] \le c.$$
Let $W = (w_{kl})$. Then $w_{kl} = s_{kl}/\sqrt{s_ks_l}$. Because $S \in R_t$ we have, for $k \le l$,
$$|w_{kl}| = \frac{|s_{kl}|}{s_k}\sqrt{\frac{s_k}{s_l}} < c_1, \qquad (42)$$
where $c_1$ depends only on $t$ and $n$. $W$ being symmetric, it follows that the elements of $W$ are in absolute value less than a constant $c_2 = c_2(t, n)$.

Consider now the characteristic polynomial $f(\lambda) = |\lambda E - W|$. By (42), all the coefficients of the polynomial $f(\lambda)$ are bounded in absolute value by a constant $c_3 = c_3(t, n)$. Also, since $W > 0$, the eigenvalues of $W$ are positive and bounded by a $c_4 = c_4(t, n)$. Let $\lambda_1, \ldots, \lambda_n$ be these eigenvalues. Then
$$\lambda_1 \cdots \lambda_n = |W| = \frac{|S|}{s_1 \cdots s_n} > t^{-1},$$
which means that there exists a constant $c_5 = c_5(t, n)$ such that
$$\lambda_i > c_5(t, n), \qquad i = 1, \ldots, n.$$
(6) then gives the result of Lemma 4.
Next we prove

Lemma 5. If $S \in R_t$ and $S = \begin{pmatrix} S_1 & S_{12} \\ S_{12}' & S_2 \end{pmatrix}$, then $S_1^{-1}S_{12}$ has all its elements bounded in absolute value by a constant depending only on $t$ and $n$.
Proof. By the Jacobi transformation we have $S = D[T]$. Since $R_t \subset \bar R_u$ for $u = u(t, n)$, the elements of $T$ are $\le u$ in absolute value. Write
$$T = \begin{pmatrix} T_1 & T_{12} \\ 0 & T_2 \end{pmatrix}, \qquad D = \begin{pmatrix} D_1 & 0 \\ 0 & D_2 \end{pmatrix},$$
where $T_1$ and $D_1$ have the same number of rows and columns as $S_1$. We have $S_1 = D_1[T_1]$ and $S_{12} = T_1'D_1T_{12}$, so that
$$S_1^{-1}S_{12} = T_1^{-1}T_{12}.$$
Since $T_1$ is a triangle matrix, so is $T_1^{-1}$, and its elements are $\le u_1$ in absolute value, $u_1 = u_1(t, n)$. The elements of $T_{12}$ are already $\le u$. Our lemma is proved.
We are now ready to prove the following important

Theorem 5. Let $S$ and $T$ be two matrices in $R_t$. Let $G$ be an integral matrix such that 1) $S[G] = T$ and 2) $\operatorname{abs}|G| < t$. Then the elements of $G$ are less, in absolute value, than a constant $c$ depending only on $t$ and $n$.

Proof. The constants $c_1, c_2, \ldots$ occurring in the following proof depend only on $t$ and $n$. Also, "bounded" shall mean bounded in absolute value by such constants.

Let $G = (g_{kl})$ and let $g_1, \ldots, g_n$ denote the $n$ columns of $G$. We then have
$$S[g_l] = t_l, \qquad l = 1, \ldots, n.$$
Introducing the positive diagonal matrix $S_0$ of Lemma 4, we obtain
$$S_0[g_l] \le c_1S[g_l] = c_1t_l.$$
But $S_0[g_l] = \sum_k s_kg_{kl}^2$, so that
$$s_kg_{kl}^2 \le c_1t_l, \qquad k, l = 1, \ldots, n. \qquad (43)$$
Consider now the matrix $G$. Since $|G| \ne 0$, there exists in its expansion a non-zero term. That means there is a permutation $l_1, \ldots, l_n$ of $1, 2, 3, \ldots, n$ such that
$$g_{1l_1}g_{2l_2}\cdots g_{nl_n} \ne 0.$$
From (43), therefore, since the $g_{kl_k}$ are non-zero integers, we get
$$s_k \le s_kg_{kl_k}^2 \le c_1t_{l_k}, \qquad k = 1, \ldots, n.$$
Consider now the integers $k, k+1, \ldots, n$ and $l_k, l_{k+1}, \ldots, l_n$. All of the latter cannot be $> k$. So there is an $i \ge k$ such that $l_i \le k$. Hence
$$s_k \le s_i \le c_1t_{l_i}.$$
So, since $T \in R_t$ and $l_i \le k$, we have $t_{l_i} < t^{\,k-l_i}t_k$, and therefore
$$s_k \le c_2t_k, \qquad k = 1, \ldots, n. \qquad (44)$$
On the other hand,
$$\prod_{k=1}^{n} \frac{t_k}{s_k} = \frac{t_1 \cdots t_n}{|T|} \cdot \frac{|S|}{s_1 \cdots s_n} \cdot |G|^2,$$
and all the factors on the right are bounded. Therefore
$$\prod_{k=1}^{n} \frac{t_k}{s_k} < c_3.$$
Using (44), it follows that
$$t_k \le c_4s_k, \qquad k = 1, 2, \ldots, n. \qquad (45)$$
Combining (43) and (45) we have the inequality
$$s_kg_{kl}^2 < c_5s_l, \qquad k, l = 1, \ldots, n. \qquad (46)$$
Let $p$ now be defined to be the largest integer such that
$$s_k \ge c_5s_l, \qquad k \ge p,\ l \le p-1. \qquad (47)$$
If $p = 1$, this condition is empty. From the definition of $p$ it follows that for every integer $g$ with $p+1 \le g \le n$, there exists a $k_g \ge g$ and an $l_g < g$ such that
$$s_{k_g} < c_5s_{l_g}. \qquad (48)$$
This holds for $p = 1$, but for $p = n$ such $g$ do not exist.

Let $c_6$ be a constant such that
$$s_k < c_6s_l, \qquad k \le l. \qquad (49)$$
This exists since $S \in R_t$. Using (48) and (49) and putting $c_7 = c_5c_6^2$, we have
$$s_g < c_7s_{g-1}, \qquad g \ge p+1. \qquad (50)$$
(49) and (50) give the important inequality
$$\frac{1}{c_8} < \frac{s_k}{s_l} < c_8, \qquad p \le k, l \le n.$$
By (46), therefore, the elements $g_{kl}$ with $k \ge p$, $l \ge p$ are bounded, while by (46) and (47), $g_{kl} = 0$ for $k \ge p$, $l \le p-1$.
In order to prove the theorem we use induction. If $n = 1$, the theorem is trivially true. Assume the theorem therefore proved for $n-1$ instead of $n$. Split $S$ and $T$ in the form
$$S = \begin{pmatrix} S_1 & S_{12} \\ S_{12}' & S_2 \end{pmatrix}, \qquad T = \begin{pmatrix} T_1 & T_{12} \\ T_{12}' & T_2 \end{pmatrix},$$
where $S_1$ and $T_1$ are $p-1$ rowed square matrices. Because $S[G] = T$, we get
$$S_1[G_1] = T_1, \qquad G_1'S_1G_{12} + G_1'S_{12}G_2 = T_{12}. \qquad (55)$$
By the considerations above $G_{21} = 0$, and therefore $|G| = |G_1| \cdot |G_2|$. Since $G$ is integral, it follows that $\operatorname{abs}|G_1| < t$. Also, $S_1$ and $T_1$ are $p-1$ rowed square matrices which are in $R_{t,p-1}$, where $R_{t,p-1}$ is the same as $R_t$ with $p-1$ instead of $n$. By the induction hypothesis and (55) we see that $G_1$ is bounded.

Using the fact that $G_1'S_1 = T_1G_1^{-1}$, we get
$$G_{12} = G_1T_1^{-1}T_{12} - S_1^{-1}S_{12}G_2.$$
Using Lemma 5, it follows that the elements of $G_{12}$ are bounded.

Our theorem is completely proved.
In particular, we have the

Corollary. If $S$ and $T$ are in $R_t$ and $S[U] = T$ for a unimodular $U$, then $U$ belongs to a finite set of unimodular matrices determined completely by $t$ and $n$.
5 Space of reduced matrices

We have seen that, given any matrix $T > 0$, there exists in the class of $T$ a half-reduced matrix $S$. Consider now the $2^n$ unimodular matrices of the form
$$A = \begin{pmatrix} a_1 & & 0 \\ & \ddots & \\ 0 & & a_n \end{pmatrix}$$
where $a_i = \pm 1$. If $S$ is half-reduced, then $S[A]$ also is half-reduced. For, if $x = (x_1, \ldots, x_n)'$ is an admissible $k$-vector, then $Ax = (a_1x_1, \ldots, a_nx_n)'$ is also an admissible $k$-vector. Also, the diagonal elements of $S$ and $S[A]$ are the same. We shall choose $A$ properly, so that $S[A]$ satisfies some further conditions.
Since $S[A] = S[-A]$, there is no loss of generality if we assume $a_1 = 1$. Denote by $\alpha_1, \ldots, \alpha_n$ the $n$ columns of the matrix $A$. Consider now $\alpha_1'S\alpha_2$. This equals $a_2s_{12}$. If $s_{12} \ne 0$, choose $a_2$ so that
$$a_2s_{12} \ge 0.$$
If $s_{12} = 0$, $a_2$ may be chosen arbitrarily. Having chosen $a_1, \ldots, a_k$, consider $\alpha_k'S\alpha_{k+1} = a_ka_{k+1}s_{k\,k+1}$. Since $a_k$ has been chosen, we choose $a_{k+1} = \pm 1$ by the condition
$$a_ka_{k+1}s_{k\,k+1} \ge 0,$$
provided $s_{k\,k+1} \ne 0$. If $s_{k\,k+1} = 0$, $a_{k+1}$ may be arbitrarily chosen. We have thus shown that in each class of equivalent matrices there is a matrix $S$ satisfying

$\alpha$) $s_1 > 0$;

$\beta$) $s_{k\,k+1} \ge 0$, $k = 1, \ldots, n-1$;

$\gamma$) $S[x] - s_k \ge 0$, $k = 1, \ldots, n$, for every admissible $k$-vector $x$.
We shall call a matrix satisfying the above conditions a reduced matrix, reduced in the sense of Minkowski. Let $R$ denote the set of reduced matrices; then
$$R \subset R_0. \qquad (56)$$
Since the elements of $S \in P$ are the coordinates of the point $S$, the conditions $\beta$) and $\gamma$) above show that $R$ is defined by the intersection of an infinity of closed half spaces of $P$. We shall denote the linear functions in $\beta$) and $\gamma$) by $L_r$, $r = 1, 2, 3, \ldots$. It is to be noted that we exclude the case when an $L_r$ is identically zero. This happens when, in
$\gamma$), $x$ is the admissible $k$-vector equal to $\pm e_k$. We may therefore say that $R$ is defined by
$$s_1 > 0; \qquad L_r \ge 0, \quad r = 1, 2, 3, \ldots \qquad (57)$$
We shall see presently that this infinite system of linear inequalities can be replaced by a finite number of them.

In order to study some properties of the reduced space $R$, we first make some definitions.

Definition. i) $S$ is said to be an inner point of $R$ if $s_1 > 0$ and $L_r(S) > 0$ for all $r$.

ii) It is said to be a boundary point of $R$ if $s_1 > 0$, $L_r(S) \ge 0$ for all $r$, and $L_r(S) = 0$ for at least one $r$.

iii) It is said to be an outer point of $R$ if $s_1 > 0$ and $L_r(S) < 0$ for at least one $r$.

We first show that $R$ has inner points.

Consider the quadratic form
$$S[x] = x_1^2 + \cdots + x_n^2 + (p_1x_1 + \cdots + p_nx_n)^2,$$
where $p_1, \ldots, p_n$ are $n$ real numbers satisfying
$$0 < p_1 < p_2 < \ldots < p_n < 1.$$
The matrix $S = (s_{kl})$ is then given by
$$s_k = 1 + p_k^2,\ k = 1, \ldots, n; \qquad s_{kl} = p_kp_l,\ k \ne l.$$
We assert that $S$ is an inner point of $R$. In the first place
$$s_1 > 0, \qquad s_{k\,k+1} = p_kp_{k+1} > 0,\ k = 1, \ldots, n-1.$$
Next, let $x$ be an admissible $k$-vector not equal to $\pm e_k$. Then at least one of $x_k, \ldots, x_n$ has to be different from zero. If at least two of them, say $x_i$ and $x_j$ with $i < j$, are different from zero, then
$$S[x] \ge x_i^2 + x_j^2 \ge 2 > 1 + p_k^2 = s_k.$$
Consider now the outer points of $R$. Let $S$ be one such. Then, at least for one $r$, $L_r(S) < 0$. Since the $L_r$ are linear functions of the coordinates, and hence continuous, we may choose a neighbourhood of $S$ consisting of points for all of which $L_r < 0$. This means that the set of outer points of $R$ is open. Note that here it is enough to deal with one inequality alone, unlike the previous case, where one had to deal with all the $L_r$'s.

Let now $S$ be a boundary point of $R$ and let $S'$ be an inner point. Consider the points $T_\lambda$ defined by
$$T_\lambda = \lambda S' + (1-\lambda)S.$$
These are points on the line joining $S'$ and $S$, and every neighbourhood of $S$ contains points $T_\lambda$ with $\lambda > 0$ and points $T_\lambda$ with $\lambda < 0$.

Consider the points $T_\lambda$ with $0 < \lambda \le 1$. These are the points between $S$ and $S'$. Let $L_r$ be one of the linear polynomials defining $R$. Now $L_r(S) \ge 0$ and $L_r(S') > 0$, for all $r$. Thus
$$L_r(T_\lambda) = \lambda L_r(S') + (1-\lambda)L_r(S) > 0.$$
Hence $T_\lambda$ is an inner point.

Let now $T_\lambda$ be a point with $\lambda < 0$. Since $S$ is a boundary point, there is an $r$ such that $L_r(S) = 0$. For this $r$,
$$L_r(T_\lambda) = \lambda L_r(S') < 0,$$
which proves that $T_\lambda$ is an outer point.

Since linear functions are continuous, the limit of a sequence of points of $R$ is again a point of $R$. This proves

Theorem 7. $R$ is a closed set in $P$, and the boundary points of $R$ constitute the frontier of $R$ in the topology of $P$.
We now prove the following

Theorem 8. Let $S$ and $S'$ be two points of $R$ such that $S[U] = S'$ for a unimodular $U \ne \pm E$. Then $S$ and $S'$ are boundary points of $R$, and $U$ belongs to a finite set of unimodular matrices determined completely by the integer $n$.
Proof. The second part of the theorem follows readily from the corollary to Theorem 5. To prove the first part we consider two cases: (1) $U$ is a diagonal matrix, and (2) $U$ is not a diagonal matrix.

Let $U$ be a diagonal matrix, $U = [a_1, \ldots, a_n]$, with $a_i = \pm 1$. We may assume, since $S[U] = S[-U]$, that $a_1 = 1$. Let $a_{k+1}$ be the first element $= -1$. Then, with the usual notation,
$$s_{k\,k+1}' = -s_{k\,k+1}.$$
But $S$ and $S'$ being points of $R$, we have
$$0 \le s_{k\,k+1}' = -s_{k\,k+1} \le 0,$$
which means that $s_{k\,k+1} = 0 = s_{k\,k+1}'$. Hence $S$ and $S'$ are both boundary points of $R$.

Suppose $U$ is not a diagonal matrix, and denote its columns by $u_1, \ldots, u_n$. Let $u_k$ be the first column different from the corresponding column of a diagonal matrix $[\pm 1, \ldots, \pm 1]$; hence $u_i = \pm e_i$, $i = 1, \ldots, k-1$. (Note that $k$ may very well be equal to 1.) Then
$$U = \begin{pmatrix} D & * \\ 0 & V \end{pmatrix},$$
where $D$ is a diagonal matrix of $k-1$ rows which is unimodular, and $V$ is a unimodular matrix. Furthermore
$$U^{-1} = \begin{pmatrix} D^{-1} & * \\ 0 & V^{-1} \end{pmatrix}$$
is unimodular. Let $w_k$ be the $k$-th column of $U^{-1}$. Then $w_k \ne \pm e_k$. Now
$$s_k' = S[u_k] \ge s_k \quad\text{and}\quad s_k = S'[w_k] \ge s_k',$$
which proves that $S[u_k] - s_k = 0 = S'[w_k] - s_k'$, and therefore $S$ and $S'$ are boundary points of $R$.
Suppose now that $S$ is a boundary point of $R$. By Theorem 7, therefore, there exists a sequence of outer points $S_1, S_2, \ldots$ converging to $S$.
If the suffix $k$ is sufficiently large, then all the $S_k$'s lie in a neighbourhood of $S$. Therefore they are all contained in an $R_t$ for some $t$. For each $k$, let $U_k$ be a unimodular matrix such that $S_k[U_k]$ is in $R$. Since $R \subset R_t$, for all sufficiently large $k$, $S_k$ and $S_k[U_k]$ are both in $R_t$. It follows, therefore, by Theorem 5, that the $U_k$'s belong to a finite set of matrices. There exists therefore a subsequence $S_{k_1}, S_{k_2}, \ldots$ converging to $S$ such that one unimodular matrix $U$, among these finitely many, carries each $S_{k_i}$ into $R$. Also $\lim_{i} S_{k_i} = S$, and therefore $\lim_i S_{k_i}[U] = S[U]$ is a point of $R$. Since $S$ is a point of $R$, it follows from the above theorem that $S[U]$ is also a boundary point of $R$. Furthermore, $U \ne \pm E$, since the $S_{k_i}$ are all outer points and $S_{k_i}[U] \in R$. Hence

Theorem 9. If $S$ is a boundary point of $R$, there exists a unimodular matrix $U \ne \pm E$, belonging to the finite set determined by Theorem 8, such that $S[U]$ is again a boundary point of $R$.
By Theorem 8, there exist finitely many unimodular matrices, say $U_1, \ldots, U_g$, which occur in the transformation of boundary points into boundary points. If $u_k$ is the $k$-th column of one of these matrices, then $u_k$ is an admissible $k$-vector. Suppose it is not $\pm e_k$. Then for all $S \in R$, $S[u_k] - s_k \ge 0$. Let us denote by $L_1, L_2, \ldots, L_h$ all the linear forms, not identically zero, which result from all the $u_k$'s, $k = 1, \ldots, n$, occurring in the set $U_1, \ldots, U_g$. Let $L_1, \ldots, L_h$ also include the linear forms $s_{k\,k+1}$, $k = 1, \ldots, n-1$; then, from the above, we see that for a boundary point $S$ of $R$ there is an $r \le h$, with $L_r$ not identically zero, such that $L_r(S) = 0$. Also, for all points of $R$,
$$s_1 > 0,\quad L_1(S) \ge 0,\ \ldots,\ L_h(S) \ge 0. \qquad (59)$$
But, what is more important, we have

Theorem 10. A point $S$ of $P$ belongs to $R$ if and only if $s_1 > 0$ and $L_r(S) \ge 0$ for $r = 1, \ldots, h$.

Proof. The interest in the theorem is in the sufficiency of the conditions (59).
Let $S$ be a point of $P$ satisfying (59). Suppose $S$ is not in $R$. Since it is in $P$, it is an outer point of $R$. Therefore $L_r(S) < 0$ for some $r > h$.
Let $S'$ be an inner point of $R$. Consider the points
$$T_\lambda = \lambda S' + (1-\lambda)S$$
for $0 < \lambda < 1$, in the open segment joining $S$ and $S'$. Since the set of inner points of $R$ is open and $S$ is assumed to be an outer point, there exists a $\lambda_0$, $0 < \lambda_0 < 1$, such that $T_{\lambda_0}$ is on the frontier of $R$. By our remarks above, there exists for $T_{\lambda_0}$ an $s \le h$ such that $L_s(T_{\lambda_0}) = 0$. This means that
$$0 = L_s(T_{\lambda_0}) = \lambda_0L_s(S') + (1-\lambda_0)L_s(S).$$
But $\lambda_0L_s(S') > 0$ and $(1-\lambda_0)L_s(S) \ge 0$, so that $L_s(T_{\lambda_0}) > 0$. This is a contradiction. Therefore $S \in R$.

We have therefore proved that $R$ is bounded by a finite number of planes, all passing through the origin. $R$ is thus a pyramid.

Let now $\bar R$ denote the closure of $R$ in the space $R_h$. At every point $S$ of $\bar R$ one has, because of the continuity of linear functions,
$$s_1 \ge 0, \qquad L_r(S) \ge 0, \quad r = 1, 2, 3, \ldots$$
If $S \in \bar R$ but not in $R$, then $s_1 = 0$. In virtue of the other inequalities we see that
$$S = \begin{pmatrix} 0 & 0 \\ 0 & S_1 \end{pmatrix}.$$
$S_1$ again has similar properties. Thus either $S = 0$ or
$$S = \begin{pmatrix} 0 & 0 \\ 0 & S_k \end{pmatrix},$$
where $S_k$ is non-singular and is a reduced matrix of order $r$, $0 < r < n$. We thus see that the points of $\bar R$ which are not in $R$ are the semi-positive reduced matrices.
Consider now the space $P$ and the group $\Gamma$. If $U \in \Gamma$, the mapping $S \to S[U]$ is topological and takes $P$ onto itself. For $U \in \Gamma$, denote by $R_U$ the set of matrices $S[U]$ with $S \in R$. Because $U$ and $-U$ lead to the same mapping, we have $R_U = R_{-U}$. Since in every class of matrices there is a reduced matrix, we see that
1) $\bigcup_U R_U = P$, where in the union we identify $U$ and $-U$. Thus the $R_U$'s cover $P$ without gaps.

Let $U$ and $V$ be in $\Gamma$ with $U \ne \pm V$. Consider the intersection of $R_U$ and $R_V$. Let $S \in R_U \cap R_V$. Then $T_1 = S[U^{-1}]$ and $T_2 = S[V^{-1}]$ are both points of $R$. Moreover, $T_1 = T_2[VU^{-1}]$ and $VU^{-1} \ne \pm E$, so that $T_1$ is a boundary point of $R$. Since the mapping $S \to S[U]$ is topological, $S$ is a boundary point of $R_U$, and also of $R_V$. Hence

2) If $UV^{-1} \ne \pm E$ and $U$ and $V$ are unimodular, then $R_U$ and $R_V$ can have at most boundary points in common.

In particular, if $U \ne \pm E$, $R$ and $R_U$ can have only boundary points in common. If $S \in R \cap R_U$, then $S$ and $S[U^{-1}]$ are in $R$, and by Theorem 8, $U$ belongs to a finite set of matrices depending only on $n$. If we call $R_U$ a neighbour of $R$ whenever $R \cap R_U$ is not empty, then we have proved

3) $R$ has only finitely many neighbours.

Let $K$ now be a compact subset of $P$. It is therefore bounded in $P$, and hence there exists a $t > 0$ such that $K \subset R_t$. Suppose $R_U$, for a unimodular $U$, intersects $K$. Let $S \in R_U \cap K$. There is then a $T \in R$ such that $T[U] = S$. For large $t$, $R \subset R_t$. Then $T$ and $S$ are both in $R_t$ and $S = T[U]$. Therefore $U$ belongs to a finite set of matrices. Hence there exist a finite number of unimodular matrices, say $U_1, \ldots, U_p$, such that
$$K \subset \bigcup_{i=1}^{p} R_{U_i}.$$
Hence

4) Every compact subset of $P$ is covered by a finite number of images $R_U$ of $R$.
We have thus obtained the fundamental results of Minkowski's reduction theory.
We now give a simple application.

Suppose $S$ is a positive, reduced, integral matrix. Then since $s_1s_2 \cdots s_n \le b_n|S|$, the $s_1, \ldots, s_n$ are positive integers, and $b_n$ depends only on $n$, it follows that for a given $|S|$ there exist only finitely many integer values for $s_1, \ldots, s_n$. Also
$$-s_k \le 2s_{kl} \le s_k, \qquad k < l,$$
so that, the $s_{kl}$ being integers, there are only finitely many values of $s_{kl}$ satisfying the above inequalities. We have therefore the

Theorem 11. There exist only finitely many positive, integral, reduced matrices with a given determinant and number of rows.

Since all matrices in a class have the same determinant, and in each class there is at least one reduced matrix, we get the

Theorem 12. There exist only a finite number of classes of positive integral matrices with given determinant and number of rows.

It has to be noticed that, in virtue of property 3) above, one has, in general, only one reduced matrix in a class.
6 Binary forms

We now study the particular case $n = 2$.

Let $S = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ be a positive binary matrix and $x = \binom{x}{y}$ a vector. The quadratic form $S[x] = ax^2 + 2bxy + cy^2$ is positive definite. By the results of the previous section we see that, if $S$ is reduced, then
$$a > 0, \qquad 0 \le 2b \le a \le c. \qquad (60)$$
We shall now prove that any matrix $S$ satisfying (60) is reduced.

Let $x = \binom{x}{y}$ be an admissible one-vector. If $y = 0$, then $x = \pm 1$. If $y \ne 0$, then $x$ and $y$ are coprime integers. Consider the value $ax^2 + 2bxy + cy^2$ for admissible one-vectors. We assert that $ax^2 + 2bxy + cy^2 \ge a$. In the first case $S[x] = a$. In the second case, because of (60),
$$ax^2 + 2bxy + cy^2 \ge a(x^2 - |xy| + y^2).$$
But $x$ and $y$ are not both zero. Thus $x^2 - |xy| + y^2 \ge 1$, which means that $S[x] \ge a$.

Let now $x = \binom{x}{y}$ be an admissible two-vector. Then $y = \pm 1$. If $x = 0$, then $S[x] = c$. Let $x \ne 0$; then
$$S[x] = ax^2 \pm 2bx + c = c + x(ax \pm 2b).$$
Because of (60), it follows that $x(ax \pm 2b) \ge 0$. Thus $S$ satisfies conditions I) and II) of half reduction. Also $b \ge 0$. This proves that $S$ is positive and reduced.

(60) thus gives the necessary and sufficient conditions for a binary quadratic form to be reduced.
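The passage from an arbitrary positive binary form to an equivalent one satisfying (60) can be made effective. The following sketch (an illustration added here, not part of the original notes) applies the unimodular substitutions $x \to x + ty$ and the swap of the two variables until (60) holds; the determinant $ac - b^2$ is preserved throughout.

```python
def reduce_binary(a, b, c):
    # Reduce the positive form a x^2 + 2 b x y + c y^2 to an equivalent
    # form satisfying (60): a > 0, 0 <= 2b <= a <= c.
    assert a > 0 and a * c - b * b > 0       # positive definite
    while True:
        t = round(-b / a)                    # x -> x + t y brings |2b| <= a
        c = a * t * t + 2 * b * t + c
        b = b + a * t
        if a > c:                            # swap the variables (with a sign flip)
            a, b, c = c, -b, a
            continue
        break
    if b < 0:                                # (x, y) -> (x, -y)
        b = -b
    return a, b, c
```

For instance, the form $5x^2 + 4xy + y^2$ of determinant 1 reduces to $x^2 + y^2$, in accordance with result i) below.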
In the theory of binary quadratic forms one sometimes discusses equivalence not under all unimodular matrices, but only with respect to those unimodular matrices whose determinant is unity. We say that two binary matrices $S$ and $T$ are properly equivalent if there is a unimodular matrix $U$ such that
$$S = T[U], \qquad |U| = 1. \qquad (61)$$
The properly equivalent matrices constitute a proper class. Note that the properly unimodular matrices form a group. Two matrices $S$ and $T$ which are equivalent in the sense of the previous sections, but which do not satisfy (61), are said to be improperly equivalent. Note that improper equivalence is not an equivalence relation.

In order to obtain the reduction theory for proper equivalence we proceed thus: If $S_1 = \begin{pmatrix} a_1 & b_1 \\ b_1 & c_1 \end{pmatrix}$ is positive, then there is a unimodular matrix $U$ such that $S = S_1[U] = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ satisfies (60). If $|U| = 1$ we call $S$ a properly reduced matrix. If $|U| = -1$, then consider
$$W = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \qquad (62)$$
Then $V = UW$ has the property $|V| = 1$. Now $S[W] = \begin{pmatrix} a & -b \\ -b & c \end{pmatrix}$, and we call this properly reduced. In any case we see that $S$ properly reduced means
$$a > 0, \qquad 0 \le |2b| \le a \le c. \qquad (63)$$
If we denote by $R$ the reduced domain, that is, the set of reduced matrices in the old sense, and by $R^*$ the properly reduced domain, one sees immediately that
$$R^* = R + R_W,$$
where $W$ has the meaning in (62).

We shall now give two applications.

Let $S = \begin{pmatrix} a & b \\ b & c \end{pmatrix}$ be a positive integral matrix. Because of conditions (63) and the additional condition (31), it follows that for given $|S|$ there exist only finitely many properly reduced integral matrices. Consider now the case $|S| = 1$. Then, because of (31),
$$ac \le \frac{4}{3} \qquad (64)$$
and hence the only integers $a$, $b$, $c$ satisfying (63) and (64) are $a = c = 1$, $b = 0$. This proves

i) Every binary integral positive quadratic form of determinant unity is properly equivalent to $x^2 + y^2$.

Let now $p$ be a prime number $> 2$, and let $p$ be representable by the quadratic form $x^2 + y^2$. We assert that then $p \equiv 1 \pmod 4$. For, if $x$ and $y$ are integers such that
$$x^2 + y^2 = p,$$
then $x$ and $y$ cannot be congruent to each other mod 2. So let $x$ be odd and $y$ even. Then $p = x^2 + y^2 \equiv 1 \pmod 4$.

We will now prove that, conversely, if $p \equiv 1 \pmod 4$, the form $x^2 + y^2$ represents $p$ (integrally). For, let $\varrho$ be a primitive root mod $p$. There is then an integer $k$, $1 \le k < p-1$, such that
$$\varrho^k \equiv -1 \pmod p.$$
This means that $\varrho^{2k} \equiv 1 \pmod p$, and by the definition of a primitive root we get $k = (p-1)/2$. But $p \equiv 1 \pmod 4$, so that $k$ is an even integer. Therefore
$$-1 \equiv (\varrho^{k/2})^2 \pmod p.$$
There is thus an integer $b$, $1 \le b \le p - 1$, such that $b^2 \equiv -1 \pmod p$. Put
$b^2 = -1 + \lambda p$, $\lambda$ an integer.

Consider the binary form $px^2 + 2bxy + \lambda y^2$. Its determinant is $p\lambda - b^2 = 1$. By the result obtained in i), this form is equivalent to $x^2 + y^2$.
But $px^2 + 2bxy + \lambda y^2$ represents $p$ $(x = 1,\ y = 0)$. Therefore $x^2 + y^2$
represents $p$. Thus

ii) If $p$ is a prime $> 2$, then $x^2 + y^2 = p$ has a solution if and only if
$p \equiv 1 \pmod 4$.

Results i) and ii) are due originally to Lagrange.
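The proof of ii) is effective and can be sketched in code (our sketch: the Euclidean descent used to extract $x$ and $y$ is Cornacchia's classical method, substituted for the form-reduction argument of the text). First $b$ with $b^2 \equiv -1 \pmod p$ is produced as $g^{(p-1)/4}$ for a suitable $g$, exactly as in the proof; then $x, y$ are read off.

```python
import math

def two_squares(p):
    # Write a prime p ≡ 1 (mod 4) as x^2 + y^2.
    assert p % 4 == 1
    # Step 1 (as in the text): b = g^((p-1)/4) satisfies b^2 ≡ -1 (mod p)
    # exactly when g is a quadratic non-residue, e.g. a primitive root.
    b = 0
    for g in range(2, p):
        b = pow(g, (p - 1) // 4, p)
        if (b * b) % p == p - 1:
            break
    # Step 2 (Cornacchia's descent, replacing the reduction of the form
    # p x^2 + 2b xy + λ y^2): the first Euclidean remainder below
    # sqrt(p) is x.
    r0, r1 = p, b
    while r1 * r1 >= p:
        r0, r1 = r1, r0 % r1
    x = r1
    y = math.isqrt(p - x * x)
    assert x * x + y * y == p
    return x, y
```

For instance `two_squares(13)` returns `(3, 2)` and `two_squares(29)` returns `(5, 2)`.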
Let $S[x] = ax^2 + 2bxy + cy^2$ be a real, positive, binary quadratic
form. We can write
$$S[x] = a(x - \omega y)(x - \bar\omega y) \tag{65}$$
where $\omega$ is a root, necessarily complex, of the polynomial $az^2 + 2bz + c$
and $\bar\omega$ is its conjugate. Let $\omega = \xi + i\eta$ have positive imaginary part.
Let $V = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix}$ be a real matrix of unit determinant and consider the
mapping $S \to S[V]$. Then $S[Vx]$ is given by
$$S[Vx] = a'(x - \omega' y)(x - \bar\omega' y) \tag{66}$$
where $a' = a(\alpha - \gamma\omega)(\alpha - \gamma\bar\omega)$ is necessarily real and positive, and
$$\omega' = V^{-1}(\omega) = \frac{\delta\omega - \beta}{-\gamma\omega + \alpha}. \tag{67}$$
It is easy to see that $\omega'$ also has positive imaginary part. Let us also
observe that
$$\omega = \frac{-b + i\sqrt{|S|}}{a}.$$

Consider now the relationship between $S$ and $\omega$. If $S$ is given, then
(65) determines a $\omega$ with positive imaginary part. Now given $\omega$, (65)
itself shows that $S$ is determined only up to a real factor. This real factor
can be determined by insisting that the associated quadratic forms have
a given determinant. In particular, if $|S| = 1$ then $\omega$ is uniquely
determined by $S$ and conversely. If $\omega = \xi + i\eta$, $\eta > 0$, then the $S$ is given
by
$$S = \begin{pmatrix} \eta^{-1} & 0 \\ 0 & \eta \end{pmatrix}\!\left[\begin{pmatrix} 1 & -\xi \\ 0 & 1 \end{pmatrix}\right] \tag{68}$$
Let $P$ denote the space of positive binary forms of unit determinant
and $G$ the upper half complex $\omega$-plane. By what we have seen above, the
mapping $S \to \omega$ in (65) is (1, 1) both ways. Let $\Gamma$ denote the group of
proper unimodular matrices. It acts on $G$ as a group of mappings
$$U(\omega) = \frac{\alpha\omega + \beta}{\gamma\omega + \delta}, \qquad U = \begin{pmatrix} \alpha & \beta \\ \gamma & \delta \end{pmatrix} \in \Gamma \tag{69}$$
of $G$ onto itself. If we define two points $\omega_1, \omega_2$ in $G$ as equivalent if
there is a $U \in \Gamma$ such that $\omega_1 = U(\omega_2)$, then the classical problem of
constructing a fundamental region in $G$ for $\Gamma$ is seen to be the same
as selecting from each class of equivalent points one point so that the
resulting point set has nice properties.

By means of the (1, 1) correspondence we have established in (68)
between $P$ and $G$, we have $S_1 = S_2[U]$ if and only if the corresponding
points $\omega_1, \omega_2$ respectively satisfy
$$\omega_1 = U^{-1}(\omega_2).$$
We define the fundamental region $F$ in $G$ to be the set of points such
that the matrices corresponding to them are properly reduced; in other
words, they satisfy (63). For the $S$ in (68), $S[x] = \frac{1}{\eta}\left(x^2 - 2\xi xy + (\xi^2 + \eta^2)y^2\right)$. Therefore $F$ consists of the points $\omega = \xi + i\eta$ for which
$$|2\xi| \le 1, \qquad \xi^2 + \eta^2 \ge 1. \tag{70}$$
This is the familiar modular region in the upper half $\omega$-plane. That
it is a fundamental region follows from the properties of the space of
reduced matrices in $P$. The points $P$ and $Q$ are the complex numbers
$\frac{\pm 1 + i\sqrt{3}}{2}$, and so for any point $\omega$ in $F$, $\eta \ge \frac{\sqrt{3}}{2}$. This means that for a
positive reduced binary form $ax^2 + 2bxy + cy^2$ of determinant $d$,
$$a \le 2\sqrt{\frac{d}{3}},$$
which we had already seen in Theorem 1.
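The correspondence (65)–(68) between forms and points of the upper half-plane, together with the membership test (70), is easy to check numerically. The following sketch is ours (the function names are hypothetical); forms are given by their coefficient triples $(a, b, c)$.

```python
def omega_of(a, b, c):
    # The root of a z^2 + 2b z + c with positive imaginary part, as in
    # (65): omega = (-b + i*sqrt(|S|)) / a, where |S| = ac - b^2.
    d = a * c - b * b
    assert a > 0 and d > 0
    return complex(-b, d ** 0.5) / a

def form_of(omega):
    # The unit-determinant form attached to omega = xi + i*eta by (68):
    # S[x] = (1/eta) (x^2 - 2 xi x y + (xi^2 + eta^2) y^2).
    xi, eta = omega.real, omega.imag
    return (1 / eta, -xi / eta, (xi * xi + eta * eta) / eta)

def in_fundamental_region(omega):
    # Condition (70): |2 xi| <= 1 and xi^2 + eta^2 >= 1.
    return abs(2 * omega.real) <= 1 and abs(omega) >= 1
```

The reduced form $x^2 + y^2$ corresponds to $\omega = i$, which lies in the region (70); the non-reduced form $2x^2 + 2xy + y^2$ of determinant 1 gives $\omega = (-1 + i)/2$, which does not.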
7 Reduction of lattices
Let $V$ be the Euclidean space of $n$ dimensions formed by the $n$-rowed real
columns
$$\mathfrak{a} = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}.$$
Let $\mathfrak{a}_1, \ldots, \mathfrak{a}_n$ be a basis of $V$, so that
$$\mathfrak{a}_i = \begin{pmatrix} a_{1i} \\ \vdots \\ a_{ni} \end{pmatrix}, \qquad i = 1, \ldots, n.$$
Denote by $A$ the matrix $(a_{kl})$. Obviously $|A| \ne 0$.
Let $L$ be a lattice in $V$ and let $\mathfrak{a}_1, \ldots, \mathfrak{a}_n$ be a basis of this lattice. $L$
then consists of the elements $\mathfrak{a}_1 g_1 + \cdots + \mathfrak{a}_n g_n$ where $g_1, \ldots, g_n$ are integers.
We shall call $A$ the matrix of the lattice.

Conversely, if $A$ is any non-singular $n$-rowed matrix, then the
columns of $A$, as elements of $V$, are linearly independent and therefore
determine a lattice.

Let $L$ be the lattice above, let $\mathfrak{b}_1, \ldots, \mathfrak{b}_n$ be any other base of $L$,
and let $B$ be its matrix; then
$$B = AU$$
where $U$ is a unimodular matrix. Also, if $U$ runs through all unimodular
matrices, then $AU$ runs through all bases of $L$. We now wish to single
out among these bases one which has some distinguished properties.
Let us introduce in $V$ the inner product of two vectors $\mathfrak{a}$ and $\mathfrak{b}$ by
$$\mathfrak{a}'\mathfrak{b} = a_1 b_1 + \cdots + a_n b_n$$
where $\mathfrak{a} = \begin{pmatrix} a_1 \\ \vdots \\ a_n \end{pmatrix}$, $\mathfrak{b} = \begin{pmatrix} b_1 \\ \vdots \\ b_n \end{pmatrix}$. The square of the length of the vector $\mathfrak{a}$ is
given by
$$\mathfrak{a}^2 = a_1^2 + \cdots + a_n^2.$$
Let $A$ be the matrix of a base $\mathfrak{a}_1, \ldots, \mathfrak{a}_n$ of $L$. Consider the positive
matrix $S = A'A$. If $S$ is given, $A$ is determined only up to an orthogonal
matrix $P$ on its left. For, if $A'A = A_1'A_1$, then $AA_1^{-1} = P$ is orthogonal.
But multiplication on the left by an orthogonal matrix implies a rotation
in $V$ about the origin.
We shall call a base $B$ of $L$, with columns $\mathfrak{b}_1, \ldots, \mathfrak{b}_n$, reduced if $S_1 = B'B$ is a reduced matrix.
Obviously in this case
$$0 < \mathfrak{b}_1^2 \le \cdots \le \mathfrak{b}_n^2, \qquad \mathfrak{b}_k'\mathfrak{b}_{k+1} \ge 0, \quad k = 1, \ldots, n-1.$$
From the way reduced matrices are determined, we see that a reduced
base $\mathfrak{a}_1, \ldots, \mathfrak{a}_n$ of $L$ may be defined to be a base such that for every set
of integers $x_1, \ldots, x_n$ with $(x_k, \ldots, x_n) = 1$ the vector
$$\mathfrak{a} = \mathfrak{a}_1 x_1 + \cdots + \mathfrak{a}_n x_n$$
satisfies
$$\mathfrak{a}^2 \ge \mathfrak{a}_k^2 \qquad (k = 1, \ldots, n).$$
Also
$$\mathfrak{a}_k'\mathfrak{a}_{k+1} \ge 0 \qquad (k = 1, \ldots, n-1).$$
It follows therefore that
$$\mathfrak{a}_1^2 \cdots \mathfrak{a}_n^2 \le c_n|A'A| = c_n|A|^2,$$
$c_n$ being a constant depending only on $n$. Also $\operatorname{abs}|A|$ is the volume of
the parallelepiped formed by the vectors $\mathfrak{a}_1, \ldots, \mathfrak{a}_n$.
Consider the case $n = 2$. We have, because of (30),
$$\mathfrak{a}_1^2\,\mathfrak{a}_2^2 \le \frac{4}{3}|A|^2. \tag{72}$$
Let now $\theta$ denote the acute angle between the vectors $\mathfrak{a}_1$ and $\mathfrak{a}_2$.
Since the area of the parallelogram formed by $\mathfrak{a}_1$ and $\mathfrak{a}_2$ on the one hand
equals $\operatorname{abs}|A|$ and on the other $\sqrt{\mathfrak{a}_1^2}\sqrt{\mathfrak{a}_2^2}\sin\theta$, we see that
$$\sin^2\theta \ge \frac{3}{4}. \tag{73}$$
Since $0 \le \theta \le \frac{\pi}{2}$, it follows from (73) that
$$\sin\theta \ge \frac{\sqrt{3}}{2}.$$
Hence for a two-dimensional lattice we may choose a basis in such a
manner that the acute angle between the basis vectors lies between $60^\circ$
and $90^\circ$.
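This two-dimensional statement is easy to verify experimentally. The sketch below is ours: `lagrange_reduce` implements the classical Lagrange–Gauss basis reduction, which for $n = 2$ produces a reduced basis in the above sense, and `acute_angle_deg` measures the angle between the resulting vectors.

```python
import math

def lagrange_reduce(a1, a2):
    # Lagrange-Gauss reduction of a planar lattice basis (vectors as
    # pairs): repeatedly subtract the nearest integer multiple of the
    # shorter vector from the longer one.
    while True:
        if a1[0] ** 2 + a1[1] ** 2 > a2[0] ** 2 + a2[1] ** 2:
            a1, a2 = a2, a1
        t = round((a1[0] * a2[0] + a1[1] * a2[1]) /
                  (a1[0] ** 2 + a1[1] ** 2))
        if t == 0:
            return a1, a2
        a2 = (a2[0] - t * a1[0], a2[1] - t * a1[1])

def acute_angle_deg(a1, a2):
    # Acute angle between two vectors, in degrees.
    dot = abs(a1[0] * a2[0] + a1[1] * a2[1])
    cos_t = min(1.0, dot / (math.hypot(*a1) * math.hypot(*a2)))
    return math.degrees(math.acos(cos_t))
```

Reducing the skew basis $(1, 0), (4, 1)$ yields $(1, 0), (0, 1)$, with an angle of $90^\circ$; for the hexagonal basis $(1, 0), (\tfrac12, \tfrac{\sqrt3}{2})$ the angle is exactly $60^\circ$, the extreme case allowed by (73).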
Bibliography

[1] C.F. Gauss, Disquisitiones Arithmeticae, Ges. Werke 1, (1801).

[2] C. Hermite, Oeuvres, Vol. 1, Paris (1905), pp. 94-273.

[3] H. Minkowski, Geometrie der Zahlen, Leipzig (1896).

[4] H. Minkowski, Diskontinuitätsbereich für arithmetische Äquivalenz, Ges. Werke, Bd. 2, (1911), pp. 53-100.

[5] C.L. Siegel, Einheiten quadratischer Formen, Abh. Math. Sem. Hansischen Univ. 13, (1940), pp. 209-239.
Chapter 3
Indefinite quadratic forms
1 Discontinuous groups
In the previous chapter we met with the situation in which a group
of transformations acts on a topological space, and we constructed, by
a certain method, a subset of this space which has some distinguished
properties relative to the group. We shall now study the following general
situation.

Let $\Delta$ be an abstract group and $T$ a Hausdorff topological space on
which $\Delta$ has a representation
$$t \to t\delta, \qquad t \in T, \ \delta \in \Delta, \tag{1}$$
carrying $T$ into itself. We say that this representation of $\Delta$ is discontinuous
if for every point $t \in T$ the set of points $\{t\delta\}$, $\delta \in \Delta$, has no
limit point in $T$. The problem now is to determine, for a given $\Delta$, all the
spaces $T$ on which $\Delta$ has a discontinuous representation. For an arbitrarily
given group this problem can be very difficult. We shall, therefore,
impose certain restrictions on $\Delta$ and $T$. Let us assume that there is a
group $\Omega$ of transformations of $T$ onto itself which is transitive on $T$.
This means that if $t_1$ and $t_2$ are any two elements of $T$, there exists $\omega \in \Omega$ such that
$$t_1\omega = t_2. \tag{2}$$
Let us further assume that $\Delta$ is a subgroup of $\Omega$. Let $t_0$ be a point in $T$
and consider the subgroup $\Theta$ of $\Omega$ consisting of the $\theta$ such that
$$t_0\theta = t_0. \tag{3}$$
If $t$ is any point of $T$ we have, because of transitivity,
$$t = t_0\omega$$
for some $\omega \in \Omega$. Because of (3), we get
$$t = t_0\theta\omega$$
for every $\theta \in \Theta$. Conversely, if $\omega_1$ is such that $t_0\omega_1 = t$, then $t_0\omega_1\omega^{-1} = t_0$, so that $\omega_1 \in \Theta\omega$.
Thus every point $t \in T$ determines a coset $\Theta\omega$ of $\Theta\backslash\Omega$, that is, of the space of
right cosets of $\Omega$ modulo $\Theta$. Conversely, if $\Theta\omega$ is any such coset, then $t = t_0\omega$
is a point determined by it. Hence the mapping
$$t \to \Theta\omega \tag{4}$$
of $T$ on $\Theta\backslash\Omega$ is (1, 1) both ways. In order to make this correspondence
topological, let us study the following situation.

Let $\Omega$ be a locally compact topological group and $T$ a Hausdorff topological
space on which $\Omega$ has a representation
$$t \to t\omega \tag{5}$$
as a transitive group of mappings. Let us assume that this representation
is open and continuous. We recall that (5) is said to be open if for every
open set $P$ in $\Omega$ and every $t \in T$ the set $\{t\omega\}$, $\omega \in P$, is an open set in
$T$. It then follows that the subgroup $\Theta$ of $\Omega$ leaving $t_0 \in T$ fixed is
not only a closed subgroup but that the mapping (4) of $T$ on $\Theta\backslash\Omega$ is a
homeomorphism.

Let $\Delta$ be a subgroup of $\Omega$ which has on $T$ a discontinuous representation.
Then $\Delta$ has trivially a representation on $\Theta\backslash\Omega$. By the remarks
above, the representation
$$\Theta\omega \to \Theta\omega\delta, \qquad \delta \in \Delta, \tag{6}$$
is discontinuous in $\Theta\backslash\Omega$.

On the other hand, let $\Theta$ be any closed subgroup of $\Omega$. Then the
representation
$$\Theta\omega \to \Theta\omega\omega_1, \qquad \omega_1 \in \Omega,$$
of $\Omega$ on $\Theta\backslash\Omega$ is open and continuous. It is clearly transitive. In order,
therefore, to find all spaces on which $\Delta$ has a discontinuous representation,
it is enough to consider the spaces of right cosets of $\Omega$ with regard
to closed subgroups of $\Omega$.

Suppose $\Theta$ is a closed subgroup of $\Omega$ and $\Delta$ has a discontinuous
representation on $\Theta\backslash\Omega$. Let $K$ be a closed subgroup of $\Omega$ contained in
$\Theta$. Then $\Delta$ has a discontinuous representation on $K\backslash\Omega$. For, if $K\omega$ is
a coset such that the set of cosets $\{K\omega\delta\}$, $\delta \in \Delta$, has a limit point in
$K\backslash\Omega$, then the set $\{\Theta\omega\delta\}$, $\delta \in \Delta$, also has a limit point in $\Theta\backslash\Omega$, and so (6)
would not be discontinuous. In particular, if we take for $K$ the subgroup
consisting only of the identity element $e$, then the statement that $\Delta$ is discontinuous in $\Omega$ is
clearly equivalent to the statement that $\Delta$ is a discrete subgroup of $\Omega$.

Thus if there exists some subgroup $\Theta$ of $\Omega$ such that $\Delta$ is discontinuous
in $\Theta\backslash\Omega$, then $\Delta$ necessarily has to be discrete. It can be proved that
if $\Omega$ has a countable basis of open sets, then $\Delta$ is enumerable.
Suppose now that $\Omega$ is a locally compact group with a countable basis
of open sets. Let $\Delta$ be a discrete subgroup of $\Omega$. If $\Theta$ is any compact,
hence closed, subgroup of $\Omega$ then it follows that the representation (6)
of $\Delta$ in $\Theta\backslash\Omega$ is discontinuous. This can be seen by assuming that for a
certain $\omega$ the set $\{\Theta\omega\delta_n\}$, $\delta_n \in \Delta$, has a limit point; this leads to a
contradiction because of the discreteness of $\Delta$.

In general, the fact that (6) is discontinuous in $\Theta\backslash\Omega$ does not entail that
$\Theta$ is compact. Let us, therefore, consider the following situation.

Let $\Omega$ be a locally compact group possessing a countable basis of
open sets. Then there exists in $\Omega$ a right invariant Haar measure $d\omega$
which is determined uniquely up to a positive multiplicative factor. Let
$\Delta$ be a discrete subgroup of $\Omega$. There exists then in $\Omega$ a subset $F$ possessing
the following properties: 1) $\bigcup_{\delta \in \Delta} F\delta = \Omega$; 2) the sets $F\delta$, $\delta \in \Delta$, are
mutually disjoint; and 3) $F$ is measurable in terms of the Haar measure
$d\omega$. $F$ is then said to be a fundamental set relative to
$\Delta$. Note that if $F$ is a fundamental set then so is $aF$ for any $a \in \Omega$, so that a fundamental
set is not unique. 1) and 2) assert that $F$ intersects each coset $\omega\Delta$
in exactly one point, so that $F$ has to be formed in $\Omega$ by choosing one
element from each such coset. The interesting point is that, under the
conditions on $\Omega$, this can be done in such a way that the resulting set $F$
is measurable. Let us now assume that
$$\int_F d\omega < \infty. \tag{7}$$
It can then be shown that the value of the integral in (7) is independent
of the choice of $F$. We now state, without proof, the important

Theorem 1. Let $\Omega$ be a locally compact topological group with a countable
basis of open sets. Let $\Delta$ be a discrete subgroup of $\Omega$ and $F$ a fundamental
set in $\Omega$ relative to $\Delta$. Let $F$ have finite Haar measure in $\Omega$. If $\Theta$
is any closed subgroup of $\Omega$, then $\Delta$ has a discontinuous representation
in $\Theta\backslash\Omega$ if and only if $\Theta$ is compact.

The interest in the theorem lies in the necessity part of it.
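A minimal concrete instance (our illustration, not from the notes) is $\Omega = \mathbb{R}$ under addition, $\Delta = \mathbb{Z}$, and $F = [0, 1)$: the translates $F + n$ cover $\mathbb{R}$ disjointly, $F$ has Haar (Lebesgue) measure $1 < \infty$, and with the compact subgroup $\Theta = \{0\}$ the theorem predicts, correctly, that $\mathbb{Z}$ acts discontinuously on $\mathbb{R}$.

```python
import math

# Omega = (R, +), Delta = Z, F = [0, 1): a minimal fundamental set.
def in_F(x):
    return 0.0 <= x < 1.0

# Properties 1) and 2): every real x lies in exactly one translate
# F + n, namely n = floor(x).
for x in (-2.7, -1.0, 0.0, 0.3, 5.9):
    hits = [n for n in range(-10, 11) if in_F(x - n)]
    assert hits == [math.floor(x)]

# Discontinuity of the Z-action: distinct points of an orbit {t + n}
# differ by at least 1, so no orbit has a limit point.
orbit = [0.25 + n for n in range(-5, 6)]
assert min(abs(p - q) for p in orbit for q in orbit if p != q) >= 1.0
```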
Let us assume that $\Omega$ is, as it will be in the applications, a Lie group.
Let $\Delta$ be a discrete subgroup of $\Omega$. For any closed subgroup $\Theta$ of $\Omega$, the
dimensions of $\Theta$, $\Theta\backslash\Omega$ and $\Omega$ are connected by
$$\dim \Theta + \dim \Theta\backslash\Omega = \dim \Omega.$$
If $F$ is a fundamental set in $\Omega$ with regard to $\Delta$ and is of finite measure,
in terms of the invariant measure in $\Omega$, then by Theorem 1, $\Delta$ will be
discontinuous in $\Theta\backslash\Omega$ if and only if $\Theta$ is compact. In order, therefore,
to obtain a space $T = \Theta\backslash\Omega$ of smallest dimension in which $\Delta$ has a
discontinuous representation, one has to consider a $\Theta$ which is compact
and maximal with this property.
Let us consider the following example.

Let $\Omega$ be the group of $n$-rowed real matrices $A$ with $|A| = \pm 1$; $\Omega$ is a Lie group.
Let us determine first all compact subgroups of $\Omega$. Let $K$ be a compact
subgroup of $\Omega$. If $C \in K$, then $|C|$