
Projective Geometry Lecture Notes

Thomas Baird

March 26, 2014

Contents

1 Introduction

2 Vector Spaces and Projective Spaces
  2.1 Vector spaces and their duals
    2.1.1 Fields
    2.1.2 Vector spaces and subspaces
    2.1.3 Matrices
    2.1.4 Dual vector spaces
  2.2 Projective spaces and homogeneous coordinates
    2.2.1 Visualizing projective space
    2.2.2 Homogeneous coordinates
  2.3 Linear subspaces
    2.3.1 Two points determine a line
    2.3.2 Two planar lines intersect at a point
  2.4 Projective transformations and the Erlangen Program
    2.4.1 Erlangen Program
    2.4.2 Projective versus linear
    2.4.3 Examples of projective transformations
    2.4.4 Direct sums
    2.4.5 General position
    2.4.6 The Cross-Ratio
  2.5 Classical Theorems
    2.5.1 Desargues' Theorem
    2.5.2 Pappus' Theorem
  2.6 Duality

3 Quadrics and Conics
  3.1 Affine algebraic sets
  3.2 Projective algebraic sets
  3.3 Bilinear and quadratic forms
    3.3.1 Quadratic forms
    3.3.2 Change of basis
    3.3.3 Digression on the Hessian
  3.4 Quadrics and conics
  3.5 Parametrization of the conic
    3.5.1 Rational parametrization of the circle
  3.6 Polars
  3.7 Linear subspaces of quadrics and ruled surfaces
  3.8 Pencils of quadrics and degeneration

4 Exterior Algebras
  4.1 Multilinear algebra
  4.2 The exterior algebra
  4.3 The Grassmannian and the Plücker embedding
  4.4 Decomposability of 2-vectors
  4.5 The Klein Quadric

5 Extra details

6 The hyperbolic plane

7 Ideas for Projects

1 Introduction

Projective geometry has its origins in Renaissance Italy, in the development of perspective in painting: the problem of capturing a 3-dimensional image on a 2-dimensional canvas. It is a familiar fact that objects appear smaller as they get farther away, and that the apparent angle between straight lines depends on the vantage point of the observer. The more familiar Euclidean geometry is not well equipped to make sense of this, because in Euclidean geometry length and angle are well-defined, measurable quantities independent of the observer. Projective geometry provides a better framework for understanding how shapes change as perspective varies.

The projective geometry most relevant to painting is called the real projective plane, and is denoted RP^2 or P(R^3).

Definition 1. The real projective plane, RP^2 = P(R^3), is the set of 1-dimensional subspaces of R^3.

This definition is best motivated by a picture. Imagine an observer sitting at the origin in R^3 looking out into 3-dimensional space. The 1-dimensional subspaces of R^3 can be understood as lines of sight. If we now situate a (Euclidean) plane P that doesn't contain the origin, then each point in P determines a unique sight line. Objects in (subsets of) R^3 can now be "projected" onto the plane P, enabling us to transform a 3-dimensional scene into a 2-dimensional image. Since 1-dimensional subspaces translate into points on the plane, we call the 1-dimensional subspaces projective points.

Of course, not every projective point corresponds to a point in P, because some 1-dimensional subspaces are parallel to P. Such points are called points at infinity. To motivate this terminology, consider a family of projective points rotating from projective points that intersect P to one that is parallel. The projection onto P becomes a family of points that diverges to infinity and then disappears; but as projective points they converge to a point at infinity. It is important to note that a projective point is only at infinity with respect to some Euclidean plane P.

Figure 1: A 2D representation of a cube

Figure 2: Projection onto a plane

One of the characteristic features of projective geometry is that every pair of distinct projective lines in the projective plane intersects. This runs contrary to the parallel postulate in Euclidean geometry, which says that lines in the plane intersect except when they are parallel. We will see that two lines that appear parallel in a Euclidean plane will intersect at a point at infinity when they are considered as projective lines. As a general rule, theorems about intersections between geometric sets are easier to prove, and require fewer exceptions, when considered in projective geometry.

According to Dieudonné's History of Algebraic Geometry, projective geometry was among the most popular fields of mathematical research in the late 19th century. Projective geometry today is a fundamental part of algebraic geometry, arguably the richest and deepest field in mathematics, of which we will gain a glimpse in this course.

2 Vector Spaces and Projective Spaces

2.1 Vector spaces and their duals

Projective geometry can be understood as the application of linear algebra to classical geometry. We begin with a review of the basics of linear algebra.

2.1.1 Fields

The rational numbers Q, the real numbers R and the complex numbers C are examples of fields. A field is a set F equipped with two binary operations F × F → F called addition and multiplication, denoted + and · respectively, satisfying the following axioms for all a, b, c ∈ F.

1. (commutativity) a + b = b + a, and ab = ba.

2. (associativity) (a + b) + c = a + (b + c) and a(bc) = (ab)c.

3. (identities) There exist 0, 1 ∈ F such that a + 0 = a and a · 1 = a.

4. (distributivity) a(b + c) = ab + ac and (a + b)c = ac + bc.

5. (additive inverses) There exists −a ∈ F such that a + (−a) = 0.

6. (multiplicative inverses) If a ≠ 0, there exists 1/a ∈ F such that a · (1/a) = 1.

Two other well-known operations are best defined in terms of the above properties. For example,

a − b := a + (−b),

and if b ≠ 0 then

a/b := a · (1/b).

We usually illustrate the ideas in this course using the fields R and C, but it is worth remarking that most of the results are independent of the base field F. A perhaps unfamiliar example that is fun to consider is the field of two elements Z/2. This is the field containing two elements Z/2 = {0, 1} subject to addition and multiplication rules

0 + 0 = 0, 0 + 1 = 1 + 0 = 1, 1 + 1 = 0

0 · 0 = 0, 0 · 1 = 1 · 0 = 0, 1 · 1 = 1

and arises as modular arithmetic with respect to 2.

2.1.2 Vector spaces and subspaces

Let F denote a field. A vector space V over F is a set equipped with two operations:

• Addition: V × V → V, (v, w) ↦ v + w.

• Scalar multiplication: F × V → V, (λ, v) ↦ λv,

satisfying the following list of axioms for all u, v, w ∈ V and λ, µ ∈ F.

• (u + v) + w = u + (v + w) (additive associativity)

• u + v = v + u (additive commutativity)

• There exists a vector 0 ∈ V, called the zero vector, such that 0 + v = v for all v ∈ V (additive identity)

• For every v ∈ V there exists −v ∈ V such that v + (−v) = 0 (additive inverses)

• (λ + µ)v = λv + µv (distributivity)

• λ(u + v) = λu + λv (distributivity)

• λ(µv) = (λµ)v (associativity of scalar multiplication)

• 1v = v (identity of scalar multiplication)

I will leave it as an exercise to prove that 0v = 0 and that −1v = −v.

Example 1. The set V = R^n of n-tuples of real numbers is a vector space over F = R in the usual way. Similarly, F^n is a vector space over F. The vector spaces we consider in this course will always be isomorphic to one of these examples.

Given two vector spaces V, W over F, a map φ : V → W is called linear if for all v1, v2 ∈ V and λ1, λ2 ∈ F, we have

φ(λ1v1 + λ2v2) = λ1φ(v1) + λ2φ(v2).


Example 2. A linear map φ from R^m to R^n can be uniquely represented by an n × m matrix of real numbers. In this representation, vectors in R^m and R^n are represented by column vectors, and φ is defined using matrix multiplication. Similarly, a linear map from C^m to C^n can be uniquely represented by an n × m matrix of complex numbers.

We say that a linear map φ : V → W is an isomorphism if φ is one-to-one and onto. In this case, the inverse map φ^{-1} is also linear, hence is also an isomorphism. Two vector spaces are called isomorphic if there exists an isomorphism between them. A vector space V over F is called n-dimensional if it is isomorphic to F^n. V is called finite dimensional if it is n-dimensional for some n ∈ {0, 1, 2, ...}. Otherwise we say that V is infinite dimensional.

A vector subspace of a vector space V is a subset U ⊆ V containing zero which is closed under addition and scalar multiplication. This makes U into a vector space in its own right, and the inclusion map U ↪→ V is linear. Given any linear map φ : V → W, we can define two vector subspaces: the image

φ(V) := {φ(v) | v ∈ V} ⊆ W

and the kernel

ker(φ) := {v ∈ V | φ(v) = 0} ⊆ V.

The dimension of φ(V) is called the rank of φ, and the dimension of ker(φ) is called the nullity of φ. The rank-nullity theorem states that the rank plus the nullity equals the dimension of V. That is, given a linear map φ : V → W,

dim(V ) = dim(ker(φ)) + dim(φ(V )).

Given a set of vectors S := {v1, ..., vk} ⊂ V, a linear combination is an expression of the form

λ1v1 + ... + λkvk

where λ1, ..., λk ∈ F are scalars. The set of vectors that can be written as a linear combination of vectors in S is called the span of S. The set S is called linearly independent if the only linear combination satisfying

λ1v1 + ... + λkvk = 0

is the one with λ1 = λ2 = ... = λk = 0. If some non-trivial linear combination equals zero, then we say S is linearly dependent.

A basis of V is a linearly independent set S ⊂ V that spans V. Every vector space has a basis, and the dimension of a vector space is equal to the number of vectors in any basis of that vector space. If {v1, ..., vn} is a basis of V, then this defines an isomorphism φ : F^n → V by

φ((x1, ..., xn)) = x1v1 + ... + xnvn.

Conversely, given an isomorphism φ : F^n → V, we obtain a basis {v1, ..., vn} ⊂ V by setting vi = φ(ei), where {e1, ..., en} ⊂ F^n is the standard basis. So choosing a basis for V amounts to the same thing as defining an isomorphism from F^n to V.

Proposition 2.1. Suppose that W ⊆ V is a vector subspace. Then dim(W) ≤ dim(V). Furthermore, we have equality dim(W) = dim(V) if and only if W = V.

Proof. Exercise.


2.1.3 Matrices

An m × n-matrix A over the field F is a set of scalars A_{i,j} ∈ F parametrized by i ∈ {1, ..., m} and j ∈ {1, ..., n}, normally presented as a 2-dimensional array for which i and j parametrize rows and columns respectively:

A = \begin{pmatrix} A_{1,1} & A_{1,2} & \cdots & A_{1,n} \\ A_{2,1} & A_{2,2} & \cdots & A_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m,1} & \cdots & \cdots & A_{m,n} \end{pmatrix}.

If A is m × n and B is n × p then the product matrix AB is an m × p matrix with entries

(AB)_{i,j} = \sum_{k=1}^{n} A_{i,k} B_{k,j}.    (1)

The identity matrix I is the n × n-matrix satisfying

I_{i,j} := \begin{cases} 1 & i = j \\ 0 & i \neq j. \end{cases}

An n × n-matrix A is called invertible if there exists an n × n-matrix B such that AB = I = BA. If B exists, it is unique and is called the inverse matrix B = A^{-1}.

The determinant of an n × n-matrix A is the scalar value

\det(A) = \sum_{\sigma \in S_n} (-1)^{\sigma} A_{1,\sigma(1)} \cdots A_{n,\sigma(n)},

where the sum is indexed by permutations σ of the set {1, ..., n} and (−1)^σ = ±1 is the sign of the permutation.

Theorem 2.2. Let A be an n × n-matrix. The following properties are equivalent (i.e. if one holds, they all hold).

• A is invertible.

• A has non-zero determinant.

• A has rank n.

The simplest way to check that A is invertible is usually to check if the rank is n byrow reducing.
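For concreteness, here is a minimal numerical sketch of this equivalence, assuming NumPy is available; the matrix A below is an arbitrary example, not one from the notes.

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [0.0, 1.0, 4.0],
              [5.0, 6.0, 0.0]])

# A is invertible iff det(A) != 0 iff rank(A) = n.
n = A.shape[0]
print(np.linalg.det(A))                  # non-zero determinant (here 1.0)
print(np.linalg.matrix_rank(A) == n)     # full rank: True
print(np.round(np.linalg.inv(A) @ A))    # the identity matrix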

2.1.4 Dual vector spaces

Given any vector space V, the dual vector space V^* is the set of linear maps from V to F = F^1:

V ∗ := {φ : V → F | φ is linear}.


The dual V^* is naturally a vector space under pointwise addition and scalar multiplication:

(φ1 + φ2)(v) = φ1(v) + φ2(v)

and

(λφ)(v) = λ(φ(v)).

Example 3. Suppose V ≅ F^n is the set of n × 1 column vectors. Then V^* is identified with the set of 1 × n row vectors under matrix multiplication.

Suppose that v1, ..., vn ∈ V is a basis. Then the dual basis v1^*, ..., vn^* ∈ V^* is the unique set of linear maps satisfying

vi^*(vj) = δ_{ij},

where δ_{ij} is the Kronecker delta defined by δ_{ii} = 1 and δ_{ij} = 0 if i ≠ j.

Example 4. Let v1, ..., vn ∈ F^n be a basis of column vectors and let A = [v1 ... vn] be the n × n-matrix made up of these columns. The rules of matrix multiplication imply that the dual basis v1^*, ..., vn^* are the rows of the inverse matrix

A^{-1} = \begin{pmatrix} v_1^* \\ v_2^* \\ \vdots \\ v_n^* \end{pmatrix}.
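A quick numerical sketch of Example 4, assuming NumPy is available: the rows of A^{-1} evaluate against the columns of A to the Kronecker delta. The basis below is an arbitrary example.

import numpy as np

# A basis of column vectors, assembled as the columns of A.
v1, v2, v3 = [1.0, 0.0, 1.0], [0.0, 1.0, 1.0], [0.0, 0.0, 1.0]
A = np.column_stack([v1, v2, v3])

# The dual basis vectors are the rows of the inverse matrix.
dual = np.linalg.inv(A)

# Row i applied to column j gives the Kronecker delta.
print(np.round(dual @ A))   # the identity matrix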

2.2 Projective spaces and homogeneous coordinates

Definition 2. Let V be a vector space. The projective space P(V) of V is the set of 1-dimensional subspaces of V.

Definition 3. If V is a vector space of dimension n + 1, then we say that P(V) is a projective space of dimension n. A 0-dimensional projective space is called a projective point, a 1-dimensional projective space is called a projective line, and a 2-dimensional projective space is called a projective plane. If V is one of the standard vector spaces R^{n+1} or C^{n+1}, we also use the notation RP^n := P(R^{n+1}) and CP^n := P(C^{n+1}).

2.2.1 Visualizing projective space

To develop some intuition about projective spaces, we consider some low-dimensional examples.

If V is one-dimensional, then the only one-dimensional subspace is V itself, and consequently P(V) is simply a point (a one-element set).

Now consider the case of the real projective line. Let V = R^2 with standard coordinates x, y and let L ⊆ R^2 be the line defined by x = 1. With the exception of the y-axis, each 1-dimensional subspace of R^2 intersects L in exactly one point. Thus RP^1 = P(R^2) is isomorphic as a set to the union of L with a single point. Traditionally, we identify L with R and call the extra point {∞} the point at infinity. Thus we have a bijection of sets

RP^1 ≅ R ∪ {∞}.


Figure 3: The real projective plane

Figure 4: The Riemann Sphere

Another approach is to observe that every 1-dimensional subspace intersects the unit circle S^1 in a pair of antipodal points. Thus a point in RP^1 corresponds to a pair of antipodal points in S^1. Topologically, we can construct RP^1 by taking a closed half circle and identifying the endpoints. Thus RP^1 is itself a circle.

By similar reasoning we can see that the complex projective line CP^1 is in bijection with C ∪ {∞}:

CP^1 ≅ C ∪ {∞}.

Topologically, CP^1 is isomorphic to the 2-dimensional sphere S^2 and is sometimes called the Riemann Sphere. The correspondence between C ∪ {∞} and S^2 can be understood through stereographic projection, pictured in Figure 4.

We turn next to the real projective plane RP^2 = P(R^3). Let x, y, z be the standard coordinates on R^3. Then every 1-dimensional subspace of R^3 either intersects the plane z = 1 or lies in the plane z = 0. The first class of 1-dimensional subspaces is in bijection with R^2, and the second class is in bijection with RP^1. Thus we get a bijection

RP^2 ≅ R^2 ∪ RP^1 ≅ R^2 ∪ R ∪ {∞}.

It is not hard to see that this reasoning extends inductively to every dimension n to determine the following bijection

RP^n ≅ R^n ∪ RP^{n-1} ≅ R^n ∪ R^{n-1} ∪ ... ∪ R^0    (2)

and similarly

CP^n ≅ C^n ∪ CP^{n-1} ≅ C^n ∪ C^{n-1} ∪ ... ∪ C^0.    (3)

Returning now to the projective plane, observe that every 1-dimensional subspace of R^3 intersects the unit sphere S^2 in a pair of antipodal points.

Figure 5: Antipodal points on the sphere

Thus we can construct RP^2 by taking (say) the northern hemisphere and identifying antipodal points along the equator. This makes the boundary a copy of RP^1.

RP^2 is difficult to visualize directly because it cannot be topologically embedded in R^3, though it can be in R^4. However, it can be mapped into R^3 if we allow self-intersections, as a "cross cap".

See the website Math Images for more about the projective plane.

It is also interesting to consider the field of two elements F = Z2. As a set, the standard vector space F^n is an n-fold Cartesian product of F with itself, so Z2^n has cardinality 2^n. From the bijection FP^n ≅ F^n ∪ F^{n-1} ∪ ... ∪ F^1 ∪ F^0, it follows that

|Z2P^n| = 2^n + 2^{n-1} + ... + 1 = 2^{n+1} − 1,

so a projective line contains 3 points and a projective plane contains 7 points. The Z2 projective plane is also called the Fano plane.


Figure 6: RP 2 as quotient of a disk

Figure 7: Cross Caps


Figure 8: Sliced open Cross Caps

Figure 9: The Fano Plane


2.2.2 Homogeneous coordinates

To go further it is convenient to introduce some notation. Given a non-zero vector v ∈ V \ {0}, define [v] ∈ P(V) to be the 1-dimensional subspace spanned by v. If λ ∈ F \ {0} is a non-zero scalar, then

[λv] = [v].

Now suppose we choose a basis {v0, ..., vn} for V (recall this is basically the same as defining an isomorphism V ≅ F^{n+1}). Then every vector v ∈ V can be written

v = \sum_{i=0}^{n} x_i v_i

for some set of scalars xi, and we can write

[v] = [x0 : ... : xn]

with respect to this basis. These are known as homogeneous coordinates. Once again, for λ ≠ 0 we have

[λx0 : ... : λxn] = [x0 : ... : xn].

Consider now the subset U0 ⊂ P(V) consisting of points [x0 : ... : xn] with x0 ≠ 0. (Observe that this subset is well-defined, independent of the representative, because for λ ≠ 0 we have x0 ≠ 0 if and only if λx0 ≠ 0.) For points in U0,

[x0 : ... : xn] = [x0 · 1 : x0 · (x1/x0) : ... : x0 · (xn/x0)] = [1 : x1/x0 : ... : xn/x0].

Thus we can uniquely represent any point in U0 by a representative vector of the form (1, y1, ..., yn). It follows that U0 ≅ F^n. This result has already been demonstrated geometrically in equations (2) and (3), because U0 is the set of 1-dimensional subspaces in V that intersect the "hyperplane" x0 = 1. The complement of U0 is the set of points with x0 = 0, which form a copy of P(F^n).
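Here is a minimal computational sketch of working with homogeneous coordinates, assuming NumPy: two coordinate vectors represent the same projective point exactly when one is a non-zero scalar multiple of the other, which can be tested by normalizing a representative.

import numpy as np

def normalize(x):
    """Scale a representative vector so that its first non-zero entry is 1."""
    x = np.asarray(x, dtype=float)
    i = np.flatnonzero(x)[0]          # index of the first non-zero coordinate
    return x / x[i]

p = [2.0, 4.0, 6.0]    # the point [2 : 4 : 6]
q = [1.0, 2.0, 3.0]    # the point [1 : 2 : 3]

# Same projective point: the normalized representatives agree.
print(np.allclose(normalize(p), normalize(q)))   # True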

2.3 Linear subspaces

Definition 4. A linear subspace of a projective space P (V ) is a subset

P (W ) ⊆ P (V )

consisting of all 1-dimensional subspaces lying in some vector subspace W ⊆ V. Observe that any linear subspace P(W) is a projective space in its own right.

For example, each point in P(V) is a linear subspace because it corresponds to a 1-dimensional subspace. If W ⊆ V is 2-dimensional, we call P(W) ⊆ P(V) a projective line in P(V). Similarly, if W is 3-dimensional then P(W) ⊆ P(V) is a projective plane in P(V). Observe that when each of these linear subspaces is intersected with the subset U0 ≅ F^n (viewed as the hyperplane x0 = 1), it becomes a point, line or plane in the traditional Euclidean sense.


2.3.1 Two points determine a line

Theorem 2.3. Given two distinct points [v1], [v2] ∈ P(V), there is a unique projective line containing them both.

Proof. Since [v1] and [v2] represent distinct points in P(V), the representative vectors v1, v2 ∈ V must be linearly independent, and thus span a 2-dimensional subspace W ⊆ V. Thus P(W) is a projective line containing both [v1] and [v2].

Now suppose that there exists another projective line P(W′) ⊆ P(V) containing [v1] and [v2]. Then v1, v2 ∈ W′, so W ⊆ W′. Since both W and W′ are 2-dimensional, it follows that W = W′ and P(W) = P(W′).

Example 5. Consider the model of RP^2 as pairs of antipodal points on the sphere S^2. A projective line in this model is represented by a great circle. Theorem 2.3 amounts to the result that any pair of distinct and non-antipodal points in S^2 lie on a unique great circle (by the way, this great circle also describes the shortest flight route for airplanes flying between two points on the planet).

Figure 10: Two points determine a line

2.3.2 Two planar lines intersect at a point

Theorem 2.4. In a projective plane, two distinct projective lines intersect in a unique point.

Proof. Let P(V) be a projective plane, where dim(V) = 3. Two distinct projective lines P(U) and P(W) correspond to two distinct subspaces U, W ⊆ V, each of dimension 2. Then we have an equality

P(U) ∩ P(W) = P(U ∩ W),

so the theorem amounts to showing that U ∩ W ⊂ V is a vector subspace of dimension 1. Certainly

dim(U ∩ W) ≤ dim(U) = 2.

Indeed we see that dim(U ∩ W) ≤ 1, for otherwise dim(U ∩ W) = 2 = dim(U) = dim(W), so U ∩ W = U = W, which contradicts the condition that U, W are distinct.


Thus to prove dim(U ∩ W) = 1 it is enough to show that U ∩ W contains a non-zero vector. Choose bases u1, u2 ∈ U and w1, w2 ∈ W. Then the subset {u1, u2, w1, w2} ⊂ V must be linearly dependent, because V only has dimension 3. Therefore, there exist scalars λ1, ..., λ4, not all zero, such that

λ1u1 + λ2u2 + λ3w1 + λ4w2 = 0,

from which it follows that λ1u1 + λ2u2 = −λ3w1 − λ4w2 lies in U ∩ W. It is non-zero, since otherwise the linear independence of u1, u2 and of w1, w2 would force all λi = 0.
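The argument is constructive, and the construction is easy to carry out numerically. The following sketch, assuming NumPy, finds a representative of P(U) ∩ P(W) for two concrete lines in RP^2 by solving for the dependence relation among u1, u2, w1, w2.

import numpy as np

# Bases for two 2-dimensional subspaces U, W of R^3 (two projective lines).
u1, u2 = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])   # U: the plane z = 0
w1, w2 = np.array([1.0, 0.0, 1.0]), np.array([0.0, 1.0, 1.0])   # W: the plane z = x + y

# A dependence relation l1*u1 + l2*u2 + l3*w1 + l4*w2 = 0 is a null vector of the
# 3x4 matrix with these vectors as columns; take it from the SVD.
M = np.column_stack([u1, u2, w1, w2])
lam = np.linalg.svd(M)[2][-1]          # right-singular vector for the zero singular value

# A representative of the intersection point P(U) ∩ P(W).
v = lam[0] * u1 + lam[1] * u2
print(v / np.abs(v).max())             # proportional to (1, -1, 0)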

Example 6. Using the model of RP^2 as pairs of antipodal points in S^2, Theorem 2.4 is the geometric fact that two distinct great circles always intersect in a pair of antipodal points.

Figure 11: Two lines in a plane determine a point

2.4 Projective transformations and the Erlangen Program

So far we have explored how a vector space V determines a projective space P(V) and how a vector subspace W ⊆ V determines a linear subspace P(W). How do linear maps fit into projective geometry?

It is not true in general that a linear map T : V → V′ induces a map between the projective spaces P(V) and P(V′).[1] This is because if T has a non-zero kernel, then it sends some 1-dimensional subspaces of V to zero in V′. However, if ker(T) = 0, or equivalently if T is injective, then this works fine.

Definition 5. Let T : V → V′ be an injective linear map. Then the map

τ : P(V) → P(V′)

defined by

τ([v]) = [T(v)]

is called the projective morphism or projective transformation induced by T.

Projective transformations from a projective space to itself, τ : P(V) → P(V), are called projective automorphisms and are considered to be the group of symmetries of projective space.

[1] If T is non-zero but not injective, then it does define a map from a subset of P(V) to P(V′). This is an example of a rational map; rational maps play an important role in algebraic geometry, but we will not consider them in this course.


2.4.1 Erlangen Program

To better understand the meaning of this phrase, we make a brief digression to consider the more familiar Euclidean geometry in the plane. This kind of geometry deals with certain subsets of R^2 and studies their properties. Especially important are points, lines, lengths and angles. For example, a triangle is an object consisting of three points and three line segments joining them; the line segments have definite lengths and their intersections have definite angles that add up to 180 degrees. A circle is a subset of R^2 of points lying a fixed distance r from a point p ∈ R^2. The length r is called the radius, and the circumference of the circle is 2πr.

The group of symmetries of the Euclidean plane (or Euclidean transformations) consists of translations, rotations, and combinations of these. All of the objects and quantities studied in Euclidean geometry are preserved by these symmetries: lengths of line segments, angles between lines, circles are sent to circles and triangles are sent to triangles. Two objects in the plane are called congruent if they are related by Euclidean transformations, and this is the concept of "isomorphism" in this context.

In 1872, Felix Klein initiated a highly influential re-examination of geometry called the Erlangen Program. At the time, several different types of "non-Euclidean" geometries had been introduced and it was unclear how they related to each other. Klein's philosophy was that a fundamental role is played by the group of symmetries of a geometry: the meaningful concepts in a geometry are those that are preserved by the symmetries, and geometries can be related in a hierarchy according to how much symmetry they possess.

With this philosophy in mind, let us explore the relationship between projective and Euclidean geometry in more depth. Recall from §2.2.2 that we have a subset U0 ⊂ RP^n consisting of those points with homogeneous coordinates [1 : y1 : ... : yn]. There is an obvious bijection

U0 ≅ R^n,    [1 : y1 : ... : yn] ↔ (y1, ..., yn).

Proposition 2.5. Any Euclidean transformation of U0 ≅ R^n extends uniquely to a projective transformation of RP^n.

Proof. We focus on the case n = 2 for notational simplicity. It is enough to show that translations and rotations of U0 extend uniquely.

Consider first translation in R^2 by a vector (a1, a2), that is, the map (y1, y2) ↦ (y1 + a1, y2 + a2). The corresponding projective transformation is associated to the linear transformation

\begin{pmatrix} 1 \\ y_1 \\ y_2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 \\ a_1 & 1 & 0 \\ a_2 & 0 & 1 \end{pmatrix} \begin{pmatrix} 1 \\ y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 1 \\ y_1 + a_1 \\ y_2 + a_2 \end{pmatrix}.

A rotation by angle θ of U0 = R^2 extends to the projective transformation induced by the linear transformation

\begin{pmatrix} 1 \\ y_1 \\ y_2 \end{pmatrix} \mapsto \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos(\theta) & -\sin(\theta) \\ 0 & \sin(\theta) & \cos(\theta) \end{pmatrix} \begin{pmatrix} 1 \\ y_1 \\ y_2 \end{pmatrix} = \begin{pmatrix} 1 \\ \cos(\theta)y_1 - \sin(\theta)y_2 \\ \sin(\theta)y_1 + \cos(\theta)y_2 \end{pmatrix}.

Uniqueness will be clear after we learn about general position.
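As a quick check of these formulas, here is a minimal sketch, assuming NumPy: acting by the 3 × 3 matrices on the homogeneous coordinate vector (1, y1, y2) reproduces the translation and rotation of (y1, y2).

import numpy as np

def translation(a1, a2):
    """3x3 matrix extending translation by (a1, a2) to homogeneous coordinates."""
    return np.array([[1, 0, 0],
                     [a1, 1, 0],
                     [a2, 0, 1]], dtype=float)

def rotation(theta):
    """3x3 matrix extending rotation by theta to homogeneous coordinates."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0],
                     [0, c, -s],
                     [0, s, c]], dtype=float)

y = np.array([1.0, 2.0, 3.0])            # the point (y1, y2) = (2, 3) in U0
print(translation(5, -1) @ y)            # [1, 7, 2]
print(rotation(np.pi / 2) @ y)           # approximately [1, -3, 2]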

From the point of view of the Erlangen Program, Proposition 2.5 tells us that projective geometry is more basic than Euclidean geometry because it has more symmetry. This means that theorems and concepts in projective geometry translate into theorems and concepts in Euclidean geometry, but not vice-versa. For example, we have lines and points and triangles in projective geometry, but no angles or lengths. Furthermore, objects in Euclidean geometry that are not isomorphic (or congruent) to each other can be isomorphic to each other in projective geometry. We will show for example that all projective triangles are isomorphic to one another. We will also learn that circles, ellipses, parabolas and hyperbolas all look the same in projective geometry, where they are called conics.

2.4.2 Projective versus linear

Next, we should clarify the relationship between linear maps and projective morphisms.

Proposition 2.6. Two injective linear maps T : V → W and T′ : V → W determine the same projective morphism if and only if T = λT′ for some non-zero scalar λ.

Proof. Suppose that T = λT ′. Then for [v] ∈ P (V ),

[T (v)] = [λT ′(v)] = [T ′(v)],

so T and T′ define the same projective morphism.

Conversely, suppose that [T(v)] = [T′(v)] for all [v] ∈ P(V). Then for any basis v0, ..., vn ∈ V, there exist non-zero scalars λ0, ..., λn such that

T(vi) = λiT′(vi).

However, it is also true that [T(v0 + ... + vn)] = [T′(v0 + ... + vn)], so for some non-zero scalar λ we must have

T(v0 + ... + vn) = λT′(v0 + ... + vn).

Combining these equations and using linearity, we get

\sum_{i=0}^{n} \lambda T'(v_i) = \lambda T'\Big(\sum_{i=0}^{n} v_i\Big) = T\Big(\sum_{i=0}^{n} v_i\Big) = \sum_{i=0}^{n} T(v_i) = \sum_{i=0}^{n} \lambda_i T'(v_i).

Subtracting gives \sum_{i=0}^{n} (\lambda - \lambda_i) T'(v_i) = 0. Since T′ is injective, {T′(vi)} is a linearly independent set, so λ = λi for all i and T = λT′.


2.4.3 Examples of projective transformations

To develop some intuition about projective transformations, let's consider transformations of the projective line FP^1 = P(F^2). A linear transformation of F^2 is uniquely described by a 2 × 2 matrix,

\begin{bmatrix} a & b \\ c & d \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} ax + by \\ cx + dy \end{bmatrix},

so for a projective point of the form [1 : y] we have

τ([1 : y]) = [a + by : c + dy] = [1 : (c + dy)/(a + by)].

In the case F = C, the function f : C → C ∪ {∞} of the complex numbers

f(y) = \frac{c + dy}{a + by}

is called a Möbius transformation and plays an important role in complex analysis.

Proposition 2.7. Expressed in a Euclidean coordinate y ↔ [1 : y], every projective transformation of the line FP^1 is equal to a combination of the following transformations:

• A translation: y ↦ y + a

• A scaling: y ↦ λy, λ ≠ 0

• An inversion: y ↦ 1/y

Proof. Recall from linear algebra that any invertible matrix can be obtained from the identity matrix using elementary row operations: multiplying a row by a non-zero scalar, permuting rows, and adding one row to another. Equivalently, every invertible matrix can be written as a product of matrices of the form

\begin{bmatrix} 1 & 0 \\ 0 & \lambda \end{bmatrix}, \quad \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \quad \begin{bmatrix} 1 & 0 \\ 1 & 1 \end{bmatrix},

and these correspond to scaling, inversion and translation respectively. For instance,

\begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix} \begin{bmatrix} 1 \\ y \end{bmatrix} = \begin{bmatrix} y \\ 1 \end{bmatrix} = y \begin{bmatrix} 1 \\ 1/y \end{bmatrix},

and so corresponds to the inversion y ↦ 1/y.
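A small numerical sketch of the correspondence between 2 × 2 matrices and transformations of the line, in the convention [1 : y] ↦ [a + by : c + dy] used above (assuming NumPy; the matrix M is an arbitrary example):

import numpy as np

def mobius(M, y):
    """Apply the projective transformation of FP^1 induced by the 2x2 matrix M
    to the point [1 : y], returned in the chart [1 : y']."""
    a, b = M[0]
    c, d = M[1]
    return (c + d * y) / (a + b * y)

M = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# The induced map agrees with applying M to a representative vector (1, y).
y = 0.5
v = M @ np.array([1.0, y])
print(mobius(M, y), v[1] / v[0])    # both equal (3 + 4*0.5)/(1 + 2*0.5) = 2.5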

It is also informative to try and picture these transformations using our model where RP^1 is a circle and CP^1 is a sphere (left to class discussion). A delightful video about Möbius transformations is available here.


Remark 1. More generally, a projective transformation of FP^n takes the form

y ↦ \frac{c + Dy}{a + b \cdot y},

where D is an n × n matrix, b, c are vectors in F^n, a is a scalar and · is the dot product. Proving this is an exercise.

A more geometrically defined example of a projective transformation is obtained as follows. Let P(V) be a projective plane, let P(U), P(U′) be two projective lines in P(V), and let [w] ∈ P(V) be a projective point not lying on P(U) or P(U′). We define a map

τ : P(U) → P(U′)

as follows: given [u] ∈ P(U), there is a unique line L ⊂ P(V) containing both [w] and [u]. Define τ([u]) to be the unique intersection point L ∩ P(U′). Observe that τ is well defined by Theorems 2.3 and 2.4.

Proposition 2.8. The map τ : P (U)→ P (U ′) is a projective transformation.

Before proving this proposition, it is convenient to introduce a concept from linear algebra: the direct sum.

2.4.4 Direct sums

Let V and W be vector spaces over the same field F. The direct sum V ⊕ W is the vector space whose underlying set is the Cartesian product

V ⊕ W := V × W := {(v, w) | v ∈ V, w ∈ W}

with addition and scalar multiplication defined by (v, w) + (v′, w′) = (v + v′, w + w′) and λ(v, w) = (λv, λw). Exercise: show that

dim(V ⊕W ) = dim(V ) + dim(W ).

There are natural linear maps

i : V → V ⊕W, i(v) = (v, 0)

and

p : V ⊕ W → V,    p(v, w) = v,

called inclusion and projection maps respectively. Similar maps exist for W as well.

Proposition 2.9. Let V be a vector space and let U, W be vector subspaces of V. Then the natural map

Φ : U ⊕ W → V,    (u, w) ↦ u + w

has image

im(Φ) = span{U, W}

and kernel

ker(Φ) = U ∩ W.

In particular, if span{U, W} = V and U ∩ W = {0}, then Φ defines an isomorphism U ⊕ W ≅ V and we say that V is the internal direct sum of U and W.


Proof. The image of Φ is the set {u + w ∈ V | u ∈ U, w ∈ W}. Since U and W are both closed under linear combinations, it follows easily that {u + w ∈ V | u ∈ U, w ∈ W} = span{U, W}. To find the kernel:

ker(Φ) = {(u,w) ∈ U ×W | u+ w = 0} = {(u,−u) ∈ U ×W} ∼= U ∩W.

Finally, if ker(Φ) = U ∩W = {0} and span{U,W} = V , then Φ is both injective andsurjective, so it is an isomorphism.

Proof of Proposition 2.8. Let W = span{w} be the one-dimensional subspace of V corresponding to [w]. Since [w] ∉ P(U′), it follows that W ∩ U′ = {0}. Counting dimensions, we deduce that

V ≅ W ⊕ U′

is an internal direct sum. Denote by p : V → U′ the resulting projection map.

Now let [u] ∈ P(U) and take

[u′] = τ([u]) ∈ P (U ′).

By definition of τ , [u′] lies in the line determined by [w] and [u], so u′ is a linear combination

u′ = λw + µu

for some scalars λ, µ ∈ F. Moreover, [w] ∉ P(U′), so [w] ≠ [u′], and we deduce that µ ≠ 0. Hence

u = -\frac{\lambda}{\mu} w + \frac{1}{\mu} u',

so p(u) = (1/µ)u′ and τ([u]) = [p(u)]. It follows that τ is the projective transformation induced by the composition of the linear maps i : U ↪→ V and p : V → U′.

2.4.5 General position

We gain a firmer grasp of the freedom afforded by projective transformations using the following concept.

Definition 6. Let P(V) be a projective space of dimension n. A set of n + 2 points in P(V) is said to lie in general position if the representative vectors of any proper subset are linearly independent in V. (A subset S′ ⊂ S is called a proper subset if S′ is non-empty and S′ does not equal S.)

Example 7. Any pair of distinct points in a projective line are represented by linearly independent vectors, so any three distinct points on a projective line lie in general position.

Example 8. Four points in a projective plane lie in general position if and only if no three of them are collinear (prove this).


Theorem 2.10. If P(V) and P(W) are projective spaces of the same dimension n and both p1, ..., p_{n+2} ∈ P(V) and q1, ..., q_{n+2} ∈ P(W) lie in general position, then there is a unique projective transformation

τ : P (V )→ P (W )

sending τ(pi) = qi for all i = 1, 2, ..., n+ 2.

Proof. Choose representative vectors v1, ..., v_{n+2} ∈ V such that [vi] = pi for all i. Because the points pi lie in general position, it follows that the first n + 1 vectors v1, ..., v_{n+1} are linearly independent and thus form a basis of V. It follows that v_{n+2} can be written as a linear combination

\sum_{i=1}^{n+1} \lambda_i v_i = v_{n+2}.

Moreover, all of the scalars λi are non-zero, for otherwise the non-trivial linear expression

\sum_{i=1}^{n+1} \lambda_i v_i - v_{n+2} = 0

would imply that some proper subset of {v1, ..., v_{n+2}} is linearly dependent, in contradiction with the points lying in general position. By replacing our representative vectors vi with the alternative representing vectors λivi, we may assume without loss of generality that

\sum_{i=1}^{n+1} v_i = v_{n+2}.

Similarly, we may choose representing vectors wi ∈ W for qi ∈ P(W) such that

\sum_{i=1}^{n+1} w_i = w_{n+2}.

Since v1, ..., v_{n+1} ∈ V are linearly independent, they form a basis for V and we can define a linear map T : V → W by sending T(vi) = wi for i = 1, ..., n + 1 and extending linearly. By linearity we have

T(v_{n+2}) = T\Big(\sum_{i=1}^{n+1} v_i\Big) = \sum_{i=1}^{n+1} T(v_i) = \sum_{i=1}^{n+1} w_i = w_{n+2},

so T induces a projective transformation satisfying τ(pi) = qi for all i.

To prove uniqueness, assume that there is another linear map T′ : V → W satisfying the conditions of the theorem. Then for each i there exists a non-zero scalar µi such that

T′(vi) = µiwi.

But then by linearity

\mu_{n+2} w_{n+2} = T'(v_{n+2}) = T'\Big(\sum_{i=1}^{n+1} v_i\Big) = \sum_{i=1}^{n+1} \mu_i w_i,

so

\sum_{i=1}^{n+1} w_i = w_{n+2} = \sum_{i=1}^{n+1} \frac{\mu_i}{\mu_{n+2}} w_i,

and we deduce that µi/µ_{n+2} = 1 for all i, and thus that T′ = µT for some non-zero constant scalar µ.

Example 9. There is a unique projective transformation τ : FP^1 → FP^1 sending any ordered set of three points to any other ordered set of three points. In the context of complex analysis, we traditionally say that a Möbius transformation is determined by where it sends 0, 1, ∞.

Remark 2. In the proof of Theorem 2.10, we proved that for points p1, ..., p_{n+2} ∈ P(V) in general position, and given a representative vector v_{n+2} ∈ V such that [v_{n+2}] = p_{n+2}, we may choose vectors v1, ..., v_{n+1} ∈ V with [vi] = pi for i = 1, ..., n + 1 and satisfying

\sum_{i=1}^{n+1} v_i = v_{n+2}.

This is a useful fact that we will apply repeatedly in the rest of the course.
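As an illustrative sketch of the construction in the proof of Theorem 2.10 (assuming NumPy, and points given by representative vectors that are assumed to be in general position): rescale the representatives so that the first n + 1 sum to the last, then map basis to basis.

import numpy as np

def projective_map(ps, qs):
    """Matrix of the projective transformation sending the n+2 points ps to qs.
    ps, qs: lists of n+2 representative vectors in F^(n+1), in general position."""
    def normalized_basis(pts):
        A = np.column_stack(pts[:-1])        # first n+1 representatives as columns
        lam = np.linalg.solve(A, pts[-1])    # p_{n+2} = sum of lam_i * p_i
        return A * lam                       # rescale so the columns sum to p_{n+2}
    V, W = normalized_basis(ps), normalized_basis(qs)
    return W @ np.linalg.inv(V)              # T(v_i) = w_i, extended linearly

# Example in RP^1: send [1:0], [0:1], [1:1] to [1:1], [1:-1], [1:0].
ps = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])]
qs = [np.array([1.0, 1.0]), np.array([1.0, -1.0]), np.array([1.0, 0.0])]
T = projective_map(ps, qs)
for p, q in zip(ps, qs):
    print(T @ p, q)    # each output vector is proportional to the target q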

2.4.6 The Cross-Ratio

The concept of general position enables us to understand an invariant known at least since Pappus of Alexandria (c. 290 – c. 350), called the cross-ratio. Consider four points A, B, C, D lying on a Euclidean line. We denote by \vec{AB} the vector pointing from A to B. The cross-ratio of this ordered 4-tuple of points is

R(A, B; C, D) := \frac{\vec{AC}}{\vec{AD}} \Big/ \frac{\vec{BC}}{\vec{BD}} = \frac{\vec{AC}}{\vec{AD}} \cdot \frac{\vec{BD}}{\vec{BC}},

where the ratio of vectors is meaningful because one is a scalar multiple of the other. If a, b, c, d ∈ R, then the cross-ratio is

R(a, b; c, d) = \frac{(a - c)(b - d)}{(a - d)(b - c)}.

This extends to 4-tuples of points on RP^1 by defining ∞/∞ = 1. For homework you will prove that the cross-ratio is preserved under projective transformations of the line. By Proposition 2.8, it follows that in Figure 12 we have an equality of cross-ratios R(A, B; C, D) = R(A′, B′; C′, D′).

We can define the cross-ratio in a different way using Theorem 2.10. For any field F, let

τ : L → FP^1 = F ∪ {∞}

be the unique projective transformation sending τ(A) = ∞, τ(B) = 0 and τ(C) = 1. Let τ(D) = d. Then

R(A, B; C, D) = R(∞, 0; 1, d) = \frac{(\infty - 1)(0 - d)}{(\infty - d)(0 - 1)} = \Big(\frac{\infty}{\infty}\Big)\Big(\frac{-d}{-1}\Big) = d.
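Here is a minimal numerical sketch of the invariance you will prove for homework: the cross-ratio of four points on the line is unchanged when all four are pushed through the same Möbius transformation. The helper mobius below uses the convention y ↦ (c + dy)/(a + by) from §2.4.3; the sample points and coefficients are arbitrary.

def cross_ratio(a, b, c, d):
    """Cross-ratio R(a, b; c, d) of four distinct points of F (here floats)."""
    return (a - c) * (b - d) / ((a - d) * (b - c))

def mobius(a, b, c, d):
    """The Mobius transformation y -> (c + d*y) / (a + b*y)."""
    return lambda y: (c + d * y) / (a + b * y)

pts = [0.0, 2.0, 3.0, 7.0]
f = mobius(1.0, 2.0, 3.0, 4.0)

print(cross_ratio(*pts))                    # cross-ratio of the original points
print(cross_ratio(*[f(y) for y in pts]))    # the same value after the transformation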


Figure 12: The cross-ratio is preserved under projective transformations

2.5 Classical Theorems

2.5.1 Desargues’ Theorem

Desargues (1591–1661), a French mathematician, architect and engineer, is considered one of the founders of projective geometry. His work was forgotten for many years, until it was rediscovered in the 19th century during the golden age of projective geometry. The following result is named in his honour.

First, we adopt some notation commonly used in Euclidean geometry. Given distinct points A, B in a projective space, let AB denote the unique line passing through them both.

Theorem 2.11 (Desargues' Theorem). Let A, B, C, A′, B′, C′ be distinct points in a projective space P(V) such that the lines AA′, BB′, CC′ are distinct and concurrent (meaning the three lines intersect at a common point). Then the three points of intersection X := AB ∩ A′B′, Y := BC ∩ B′C′ and Z := AC ∩ A′C′ are collinear (meaning they lie on a common line).

Proof. Let P be the common point of intersection of the three lines AA′, BB′ and CC′.

First suppose that P coincides with one of the points A, B, C, A′, B′, C′. Without loss of generality, let P = A. Then we have equality of the lines AB = BB′ and AC = CC′, so it follows that X = AB ∩ A′B′ = B′ and Z = AC ∩ A′C′ = C′, so Y = BC ∩ B′C′ lies on the line B′C′ = XZ and the result is proven.

Now suppose that P is distinct from A, B, C, A′, B′, C′. Then P, A and A′ are three distinct points lying on a projective line, so they are in general position. By Remark 2, given a representative p ∈ V of P, we can find representative vectors a, a′ of A, A′ such that

p = a+ a′.


Figure 13: Illustration of Desargues’ Theorem

Similarly, we have representative vectors b, b′ for B,B′ and c, c′ for C,C ′ such that

p = b+ b′, p = c+ c′

It follows that a+ a′ = b+ b′ so

x := a− b = b′ − a′

must represent the intersection point X = AB ∩ A′B′, because x is a linear combination of both a, b and of a′, b′. Similarly we get

y := b − c = c′ − b′ and z := c − a = a′ − c′

representing the intersection points Y and Z respectively.

To see that X, Y and Z are collinear, observe that

x + y + z = (a − b) + (b − c) + (c − a) = 0,

so x, y, z are linearly dependent and thus must lie in a 2-dimensional subspace of V corresponding to a projective line.

2.5.2 Pappus’ Theorem

Desargues' Theorem is our first example of a result that is much easier to prove using projective geometry than using Euclidean geometry. Our next result, named after Pappus of Alexandria (c. 290 – c. 350), is similar.

Theorem 2.12. Let A, B, C and A′, B′, C′ be two collinear triples of distinct points in a projective plane. Then the three points X := BC′ ∩ B′C, Y := CA′ ∩ C′A and Z := AB′ ∩ A′B are collinear.


Figure 14: Illustration of Pappus’ Theorem

Proof. Without loss of generality, we can assume that A, B, C′, B′ lie in general position. If not, then two of the three required points coincide, so the conclusion is trivial. By Theorem 2.10, we can assume that

A = [1 : 0 : 0], B = [0 : 1 : 0], C′ = [0 : 0 : 1], B′ = [1 : 1 : 1].

The line AB corresponds to the two-dimensional vector space spanned by (1, 0, 0) and (0, 1, 0) in F^3, so the point C ∈ AB must have the form

C = [1 : c : 0]

for some c ≠ 0 (because it is not equal to A or B). Similarly, the line B′C′ corresponds to the vector space spanned by (0, 0, 1) and (1, 1, 1), so

A′ = [1 : 1 : a]

for some a ≠ 1.

Next compute the intersection points of lines:

BC′ = span{(0, 1, 0), (0, 0, 1)} = {(x0, x1, x2) | x0 = 0},   B′C = span{(1, 1, 1), (1, c, 0)},

so we deduce that BC′ ∩ B′C is represented by (1, 1, 1) − (1, c, 0) = (0, 1 − c, 1).

AC′ = span{(1, 0, 0), (0, 0, 1)} = {(x0, x1, x2) | x1 = 0},   A′C = span{(1, 1, a), (1, c, 0)},

so we deduce that A′C ∩ AC′ is represented by (1, c, 0) − c(1, 1, a) = (1 − c, 0, −ca).

AB′ = span{(1, 0, 0), (1, 1, 1)} = {(x0, x1, x2) | x1 = x2},   A′B = span{(1, 1, a), (0, 1, 0)},

so we deduce that AB′ ∩ A′B is represented by (a − 1)(0, 1, 0) + (1, 1, a) = (1, a, a).


Finally, we must check that the intersection points [0 : 1 − c : 1], [1 − c : 0 : −ca] and [1 : a : a] are collinear, which is equivalent to showing that (0, 1 − c, 1), (1 − c, 0, −ca) and (1, a, a) are linearly dependent. This can be accomplished by row reducing the matrix:

\begin{pmatrix} 1 & a & a \\ 1-c & 0 & -ca \\ 0 & 1-c & 1 \end{pmatrix} \Rightarrow \begin{pmatrix} 1 & a & a \\ 0 & (c-1)a & -a \\ 0 & 1-c & 1 \end{pmatrix} \Rightarrow \begin{pmatrix} 1 & a & a \\ 0 & 0 & 0 \\ 0 & 1-c & 1 \end{pmatrix},

which has rank two, so the span of the rows has dimension two and the vectors are linearly dependent.
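As an illustrative numerical check, one can pick arbitrary admissible values of a and c and confirm that the three representative vectors are linearly dependent (assuming NumPy):

import numpy as np

a, c = 2.5, -3.0     # arbitrary parameters with c != 0 and a != 1

# Representatives of the three intersection points from the proof.
X = np.array([0.0, 1.0 - c, 1.0])
Y = np.array([1.0 - c, 0.0, -c * a])
Z = np.array([1.0, a, a])

M = np.vstack([X, Y, Z])
print(np.linalg.det(M))           # 0 (up to rounding): the points are collinear
print(np.linalg.matrix_rank(M))   # 2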

2.6 Duality

In this section we explore the relationship between the geometry of P(V) and that of P(V^*), where V^* denotes the dual vector space. Recall that given any vector space V, the dual vector space V^* is the set of linear maps from V to F:

V ∗ := {φ : V → F} = Hom(V, F ).

• The dual V ∗ is naturally a vector space under the operations

(φ1 + φ2)(v) = φ1(v) + φ2(v)

(λφ)(v) = λ(φ(v)).

• Suppose that v1, ..., vn ∈ V is a basis. Then the dual basis φ1, ..., φn ∈ V^* is the unique set of linear maps satisfying φi(vj) = 1 if i = j and φi(vj) = 0 otherwise.

• If T : V → W is a linear map, then there is a natural linear map T ∗ : W ∗ → V ∗

(called the transpose) defined by

T ∗(f)(v) = f(T (v)).

Although the vector spaces V and V^* have the same dimension, there is no natural isomorphism between them. However, there is a natural correspondence between subspaces.

Definition 7. Let U ⊆ V be a vector subspace. The annihilator U0 ⊆ V ∗ is defined by

U0 := {φ ∈ V ∗ | φ(u) = 0, for all u ∈ U}.

To check that U0 is a subspace of V ∗, observe that if φ1, φ2 ∈ U0 and u ∈ U then

(λ1φ1 + λ2φ2)(u) = λ1φ1(u) + λ2φ2(u) = 0 + 0 = 0

so (λ1φ1 + λ2φ2) ∈ U0, so U0 is closed under linear combinations.

Proposition 2.13. If U1 ⊆ U2 ⊆ V then U1^0 ⊇ U2^0.


Proof. This is simply because if φ(v) = 0 for all v ∈ U2 then necessarily φ(v) = 0 for all v ∈ U1.

We also have the following result.

Proposition 2.14. dim(U) + dim(U0) = dim(V )

Proof. Choose a basis u1, ..., um ∈ U and extend to a basis u1, ..., um, v_{m+1}, ..., vn of V. Let φ1, ..., φn be the dual basis. For a general element

φ = \sum_{i=1}^{n} \lambda_i \varphi_i ∈ V^*

we have φ(ui) = λi. Thus φ ∈ U^0 if and only if λi = 0 for i = 1, ..., m, so U^0 ⊆ V^* is the subspace with basis φ_{m+1}, ..., φn, which has dimension n − m.

The following proposition justifies the term duality.

Proposition 2.15. There is a natural isomorphism S : V ≅ (V^*)^*, defined by v ↦ S(v) where S(v)(φ) = φ(v).

Proof. We already know that dim(V) = dim(V^*) = dim((V^*)^*), so (by Proposition 2.1) it only remains to prove that S : V → (V^*)^* is injective. But if v ∈ V is non-zero, we may extend v1 := v to a basis, and then in the dual basis φ1(v1) = S(v1)(φ1) = 1 ≠ 0, so S(v) ≠ 0. Thus ker(S) = {0} and S is injective.

Remark 3. Proposition 2.15 depends on V being finite dimensional. If V is infinite dimensional, then S is injective but not surjective.

In practice, we tend to replace the phrase "naturally isomorphic" with "equals" and write V = (V^*)^*. Using this abuse of terminology, one may easily check that (U^0)^0 = U, so the operation of taking a subspace to its annihilator is reversible.

Proposition 2.16. Let P(V) be a projective space of dimension n. Then there is a one-to-one correspondence between linear subspaces P(U) ⊂ P(V) of dimension m and linear subspaces of P(V^*) of dimension n − m − 1, by the rule

P (U)↔ P (U0).

We use notation P (U0) = P (U)0 and call P (U)0 the dual of P (U).

Proof. By preceding remarks there is a one-to-one correspondence

U ↔ U0

between subspaces of dimension m + 1 in V and subspaces of dimension n − m in V^*. The result follows immediately.

It follows from Propositions 2.13 and 2.16 that geometric statements about linear subspaces of P(V) translate into geometric statements about linear subspaces of P(V^*).


Example 10. Let P(V) be a projective plane, so P(V^*) is also a projective plane. There is a one-to-one correspondence between points in P(V) and lines in P(V^*). Similarly, there is a one-to-one correspondence between lines in P(V) and points in P(V^*).

Corollary 2.17. Let A, B be distinct points in a projective plane lying on the line L. Then A^0, B^0 are lines intersecting at the point L^0.

Proof. If A, B ⊂ L then L^0 ⊂ A^0 and L^0 ⊂ B^0, so L^0 = A^0 ∩ B^0 is their single point of intersection.

Corollary 2.18. Two distinct projective planes in projective 3-space always intersect in a line.

Proof. This statement is dual to the statement "two points determine a line" in 3-space (Theorem 2.3).

Duality can be used to establish "dual" versions of theorems. For example, simply by exchanging points with lines and collinear with concurrent, we get the following results.

Theorem 2.19 (Dual version of Desargues' Theorem = Converse of Desargues' Theorem). Let α, β, γ, α′, β′, γ′ be distinct lines in a projective plane such that the points α ∩ α′, β ∩ β′ and γ ∩ γ′ are distinct and collinear. Then the lines joining α ∩ β to α′ ∩ β′, α ∩ γ to α′ ∩ γ′ and β ∩ γ to β′ ∩ γ′ are concurrent.

Theorem 2.20 (Dual version of Pappus' Theorem). Let α, β, γ and α′, β′, γ′ be two concurrent triples of distinct lines in a projective plane. Then the three lines determined by the pairs of points β ∩ γ′ and β′ ∩ γ, γ ∩ α′ and γ′ ∩ α, α ∩ β′ and α′ ∩ β are concurrent.

In fact, with a little effort one can show that the statement of Pappus' Theorem is equivalent to the Dual of Pappus' Theorem (see Hitchin's notes for details). Thus Pappus' Theorem may be considered "self-dual".

As a final result, we describe the set of lines in R2.

Proposition 2.21. The set of lines in R^2 is in one-to-one correspondence with the Möbius strip.

Proof. The set of lines in R^2 is the set of lines in RP^2 minus the line at infinity. By duality, this is identified with RP^2 minus a single point, which is the same as a Möbius strip.

3 Quadrics and Conics

3.1 Affine algebraic sets

Given a set {x1, ..., xn} of variables, a polynomial f(x1, ..., xn) is an expression of the form

f = f(x1, ..., xn) := \sum_{I} a_I x^I

where


• the sum is over n-tuples of non-negative integers I = (i1, ..., in),

• x^I = x_1^{i_1} \cdots x_n^{i_n} are called monomials, and

• a_I ∈ F are scalars called coefficients, of which all but finitely many are zero.

The degree of a monomial x^I is the sum |I| = i1 + ... + in. The degree of a polynomial f equals the largest degree of a monomial occurring with non-zero coefficient.

Example 11.

• x1 − x2 is a polynomial of degree one.

• x^2 + y + 7 is a polynomial of degree two.

• xy^2 + 2x^3 − xyz + yz − 11 is a polynomial of degree three.

A polynomial f(x1, ..., xn) in n variables determines a function[2]

f : F^n → F

by simply plugging the entries of n-tuples in F^n in place of the variables x1, ..., xn.

Definition 8. Given an n-variable polynomial f, the set

Zaff (f) := {(x1, ..., xn) ∈ F n | f(x1, ..., xn) = 0}

is called the (affine) zero set of f . More generally, if f1, ..., fk are polynomials then

Zaff (f1, ..., fk) := Zaff (f1) ∩ ... ∩ Zaff (fk)

is called the zero set of f1, ..., fk. An affine algebraic set X ⊂ F^n is a set that equals Zaff(f1, ..., fk) for some set of polynomials f1, ..., fk.

Example 12. An affine line in F^2 is defined by a single equation ax + by + c = 0, where a and b are not both zero. Thus a line in F^2 is the zero set of a single degree one polynomial. More generally, an affine linear subset of F^n (points, lines, planes, etc.) is the zero set of a collection of degree one polynomials.

Example 13. The graph of a one-variable polynomial f(x) is defined by the equation y = f(x), and thus is equal to the algebraic set Zaff(y − f(x)).

Example 14. Any finite set of points {r1, ..., rd} ⊂ F is an algebraic set Zaff(f) where f(x) = (x − r1)(x − r2)...(x − rd).

Definition 9. An affine quadric X ⊂ F^n is a subset of the form X = Zaff(f) where f is a single polynomial of degree 2. In the special case that X ⊂ F^2, we call X an (affine) conic.

Example 15. Some examples of affine conics in R^2 include circles, ellipses, hyperbolas and parabolas. The unit sphere Zaff(x_1^2 + ... + x_n^2 − 1) is a quadric in R^n.

[2] For some fields, two different polynomials can define the same function. For example, x^2 + x defines the zero function over Z2.


3.2 Projective algebraic sets

What is the right notion of an algebraic set in projective geometry? We begin with a definition.

Definition 10. A polynomial is called homogeneous if all non-zero monomials have thesame degree.

Example 16. • x − y is a homogeneous polynomial of degree one.

• x^2 + y + 7 is a non-homogeneous polynomial.

• yz^2 + 2x^3 − xyz is a homogeneous polynomial of degree three.

Projective algebraic sets in FP^n are defined using homogeneous polynomials in n + 1 variables, which we normally denote {x_0, x_1, ..., x_n}. The crucial property that makes homogeneous polynomials fit into projective geometry is the following.

Proposition 3.1. Let f(x_0, ..., x_n) be a homogeneous polynomial of degree d. Then for any scalar λ ∈ F, we have

f(λx_0, ..., λx_n) = λ^d f(x_0, ..., x_n).

Proof. For a monomial x^I = x_0^{i_0} ... x_n^{i_n} of degree d = |I| = i_0 + ... + i_n, we get

(λx)^I = (λx_0)^{i_0} ... (λx_n)^{i_n} = λ^{i_0 + ... + i_n} x_0^{i_0} ... x_n^{i_n} = λ^d x^I.

The same will hold true for a linear combination of monomials of degree d.

It follows from Proposition 3.1 that for λ a non-zero scalar, f(x_0, ..., x_n) = 0 if and only if f(λx_0, ..., λx_n) = 0. This makes the following definition well-defined.

Definition 11. Let f_1, ..., f_k be a set of homogeneous polynomials in the variables x_0, ..., x_n. The zero set

Z(f_1, ..., f_k) := {[x_0 : ... : x_n] ∈ FP^n | f_i(x_0, ..., x_n) = 0 for all i = 1, ..., k}

is called a projective algebraic set. Notice that

Z(f_1, ..., f_k) = Z(f_1) ∩ ... ∩ Z(f_k).

The main examples we have encountered so far are linear subsets of projective space. These are always of the form Z(l_1, ..., l_k) for some set of homogeneous linear polynomials l_1, ..., l_k.

To understand the correspondence between projective and affine algebraic sets, we recall that within FP^n we have a copy of affine space U_0 := {[1 : y_1 : ... : y_n] ∈ FP^n} ≅ F^n. Under this correspondence, we have an equality

Z(f_1, ..., f_k) ∩ U_0 = Z_aff(g_1, ..., g_k)

where g_i(x_1, ..., x_n) := f_i(1, x_1, ..., x_n).


Conversely, given an affine algebraic set we construct a projective algebraic set as follows. Let f(x_1, ..., x_n) be a non-homogeneous polynomial of degree d. By multiplying the monomials of f by appropriate powers of x_0, we can construct a homogeneous, degree d polynomial h(x_0, ..., x_n) such that h(1, x_1, ..., x_n) = f(x_1, ..., x_n). We call h the homogenization of f.

Example 17. Here are some examples of homogenizing polynomials:

• g(x) = 1 − x + 2x^2  ⇒  h(x, y) = y^2 − xy + 2x^2

• g(x, y) = y − x^2 + 1  ⇒  h(x, y, z) = yz − x^2 + z^2

• g(x_1, x_2) = x_1 + x_2^3 + 7  ⇒  h(x_0, x_1, x_2) = x_0^2 x_1 + x_2^3 + 7x_0^3
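The homogenization recipe is mechanical enough to automate. Here is a minimal Python sketch using sympy (the helper name homogenize is ours, not part of the notes): it substitutes x_i → x_i/x_0 and clears denominators by multiplying through by the appropriate power of the new variable.

# A small sketch (assumes sympy is available; the helper `homogenize` is ours):
# homogenize g of total degree d via h = x0**d * g(x1/x0, ..., xn/x0).
from sympy import symbols, expand, Poly

def homogenize(g, variables, x0):
    d = Poly(g, *variables).total_degree()
    return expand(x0**d * g.subs({v: v / x0 for v in variables}))

x, y, z = symbols('x y z')
print(homogenize(1 - x + 2*x**2, [x], y))      # 2*x**2 - x*y + y**2   (first example)
print(homogenize(y - x**2 + 1, [x, y], z))     # -x**2 + y*z + z**2    (second example)

By construction, substituting x0 = 1 back into the output recovers the original polynomial g.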

Now suppose that g_1, ..., g_k are non-homogeneous polynomials with homogenizations h_1, ..., h_k. Then we have the equality

Z(h_1, ..., h_k) ∩ U_0 = Z_aff(g_1, ..., g_k).

The study of algebraic sets is called algebraic geometry. It is a very active field of research today, of which projective geometry is a small part. So far we have only considered linear algebraic sets, where the polynomials all have degree one. Our next big topic is algebraic sets of the form Z(f) where f is a degree two polynomial. Such algebraic sets are called quadrics.

3.3 Bilinear and quadratic forms

Definition 12. A symmetric bilinear form B on a vector space V is a map

B : V × V → F

such that

• B(u, v) = B(v, u), (Symmetric)

• B(λ1u1 + λ2u2, v) = λ1B(u1, v) + λ2B(u2, v), (Linear)

Observe that by combining symmetry and linearity, a bilinear form also satisfies

B(u, λ1v1 + λ2v2) = λ1B(u, v1) + λ2B(u, v2).

We say that B is linear in both entries, or bilinear. We say that B is nondegenerate if for each non-zero v ∈ V, there is a w ∈ V such that B(v, w) ≠ 0.

Example 18. The standard example of a symmetric bilinear form is the dot product

B : R^n × R^n → R,   B((x_1, ..., x_n), (y_1, ..., y_n)) = \sum_{i=1}^n x_i y_i.

The dot product is nondegenerate because B(v, v) = |v|^2 > 0 if v ≠ 0.


Example 19. The bilinear form B : R^4 × R^4 → R defined by

B((x_0, x_1, x_2, x_3), (y_0, y_1, y_2, y_3)) = −x_0 y_0 + x_1 y_1 + x_2 y_2 + x_3 y_3

is called the Minkowski metric; it measures lengths in space-time according to special relativity. In contrast with the dot product, the equation B(v, v) = 0 holds for some non-trivial vectors, called null-vectors. The null-vectors form a subset of R^4 called the light cone, dividing those vectors satisfying B(v, v) < 0 (called time-like vectors) from those satisfying B(v, v) > 0 (called space-like vectors).

A bilinear form is completely determined by its values on a basis.

Lemma 3.2. Let v_1, ..., v_n ∈ V be a basis. For any pair of vectors u = x_1 v_1 + ... + x_n v_n and w = y_1 v_1 + ... + y_n v_n we have

B(u, w) = \sum_{i,j=1}^n x_i y_j B(v_i, v_j).

This means that the bilinear form is completely determined by the n × n matrix β with entries β_{i,j} = B(v_i, v_j).

Proof.

B(u, w) = B(x_1 v_1 + ... + x_n v_n, w)
        = \sum_{i=1}^n x_i B(v_i, w)
        = \sum_{i=1}^n x_i B(v_i, y_1 v_1 + ... + y_n v_n)
        = \sum_{i=1}^n \sum_{j=1}^n x_i y_j B(v_i, v_j)
        = \sum_{i,j=1}^n x_i y_j β_{i,j}.

Lemma 3.2 provides a nice classification of bilinear forms: given a vector space V equipped with a basis v_1, ..., v_n, there is a one-to-one correspondence between symmetric bilinear forms B on V and symmetric n × n matrices β, according to the rule

B(v, w) = v^T β w

where v, w denote the vectors v, w expressed as column vectors and T denotes transpose, so that v^T is v as a row vector.
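The rule B(v, w) = v^T β w is easy to compute with. Here is a short numerical sketch in Python with numpy; the particular symmetric matrix is an arbitrary illustration of ours, not taken from the notes.

# Evaluate a symmetric bilinear form from its matrix: B(v, w) = v^T beta w.
import numpy as np

beta = np.array([[1., 2., 0.],
                 [2., -1., 3.],
                 [0., 3., 5.]])        # symmetric, so it defines a symmetric bilinear form
v = np.array([1., 0., 2.])
w = np.array([0., 1., -1.])

print(v @ beta @ w)                     # B(v, w)
print(w @ beta @ v)                     # equals B(v, w) by symmetry
print(np.linalg.det(beta) != 0)         # True exactly when B is non-degenerate (Prop. 3.3)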

Proposition 3.3. The bilinear form B is non-degenerate if and only if the matrix β has non-zero determinant.

Proof. We have det(β) = 0 if and only if there exists a nonzero row vector v^T such that v^T β = 0, if and only if v^T β w = 0 for all column vectors w, if and only if there exists v ∈ V \ {0} such that B(v, w) = 0 for all w ∈ V, if and only if B is degenerate.


3.3.1 Quadratic forms

Given a symmetric bilinear form B : V × V → F, we can associate a function called a quadratic form Q : V → F by the rule Q(v) = B(v, v).

Proposition 3.4. Let V be a vector space over a field F in which 1 + 1 ≠ 0. Let B be a symmetric bilinear form on V and let Q : V → F be the associated quadratic form. Then for any u, v ∈ V we have

B(u, v) = (Q(u + v) − Q(u) − Q(v))/2.

In particular, B is completely determined by its quadratic form Q.

Proof. Using bilinearity and symmetry, we have

Q(u+v) = B(u+v, u+v) = B(u, u)+B(u, v)+B(v, u)+B(v, v) = Q(u)+Q(v)+2B(u, v)

from which the result follows.

Remark 4. Proposition 3.4 does not hold when F is the field of two elements, because we are not allowed to divide by 2 = 0.

We can understand this relationship better using a basis. Let v_1, ..., v_n be a basis for V, so that B can be expressed in terms of the n × n matrix β. According to Lemma 3.2,

Q(x_1 v_1 + ... + x_n v_n) = B(x_1 v_1 + ... + x_n v_n, x_1 v_1 + ... + x_n v_n) = \sum_{i,j} β_{i,j} x_i x_j

is a homogeneous quadratic polynomial. Some examples:

[ a ] ↔ ax^2

[ a  b ]
[ b  c ] ↔ ax^2 + 2bxy + cy^2

[ a  b  d ]
[ b  c  e ] ↔ ax^2 + 2bxy + cy^2 + 2dxz + 2eyz + fz^2
[ d  e  f ]
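As a quick symbolic check of the 3 × 3 pattern above, the following sympy sketch (ours, purely illustrative) expands v^T β v for the general symmetric matrix.

# Expand (x, y, z) beta (x, y, z)^T for the symmetric matrix [[a,b,d],[b,c,e],[d,e,f]].
from sympy import symbols, Matrix, expand

a, b, c, d, e, f, x, y, z = symbols('a b c d e f x y z')
beta = Matrix([[a, b, d], [b, c, e], [d, e, f]])
v = Matrix([x, y, z])
print(expand((v.T * beta * v)[0]))
# a*x**2 + 2*b*x*y + c*y**2 + 2*d*x*z + 2*e*y*z + f*z**2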

3.3.2 Change of basis

We've seen so far that a symmetric bilinear form B can be described using a symmetric matrix with respect to a basis. What happens if we change the basis?

Let v_1, ..., v_n and w_1, ..., w_n be two different bases of V. Then we can find scalars {P_{i,j}}_{i,j=1}^n such that

w_i = \sum_j P_{i,j} v_j   (4)

The matrix P with entries P_{i,j} is called the change of basis matrix from w_1, ..., w_n to v_1, ..., v_n.


Proposition 3.5. The change of basis matrix P is invertible and the inverse P^{-1} is the change of basis matrix from v_1, ..., v_n to w_1, ..., w_n.

Proof. Let Q be the change of basis matrix from v_1, ..., v_n to w_1, ..., w_n. Then

v_i = \sum_{j=1}^n Q_{i,j} w_j = \sum_{j=1}^n Q_{i,j} (\sum_{k=1}^n P_{j,k} v_k) = \sum_{k=1}^n (\sum_{j=1}^n Q_{i,j} P_{j,k}) v_k = \sum_{k=1}^n (QP)_{i,k} v_k

where we have used (1) for matrix multiplication. We conclude that

(QP)_{i,k} = 1 if i = k, and (QP)_{i,k} = 0 if i ≠ k,

and QP is the identity matrix.

Observe that P is invertible, with P^{-1} providing the change of basis matrix from v_1, ..., v_n back to w_1, ..., w_n. In fact, any invertible matrix P′ arises this way: it is the change of basis matrix associated to the new basis w_1, ..., w_n defined by (4) with P′ in place of P.

Proposition 3.6. Let B be a bilinear form on V, and let β, β′ be symmetric matrices representing B with respect to bases v_1, ..., v_n and w_1, ..., w_n respectively. Then

β′ = P β P^T

where P is the change of basis matrix.

Proof. There are two approaches. The first is simply to calculate:

β′_{i,j} = B(w_i, w_j) = B(\sum_k P_{i,k} v_k, \sum_l P_{j,l} v_l)
         = \sum_{k,l} P_{i,k} P_{j,l} β_{k,l} = \sum_{k,l} P_{i,k} β_{k,l} P^T_{l,j}
         = (P β P^T)_{i,j}.

Alternatively, we can argue more conceptually as follows. Let e_i denote the i-th standard basis row vector. Then in terms of the basis v_1, ..., v_n, w_i corresponds to the row vector e_i P. Then

β′_{i,j} = B(w_i, w_j) = (e_i P) β (e_j P)^T = e_i P β P^T e_j^T = (P β P^T)_{i,j}.
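A quick numerical sanity check of Proposition 3.6 (a numpy sketch with random matrices, ours and purely illustrative): compute β′ directly from the definition B(w_i, w_j) and compare it with P β P^T.

# Verify beta' = P beta P^T for a random symmetric beta and a random P.
import numpy as np

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))
beta = A + A.T                                    # a symmetric matrix representing B
P = rng.normal(size=(n, n))                       # rows of P express w_i in terms of v_j, as in (4)

# beta'_{i,j} = B(w_i, w_j), computed from the definition ...
beta_new = np.array([[P[i] @ beta @ P[j] for j in range(n)] for i in range(n)])
# ... agrees with the matrix formula of Proposition 3.6:
print(np.allclose(beta_new, P @ beta @ P.T))      # True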

Our next task is to classify bilinear forms up to change of basis. First we need a lemma.

Lemma 3.7. Let V be a vector space of dimension n equipped with a symmetric bilinear form B. For w ∈ V define

w⊥ := {v ∈ V |B(v, w) = 0}.

Then w⊥ ⊂ V is a vector space of dimension ≥ n− 1.


Proof. Consider the map φ : V → F , defined by φ(v) = B(v, w). Then φ is linear because

φ(λ1v1 + λ2v2) = B(λ1v1 + λ2v2, w) = λ1B(v1, w) + λ2B(v2, w) = λ1φ(v1) + λ2φ(v2)

so w^⊥ is equal to the kernel of φ, hence it is a vector subspace. By the rank-nullity theorem,

dim(w^⊥) = dim(V) − dim(im(φ)) = n if im(φ) = {0}, and n − 1 if im(φ) = F.

Theorem 3.8. Let B be a symmetric bilinear form on a vector space V over F = C or F = R. Then there exists a basis v_1, ..., v_n such that

B(v_i, v_j) = 0   (5)

for i ≠ j, and such that for all i = 1, ..., n

B(v_i, v_i) ∈ {0, 1} if F = C, and B(v_i, v_i) ∈ {0, ±1} if F = R.   (6)

That is, there is a choice of basis so that β is diagonal with diagonal entries 0, 1 if F = C and 0, 1, −1 if F = R.

Proof. As a first step we construct a basis satisfying (5), using induction on the dimension of V. In the base case dim(V) = 1, (5) holds vacuously.

Now suppose by induction that all symmetric bilinear forms on (n − 1)-dimensional spaces can be diagonalized, and let V be n-dimensional. If B is trivial, then β is the zero matrix with respect to any basis (hence diagonal). If B is non-trivial then by Proposition 3.4 there exists a vector w_n ∈ V such that B(w_n, w_n) = Q(w_n) ≠ 0. Clearly w_n ∉ w_n^⊥, so w_n^⊥ is (n − 1)-dimensional. By induction we may choose a basis w_1, ..., w_{n−1} of w_n^⊥ satisfying (5) for all i, j < n. Furthermore, for all i < n, B(w_i, w_n) = 0 because w_i ∈ w_n^⊥. Thus w_1, ..., w_n satisfy (5).

Finally, we want to modify w_1, ..., w_n to satisfy (6). Suppose that B(w_i, w_i) = c ∈ F. If c = 0, then let v_i = w_i. If c ≠ 0 and F = C, let v_i = (1/√c) w_i so that

B(v_i, v_i) = B((1/√c) w_i, (1/√c) w_i) = (1/√c)^2 B(w_i, w_i) = c/c = 1.

If c ≠ 0 and F = R, then let v_i := (1/√|c|) w_i, so

B(v_i, v_i) = c/|c| = ±1.

Corollary 3.9. Let V and B be as above and let Q(v) = B(v, v). Then


• If F = C then for some m ≤ n there exists a basis such that if v = \sum_{i=1}^n z_i v_i, then

Q(v) = \sum_{i=1}^m z_i^2.

• If F = R then for some p, q with p + q ≤ n there exists a basis such that

Q(v) = \sum_{i=1}^p z_i^2 − \sum_{i=p+1}^{p+q} z_i^2.

Proof. Choose the basis v_1, ..., v_n so that the B(v_i, v_j) satisfy equations (5) and (6). Then

Q(z_1 v_1 + ... + z_n v_n) = B(z_1 v_1 + ... + z_n v_n, z_1 v_1 + ... + z_n v_n) = \sum_{i,j=1}^n z_i z_j B(v_i, v_j) = \sum_{i=1}^n z_i^2 B(v_i, v_i),

where the B(v_i, v_i) are 0, 1 or −1 as appropriate.

Exercise: Show that the numbers m, p, q occurring above are independent of the choice of basis.

It follows that B and Q are non-degenerate if and only if m = n (when F = C) or p + q = n (when F = R). In the case of a real vector space, we say that B is positive definite if p = n, negative definite if q = n, and indefinite otherwise.
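In practice, p and q for a real symmetric matrix can be read off from the signs of its eigenvalues; that this agrees with the basis-independent p and q of Corollary 3.9 is Sylvester's law of inertia, essentially the exercise above. A small numpy sketch (ours, not from the notes):

# Read off (p, q) for a real symmetric matrix from the signs of its eigenvalues.
import numpy as np

def signature(beta, tol=1e-9):
    eigenvalues = np.linalg.eigvalsh(beta)        # real, since beta is symmetric
    p = int(np.sum(eigenvalues > tol))
    q = int(np.sum(eigenvalues < -tol))
    return p, q

minkowski = np.diag([-1., 1., 1., 1.])            # the Minkowski form of Example 19
print(signature(minkowski))                        # (3, 1): indefinite and non-degenerate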

3.3.3 Digression on the Hessian

This discussion will not be on the test or problem sets.

An important place where symmetric bilinear forms occur in mathematics is the study of critical points of differentiable functions. Let f(x_1, ..., x_n) be a differentiable function and let (a_1, ..., a_n) be a point at which all partial derivatives vanish:

f_{x_i}(a_1, ..., a_n) = ∂f/∂x_i |_{(a_1,...,a_n)} = 0, for all i = 1, ..., n.

If the second order partials are defined and continuous, then the n × n matrix H with entries

H_{i,j} = f_{x_i x_j}(a_1, ..., a_n) = ∂²f/∂x_j ∂x_i |_{(a_1,...,a_n)}

is a symmetric matrix (H_{i,j} = H_{j,i}) defining a symmetric bilinear form on the set of tangent vectors, called the Hessian. The corresponding quadratic form gives (up to a factor of 1/2) the second order term of the Taylor expansion of the function.

The Hessian can be used to determine whether f has a local maximum or minimum at (a_1, ..., a_n) (the Second Derivative Test). Specifically, if H is non-degenerate then f has a local maximum at (a_1, ..., a_n) if H is negative definite, a local minimum if H is positive definite, and neither a max nor a min if H is neither positive nor negative definite.
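To make the test concrete, here is a small sympy sketch (the function chosen is an arbitrary illustration of ours): it builds the Hessian at a critical point and classifies it by eigenvalue signs.

# Classify the critical point of f(x, y) = x**2 - 3*x*y + 4*y**2 at the origin.
from sympy import symbols, hessian

x, y = symbols('x y')
f = x**2 - 3*x*y + 4*y**2
H = hessian(f, (x, y)).subs({x: 0, y: 0})   # [[2, -3], [-3, 8]]
eigs = [float(e) for e in H.eigenvals()]
if all(e > 0 for e in eigs):
    print("positive definite: local minimum")
elif all(e < 0 for e in eigs):
    print("negative definite: local maximum")
else:
    print("indefinite or degenerate: not a local extremum (or the test is inconclusive)")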


Similarly, there is a notion of a complex valued function of complex variables being differentiable (also called analytic). Reasoning as above, one can show that for a complex-valued function f, |f| has no local maxima where f is differentiable. This is a version of the maximum principle.

3.4 Quadrics and conics

We can now state a coordinate-independent definition of a quadric.

Definition 13. A quadric X ⊂ P(V) is a subset defined by

X = Z(B) := {[v] ∈ P(V) | B(v, v) = 0}

for some symmetric bilinear form B on V.

By convention, we exclude the trivial case B = 0.

In the case P(V) = FP^n, this definition is equivalent to X = Z(f) for some homogeneous degree two polynomial f.

We say that B and B′ are projectively equivalent if there exists a non-zero scalar λ ∈ F \ {0} such that B = λB′. Observe that

Z(B) = Z(λB)

so projectively equivalent symmetric bilinear forms determine the same quadric.

Consider now the implications of Corollary 3.9 for quadrics in a projective line. Over the reals such a quadric has the form Z(f) = Z(−f) where (up to a linear change of coordinates) f equals

• x^2

• x^2 + y^2

• x^2 − y^2

In the first case, Z(x^2) consists of the single point [0 : 1]. In algebraic geometry, we would say that Z(x^2) has a root of "multiplicity" two at [0 : 1], but we won't define this concept in this course. The set Z(x^2 + y^2) is empty, because the only real solution of x^2 + y^2 = 0 is x = y = 0, which does not represent a projective point. Finally Z(x^2 − y^2) = Z((x − y)(x + y)) has two solutions: [1 : 1] and [1 : −1].

These three possibilities correspond to a familiar fact about roots of quadratic polynomials 0 = ax^2 + bx + c: such a polynomial can have two roots, one root, or no roots. A consequence of Corollary 3.9 is that up to projective transformations, the number of roots completely characterizes the quadric.

Over the complex numbers, the classification is even simpler: a quadric has the form Z(f) where f is one of (up to coordinate change)

• x^2

• x^2 + y^2


The first case Z(x^2) is a single point of multiplicity 2, while Z(x^2 + y^2) = Z((x − iy)(x + iy)) consists of the two points [i : 1] and [i : −1]. This corresponds to the fact that over the complex numbers 0 = ax^2 + bx + c with a ≠ 0 has either two distinct roots or a single root of order two (complex polynomials in one variable can always be factored!).

We now consider the case where F = R and P(V) is a projective plane (in this case a quadric is called a conic). We may choose coordinates so that X = Z(f) where f is one of:

• x^2,

• x^2 + y^2,

• x^2 + y^2 + z^2,

• x^2 − y^2,

• x^2 + y^2 − z^2,

up to multiplication by −1, which doesn't change Z(f) = Z(−f).

The first example

x^2 = x · x = 0

has the same solution set as x = 0, which is the projective line {[0 : a : b]} ≅ RP^1; however, this line is counted twice in a sense we won't make precise (similar to a double root of a single variable polynomial). The second example

x^2 + y^2 = 0

has a single solution [0 : 0 : 1]. The third example, x^2 + y^2 + z^2 = 0, has no projective solutions (remember (0, 0, 0) does not represent a projective point!). The example

x^2 − y^2 = (x − y)(x + y)

factors as a product of linear polynomials. It follows that Z(x^2 − y^2) is a union of two distinct lines intersecting at one point.

The last example, Z(x^2 + y^2 − z^2), is the only non-empty and non-degenerate real conic, up to projective transformation. The apparent difference between the affine conics (ellipses, parabolas, hyperbolas) can be understood in terms of how the conic relates to the line at infinity. A conic looks like an ellipse if it is disjoint from the line at infinity, looks like a parabola if it is tangent to the line at infinity, and looks like a hyperbola if it intersects the line at infinity at two points.

To illustrate, consider what happens when we set z = 1 and treat x and y as affine coordinates (so the equation z = 0 is the "line at infinity"). Then the polynomial equation x^2 + y^2 − z^2 = 0 becomes x^2 + y^2 − 1 = 0, or

x^2 + y^2 = 1,

which defines the unit circle in the x-y plane.


Now instead, set y = 1 and consider x, z as affine coordinates (so y = 0 defines the line at infinity). Then we get the equation

x^2 + 1 − z^2 = 0,

or

z^2 − x^2 = (z − x)(z + x) = 1,

which is the equation of a hyperbola asymptotic to the lines x = z and x = −z.

Finally, consider performing the linear change of variables z′ = y + z. In these new coordinates, the equation becomes

x^2 + y^2 − (z′ − y)^2 = x^2 + 2z′y − z′^2 = 0.

Passing to affine coordinates x, y by setting z′ = 1 we get

x^2 + 2y − 1 = 0

or

y = −x^2/2 + 1/2

which defines a parabola.

Since ellipses, hyperbolas, and parabolas can all be obtained by intersecting an affine plane with the cone in R^3 defined by x^2 + y^2 − z^2 = 0, they are sometimes called conic sections.

Figure 15: Conic sections

In the case F = C, the list of possible conics is even more restricted. Up to change of coordinates, the possible quadrics in the complex projective plane are defined by the quadratic polynomial equations

• x^2 = 0,

• x^2 + y^2 = 0,


• x^2 + y^2 + z^2 = 0.

The equation x^2 = 0 defines a projective line that is counted twice or has multiplicity two (in a sense we won't make precise). The equation

x^2 + y^2 = (x + iy)(x − iy) = 0

defines the union of two distinct lines determined by the equations x + iy = 0 and x − iy = 0. The last equation, x^2 + y^2 + z^2 = 0, defines the non-degenerate conic. We study what this curve looks like in the next section.

3.5 Parametrization of the conic

Theorem 3.10. Let C be a non-degenerate conic in a projective plane P(V) over a field F in which 2 ≠ 0, and let A be a point on C. Let L ⊂ P(V) be a projective line not containing A. Then there is a bijection

α : L → C

such that, for X ∈ L, the points A, X, α(X) are collinear.

Figure 16: Parametrization of the conic

Proof. Suppose the conic is defined by the symmetric bilinear form B. Let a ∈ V represent the fixed point A. Because A ∈ C, we know that B(a, a) = 0. Let x ∈ V represent a point X ∈ L. Because A ∉ L, it follows that X ≠ A, so the vectors a, x are linearly independent, and we can extend them to a basis a, x, y of V.

The restriction of B to the span of a, x is not identically zero. If it were, then the matrix for B associated to the basis a, x, y would be of the form

[ 0  0  * ]
[ 0  0  * ]
[ *  *  * ]


which has determinant zero, and this is impossible because B is non-degenerate. It follows that either B(a, x) or B(x, x) is non-zero.

An arbitrary point on the line AX has the form [λa + µx] for some scalars λ and µ. Such a point lies on the conic C if and only if

B(λa + µx, λa + µx) = λ^2 B(a, a) + 2λµ B(a, x) + µ^2 B(x, x) = µ(2λ B(a, x) + µ B(x, x)) = 0,

for which there are two solutions in the homogeneous coordinates λ and µ: the solution µ = 0, corresponding to the point A, and the solution to 2λ B(a, x) + µ B(x, x) = 0, corresponding to the point represented by the vector

w := 2B(a, x) x − B(x, x) a,

which is non-zero because one of the coefficients 2B(a, x) or −B(x, x) is non-zero.

We define the map α : L → C by

α(X) = [w].

It remains to prove that α is a bijection. We do this by proving that the preimage of any point is a single point. If Y is a point on C distinct from A, then by Theorem 2.4 there is a unique point of intersection AY ∩ L, so α^{-1}(Y) = AY ∩ L.

To determine α^{-1}(A), observe that

α(X) = [2B(a, x) x − B(x, x) a] = A

if and only if B(a, x) = 0, if and only if x ∈ a^⊥ and X ∈ L. Since a^⊥ is a 2-dimensional subspace by Lemma 3.7, P(a^⊥) is a projective line. The intersection of the lines P(a^⊥) and L is the unique point in L mapping to A.

In the course of proving Theorem 3.10, we proved the following result that will come up again later.

Proposition 3.11. Let C = Z(B) be a non-degenerate conic and P ∈ C a point. The polar of P is the unique line L such that C ∩ L = {P}. We call L the tangent line of C at P.


Though Theorem 3.10 only says that α is a bijection of sets, it is in fact an isomorphism of "algebraic varieties". In particular, this means that a real projective conic has the topology of a real projective line, which in turn has the topology of a circle, a fact that we've already seen. A second consequence is that a complex projective conic has the topology of CP^1, which in turn has the topology of the two-sphere S^2.

3.5.1 Rational parametrization of the circle

Let us now describe a special case of the map α in coordinates. Let C = Z(x^2 + y^2 − z^2), corresponding to the matrix

    [ 1  0   0 ]
β = [ 0  1   0 ]
    [ 0  0  -1 ].

Choose A = [1 : 0 : 1] and let L = Z(x). Then points in L have the form [0 : t : 1] or [0 : 1 : 0], and the map α : L → C satisfies

α([0 : t : 1]) = [2B((1, 0, 1), (0, t, 1))(0, t, 1) − B((0, t, 1), (0, t, 1))(1, 0, 1)]
              = [−2(0, t, 1) − (t^2 − 1)(1, 0, 1)]
              = [(−t^2 + 1, −2t, −t^2 − 1)]
              = [t^2 − 1 : 2t : t^2 + 1]

and

α([0 : 1 : 0]) = [2B((1, 0, 1), (0, 1, 0))(0, 1, 0) − B((0, 1, 0), (0, 1, 0))(1, 0, 1)]
              = [0 · (0, 1, 0) − 1 · (1, 0, 1)]
              = [1 : 0 : 1] = A.

These formulas can be used to parametrize the circle using rational functions. For t a real number, 1 + t^2 > 0 is non-zero, so we can divide:

[t^2 − 1 : 2t : t^2 + 1] = [(t^2 − 1)/(t^2 + 1) : 2t/(t^2 + 1) : 1]

and we obtain a parametrization of the unit circle in affine coordinates, α : R → S^1 = Z_aff(x^2 + y^2 − 1),

α(t) = ( (t^2 − 1)/(t^2 + 1), 2t/(t^2 + 1) ),

that hits all points except (1, 0).
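A two-line numerical check of the formula (ours, purely illustrative); using exact fractions also makes visible the point of the next paragraph, that rational t gives rational points on the circle.

# Check that alpha(t) = ((t**2 - 1)/(t**2 + 1), 2*t/(t**2 + 1)) lands on the unit circle.
from fractions import Fraction

def alpha(t):
    return ((t*t - 1) / (t*t + 1), 2*t / (t*t + 1))

for t in [Fraction(1, 2), Fraction(3), Fraction(-7, 5)]:
    x, y = alpha(t)
    print((x, y), x*x + y*y == 1)    # always True, with rational coordinates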

These formulas have an interesting application in the case where F = Q is the field of rational numbers (those of the form a/b where a and b are integers, b ≠ 0).

Since we have focused on the cases F = R and F = C so far, we take a moment to discuss informally linear algebra and projective geometry over F = Q. To keep matters simple, we concentrate on the standard dimension n vector space over Q, Q^n, consisting of n-tuples of rational numbers. It is helpful geometrically to consider Q^n as a subset


of R^n in the standard way. Elements in Q^n can be added in the usual way and can be multiplied by scalars from Q, so it makes sense to form linear combinations

λ_1 v_1 + ... + λ_k v_k ∈ Q^n,

where λ_1, ..., λ_k ∈ Q and v_1, ..., v_k ∈ Q^n. The span of a set of vectors span(v_1, ..., v_k) ⊂ Q^n is the set of all linear combinations of v_1, ..., v_k. In particular, a one-dimensional subspace of Q^n is the span of a single non-zero vector. We define the projective space

QP n = P (Qn+1)

to be the set of one dimensional subspaces in Qn+1. Geometrically, we can regard

QP n ⊂ RP n

consisting of those projective points that can be represented by [v] ∈ RP^n where v ∈ Q^{n+1}.

Many of the results about projective geometry we have developed so far generalize immediately to geometry in QP^n (one notable exception is the classification of symmetric bilinear forms, because the proof depends on taking square roots, and the square root of a rational number is not necessarily rational!).

In particular the formulas parametrizing the circle remain valid. That is, solutions to the equation

x^2 + y^2 = z^2

for x, y, z ∈ Q must be of the form

[x : y : z] = [1 − t^2 : 2t : 1 + t^2]

for some rational number t ∈ Q. If we let t = a/b where a and b are integers sharing no common factors, then

[x : y : z] = [1 − (a/b)^2 : 2(a/b) : 1 + (a/b)^2] = [b^2 − a^2 : 2ab : b^2 + a^2].

It follows in particular that every integer solution to the equation x^2 + y^2 = z^2 must have the form

x = c(b^2 − a^2),
y = c(2ab),
z = c(b^2 + a^2)

for some integers a, b, c. Such solutions are called Pythagorean triples because they define right triangles with integer side lengths. For example, a = c = 1, b = 2 determines x = 3, y = 4, z = 5, while a = 2, b = 3, c = 1 determines x = 5, y = 12, z = 13.
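The formula makes it easy to generate triples mechanically. A short sketch (ours, illustrative only):

# Generate Pythagorean triples (b**2 - a**2, 2*a*b, b**2 + a**2) for small a < b
# (with c = 1) and check x**2 + y**2 == z**2.
for b in range(2, 5):
    for a in range(1, b):
        x, y, z = b*b - a*a, 2*a*b, b*b + a*a
        assert x*x + y*y == z*z
        print((x, y, z))
# (3, 4, 5), (8, 6, 10), (5, 12, 13), (15, 8, 17), (12, 16, 20), (7, 24, 25)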


3.6 Polars

Let B be a non-degenerate bilinear form on a vector space V. Recall that V^* = Hom(V, F) is the dual vector space for V.

Lemma 3.12. The map B# : V → V ∗ defined by

B#(v)(w) := B(v, w)

is an isomorphism of vector spaces

Proof. The map is linear because

B^#(λ_1 v_1 + λ_2 v_2)(w) = B(λ_1 v_1 + λ_2 v_2, w)
                          = λ_1 B(v_1, w) + λ_2 B(v_2, w)
                          = λ_1 B^#(v_1)(w) + λ_2 B^#(v_2)(w),

so B^#(λ_1 v_1 + λ_2 v_2) = λ_1 B^#(v_1) + λ_2 B^#(v_2).

Since B is non-degenerate, if v ≠ 0 then there exists w ∈ V such that B^#(v)(w) = B(v, w) ≠ 0. Thus ker(B^#) = 0 and B^# is injective. Since dim(V) = dim(V^*), B^# must also be surjective, and thus is an isomorphism between V and V^*.

Given a vector subspace U ⊂ V, define

U^⊥ := {v ∈ V | B(u, v) = 0 for all u ∈ U}.

We call U^⊥ the orthogonal or perpendicular subspace of U (relative to B).

Proposition 3.13. The isomorphism of Lemma 3.12 sends U^⊥ to the annihilator U^0. In particular, for any subspaces U, U_1, U_2 of V:

• (U^⊥)^⊥ = U

• U_1 ⊂ U_2 ⇒ U_2^⊥ ⊂ U_1^⊥

• dim(U) + dim(U^⊥) = dim(V)

Proof.

B^#(U^⊥) = {B^#(v) ∈ V^* | v ∈ U^⊥} = {ξ ∈ V^* | ξ(u) = 0 for all u ∈ U} = U^0.

The properties are then inherited from Section 2.6.

The isomorphism B^# determines a projective isomorphism between P(V) and its dual P(V^*),

P(V) ≅ P(V^*). (7)


Definition 14. Let B be a non-degenerate symmetric bilinear form on V and let P(U) ⊂ P(V) be a linear subspace. We call P(U^⊥) the polar of P(U).

If P(V) has dimension n and P(U) ⊂ P(V) has dimension m, then P(U^⊥) has dimension n − m − 1, just as for duality.

Proposition 3.14. Let C be a non-degenerate conic in a complex projective plane P(V). Then

(i) Each line in the plane meets the conic in one or two points.

(ii) If P ∈ C, its polar line is the unique tangent to C passing through P.

(iii) If P ∉ C, the polar line of P meets C in two points, and the tangents to C at these points intersect at P.

Figure 17: Duality for conics

Proof. Let U ⊂ V be the 2-dimensional subspace defining the projective line P(U) and let u, v ∈ U be a basis for U. A general point in P(U) has the form [λu + µv] for scalars λ, µ ∈ C not both zero. A point [λu + µv] lies on the conic C if

0 = B(λu + µv, λu + µv) = λ^2 B(u, u) + 2λµ B(u, v) + µ^2 B(v, v). (8)

We think of λ and µ as homogeneous coordinates on the projective line, so that (8) defines the roots of a quadratic polynomial. From §3.4 we know that (8) has one or two solutions, establishing (i). Property (ii) is a direct consequence of Proposition 3.11.

Now let [a] denote a point not lying on C, and let P(U) denote the polar of [a]. From the calculation above, it follows that P(U) meets C in two distinct points [u] and [w] with [u] ≠ [w]. Because [u] lies on C we know B(u, u) = 0. Because [u] lies on the polar of [a] we know B(u, a) = 0. It follows that for all points on the line joining [u] and [a],

B(u, λu + µa) = λ B(u, u) + µ B(u, a) = 0,

so the line joining [u] and [a] is the polar of [u], hence also the tangent to C at [u]. The same argument applies to [w], and this establishes (iii).
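In coordinates, the polar of a point is easy to compute: if the conic has matrix β and the point is represented by the vector a, then by Definition 14 and the isomorphism B^# the polar line is cut out by the linear equation (βa) · x = 0. A short numpy illustration of Proposition 3.14(iii) (the specific point and conic are our own choice):

# Polar line of a point with respect to the conic x**2 + y**2 - z**2 = 0.
import numpy as np

beta = np.diag([1., 1., -1.])
a = np.array([2., 0., 1.])              # a point NOT on the conic (4 + 0 - 1 != 0)
polar = beta @ a                         # coefficients of the polar line: 2x - z = 0

# Points of the polar line can be written [1 : s : 2]; they lie on the conic when
# 1 + s**2 - 4 = 0, i.e. s = +-sqrt(3).
for s in (np.sqrt(3), -np.sqrt(3)):
    u = np.array([1., s, 2.])
    print(np.isclose(u @ beta @ u, 0),       # u lies on the conic
          np.isclose(polar @ u, 0),          # u lies on the polar of a
          np.isclose((beta @ u) @ a, 0))     # the tangent at u passes through a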


Proposition 3.14 establishes a duality between "circumscribed polygons" and "inscribed polygons" for complex conics, as depicted in the real plane below.

Figure 18: Duality for conics

3.7 Linear subspaces of quadrics and ruled surfaces

Quadrics in projective spaces P(V) of dimension greater than 2 often contain linear spaces. However, such a linear space can only be so big.

Proposition 3.15. Suppose that Q ⊂ P(V) is a non-degenerate quadric over a field F in which 0 ≠ 2. If some linear subspace P(U) is a subset of Q, then

dim(P(U)) ≤ (dim(P(V)) − 1)/2.

Proof. Because of dimension shifts, the statement is equivalent to

dim(U) ≤ dim(V )/2

Suppose that P(U) ⊆ Q. Then for all u ∈ U, B(u, u) = 0. Furthermore, for any pair u, u′ ∈ U we have by Proposition 3.4 that

B(u, u′) = (1/2)(B(u + u′, u + u′) − B(u, u) − B(u′, u′)) = 0,

from which it follows that U ⊆ U⊥. According to Proposition 2.14 this means that

2 dim(U) ≤ dim(U) + dim(U⊥) = dim(V ) (9)

from which the conclusion follows.

(This duality is explored in the suggested project topic "Pascal's Theorem, Brianchon's Theorem and duality".)


Indeed, over the complex numbers the maximum value in (9) is always realized. If n is odd then we can factor

x_0^2 + ... + x_n^2 = (x_0 + ix_1)(x_0 − ix_1) + ... + (x_{n−1} + ix_n)(x_{n−1} − ix_n),

so the non-degenerate quadric Z(x_0^2 + ... + x_n^2) in CP^n contains the linear space

Z(x_0 − ix_1, ..., x_{n−1} − ix_n)

of dimension (n − 1)/2.

In the real case, the maximum value in (9) is achieved when n is odd and n + 1 = 2p = 2q (we say B has split signature). In this case,

x_1^2 + ... + x_p^2 − x_{p+1}^2 − ... − x_{2p}^2 = (x_1 + x_{p+1})(x_1 − x_{p+1}) + ... + (x_p + x_{2p})(x_p − x_{2p}),

so the quadric contains the linear subspace Z(x_1 − x_{p+1}, ..., x_p − x_{2p}).

This is best pictured in the case n = 3:

x^2 + y^2 − z^2 − w^2 = (x − z)(x + z) + (y − w)(y + w) = 0;

passing to affine coordinates by setting w = 1 gives

x^2 + y^2 = z^2 + 1,

which defines a surface called a hyperboloid or "cooling tower".

Indeed, the hyperboloid can be filled or "ruled" by a family of lines satisfying the equations

λ(x − z) = µ(y − 1)
−µ(x + z) = λ(y + 1)

for some homogeneous coordinates [λ : µ]. It can also be ruled by the family of lines satisfying

λ(x − z) = µ(y + 1)
−µ(x + z) = λ(y − 1)

for some homogeneous coordinates [λ : µ].

3.8 Pencils of quadrics and degeneration

In this section, we move beyond the study of individual quadrics to the study of families of quadrics. The proof of the following proposition is straightforward and will be omitted.

Proposition 3.16. The set of symmetric bilinear forms on a vector space V forms a vector space under the operation

(λ1B1 + λ2B2)(v, w) = λ1B1(v, w) + λ2B2(v, w).


Figure 19: Bi-ruled hyperboloid

Figure 20: Bi-ruled hyperboloid


The vector space of symmetric bilinear forms on V is denoted S2(V ∗).

Proposition 3.17. Two symmetric bilinear forms B, B′ on a complex vector space V define the same quadric if and only if B = λB′ for some non-zero scalar λ ∈ C. Consequently, the set of quadrics in P(V) is in one-to-one correspondence with the points of P(S^2(V^*)).

Proof. The statement is immediate if dim(V) = 1.

If dim(V) = 2, then by choosing a basis and passing to the associated quadratic form, the statement is equivalent to saying that a homogeneous degree two polynomial in two variables ax^2 + bxy + cy^2 is determined up to scalar multiplication by its roots. This follows from the fact that homogeneous polynomials in two variables can always be factored over the complex numbers (by dehomogenizing, factoring and rehomogenizing). This is the homogeneous version of the Fundamental Theorem of Algebra.

For the general case, choose a basis v_1, ..., v_n of V and define n × n matrices by B(v_i, v_j) = β_{i,j} and B′(v_i, v_j) = β′_{i,j}. For any pair i, j ∈ {1, ..., n}, we may restrict the bilinear forms to the span of v_i and v_j. The result for two dimensional vector spaces implies that

[ β_{i,i}  β_{i,j} ]       [ β′_{i,i}  β′_{i,j} ]
[ β_{j,i}  β_{j,j} ]  =  λ [ β′_{j,i}  β′_{j,j} ],

for some non-zero scalar λ (possibly depending on i, j). This means in particular that

β_{i,j} = 0 if and only if β′_{i,j} = 0. (10)

According to Theorem 3.8, we may choose the basis v_1, ..., v_n so that for some 0 ≤ p ≤ n we have β_{1,1} = β_{2,2} = ... = β_{p,p} = 1 and all other entries β_{i,j} = 0. By (10), β′_{i,j} = 0 unless (i, j) ∈ {(1, 1), (2, 2), ..., (p, p)}. If p ≤ 1 then we are done. If p ≥ 2, then for any 1 ≤ i ≤ p the two dimensional result gives

diag(β′_{1,1}, β′_{i,i}) = β′_{1,1} diag(1, 1),

so β′ = λβ with λ = β′_{1,1} ≠ 0, and thus B and B′ are proportional.

Remark 5. Proposition 3.17 does not hold over general fields. For example, the quadratic forms x^2 + y^2 and x^2 + 2y^2 are not scalar multiples of each other, but Z(x^2 + y^2) = Z(x^2 + 2y^2) = ∅ over the real numbers.

Definition 15. Let B, B′ be two linearly independent symmetric bilinear forms on a vector space V. The pencil of quadrics generated by B and B′ is the set of quadrics associated to bilinear forms of the form λB + µB′, for λ, µ not both zero. For complex V, the pencil forms a projective line in P(S^2(V^*)).

Proposition 3.18. Let V be a vector space of dimension n. A pencil of quadrics contains at most n degenerate quadrics. If F = C, then the pencil contains at least one degenerate quadric.


Proof. Choose a basis for V so that B and B′ are represented by matrices β and β′. The bilinear form λB + µB′ is degenerate if and only if the determinant det(λβ + µβ′) equals zero. Expanded out, det(λβ + µβ′) is a degree n homogeneous polynomial in the homogeneous variables [λ : µ], so it has at most n roots. Over C, such a polynomial has at least one root by the Fundamental Theorem of Algebra.
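The determinant condition is easy to carry out symbolically. Here is a sympy sketch (the two conics are our own choice, not from the notes): it finds the degenerate members of the pencil spanned by x^2 + y^2 − z^2 and xy.

# Degenerate members of the pencil spanned by x**2 + y**2 - z**2 and x*y:
# solve det(lam*beta + mu*beta2) = 0 in the homogeneous variables [lam : mu].
from sympy import symbols, Matrix, Rational, factor

lam, mu = symbols('lam mu')
beta  = Matrix([[1, 0, 0], [0, 1, 0], [0, 0, -1]])                           # x**2 + y**2 - z**2
beta2 = Matrix([[0, Rational(1, 2), 0], [Rational(1, 2), 0, 0], [0, 0, 0]])  # x*y
det = (lam*beta + mu*beta2).det()
print(factor(det))
# -lam*(2*lam - mu)*(2*lam + mu)/4 : three roots [0:1], [1:2], [1:-2],
# i.e. three degenerate conics, the maximum allowed here since n = 3.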

Theorem 3.19. Two non-degenerate conics in a complex projective plane intersect in at least one and no more than four points.

Proof. First we show that all pairs of conics on the pencil intersect at the same points. Let [v] ∈ P(V) be an intersection point of two non-degenerate conics Z(B) and Z(B′). If B′′ := λB + µB′ is a linear combination, then

B′′(v, v) = λB(v, v) + µB′(v, v) = 0 + 0 = 0

so [v] ∈ Z(B′′). It follows that Z(B) ∩ Z(B′) ⊆ Z(B′′) so

Z(B) ∩ Z(B′) ⊆ Z(B) ∩ Z(B′′).

If µ ≠ 0, then the pencil determined by B and B′ coincides with the one determined by B and B′′. In particular,

B′ = (1/µ)(λB + µB′) − (λ/µ)B = (1/µ)B′′ − (λ/µ)B,

so arguing conversely, we get

Z(B) ∩ Z(B′) = Z(B) ∩ Z(B′′). (11)

By Proposition 3.18 we know that Z(B′′) is degenerate for some choice of [λ : µ], so we are reduced to counting the intersection points between a degenerate and a non-degenerate conic. From §3.4 we know that a degenerate conic is either a line or a union of two lines. By Proposition 3.14 it follows that

Z(B) ∩ Z(B′′) = Z(B) ∩ Z(B′)

consists of between one and four points.


Corollary 3.20. Two conics in a complex projective plane have between one and four tangent lines in common.

Proof. Follows from Theorem 3.19 by duality (see Proposition ??).

Remark 6. With an appropriate notion of multiplicity, Theorem 3.19 (resp. Corollary 3.20) can be improved to say there are exactly four intersection points (resp. common tangents) counted with multiplicity.

Using equation (11), we are able to produce pairs of non-degenerate conics realizing all possible intersection counts from 1 to 4.

For example, consider the affine circle defined by the equation (x − 1)^2 + y^2 − 1 = 0. This circle intersects the y-axis at one point, and so it intersects the degenerate conic defined by x^2 = 0 exactly once. Homogenize to get the polynomials

f(x, y, z) := (x − z)^2 + y^2 − z^2 = x^2 − 2xz + y^2

and x^2. I claim that the quadrics Z(f) and Z(x^2) have only one intersection point over the complex numbers. To see this, observe that Z(x^2) = {[0 : a : b] | [a : b] ∈ CP^1} and that f(0, a, b) = (0 − b)^2 + a^2 − b^2 = a^2 vanishes only if [0 : a : b] = [0 : 0 : 1]. According to (11), this means that Z(−x^2 + f(x, y, z)) = Z(y^2 − 2xz) intersects Z(f) at the unique point [0 : 0 : 1].

How do we interpret pencils of conics geometrically? Here is the generic answer.

Proposition 3.21. Suppose that two complex conics intersect in four points lying in general position. Then the pencil of conics they determine consists of all conics passing through those four points.

Proof. All conics in the pencil must contain the four intersection points by (11). It remains to prove that the set of conics passing through four points in general position is a pencil. Up to a projective transformation, we may assume that the four intersection points are [1 : 0 : 0], [0 : 1 : 0], [0 : 0 : 1] and [1 : 1 : 1]. The set of quadratic forms vanishing at these four points is

{axy + byz + cxz | a, b, c ∈ C, a + b + c = 0} = {a x(y − z) + b y(x − z) | a, b ∈ C},


Figure 21:

which is equal to the pencil spanned by the two (degenerate) conics Z(x(y − z)) and Z(y(x − z)). This pencil also contains a third degenerate conic, Z(z(x − y)).

From the above argument, we see that if two conics intersect in four points lying in general position, then the pencil they determine contains three degenerate conics (the maximum allowed by Proposition 3.18), corresponding to the \binom{4}{2}/2 = 3 ways to contain four points in general position in a union of two lines:

4 Exterior Algebras

In this chapter we will always assume that fields F satisfy 2 ≠ 0, and consequently that 1 ≠ −1.

4.1 Multilinear algebra

Definition 16. Let V be a vector space over a field F. A multilinear form of degree k (k ≥ 1), or k-linear form, is a function

M : V k = V × ...× V → F

that is linear in each entry. That is, for each i = 1, 2, ..., k we have

M(u1, u2, ...., λui + µu′i, ...., uk) = λM(u1, ..., ui, ..., uk) + µM(u1, ..., u′i, ..., uk)

for all u_1, ..., u_k, u′_i ∈ V and λ, µ ∈ F. The set of k-linear forms is denoted T^k(V^*). By convention T^0(V^*) = F.

We've seen many examples of multilinear forms already.

• A multilinear form of degree 1 is simply a linear map ξ : V → F, so T^1(V^*) = V^*.


Figure 22: Degenerate Conics determining pencil, viewed in the z = 1 affine plane.

Figure 23: Degenerate Conics lying on four points in general position.


• Symmetric bilinear forms are multilinear forms of degree two.

• If V ≅ F^n is the set of n × 1 column matrices, then given v_1, ..., v_n ∈ V we may form the n × n matrix [v_1 ⋯ v_n]. The determinant

det : V × ...× V → F, (v1, ...., vn) 7→ det(v1....vn)

is a multilinear form of degree n.

Multilinearity implies that the form M is completely determined by its values on k-tuples of basis elements. If v_1, ..., v_n is a basis for V, then

M(\sum_{j=1}^n λ_{1,j} v_j, ..., \sum_{j=1}^n λ_{k,j} v_j) = \sum_{j_1,...,j_k=1}^n λ_{1,j_1} ⋯ λ_{k,j_k} M(v_{j_1}, ..., v_{j_k}). (12)

There are two distinguished subsets of T^k(V^*). The symmetric k-linear forms S^k(V^*) ⊆ T^k(V^*) possess the property that transposing any two entries does not change the value of the form: for any u_1, ..., u_k ∈ V, we have

M(u_1, ..., u_i, ..., u_j, ..., u_k) = M(u_1, ..., u_j, ..., u_i, ..., u_k).

In this chapter we will concentrate on what are called alternating (or skew-symmetric, or exterior) forms

Λ^k(V^*) ⊆ T^k(V^*).

A multilinear form M is called alternating if transposing two entries changes M by a minus sign: for any u_1, ..., u_k ∈ V, we have

M(u_1, ..., u_i, ..., u_j, ..., u_k) = −M(u_1, ..., u_j, ..., u_i, ..., u_k). (13)

Of the examples above, dual vectors are alternating because the condition is vacuous in degree one. The determinant is alternating, because transposing columns (or rows, for that matter) multiplies the determinant by minus one. Symmetric bilinear forms are of course not alternating, because transposing the entries leaves them unchanged.

A consequence of (13) is that an alternating multilinear form will vanish if two entries are equal. From (12) it follows that there are no nonzero alternating multilinear forms of degree k greater than n, the dimension of V. More generally, we can say:

Proposition 4.1. Let V be a vector space of dimension n over a field F in which 0 ≠ 2. For k a non-negative integer, Λ^k(V^*) is a vector space of dimension \binom{n}{k}.

Proof. Let M_1 and M_2 be k-linear alternating forms. The vector space structure is defined by

(λ1M1 + λ2M2)(u1, ..., uk) = λ1M1(u1, ..., uk) + λ2M2(u1, ..., uk). (14)

That this makes the set of k-linear forms into a vector space is a routine verification.


Let v_1, ..., v_n ∈ V be a basis. According to (12), a k-linear form is determined by its values on k-tuples of basis vectors

M_{i_1,...,i_k} := M(v_{i_1}, ..., v_{i_k})

for v_{i_j} ∈ {v_1, ..., v_n}. The constants M_{i_1,...,i_k} are sometimes called structure constants. If two indices are duplicated, then M_{i_1,...,i_k} = 0, and if two indices are transposed M_{i_1,...,i_k} changes only by a minus sign. Thus M is completely determined by the list of scalars

which is a list of(nk

)scalars. Conversely, it is straightforward to show that any list of

scalars (15) defines a k-linear alternating form.

The following corollary helps establish the significance of the determinant of a matrix.

Corollary 4.2. The determinant is the only alternating n-linear form on F^n, up to scalar multiplication.

Proof. The determinant is a non-trivial element of the one-dimensional vector space of n-linear alternating forms, so every other element is a scalar multiple of the determinant.

4.2 The exterior algebra

Definition 17. The k-th exterior power Λ^k(V) is the dual of the vector space of k-linear alternating forms. Elements of Λ^k(V) are sometimes called k-vectors.

It follows immediately from results in the last section that Λ^0(V) = F, that Λ^1(V) = V, and that dim(Λ^k(V)) = \binom{n}{k}.

Definition 18. If u_1, ..., u_k ∈ V, define the exterior product u_1 ∧ ... ∧ u_k ∈ Λ^k(V) to be the dual vector satisfying

(u_1 ∧ ... ∧ u_k)(M) = M(u_1, ..., u_k).

That u1 ∧ ... ∧ uk is in fact an element of Λk(V ) is immediate from (14).

An element of Λ^k(V) is called decomposable if it equals u_1 ∧ ... ∧ u_k for some choice of vectors u_1, ..., u_k. The following proposition tells us that every k-vector in Λ^k(V) is equal to a linear combination of decomposable k-vectors.

Proposition 4.3. Let V be a vector space with basis v1, ..., vn. The set

S := {vi1 ∧ ... ∧ vik | 1 ≤ i1 < ... < ik ≤ n}

is a basis for Λk(V ).

Proof. It follows from the proof of Proposition 4.1 that the vector space of k-linear alternating forms has a basis {M_I} indexed by k-subsets I ∈ {{i_1 < ... < i_k} | i_1, ..., i_k ∈ {1, 2, ..., n}}, defined by the condition M_I(v_{i_1}, ..., v_{i_k}) = 1 and M_I(v_{j_1}, ..., v_{j_k}) = 0 for all other k-tuples. The set S is simply the dual basis.


For example, if V has basis v1, v2, v3, v4 then Λ2(V ) has a basis of six 2-vectors v1∧ v2,v1 ∧ v3, v1 ∧ v4, v2 ∧ v3, v2 ∧ v4, v3 ∧ v4.

Consider the function (called the exterior product)

V k = V × ...× V → Λk(V ), (u1, ..., uk) 7→ u1 ∧ ... ∧ uk. (16)

It is a straightforward consequence of the definition that (16) is linear in each factor (i.e. it is k-linear) and that transposing two factors introduces a minus sign. Consequently, if u_i = u_j for some i ≠ j, then u_1 ∧ ... ∧ u_k = 0.

Proposition 4.4. A set of vectors u_1, ..., u_k ∈ V is linearly independent if and only if u_1 ∧ ... ∧ u_k ∈ Λ^k(V) is non-zero.

Proof. Suppose that u_1, ..., u_k ∈ V are linearly dependent. Then it is possible to write one of them as a linear combination of the others,

u_i = \sum_{j ≠ i} λ_j u_j.

It follows by linearity that

u_1 ∧ ... ∧ u_k = u_1 ∧ ... ∧ (\sum_{j ≠ i} λ_j u_j) ∧ ... ∧ u_k = \sum_{j ≠ i} λ_j u_1 ∧ ... ∧ u_j ∧ ... ∧ u_k = 0,

which equals zero because it is a sum of terms containing repeated factors.

On the other hand, if u_1, ..., u_k are linearly independent, then they may be extended to a basis of V. By Proposition 4.3, this means that u_1 ∧ ... ∧ u_k is part of a basis for Λ^k(V), so in particular it is non-zero.

Suppose that T : V → V is a linear transformation from V to itself. If M is a k-linear form on V, then the pull-back T^*M is the k-linear form on V defined by the formula

T ∗M(u1, ..., uk) = M(T (u1), ..., T (uk)).

If M is alternating, then T ∗M is alternating because

T ∗M(u1, ..., ui, ..., uj, ..., uk) = M(T (u1), ..., T (ui), ...T (uj), ..., T (uk))

= −M(T (u1), ..., T (uj), ...T (ui), ..., T (uk))

= −T ∗M(u1, ..., uj, ..., ui, ..., uk).

The transpose of T ∗ is the linear map (abusing notation)

ΛkT : Λk(V )→ Λk(V )

that sends u_1 ∧ ... ∧ u_k to T(u_1) ∧ ... ∧ T(u_k).

In the case k = n = dim(V), we get something familiar (the following proof involves material outside of the prerequisites and will not be tested).


Proposition 4.5. If dim(V) = n and T : V → V is a linear map, then Λ^n T : Λ^n(V) → Λ^n(V) is scalar multiplication by the determinant of T.

Proof. The vector space Λ^n(V) has dimension \binom{n}{n} = 1, thus Λ^n T : Λ^n(V) → Λ^n(V) must be multiplication by some scalar (it is defined by a 1 × 1 matrix). Choose a basis v_1, ..., v_n of V. The linear map T determines an n × n matrix [T_{i,j}] with entries determined by the formula

T(v_i) = \sum_{j=1}^n T_{i,j} v_j.

By Proposition 4.3, the vector v_1 ∧ ... ∧ v_n spans Λ^n(V), and

Λ^n T(v_1 ∧ ... ∧ v_n) = T(v_1) ∧ ... ∧ T(v_n)
 = (\sum_{j_1=1}^n T_{1,j_1} v_{j_1}) ∧ ... ∧ (\sum_{j_n=1}^n T_{n,j_n} v_{j_n})
 = \sum_{j_1,...,j_n=1}^n T_{1,j_1} ⋯ T_{n,j_n} v_{j_1} ∧ ... ∧ v_{j_n}.

To proceed further, observe that the product v_{j_1} ∧ ... ∧ v_{j_n} equals zero if any of the indices are repeated and equals ±v_1 ∧ ... ∧ v_n if they are all distinct. The sign is determined by whether it takes an odd or an even number of transpositions to get from the list (j_1, ..., j_n) to the list (1, 2, ..., n), because a minus sign is introduced for each transposition. Generally, a permutation σ is a list of the numbers {1, ..., n} written without repetition, and we define sign(σ) = (−1)^t, where t is how many transpositions it takes to reorder σ to (1, ..., n). With this notation, we have

Λ^n T(v_1 ∧ ... ∧ v_n) = \sum_{j_1,...,j_n=1}^n T_{1,j_1} ⋯ T_{n,j_n} v_{j_1} ∧ ... ∧ v_{j_n}
 = \sum_σ T_{1,σ(1)} ⋯ T_{n,σ(n)} v_{σ(1)} ∧ ... ∧ v_{σ(n)}
 = \sum_σ sign(σ) T_{1,σ(1)} ⋯ T_{n,σ(n)} v_1 ∧ ... ∧ v_n,

so Λ^n T is scalar multiplication by the scalar \sum_σ sign(σ) T_{1,σ(1)} ⋯ T_{n,σ(n)}, which is a formula for the determinant of T.

Proposition 4.6. There exists a unique bilinear map, called the exterior product,

∧ : Λk(V )× Λl(V )→ Λk+l(V ),

satisfying the property that for decomposable elements, we have

(u1 ∧ ... ∧ uk) ∧ (w1 ∧ ... ∧ wl) = u1 ∧ ... ∧ uk ∧ w1 ∧ ... ∧ wl, (17)


Proof. By choosing a basis and applying Proposition 4.3, it is clear that bilinearity combined with (17) uniquely determines the map. It remains to show the map exists and is independent of any choice of basis. We do this by supplying a definition of the exterior product that is independent of any basis.

Let b ∈ Λl(V ) be an arbitrary element. Define a map

ιb : Λk+l(V ∗)→ Λk(V ∗)

by the rule that for vectors u1, ..., uk ∈ V ,

(ιb(M))(u1, ..., uk) = b(M(u1, ..., uk,−,−, ...−)).

Observe that when k = 0, then Λ^k(V^*) = Λ^0(V^*) = F and ι_b is simply b.

It is easy to check that ι_b(M) inherits multi-linearity and the alternating property from M, so ι_b is well-defined. Given a ∈ Λ^k(V) and b ∈ Λ^l(V), we define the exterior product a ∧ b ∈ Λ^{k+l}(V) by the formula

(a ∧ b)(M) = ιaιb(M) ∈ Λ0(V ∗) = F,

for all alternating (k + l)-forms M .

Example 20. Suppose we have vectors (v_1 − v_2) ∈ V = Λ^1(V) and (v_2 ∧ v_3 − v_1 ∧ v_3) ∈ Λ^2(V). Then the product satisfies

(v_1 − v_2) ∧ (v_2 ∧ v_3 − v_1 ∧ v_3) = v_1 ∧ (v_2 ∧ v_3 − v_1 ∧ v_3) − v_2 ∧ (v_2 ∧ v_3 − v_1 ∧ v_3)
 = v_1 ∧ v_2 ∧ v_3 − v_1 ∧ v_1 ∧ v_3 − v_2 ∧ v_2 ∧ v_3 + v_2 ∧ v_1 ∧ v_3
 = v_1 ∧ v_2 ∧ v_3 + v_2 ∧ v_1 ∧ v_3
 = v_1 ∧ v_2 ∧ v_3 − v_1 ∧ v_2 ∧ v_3
 = 0

An important property of the exterior product is that it is graded commutative. Given two decomposable elements we have

(u_1 ∧ ... ∧ u_k) ∧ (w_1 ∧ ... ∧ w_l) = u_1 ∧ ... ∧ u_k ∧ w_1 ∧ ... ∧ w_l
 = (−1)^k w_1 ∧ u_1 ∧ ... ∧ u_k ∧ w_2 ∧ ... ∧ w_l
 = (−1)^{2k} w_1 ∧ w_2 ∧ u_1 ∧ ... ∧ u_k ∧ w_3 ∧ ... ∧ w_l
 = ...
 = (−1)^{kl} w_1 ∧ w_2 ∧ ... ∧ w_l ∧ u_1 ∧ ... ∧ u_k,

because moving each w_j through u_1 ∧ ... ∧ u_k requires k transpositions. Extending this formula by linearity gives us the third of the following list of properties.

Proposition 4.7. Let a ∈ Λk(V ), b, b′ ∈ Λl(V ) and c ∈ Λp(V ). Then

• a ∧ (b+ b′) = a ∧ b+ a ∧ b′ (linearity)

• (a ∧ b) ∧ c = a ∧ (b ∧ c) (associativity)

• a ∧ b = (−1)klb ∧ a (graded commutativity).


4.3 The Grassmannian and the Plücker embedding

Let V be a vector space and let k be an integer, 0 ≤ k ≤ dim(V). The Grassmannian of k-planes in V, denoted Gr_k(V), is the set of k-dimensional subspaces of V. In terms of projective geometry, Gr_1(V) is equal to the projective space P(V), Gr_2(V) is the set of projective lines in P(V), Gr_3(V) is the set of planes in P(V), etc.

A k-dimensional subspace U ⊆ V naturally determines a 1-dimensional subspace

Λ^k(U) ⊆ Λ^k(V).

In terms of a basis u_1, ..., u_k ∈ U, Λ^k(U) is spanned by u_1 ∧ ... ∧ u_k ∈ Λ^k(V), which we know is non-zero by Proposition 4.4.

Theorem 4.8. The map

Φ : Grk(V )→ P (Λk(V )), Φ(U) = Λk(U)

is injective, so we may identify Gr_k(V) with the subset of P(Λ^k(V)) represented by decomposable vectors. The map Φ is called the Plücker embedding.

Proof. Suppose that U ≠ W. Construct a basis of V as follows. Begin with a basis v_1, v_2, ..., v_k of U. Since U ≠ W, there exists a vector v_{k+1} ∈ W \ U such that {v_1, ..., v_{k+1}} is linearly independent (since its span has dimension strictly larger than dim(U) = k). Now extend to a basis v_1, ..., v_n of V.

With respect to this basis Φ(U) is spanned by v_1 ∧ ... ∧ v_k, whereas Φ(W) is spanned by a vector a ∧ v_{k+1} for some a ∈ Λ^{k−1}(V). Since every term in the expansion of a ∧ v_{k+1} will include a factor of v_{k+1}, it cannot equal v_1 ∧ ... ∧ v_k, even up to scalar multiplication. We conclude that Φ(U) ≠ Φ(W).

It can be shown that the image of the Plücker embedding is not just a set: it is a projective algebraic set in the sense of §3.2. Our next task is to describe this algebraic set in the case k = 2, and to use it to study the set of lines in projective space as a geometric object.

4.4 Decomposability of 2-vectors

Recall that a 2-vector a ∈ Λ^2(V) is called decomposable if it can be factored as a = u_1 ∧ u_2 for some vectors u_1, u_2 ∈ V. For example

(3v_1 − v_2) ∧ (v_1 + v_2 + v_3) = 3v_1 ∧ v_2 + 3v_1 ∧ v_3 − v_2 ∧ v_1 − v_2 ∧ v_3 = 4v_1 ∧ v_2 + 3v_1 ∧ v_3 − v_2 ∧ v_3

is decomposable. Now imagine simply being presented with the expression 4v1∧v2 +3v1∧v3 − v2 ∧ v3. How can you tell that it factors?

Given any decomposable 2-vector, a = u1 ∧ u2, we have

a ∧ a = u1 ∧ u2 ∧ u1 ∧ u2 = 0

because it contains repeated factors. It turns out that the converse of this is also true.


Proposition 4.9. Let V be a vector space. A 2-vector a ∈ Λ^2(V) is decomposable if and only if a ∧ a = 0.

Proof. We have already proven the "only if" direction.

Now suppose that a ∧ a = 0. We wish to prove that a is decomposable. We proceed by induction on the dimension of V.

sion less than three, because then all 2-vectors are decomposable). Consider the linearmap

A : V → Λ3(V ), A(v) = a ∧ v.

Since dim(Λ^3(V)) = 1, the kernel of A has dimension at least 3 − 1 = 2 by the rank-nullity theorem. Let v_1, v_2 ∈ V be two linearly independent vectors such that A(v_1) = A(v_2) = 0, and extend to a basis v_1, v_2, v_3. By Proposition 4.3, there exist scalars such that

a = λv1 ∧ v2 + µv1 ∧ v3 + νv2 ∧ v3.

By construction a ∧ v1 = 0, so

0 = a ∧ v1

= λv1 ∧ v2 ∧ v1 + µv1 ∧ v3 ∧ v1 + νv2 ∧ v3 ∧ v1

= νv2 ∧ v3 ∧ v1

so we conclude that ν = 0. Similarly, because a ∧ v_2 = 0 we conclude that µ = 0. It follows that a = λ v_1 ∧ v_2 = (λ v_1) ∧ v_2 is decomposable.

Induction step: Let dim(V) = n and suppose that the property holds for dimensions less than n. Suppose a ∈ Λ^2(V) satisfies a ∧ a = 0. Given an arbitrary basis v_1, ..., v_n ∈ V, express a as a linear combination

a = \sum_{1 ≤ i < j ≤ n} a_{i,j} v_i ∧ v_j = (\sum_{i=1}^{n−1} a_{i,n} v_i) ∧ v_n + \sum_{1 ≤ i < j ≤ n−1} a_{i,j} v_i ∧ v_j = u ∧ v_n + a′

where U = Span{v1, ..., vn−1} ⊂ V , u ∈ U and a′ ∈ Λ2(U). Then

0 = a ∧ a = (u ∧ v_n + a′) ∧ (u ∧ v_n + a′)
  = u ∧ v_n ∧ u ∧ v_n + a′ ∧ u ∧ v_n + u ∧ v_n ∧ a′ + a′ ∧ a′
  = 2 a′ ∧ u ∧ v_n + a′ ∧ a′,


where in the last step we use the fact that products with repeated factors vanish, and graded commutativity of exterior multiplication. Because v_n does not appear in the expansion of a′ ∧ u or of a′ ∧ a′, the two terms must vanish separately, and we get

a′ ∧ u = 0, a′ ∧ a′ = 0.

Since a′ ∈ Λ2(U) satisfies a′ ∧ a′ = 0 and dim(U) = n − 1, it must be decomposable by induction, so a′ = u1 ∧ u2 for some u1, u2 ∈ U. Substituting this into a′ ∧ u = 0 gives the equation

u1 ∧ u2 ∧ u = 0,

so by Proposition 4.4, the vectors u1, u2, u are linearly dependent. Consequently

a = u ∧ vn + u1 ∧ u2 ∈ Λ2(W),

where W is the subspace spanned by u, u1, u2, vn. Since u, u1, u2 are linearly dependent they span a subspace of dimension at most two, so W has dimension at most three, and a must be decomposable by the base case.

Example 21. Returning to our previous example, it is immediately clear that a := 4v1 ∧ v2 + 3v1 ∧ v3 − v2 ∧ v3 is decomposable: only the three basis vectors v1, v2, v3 appear as factors in its terms, so every term of the product a ∧ a must contain a repeated factor and thus vanishes, giving a ∧ a = 0.
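For a concrete check of the criterion, here is a small Python sketch (the encoding and helper names are ad hoc): store a 2-vector ∑ ai,j vi ∧ vj as the antisymmetric matrix A with A[i, j] = ai,j; the 2-vector u ∧ w then has matrix u wᵀ − w uᵀ, and by Proposition 4.9 the condition a ∧ a = 0 amounts to rank(A) ≤ 2.

import numpy as np

def wedge_matrix(u, w):
    # Antisymmetric coefficient matrix of the 2-vector u ^ w.
    u, w = np.asarray(u, dtype=float), np.asarray(w, dtype=float)
    return np.outer(u, w) - np.outer(w, u)

# The 2-vector 4 v1^v2 + 3 v1^v3 - v2^v3 from Example 21.
A = np.array([[ 0.0,  4.0,  3.0],
              [-4.0,  0.0, -1.0],
              [-3.0,  1.0,  0.0]])

print(np.linalg.matrix_rank(A))                             # 2, so a is decomposable
print(np.allclose(A, wedge_matrix([3, -1, 0], [1, 1, 1])))  # True: a = (3v1 - v2) ^ (v1 + v2 + v3)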

4.5 The Klein Quadric

A special case of Theorem 4.8 when k = 2 identifies Gr2(V), the set of lines in projective space, with the set of points in P(Λ2(V)) represented by decomposable vectors. Let us consider the implications of our algebraic characterization of decomposability (Proposition 4.9) in various dimensions of V.

If dim(V) = 2, then Λ2(V) is one-dimensional. In this case P(V) is itself a projective line, so it contains only a single line, and P(Λ2(V)) is a single point.

If dim(V) = 3, then P(V) is a projective plane and dim(Λ2(V)) = 3, so P(Λ2(V)) is also a projective plane. Moreover, a ∧ a ∈ Λ4(V) = 0 for every a ∈ Λ2(V), so it follows from Proposition 4.9 that every 2-vector is decomposable, and Theorem 4.8 identifies the set of lines in P(V) with the set of points in P(Λ2(V)). Of course, we already know (§2.6) that the set of lines in a projective plane P(V) is in one-to-one correspondence with the dual projective plane P(V ∗), so this result implies a natural bijection

P (Λ2(V )) ∼= P (V ∗).

To see where this comes from consider the wedge product

∧ : V × Λ2(V )→ Λ3(V ).

Since the product is bilinear, 2-vectors in Λ2(V) determine linear maps from V to Λ3(V). Since Λ3(V) is one-dimensional, there is an isomorphism Λ3(V) ∼= F, which is uniquely determined up to a non-zero scalar. Combining these allows us to consider 2-vectors as defining linear maps V → F which are determined up to a non-zero scalar, and this leads to the natural bijection P(Λ2(V)) ∼= P(V ∗).
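Concretely, when dim(V) = 3 this identification can be checked in coordinates. The following Python sketch (with ad hoc names) identifies Λ3(F 3) with F via the determinant and recovers, up to scale, the dual coordinate x3 from the 2-vector v1 ∧ v2.

import numpy as np

def functional_from_2vector(u, w):
    # The linear map v -> (u ^ w) ^ v, read off in Lambda^3(F^3) = F via the determinant.
    return lambda v: np.linalg.det(np.array([u, w, v], dtype=float))

phi = functional_from_2vector([1, 0, 0], [0, 1, 0])   # the 2-vector v1 ^ v2
print([phi(e) for e in np.eye(3)])                    # [0.0, 0.0, 1.0], i.e. the coordinate x3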


Now consider the case when dim(V) = 4, so that P(V) is projective 3-space. Choose a basis v1, v2, v3, v4 ∈ V. An arbitrary 2-vector in Λ2(V) has the form

a = x1,2v1 ∧ v2 + x1,3v1 ∧ v3 + x1,4v1 ∧ v4 + x2,3v2 ∧ v3 + x2,4v2 ∧ v4 + x3,4v3 ∧ v4

for some scalars xi,j which we treat as homogeneous variables for P(Λ2(V)). This gives rise to a formula

a ∧ a = B(a, a)v1 ∧ v2 ∧ v3 ∧ v4

where B(a, a) is the quadratic form 2(x1,2x3,4 − x1,3x2,4 + x1,4x2,3). Combining with Proposition 4.9 we deduce the following.

Corollary 4.10. The set of lines in a three dimensional projective space P(V) is in one-to-one correspondence with the points on a quadric Q (called the Klein Quadric) lying in the five dimensional projective space P(Λ2(V)). With respect to the standard basis of Λ2(V) coming from a basis v1, v2, v3, v4 we have Q = Z(f) where

f = x1,2x3,4 − x1,3x2,4 + x1,4x2,3.
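As a quick sanity check of the corollary, the following Python sketch (names ad hoc, indices 0-based) computes the Plucker coordinates of the line through two points of projective 3-space as the 2 × 2 minors of a 2 × 4 matrix and verifies that they satisfy f = 0.

import itertools
import numpy as np

def plucker(p, q):
    # Plucker coordinates x_{ij} (i < j) of the line through the points [p] and [q].
    M = np.array([p, q], dtype=float)
    return {(i, j): np.linalg.det(M[:, [i, j]])
            for i, j in itertools.combinations(range(4), 2)}

x = plucker([1, 0, 2, -1], [0, 1, 1, 3])   # an arbitrary pair of points of P^3
f = x[(0, 1)]*x[(2, 3)] - x[(0, 2)]*x[(1, 3)] + x[(0, 3)]*x[(1, 2)]
print(f)                                   # 0.0 (up to rounding), so the point lies on the Klein Quadric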

Homework exercise: Prove that for dim(V) = n ≥ 4, the set of lines in P(V) can be identified with an intersection of quadrics in P(Λ2(V)).

In the remainder of this section we study the Klein Quadric Q. Recall from §3.7 that a quadric in complex projective 5-space P(Λ2(V)) is filled by families of projective planes. Our next task is to characterize these planes in terms of the geometry of lines in P(V).

Proposition 4.11. Let L1 and L2 be two lines in a projective 3-space P(V). Then L1 and L2 intersect if and only if they correspond to points Φ(L1), Φ(L2) ∈ P(Λ2(V)) that are joined by a line contained in the Klein Quadric Q.

Proof. The lines L1 and L2 correspond to 2-dimensional subspaces U1, U2 ⊂ V. The lines intersect if and only if U1 ∩ U2 contains a non-zero vector u.

Suppose that the lines do intersect. Then we may choose bases u1, u ∈ U1 and u2, u ∈ U2. The lines are represented by points [u1 ∧ u], [u2 ∧ u] ∈ P(Λ2(V)). These determine a line whose points are represented by

λu1 ∧ u + µu2 ∧ u = (λu1 + µu2) ∧ u,

which are all decomposable and thus are contained in Q.

Suppose that the lines do not intersect. Then U1 ∩ U2 = {0} and thus by ... U1 ⊕ U2 ∼= V. In particular, there is a basis v1, v2, v3, v4 ∈ V such that span{v1, v2} = U1 and span{v3, v4} = U2. The line containing Φ(L1) and Φ(L2) has points represented by

λv1 ∧ v2 + µv3 ∧ v4,

so

(λv1 ∧ v2 + µv3 ∧ v4) ∧ (λv1 ∧ v2 + µv3 ∧ v4) = 2λµ v1 ∧ v2 ∧ v3 ∧ v4

is non-zero except when either λ or µ is zero. Hence the only points of this line lying in Q are Φ(L1) and Φ(L2), so the line is not contained in Q.
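The criterion underlying this proof can also be tested numerically: the lines meet exactly when the four spanning vectors are linearly dependent, i.e. when Φ(L1) ∧ Φ(L2) = 0 in Λ4(V). A short Python sketch (helper names ad hoc):

import numpy as np

def lines_intersect(p1, q1, p2, q2, tol=1e-9):
    # The lines span{p1, q1} and span{p2, q2} in P^3 meet iff det[p1, q1, p2, q2] = 0,
    # i.e. iff Phi(L1) ^ Phi(L2) = 0 in Lambda^4(V).
    M = np.array([p1, q1, p2, q2], dtype=float)
    return abs(np.linalg.det(M)) < tol

# These two lines share the point [1:0:0:0], so they intersect...
print(lines_intersect([1, 0, 0, 0], [0, 1, 0, 0], [1, 0, 0, 0], [0, 0, 1, 0]))  # True
# ...while these two are disjoint, since U1 + U2 = V.
print(lines_intersect([1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]))  # False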


Proposition 4.12. The set of lines passing through a fixed point X ∈ P(V) corresponds to a projective plane contained in the Klein Quadric Q ⊆ P(Λ2(V)).

Proof. Let x ∈ V represent the point X ∈ P(V), and extend to a basis x, v1, v2, v3 ∈ V. Then a projective line L containing X corresponds to span{x, u} for some u = µx + λ1v1 + λ2v2 + λ3v3 with at least one of the λi non-zero. Then

Φ(L) = [x ∧ (µx + λ1v1 + λ2v2 + λ3v3)] = [λ1x ∧ v1 + λ2x ∧ v2 + λ3x ∧ v3],

which lies in the plane in P(Λ2(V)) spanned by the points [x ∧ v1], [x ∧ v2], [x ∧ v3]. Conversely, any point lying in this plane has the form [λ1x ∧ v1 + λ2x ∧ v2 + λ3x ∧ v3] and thus corresponds to the line span{x, λ1v1 + λ2v2 + λ3v3}.

We call a plane in the Klein Quadric of the type described above an α-plane. There is another type of plane as well.
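Numerically, an α-plane can be seen as follows: the 2-vectors x ∧ v span the 3-dimensional subspace x ∧ V of Λ2(V), so the Plucker images of lines through a fixed point x span a subspace of dimension (at most) 3. A small Python sketch (ad hoc names, randomly sampled lines):

import itertools
import numpy as np

def plucker_vector(p, q):
    # The six Plucker coordinates of the line through [p] and [q], as a vector in Lambda^2(V).
    M = np.array([p, q], dtype=float)
    return np.array([np.linalg.det(M[:, [i, j]])
                     for i, j in itertools.combinations(range(4), 2)])

x = np.array([1.0, 2.0, 0.0, 1.0])                  # a fixed point of P^3
rng = np.random.default_rng(0)
samples = [plucker_vector(x, rng.standard_normal(4)) for _ in range(6)]
print(np.linalg.matrix_rank(np.array(samples)))     # 3: the images all lie in one projective plane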

Proposition 4.13. The set of lines lying in a fixed plane P ⊂ P(V) determines a plane in the Klein quadric Q ⊂ P(Λ2(V)). Such a plane in Q is called a β-plane.

Proof. Let [u1], [u2], [u3] ∈ P be three non-collinear points lying on the plane. An arbitrary line in P is determined by a pair of points [λu1 + µu2 + νu3] and [λ′u1 + µ′u2 + ν′u3], and maps to the point in Q

[(λu1 + µu2 + νu3) ∧ (λ′u1 + µ′u2 + ν′u3)]

= [(λµ′ − λ′µ)u1 ∧ u2 + (λν′ − λ′ν)u1 ∧ u3 + (µν′ − µ′ν)u2 ∧ u3]

contained in the plane of P(Λ2(V)) determined by u1 ∧ u2, u1 ∧ u3 and u2 ∧ u3. Conversely, points in P(Λ2(V)) of the form [λ1u1 ∧ u2 + λ2u1 ∧ u3 + λ3u2 ∧ u3] must be decomposable, because they are contained in Λ2(span{u1, u2, u3}) and 2-vectors over a 3-dimensional vector space are always decomposable; such a point therefore corresponds to a line contained in P.

It is interesting to note that α-planes and β-planes are related by duality. A point X ∈ P(V) corresponds by duality to a plane X0 ⊂ P(V ∗). The set of lines in P(V) containing the point X corresponds via duality to the set of lines contained in X0.

Proposition 4.14. Every plane contained in the Klein quadric is either an α-plane or a β-plane.

Proof. Let P ⊂ Q be a plane in the quadric and choose three non-collinear points in P corresponding to three lines L1, L2, L3 in P(V). By Proposition 4.11, the intersections Li ∩ Lj are non-empty. There are two cases:

Case 1: L1 ∩ L2 ∩ L3 is a point X. Then all Φ(Li) lie in the α-plane associated to X, which must equal P.

Case 2: L1 ∩ L2 ∩ L3 is empty. Then the pairs of lines intersect in three distinct points X, Y, Z ∈ P(V). These three points can't be collinear, because that would imply L1 = L2 = L3. Thus X, Y, Z determine a plane containing L1, L2, L3, so P must equal the corresponding β-plane.


5 Extra details

Characterization of affine algebraic sets in F 1.

Proposition 5.1. If f(x) is a polynomial of degree n, then Zaff(f) consists of no more than n points. Conversely, if X = {r1, ..., rn} ⊂ F 1 is a set of n points, then there exists a polynomial g(x) of degree n such that X = Z(g(x)).

Proof. For the first part, we use induction on n.

Base case: If n = 1 then f(x) = ax + b for some scalars a, b ∈ F with a ≠ 0. The equation ax + b = 0 has the unique solution x = −b/a, so Z(f) = {−b/a} is a single point.

Induction step: Let f(x) = anx^n + an−1x^{n−1} + ... + a0 with an ≠ 0. If f(x) has no roots in F then Zaff(f) is empty and there is nothing to prove, so suppose that f(x) has at least one root r ∈ F. Then using the Long Division Algorithm we have

f(x) = (x − r)h(x)

where h(x) has degree n − 1. Thus f(x) = 0 if and only if x = r or h(x) = 0. By induction, h(x) has at most n − 1 roots, so f(x) has at most n roots.

Conversely, note that if

g(x) = (x− r1)(x− r2)...(x− rn)

then Z(g(x)) = {r1, ..., rn}.

Characterization of algebraic sets in FP 1.

Proposition 5.2. If f(x, y) is a homogeneous polynomial of degree n, then Z(f) is a set with at most n points. Conversely, if X ⊆ FP 1 is a set of n points then there exists a degree n homogeneous polynomial g(x, y) such that X = Z(g).

Proof. Let f(x, y) = anx^n + an−1x^{n−1}y + ... + a1xy^{n−1} + a0y^n. Then

[1 : 0] ∈ Z(f) ⇐⇒ f(1, 0) = an = 0.

All remaining points in FP 1 have the form [t : 1], and

[t : 1] ∈ Z(f) ⇐⇒ f(t, 1) = ant^n + an−1t^{n−1} + ... + a0 = 0.

If an ≠ 0 then [1 : 0] ∉ Z(f) and f(t, 1) has at most n roots. If an = 0 then [1 : 0] ∈ Z(f), but f(t, 1) has degree at most n − 1 and so has no more than n − 1 roots. In either case Z(f) consists of at most n points.

Conversely, suppose that [a1 : b1], ..., [an : bn] is a set of n points in FP 1. Then

f(x, y) = (b1x− a1y)...(bnx− any)

vanishes if and only if [x : y] = [ai : bi] for some i = 1, ..., n, so

Z(f) = {[a1 : b1], ..., [an : bn]}.
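The converse construction is easy to experiment with symbolically. The following sympy sketch (with arbitrarily chosen points) builds g(x, y) as a product of the linear forms bix − aiy and checks that it vanishes at the prescribed points.

import sympy as sp

x, y = sp.symbols('x y')
points = [(1, 1), (2, 1), (1, 0)]                 # the points [1:1], [2:1] and [1:0]
g = sp.expand(sp.Mul(*[b*x - a*y for a, b in points]))
print(g)                                          # -x**2*y + 3*x*y**2 - 2*y**3
print([g.subs({x: a, y: b}) for a, b in points])  # [0, 0, 0]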


Linear sets.

Let V be a vector space with basis v1, ..., vn ∈ V. The dual basis x1, ..., xn ∈ V ∗ consists of the linear maps xi : V → F for which xj(vi) = δi,j. Equivalently, the dual basis is defined by the property that

v = x1(v)v1 + ... + xn(v)vn

for all v ∈ V. The general element of V ∗ is a linear combination

f(x1, ..., xn) = a1x1 + ... + anxn.

It follows that V ∗ is equal to the set of homogeneous polynomials of degree one on V.

Let U ⊂ V be a vector subspace of dimension k. Choose a basis u1, ..., uk ∈ U and extend to a basis u1, ..., uk, vk+1, ..., vn ∈ V and let x1, ..., xn ∈ V ∗ be the dual basis. Then U = Zaff(xk+1, ..., xn) ⊆ V and P(U) = Z(xk+1, ..., xn) ⊆ P(V).

If U1 and U2 are a pair of codimension one subspaces of V (for instance, 2-dimensional subspaces when dim(V) = 3), then there exist linear polynomials f1 and f2 such that P(U1) = Z(f1) and P(U2) = Z(f2), and thus

Z(f1f2) = P(U1) ∪ P(U2)

is a quadric in P(V).

6 The hyperbolic plane

The material in this section is not to be tested.

Recall that the Riemann sphere

CP 1 = C ∪ {∞}

is a 2-dimensional sphere. The group GL2(C) of 2 × 2 invertible matrices acts on CP 1 by projective transformations. Two matrices determine the same transformation if and only if they are scalar multiples of each other.

The real projective line

RP 1 = R ∪ {∞}

looks like a circle, and sits inside of CP 1 as the equator (or can be understood that way). Picture.

The subgroup GL2(R) ⊂ GL2(C) of real matrices sends RP 1 ⊆ CP 1 to itself (i.e. sends the equator to the equator) and acts by projective transformations. If we restrict to the subgroup GL+2(R) ⊆ GL2(R) of matrices with positive determinant, the corresponding transformations of CP 1 also send the northern hemisphere H to the northern hemisphere. In the spirit of Klein's Erlangen program, the hyperbolic plane is defined to be H and hyperbolic geometry is defined to be the study of all concepts left invariant by GL+2(R).

Hyperbolic geometry is distinguished from projective geometry in the following ways:

• There is a notion of distance and area

• There is a notion of angle for intersecting lines


• Distinct lines do not always intersect (but can intersect at most once).

Hyperbolic geometry is distinguished from Euclidean geometry in the following ways:

• If L is a line and P is a point not on L, then there are many lines passing through P that do not intersect L (multiple parallel lines).

• The sum of angles in a triangle is strictly less than π radians.

There are several different models of the hyperbolic plane.

1. The Poincare half plane model.

In this case we pass to affine C by choosing ∞ to lie on the equator; the rest of the equator then maps to R ⊂ C, and H maps to the upper half plane of C.

Figure 24: Poincare half plane model (blue lines and a tiling by 7-gons).

In this model, lines are represented by half circles centred on the real line, or by vertical lines (note that distinct lines intersect at most once). Hyperbolic angles are the same as Euclidean angles in this model.

2. The Poincare disk model.

We can also pass to affine C by choosing the south pole of CP 1 as ∞; the equator RP 1 is then sent to the unit circle and H to the unit disk in C.

Hyperbolic angles are the same as Euclidean angles in this model. Lines correspond to circular arcs that intersect the boundary at right angles, or straight lines that bisect the disk. You can see why the angles of a triangle add to less than π radians, because triangles are “bent in” in this geometry.

3. The Klein disk model.

In this version, H is represented by the unit disk in R2 and lines are represented by straight Euclidean lines.

In this model, the hyperbolic angles are not the same as Euclidean angles, but distance has a nice interpretation.


Figure 25: Poincare disk model (lines and a tiling by triangles with vertices on the boundary).

Figure 26: Klein disk model (lines and tiling by 7-gons and triangles).


Given points P, Q in the interior of the disk, they determine a line that intersects the unit circle in two points A, B. The hyperbolic distance between P and Q is equal to half the logarithm of the cross-ratio:

d(P,Q) = (1/2) log(R(A,B;P,Q)).
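This formula is easy to test numerically. The following Python sketch uses one common normalization of the cross-ratio, which may differ from the convention of §2.4.6 by a reordering of the four points; that only changes the sign of the logarithm, which the absolute value removes.

import numpy as np

def klein_distance(P, Q):
    # Hyperbolic distance in the Klein disk model via the cross-ratio of P, Q
    # with the two points A, B where the chord through P and Q meets the unit circle.
    P, Q = np.asarray(P, dtype=float), np.asarray(Q, dtype=float)
    d = Q - P
    # Solve |P + t*d|^2 = 1 for t to locate the boundary points of the chord.
    a, b, c = d @ d, 2 * (P @ d), P @ P - 1
    t1, t2 = sorted(np.roots([a, b, c]).real)
    A, B = P + t1 * d, P + t2 * d
    cross = (np.linalg.norm(A - Q) * np.linalg.norm(B - P)) / \
            (np.linalg.norm(A - P) * np.linalg.norm(B - Q))
    return 0.5 * abs(np.log(cross))

print(klein_distance([0.0, 0.0], [0.5, 0.0]))   # about 0.5493, i.e. arctanh(0.5)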

Below, we have an arrangement of lines depicted in the three different models:

Figure 27: Three models at once.

Recall that any three distinct points in RP 1 lie in general position.

7 Ideas for Projects

• The octonions and the Fano plane

• Projective spaces over finite fields and the Fano plane

• Cramer’s paradox

• Cubic curves

• Hopf Fibration

• Incidence geometries and non-Desarguesian planes


• Elliptic curves and group laws

• Symmetric bilinear forms in calculus and the Hessian

• The Grassmannian and the Plucker embedding for k > 2.

• Pascal’s Theorem, Brianchon’s Theorem and duality

• Rational quadratic forms
