Dual space
From Wikipedia, the free encyclopedia

Contents

1 Defective matrix
  1.1 Jordan block
  1.2 Example
  1.3 See also
  1.4 Notes
  1.5 References

2 Definite quadratic form
  2.1 Associated symmetric bilinear form
  2.2 Example
  2.3 See also
  2.4 References

3 Delta operator
  3.1 Examples
  3.2 Basic polynomials
  3.3 See also
  3.4 References

4 Dependence relation
  4.1 Examples
  4.2 See also

5 Determinant
  5.1 Definition
    5.1.1 2 × 2 matrices
    5.1.2 3 × 3 matrices
    5.1.3 n × n matrices
  5.2 Properties of the determinant
    5.2.1 Multiplicativity and matrix groups
    5.2.2 Laplace's formula and the adjugate matrix
    5.2.3 Sylvester's determinant theorem
  5.3 Properties of the determinant in relation to other notions
    5.3.1 Relation to eigenvalues and trace
    5.3.2 Cramer's rule
    5.3.3 Block matrices
    5.3.4 Derivative
  5.4 Abstract algebraic aspects
    5.4.1 Determinant of an endomorphism
    5.4.2 Exterior algebra
    5.4.3 Square matrices over commutative rings and abstract properties
  5.5 Generalizations and related notions
    5.5.1 Infinite matrices
    5.5.2 Related notions for non-commutative rings
    5.5.3 Further variants
  5.6 Calculation
    5.6.1 Decomposition methods
    5.6.2 Further methods
  5.7 History
  5.8 Applications
    5.8.1 Linear independence
    5.8.2 Orientation of a basis
    5.8.3 Volume and Jacobian determinant
    5.8.4 Vandermonde determinant (alternant)
    5.8.5 Circulants
  5.9 See also
  5.10 Notes
  5.11 References
  5.12 External links

6 Dieudonné determinant
  6.1 Properties
  6.2 Tannaka–Artin problem
  6.3 See also
  6.4 References

7 Dimension (vector space)
  7.1 Examples
  7.2 Facts
  7.3 Generalizations
    7.3.1 Trace
  7.4 See also
  7.5 Notes
  7.6 References
  7.7 External links

8 Direct sum of modules
  8.1 Construction for vector spaces and abelian groups
    8.1.1 Construction for two vector spaces
    8.1.2 Construction for two abelian groups
  8.2 Construction for an arbitrary family of modules
  8.3 Properties
  8.4 Internal direct sum
  8.5 Universal property
  8.6 Grothendieck group
  8.7 Direct sum of modules with additional structure
    8.7.1 Direct sum of algebras
    8.7.2 Direct sum of Banach spaces
    8.7.3 Direct sum of modules with bilinear forms
    8.7.4 Direct sum of Hilbert spaces
  8.8 See also
  8.9 References

9 Direction vector
  9.1 Parametric equation for a line
  9.2 Generative versus predicate forms
  9.3 Predicate form of 2D line equation
  9.4 See also
  9.5 External links

10 Dot product
  10.1 Definition
    10.1.1 Algebraic definition
    10.1.2 Geometric definition
    10.1.3 Scalar projection and first properties
    10.1.4 Equivalence of the definitions
  10.2 Properties
    10.2.1 Application to the cosine law
  10.3 Triple product expansion
  10.4 Physics
  10.5 Generalizations
    10.5.1 Complex vectors
    10.5.2 Inner product
    10.5.3 Functions
    10.5.4 Weight function
    10.5.5 Dyadics and matrices
    10.5.6 Tensors
  10.6 See also
  10.7 References
  10.8 External links

11 Dual basis
  11.1 A categorical and algebraic construction of the dual space
  11.2 Existence and uniqueness
  11.3 Introduction
  11.4 Examples
  11.5 See also
  11.6 References

12 Dual basis in a field extension

13 Dual norm
  13.1 Examples
  13.2 Notes
  13.3 References

14 Dual number
  14.1 Linear representation
  14.2 Geometry
    14.2.1 Cycles
  14.3 Algebraic properties
  14.4 Generalization
  14.5 Differentiation
  14.6 Superspace
  14.7 Division
  14.8 Projective line
  14.9 See also
  14.10 Notes and references
  14.11 Further reading

15 Dual space
  15.1 Algebraic dual space
    15.1.1 Finite-dimensional case
    15.1.2 Infinite-dimensional case
    15.1.3 Bilinear products and dual spaces
    15.1.4 Injection into the double-dual
    15.1.5 Transpose of a linear map
    15.1.6 Quotient spaces and annihilators
  15.2 Continuous dual space
    15.2.1 Examples
    15.2.2 Transpose of a continuous linear map
    15.2.3 Annihilators
    15.2.4 Further properties
    15.2.5 Topologies on the dual
    15.2.6 Double dual
  15.3 See also
  15.4 Notes
  15.5 References
  15.6 Text and image sources, contributors, and licenses
    15.6.1 Text
    15.6.2 Images
    15.6.3 Content license

Chapter 1

Defective matrix

In linear algebra, a defective matrix is a square matrix that does not have a complete basis of eigenvectors, and is therefore not diagonalizable. In particular, an n × n matrix is defective if and only if it does not have n linearly independent eigenvectors.[1] A complete basis is formed by augmenting the eigenvectors with generalized eigenvectors, which are necessary for solving defective systems of ordinary differential equations and other problems.

A defective matrix always has fewer than n distinct eigenvalues, since distinct eigenvalues always have linearly independent eigenvectors. In particular, a defective matrix has one or more eigenvalues λ with algebraic multiplicity m > 1 (that is, they are multiple roots of the characteristic polynomial), but fewer than m linearly independent eigenvectors associated with λ.[1] However, every eigenvalue with algebraic multiplicity m always has m linearly independent generalized eigenvectors.

A Hermitian matrix (or the special case of a real symmetric matrix) or a unitary matrix is never defective; more generally, a normal matrix (which includes Hermitian and unitary as special cases) is never defective.

1.1 Jordan block

Any Jordan block of size 2 × 2 or larger is defective. For example, the n × n Jordan block

J = \begin{bmatrix} \lambda & 1 & & \\ & \lambda & \ddots & \\ & & \ddots & 1 \\ & & & \lambda \end{bmatrix}

has an eigenvalue λ with algebraic multiplicity n, but only one distinct eigenvector,

v = \begin{bmatrix} 1 \\ 0 \\ \vdots \\ 0 \end{bmatrix}.

1.2 Example

A simple example of a defective matrix is

\begin{bmatrix} 3 & 1 \\ 0 & 3 \end{bmatrix},

which has a double eigenvalue of 3 but only one distinct eigenvector,

\begin{bmatrix} 1 \\ 0 \end{bmatrix}

(and constant multiples thereof).
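This example is easy to check numerically. The following is a minimal sketch using NumPy (the helper name and tolerance are illustrative, not part of the article): it compares, for each distinct eigenvalue, the algebraic multiplicity with the dimension of the eigenspace.

```python
import numpy as np

def is_defective(A, tol=1e-10):
    """Heuristically test whether a square matrix is defective: for each
    distinct eigenvalue, compare its algebraic multiplicity with the
    dimension of its eigenspace (the geometric multiplicity)."""
    n = A.shape[0]
    eigvals = np.linalg.eigvals(A)
    for lam in np.unique(np.round(eigvals, 8)):
        algebraic = np.sum(np.isclose(eigvals, lam, atol=tol))
        # geometric multiplicity = dim ker(A - lam I) = n - rank(A - lam I)
        geometric = n - np.linalg.matrix_rank(A - lam * np.eye(n), tol=tol)
        if geometric < algebraic:
            return True
    return False

A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
print(is_defective(A))                    # True: eigenvalue 3 has multiplicity 2, 1-dim eigenspace
print(is_defective(np.diag([3.0, 3.0])))  # False: 3I is diagonalizable
```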

1.3 See also

  • Jordan normal form

1.4 Notes

[1] Golub & Van Loan (1996, p. 316)

1.5 References

  • Golub, Gene H.; Van Loan, Charles F. (1996), Matrix Computations (3rd ed.), Baltimore: Johns Hopkins University Press, ISBN 0-8018-5414-8

  • Strang, Gilbert (1988). Linear Algebra and Its Applications (3rd ed.). San Diego: Harcourt. ISBN 970-686-609-4.

Chapter 2

Definite quadratic form

In mathematics, a definite quadratic form is a quadratic form over some real vector space V that has the same sign (always positive or always negative) for every nonzero vector of V. According to that sign, the quadratic form is called positive definite or negative definite.

A semidefinite (or semi-definite) quadratic form is defined in the same way, except that "positive" and "negative" are replaced by "not negative" and "not positive", respectively. An indefinite quadratic form is one that takes on both positive and negative values.

More generally, the definition applies to a vector space over an ordered field.[1]

2.1 Associated symmetric bilinear form

Quadratic forms correspond one-to-one to symmetric bilinear forms over the same space.[2] A symmetric bilinear form is also described as definite, semidefinite, etc. according to its associated quadratic form. A quadratic form Q and its associated symmetric bilinear form B are related by the following equations:

Q(x) = B(x, x)

B(x, y) = B(y, x) = \frac{1}{2}\left(Q(x + y) - Q(x) - Q(y)\right)

2.2 Example

As an example, let V = ℝ², and consider the quadratic form

Q(x) = c_1 x_1^2 + c_2 x_2^2

where x = (x₁, x₂) and c₁ and c₂ are constants. If c₁ > 0 and c₂ > 0, the quadratic form Q is positive definite. If one of the constants is positive and the other is zero, then Q is positive semidefinite. If c₁ > 0 and c₂ < 0, then Q is indefinite.
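Since Q(x) = xᵀMx for the symmetric matrix M of the associated bilinear form, definiteness can be read off from the eigenvalues of M. A minimal NumPy sketch (the function name and tolerance are illustrative assumptions):

```python
import numpy as np

def classify_quadratic_form(M, tol=1e-12):
    """Classify Q(x) = x^T M x from the eigenvalues of the symmetric
    matrix M (all real, by the spectral theorem)."""
    w = np.linalg.eigvalsh(M)  # eigenvalues of a symmetric matrix
    if np.all(w > tol):
        return "positive definite"
    if np.all(w < -tol):
        return "negative definite"
    if np.all(w >= -tol):
        return "positive semidefinite"
    if np.all(w <= tol):
        return "negative semidefinite"
    return "indefinite"

# Q(x) = c1*x1^2 + c2*x2^2 corresponds to the diagonal matrix diag(c1, c2)
print(classify_quadratic_form(np.diag([2.0, 3.0])))   # positive definite
print(classify_quadratic_form(np.diag([2.0, 0.0])))   # positive semidefinite
print(classify_quadratic_form(np.diag([2.0, -3.0])))  # indefinite
```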

2.3 See also

  • Anisotropic quadratic form
  • Positive definite function
  • Positive definite matrix

2.4 References

[1] Milnor & Husemoller (1973) p. 61

[2] This is true only over a field of characteristic different from 2, but here we consider only ordered fields, which necessarily have characteristic 0.

  • Nathanael Leedom Ackerman (2006) Lecture notes Math 371, Positive definite bilinear form is definition 0.5.0.7, weblink from University of California, Berkeley.

  • Kitaoka, Yoshiyuki (1993). Arithmetic of quadratic forms. Cambridge Tracts in Mathematics 106. Cambridge University Press. ISBN 0-521-40475-4. Zbl 0785.11021.

  • Lang, Serge (2004), Algebra, Graduate Texts in Mathematics 211 (Corrected fourth printing, revised third ed.), New York: Springer-Verlag, p. 578, ISBN 978-0-387-95385-4

  • Milnor, J.; Husemoller, D. (1973). Symmetric Bilinear Forms. Ergebnisse der Mathematik und ihrer Grenzgebiete 73. Springer-Verlag. ISBN 3-540-06009-X. Zbl 0292.10016.

Chapter 3

Delta operator

In mathematics, a delta operator is a shift-equivariant linear operator Q : K[x] → K[x] on the vector space of polynomials in a variable x over a field K that reduces degrees by one.

To say that Q is shift-equivariant means that if g(x) = f(x + a), then

(Qg)(x) = (Qf)(x + a).

In other words, if f is a "shift" of g, then Qf is also a shift of Qg, and has the same "shifting vector" a.

To say that an operator reduces degree by one means that if f is a polynomial of degree n, then Qf is either a polynomial of degree n − 1, or, in case n = 0, Qf is 0.

Sometimes a delta operator is defined to be a shift-equivariant linear transformation on polynomials in x that maps x to a nonzero constant. Seemingly weaker than the definition given above, this latter characterization can be shown to be equivalent to the stated definition, since shift-equivariance is a fairly strong condition.

3.1 Examples

  • The forward difference operator

    (\Delta f)(x) = f(x + 1) - f(x)

    is a delta operator.

  • Differentiation with respect to x, written as D, is also a delta operator.

  • Any operator of the form

    \sum_{k=1}^{\infty} c_k D^k

    (where D^n(f) = f^{(n)} is the nth derivative) with c₁ ≠ 0 is a delta operator. It can be shown that all delta operators can be written in this form. For example, the difference operator given above can be expanded as

    \Delta = e^D - 1 = \sum_{k=1}^{\infty} \frac{D^k}{k!}.

  • The generalized derivative of time scale calculus, which unifies the forward difference operator with the derivative of standard calculus, is a delta operator.

  • In computer science and cybernetics, the term "discrete-time delta operator" (δ) is generally taken to mean a difference operator

    (\delta f)(x) = \frac{f(x + \Delta t) - f(x)}{\Delta t},

    the Euler approximation of the usual derivative with a discrete sample time Δt. The delta formulation obtains a significant number of numerical advantages compared to the shift operator at fast sampling.
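One can check symbolically that the forward difference operator lowers polynomial degree by exactly one. A small sketch using SymPy (the helper name is illustrative):

```python
from sympy import symbols, expand, degree

x = symbols('x')

def forward_difference(f):
    """Forward difference operator: (delta f)(x) = f(x+1) - f(x)."""
    return expand(f.subs(x, x + 1) - f)

p = x**3 - 2*x + 5
dp = forward_difference(p)
print(dp)                           # 3*x**2 + 3*x - 1
print(degree(p, x), degree(dp, x))  # 3 2: the degree drops by one
```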

3.2 Basic polynomials

Every delta operator Q has a unique sequence of "basic polynomials", a polynomial sequence defined by three conditions:

p_0(x) = 1, \qquad p_n(0) = 0, \qquad (Q p_n)(x) = n\,p_{n-1}(x) \quad \forall n \in \mathbb{N}.

Such a sequence of basic polynomials is always of binomial type, and it can be shown that no other sequences of binomial type exist. If the first two conditions above are dropped, then the third condition says this polynomial sequence is a Sheffer sequence, a more general concept.
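For the forward difference operator Δ, the basic polynomials are the falling factorials pₙ(x) = x(x − 1)⋯(x − n + 1). A short SymPy check of the defining conditions (a sketch under that assumption; sympy.ff is the falling factorial):

```python
from sympy import symbols, ff, expand_func, simplify

x = symbols('x')

def forward_difference(f):
    """(delta f)(x) = f(x+1) - f(x)"""
    return f.subs(x, x + 1) - f

for n in range(1, 6):
    # p_n(x) = x(x-1)...(x-n+1), expanded into an ordinary polynomial
    p_n = expand_func(ff(x, n))
    p_prev = expand_func(ff(x, n - 1))
    assert p_n.subs(x, 0) == 0                                  # p_n(0) = 0
    assert simplify(forward_difference(p_n) - n * p_prev) == 0  # delta p_n = n p_{n-1}
print("falling factorials satisfy the basic-polynomial conditions for delta")
```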

3.3 See also

  • Pincherle derivative
  • Shift operator
  • Umbral calculus

3.4 References

  • Nikol'skii, Nikolai Kapitonovich (1986), Treatise on the shift operator: spectral function theory, Berlin, New York: Springer-Verlag, ISBN 978-0-387-15021-5

Chapter 4

Dependence relation

In mathematics, a dependence relation is a binary relation which generalizes the relation of linear dependence.

Let X be a set. A (binary) relation ◁ between an element a of X and a subset S of X is called a dependence relation, written a ◁ S, if it satisfies the following properties:

  • if a ∈ S, then a ◁ S;
  • if a ◁ S, then there is a finite subset S₀ of S such that a ◁ S₀;
  • if T is a subset of X such that b ∈ S implies b ◁ T, then a ◁ S implies a ◁ T;
  • if a ◁ S but a ⋪ S ∖ {b} for some b ∈ S, then b ◁ (S ∖ {b}) ∪ {a}.

Given a dependence relation ◁ on X, a subset S of X is said to be independent if a ⋪ S ∖ {a} for all a ∈ S. If S ⊆ T, then S is said to span T if t ◁ S for every t ∈ T. S is said to be a basis of X if S is independent and S spans X.

Remark. If X is a non-empty set with a dependence relation ◁, then X always has a basis with respect to ◁. Furthermore, any two bases of X have the same cardinality.

4.1 Examples

  • Let V be a vector space over a field F. The relation ◁, defined by υ ◁ S if υ is in the subspace spanned by S, is a dependence relation. This is equivalent to the definition of linear dependence.

  • Let K be a field extension of F. Define ◁ by α ◁ S if α is algebraic over F(S). Then ◁ is a dependence relation. This is equivalent to the definition of algebraic dependence.
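In the vector-space example, υ ◁ S can be tested numerically: υ lies in the span of S exactly when appending υ does not increase the rank. A minimal NumPy sketch (the function name is an illustrative assumption):

```python
import numpy as np

def depends_on(v, S):
    """Return True if vector v is in the subspace spanned by the rows of S,
    i.e. v is dependent on S in the linear-dependence relation."""
    S = np.atleast_2d(S)
    return np.linalg.matrix_rank(np.vstack([S, v])) == np.linalg.matrix_rank(S)

S = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])
print(depends_on(np.array([2.0, -3.0, 0.0]), S))  # True: in span{e1, e2}
print(depends_on(np.array([0.0, 0.0, 1.0]), S))   # False
```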

4.2 See also

  • matroid

This article incorporates material from Dependence relation on PlanetMath, which is licensed under the Creative Commons Attribution/Share-Alike License.


Chapter 5

Determinant

This article is about determinants in mathematics. For determinants in epidemiology, see risk factor.

In linear algebra, the determinant is a useful value that can be computed from the elements of a square matrix. The determinant of a matrix A is denoted det(A), det A, or |A|.

In the case of a 2 × 2 matrix, the specific formula for the determinant is simply the upper left element times the lower right element, minus the product of the other two elements. Similarly, suppose we have a 3 × 3 matrix A, and we want the specific formula for its determinant |A|:

|A| = \begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.

Each determinant of a 2 × 2 matrix in this equation is called a "minor" of the matrix A. The same sort of procedure can be used to find the determinant of a 4 × 4 matrix, the determinant of a 5 × 5 matrix, and so forth.

Determinants occur throughout mathematics. For example, a matrix is often used to represent the coefficients in a system of linear equations, and the determinant is used to solve those equations. The use of determinants in calculus includes the Jacobian determinant in the change of variables rule for integrals of functions of several variables. Determinants are also used to define the characteristic polynomial of a matrix, which is essential for eigenvalue problems in linear algebra. Sometimes, determinants are used merely as a compact notation for expressions that would otherwise be unwieldy to write down.

It can be proven that any matrix has a unique inverse if its determinant is nonzero. Various other theorems can be proved as well, including that the determinant of a product of matrices is always equal to the product of determinants; and, the determinant of a Hermitian matrix is always real.

5.1 Definition

There are various ways to define the determinant of a square matrix A, i.e. one with the same number of rows and columns. Perhaps the simplest way to express the determinant is by considering the elements in the top row and the respective minors; starting at the left, multiply the element by the minor, then subtract the product of the next element and its minor, and alternate adding and subtracting such products until all elements in the top row have been exhausted. For example, here is the result for a 4 × 4 matrix:

\begin{vmatrix} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{vmatrix} = a\begin{vmatrix} f & g & h \\ j & k & l \\ n & o & p \end{vmatrix} - b\begin{vmatrix} e & g & h \\ i & k & l \\ m & o & p \end{vmatrix} + c\begin{vmatrix} e & f & h \\ i & j & l \\ m & n & p \end{vmatrix} - d\begin{vmatrix} e & f & g \\ i & j & k \\ m & n & o \end{vmatrix}.

Another way to define the determinant is expressed in terms of the columns of the matrix. If we write an n × n matrix A in terms of its column vectors


A = [a_1, a_2, \ldots, a_n],

where the a_j are vectors of size n, then the determinant of A is defined so that

\det[a_1, \ldots, b a_j + c v, \ldots, a_n] = b \det(A) + c \det[a_1, \ldots, v, \ldots, a_n]

\det[a_1, \ldots, a_j, a_{j+1}, \ldots, a_n] = -\det[a_1, \ldots, a_{j+1}, a_j, \ldots, a_n]

\det(I) = 1

where b and c are scalars, v is any vector of size n and I is the identity matrix of size n. These equations say that the determinant is a linear function of each column, that interchanging adjacent columns reverses the sign of the determinant, and that the determinant of the identity matrix is 1. These properties mean that the determinant is an alternating multilinear function of the columns that maps the identity matrix to the underlying unit scalar. These suffice to uniquely calculate the determinant of any square matrix. Provided the underlying scalars form a field (more generally, a commutative ring with unity), the definition below shows that such a function exists, and it can be shown to be unique.[1]

Equivalently, the determinant can be expressed as a sum of products of entries of the matrix where each product has n terms and the coefficient of each product is 1 or −1 or 0 according to a given rule: it is a polynomial expression of the matrix entries. This expression grows rapidly with the size of the matrix (an n × n matrix contributes n! terms), so it will first be given explicitly for the case of 2 × 2 matrices and 3 × 3 matrices, followed by the rule for arbitrary size matrices, which subsumes these two cases.

Assume A is a square matrix with n rows and n columns, so that it can be written as

A = \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{bmatrix}.

The entries can be numbers or expressions (as happens when the determinant is used to define a characteristic polynomial); the definition of the determinant depends only on the fact that they can be added and multiplied together in a commutative manner.

The determinant of A is denoted as det(A), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:

\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{vmatrix}.

5.1.1 2 × 2 matrices

The determinant of a 2 × 2 matrix is defined by

\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.

If the matrix entries are real numbers, the matrix A can be used to represent two linear maps: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at (0, 0), (a, b), (a + c, b + d), and (c, d), as shown in the accompanying diagram.

[Figure: The area of the parallelogram is the absolute value of the determinant of the matrix formed by the vectors representing the parallelogram's sides.]

The absolute value of ad − bc is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by A. (The parallelogram formed by the columns of A is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)

The absolute value of the determinant together with the sign becomes the oriented area of the parallelogram. The oriented area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix).

Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by A. When the determinant is equal to one, the linear mapping defined by the matrix is equi-areal and orientation-preserving.

The object known as the bivector is related to these ideas. In 2D, it can be interpreted as an oriented plane segment formed by imagining two vectors each with origin (0, 0), and coordinates (a, b) and (c, d). The bivector magnitude (denoted (a, b) ∧ (c, d)) is the signed area, which is also the determinant ad − bc.[2]
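A tiny numerical illustration of the oriented area (a sketch; NumPy is assumed):

```python
import numpy as np

def oriented_area(u, v):
    """Signed area of the parallelogram spanned by 2D vectors u and v:
    the 2x2 determinant, negative for a clockwise turn from u to v."""
    return np.linalg.det(np.column_stack([u, v]))

print(oriented_area([1, 0], [0, 1]))  # +1.0: counterclockwise (identity matrix)
print(oriented_area([0, 1], [1, 0]))  # -1.0: same area, opposite orientation
print(oriented_area([2, 0], [1, 3]))  # +6.0: parallelogram of area 6
```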

5.1.2 3 × 3 matrices

[Figure: The volume of this parallelepiped is the absolute value of the determinant of the matrix formed by the rows constructed from the vectors r1, r2, and r3.]

The determinant of a 3 × 3 matrix is defined by

\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = a\begin{vmatrix} e & f \\ h & i \end{vmatrix} - b\begin{vmatrix} d & f \\ g & i \end{vmatrix} + c\begin{vmatrix} d & e \\ g & h \end{vmatrix} = a(ei - fh) - b(di - fg) + c(dh - eg) = aei + bfg + cdh - ceg - bdi - afh.

The rule of Sarrus is a mnemonic for the 3 × 3 matrix determinant: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration. This scheme for calculating the determinant of a 3 × 3 matrix does not carry over into higher dimensions.

[Figure: Sarrus' rule: The determinant of the three columns on the left is the sum of the products along the solid diagonals minus the sum of the products along the dashed diagonals.]

5.1.3 n × n matrices

The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula.

The Leibniz formula for the determinant of an n × n matrix A is

\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma_i}.

Here the sum is computed over all permutations σ of the set {1, 2, ..., n}. A permutation is a function that reorders this set of integers. The value in the ith position after the reordering σ is denoted σi. For example, for n = 3, the original sequence 1, 2, 3 might be reordered to σ = [2, 3, 1], with σ1 = 2, σ2 = 3, and σ3 = 1. The set of all such permutations (also known as the symmetric group on n elements) is denoted Sn. For each permutation σ, sgn(σ) denotes the signature of σ, a value that is +1 whenever the reordering given by σ can be achieved by successively interchanging two entries an even number of times, and −1 whenever it can be achieved by an odd number of such interchanges.

In any of the n! summands, the term

\prod_{i=1}^{n} a_{i,\sigma_i}

is notation for the product of the entries at positions (i, σi), where i ranges from 1 to n:

a_{1,\sigma_1} \cdot a_{2,\sigma_2} \cdots a_{n,\sigma_n}.

For example, the determinant of a 3 × 3 matrix A (n = 3) is

\begin{aligned}
\sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma_i}
&= \operatorname{sgn}([1,2,3]) \prod_{i=1}^{n} a_{i,[1,2,3]_i} + \operatorname{sgn}([1,3,2]) \prod_{i=1}^{n} a_{i,[1,3,2]_i} + \operatorname{sgn}([2,1,3]) \prod_{i=1}^{n} a_{i,[2,1,3]_i} \\
&\quad + \operatorname{sgn}([2,3,1]) \prod_{i=1}^{n} a_{i,[2,3,1]_i} + \operatorname{sgn}([3,1,2]) \prod_{i=1}^{n} a_{i,[3,1,2]_i} + \operatorname{sgn}([3,2,1]) \prod_{i=1}^{n} a_{i,[3,2,1]_i} \\
&= \prod_{i=1}^{n} a_{i,[1,2,3]_i} - \prod_{i=1}^{n} a_{i,[1,3,2]_i} - \prod_{i=1}^{n} a_{i,[2,1,3]_i} + \prod_{i=1}^{n} a_{i,[2,3,1]_i} + \prod_{i=1}^{n} a_{i,[3,1,2]_i} - \prod_{i=1}^{n} a_{i,[3,2,1]_i} \\
&= a_{1,1}a_{2,2}a_{3,3} - a_{1,1}a_{2,3}a_{3,2} - a_{1,2}a_{2,1}a_{3,3} + a_{1,2}a_{2,3}a_{3,1} + a_{1,3}a_{2,1}a_{3,2} - a_{1,3}a_{2,2}a_{3,1}.
\end{aligned}
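The Leibniz formula translates directly into code, albeit at n! cost, so this is a minimal illustrative sketch rather than a practical method:

```python
from itertools import permutations

def sign(sigma):
    """Signature of a permutation given as a tuple of 0-based indices:
    +1 for an even number of inversions, -1 for an odd number."""
    inversions = sum(1 for i in range(len(sigma))
                       for j in range(i + 1, len(sigma)) if sigma[i] > sigma[j])
    return -1 if inversions % 2 else 1

def det_leibniz(A):
    """Determinant via the Leibniz formula: sum over all n! permutations."""
    n = len(A)
    total = 0
    for sigma in permutations(range(n)):
        prod = sign(sigma)
        for i in range(n):
            prod *= A[i][sigma[i]]
        total += prod
    return total

print(det_leibniz([[1, 2], [3, 4]]))                       # -2
print(det_leibniz([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # 18
```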

Levi-Civita symbol

It is sometimes useful to extend the Leibniz formula to a summation in which not only permutations, but all sequences of n indices in the range 1, ..., n occur, ensuring that the contribution of a sequence will be zero unless it denotes a permutation. Thus the totally antisymmetric Levi-Civita symbol \varepsilon_{i_1,\ldots,i_n} extends the signature of a permutation, by setting \varepsilon_{\sigma(1),\ldots,\sigma(n)} = \operatorname{sgn}(\sigma) for any permutation σ of n, and \varepsilon_{i_1,\ldots,i_n} = 0 when no permutation σ exists such that \sigma(j) = i_j for j = 1, \ldots, n (or equivalently, whenever some pair of indices are equal). The determinant for an n × n matrix can then be expressed using an n-fold summation as

\det(A) = \sum_{i_1,i_2,\ldots,i_n=1}^{n} \varepsilon_{i_1 \cdots i_n}\, a_{1,i_1} \cdots a_{n,i_n},

or using two epsilon symbols as

\det(A) = \frac{1}{n!} \sum \varepsilon_{i_1 \cdots i_n} \varepsilon_{j_1 \cdots j_n}\, a_{i_1 j_1} \cdots a_{i_n j_n},

where now each i_r and each j_r should be summed over 1, ..., n.

5.2 Properties of the determinant

The determinant has many properties. Some basic properties of determinants are:

1. det(I_n) = 1, where I_n is the n × n identity matrix.

2. det(Aᵀ) = det(A).

3. det(A⁻¹) = 1/det(A) = det(A)⁻¹.

4. For square matrices A and B of equal size, det(AB) = det(A) det(B).

5. det(cA) = cⁿ det(A) for an n × n matrix.

6. If A is a triangular matrix, i.e. a_{i,j} = 0 whenever i > j or, alternatively, whenever i < j, then its determinant equals the product of the diagonal entries:

\det(A) = a_{1,1} a_{2,2} \cdots a_{n,n} = \prod_{i=1}^{n} a_{i,i}.

This can be deduced from some of the properties below, but it follows most easily directly from the Leibniz formula (or from the Laplace expansion), in which the identity permutation is the only one that gives a non-zero contribution.

A number of additional properties relate to the effects on the determinant of changing particular rows or columns:

7. Viewing an n × n matrix as being composed of n columns, the determinant is an n-linear function. This means that if one column of a matrix A is written as a sum v + w of two column vectors, and all other columns are left unchanged, then the determinant of A is the sum of the determinants of the matrices obtained from A by replacing the column by v and then by w (and a similar relation holds when writing a column as a scalar multiple of a column vector).

8. If in a matrix, any row or column is 0, then the determinant of that particular matrix is 0.

9. This n-linear function is an alternating form. This means that whenever two columns of a matrix are identical, or more generally some column can be expressed as a linear combination of the other columns (i.e. the columns of the matrix form a linearly dependent set), its determinant is 0.

Properties 1, 7 and 9, which all follow from the Leibniz formula, completely characterize the determinant; in other words the determinant is the unique function from n × n matrices to scalars that is n-linear, alternating in the columns, and takes the value 1 for the identity matrix (this characterization holds even if scalars are taken in any given commutative ring). To see this it suffices to expand the determinant by multi-linearity in the columns into a (huge) linear combination of determinants of matrices in which each column is a standard basis vector. These determinants are either 0 (by property 9) or else ±1 (by properties 1 and 12 below), so the linear combination gives the expression above in terms of the Levi-Civita symbol. While less technical in appearance, this characterization cannot entirely replace the Leibniz formula in defining the determinant, since without it the existence of an appropriate function is not clear. For matrices over non-commutative rings, properties 7 and 9 are incompatible for n ≥ 2,[3] so there is no good definition of the determinant in this setting.

Property 2 above implies that properties for columns have their counterparts in terms of rows:

10. Viewing an n × n matrix as being composed of n rows, the determinant is an n-linear function.

11. This n-linear function is an alternating form: whenever two rows of a matrix are identical, its determinant is 0.

12. Interchanging any pair of columns or rows of a matrix multiplies its determinant by −1. This follows from properties 7 and 9 (it is a general property of multilinear alternating maps). More generally, any permutation of the rows or columns multiplies the determinant by the sign of the permutation. By permutation, it is meant viewing each row as a vector R_i (equivalently each column as C_i) and reordering the rows (or columns) by interchange of R_j and R_k (or C_j and C_k), where j, k are two indices chosen from 1 to n for an n × n square matrix.

13. Adding a scalar multiple of one column to another column does not change the value of the determinant. This is a consequence of properties 7 and 9: by property 7 the determinant changes by a multiple of the determinant of a matrix with two equal columns, which determinant is 0 by property 9. Similarly, adding a scalar multiple of one row to another row leaves the determinant unchanged.

Property 5 says that the determinant on n × n matrices is homogeneous of degree n. These properties can be used to facilitate the computation of determinants by simplifying the matrix to the point where the determinant can be determined immediately. Specifically, for matrices with coefficients in a field, properties 12 and 13 can be used to transform any matrix into a triangular matrix, whose determinant is given by property 6; this is essentially the method of Gaussian elimination.

For example, the determinant of

A = \begin{bmatrix} -2 & 2 & -3 \\ -1 & 1 & 3 \\ 2 & 0 & -1 \end{bmatrix}

can be computed using the following matrices:

B = \begin{bmatrix} -2 & 2 & -3 \\ 0 & 0 & 4.5 \\ 2 & 0 & -1 \end{bmatrix}, \quad
C = \begin{bmatrix} -2 & 2 & -3 \\ 0 & 0 & 4.5 \\ 0 & 2 & -4 \end{bmatrix}, \quad
D = \begin{bmatrix} -2 & 2 & -3 \\ 0 & 2 & -4 \\ 0 & 0 & 4.5 \end{bmatrix}.

Here, B is obtained from A by adding −1/2 × the first row to the second, so that det(A) = det(B). C is obtained from B by adding the first to the third row, so that det(C) = det(B). Finally, D is obtained from C by exchanging the second and third row, so that det(D) = −det(C). The determinant of the (upper) triangular matrix D is the product of its entries on the main diagonal: (−2) · 2 · 4.5 = −18. Therefore, det(A) = −det(D) = +18.
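The same row-reduction strategy is easy to implement. A minimal sketch (with partial pivoting, so the pivots it chooses may differ from the worked example above, though the result is the same): row swaps flip the sign, adding multiples of rows changes nothing, and the triangular result is multiplied out.

```python
def det_gauss(A):
    """Determinant by Gaussian elimination: reduce to upper triangular
    form, tracking sign changes from row swaps (properties 12, 13, 6)."""
    A = [row[:] for row in A]  # work on a copy
    n, sign = len(A), 1
    for k in range(n):
        # partial pivoting: swap in the largest pivot (flips the sign)
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        if A[p][k] == 0:
            return 0.0
        if p != k:
            A[k], A[p] = A[p], A[k]
            sign = -sign
        # eliminate below the pivot (leaves the determinant unchanged)
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
    prod = sign
    for k in range(n):
        prod *= A[k][k]
    return prod

print(det_gauss([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # 18.0
```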

5.2.1 Multiplicativity and matrix groups

The determinant of a matrix product of square matrices equals the product of their determinants:

\det(AB) = \det(A)\det(B).

Thus the determinant is a multiplicative map. This property is a consequence of the characterization given above of the determinant as the unique n-linear alternating function of the columns with value 1 on the identity matrix, since the function M_n(K) → K that maps M ↦ det(AM) can easily be seen to be n-linear and alternating in the columns of M, and takes the value det(A) at the identity. The formula can be generalized to (square) products of rectangular matrices, giving the Cauchy–Binet formula, which also provides an independent proof of the multiplicative property.

The determinant det(A) of a matrix A is non-zero if and only if A is invertible or, yet another equivalent statement, if its rank equals the size of the matrix. If so, the determinant of the inverse matrix is given by

\det(A^{-1}) = \frac{1}{\det(A)}.

In particular, products and inverses of matrices with determinant one still have this property. Thus, the set of such matrices (of fixed size n) forms a group known as the special linear group. More generally, the word "special" indicates the subgroup of another matrix group of matrices of determinant one. Examples include the special orthogonal group (which if n is 2 or 3 consists of all rotation matrices), and the special unitary group.

5.2.2 Laplace's formula and the adjugate matrix

Laplace's formula expresses the determinant of a matrix in terms of its minors. The minor M_{i,j} is defined to be the determinant of the (n−1) × (n−1)-matrix that results from A by removing the ith row and the jth column. The expression (−1)^{i+j} M_{i,j} is known as a cofactor. The determinant of A is given by

\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j} = \sum_{i=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j}.

Calculating det(A) by means of that formula is referred to as expanding the determinant along a row or column. For the example 3 × 3 matrix

A = \begin{bmatrix} -2 & 2 & -3 \\ -1 & 1 & 3 \\ 2 & 0 & -1 \end{bmatrix},

Laplace expansion along the second column (j = 2, the sum runs over i) yields

\det(A) = -2 \cdot \begin{vmatrix} -1 & 3 \\ 2 & -1 \end{vmatrix} + 1 \cdot \begin{vmatrix} -2 & -3 \\ 2 & -1 \end{vmatrix} - 0 \cdot \begin{vmatrix} -2 & -3 \\ -1 & 3 \end{vmatrix} = -2(1 - 6) + (2 + 6) = 18.

However, Laplace expansion is efficient for small matrices only.

The adjugate matrix adj(A) is the transpose of the matrix consisting of the cofactors, i.e.,

(\operatorname{adj}(A))_{i,j} = (-1)^{i+j} M_{j,i}.
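A direct recursive implementation of Laplace expansion along the first row (a sketch; exponential cost, so only suitable for small matrices, as the text notes):

```python
def det_laplace(A):
    """Determinant by Laplace (cofactor) expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # minor M_{1,j}: delete row 0 and column j
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

print(det_laplace([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]))  # 18
```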

5.2.3 Sylvester's determinant theorem

Sylvester's determinant theorem states that for A, an m × n matrix, and B, an n × m matrix (so that A and B have dimensions allowing them to be multiplied in either order):

\det(I_m + AB) = \det(I_n + BA),

where I_m and I_n are the m × m and n × n identity matrices, respectively.

From this general result several consequences follow.

(a) For the case of column vector c and row vector r, each with m components, the formula allows quick calculation of the determinant of a matrix that differs from the identity matrix by a matrix of rank 1:

\det(I_m + cr) = 1 + rc.

(b) More generally,[4] for any invertible m × m matrix X,

\det(X + AB) = \det(X)\det(I_n + BX^{-1}A),

\det(X + cr) = \det(X)(1 + rX^{-1}c) = \det(X) + r\,\operatorname{adj}(X)\,c.
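A quick numerical sanity check of the theorem (a sketch using NumPy with arbitrary rectangular factors):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 2
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = np.linalg.det(np.eye(m) + A @ B)  # an m x m determinant
rhs = np.linalg.det(np.eye(n) + B @ A)  # an n x n determinant, much smaller
print(np.isclose(lhs, rhs))  # True: det(I_m + AB) = det(I_n + BA)
```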

5.3 Properties of the determinant in relation to other notions

5.3.1 Relation to eigenvalues and trace

Main article: Eigenvalues and eigenvectors

Let A be an arbitrary n × n matrix of complex numbers with eigenvalues λ₁, λ₂, ..., λₙ. (Here it is understood that an eigenvalue with algebraic multiplicity μ occurs μ times in this list.) Then the determinant of A is the product of all eigenvalues:

\det(A) = \prod_{i=1}^{n} \lambda_i = \lambda_1 \lambda_2 \cdots \lambda_n.

The product of all non-zero eigenvalues is referred to as the pseudo-determinant.

Conversely, determinants can be used to find the eigenvalues of the matrix A: they are the solutions of the characteristic equation

\det(A - xI) = 0,

where I is the identity matrix of the same dimension as A.

A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is equivalent to the determinants of the submatrices

A_k := \begin{bmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,k} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,k} \\ \vdots & \vdots & \ddots & \vdots \\ a_{k,1} & a_{k,2} & \cdots & a_{k,k} \end{bmatrix}

being positive, for all k between 1 and n.

The trace tr(A) is by definition the sum of the diagonal entries of A and also equals the sum of the eigenvalues. Thus, for complex matrices A,

\det(\exp(A)) = \exp(\operatorname{tr}(A)),

or, for real matrices A,

\operatorname{tr}(A) = \log(\det(\exp(A))).

Here exp(A) denotes the matrix exponential of A, because every eigenvalue λ of A corresponds to the eigenvalue exp(λ) of exp(A). In particular, given any logarithm of A, that is, any matrix L satisfying

\exp(L) = A,

the determinant of A is given by

\det(A) = \exp(\operatorname{tr}(L)).

For example, for n = 2, n = 3, and n = 4, respectively,

\det(A) = \frac{(\operatorname{tr} A)^2 - \operatorname{tr}(A^2)}{2},

\det(A) = \frac{(\operatorname{tr} A)^3 - 3 \operatorname{tr} A \operatorname{tr}(A^2) + 2 \operatorname{tr}(A^3)}{6},

\det(A) = \frac{(\operatorname{tr} A)^4 - 6 \operatorname{tr}(A^2)(\operatorname{tr} A)^2 + 3(\operatorname{tr}(A^2))^2 + 8 \operatorname{tr}(A^3) \operatorname{tr} A - 6 \operatorname{tr}(A^4)}{24};

cf. the Cayley–Hamilton theorem. Such expressions are deducible from Newton's identities.
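These identities are easy to verify numerically; a sketch checking the n = 3 case on a random matrix:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))

t1 = np.trace(A)
t2 = np.trace(A @ A)
t3 = np.trace(A @ A @ A)
det_from_traces = (t1**3 - 3 * t1 * t2 + 2 * t3) / 6

print(np.isclose(det_from_traces, np.linalg.det(A)))  # True
```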

In the general case,[5]

\det(A) = \sum_{k_1,k_2,\ldots,k_n} \prod_{l=1}^{n} \frac{(-1)^{k_l+1}}{l^{k_l}\,k_l!} \operatorname{tr}(A^l)^{k_l},

where the sum is taken over the set of all integers k_l ≥ 0 satisfying the equation

\sum_{l=1}^{n} l\,k_l = n.

This formula can also be used to find the determinant of a matrix A^I_J with multidimensional indices I = (i_1, i_2, \ldots, i_r) and J = (j_1, j_2, \ldots, j_r). The product and trace of such matrices are defined in a natural way as

(AB)^I_J = \sum_K A^I_K B^K_J, \qquad \operatorname{tr}(A) = \sum_I A^I_I.

An arbitrary dimension n identity can be obtained from the Mercator series expansion of the logarithm when A ∈ B(0, 1):

\det(I + A) = \sum_{k=0}^{\infty} \frac{1}{k!} \left( -\sum_{j=1}^{\infty} \frac{(-1)^j}{j} \operatorname{tr}(A^j) \right)^{k},

where I is the identity matrix. The sum and the expansion of the exponential only need to go up to n instead of ∞, since the determinant cannot exceed O(Aⁿ).

See also: fr:Algorithme de Faddeev–Le Verrier (the Faddeev–LeVerrier algorithm)


Upper and lower bounds

For a positive definite matrix A, the trace operator gives the following tight lower and upper bounds on the log determinant:

\operatorname{tr}(I - A^{-1}) \le \log\det(A) \le \operatorname{tr}(A - I),

with equality if and only if A = I. This relationship can be derived via the formula for the KL-divergence between two multivariate normal distributions.

5.3.2 Cramer's rule

For a matrix equation

Ax = b,

the solution is given by Cramer's rule:

x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, 2, 3, \ldots, n,

where A_i is the matrix formed by replacing the ith column of A by the column vector b. This follows immediately by column expansion of the determinant, i.e.

\det(A_i) = \det[a_1, \ldots, b, \ldots, a_n] = \sum_{j=1}^{n} x_j \det[a_1, \ldots, a_{i-1}, a_j, a_{i+1}, \ldots, a_n] = x_i \det(A),

where the vectors a_j are the columns of A. The rule is also implied by the identity

A\,\operatorname{adj}(A) = \operatorname{adj}(A)\,A = \det(A)\,I_n.

It has recently been shown that Cramer's rule can be implemented in O(n³) time,[6] which is comparable to more common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition.
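A straightforward NumPy sketch of Cramer's rule (the naive n-determinant version, not the O(n³) variant cited above; the function name is illustrative):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i)/det(A), where A_i
    is A with its ith column replaced by b. Assumes det(A) != 0."""
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        A_i = A.copy()
        A_i[:, i] = b
        x[i] = np.linalg.det(A_i) / d
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
print(cramer_solve(A, b))     # [0.8 1.4]
print(np.linalg.solve(A, b))  # same result
```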

5.3.3 Block matrices

Suppose A, B, C, and D are matrices of dimension n × n, n × m, m × n, and m × m, respectively. Then

\det\begin{pmatrix} A & 0 \\ C & D \end{pmatrix} = \det(A)\det(D) = \det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix}.

This can be seen from the Leibniz formula, or from a decomposition like (for the former case)

\begin{pmatrix} A & 0 \\ C & D \end{pmatrix} = \begin{pmatrix} A & 0 \\ C & I_m \end{pmatrix} \begin{pmatrix} I_n & 0 \\ 0 & D \end{pmatrix}.

When A is invertible, one has

\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\det(D - CA^{-1}B),

as can be seen by employing the decomposition

\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A & 0 \\ C & I_m \end{pmatrix} \begin{pmatrix} I_n & A^{-1}B \\ 0 & D - CA^{-1}B \end{pmatrix}.

When D is invertible, a similar identity with det(D) factored out can be derived analogously,[7] that is,

\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(D)\det(A - BD^{-1}C).

When the blocks are square matrices of the same order further formulas hold. For example, if C and D commute (i.e., CD = DC), then the following formula comparable to the determinant of a 2 × 2 matrix holds:[8]

\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD - BC).

When A = D and B = C, the blocks are square matrices of the same order and the following formula holds (even if A and B do not commute):

\det\begin{pmatrix} A & B \\ B & A \end{pmatrix} = \det(A - B)\det(A + B).

When D is a 1 × 1 matrix, B is a column vector, and C is a row vector, then

\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = (D - 1)\det(A) + \det(A - BC) = (D + 1)\det(A) - \det(A + BC).
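A numerical check of the Schur-complement identity det([[A, B], [C, D]]) = det(A) det(D − CA⁻¹B) (a sketch with random blocks, for which A is almost surely invertible):

```python
import numpy as np

rng = np.random.default_rng(2)
n, m = 3, 2
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, m))
C = rng.standard_normal((m, n))
D = rng.standard_normal((m, m))

M = np.block([[A, B], [C, D]])
schur = D - C @ np.linalg.inv(A) @ B  # Schur complement of A
print(np.isclose(np.linalg.det(M),
                 np.linalg.det(A) * np.linalg.det(schur)))  # True
```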

5.3.4 Derivative

By definition, e.g., using the Leibniz formula, the determinant of real (or analogously for complex) square matrices is a polynomial function from ℝ^{n×n} to ℝ. As such it is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:

\frac{d \det(A)}{d\alpha} = \operatorname{tr}\left(\operatorname{adj}(A) \frac{dA}{d\alpha}\right),

where adj(A) denotes the adjugate of A. In particular, if A is invertible, we have

\frac{d \det(A)}{d\alpha} = \det(A)\,\operatorname{tr}\left(A^{-1} \frac{dA}{d\alpha}\right).

Expressed in terms of the entries of A, these are

\frac{\partial \det(A)}{\partial A_{ij}} = \operatorname{adj}(A)_{ji} = \det(A)\,(A^{-1})_{ji}.

Yet another equivalent formulation is

\det(A + \epsilon X) - \det(A) = \operatorname{tr}(\operatorname{adj}(A)\,X)\,\epsilon + O(\epsilon^2) = \det(A)\,\operatorname{tr}(A^{-1}X)\,\epsilon + O(\epsilon^2),

using big O notation. The special case where A = I, the identity matrix, yields

\det(I + \epsilon X) = 1 + \operatorname{tr}(X)\,\epsilon + O(\epsilon^2).

This identity is used in describing the tangent space of certain matrix Lie groups.

If the matrix A is written as A = [a b c], where a, b, c are vectors (so A is 3 × 3), then the gradient over one of the three vectors may be written as the cross product of the other two:

\nabla_a \det(A) = b \times c, \qquad \nabla_b \det(A) = c \times a, \qquad \nabla_c \det(A) = a \times b.
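A finite-difference check of Jacobi's formula (a sketch; adj(A) is obtained as det(A)·A⁻¹, which is valid for the invertible test matrix used here):

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
X = rng.standard_normal((4, 4))  # direction of the perturbation
eps = 1e-6

adjA = np.linalg.det(A) * np.linalg.inv(A)  # adjugate of an invertible A
numeric = (np.linalg.det(A + eps * X) - np.linalg.det(A)) / eps
exact = np.trace(adjA @ X)
print(np.isclose(numeric, exact, rtol=1e-4, atol=1e-4))  # True, up to O(eps)
```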

5.4 Abstract algebraic aspects

5.4.1 Determinant of an endomorphism

The above identities concerning the determinant of products and inverses of matrices imply that similar matrices have the same determinant: two matrices A and B are similar if there exists an invertible matrix X such that A = X⁻¹BX. Indeed, repeatedly applying the above identities yields

\det(A) = \det(X)^{-1} \det(B) \det(X) = \det(B) \det(X)^{-1} \det(X) = \det(B).

The determinant is therefore also called a similarity invariant. The determinant of a linear transformation

T : V \to V

for some finite-dimensional vector space V is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of basis in V. By the similarity invariance, this determinant is independent of the choice of the basis for V and therefore only depends on the endomorphism T.

5.4.2 Exterior algebra

The determinant of a linear transformation A : V → V of an n-dimensional vector space V can be formulated in a coordinate-free manner by considering the nth exterior power ΛⁿV of V. A induces a linear map

\Lambda^n A : \Lambda^n V \to \Lambda^n V, \qquad v_1 \wedge v_2 \wedge \cdots \wedge v_n \mapsto A v_1 \wedge A v_2 \wedge \cdots \wedge A v_n.

As ΛⁿV is one-dimensional, the map ΛⁿA is given by multiplying with some scalar. This scalar coincides with the determinant of A, that is to say

(\Lambda^n A)(v_1 \wedge \cdots \wedge v_n) = \det(A) \cdot v_1 \wedge \cdots \wedge v_n.

This definition agrees with the more concrete coordinate-dependent definition. This follows from the characterization of the determinant given above. For example, switching two columns changes the sign of the determinant; likewise, permuting the vectors in the exterior product v₁ ∧ v₂ ∧ v₃ ∧ ... ∧ vₙ to v₂ ∧ v₁ ∧ v₃ ∧ ... ∧ vₙ, say, also changes its sign.

For this reason, the highest non-zero exterior power Λⁿ(V) is sometimes also called the determinant of V, and similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms ΛᵏV with k < n.

Transformation on alternating multilinear n-forms

The vector space W of all alternating multilinear n-forms on an n-dimensional vector space V has dimension one. To each linear transformation T on V we associate a linear transformation T′ on W, where for each w in W we define (T′w)(x₁, ..., xₙ) = w(Tx₁, ..., Txₙ). As a linear transformation on a one-dimensional space, T′ is equivalent to a scalar multiple. We call this scalar the determinant of T.

5.4.3 Square matrices over commutative rings and abstract properties

The determinant can also be characterized as the unique function

D : M_n(K) \to K

from the set of all n × n matrices with entries in a field K to this field satisfying the following three properties: first, D is an n-linear function: considering all but one column of A fixed, the determinant is linear in the remaining column, that is

D(v_1, \ldots, v_{i-1}, a v_i + b w, v_{i+1}, \ldots, v_n) = a\,D(v_1, \ldots, v_{i-1}, v_i, v_{i+1}, \ldots, v_n) + b\,D(v_1, \ldots, v_{i-1}, w, v_{i+1}, \ldots, v_n)

for any column vectors v₁, ..., vₙ and w and any scalars (elements of K) a and b. Second, D is an alternating function: for any matrix A with two identical columns, D(A) = 0. Finally, D(Iₙ) = 1. Here Iₙ is the identity matrix.

This fact also implies that every other n-linear alternating function F : Mₙ(K) → K satisfies

F(M) = F(I)\,D(M).

This definition can also be extended where K is a commutative ring R, in which case a matrix is invertible if and only if its determinant is an invertible element in R. For example, a matrix A with entries in ℤ, the integers, is invertible (in the sense that there exists an inverse matrix with integer entries) if the determinant is +1 or −1. Such a matrix is called unimodular.

The determinant defines a mapping

GL_n(R) \to R^{\times}

between the group of invertible n × n matrices with entries in R and the multiplicative group of units in R. Since it respects the multiplication in both groups, this map is a group homomorphism. Secondly, given a ring homomorphism f : R → S, there is a map GLₙ(R) → GLₙ(S) given by replacing all entries in R by their images under f. The determinant respects these maps, i.e., given a matrix A = (a_{i,j}) with entries in R, the identity

f(\det((a_{i,j}))) = \det((f(a_{i,j})))

holds. For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo m of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo m (the latter determinant being computed using modular arithmetic). In the more high-brow parlance of category theory, the determinant is a natural transformation between the two functors GLₙ and (−)^×.[9] Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear group to the multiplicative group,

\det : GL_n \to \mathbb{G}_m.

5.5 Generalizations and related notions

5.5.1 Infinite matrices

For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators.

The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate generalization of the formula

\det(I + A) = \exp(\operatorname{tr}(\log(I + A))).

Another infinite-dimensional notion of determinant is the functional determinant.

5.5.2 Related notions for non-commutative rings

For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants analogously to that for commutative rings. A meaning can be given to the Leibniz formula provided that the order for the product is specified, and similarly for other ways to define the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, for instance the multiplicative property or the fact that the determinant is unchanged under transposition of the matrix. Over non-commutative rings, there is no reasonable notion of a multilinear form (existence of a nonzero bilinear form with a regular element of R as value on some pair of arguments implies that R is commutative). Nevertheless, various notions of non-commutative determinant have been formulated, which preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant. It may be noted that if one considers certain specific classes of matrices with non-commutative elements, then there are examples where one can define the determinant and prove linear algebra theorems that are very similar to their commutative analogs. Examples include quantum groups and the q-determinant, the Capelli matrix and Capelli determinant, and super-matrices and the Berezinian; Manin matrices are the class of matrices closest to matrices with commutative elements.

5.5.3 Further variants

Determinants of matrices in superrings (that is, Z₂-graded rings) are known as Berezinians or superdeterminants.[10]

The permanent of a matrix is defined like the determinant, except that the factors sgn(σ) occurring in Leibniz's rule are omitted. The immanant generalizes both by introducing a character of the symmetric group S_n in Leibniz's rule.

5.6 Calculation

Determinants are mainly used as a theoretical tool. They are rarely calculated explicitly in numerical linear algebra, where for applications like checking invertibility and finding eigenvalues the determinant has largely been supplanted by other techniques.[11] Nonetheless, explicitly calculating determinants is required in some situations, and different methods are available to do so.

Naive methods of implementing an algorithm to compute the determinant include using the Leibniz formula or Laplace's formula. Both of these approaches are extremely inefficient for large matrices, since the number of required operations grows very quickly: it is of order n! (n factorial) for an n × n matrix M. For example, Leibniz's formula requires calculating n! products. Therefore, more involved techniques have been developed for calculating determinants.
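For illustration only, here is a deliberately naive transcription of the Leibniz formula using nothing beyond the Python standard library (the helper names are ours); dropping the sign factor in the sum yields the permanent mentioned in §5.5.3:

```python
from itertools import permutations
from math import prod

def sign(perm):
    """Sign of a permutation (tuple of indices), via the parity of its inversions."""
    inversions = sum(perm[i] > perm[j]
                     for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inversions % 2 else 1

def det_leibniz(M):
    """Determinant via the Leibniz formula: a sum over all n! permutations.
    Omitting sign(p) below would compute the permanent instead."""
    n = len(M)
    return sum(sign(p) * prod(M[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

M = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 10]]
print(det_leibniz(M))   # -3
```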

5.6.1 Decomposition methods

Given a matrix A, some methods compute its determinant by writing A as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the


LU decomposition, the QR decomposition or the Cholesky decomposition (for positive definite matrices). These methods are of order O(n³), which is a significant improvement over O(n!).

The LU decomposition expresses A in terms of a lower triangular matrix L, an upper triangular matrix U and a permutation matrix P:

$A = PLU.$

The determinants of L and U can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of P is just the sign ε of the corresponding permutation (which is +1 for an even permutation and −1 for an odd permutation). The determinant of A is then

$\det(A) = \varepsilon \det(L) \det(U).$

Moreover, the decomposition can be chosen such that L is a unitriangular matrix and therefore has determinant 1, in which case the formula further simplifies to

$\det(A) = \varepsilon \det(U).$
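A minimal O(n³) sketch of this idea in NumPy (the function name is ours): row-reduce to upper triangular form while tracking the sign ε contributed by row swaps, then multiply the diagonal:

```python
import numpy as np

def det_by_elimination(A):
    """O(n^3) determinant: reduce to upper triangular U, tracking the sign
    of the row swaps (the epsilon in det(A) = eps * det(U))."""
    U = np.array(A, dtype=float)
    n = U.shape[0]
    sign = 1.0
    for k in range(n):
        pivot = np.argmax(np.abs(U[k:, k])) + k    # partial pivoting
        if U[pivot, k] == 0.0:
            return 0.0                             # singular matrix
        if pivot != k:
            U[[k, pivot]] = U[[pivot, k]]          # a row swap flips the sign
            sign = -sign
        U[k + 1:] -= np.outer(U[k + 1:, k] / U[k, k], U[k])
    return sign * np.prod(np.diag(U))              # product of diagonal entries of U

A = [[0, 2], [3, 4]]
print(det_by_elimination(A))         # -6.0
print(np.linalg.det(np.array(A)))    # cross-check: also -6.0
```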

5.6.2 Further methods

If the determinant of A and the inverse of A have already been computed, the matrix determinant lemma allows one to quickly calculate the determinant of A + uvᵀ, where u and v are column vectors.

Since the definition of the determinant does not need divisions, a question arises: do fast algorithms exist that do not need divisions? This is especially interesting for matrices over rings. Indeed, algorithms with run-time proportional to n⁴ exist. An algorithm of Mahajan and Vinay, and Berkowitz[12] is based on closed ordered walks (short: clow). It computes more products than the determinant definition requires, but some of these products cancel and the sum of these products can be computed more efficiently. The final algorithm looks very much like an iterated product of triangular matrices.

If two matrices of order n can be multiplied in time M(n), where M(n) ≥ nᵃ for some a > 2, then the determinant can be computed in time O(M(n)).[13] This means, for example, that an O(n^{2.376}) algorithm exists based on the Coppersmith–Winograd algorithm.

Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition) method is of order O(n³), but the bit length of intermediate values can become exponentially long.[14] The Bareiss algorithm, on the other hand, is an exact-division method based on Sylvester's identity; it is also of order n³, but its bit complexity is roughly the bit size of the original entries in the matrix times n.[15]
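The matrix determinant lemma mentioned at the start of this subsection, det(A + uvᵀ) = (1 + vᵀA⁻¹u)·det(A) for invertible A, is easy to spot-check numerically; a sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
u = rng.standard_normal((4, 1))
v = rng.standard_normal((4, 1))

direct = np.linalg.det(A + u @ v.T)
via_lemma = (1 + v.T @ np.linalg.inv(A) @ u).item() * np.linalg.det(A)
print(np.isclose(direct, via_lemma))   # True
```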

5.7 History

Historically, determinants were used long before matrices: originally, a determinant was defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (Chinese scholars, around the 3rd century BCE). In Europe, 2 × 2 determinants were considered by Cardano at the end of the 16th century and larger ones by Leibniz.[16][17][18][19]

In Japan, Seki Takakazu is credited with the discovery of the resultant and the determinant (at first in 1683, the complete version no later than 1710). In Europe, Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrence law was first announced by Bézout (1764).

It was Vandermonde (1771) who first recognized determinants as independent functions.[16] Laplace (1772)[20][21] gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order.


Lagrange was the first to apply determinants to questions of elimination theory; he proved many special cases of general identities.

Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.

The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.) In this he used the word "determinant" in its present sense,[22][23] summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's.[16][24] With him begins the theory in its generality.

The next important figure was Jacobi[17] (from 1827). He early used the functional determinant which Sylvester later called the Jacobian, and in his memoirs in Crelle for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work.[25][26]

The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the textbooks on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.

    5.8 Applications

    5.8.1 Linear independence

As mentioned above, the determinant of a matrix (with real or complex entries, say) is zero if and only if the column vectors (or the row vectors) of the matrix are linearly dependent. Thus, determinants can be used to characterize linearly dependent vectors. For example, given two linearly independent vectors v1, v2 in R³, a third vector v3 lies in the plane spanned by the former two vectors exactly if the determinant of the 3 × 3 matrix consisting of the three vectors is zero. The same idea is also used in the theory of differential equations: given n functions f1(x), ..., fn(x) (supposed to be n − 1 times differentiable), the Wronskian is defined to be

$W(f_1, \ldots, f_n)(x) = \begin{vmatrix} f_1(x) & f_2(x) & \cdots & f_n(x) \\ f_1'(x) & f_2'(x) & \cdots & f_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ f_1^{(n-1)}(x) & f_2^{(n-1)}(x) & \cdots & f_n^{(n-1)}(x) \end{vmatrix}.$

It is non-zero (for some x) in a specified interval if and only if the given functions and all their derivatives up to order n − 1 are linearly independent. If it can be shown that the Wronskian is zero everywhere on an interval then, in the case of analytic functions, this implies the given functions are linearly dependent. See the Wronskian and linear independence.
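The Wronskian above can be built directly from its definition; a small symbolic sketch (SymPy assumed available):

```python
from sympy import symbols, Matrix, sin, cos, simplify

x = symbols('x')
fs = [sin(x), cos(x)]

# Row i of the Wronskian matrix holds the i-th derivatives of f1, ..., fn.
n = len(fs)
W = Matrix(n, n, lambda i, j: fs[j].diff(x, i))
print(simplify(W.det()))   # -1: nonzero, so sin and cos are linearly independent
```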

    5.8.2 Orientation of a basis

    Main article: Orientation (vector space)

The determinant can be thought of as assigning a number to every sequence of n vectors in Rⁿ, by using the square matrix whose columns are the given vectors. For instance, an orthogonal matrix with entries in Rⁿ represents an orthonormal basis in Euclidean space. The determinant of such a matrix determines whether the orientation of the

  • 5.8. APPLICATIONS 25

basis is consistent with or opposite to the orientation of the standard basis. If the determinant is +1, the basis has the same orientation. If it is −1, the basis has the opposite orientation.

More generally, if the determinant of A is positive, A represents an orientation-preserving linear transformation (if A is an orthogonal 2 × 2 or 3 × 3 matrix, this is a rotation), while if it is negative, A switches the orientation of the basis.
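A quick numerical illustration of the two cases (NumPy assumed):

```python
import numpy as np

theta = np.pi / 3
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
reflection = np.array([[1.0,  0.0],
                       [0.0, -1.0]])   # mirror across the x-axis

print(np.linalg.det(rotation))     # 1.0  -> preserves orientation
print(np.linalg.det(reflection))   # -1.0 -> reverses orientation
```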

5.8.3 Volume and Jacobian determinant

As pointed out above, the absolute value of the determinant of real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if f: Rⁿ → Rⁿ is the linear map represented by the matrix A, and S is any measurable subset of Rⁿ, then the volume of f(S) is given by |det(A)| times the volume of S. More generally, if the linear map f: Rⁿ → Rᵐ is represented by the m × n matrix A, then the n-dimensional volume of f(S) is given by:

$\operatorname{volume}(f(S)) = \sqrt{\det(A^{\mathrm{T}}A)} \cdot \operatorname{volume}(S).$

By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify skew lines. The volume of any tetrahedron, given its vertices a, b, c, and d, is (1/6)·|det(a − b, b − c, c − d)|, or any other combination of pairs of vertices that would form a spanning tree over the vertices.
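A direct transcription of that volume formula (the function name is ours; NumPy assumed):

```python
import numpy as np

def tetrahedron_volume(a, b, c, d):
    """Volume = (1/6)|det(a - b, b - c, c - d)|, with the differences as columns."""
    M = np.column_stack([a - b, b - c, c - d])
    return abs(np.linalg.det(M)) / 6.0

# Unit right tetrahedron: volume 1/6.
a = np.array([0.0, 0.0, 0.0])
b = np.array([1.0, 0.0, 0.0])
c = np.array([0.0, 1.0, 0.0])
d = np.array([0.0, 0.0, 1.0])
print(tetrahedron_volume(a, b, c, d))   # 0.1666...
```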

For a general differentiable function, much of the above carries over by considering the Jacobian matrix of f. For

$f \colon \mathbf{R}^n \to \mathbf{R}^n,$

the Jacobian is the n × n matrix whose entries are given by

$D(f) = \left(\frac{\partial f_i}{\partial x_j}\right)_{1 \le i, j \le n}.$

Its determinant, the Jacobian determinant, appears in the higher-dimensional version of integration by substitution: for suitable functions f and an open subset U of Rⁿ (the domain of f), the integral over f(U) of some other function φ: Rⁿ → Rᵐ is given by

$\int_{f(U)} \phi(\mathbf{v})\, d\mathbf{v} = \int_U \phi(f(\mathbf{u}))\, \left|\det(D f)(\mathbf{u})\right|\, d\mathbf{u}.$

    The Jacobian also occurs in the inverse function theorem.
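The classic instance of this substitution rule is the polar coordinate map, whose Jacobian determinant is r; a symbolic sketch (SymPy assumed):

```python
from sympy import symbols, Matrix, sin, cos, simplify

r, theta = symbols('r theta', positive=True)

# Polar coordinates: f(r, theta) = (r cos(theta), r sin(theta)).
f = Matrix([r * cos(theta), r * sin(theta)])
J = f.jacobian([r, theta])
print(simplify(J.det()))   # r -- the familiar factor in dx dy = r dr dtheta
```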

5.8.4 Vandermonde determinant (alternant)

Main article: Vandermonde matrix

Third order:

$\begin{vmatrix} 1 & 1 & 1 \\ x_1 & x_2 & x_3 \\ x_1^2 & x_2^2 & x_3^2 \end{vmatrix} = (x_3 - x_2)(x_3 - x_1)(x_2 - x_1).$

In general, the nth-order Vandermonde determinant is[27]

$\begin{vmatrix} 1 & 1 & 1 & \cdots & 1 \\ x_1 & x_2 & x_3 & \cdots & x_n \\ x_1^2 & x_2^2 & x_3^2 & \cdots & x_n^2 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_1^{n-1} & x_2^{n-1} & x_3^{n-1} & \cdots & x_n^{n-1} \end{vmatrix} = \prod_{1 \le i < j \le n} (x_j - x_i),$


where the right-hand side is the continued product of all the differences that can be formed from the n(n − 1)/2 pairs of numbers taken from x1, x2, ..., xn, with the order of the differences taken in the reversed order of the suffixes that are involved.
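The product formula is easy to confirm numerically (NumPy assumed; note that transposing the Vandermonde matrix does not change its determinant):

```python
import numpy as np

x = np.array([1.0, 2.0, 4.0, 7.0])
n = len(x)

# Rows of powers x^0, x^1, ..., x^(n-1), as in the display above.
V = np.vander(x, increasing=True).T
direct = np.linalg.det(V)

product = np.prod([x[j] - x[i] for i in range(n) for j in range(i + 1, n)])
print(direct, product)   # both 540.0 (up to rounding)
```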

5.8.5 Circulants

Main article: Circulant matrix

Second order:

$\begin{vmatrix} x_1 & x_2 \\ x_2 & x_1 \end{vmatrix} = (x_1 + x_2)(x_1 - x_2).$

Third order:

$\begin{vmatrix} x_1 & x_2 & x_3 \\ x_3 & x_1 & x_2 \\ x_2 & x_3 & x_1 \end{vmatrix} = (x_1 + x_2 + x_3)(x_1 + \omega x_2 + \omega^2 x_3)(x_1 + \omega^2 x_2 + \omega x_3),$

where ω and ω² are the complex cube roots of 1. In general, the nth-order circulant determinant is[27]

$\begin{vmatrix} x_1 & x_2 & x_3 & \cdots & x_n \\ x_n & x_1 & x_2 & \cdots & x_{n-1} \\ x_{n-1} & x_n & x_1 & \cdots & x_{n-2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ x_2 & x_3 & x_4 & \cdots & x_1 \end{vmatrix} = \prod_{j=1}^{n} \left( x_1 + x_2 \omega_j + x_3 \omega_j^2 + \cdots + x_n \omega_j^{n-1} \right),$

where ω_j is an nth root of 1.
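A numerical check of this factorization over the nth roots of unity (NumPy assumed; the circulant is built by hand to match the row layout displayed above):

```python
import numpy as np

x = np.array([5.0, 2.0, 1.0, 3.0])
n = len(x)

# Circulant with first row x, each subsequent row shifted one step to the right.
C = np.array([[x[(j - i) % n] for j in range(n)] for i in range(n)])

omega = np.exp(2j * np.pi * np.arange(n) / n)        # the n-th roots of unity
factors = [np.sum(x * w ** np.arange(n)) for w in omega]
print(np.linalg.det(C))       # direct computation
print(np.prod(factors).real)  # product formula: same value, up to rounding
```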

5.9 See also

• Dieudonné determinant
• Functional determinant
• Immanant
• Matrix determinant lemma
• Permanent
• Pfaffian
• Slater determinant

5.10 Notes

[1] Serge Lang, Linear Algebra, 2nd Edition, Addison-Wesley, 1971, pp. 173, 191.

[2] WildLinAlg episode 4, Norman J Wildberger, Univ. of New South Wales, 2010, lecture via YouTube

[3] In a non-commutative setting left-linearity (compatibility with left-multiplication by scalars) should be distinguished from right-linearity. Assuming linearity in the columns is taken to be left-linearity, one would have, for non-commuting scalars a, b:

$ab = ab \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = a \begin{vmatrix} 1 & 0 \\ 0 & b \end{vmatrix} = \begin{vmatrix} a & 0 \\ 0 & b \end{vmatrix} = b \begin{vmatrix} a & 0 \\ 0 & 1 \end{vmatrix} = ba \begin{vmatrix} 1 & 0 \\ 0 & 1 \end{vmatrix} = ba,$

a contradiction. There is no useful notion of multi-linear functions over a non-commutative ring.


[4] Proofs can be found in http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/proof003.html

[5] A proof can be found in Appendix B of Kondratyuk, L. A.; Krivoruchenko, M. I. (1992). "Superconducting quark matter in SU(2) color group". Zeitschrift für Physik A 344: 99–115. doi:10.1007/BF01291027.

[6] Ken Habgood, Itamar Arel, A condensation-based application of Cramer's rule for solving large-scale linear systems, Journal of Discrete Algorithms, 10 (2012), pp. 98–109. Available online 1 July 2011, ISSN 1570-8667, 10.1016/j.jda.2011.06.007.

[7] These identities were taken from http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/proof003.html

[8] Proofs are given in J. R. Silvester, Determinants of Block Matrices, Math. Gazette, 84 (2000), pp. 460–467, available at http://www.jstor.org/stable/3620776 or freely at http://www.ee.iisc.ernet.in/new/people/faculty/prasantg/downloads/blocks.pdf

[9] Mac Lane, Saunders (1998), Categories for the Working Mathematician, Graduate Texts in Mathematics 5 (2nd ed.), Springer-Verlag, ISBN 0-387-98403-8

[10] Varadarajan, V. S (2004), Supersymmetry for mathematicians: An introduction, ISBN 978-0-8218-3574-6.

[11] L. N. Trefethen and D. Bau, Numerical Linear Algebra (SIAM, 1997). E.g. in Lecture 1: "... we mention that the determinant, though a convenient notion theoretically, rarely finds a useful role in numerical algorithms."

[12] http://page.inf.fu-berlin.de/~rote/Papers/pdf/Division-free+algorithms.pdf

[13] Bunch, J. R.; Hopcroft, J. E. (1974). "Triangular Factorization and Inversion by Fast Matrix Multiplication". Mathematics of Computation 28 (125): 231–236. doi:10.1090/S0025-5718-1974-0331751-8.

[14] Fang, Xin Gui; Havas, George (1997). "On the worst-case complexity of integer Gaussian elimination" (PDF). Proceedings of the 1997 international symposium on Symbolic and algebraic computation. ISSAC '97. Kihei, Maui, Hawaii, United States: ACM. pp. 28–31. doi:10.1145/258726.258740. ISBN 0-89791-875-4.

[15] Bareiss, Erwin (1968), "Sylvester's Identity and Multistep Integer-Preserving Gaussian Elimination" (PDF), Mathematics of Computation 22 (102): 565–578

[16] Campbell, H: Linear Algebra With Applications, pages 111–112. Appleton Century Crofts, 1971

[17] Eves, H: An Introduction to the History of Mathematics, pages 405, 493–494, Saunders College Publishing, 1990.

[18] A Brief History of Linear Algebra and Matrix Theory: http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html

[19] Cajori, F. A History of Mathematics p. 80

[20] Expansion of determinants in terms of minors: Laplace, Pierre-Simon (de) "Researches sur le calcul intégral et sur le système du monde", Histoire de l'Académie Royale des Sciences (Paris), seconde partie, pages 267–376 (1772).

[21] Muir, Sir Thomas, The Theory of Determinants in the historical Order of Development [London, England: Macmillan and Co., Ltd., 1906]. JFM 37.0181.02

[22] The first use of the word "determinant" in the modern sense appeared in: Cauchy, Augustin-Louis "Memoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et des signes contraires par suite des transpositions opérées entre les variables qu'elles renferment", which was first read at the Institut de France in Paris on November 30, 1812, and which was subsequently published in the Journal de l'Ecole Polytechnique, Cahier 17, Tome 10, pages 29–112 (1815).

[23] Origins of mathematical terms: http://jeff560.tripod.com/d.html

[24] History of matrices and determinants: http://www-history.mcs.st-and.ac.uk/history/HistTopics/Matrices_and_determinants.html

[25] The first use of vertical lines to denote a determinant appeared in: Cayley, Arthur "On a theorem in the geometry of position", Cambridge Mathematical Journal, vol. 2, pages 267–271 (1841).

[26] History of matrix notation: http://jeff560.tripod.com/matrices.html

[27] Gradshteyn, I. S., I. M. Ryzhik: Table of Integrals, Series, and Products, 14.31, Elsevier, 2007.


5.11 References

See also: Linear algebra § Further reading

• Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0-387-98259-0
• de Boor, Carl (1990), "An empty exercise" (PDF), ACM SIGNUM Newsletter 25 (2): 3–7, doi:10.1145/122272.122273.
• Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0-321-28713-7

• Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra, Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0-89871-454-8

• Muir, Thomas (1960) [1933], A treatise on the theory of determinants, Revised and enlarged by William H. Metzler, New York, NY: Dover

• Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
• Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
• Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall

5.12 External links

• Hazewinkel, Michiel, ed. (2001), "Determinant", Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

• Weisstein, Eric W., "Determinant", MathWorld.
• O'Connor, John J.; Robertson, Edmund F., "Matrices and determinants", MacTutor History of Mathematics archive, University of St Andrews.

• WebApp to calculate determinants and descriptively solve systems of linear equations
• Determinant Interactive Program and Tutorial
• Online Matrix Calculator
• Linear algebra: determinants. Compute determinants of matrices up to order 6 using a Laplace expansion you choose.

• Matrices and Linear Algebra on the Earliest Uses Pages
• Determinants explained in an easy fashion in the 4th chapter as a part of a Linear Algebra course.
• Instructional Video on taking the determinant of an n×n matrix (Khan Academy)
• Online matrix calculator (determinant, trace, inverse, adjoint, transpose)
• Compute determinant of matrix up to order 8

• Derivation of Determinant of a Matrix

Chapter 6

Dieudonné determinant

In linear algebra, the Dieudonné determinant is a generalization of the determinant of a matrix to matrices over division rings and local rings. It was introduced by Dieudonné (1943).

If K is a division ring, then the Dieudonné determinant is a homomorphism of groups from the group GLn(K) of invertible n by n matrices over K onto the abelianization K*/[K*, K*] of the multiplicative group K* of K.

For example, the Dieudonné determinant for a 2-by-2 matrix is

$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{cases} -cb & \text{if } a = 0 \\ ad - aca^{-1}b & \text{if } a \ne 0. \end{cases}$

6.1 Properties

Let R be a local ring. There is a determinant map from the group GL(R) of invertible matrices to the abelianised unit group R*/[R*, R*] with the following properties:[1]

• The determinant is invariant under elementary row operations
• The determinant of the identity matrix is 1
• If a row is left multiplied by a in R* then the determinant is left multiplied by a
• The determinant is multiplicative: det(AB) = det(A)det(B)
• If two rows are exchanged, the determinant is multiplied by −1
• The determinant is invariant under transposition

6.2 Tannaka–Artin problem

Assume that K is finite over its centre F. The reduced norm gives a homomorphism Nn from GLn(K) to F*. We also have a homomorphism from GLn(K) to F* obtained by composing the Dieudonné determinant from GLn(K) to K*/[K*, K*] with the reduced norm N1 from GL1(K) = K* to F* via the abelianization.

The Tannaka–Artin problem is whether these two maps have the same kernel SLn(K). This is true when F is locally compact[2] but false in general.[3]

6.3 See also

• Moore determinant over a division algebra



6.4 References

[1] Rosenberg (1994) p. 64

[2] Nakayama, Tadasi; Matsushima, Yozô (1943). "Über die multiplikative Gruppe einer p-adischen Divisionsalgebra". Proc. Imp. Acad. Tokyo (in German) 19: 622–628. doi:10.3792/pia/1195573246. Zbl 0060.07901.

[3] Platonov, V. P. (1976). "The Tannaka-Artin problem and reduced K-theory". Izv. Akad. Nauk SSSR, Ser. Mat. (in Russian) 40: 227–261. Zbl 0338.16005.

• Dieudonné, Jean (1943), "Les déterminants sur un corps non commutatif", Bulletin de la Société Mathématique de France 71: 27–45, ISSN 0037-9484, MR 0012273, Zbl 0028.33904

• Rosenberg, Jonathan (1994), Algebraic K-theory and its applications, Graduate Texts in Mathematics 147, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94248-3, MR 1282290, Zbl 0801.19001. Errata

• Serre, Jean-Pierre (2003), Trees, Springer, p. 74, ISBN 3-540-44237-5, Zbl 1013.20001
• Suprunenko, D. A. (2001), "Determinant", in Hazewinkel, Michiel, Encyclopedia of Mathematics, Springer, ISBN 978-1-55608-010-4

Chapter 7

    Dimension (vector space)

In mathematics, the dimension of a vector space V is the cardinality (i.e. the number of vectors) of a basis of V over its base field.[1][note 1]

For every vector space there exists a basis,[note 2] and all bases of a vector space have equal cardinality;[note 3] as a result, the dimension of a vector space is uniquely defined. We say V is finite-dimensional if the dimension of V is finite, and infinite-dimensional if its dimension is infinite.

The dimension of the vector space V over the field F can be written as dimF(V) or as [V : F], read "dimension of V over F". When F can be inferred from context, dim(V) is typically written.

7.1 Examples

The vector space R³ has {(1, 0, 0), (0, 1, 0), (0, 0, 1)} as a standard basis, and therefore dimR(R³) = 3.


If F/K is a field extension, then F is in particular a vector space over K. Furthermore, every F-vector space V is also a K-vector space. The dimensions are related by the formula

dimK(V) = dimK(F) · dimF(V).

In particular, every complex vector space of dimension n is a real vector space of dimension 2n.

Some simple formulae relate the dimension of a vector space with the cardinality of the base field and the cardinality of the space itself. If V is a vector space over a field F then, denoting the dimension of V by dim V, we have:

If dim V is finite, then |V| = |F|^(dim V).
If dim V is infinite, then |V| = max(|F|, dim V).
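The finite case can be verified by brute-force enumeration; a tiny sketch for the field F₂ of two elements:

```python
from itertools import product

F = (0, 1)   # the two elements of the field F_2
dim = 3

# All vectors of (F_2)^3: the formula predicts |V| = |F| ** dim = 2 ** 3 = 8.
V = list(product(F, repeat=dim))
print(len(V), len(F) ** dim)   # 8 8
```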

7.3 Generalizations

One can see a vector space as a particular case of a matroid, and in the latter there is a well-defined notion of dimension. The length of a module and the rank of an abelian group both have several properties similar to the dimension of vector spaces.

The Krull dimension of a commutative ring, named after Wolfgang Krull (1899–1971), is defined to be the maximal number of strict inclusions in an increasing chain of prime ideals in the ring.

7.3.1 Trace

See also: Trace (linear algebra)

The dimension of a vector space may alternatively be characterized as the trace of the identity operator. For instance, $\operatorname{tr}\ \operatorname{id}_{\mathbf{R}^2} = \operatorname{tr} \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = 1 + 1 = 2.$ This appears to be a circular definition, but it allows useful generalizations.

Firstly, it allows one to define a notion of dimension when one has a trace but no natural sense of basis. For example, one may have an algebra A with maps η: K → A (the inclusion of scalars, called the unit) and a map ε: A → K (corresponding to trace, called the counit). The composition ε ∘ η: K → K is a scalar (being a linear operator on a 1-dimensional space) that corresponds to the trace of the identity, and gives a notion of dimension for an abstract algebra. In practice, in bialgebras one requires that this map be the identity, which can be obtained by normalizing the counit by dividing by dimension (ε := (1/n)·tr), so in these cases the normalizing constant corresponds to dimension.

Alternatively, one may be able to take the trace of operators on an infinite-dimensional space; in this case a (finite) trace is defined, even though no (finite) dimension exists, and gives a notion of "dimension of the operator". These fall under the rubric of "trace class operators" on a Hilbert space, or more generally nuclear operators on a Banach space.

A subtler generalization is to consider the trace of a family of operators as a kind of "twisted" dimension. This occurs significantly in representation theory, where the character of a representation is the trace of the representation, hence a scalar-valued function on a group χ: G → K, whose value on the identity 1 ∈ G is the dimension of the representation, as a representation sends the identity in the group to the identity matrix: χ(1_G) = tr I_V = dim V. One can view the other values χ(g) of the character as "twisted" dimensions, and find analogs or generalizations of statements about dimensions to statements about characters or representations. A sophisticated example of this occurs in the theory of monstrous moonshine: the j-invariant is the graded dimension of an infinite-dimensional graded representation of the Monster group, and replacing the dimension with the character gives the McKay–Thompson series for each element of the Monster group.[2]

7.4 See also

• Basis (linear algebra)
• Topological dimension, also called Lebesgue covering dimension


• Fractal dimension
• Krull dimension
• Matroid rank
• Rank (linear algebra)

7.5 Notes

[note 1] It is sometimes called Hamel dimension or algebraic dimension to distinguish it from other types of dimension.

[note 2] if one assumes the axiom of choice

[note 3] see dimension theorem for vector spaces

7.6 References

[1] Itzkov, Mikhail (2009). Tensor Algebra and Tensor Analysis for Engineers: With Applications to Continuum Mechanics. Springer. p. 4. ISBN 978-3-540-93906-1.

[2] Gannon, Terry (2006), Moonshine beyond the Monster: The Bridge Connecting Algebra, Modular Forms and Physics, ISBN 0-521-83531-3

7.7 External links

• MIT Linear Algebra Lecture on Independence, Basis, and Dimension by Gilbert Strang at MIT OpenCourseWare

Chapter 8

    Direct sum of modules

    For the broader use of the term in mathematics, see Direct sum.

In abstract algebra, the direct sum is a construction which combines several modules into a new, larger module. The direct sum of modules is the smallest module which contains the given modules as submodules with no "unnecessary" constraints, making it an example of a coproduct. Contrast with the direct product, which is the dual notion.

The most familiar examples of this construction occur when considering vector spaces (modules over a field) and abelian groups (modules over the ring Z of integers). The construction may also be extended to cover Banach spaces and Hilbert spaces.

8.1 Construction for vector spaces and abelian groups

We give the construction first in these two cases, under the assumption that we have only two objects. Then we generalise to an arbitrary family of arbitrary modules. The key elements of the general construction are more clearly identified by considering these two cases in depth.

    8.1.1 Construction for two vector spaces

Suppose V and W are vector spaces over the field K. The cartesian product V × W can be given the structure of a vector space over K (Halmos 1974, §18) by defining the operations componentwise:

    (v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)

α(v, w) = (αv, αw)

for v, v1, v2 ∈ V, w, w1, w2 ∈ W, and α ∈ K.

The resulting vector space is called the direct sum of V and W and is usually denoted by a plus symbol inside a circle:

V ⊕ W

It is customary to write the elements of an ordered sum not as ordered pairs (v, w), but as a sum v + w.

The subspace V × {0} of V ⊕ W is isomorphic to V and is often identified with V; similarly for {0} × W and W. (See internal direct sum below.) With this identification, every element of V ⊕ W can be written in one and only one way as the sum of an element of V and an element of W. The dimension of V ⊕ W is equal to the sum of the dimensions of V and W.

This construction readily generalises to any finite number of vector spaces.
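A minimal sketch of the componentwise operations just defined (the class name is ours, and V and W are modelled simply as tuples of coordinates):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DirectSum:
    """An element (v, w) of V (+) W, with componentwise operations."""
    v: tuple
    w: tuple

    def __add__(self, other):
        # (v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)
        return DirectSum(tuple(a + b for a, b in zip(self.v, other.v)),
                         tuple(a + b for a, b in zip(self.w, other.w)))

    def scale(self, alpha):
        # alpha (v, w) = (alpha v, alpha w)
        return DirectSum(tuple(alpha * a for a in self.v),
                         tuple(alpha * a for a in self.w))

x = DirectSum((1.0, 2.0), (3.0,))   # V = R^2, W = R^1
y = DirectSum((0.0, 1.0), (1.0,))
print(x + y)          # DirectSum(v=(1.0, 3.0), w=(4.0,))
print(x.scale(2.0))   # DirectSum(v=(2.0, 4.0), w=(6.0,))
```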



    8.1.2 Construction for two abelian groups

For abelian groups G and H which are written additively, the direct product of G and H is also called a direct sum (Mac Lane & Birkhoff 1999, §V.6). Thus the cartesian product G × H is equipped with the structure of an abelian group by defining the operations componentwise:

    (g1, h1) + (g2, h2) = (g1 + g2, h1 + h2)

for g1, g2 in G, and h1, h2 in H.

Integral multiples are similarly defined componentwise by

    n(g, h) = (ng, nh)

for g in G, h in H, and n an integer. This parallels the extension of the scalar product of vector spaces to the direct sum above.

The resulting abelian group is called the direct sum of G and H and is usually denoted by a plus symbol inside a circle:

G ⊕ H

It is customary to write the elements of an ordered sum not as ordered pairs (g, h), but as a sum g + h.

The subgroup G × {0} of G ⊕ H is isomorphic to G and is often identified with G; similarly for {0} × H and H. (See internal direct sum below.) With this identification, it is true that every element of G ⊕ H can be written in one and only one way as the sum of an element of G and an element of H. The rank of G ⊕ H is equal to the sum of the ranks of G and H.

This construction readily generalises to any finite number of abelian groups.

8.2 Construction for an arbitrary family of modules

One should notice a clear similarity between the definitions of the direct sum of two vector spaces and of two abelian groups. In fact, each is a special case of the construction of the direct sum of two modules. Additionally, by modifying the definition one can accommodate the direct sum of an infinite family of modules. The precise definition is as follows (Bourbaki 1989, §II.1.6).

Let R be a ring, and {Mi : i ∈ I} a family of left R-modules indexed by the set I. The direct sum of {Mi} is then defined to be the set of all sequences (αi) where αi ∈ Mi and αi = 0 for cofinitely many indices i. (The direct product is analogous, but the indices do not need to cofinitely vanish.)

It can also be defined as functions α from I to the disjoint union of the modules Mi such that α(i) ∈ Mi for all i ∈ I and α(i) = 0 for cofinitely many indices i. These functions can equivalently be regarded as finitely supported sections of the fiber bundle over the index set I, with the fiber over i ∈ I being Mi.

This set inherits the module structure via component-wise addition and scalar multiplication. Explicitly, two such sequences (or functions) α and β can be added by writing (α + β)i = αi + βi for all i (note that this is again zero for all but finitely many indices), and such a function can be multiplied with an element r from R by defining (rα)i = r(αi) for all i.
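A minimal sketch of this finitely supported representation (the class name is ours; the entries are modelled as a dict that simply omits the zero components, and the example takes each Mi to be the Z-module Z):

```python
class DirectSumElement:
    """Element of a direct sum over an index set I: a finitely supported map
    i -> M_i, stored as a dict that omits the indices where the entry is 0."""

    def __init__(self, entries=None):
        self.entries = {i: m for i, m in (entries or {}).items() if m != 0}

    def __add__(self, other):
        # (alpha + beta)_i = alpha_i + beta_i, again finitely supported
        keys = self.entries.keys() | other.entries.keys()
        return DirectSumElement({i: self.entries.get(i, 0) + other.entries.get(i, 0)
                                 for i in keys})

    def scale(self, r):
        # (r alpha)_i = r * alpha_i, componentwise
        return DirectSumElement({i: r * m for i, m in self.entries.items()})

# Two elements of the direct sum of copies of Z indexed by the natural numbers:
alpha = DirectSumElement({0: 3, 5: -1})
beta = DirectSumElement({5: 1, 7: 2})
print((alpha + beta).entries)   # {0: 3, 7: 2} -- the index-5 entries cancel
print(alpha.scale(2).entries)   # {0: 6, 5: -2}
```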