Vector and Matrices - Some Articles


Contents

• Dot product
• Cross product
• Triple product
• Binet–Cauchy identity
• Inner product space
• Sesquilinear form
• Scalar multiplication
• Euclidean space
• Orthonormality
• Cauchy–Schwarz inequality
• Orthonormal basis
• Vector space
• Matrix multiplication
• Determinant
• Exterior algebra
• Geometric algebra
• Levi-Civita symbol
• Jacobi triple product
• Rule of Sarrus
• Laplace expansion
• Lie algebra
• Orthogonal group
• Rotation group
• Vector-valued function
• Gramian matrix
• Lagrange's identity
• Quaternion
• Skew-symmetric matrix
• Xyzzy
• Quaternions and spatial rotation
• Seven-dimensional cross product
• Octonion
• Multilinear algebra
• Pseudovector
• Bivector


Dot product

In mathematics, the dot product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number obtained by multiplying corresponding entries and then summing those products. The name is derived from the centered dot "·" that is often used to designate this operation; the alternative name scalar product emphasizes the scalar (rather than vector) nature of the result. At a basic level, the dot product is used to obtain the cosine of the angle between two vectors.

The principal use of this product is the inner product in a Euclidean vector space: when two vectors are expressed on an orthonormal basis, the dot product of their coordinate vectors gives their inner product. For this geometric interpretation, scalars must be taken to be real numbers; while the dot product can be defined in a more general setting (for instance with complex numbers as scalars) many properties would be different. The dot product contrasts (in three-dimensional space) with the cross product, which produces a vector as result.

Definition

The dot product of two vectors a = [a1, a2, ..., an] and b = [b1, b2, ..., bn] is defined as:

$\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^{n} a_i b_i = a_1 b_1 + a_2 b_2 + \cdots + a_n b_n$

where Σ denotes summation notation and n is the dimension of the vector space. In dimension 2, the dot product of vectors [a, b] and [c, d] is ac + bd. Similarly, in dimension 3, the dot product of vectors [a, b, c] and [d, e, f] is ad + be + cf. For example, the dot product of the two three-dimensional vectors [1, 3, −5] and [4, −2, −1] is

$[1, 3, -5] \cdot [4, -2, -1] = (1)(4) + (3)(-2) + (-5)(-1) = 4 - 6 + 5 = 3.$

The dot product can also be obtained via transposition and matrix multiplication as follows:

$\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^{\mathrm{T}} \mathbf{b}$

where both vectors are interpreted as column vectors, and aT denotes the transpose of a, in other words the corresponding row vector.
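To make the definition concrete, here is a minimal Python sketch (using NumPy; the example vectors are the ones from the text) computing the dot product three ways: by explicit summation, with numpy.dot, and via the transpose-and-multiply form:

```python
import numpy as np

a = np.array([1, 3, -5])
b = np.array([4, -2, -1])

# Definition: multiply corresponding entries and sum the products.
explicit = sum(a_i * b_i for a_i, b_i in zip(a, b))

# Library form of the same operation.
library = np.dot(a, b)

# Transposition form: treat both vectors as column vectors, then a^T b.
col_a = a.reshape(-1, 1)
col_b = b.reshape(-1, 1)
matrix_form = (col_a.T @ col_b).item()

print(explicit, library, matrix_form)  # all print 3
```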


Geometric interpretation

[Figure: |a| cos θ is the scalar projection of a onto b.]


In Euclidean geometry, the dot product of vectors expressed in an orthonormal basis is related to their length and angle. For such a vector a, the dot product a · a is the square of the length of a, or

$\mathbf{a} \cdot \mathbf{a} = |\mathbf{a}|^2$

where |a| denotes the length (magnitude) of a. If b is another such vector,

$\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos\theta$

where θ is the angle between them. This formula can be rearranged to determine the size of the angle between two nonzero vectors:

$\theta = \arccos\left(\frac{\mathbf{a} \cdot \mathbf{b}}{|\mathbf{a}|\,|\mathbf{b}|}\right)$

The Cauchy–Schwarz inequality guarantees that the argument of arccos is valid. One can also first convert the vectors to unit vectors by dividing by their magnitude:

$\hat{\mathbf{a}} = \frac{\mathbf{a}}{|\mathbf{a}|}$

then the angle θ is given by

$\theta = \arccos(\hat{\mathbf{a}} \cdot \hat{\mathbf{b}}).$

The terminal points of both unit vectors lie on the unit circle. The unit circle is where the trigonometric values for the six trig functions are found. After substitution, the first vector component is a cosine and the second vector component is a sine, i.e. $(\cos\alpha, \sin\alpha)$ for some angle α. The dot product of the two unit vectors then takes $(\cos\alpha, \sin\alpha)$ and $(\cos\beta, \sin\beta)$ for angles α and β and returns

$\cos\alpha \cos\beta + \sin\alpha \sin\beta = \cos(\alpha - \beta),$

where α − β is the angle between the two vectors. As the cosine of 90° is zero, the dot product of two orthogonal vectors is always zero. Moreover, two vectors can be considered orthogonal if and only if their dot product is zero and they have non-null length. This property provides a simple method to test the condition of orthogonality.
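As a small illustration of the angle formula, a Python sketch (the example vectors are arbitrary) that recovers the angle and tests orthogonality; the clip call is a floating-point safety detail, not part of the mathematics:

```python
import numpy as np

def angle_between(a, b):
    """Angle in radians between nonzero vectors a and b."""
    cos_theta = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Cauchy-Schwarz guarantees |cos_theta| <= 1 mathematically;
    # clip guards against round-off slightly past the boundary.
    return np.arccos(np.clip(cos_theta, -1.0, 1.0))

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 1.0, 0.0])
print(np.degrees(angle_between(a, b)))       # 45.0
print(np.dot(a, np.array([0.0, 2.0, 0.0])))  # 0.0 -> orthogonal
```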


Sometimes these properties are also used for "defining" the dot product, especially in 2 and 3 dimensions; this definition is equivalent to the above one. For higher dimensions the formula can be used to define the concept of angle. The geometric properties rely on the basis being orthonormal, i.e. composed of pairwise perpendicular vectors with unit length.

Scalar projection

If both a and b have length one (i.e., they are unit vectors), their dot product simply gives the cosine of the angle between them.

If only b is a unit vector, then the dot product a · b gives |a| cos θ, i.e., the magnitude of the projection of a in the direction of b, with a minus sign if the direction is opposite. This is called the scalar projection of a onto b, or the scalar component of a in the direction of b (see figure). This property of the dot product has several useful applications (for instance, see next section).

If neither a nor b is a unit vector, then the magnitude of the projection of a in the direction of b is a · b / |b|, as the unit vector in the direction of b is b / |b|.

Rotation

A rotation of the orthonormal basis in terms of which vector a is represented is obtained with a multiplication of a by a rotation matrix R. This matrix multiplication is just a compact representation of a sequence of dot products. For instance, let

• B1 and B2 be two different orthonormal bases of the same space, with B2 obtained by just rotating B1,
• a1 represent vector a in terms of B1,
• a2 represent the same vector in terms of the rotated basis B2,
• u1, v1, w1 be the rotated basis vectors of B2 represented in terms of B1.

Then the rotation from B1 to B2 is performed as follows:

$\mathbf{a}_2 = \mathbf{R}\mathbf{a}_1 = \begin{bmatrix} \mathbf{u}_1 \cdot \mathbf{a}_1 \\ \mathbf{v}_1 \cdot \mathbf{a}_1 \\ \mathbf{w}_1 \cdot \mathbf{a}_1 \end{bmatrix}.$

Notice that the rotation matrix R is assembled by using the rotated basis vectors u1, v1, w1 as its rows, and these vectors are unit vectors. By definition, Ra1 consists of a sequence of dot products between each of the three rows of R and vector a1. Each of these dot products determines a scalar component of a in the direction of a rotated basis vector (see previous section). If a1 is a row vector, rather than a column vector, then R must contain the rotated basis vectors in its columns, and must post-multiply a1:

$\mathbf{a}_2 = \mathbf{a}_1 \mathbf{R}.$
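A sketch of this rotation-as-dot-products view, assuming a rotation about the z-axis by an arbitrary example angle; the assertion checks that the matrix product is exactly the three row-wise dot products:

```python
import numpy as np

theta = np.radians(30)
# Rows are the rotated basis vectors u1, v1, w1 expressed in the
# old basis, for a basis rotated about the z-axis.
R = np.array([
    [ np.cos(theta), np.sin(theta), 0.0],
    [-np.sin(theta), np.cos(theta), 0.0],
    [ 0.0,           0.0,           1.0],
])

a1 = np.array([1.0, 2.0, 3.0])

# R @ a1 is exactly three dot products, one per row of R:
# each gives a scalar component of a along a rotated basis vector.
a2_matrix = R @ a1
a2_dots = np.array([np.dot(row, a1) for row in R])

assert np.allclose(a2_matrix, a2_dots)
print(a2_matrix)
```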


Physics

In physics, vector magnitude is a scalar in the physical sense, i.e. a physical quantity independent of the coordinate system, expressed as the product of a numerical value and a physical unit, not just a number. The dot product is also a scalar in this sense, given by the formula, independent of the coordinate system. Examples:
• Mechanical work is the dot product of force and displacement vectors.
• Magnetic flux is the dot product of the magnetic field and the area vectors.

Properties

The following properties hold if a, b, and c are real vectors and r is a scalar.

The dot product is commutative:

$\mathbf{a} \cdot \mathbf{b} = \mathbf{b} \cdot \mathbf{a}$

The dot product is distributive over vector addition:

$\mathbf{a} \cdot (\mathbf{b} + \mathbf{c}) = \mathbf{a} \cdot \mathbf{b} + \mathbf{a} \cdot \mathbf{c}$

The dot product is bilinear:

$\mathbf{a} \cdot (r\mathbf{b} + \mathbf{c}) = r(\mathbf{a} \cdot \mathbf{b}) + \mathbf{a} \cdot \mathbf{c}$

When multiplied by a scalar value, the dot product satisfies:

$(c_1 \mathbf{a}) \cdot (c_2 \mathbf{b}) = c_1 c_2 (\mathbf{a} \cdot \mathbf{b})$

(these last two properties follow from the first two).

Two non-zero vectors a and b are orthogonal if and only if a · b = 0.

Unlike multiplication of ordinary numbers, where if ab = ac then b always equals c unless a is zero, the dot product does not obey the cancellation law:

If a · b = a · c and a ≠ 0, then we can write a · (b − c) = 0 by the distributive law; the result above says this just means that a is perpendicular to (b − c), which still allows (b − c) ≠ 0, and therefore b ≠ c.

Provided that the basis is orthonormal, the dot product is invariant under isometric changes of the basis: rotations, reflections, and combinations, keeping the origin fixed. The above-mentioned geometric interpretation relies on this property. In other words, for an orthonormal space with any number of dimensions, the dot product is invariant under a coordinate transformation based on an orthogonal matrix. This corresponds to the following two conditions:
• The new basis is again orthonormal (i.e., it is orthonormal expressed in the old one).
• The new base vectors have the same length as the old ones (i.e., unit length in terms of the old basis).

If a and b are functions, then the derivative of a · b is a′ · b + a · b′.


Triple product expansion

This is a very useful identity (also known as Lagrange's formula) involving the dot and cross products. It is written as

$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}),$

which is easier to remember as "BAC minus CAB", keeping in mind which vectors are dotted together. This formula is commonly used to simplify vector calculations in physics.
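A quick numerical spot-check of the BAC minus CAB identity in Python (random example vectors; a check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = rng.standard_normal((3, 3))

lhs = np.cross(a, np.cross(b, c))
rhs = b * np.dot(a, c) - c * np.dot(a, b)  # "BAC minus CAB"

assert np.allclose(lhs, rhs)
```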

Proof of the geometric interpretation

Consider the element of Rn

$\mathbf{v} = v_1 \mathbf{e}_1 + v_2 \mathbf{e}_2 + \cdots + v_n \mathbf{e}_n.$

Repeated application of the Pythagorean theorem yields for its length |v|

$|\mathbf{v}|^2 = v_1^2 + v_2^2 + \cdots + v_n^2.$

But this is the same as

$\mathbf{v} \cdot \mathbf{v} = v_1^2 + v_2^2 + \cdots + v_n^2,$

so we conclude that taking the dot product of a vector v with itself yields the squared length of the vector.

Lemma 1: $\mathbf{v} \cdot \mathbf{v} = |\mathbf{v}|^2.$

Now consider two vectors a and b extending from the origin, separated by an angle θ. A third vector c may be defined as

$\mathbf{c} \equiv \mathbf{a} - \mathbf{b},$

creating a triangle with sides a, b, and c. According to the law of cosines, we have

$|\mathbf{c}|^2 = |\mathbf{a}|^2 + |\mathbf{b}|^2 - 2|\mathbf{a}|\,|\mathbf{b}| \cos\theta.$

Substituting dot products for the squared lengths according to Lemma 1, we get

$\mathbf{c} \cdot \mathbf{c} = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2|\mathbf{a}|\,|\mathbf{b}| \cos\theta. \qquad (1)$

But as c ≡ a − b, we also have

$\mathbf{c} \cdot \mathbf{c} = (\mathbf{a} - \mathbf{b}) \cdot (\mathbf{a} - \mathbf{b}),$

which, according to the distributive law, expands to

$\mathbf{c} \cdot \mathbf{c} = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2(\mathbf{a} \cdot \mathbf{b}). \qquad (2)$

Merging the two c · c equations, (1) and (2), we obtain

$\mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2(\mathbf{a} \cdot \mathbf{b}) = \mathbf{a} \cdot \mathbf{a} + \mathbf{b} \cdot \mathbf{b} - 2|\mathbf{a}|\,|\mathbf{b}| \cos\theta.$

Subtracting a · a + b · b from both sides and dividing by −2 leaves

$\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos\theta.$

Q.E.D.


Generalization

The inner product generalizes the dot product to abstract vector spaces and is usually denoted by $\langle \mathbf{a}, \mathbf{b} \rangle$. Due to the geometric interpretation of the dot product, the norm ||a|| of a vector a in such an inner product space is defined as

$\|\mathbf{a}\| = \sqrt{\langle \mathbf{a}, \mathbf{a} \rangle}$

such that it generalizes length, and the angle θ between two vectors a and b by

$\cos\theta = \frac{\langle \mathbf{a}, \mathbf{b} \rangle}{\|\mathbf{a}\|\,\|\mathbf{b}\|}.$

In particular, two vectors are considered orthogonal if their inner product is zero:

$\langle \mathbf{a}, \mathbf{b} \rangle = 0.$

For vectors with complex entries, using the given definition of the dot product would lead to quite different geometric properties. For instance, the dot product of a vector with itself can be an arbitrary complex number, and can be zero without the vector being the zero vector; this in turn would have severe consequences for notions like length and angle. Many geometric properties can be salvaged, at the cost of giving up the symmetric and bilinear properties of the scalar product, by alternatively defining

$\mathbf{a} \cdot \mathbf{b} = \sum_i a_i \overline{b_i}$

where $\overline{b_i}$ is the complex conjugate of bi. Then the scalar product of any vector with itself is a non-negative real number, and it is nonzero except for the zero vector. However this scalar product is not linear in b (but rather conjugate linear), and the scalar product is not symmetric either, since

$\mathbf{a} \cdot \mathbf{b} = \overline{\mathbf{b} \cdot \mathbf{a}}.$

This type of scalar product is nevertheless quite useful, and leads to the notions of Hermitian form and of general inner product spaces. The Frobenius inner product generalizes the dot product to matrices. It is defined as the sum of the products of the corresponding components of two matrices having the same size.
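A short Python sketch contrasting the naive complex dot product with the conjugating definition above (the example vector is arbitrary):

```python
import numpy as np

z = np.array([1 + 1j, 2 - 3j])

naive = np.sum(z * z)             # plain sum of products: a complex number
hermitian = np.sum(z * z.conj())  # conjugating definition: non-negative real

print(naive)          # (-5-10j): useless as a squared length
print(hermitian)      # (15+0j): equals |z|^2
print(np.vdot(z, z))  # NumPy's vdot conjugates its first argument: (15+0j)
```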

Generalization to tensors

The dot product between a tensor of order n and a tensor of order m is a tensor of order n + m − 2. The dot product is calculated by multiplying and summing across a single index in both tensors. If $\mathbf{A}$ and $\mathbf{B}$ are two tensors with element representations $a_{i \dots j k}$ and $b_{k \ell \dots m}$, the elements of the dot product are given by

$(\mathbf{A} \cdot \mathbf{B})_{i \dots j\, \ell \dots m} = \sum_{k} a_{i \dots j k}\, b_{k \ell \dots m}.$

This definition naturally reduces to the standard vector dot product when applied to vectors, and to matrix multiplication when applied to matrices. Occasionally, a double dot product is used to represent multiplying and summing across two indices. The double dot product between two second-order tensors is a scalar.
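As an illustration, NumPy's tensordot expresses both contractions directly; the shapes and values below are arbitrary examples:

```python
import numpy as np

A = np.arange(24.0).reshape(2, 3, 4)  # order-3 tensor
B = np.arange(20.0).reshape(4, 5)     # order-2 tensor

# Single dot: contract A's last index with B's first -> order 3+2-2 = 3.
single = np.tensordot(A, B, axes=1)
print(single.shape)  # (2, 3, 5)

# Double dot of two order-2 tensors: contract both indices -> a scalar.
M = np.arange(6.0).reshape(2, 3)
N = np.ones((2, 3))
double = np.tensordot(M, N, axes=2)
print(double)  # 15.0, the sum of all entries of M
```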


External links

• Weisstein, Eric W., "Dot product [1]" from MathWorld.
• A quick geometrical derivation and interpretation of dot product [2]

• Interactive GeoGebra Applet [3]

• Java demonstration of dot product [4]

• Another Java demonstration of dot product [5]

• Explanation of dot product including with complex vectors [6]

• "Dot Product" [7] by Bruce Torrence, Wolfram Demonstrations Project, 2007.

References

[1] http://mathworld.wolfram.com/DotProduct.html
[2] http://behindtheguesses.blogspot.com/2009/04/dot-and-cross-products.html
[3] http://xahlee.org/SpecialPlaneCurves_dir/ggb/Vector_Dot_Product.html
[4] http://www.falstad.com/dotproduct/
[5] http://www.cs.brown.edu/exploratories/freeSoftware/repository/edu/brown/cs/exploratories/applets/dotProduct/dot_product_guide.html
[6] http://www.mathreference.com/la,dot.html
[7] http://demonstrations.wolfram.com/DotProduct/

Cross product

In mathematics, the cross product, vector product, or Gibbs vector product is a binary operation on two vectors in three-dimensional space. It results in a vector which is perpendicular to both of the vectors being multiplied and normal to the plane containing them. It has many applications in mathematics, engineering and physics.

If either of the vectors being multiplied is zero or the vectors are parallel then their cross product is zero. More generally, the magnitude of the product equals the area of a parallelogram with the vectors for sides; in particular, for perpendicular vectors this is a rectangle and the magnitude of the product is the product of their lengths. The cross product is anticommutative, distributive over addition and satisfies the Jacobi identity. The space and product form an algebra over a field, which is neither commutative nor associative, but is a Lie algebra with the cross product being the Lie bracket.

Like the dot product, it depends on the metric of Euclidean space, but unlike the dot product, it also depends on the choice of orientation or "handedness". The product can be generalized in various ways; it can be made independent of orientation by changing the result to a pseudovector, or in arbitrary dimensions the exterior product of vectors can be used with a bivector or two-form result. Also, using the orientation and metric structure just as for the traditional 3d cross product, one can in n dimensions take the product of n − 1 vectors to produce a vector perpendicular to all of them. But if the product is limited to non-trivial binary products with vector results it exists only in three and seven dimensions.


[Figure: The cross product with respect to a right-handed coordinate system.]

Definition

[Figure: Finding the direction of the cross product by the right-hand rule.]

The cross product of two vectors a and b is denoted by a × b. In physics, sometimes the notation a ∧ b is used,[1] though this is avoided in mathematics to avoid confusion with the exterior product.

The cross product a × b is defined as a vector c that is perpendicular to both a and b, with a direction given by the right-hand rule and a magnitude equal to the area of the parallelogram that the vectors span.

The cross product is defined by the formula[2] [3]

$\mathbf{a} \times \mathbf{b} = a\, b \sin\theta\; \mathbf{n}$

where θ is the measure of the smaller angle between a and b (0° ≤ θ ≤ 180°), a and b are the magnitudes of vectors a and b (i.e., a = |a| and b = |b|), and n is a unit vector perpendicular to the plane containing a and b in the direction given by the right-hand rule as illustrated. If the vectors a and b are parallel (i.e., the angle θ between them is either 0° or 180°), by the above formula, the cross product of a and b is the zero vector 0.

The direction of the vector n is given by the right-hand rule, where one simply points the forefinger of the right hand in the direction of a and the middle finger in the direction of b. Then, the vector n is coming out of the thumb (see the picture on the right). Using this rule implies that the cross product is anti-commutative, i.e., b × a = −(a × b). By pointing the forefinger toward b first, and then pointing the middle finger toward a, the thumb will be forced in the opposite direction, reversing the sign of the product vector.

Using the cross product requires the handedness of the coordinate system to be taken into account (as explicit in the definition above). If a left-handed coordinate system is used, the direction of the vector n is given by the left-hand


rule and points in the opposite direction.

This, however, creates a problem because transforming from one arbitrary reference system to another (e.g., a mirror image transformation from a right-handed to a left-handed coordinate system) should not change the direction of n. The problem is clarified by realizing that the cross product of two vectors is not a (true) vector, but rather a pseudovector. See cross product and handedness for more detail.

Computing the cross product

Coordinate notation

The basis vectors i, j, and k satisfy the following equalities:

$\mathbf{i} \times \mathbf{j} = \mathbf{k}, \qquad \mathbf{j} \times \mathbf{k} = \mathbf{i}, \qquad \mathbf{k} \times \mathbf{i} = \mathbf{j}.$

Together with the skew-symmetry and bilinearity of the product, these three identities are sufficient to determine the cross product of any two vectors. In particular, the following identities can be established:

$\mathbf{j} \times \mathbf{i} = -\mathbf{k}, \qquad \mathbf{k} \times \mathbf{j} = -\mathbf{i}, \qquad \mathbf{i} \times \mathbf{k} = -\mathbf{j},$

$\mathbf{i} \times \mathbf{i} = \mathbf{j} \times \mathbf{j} = \mathbf{k} \times \mathbf{k} = \mathbf{0}$ (the zero vector).

These can be used to compute the product of two general vectors, a = a1i + a2j + a3k and b = b1i + b2j + b3k, by expanding the product using distributivity then collecting similar terms:

$\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2)\,\mathbf{i} + (a_3 b_1 - a_1 b_3)\,\mathbf{j} + (a_1 b_2 - a_2 b_1)\,\mathbf{k}.$

Or written as column vectors:

$\mathbf{a} \times \mathbf{b} = \begin{bmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{bmatrix}.$

Matrix notation

The definition of the cross product can also be represented by the determinant of a formal matrix:

$\mathbf{a} \times \mathbf{b} = \begin{vmatrix} \mathbf{i} & \mathbf{j} & \mathbf{k} \\ a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{vmatrix}.$

This determinant can be computed using Sarrus' rule or cofactor expansion. Using Sarrus' rule, it expands to

$\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2)\,\mathbf{i} + (a_3 b_1 - a_1 b_3)\,\mathbf{j} + (a_1 b_2 - a_2 b_1)\,\mathbf{k}.$

Using cofactor expansion along the first row instead, it expands to[4]

$\mathbf{a} \times \mathbf{b} = \begin{vmatrix} a_2 & a_3 \\ b_2 & b_3 \end{vmatrix} \mathbf{i} - \begin{vmatrix} a_1 & a_3 \\ b_1 & b_3 \end{vmatrix} \mathbf{j} + \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \end{vmatrix} \mathbf{k},$

which gives the components of the resulting vector directly.
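A minimal Python sketch of the component formula, checked against NumPy's built-in cross (the example vectors are arbitrary):

```python
import numpy as np

def cross(a, b):
    """Cross product of two 3-vectors from the component formula."""
    return np.array([
        a[1] * b[2] - a[2] * b[1],
        a[2] * b[0] - a[0] * b[2],
        a[0] * b[1] - a[1] * b[0],
    ])

a = np.array([2.0, 3.0, 4.0])
b = np.array([5.0, 6.0, 7.0])
assert np.allclose(cross(a, b), np.cross(a, b))
print(cross(a, b))  # [-3.  6. -3.]
```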


Properties

Geometric meaning

[Figure 1: The area of a parallelogram as a cross product.]

[Figure 2: Three vectors defining a parallelepiped.]

The magnitude of the cross product can be interpreted as the positive area of the parallelogram having a and b as sides (see Figure 1):

$A = |\mathbf{a} \times \mathbf{b}| = |\mathbf{a}|\,|\mathbf{b}| \sin\theta.$

Indeed, one can also compute the volume V of a parallelepiped having a, b and c as sides by using a combination of a cross product and a dot product, called the scalar triple product (see Figure 2):

$V = |\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})|.$

Since the result of the scalar triple product may be negative, the volume of the parallelepiped is given by its absolute value.

Because the magnitude of the cross product goes by the sine of the angle between its arguments, the cross product can be thought of as a measure of "perpendicularness" in the same way that the dot product is a measure of "parallelness". Given two unit vectors, their cross product has a magnitude of 1 if the two are perpendicular and a magnitude of zero if the two are parallel. The opposite is true for the dot product of two unit vectors.
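A short Python illustration of the area and volume interpretations (the example vectors are chosen so the answers are easy to verify by hand):

```python
import numpy as np

a = np.array([1.0, 0.0, 0.0])
b = np.array([1.0, 2.0, 0.0])
c = np.array([0.0, 0.0, 3.0])

# Area of the parallelogram with sides a and b.
area = np.linalg.norm(np.cross(a, b))
print(area)  # 2.0

# Signed volume of the parallelepiped: the scalar triple product.
volume = np.dot(a, np.cross(b, c))
print(abs(volume))  # 6.0
```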


Algebraic properties

The cross product is anticommutative,

$\mathbf{a} \times \mathbf{b} = -(\mathbf{b} \times \mathbf{a}),$

distributive over addition,

$\mathbf{a} \times (\mathbf{b} + \mathbf{c}) = (\mathbf{a} \times \mathbf{b}) + (\mathbf{a} \times \mathbf{c}),$

and compatible with scalar multiplication so that

$(r\mathbf{a}) \times \mathbf{b} = \mathbf{a} \times (r\mathbf{b}) = r(\mathbf{a} \times \mathbf{b}).$

It is not associative, but satisfies the Jacobi identity:

$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) + \mathbf{b} \times (\mathbf{c} \times \mathbf{a}) + \mathbf{c} \times (\mathbf{a} \times \mathbf{b}) = \mathbf{0}.$

Distributivity, linearity and the Jacobi identity show that R3 together with vector addition and the cross product forms a Lie algebra, the Lie algebra of the real orthogonal group in 3 dimensions, SO(3). The cross product does not obey the cancellation law: a × b = a × c with non-zero a does not imply that b = c. Instead, if a × b = a × c:

$\mathbf{a} \times (\mathbf{b} - \mathbf{c}) = \mathbf{0}.$

If neither a nor b − c is zero then from the definition of the cross product the angle between them must be zero and they must be parallel. They are related by a scale factor, so one of b or c can be expressed in terms of the other, for example

$\mathbf{c} = \mathbf{b} + t\,\mathbf{a}$

for some scalar t.

for some scalar t.If a · b = a · c and a × b = a × c, for non-zero vector a, then b = c, as

and

so b − c is both parallel and perpendicular to the non-zero vector a, something that is only possible if b − c = 0 sothey are identical.From the geometrical definition the cross product is invariant under rotations about the axis defined by a × b. Moregenerally the cross product obeys the following identity under matrix transformations:

where is a 3 by 3 matrix and is the transpose of the inverseThe cross product of two vectors in 3-D always lies in the null space of the matrix with the vectors as rows:


Differentiation

The product rule applies to the cross product in a similar manner:

$\frac{d}{dt}(\mathbf{a} \times \mathbf{b}) = \frac{d\mathbf{a}}{dt} \times \mathbf{b} + \mathbf{a} \times \frac{d\mathbf{b}}{dt}.$

This identity can be easily proved using the matrix multiplication representation.

Triple product expansion

The cross product is used in both forms of the triple product. The scalar triple product of three vectors is defined as

$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}).$

It is the signed volume of the parallelepiped with edges a, b and c and as such the vectors can be used in any order that's an even permutation of the above ordering. The following therefore are equal:

$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}).$

The vector triple product is the cross product of a vector with the result of another cross product, and is related to the dot product by the following formula:

$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}).$

The mnemonic "BAC minus CAB" is used to remember the order of the vectors in the right hand member. This formula is used in physics to simplify vector calculations. A special case, regarding gradients and useful in vector calculus, is

$\nabla \times (\nabla \times \mathbf{f}) = \nabla(\nabla \cdot \mathbf{f}) - \nabla^2 \mathbf{f},$

where ∇2 is the vector Laplacian operator. Another identity relates the cross product to the scalar triple product:

$(\mathbf{a} \times \mathbf{b}) \times (\mathbf{a} \times \mathbf{c}) = (\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}))\,\mathbf{a}.$

Alternative formulation

The cross product and the dot product are related by:

$|\mathbf{a} \times \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2.$

The right-hand side is the Gram determinant of a and b, the square of the area of the parallelogram defined by the vectors. This condition determines the magnitude of the cross product. Namely, since the dot product is defined, in terms of the angle θ between the two vectors, as:

$\mathbf{a} \cdot \mathbf{b} = |\mathbf{a}|\,|\mathbf{b}| \cos\theta,$

the above given relationship can be rewritten as follows:

$|\mathbf{a} \times \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 (1 - \cos^2\theta).$

Invoking the Pythagorean trigonometric identity one obtains:

$|\mathbf{a} \times \mathbf{b}| = |\mathbf{a}|\,|\mathbf{b}|\,|\sin\theta|,$

which is the magnitude of the cross product expressed in terms of θ, equal to the area of the parallelogram defined by a and b (see definition above). The combination of this requirement and the property that the cross product be orthogonal to its constituents a and b provides an alternative definition of the cross product.[5]
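A numerical spot-check of this relation between the cross and dot products (random example vectors):

```python
import numpy as np

rng = np.random.default_rng(1)
a, b = rng.standard_normal((2, 3))

lhs = np.linalg.norm(np.cross(a, b)) ** 2
rhs = np.dot(a, a) * np.dot(b, b) - np.dot(a, b) ** 2

assert np.isclose(lhs, rhs)
```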


Lagrange's identity

The relation

$|\mathbf{a} \times \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2$

can be compared with another relation involving the right-hand side, namely Lagrange's identity expressed as:[6]

$\sum_{1 \le i < j \le n} (a_i b_j - a_j b_i)^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2,$

where a and b may be n-dimensional vectors. In the case n = 3, combining these two equations results in the expression for the magnitude of the cross product in terms of its components:[7]

$|\mathbf{a} \times \mathbf{b}|^2 = (a_1 b_2 - a_2 b_1)^2 + (a_2 b_3 - a_3 b_2)^2 + (a_3 b_1 - a_1 b_3)^2.$

The same result is found directly using the components of the cross product found from:

$\mathbf{a} \times \mathbf{b} = (a_2 b_3 - a_3 b_2)\,\mathbf{i} + (a_3 b_1 - a_1 b_3)\,\mathbf{j} + (a_1 b_2 - a_2 b_1)\,\mathbf{k}.$

In R3 Lagrange's equation is a special case of the multiplicativity |vw| = |v||w| of the norm in the quaternion algebra. It is a special case of another formula, also sometimes called Lagrange's identity, which is the three-dimensional case of the Binet–Cauchy identity:[8] [9]

$(\mathbf{a} \times \mathbf{b}) \cdot (\mathbf{c} \times \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}).$

If a = c and b = d this simplifies to the formula above.

Alternative ways to compute the cross product

Conversion to matrix multiplication

The vector cross product also can be expressed as the product of a skew-symmetric matrix and a vector:[8]

$\mathbf{a} \times \mathbf{b} = [\mathbf{a}]_{\times}\, \mathbf{b} = ([\mathbf{b}]_{\times})^{\mathrm T}\, \mathbf{a},$

where superscript T refers to the transpose matrix, and [a]× is defined by:

$[\mathbf{a}]_{\times} = \begin{bmatrix} 0 & -a_3 & a_2 \\ a_3 & 0 & -a_1 \\ -a_2 & a_1 & 0 \end{bmatrix}.$

Also, if a is itself a cross product:

$\mathbf{a} = \mathbf{c} \times \mathbf{d},$

then

$[\mathbf{a}]_{\times} = \mathbf{d}\mathbf{c}^{\mathrm T} - \mathbf{c}\mathbf{d}^{\mathrm T}.$

This result can be generalized to higher dimensions using geometric algebra. In particular, in any dimension bivectors can be identified with skew-symmetric matrices, so the product between a skew-symmetric matrix and vector is equivalent to the grade-1 part of the product of a bivector and vector. In three dimensions bivectors are dual to vectors so the product is equivalent to the cross product, with the bivector instead of its vector dual. In higher dimensions the product can still be calculated but bivectors have more degrees of freedom and are not equivalent to


vectors.

This notation is also often much easier to work with, for example, in epipolar geometry. From the general properties of the cross product it follows immediately that

$[\mathbf{a}]_{\times}\, \mathbf{a} = \mathbf{0}$ and $\mathbf{a}^{\mathrm T}\, [\mathbf{a}]_{\times} = \mathbf{0},$

and from the fact that $[\mathbf{a}]_{\times}$ is skew-symmetric it follows that

$\mathbf{b}^{\mathrm T}\, [\mathbf{a}]_{\times}\, \mathbf{b} = 0.$

The above-mentioned triple product expansion (bac-cab rule) can be easily proven using this notation. The above definition of $[\mathbf{a}]_{\times}$ means that there is a one-to-one mapping between the set of 3×3 skew-symmetric matrices, also known as the Lie algebra of SO(3), and the operation of taking the cross product with some vector a.
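A small Python sketch of the skew-symmetric representation; the helper name skew is an arbitrary choice, and the assertions check the properties just listed:

```python
import numpy as np

def skew(a):
    """Skew-symmetric matrix [a]_x such that skew(a) @ b == a x b."""
    return np.array([
        [0.0,   -a[2],  a[1]],
        [a[2],   0.0,  -a[0]],
        [-a[1],  a[0],  0.0],
    ])

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

assert np.allclose(skew(a) @ b, np.cross(a, b))
assert np.allclose(skew(a) @ a, 0.0)        # [a]_x a = 0
assert np.allclose(skew(a), -skew(a).T)     # skew-symmetry
```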

Index notation

The cross product can alternatively be defined in terms of the Levi-Civita symbol εijk:

$(\mathbf{a} \times \mathbf{b})_i = \sum_{j,k=1}^{3} \varepsilon_{ijk}\, a_j b_k,$

where the indices i, j, k correspond, as in the previous section, to orthogonal vector components. This characterization of the cross product is often expressed more compactly using the Einstein summation convention as

$(\mathbf{a} \times \mathbf{b})_i = \varepsilon_{ijk}\, a_j b_k,$

in which repeated indices are summed from 1 to 3. Note that this representation is another form of the skew-symmetric representation of the cross product:

$([\mathbf{a}]_{\times})_{ik} = \varepsilon_{ijk}\, a_j.$

In classical mechanics, representing the cross product with the Levi-Civita symbol can make mechanical symmetries obvious when physical systems are isotropic in space. (Quick example: consider a particle in a Hooke's law potential in three-space, free to oscillate in three dimensions; none of these dimensions are "special" in any sense, so symmetries lie in the cross-product-represented angular momentum, which are made clear by the above-mentioned Levi-Civita representation.)
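The index-notation definition translates almost literally into NumPy's einsum; building the Levi-Civita symbol explicitly as below is one common approach, not the only one:

```python
import numpy as np

# Build the Levi-Civita symbol eps[i, j, k].
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# (a x b)_i = eps_ijk a_j b_k, summing over the repeated indices j, k.
cross_index = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(cross_index, np.cross(a, b))
```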

MnemonicThe word "xyzzy" can be used to remember the definition of the cross product.If

where:

then:

The second and third equations can be obtained from the first by simply vertically rotating the subscripts, x → y → z→ x. The problem, of course, is how to remember the first equation, and two options are available for this purpose:either to remember the relevant two diagonals of Sarrus's scheme (those containing i), or to remember the xyzzysequence.


Since the first diagonal in Sarrus's scheme is just the main diagonal of the above-mentioned matrix, the first three letters of the word xyzzy can be very easily remembered.

Cross visualization

Similarly to the mnemonic device above, a "cross" or X can be visualized between the two vectors in the equation. While this method does not have any real mathematical basis, it may help in remembering the correct cross product formula. If

$\mathbf{a} \times \mathbf{b} = \begin{bmatrix} a_1 \\ a_2 \\ a_3 \end{bmatrix} \times \begin{bmatrix} b_1 \\ b_2 \\ b_3 \end{bmatrix},$

then:

$\mathbf{a} \times \mathbf{b} = \begin{bmatrix} a_2 b_3 - a_3 b_2 \\ a_3 b_1 - a_1 b_3 \\ a_1 b_2 - a_2 b_1 \end{bmatrix}.$

If we want to obtain the formula for the first component, we simply drop $a_1$ and $b_1$ from the formula, and take the next two components down. It should be noted that when doing this, the next two elements down should "wrap around" the matrix so that after the z component comes the x component. For clarity, when performing this operation for the second component, the next two components should be z and x (in that order), while for the third component they should be taken as x and y.

For the first component then, if we visualize the cross operator as pointing from an element on the left to an element on the right, we can take the first element on the left and simply multiply by the element that the cross points to in the right-hand matrix. We then subtract the next element down on the left, multiplied by the element that the cross points to here as well. This results in the formula

$(\mathbf{a} \times \mathbf{b})_1 = a_2 b_3 - a_3 b_2.$

We can do this in the same way for the second and third components to construct their associated formulas.

Applications

Computational geometry

The cross product can be used to calculate the normal for a triangle or polygon, an operation frequently performed in computer graphics.

In computational geometry of the plane, the cross product is used to determine the sign of the acute angle defined by three points $p_1 = (x_1, y_1)$, $p_2 = (x_2, y_2)$ and $p_3 = (x_3, y_3)$. It corresponds to the direction of the cross product of the two coplanar vectors defined by the pairs of points $(p_1, p_2)$ and $(p_1, p_3)$, i.e., by the sign of the expression

$P = (x_2 - x_1)(y_3 - y_1) - (y_2 - y_1)(x_3 - x_1).$

In the "right-handed" coordinate system, if the result is 0, the points are collinear; if it is positive, the three points constitute a positive angle of rotation around $p_1$ from $p_2$ to $p_3$, otherwise a negative angle. From another point of view, the sign of P tells whether $p_3$ lies to the left or to the right of the line $p_1 p_2$.


Mechanics

The moment of a force $\mathbf{F}_B$ applied at point B around point A is given as:

$\mathbf{M}_A = \mathbf{r}_{AB} \times \mathbf{F}_B.$

Other

The cross product occurs in the formula for the vector operator curl. It is also used to describe the Lorentz force experienced by a moving electrical charge in a magnetic field. The definitions of torque and angular momentum also involve the cross product. The trick of rewriting a cross product in terms of a matrix multiplication appears frequently in epipolar and multi-view geometry, in particular when deriving matching constraints.

Cross product as an exterior product

[Figure: The cross product in relation to the exterior product. In red: the orthogonal unit vector and the "parallel" unit bivector.]

The cross product can be viewed in terms of the exterior product. This view allows for a natural geometric interpretation of the cross product. In exterior calculus the exterior product (or wedge product) of two vectors is a bivector. A bivector is an oriented plane element, in much the same way that a vector is an oriented line element. Given two vectors a and b, one can view the bivector a ∧ b as the oriented parallelogram spanned by a and b. The cross product is then obtained by taking the Hodge dual of the bivector a ∧ b, identifying 2-vectors with vectors:

$\mathbf{a} \times \mathbf{b} = \star(\mathbf{a} \wedge \mathbf{b}).$

This can be thought of as the oriented multi-dimensional element "perpendicular" to the bivector. Only in three dimensions is the result an oriented line element – a vector – whereas, for example, in 4 dimensions the Hodge dual of a bivector is two-dimensional – another oriented plane element. So, in three dimensions only is the cross product of a and b the vector dual to the bivector a ∧ b: it is perpendicular to the bivector, with orientation dependent on the coordinate system's handedness, and has the same magnitude relative to the unit normal vector as a ∧ b has relative to the unit bivector; precisely the properties described above.

Cross product and handedness

When measurable quantities involve cross products, the handedness of the coordinate systems used cannot be arbitrary. However, when physics laws are written as equations, it should be possible to make an arbitrary choice of the coordinate system (including handedness). To avoid problems, one should be careful to never write down an equation where the two sides do not behave equally under all transformations that need to be considered. For example, if one side of the equation is a cross product of two vectors, one must take into account that when the handedness of the coordinate system is not fixed a priori, the result is not a (true) vector but a pseudovector. Therefore, for consistency, the other side must also be a pseudovector.

More generally, the result of a cross product may be either a vector or a pseudovector, depending on the type of its operands (vectors or pseudovectors). Namely, vectors and pseudovectors are interrelated in the following ways under


application of the cross product:
• vector × vector = pseudovector
• pseudovector × pseudovector = pseudovector
• vector × pseudovector = vector
• pseudovector × vector = vector

So by the above relationships, the unit basis vectors i, j and k of an orthonormal, right-handed (Cartesian) coordinate frame must all be pseudovectors (if a basis of mixed vector types is disallowed, as it normally is) since i × j = k, j × k = i and k × i = j.

Because the cross product may also be a (true) vector, it may not change direction with a mirror image transformation. This happens, according to the above relationships, if one of the operands is a (true) vector and the other one is a pseudovector (e.g., the cross product of two vectors). For instance, a vector triple product involving three (true) vectors is a (true) vector.

A handedness-free approach is possible using exterior algebra.

Generalizations

There are several ways to generalize the cross product to higher dimensions.

Lie algebra

The cross product can be seen as one of the simplest Lie products, and is thus generalized by Lie algebras, which are axiomatized as binary products satisfying the axioms of multilinearity, skew-symmetry, and the Jacobi identity. Many Lie algebras exist, and their study is a major field of mathematics, called Lie theory. For example, the Heisenberg algebra gives another Lie algebra structure on $\mathbf{R}^3$: in the basis $\{x, y, z\}$ the product is

$[x, y] = z, \qquad [x, z] = [y, z] = 0.$

Quaternions

The cross product can also be described in terms of quaternions, and this is why the letters i, j, k are a convention for the standard basis on $\mathbf{R}^3$. The unit vectors i, j, k correspond to "binary" (180°) rotations about their respective axes (Altmann, S. L., 1986, Ch. 12), said rotations being represented by "pure" quaternions (zero scalar part) with unit norms.

For instance, the above given cross product relations among i, j, and k agree with the multiplicative relations among the quaternions i, j, and k. In general, if a vector [a1, a2, a3] is represented as the quaternion a1i + a2j + a3k, the cross product of two vectors can be obtained by taking their product as quaternions and deleting the real part of the result. The real part will be the negative of the dot product of the two vectors. Alternatively and more straightforwardly, using the above identification of the 'purely imaginary' quaternions with $\mathbf{R}^3$, the cross product may be thought of as half of the commutator of two quaternions.


Octonions

A cross product for 7-dimensional vectors can be obtained in the same way by using the octonions instead of the quaternions. The nonexistence of such cross products of two vectors in other dimensions is related to the result that the only normed division algebras are the ones with dimension 1, 2, 4, and 8; see Hurwitz's theorem.

Wedge product

In general dimension, there is no direct analogue of the binary cross product. There is however the wedge product, which has similar properties, except that the wedge product of two vectors is now a 2-vector instead of an ordinary vector. As mentioned above, the cross product can be interpreted as the wedge product in three dimensions after using Hodge duality to identify 2-vectors with vectors. The wedge product and dot product can be combined to form the Clifford product.

Multilinear algebra

In the context of multilinear algebra, the cross product can be seen as the (1,2)-tensor (a mixed tensor) obtained from the 3-dimensional volume form,[10] a (0,3)-tensor, by raising an index.

In detail, the 3-dimensional volume form defines a product $V \times V \times V \to \mathbf{R}$, by taking the determinant of the matrix given by these 3 vectors. By duality, this is equivalent to a function $V \times V \to V^*$ (fixing any two inputs gives a function $V \to \mathbf{R}$ by evaluating on the third input), and in the presence of an inner product (such as the dot product; more generally, a non-degenerate bilinear form), we have an isomorphism $V^* \cong V$, and thus this yields a map $V \times V \to V$, which is the cross product: a (0,3)-tensor (3 vector inputs, scalar output) has been transformed into a (1,2)-tensor (2 vector inputs, 1 vector output) by "raising an index".

Translating the above algebra into geometry, the function "volume of the parallelepiped defined by $(a, b, -)$" (where the first two vectors are fixed and the last is an input), which defines a function $V \to \mathbf{R}$, can be represented uniquely as the dot product with a vector: this vector is the cross product $a \times b$. From this perspective, the cross product is defined by the scalar triple product, $\mathrm{Vol}(a, b, c) = (a \times b) \cdot c$.

In the same way, in higher dimensions one may define generalized cross products by raising indices of the n-dimensional volume form, which is a $(0, n)$-tensor. The most direct generalizations of the cross product are to define either:

• a $(1, n-1)$-tensor, which takes as input $n - 1$ vectors, and gives as output 1 vector – an $(n-1)$-ary vector-valued product, or
• a $(n-2, 2)$-tensor, which takes as input 2 vectors and gives as output a skew-symmetric tensor of rank $n - 2$ – a binary product with rank $n - 2$ tensor values. One can also define $(k, n-k)$-tensors for other k.

These products are all multilinear and skew-symmetric, and can be defined in terms of the determinant and parity. The $(n-1)$-ary product can be described as follows: given $n - 1$ vectors $v_1, \dots, v_{n-1}$ in $\mathbf{R}^n$, define their generalized cross product $v_n = v_1 \times \cdots \times v_{n-1}$ as:
• perpendicular to the hyperplane defined by the $v_i$,
• magnitude is the volume of the parallelotope defined by the $v_i$, which can be computed as the Gram determinant of the $v_i$,
• oriented so that $v_1, \dots, v_n$ is positively oriented.

This is the unique multilinear, alternating product which evaluates to $e_1 \times \cdots \times e_{n-1} = e_n$, $e_2 \times \cdots \times e_n = e_1$, and so forth for cyclic permutations of indices.

In coordinates, one can give a formula for this $(n-1)$-ary analogue of the cross product in $\mathbf{R}^n$ by:

$\Lambda(\mathbf{v}_1, \dots, \mathbf{v}_{n-1}) = \begin{vmatrix} v_1{}^1 & \cdots & v_1{}^n \\ \vdots & \ddots & \vdots \\ v_{n-1}{}^1 & \cdots & v_{n-1}{}^n \\ \mathbf{e}_1 & \cdots & \mathbf{e}_n \end{vmatrix}.$


This formula is identical in structure to the determinant formula for the normal cross product in R3 except that the row of basis vectors is the last row in the determinant rather than the first. The reason for this is to ensure that the ordered vectors (v1, ..., vn−1, Λ(v1, ..., vn−1)) have a positive orientation with respect to (e1, ..., en). If n is odd, this modification leaves the value unchanged, so this convention agrees with the normal definition of the binary product. In the case that n is even, however, the distinction must be kept. This $(n-1)$-ary form enjoys many of the same properties as the vector cross product: it is alternating and linear in its arguments, it is perpendicular to each argument, and its magnitude gives the hypervolume of the region bounded by the arguments. And just like the vector cross product, it can be defined in a coordinate-independent way as the Hodge dual of the wedge product of the arguments.

History

In 1773, Joseph Louis Lagrange introduced the component form of both the dot and cross products in order to study the tetrahedron in three dimensions.[11] In 1843 the Irish mathematical physicist Sir William Rowan Hamilton introduced the quaternion product, and with it the terms "vector" and "scalar". Given two quaternions [0, u] and [0, v], where u and v are vectors in R3, their quaternion product can be summarized as [−u·v, u×v]. James Clerk Maxwell used Hamilton's quaternion tools to develop his famous electromagnetism equations, and for this and other reasons quaternions for a time were an essential part of physics education.

In 1878 William Kingdon Clifford published his Elements of Dynamic, which brought together many mathematical ideas. He defined the product of two vectors to have magnitude equal to the area of the parallelogram of which they are two sides, and direction perpendicular to their plane.

Oliver Heaviside in England and Josiah Willard Gibbs, a professor at Yale University in Connecticut, also felt that quaternion methods were too cumbersome, often requiring the scalar or vector part of a result to be extracted. Thus, about forty years after the quaternion product, the dot product and cross product were introduced, to heated opposition. Pivotal to (eventual) acceptance was the efficiency of the new approach, allowing Heaviside to reduce the equations of electromagnetism from Maxwell's original 20 to the four commonly seen today.[12]

Largely independent of this development, and largely unappreciated at the time, Hermann Grassmann created a geometric algebra not tied to dimension two or three, with the exterior product playing a central role. William Kingdon Clifford combined the algebras of Hamilton and Grassmann to produce Clifford algebra, where in the case of three-dimensional vectors the bivector produced from two vectors dualizes to a vector, thus reproducing the cross product.

The cross notation, which began with Gibbs, inspired the name "cross product". Originally it appeared in privately published notes for his students in 1881 as Elements of Vector Analysis. The utility for mechanics was noted by Aleksandr Kotelnikov. Gibbs's notation, and the name, later reached a wide audience through Vector Analysis, a textbook by Edwin Bidwell Wilson, a former student. Wilson rearranged material from Gibbs's lectures, together with material from publications by Heaviside, Föppl, and Hamilton. He divided vector analysis into three parts:

First, that which concerns addition and the scalar and vector products of vectors. Second, that which concernsthe differential and integral calculus in its relations to scalar and vector functions. Third, that which containsthe theory of the linear vector function.

Two main kinds of vector multiplications were defined, and they were called as follows:
• The direct, scalar, or dot product of two vectors
• The skew, vector, or cross product of two vectors


Several kinds of triple products and products of more than three vectors were also examined. The above-mentioned triple product expansion was also included.

Notes

[1] Jeffreys, H and Jeffreys, BS (1999). Methods of mathematical physics (http://worldcat.org/oclc/41158050?tab=details). Cambridge University Press.
[2] Wilson 1901, p. 60–61
[3] Dennis G. Zill, Michael R. Cullen (2006). "Definition 7.4: Cross product of two vectors" (http://books.google.com/?id=x7uWk8lxVNYC&pg=PA324). Advanced engineering mathematics (3rd ed.). Jones & Bartlett Learning. p. 324. ISBN 076374591X.
[4] Dennis G. Zill, Michael R. Cullen (2006). "Equation 7: a × b as sum of determinants" (http://books.google.com/?id=x7uWk8lxVNYC&pg=PA321). Cited work. Jones & Bartlett Learning. p. 321. ISBN 076374591X.
[5] WS Massey (Dec. 1983). "Cross products of vectors in higher dimensional Euclidean spaces" (http://www.jstor.org/stable/2323537). The American Mathematical Monthly 90 (10): 697–701. doi:10.2307/2323537.
[6] Vladimir A. Boichenko, Gennadiĭ Alekseevich Leonov, Volker Reitmann (2005). Dimension theory for ordinary differential equations (http://books.google.com/?id=9bN1-b_dSYsC&pg=PA26). Vieweg+Teubner Verlag. p. 26. ISBN 3519004372.
[7] Pertti Lounesto (2001). Clifford algebras and spinors (http://books.google.com/?id=kOsybQWDK4oC&pg=PA94) (2nd ed.). Cambridge University Press. p. 94. ISBN 0521005515.
[8] Shuangzhe Liu and Götz Trenkler (2008). "Hadamard, Khatri-Rao, Kronecker and other matrix products" (http://www.math.ualberta.ca/ijiss/SS-Volume-4-2008/No-1-08/SS-08-01-17.pdf). Int J Information and Systems Sciences (Institute for scientific computing and education) 4 (1): 160–177.
[9] Eric W. Weisstein (2003). "Binet-Cauchy identity" (http://books.google.com/?id=8LmCzWQYh_UC&pg=PA228). CRC concise encyclopedia of mathematics (2nd ed.). CRC Press. p. 228. ISBN 1584883472.
[10] By a volume form one means a function that takes in n vectors and gives out a scalar, the volume of the parallelotope defined by the vectors. This is an n-ary multilinear skew-symmetric form. In the presence of a basis, such as on $\mathbf{R}^n$, this is given by the determinant, but in an abstract vector space, this is added structure. In terms of G-structures, a volume form is an $\mathrm{SL}$-structure.
[11] Lagrange, JL (1773). "Solutions analytiques de quelques problèmes sur les pyramides triangulaires". Oeuvres. vol 3.
[12] Nahin, Paul J. (2000). Oliver Heaviside: the life, work, and times of an electrical genius of the Victorian age. JHU Press. pp. 108–109. ISBN 0-801-86909-9.

References

• Cajori, Florian (1929). A History of Mathematical Notations, Volume II (http://www.archive.org/details/historyofmathema027671mbp). Open Court Publishing. p. 134. ISBN 978-0-486-67766-8.
• William Kingdon Clifford (1878) Elements of Dynamic (http://dlxs2.library.cornell.edu/cgi/t/text/text-idx?c=math;cc=math;view=toc;subview=short;idno=04370002), Part I, page 95, London: MacMillan & Co; online presentation by Cornell University Historical Mathematical Monographs.
• E. A. Milne (1948) Vectorial Mechanics, Chapter 2: Vector Product, pp. 11–31, London: Methuen Publishing.
• Wilson, Edwin Bidwell (1901). Vector Analysis: A text-book for the use of students of mathematics and physics, founded upon the lectures of J. Willard Gibbs (http://www.archive.org/details/117714283). Yale University Press.

External links

• Weisstein, Eric W., "Cross Product" (http://mathworld.wolfram.com/CrossProduct.html) from MathWorld.
• A quick geometrical derivation and interpretation of cross products (http://behindtheguesses.blogspot.com/2009/04/dot-and-cross-products.html)
• Z.K. Silagadze (2002). Multi-dimensional vector product. Journal of Physics A35, 4949 (http://uk.arxiv.org/abs/math.la/0204357) (it is only possible in 7-D space)
• Real and Complex Products of Complex Numbers (http://www.cut-the-knot.org/arithmetic/algebra/RealComplexProducts.shtml)
• An interactive tutorial (http://physics.syr.edu/courses/java-suite/crosspro.html) created at Syracuse University (requires Java)


• W. Kahan (2007). Cross-Products and Rotations in Euclidean 2- and 3-Space. University of California, Berkeley (PDF). (http://www.cs.berkeley.edu/~wkahan/MathH110/Cross.pdf)

Triple product

In mathematics, the triple product is a product of three vectors. The name "triple product" is used for two different products, the scalar-valued scalar triple product and, less often, the vector-valued vector triple product.

Scalar triple product

[Figure: Three vectors defining a parallelepiped.]

The scalar triple product (also called the mixed or box product) is defined as the dot product of one of the vectors with the cross product of the other two.

Geometric interpretation

Geometrically, the scalar triple product

$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$

is the (signed) volume of the parallelepiped defined by the three vectors given.

Properties

The scalar triple product can be evaluated numerically using any one of the following equivalent characterizations:

$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \mathbf{b} \cdot (\mathbf{c} \times \mathbf{a}) = \mathbf{c} \cdot (\mathbf{a} \times \mathbf{b}).$

Switching the two vectors in the cross product negates the triple product, i.e.:

$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = -\mathbf{a} \cdot (\mathbf{c} \times \mathbf{b}).$

The parentheses may be omitted without causing ambiguity, since the dot product cannot be evaluated first. If it were, it would leave the cross product of a scalar and a vector, which is not defined. The scalar triple product can also be understood as the determinant of the 3 × 3 matrix having the three vectors as its rows or columns (the determinant of a transposed matrix is the same as the original):

$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \det \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \\ c_1 & c_2 & c_3 \end{bmatrix}.$

This quantity is invariant under coordinate rotation.

Note that if the scalar triple product is equal to zero, then the three vectors a, b, and c are coplanar, since the "parallelepiped" defined by them would be flat and have no volume. There is also this property of triple products:
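A brief Python check that the scalar triple product equals the determinant with the vectors as rows (the example vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])
c = np.array([7.0, 8.0, 10.0])

box = np.dot(a, np.cross(b, c))
det = np.linalg.det(np.vstack([a, b, c]))

assert np.isclose(box, det)
print(box)  # -3.0
```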


Scalar or pseudoscalar

Although the scalar triple product gives the volume of the parallelepiped, it is the signed volume, the sign depending on the orientation of the frame or the parity of the permutation of the vectors. This means the product is negated if the orientation is reversed, for example by a parity transformation, and so is more properly described as a pseudoscalar if the orientation can change.

This also relates to the handedness of the cross product; the cross product transforms as a pseudovector under parity transformations and so is properly described as a pseudovector. The dot product of two vectors is a scalar but the dot product of a pseudovector and a vector is a pseudoscalar, so the scalar triple product must be pseudoscalar-valued.

As an exterior product

[Figure: A trivector is an oriented volume element; its Hodge dual is a scalar with magnitude equal to its volume.]

In exterior algebra and geometric algebra the exterior product of two vectors is a bivector, while the exterior product of three vectors is a trivector. A bivector is an oriented plane element and a trivector is an oriented volume element, in the same way that a vector is an oriented line element. Given vectors a, b and c, the product

$\mathbf{a} \wedge \mathbf{b} \wedge \mathbf{c}$

is a trivector with magnitude equal to the scalar triple product, and is the pseudoscalar dual of the triple product. As the exterior product is associative, brackets are not needed as it does not matter which of a ∧ b or b ∧ c is calculated first, though the order of the vectors in the product does matter. Geometrically the trivector a ∧ b ∧ c corresponds to the parallelepiped spanned by a, b, and c, with bivectors a ∧ b, b ∧ c and a ∧ c matching the parallelogram faces of the parallelepiped.

Vector triple product

The vector triple product is defined as the cross product of one vector with the cross product of the other two. The following relationships hold:

$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b})$
$(\mathbf{a} \times \mathbf{b}) \times \mathbf{c} = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{a}(\mathbf{b} \cdot \mathbf{c})$

The first formula is known as triple product expansion, or Lagrange's formula,[1] [2] although the latter name is ambiguous (see disambiguation page). Its right-hand member is easier to remember by using the mnemonic "BAC minus CAB", provided one keeps in mind which vectors are dotted together. A proof is provided below. These formulas are very useful in simplifying vector calculations in physics. A related identity regarding gradients and useful in vector calculus is Lagrange's formula of vector cross-product identity:[3]

$\nabla \times (\nabla \times \mathbf{f}) = \nabla(\nabla \cdot \mathbf{f}) - \nabla^2 \mathbf{f}.$

This can also be regarded as a special case of the more general Laplace–de Rham operator.


Proof

The x component of $\mathbf{u} \times (\mathbf{v} \times \mathbf{w})$ is given by:

$u_y (v_x w_y - v_y w_x) - u_z (v_z w_x - v_x w_z)$

or

$v_x (u_y w_y + u_z w_z) - w_x (u_y v_y + u_z v_z).$

By adding and subtracting $u_x v_x w_x$, this becomes

$v_x (u_x w_x + u_y w_y + u_z w_z) - w_x (u_x v_x + u_y v_y + u_z v_z) = v_x (\mathbf{u} \cdot \mathbf{w}) - w_x (\mathbf{u} \cdot \mathbf{v}).$

Similarly, the y and z components of $\mathbf{u} \times (\mathbf{v} \times \mathbf{w})$ are given by:

$v_y (\mathbf{u} \cdot \mathbf{w}) - w_y (\mathbf{u} \cdot \mathbf{v})$ and $v_z (\mathbf{u} \cdot \mathbf{w}) - w_z (\mathbf{u} \cdot \mathbf{v}).$

By combining these three components we obtain:

$\mathbf{u} \times (\mathbf{v} \times \mathbf{w}) = \mathbf{v}(\mathbf{u} \cdot \mathbf{w}) - \mathbf{w}(\mathbf{u} \cdot \mathbf{v}).$[4]

Vector or pseudovector

Where parity transformations need to be considered, so the cross product is treated as a pseudovector, the vector triple product is vector rather than pseudovector valued, as it is the product of a vector a and a pseudovector b × c. This can also be seen from the expansion in terms of the dot product, which consists only of a sum of vectors multiplied by scalars, so must be vector valued.

Notation

Using the Levi-Civita symbol, the scalar triple product is

$\mathbf{a} \cdot (\mathbf{b} \times \mathbf{c}) = \varepsilon_{ijk}\, a_i b_j c_k$

and the vector triple product is

$(\mathbf{a} \times (\mathbf{b} \times \mathbf{c}))_i = \varepsilon_{ijk}\, a_j\, \varepsilon_{k\ell m}\, b_\ell c_m,$

which can be simplified by performing a contraction on the Levi-Civita symbols ($\varepsilon_{ijk}\varepsilon_{k\ell m} = \delta_{i\ell}\delta_{jm} - \delta_{im}\delta_{j\ell}$), and simplifying the result.

Note

[1] Joseph Louis Lagrange did not develop the cross product as an algebraic product on vectors, but did use an equivalent form of it in components: see Lagrange, J-L (1773). "Solutions analytiques de quelques problèmes sur les pyramides triangulaires". Oeuvres. vol 3. He may have written a formula similar to the triple product expansion in component form. See also Lagrange's identity and Kiyoshi Itō (1987). Encyclopedic Dictionary of Mathematics. MIT Press. p. 1679. ISBN 0262590204.
[2] Kiyoshi Itō (1993). "§C: Vector product" (http://books.google.com/books?id=azS2ktxrz3EC&pg=PA1679). Encyclopedic dictionary of mathematics (2nd ed.). MIT Press. p. 1679. ISBN 0262590204.
[3] Pengzhi Lin (2008). Numerical Modelling of Water Waves: An Introduction to Engineers and Scientists (http://books.google.com/books?id=x6ALwaliu5YC&pg=PA13). Routledge. p. 13. ISBN 0415415780.
[4] J. Heading (1970). Mathematical Methods in Science and Engineering. American Elsevier Publishing Company, Inc. pp. 262–263.



Binet–Cauchy identity

In algebra, the Binet–Cauchy identity, named after Jacques Philippe Marie Binet and Augustin-Louis Cauchy, states that[1]

$\left(\sum_{i=1}^{n} a_i c_i\right)\left(\sum_{j=1}^{n} b_j d_j\right) = \left(\sum_{i=1}^{n} a_i d_i\right)\left(\sum_{j=1}^{n} b_j c_j\right) + \sum_{1 \le i < j \le n} (a_i b_j - a_j b_i)(c_i d_j - c_j d_i)$

for every choice of real or complex numbers (or more generally, elements of a commutative ring). Setting ai = ci and bi = di, it gives Lagrange's identity, which is a stronger version of the Cauchy–Schwarz inequality for the Euclidean space $\mathbf{R}^n$.

The Binet–Cauchy identity and exterior algebra

When n = 3, the first and second terms on the right-hand side become the squared magnitudes of dot and cross products respectively; in n dimensions these become the magnitudes of the dot and wedge products. We may write it

$(\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) = (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}) + (\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d})$

where a, b, c, and d are vectors. It may also be written as a formula giving the dot product of two wedge products, as

$(\mathbf{a} \wedge \mathbf{b}) \cdot (\mathbf{c} \wedge \mathbf{d}) = (\mathbf{a} \cdot \mathbf{c})(\mathbf{b} \cdot \mathbf{d}) - (\mathbf{a} \cdot \mathbf{d})(\mathbf{b} \cdot \mathbf{c}).$

In the special case a = c and b = d, the formula yields

$|\mathbf{a} \wedge \mathbf{b}|^2 = |\mathbf{a}|^2 |\mathbf{b}|^2 - (\mathbf{a} \cdot \mathbf{b})^2.$

When both vectors are unit vectors, we obtain the usual relation

$\sin^2\varphi = 1 - \cos^2\varphi$

where φ is the angle between the vectors.

Proof

Expanding the last term,

$\sum_{1 \le i < j \le n} (a_i b_j - a_j b_i)(c_i d_j - c_j d_i) = \sum_{1 \le i < j \le n} (a_i c_i b_j d_j + a_j c_j b_i d_i) + \sum_{i=1}^{n} a_i c_i b_i d_i - \sum_{1 \le i < j \le n} (a_i d_i b_j c_j + a_j d_j b_i c_i) - \sum_{i=1}^{n} a_i d_i b_i c_i,$

where the second and fourth terms are the same and artificially added to complete the sums as follows:

$= \sum_{i=1}^{n} \sum_{j=1}^{n} a_i c_i b_j d_j - \sum_{i=1}^{n} \sum_{j=1}^{n} a_i d_i b_j c_j.$

This completes the proof after factoring out the terms indexed by i.


Generalization

A general form, also known as the Cauchy–Binet formula, states the following: Suppose A is an m×n matrix and B is an n×m matrix. If S is a subset of {1, ..., n} with m elements, we write AS for the m×m matrix whose columns are those columns of A that have indices from S. Similarly, we write BS for the m×m matrix whose rows are those rows of B that have indices from S. Then the determinant of the matrix product of A and B satisfies the identity

$\det(AB) = \sum_{S} \det(A_S) \det(B_S),$

where the sum extends over all possible subsets S of {1, ..., n} with m elements. We get the original identity as a special case by setting

$A = \begin{pmatrix} a_1 & \cdots & a_n \\ b_1 & \cdots & b_n \end{pmatrix}, \qquad B = \begin{pmatrix} c_1 & d_1 \\ \vdots & \vdots \\ c_n & d_n \end{pmatrix}.$
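A small Python verification of the Cauchy–Binet formula by brute force over all subsets S (matrix sizes and entries are arbitrary examples):

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
m, n = 2, 4
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

lhs = np.linalg.det(A @ B)
# Sum det(A_S) * det(B_S) over every m-element subset S of {0, ..., n-1}.
rhs = sum(
    np.linalg.det(A[:, S]) * np.linalg.det(B[S, :])
    for S in map(list, itertools.combinations(range(n), m))
)
assert np.isclose(lhs, rhs)
```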

In-line notes and references

[1] Eric W. Weisstein (2003). "Binet-Cauchy identity" (http://books.google.com/books?id=8LmCzWQYh_UC&pg=PA228). CRC concise encyclopedia of mathematics (2nd ed.). CRC Press. p. 228. ISBN 1584883472.

Inner product space

[Figure: Geometric interpretation of the angle between two vectors defined using an inner product.]

In mathematics, an inner product space is a vector space with an additional structure called an inner product. This additional structure associates each pair of vectors in the space with a scalar quantity known as the inner product of the vectors. Inner products allow the rigorous introduction of intuitive geometrical notions such as the length of a vector or the angle between two vectors. They also provide the means of defining orthogonality between vectors (zero inner product). Inner product spaces generalize Euclidean spaces (in which the inner product is the dot product, also known as the scalar product) to vector spaces of any (possibly infinite) dimension, and are studied in functional analysis.

An inner product naturally induces an associated norm, thus an inner product space is also a normed vector space. A complete space with an inner product is called a Hilbert space. An incomplete space with an inner product is called a pre-Hilbert space, since its completion with respect to the norm induced by the inner product becomes a Hilbert space. Inner product spaces over the field of complex numbers are sometimes referred to as unitary spaces.


Definition

In this article, the field of scalars denoted $\mathbf{F}$ is either the field of real numbers $\mathbf{R}$ or the field of complex numbers $\mathbf{C}$.

Formally, an inner product space is a vector space V over the field $\mathbf{F}$ together with an inner product, i.e., with a map

$\langle \cdot, \cdot \rangle : V \times V \to \mathbf{F}$

that satisfies the following three axioms for all vectors $x, y, z \in V$ and all scalars $a \in \mathbf{F}$:[1] [2]

• Conjugate symmetry:

$\langle x, y \rangle = \overline{\langle y, x \rangle}.$

Note that in $\mathbf{R}$, it is symmetric.

• Linearity in the first argument:

$\langle ax, y \rangle = a \langle x, y \rangle, \qquad \langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle.$

• Positive-definiteness:

$\langle x, x \rangle \ge 0,$

with equality only for $x = 0$.

Notice that conjugate symmetry implies that $\langle x, x \rangle$ is real for all $x$, since we have $\langle x, x \rangle = \overline{\langle x, x \rangle}$. Conjugate symmetry and linearity in the first variable give

$\langle x, ay \rangle = \overline{\langle ay, x \rangle} = \overline{a}\,\overline{\langle y, x \rangle} = \overline{a} \langle x, y \rangle, \qquad \langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle,$

so an inner product is a sesquilinear form. Conjugate symmetry is also called Hermitian symmetry, and a conjugate-symmetric sesquilinear form is called a Hermitian form. While the above axioms are more mathematically economical, a compact verbal definition of an inner product is a positive-definite Hermitian form.

In the case of $\mathbf{F} = \mathbf{R}$, conjugate-symmetric reduces to symmetric, and sesquilinear reduces to bilinear. So, an inner product on a real vector space is a positive-definite symmetric bilinear form.

From the linearity property it is derived that $x = 0$ implies $\langle x, x \rangle = 0$, while from the positive-definiteness axiom we obtain the converse, $\langle x, x \rangle = 0$ implies $x = 0$. Combining these two, we have the property that $\langle x, x \rangle = 0$ if and only if $x = 0$.

The property of an inner product space that

$\langle x + y, z \rangle = \langle x, z \rangle + \langle y, z \rangle$ and $\langle x, y + z \rangle = \langle x, y \rangle + \langle x, z \rangle$

is known as additivity.

Remark: Some authors, especially in physics and matrix algebra, prefer to define the inner product and the sesquilinear form with linearity in the second argument rather than the first. Then the first argument becomes conjugate linear, rather than the second. In those disciplines we would write the product as $\langle x \mid y \rangle$ (the bra-ket notation of quantum mechanics), respectively $y^{\dagger} x$ (dot product as a case of the convention of forming the matrix product AB as the dot products of rows of A with columns of B). Here the kets and columns are identified with the vectors of V and the bras and rows with the dual vectors or linear functionals of the dual space V*, with conjugacy associated with duality. This reverse order is now occasionally followed in the more abstract literature, e.g., Emch [1972], taking $\langle x, y \rangle$ to be conjugate linear in x rather than y. A few instead find a middle ground by recognizing both $\langle \cdot, \cdot \rangle$ and $\langle \cdot \mid \cdot \rangle$ as distinct notations differing only in which argument is conjugate linear.

There are various technical reasons why it is necessary to restrict the basefield to $\mathbf{R}$ and $\mathbf{C}$ in the definition. Briefly, the basefield has to contain an ordered subfield (in order for non-negativity to make sense) and therefore has to have characteristic equal to 0. This immediately excludes finite fields. The basefield has to have additional

Page 30: Vector and Matrices - Some Articles

Inner product space 27

structure, such as a distinguished automorphism. More generally any quadratically closed subfield of or will sufficefor this purpose, e.g., the algebraic numbers, but when it is a proper subfield (i.e., neither nor ) evenfinite-dimensional inner product spaces will fail to be metrically complete. In contrast all finite-dimensional innerproduct spaces over or , such as those used in quantum computation, are automatically metrically complete andhence Hilbert spaces.

In some cases we need to consider non-negative semi-definite sesquilinear forms. This means that $\langle x, x \rangle$ is only required to be non-negative. We show how to treat these below.

Examples

• A simple example is the real numbers with the standard multiplication as the inner product: $\langle x, y \rangle = xy.$ More generally, any Euclidean space Rn with the dot product is an inner product space:

$\langle (x_1, \ldots, x_n), (y_1, \ldots, y_n) \rangle = \sum_{i=1}^n x_i y_i.$

• The general form of an inner product on Cn is given by:

$\langle x, y \rangle = y^* M x,$

with M any Hermitian positive-definite matrix, and y* the conjugate transpose of y. For the real case this corresponds to the dot product of the results of directionally-different scaling of the two vectors, with positive scale factors and orthogonal directions of scaling. Up to an orthogonal transformation it is a weighted-sum version of the dot product, with positive weights.

• The article on Hilbert space has several examples of inner product spaces wherein the metric induced by the inner product yields a complete metric space. An example of an inner product which induces an incomplete metric occurs with the space C[a, b] of continuous complex valued functions on the interval [a, b]. The inner product is

$\langle f, g \rangle = \int_a^b f(t) \overline{g(t)}\, dt.$

This space is not complete; consider for example, for the interval [−1, 1], the sequence of "step" functions (fk) where
• fk(t) is 0 for t in the subinterval [−1, 0]
• fk(t) is 1 for t in the subinterval [1/k, 1]
• fk is affine in [0, 1/k].
This sequence is a Cauchy sequence which does not converge to a continuous function.

• For random variables X and Y, the expected value of their product

$\langle X, Y \rangle = E(XY)$

is an inner product. In this case, ⟨X, X⟩ = 0 if and only if Pr(X = 0) = 1 (i.e., X = 0 almost surely). This definition of expectation as inner product can be extended to random vectors as well.

• For square real matrices, $\langle A, B \rangle = \operatorname{tr}(AB^T),$ with transpose as conjugation ($\langle A, B \rangle = \langle B^T, A^T \rangle$), is an inner product.
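Two of these examples are easy to check numerically. The sketch below is an illustrative assumption-laden demo (Python with NumPy; the matrices and vectors are arbitrary sample data): it builds a Hermitian positive-definite M for the general form ⟨x, y⟩ = y*Mx, and verifies the trace inner product on square real matrices.

    import numpy as np

    rng = np.random.default_rng(1)

    # General form on C^3: <x, y> = y* M x with M Hermitian positive-definite.
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    M = A @ A.conj().T + np.eye(3)               # Hermitian and positive-definite by construction

    def inner_M(x, y):
        return y.conj() @ M @ x                  # y* M x

    x = rng.normal(size=3) + 1j * rng.normal(size=3)
    y = rng.normal(size=3) + 1j * rng.normal(size=3)
    assert np.isclose(inner_M(x, y), np.conj(inner_M(y, x)))   # conjugate symmetry
    assert inner_M(x, x).real > 0                              # positive-definiteness

    # Trace inner product on square real matrices: <A, B> = tr(A B^T),
    # which equals the sum of entrywise products.
    P, Q = rng.normal(size=(2, 2)), rng.normal(size=(2, 2))
    assert np.isclose(np.trace(P @ Q.T), np.sum(P * Q))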


Norms on inner product spaces

A linear space with a norm such as

$\|x\|_p = \Big( \sum_i |x_i|^p \Big)^{1/p},$

where p ≠ 2, is a normed space but not an inner product space, because this norm does not satisfy the parallelogram equality required of a norm to have an inner product associated with it.[3] [4]

However, inner product spaces have a naturally defined norm based upon the inner product of the space itself that does satisfy the parallelogram equality:

$\|x\| = \sqrt{\langle x, x \rangle}.$

This is well defined by the nonnegativity axiom of the definition of inner product space. The norm is thought of as the length of the vector x. Directly from the axioms, we can prove the following:

• Cauchy–Schwarz inequality: for x, y elements of V,

$|\langle x, y \rangle| \le \|x\| \cdot \|y\|,$

with equality if and only if x and y are linearly dependent. This is one of the most important inequalities in mathematics. It is also known in the Russian mathematical literature as the Cauchy–Bunyakowski–Schwarz inequality. Because of its importance, its short proof should be noted.

It is trivial to prove the inequality true in the case y = 0. Thus we assume ⟨y, y⟩ is nonzero, giving us the following:

$0 \le \Big\langle x - \frac{\langle x, y \rangle}{\langle y, y \rangle} y,\; x - \frac{\langle x, y \rangle}{\langle y, y \rangle} y \Big\rangle = \langle x, x \rangle - \frac{|\langle x, y \rangle|^2}{\langle y, y \rangle}.$

The complete proof can be obtained by multiplying out this result.

• Orthogonality: The geometric interpretation of the inner product in terms of angle and length motivates much of the geometric terminology we use in regard to these spaces. Indeed, an immediate consequence of the Cauchy-Schwarz inequality is that it justifies defining the angle between two non-zero vectors x and y in the case F = R by the identity

$\operatorname{angle}(x, y) = \arccos \frac{\langle x, y \rangle}{\|x\| \cdot \|y\|}.$

We assume the value of the angle is chosen to be in the interval [0, +π]. This is in analogy to the situation in two-dimensional Euclidean space. In the case F = C, the angle in the interval [0, +π/2] is typically defined by

$\operatorname{angle}(x, y) = \arccos \frac{|\langle x, y \rangle|}{\|x\| \cdot \|y\|}.$

Correspondingly, we will say that non-zero vectors x and y of V are orthogonal if and only if their inner product is zero.

• Homogeneity: for x an element of V and r a scalar,

$\|rx\| = |r|\, \|x\|.$

The homogeneity property is completely trivial to prove.

• Triangle inequality: for x, y elements of V,

$\|x + y\| \le \|x\| + \|y\|.$

The last two properties show the function defined is indeed a norm.


Because of the triangle inequality and because of axiom 2, we see that ||·|| is a norm which turns V into a normed vector space and hence also into a metric space. The most important inner product spaces are the ones which are complete with respect to this metric; they are called Hilbert spaces. Every inner product space V is a dense subspace of some Hilbert space. This Hilbert space is essentially uniquely determined by V and is constructed by completing V.

• Pythagorean theorem: Whenever x, y are in V and ⟨x, y⟩ = 0, then

$\|x\|^2 + \|y\|^2 = \|x + y\|^2.$

The proof of the identity requires only expressing the definition of norm in terms of the inner product and multiplying out, using the property of additivity of each component. The name Pythagorean theorem arises from the geometric interpretation of this result as an analogue of the theorem in synthetic geometry. Note that the proof of the Pythagorean theorem in synthetic geometry is considerably more elaborate because of the paucity of underlying structure. In this sense, the synthetic Pythagorean theorem, if correctly demonstrated, is deeper than the version given above.

An induction on the Pythagorean theorem yields:

• If x1, ..., xn are orthogonal vectors, that is, $\langle x_j, x_k \rangle = 0$ for distinct indices j, k, then

$\Big\| \sum_{i=1}^n x_i \Big\|^2 = \sum_{i=1}^n \|x_i\|^2.$

In view of the Cauchy-Schwarz inequality, we also note that $\langle \cdot, \cdot \rangle$ is continuous from V × V to F. This allows us to extend Pythagoras' theorem to infinitely many summands:

• Parseval's identity: Suppose V is a complete inner product space. If xk are mutually orthogonal vectors in V then

$\Big\| \sum_k x_k \Big\|^2 = \sum_k \|x_k\|^2,$

provided the infinite series on the left is convergent. Completeness of the space is needed to ensure that the sequence of partial sums

$S_n = \sum_{k=1}^n x_k,$

which is easily shown to be a Cauchy sequence, is convergent.

• Parallelogram law: for x, y elements of V,

$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2.$

The Parallelogram law is, in fact, a necessary and sufficient condition for the existence of a scalar product corresponding to a given norm. If it holds, the scalar product is defined by the polarization identity:

$\langle x, y \rangle = \frac{\|x + y\|^2 - \|x - y\|^2}{4},$

which is a form of the law of cosines.
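These properties lend themselves to a quick numerical sanity check. The following illustrative sketch (assuming Python with NumPy; the vectors are arbitrary samples) verifies the Cauchy-Schwarz and triangle inequalities, homogeneity, the parallelogram law, and the real polarization identity for the norm induced by the standard dot product on R^4.

    import numpy as np

    rng = np.random.default_rng(2)
    x, y = rng.normal(size=4), rng.normal(size=4)

    def norm(v):
        return np.sqrt(v @ v)            # ||v|| = sqrt(<v, v>)

    assert abs(x @ y) <= norm(x) * norm(y) + 1e-12        # Cauchy-Schwarz
    assert norm(x + y) <= norm(x) + norm(y) + 1e-12       # triangle inequality
    assert np.isclose(norm(-2.5 * x), 2.5 * norm(x))      # homogeneity

    sq = lambda v: v @ v                 # ||v||^2
    assert np.isclose(sq(x + y) + sq(x - y), 2 * sq(x) + 2 * sq(y))  # parallelogram law
    assert np.isclose((sq(x + y) - sq(x - y)) / 4, x @ y)            # polarization identity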


Orthonormal sequences

Let V be a finite dimensional inner product space of dimension n. Recall that every basis of V consists of exactly n linearly independent vectors. Using the Gram-Schmidt process we may start with an arbitrary basis and transform it into an orthonormal basis, that is, into a basis in which all the elements are orthogonal and have unit norm. In symbols, a basis $\{e_1, \ldots, e_n\}$ is orthonormal if $\langle e_i, e_j \rangle = 0$ for $i \ne j$ and $\langle e_i, e_i \rangle = 1$ for each i.

This definition of orthonormal basis generalizes to the case of infinite dimensional inner product spaces in the following way. Let V be any inner product space. Then a collection $E \subseteq V$ is a basis for V if the subspace of V generated by finite linear combinations of elements of E is dense in V (in the norm induced by the inner product). We say that E is an orthonormal basis for V if it is a basis and $\langle e, e' \rangle = 0$ for $e \ne e'$ in E and $\langle e, e \rangle = 1$ for all $e \in E$.

Using an infinite-dimensional analog of the Gram-Schmidt process one may show:

Theorem. Any separable inner product space V has an orthonormal basis.

Using the Hausdorff Maximal Principle and the fact that in a complete inner product space orthogonal projection onto linear subspaces is well-defined, one may also show that

Theorem. Any complete inner product space V has an orthonormal basis.

The two previous theorems raise the question of whether all inner product spaces have an orthonormal basis. The answer, it turns out, is negative. This is a non-trivial result, and is proved below. The following proof is taken from Halmos's A Hilbert Space Problem Book (see the references).

Proof

Recall that the dimension of an inner product space is the cardinality of a maximal orthonormal system that it contains (by Zorn's lemma it contains at least one, and any two have the same cardinality). An orthonormal basis is certainly a maximal orthonormal system, but as we shall see, the converse need not hold. Observe that if G is a dense subspace of an inner product space H, then any orthonormal basis for G is automatically an orthonormal basis for H. Thus, it suffices to construct an inner product space H with a dense subspace G whose dimension is strictly smaller than that of H.

Let K be a Hilbert space of dimension $\aleph_0$ (for instance, $K = \ell^2(\mathbb{N})$). Let E be an orthonormal basis of K, so $|E| = \aleph_0$. Extend E to a Hamel basis $E \cup F$ for K, where $E \cap F = \emptyset$. Since it is known that the Hamel dimension of K is c, the cardinality of the continuum, it must be that $|F| = c$.

Let L be a Hilbert space of dimension c (for instance, $L = \ell^2(\mathbb{R})$). Let B be an orthonormal basis for L, and let $\varphi : F \to B$ be a bijection. Then there is a linear transformation $T : K \to L$ such that $Tf = \varphi(f)$ for $f \in F$, and $Te = 0$ for $e \in E$.

Let $H = K \oplus L$ and let $G = \{(k, Tk) : k \in K\}$ be the graph of T. Let $\bar{G}$ be the closure of G in H; we will show $\bar{G} = H$. Since for any $e \in E$ we have $(e, 0) \in G$, it follows that $K \oplus 0 \subseteq \bar{G}$.

Next, if $b \in B$, then $b = Tf$ for some $f \in F \subseteq K$, so $(f, b) \in G$; since $(f, 0) \in \bar{G}$ as well, we also have $(0, b) \in \bar{G}$. It follows that $0 \oplus L \subseteq \bar{G}$, so $\bar{G} = H$, and G is dense in H.

Finally, $\{(e, 0) : e \in E\}$ is a maximal orthonormal set in G; if $\langle (e, 0), (k, Tk) \rangle = 0$ for all $e \in E$ then certainly $k = 0$, so $(k, Tk)$ is the zero vector in G. Hence the dimension of G is $\aleph_0$, whereas it is clear that the dimension of H is c. This completes the proof.

Parseval's identity leads immediately to the following theorem:

Theorem. Let V be a separable inner product space and (ek) an orthonormal basis of V. Then the map

$x \mapsto (\langle x, e_k \rangle)_k$

is an isometric linear map V → ℓ2 with a dense image.

This theorem can be regarded as an abstract form of Fourier series, in which an arbitrary orthonormal basis plays the role of the sequence of trigonometric polynomials. Note that the underlying index set can be taken to be any countable set (and in fact any set whatsoever, provided ℓ2 is defined appropriately, as is explained in the article Hilbert space). In particular, we obtain the following result in the theory of Fourier series:


Theorem. Let V be the inner product space C[−π, π]. Then the sequence (indexed on the set of all integers) of continuous functions

$e_k(t) = \frac{e^{ikt}}{\sqrt{2\pi}}$

is an orthonormal basis of the space C[−π, π] with the L2 inner product. The mapping

$f \mapsto \frac{1}{\sqrt{2\pi}} \left( \int_{-\pi}^{\pi} f(t) e^{-ikt}\, dt \right)_k$

is an isometric linear map with dense image.

Orthogonality of the sequence (ek) follows immediately from the fact that if k ≠ j, then

$\int_{-\pi}^{\pi} e^{ikt} e^{-ijt}\, dt = 0.$

Normality of the sequence is by design, that is, the coefficients are chosen so that the norm comes out to 1. Finally the fact that the sequence has a dense algebraic span, in the inner product norm, follows from the fact that the sequence has a dense algebraic span, this time in the space of continuous periodic functions on [−π, π] with the uniform norm. This is the content of the Weierstrass theorem on the uniform density of trigonometric polynomials.
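The orthonormality claims of this theorem can be checked by numerical quadrature. A small sketch follows, assuming Python with NumPy; the grid size, the tested indices, and the use of a trapezoidal rule are illustrative choices, so the tolerances are loose.

    import numpy as np

    t = np.linspace(-np.pi, np.pi, 20001)

    def e(k):
        return np.exp(1j * k * t) / np.sqrt(2 * np.pi)   # e_k(t) = e^{ikt} / sqrt(2 pi)

    def l2_inner(f, g):
        return np.trapz(f * np.conj(g), t)                # approximate L2 inner product

    assert np.isclose(l2_inner(e(3), e(3)), 1.0, atol=1e-6)   # unit norm
    assert np.isclose(l2_inner(e(3), e(5)), 0.0, atol=1e-6)   # orthogonality for k != j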

Operators on inner product spaces

Several types of linear maps A from an inner product space V to an inner product space W are of relevance:

• Continuous linear maps, i.e., A is linear and continuous with respect to the metric defined above, or equivalently, A is linear and the set of non-negative reals {||Ax||}, where x ranges over the closed unit ball of V, is bounded.
• Symmetric linear operators, i.e., A is linear and ⟨Ax, y⟩ = ⟨x, Ay⟩ for all x, y in V.
• Isometries, i.e., A is linear and ⟨Ax, Ay⟩ = ⟨x, y⟩ for all x, y in V, or equivalently, A is linear and ||Ax|| = ||x|| for all x in V. All isometries are injective. Isometries are morphisms between inner product spaces, and morphisms of real inner product spaces are orthogonal transformations (compare with orthogonal matrix).
• Isometrical isomorphisms, i.e., A is an isometry which is surjective (and hence bijective). Isometrical isomorphisms are also known as unitary operators (compare with unitary matrix).

From the point of view of inner product space theory, there is no need to distinguish between two spaces which are isometrically isomorphic. The spectral theorem provides a canonical form for symmetric, unitary and more generally normal operators on finite dimensional inner product spaces. A generalization of the spectral theorem holds for continuous normal operators in Hilbert spaces.

Generalizations

Any of the axioms of an inner product may be weakened, yielding generalized notions. The generalizations that are closest to inner products occur where bilinearity and conjugate symmetry are retained, but positive-definiteness is weakened.

Degenerate inner products

If V is a vector space and $\langle \cdot, \cdot \rangle$ a semi-definite sesquilinear form, then the function $\|x\| = \sqrt{\langle x, x \rangle}$ makes sense and satisfies all the properties of norm except that ‖x‖ = 0 does not imply x = 0 (such a functional is then called a semi-norm). We can produce an inner product space by considering the quotient W = V/{x : ‖x‖ = 0}. The sesquilinear form factors through W.

This construction is used in numerous contexts. The Gelfand–Naimark–Segal construction is a particularly important example of the use of this technique. Another example is the representation of semi-definite kernels on arbitrary sets.


Nondegenerate conjugate symmetric forms

Alternatively, one may require that the pairing be a nondegenerate form, meaning that for all non-zero x there exists some y such that $\langle x, y \rangle \ne 0$, though y need not equal x; in other words, the induced map to the dual space $V \to V^*$ is an isomorphism. This generalization is important in differential geometry: a manifold whose tangent spaces have an inner product is a Riemannian manifold, while if this is related to a nondegenerate conjugate symmetric form the manifold is a pseudo-Riemannian manifold. By Sylvester's law of inertia, just as every inner product is similar to the dot product with positive weights on a set of vectors, every nondegenerate conjugate symmetric form is similar to the dot product with nonzero weights on a set of vectors, and the number of positive and negative weights are called respectively the positive index and negative index.

Purely algebraic statements (ones that do not use positivity) usually only rely on the nondegeneracy (the isomorphism $V \to V^*$) and thus hold more generally.

The Minkowski inner product

The Minkowski inner product is typically defined in a 4-dimensional real vector space. It satisfies all the axioms of an inner product, except that it is not positive-definite, i.e., the Minkowski norm ||v|| of a vector v, defined as ||v||² = η(v, v), need not be positive. The positive-definite condition has been replaced by the weaker condition of nondegeneracy (every positive-definite form is nondegenerate but not vice-versa). It is common to call a Minkowski inner product an indefinite inner product, although, technically speaking, it is not an inner product according to the standard definition above.

Related products

The term "inner product" is opposed to outer product, which is a slightly more general opposite. Simply, in coordinates, the inner product is the product of a 1×n covector with an n×1 vector, yielding a 1×1 matrix (a scalar), while the outer product is the product of an m×1 vector with a 1×n covector, yielding an m×n matrix. Note that the outer product is defined for different dimensions, while the inner product requires the same dimension. If the dimensions are the same, then the inner product is the trace of the outer product (trace only being properly defined for square matrices).

On an inner product space, or more generally a vector space with a nondegenerate form (so an isomorphism $V \to V^*$), vectors can be sent to covectors (in coordinates, via transpose), so one can take the inner product and outer product of two vectors, not simply of a vector and a covector.

In a quip: "inner is horizontal times vertical and shrinks down, outer is vertical times horizontal and expands out".

More abstractly, the outer product is the bilinear map $W \times V^* \to \operatorname{Hom}(V, W)$ sending a vector and a covector to a rank 1 linear transformation (simple tensor of type (1,1)), while the inner product is the bilinear evaluation map $V^* \times V \to F$ given by evaluating a covector on a vector; the order of the domain vector spaces here reflects the covector/vector distinction.

The inner product and outer product should not be confused with the interior product and exterior product, which are instead operations on vector fields and differential forms, or more generally on the exterior algebra.

As a further complication, in geometric algebra the inner product and the exterior (Grassmann) product are combined in the geometric product (the Clifford product in a Clifford algebra) – the inner product sends two vectors (1-vectors) to a scalar (a 0-vector), while the exterior product sends two vectors to a bivector (2-vector) – and in this context the exterior product is usually called the "outer" (alternatively, wedge) product. The inner product is more correctly called a scalar product in this context, as the nondegenerate quadratic form in question need not be positive definite (need not be an inner product).


Notes and in-line references

[1] P. K. Jain, Khalil Ahmad (1995). "5.1 Definitions and basic properties of inner product spaces and Hilbert spaces" (http://books.google.com/?id=yZ68h97pnAkC&pg=PA203). Functional analysis (2nd ed.). New Age International. p. 203. ISBN 812240801X.
[2] Eduard Prugovecki (1981). "Definition 2.1" (http://books.google.com/?id=GxmQxn2PF3IC&pg=PA18). Quantum mechanics in Hilbert space (2nd ed.). Academic Press. pp. 18 ff. ISBN 012566060X.
[3] P. K. Jain, Khalil Ahmad (1995). "Example 5" (http://books.google.com/?id=yZ68h97pnAkC&pg=PA209). Cited work. p. 209. ISBN 812240801X.
[4] Karen Saxe (2002). Beginning functional analysis (http://books.google.com/?id=QALoZC64ea0C&pg=PA7). Springer. p. 7. ISBN 0387952241.

References

• Axler, Sheldon (1997). Linear Algebra Done Right (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-98258-8.
• Emch, Gerard G. (1972). Algebraic methods in statistical mechanics and quantum field theory. Wiley-Interscience. ISBN 978-0-471-23900-0.
• Young, Nicholas (1988). An introduction to Hilbert space. Cambridge University Press. ISBN 978-0-521-33717-5.

Sesquilinear form

In mathematics, a sesquilinear form on a complex vector space V is a map V × V → C that is linear in one argument and antilinear in the other. The name originates from the numerical prefix sesqui- meaning "one and a half". Compare with a bilinear form, which is linear in both arguments; although many authors, especially when working solely in a complex setting, refer to sesquilinear forms as bilinear forms.

A motivating example is the inner product on a complex vector space, which is not bilinear, but instead sesquilinear. See geometric motivation below.

Definition and conventions

Conventions differ as to which argument should be linear. We take the first to be conjugate-linear and the second to be linear. This is the convention used by essentially all physicists and originates in Dirac's bra-ket notation in quantum mechanics. The opposite convention is perhaps more common in mathematics but is not universal. Specifically, a map φ : V × V → C is sesquilinear if

$\varphi(x + y, z + w) = \varphi(x, z) + \varphi(x, w) + \varphi(y, z) + \varphi(y, w)$
$\varphi(ax, by) = \bar{a} b\, \varphi(x, y)$

for all x, y, z, w ∈ V and all a, b ∈ C.

A sesquilinear form can also be viewed as a complex bilinear map

$\bar{V} \times V \to \mathbb{C},$

where $\bar{V}$ is the complex conjugate vector space to V. By the universal property of tensor products these are in one-to-one correspondence with (complex) linear maps

$\bar{V} \otimes V \to \mathbb{C}.$

For a fixed z in V the map $w \mapsto \varphi(z, w)$ is a linear functional on V (i.e. an element of the dual space V*). Likewise, the map $w \mapsto \varphi(w, z)$ is a conjugate-linear functional on V.

Given any sesquilinear form φ on V we can define a second sesquilinear form ψ via the conjugate transpose:

$\psi(w, z) = \overline{\varphi(z, w)}.$


In general, ψ and φ will be different. If they are the same then φ is said to be Hermitian. If they are negatives of one another, then φ is said to be skew-Hermitian. Every sesquilinear form can be written as a sum of a Hermitian form and a skew-Hermitian form.

Geometric motivation

Bilinear forms are to squaring (z²) what sesquilinear forms are to the Euclidean norm (|z|² = z̄z). The norm associated to a sesquilinear form is invariant under multiplication by the complex circle (complex numbers of unit norm), while the norm associated to a bilinear form is equivariant (with respect to squaring). Bilinear forms are algebraically more natural, while sesquilinear forms are geometrically more natural.

If B is a bilinear form on a complex vector space and $|x|_B := B(x, x)$ is the associated norm, then $|ix|_B = B(ix, ix) = i^2 B(x, x) = -|x|_B$.

By contrast, if S is a sesquilinear form on a complex vector space and $|x|_S := S(x, x)$ is the associated norm, then $|ix|_S = S(ix, ix) = \bar{i}\, i\, S(x, x) = |x|_S$.

Hermitian form

The term Hermitian form may also refer to a different concept than that explained below: it may refer to a certain differential form on a Hermitian manifold.

A Hermitian form (also called a symmetric sesquilinear form) is a sesquilinear form h : V × V → C such that

$h(w, z) = \overline{h(z, w)}.$

The standard Hermitian form on Cn is given by

$\langle w, z \rangle = \sum_{i=1}^n \bar{w}_i z_i.$

More generally, the inner product on any complex Hilbert space is a Hermitian form.

A vector space with a Hermitian form (V, h) is called a Hermitian space.

If V is a finite-dimensional space, then relative to any basis {ei} of V, a Hermitian form is represented by a Hermitian matrix H:

$h(w, z) = \bar{w}^T H z.$

The components of H are given by $H_{ij} = h(e_i, e_j)$.

The quadratic form associated to a Hermitian form,

Q(z) = h(z, z),

is always real. Actually one can show that a sesquilinear form is Hermitian iff the associated quadratic form is real for all z ∈ V.
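The claim that Q(z) = h(z, z) is real is easy to illustrate with the matrix representation. A minimal sketch follows, assuming Python with NumPy; the Hermitian matrix H and the vectors are arbitrary sample data, not from the original article.

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
    H = (A + A.conj().T) / 2                  # Hermitian: H equals its conjugate transpose

    def h(w, z):
        return w.conj() @ H @ z               # conjugate-linear in the first argument

    w = rng.normal(size=3) + 1j * rng.normal(size=3)
    z = rng.normal(size=3) + 1j * rng.normal(size=3)

    assert np.isclose(h(w, z), np.conj(h(z, w)))   # Hermitian symmetry
    assert np.isclose(h(z, z).imag, 0.0)           # Q(z) = h(z, z) is real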


Skew-Hermitian form

A skew-Hermitian form (also called an antisymmetric sesquilinear form) is a sesquilinear form ε : V × V → C such that

$\varepsilon(w, z) = -\overline{\varepsilon(z, w)}.$

Every skew-Hermitian form can be written as i times a Hermitian form.

If V is a finite-dimensional space, then relative to any basis {ei} of V, a skew-Hermitian form is represented by a skew-Hermitian matrix A:

$\varepsilon(w, z) = \bar{w}^T A z.$

The quadratic form associated to a skew-Hermitian form,

Q(z) = ε(z, z),

is always pure imaginary.

Generalization: over a *-ring

A sesquilinear form and a Hermitian form can be defined over any *-ring, and the examples of symmetric bilinear forms, skew-symmetric bilinear forms, Hermitian forms, and skew-Hermitian forms are all Hermitian forms for various involutions.

Particularly in L-theory, one also sees the term ε-symmetric form, where ε = ±1, to refer to both symmetric and skew-symmetric forms.

References

• Hazewinkel, Michiel, ed. (2001), "Sesquilinear form" (http://eom.springer.de/s084710.htm), Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104


Scalar multiplication

In mathematics, scalar multiplication is one of the basic operations defining a vector space in linear algebra[1] [2] [3] (or more generally, a module in abstract algebra[4] [5]). In an intuitive geometrical context, scalar multiplication of a real Euclidean vector by a positive real number multiplies the magnitude of the vector without changing its direction. The term "scalar" itself derives from this usage: a scalar is that which scales vectors. Scalar multiplication is different from the scalar product, which is an inner product between two vectors.

Definition

In general, if K is a field and V is a vector space over K, then scalar multiplication is a function from K × V to V. The result of applying this function to c in K and v in V is denoted cv.

Scalar multiplication obeys the following rules (vector in boldface):
• Left distributivity: (c + d)v = cv + dv;
• Right distributivity: c(v + w) = cv + cw;
• Associativity: (cd)v = c(dv);
• Multiplying by 1 does not change a vector: 1v = v;
• Multiplying by 0 gives the null vector: 0v = 0;
• Multiplying by −1 gives the additive inverse: (−1)v = −v.

Here + is addition either in the field or in the vector space, as appropriate; and 0 is the additive identity in either. Juxtaposition indicates either scalar multiplication or the multiplication operation in the field.

Scalar multiplication may be viewed as an external binary operation or as an action of the field on the vector space. A geometric interpretation of scalar multiplication is a stretching or shrinking of a vector.

As a special case, V may be taken to be K itself and scalar multiplication may then be taken to be simply the multiplication in the field. When V is Kn, then scalar multiplication is defined component-wise.

The same idea goes through with no change if K is a commutative ring and V is a module over K. K can even be a rig, but then there is no additive inverse. If K is not commutative, then the only change is that the order of the multiplication may be reversed from what we've written above.
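For K = R and V = R^3 these rules reduce to componentwise arithmetic and can be demonstrated directly. A minimal illustrative sketch, assuming Python with NumPy; c, d, v, w are arbitrary sample values.

    import numpy as np

    c, d = 2.0, -3.5
    v = np.array([1.0, 0.0, 2.0])
    w = np.array([-1.0, 4.0, 0.5])

    assert np.allclose((c + d) * v, c * v + d * v)   # left distributivity
    assert np.allclose(c * (v + w), c * v + c * w)   # right distributivity
    assert np.allclose((c * d) * v, c * (d * v))     # associativity
    assert np.allclose(1 * v, v)                     # multiplying by 1
    assert np.allclose(0 * v, np.zeros(3))           # multiplying by 0 gives the null vector
    assert np.allclose(-1 * v, -v)                   # multiplying by -1 gives the additive inverse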

References

[1] Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
[2] Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
[3] Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
[4] Dummit, David S.; Foote, Richard M. (2004). Abstract Algebra (3rd ed.). John Wiley & Sons. ISBN 0-471-43334-9.
[5] Lang, Serge (2002). Algebra. Graduate Texts in Mathematics. Springer. ISBN 0-387-95385-X.


Euclidean space

Every point in three-dimensional Euclidean space is determined by three coordinates.

In mathematics, Euclidean space is the Euclidean plane and three-dimensional space of Euclidean geometry, as well as the generalizations of these notions to higher dimensions. The term "Euclidean" distinguishes these spaces from the curved spaces of non-Euclidean geometry and Einstein's general theory of relativity, and is named for the Greek mathematician Euclid of Alexandria.

Classical Greek geometry defined the Euclidean plane and Euclidean three-dimensional space using certain postulates, while the other properties of these spaces were deduced as theorems. In modern mathematics, it is more common to define Euclidean space using Cartesian coordinates and the ideas of analytic geometry. This approach brings the tools of algebra and calculus to bear on questions of geometry, and has the advantage that it generalizes easily to Euclidean spaces of more than three dimensions.

From the modern viewpoint, there is essentially only one Euclidean space of each dimension. In dimension one this is the real line; in dimension two it is the Cartesian plane; and in higher dimensions it is the real coordinate space with three or more real number coordinates. Thus a point in Euclidean space is a tuple of real numbers, and distances are defined using the Euclidean distance formula. Mathematicians often denote the n-dimensional Euclidean space by Rn, or sometimes En if they wish to emphasize its Euclidean nature. Euclidean spaces have finite dimension.

Intuitive overview

One way to think of the Euclidean plane is as a set of points satisfying certain relationships, expressible in terms of distance and angle. For example, there are two fundamental operations on the plane. One is translation, which means a shifting of the plane so that every point is shifted in the same direction and by the same distance. The other is rotation about a fixed point in the plane, in which every point in the plane turns about that fixed point through the same angle. One of the basic tenets of Euclidean geometry is that two figures (that is, subsets) of the plane should be considered equivalent (congruent) if one can be transformed into the other by some sequence of translations, rotations and reflections. (See Euclidean group.)

In order to make all of this mathematically precise, one must clearly define the notions of distance, angle, translation, and rotation. The standard way to do this, as carried out in the remainder of this article, is to define the Euclidean plane as a two-dimensional real vector space equipped with an inner product. For then:
• the vectors in the vector space correspond to the points of the Euclidean plane,
• the addition operation in the vector space corresponds to translation, and
• the inner product implies notions of angle and distance, which can be used to define rotation.

Once the Euclidean plane has been described in this language, it is actually a simple matter to extend its concept to arbitrary dimensions. For the most part, the vocabulary, formulas, and calculations are not made any more difficult by the presence of more dimensions. (However, rotations are more subtle in high dimensions, and visualizing high-dimensional spaces remains difficult, even for experienced mathematicians.)


A final wrinkle is that Euclidean space is not technically a vector space but rather an affine space, on which a vector space acts. Intuitively, the distinction just says that there is no canonical choice of where the origin should go in the space, because it can be translated anywhere. In this article, this technicality is largely ignored.

Real coordinate space

Let R denote the field of real numbers. For any positive integer n, the set of all n-tuples of real numbers forms an n-dimensional vector space over R, which is denoted Rn and sometimes called real coordinate space. An element of Rn is written

$x = (x_1, x_2, \ldots, x_n),$

where each xi is a real number. The vector space operations on Rn are defined by

$x + y = (x_1 + y_1, x_2 + y_2, \ldots, x_n + y_n),$
$a x = (a x_1, a x_2, \ldots, a x_n).$

The vector space Rn comes with a standard basis:

$e_1 = (1, 0, \ldots, 0), \quad e_2 = (0, 1, \ldots, 0), \quad \ldots, \quad e_n = (0, 0, \ldots, 1).$

An arbitrary vector in Rn can then be written in the form

$x = \sum_{i=1}^n x_i e_i.$

Rn is the prototypical example of a real n-dimensional vector space. In fact, every real n-dimensional vector space V is isomorphic to Rn. This isomorphism is not canonical, however. A choice of isomorphism is equivalent to a choice of basis for V (by looking at the image of the standard basis for Rn in V). The reason for working with arbitrary vector spaces instead of Rn is that it is often preferable to work in a coordinate-free manner (that is, without choosing a preferred basis).

Euclidean structure

Euclidean space is more than just a real coordinate space. In order to apply Euclidean geometry one needs to be able to talk about the distances between points and the angles between lines or vectors. The natural way to obtain these quantities is by introducing and using the standard inner product (also known as the dot product) on Rn. The inner product of any two vectors x and y is defined by

$x \cdot y = \sum_{i=1}^n x_i y_i.$

The result is always a real number. Furthermore, the inner product of x with itself is always nonnegative. This product allows us to define the "length" of a vector x as

$\|x\| = \sqrt{x \cdot x} = \sqrt{\sum_{i=1}^n x_i^2}.$

This length function satisfies the required properties of a norm and is called the Euclidean norm on Rn.

The (non-reflex) angle θ (0° ≤ θ ≤ 180°) between x and y is then given by

$\theta = \cos^{-1}\!\left( \frac{x \cdot y}{\|x\|\, \|y\|} \right),$


where cos−1 is the arccosine function.

Finally, one can use the norm to define a metric (or distance function) on Rn by

$d(x, y) = \|x - y\| = \sqrt{\sum_{i=1}^n (x_i - y_i)^2}.$

This distance function is called the Euclidean metric. It can be viewed as a form of the Pythagorean theorem.

Real coordinate space together with this Euclidean structure is called Euclidean space and often denoted En. (Many authors refer to Rn itself as Euclidean space, with the Euclidean structure being understood). The Euclidean structure makes En an inner product space (in fact a Hilbert space), a normed vector space, and a metric space.

Rotations of Euclidean space are then defined as orientation-preserving linear transformations T that preserve angles and lengths:

$T x \cdot T y = x \cdot y, \qquad \|T x\| = \|x\|.$

Topology of Euclidean spaceSince Euclidean space is a metric space it is also a topological space with the natural topology induced by the metric.The metric topology on En is called the Euclidean topology. A set is open in the Euclidean topology if and only if itcontains an open ball around each of its points. The Euclidean topology turns out to be equivalent to the producttopology on Rn considered as a product of n copies of the real line R (with its standard topology).An important result on the topology of Rn, that is far from superficial, is Brouwer's invariance of domain. Any subsetof Rn (with its subspace topology) that is homeomorphic to another open subset of Rn is itself open. An immediateconsequence of this is that Rm is not homeomorphic to Rn if m ≠ n — an intuitively "obvious" result which isnonetheless difficult to prove.

GeneralizationsIn modern mathematics, Euclidean spaces form the prototypes for other, more complicated geometric objects. Forexample, a smooth manifold is a Hausdorff topological space that is locally diffeomorphic to Euclidean space.Diffeomorphism does not respect distance and angle, so these key concepts of Euclidean geometry are lost on asmooth manifold. However, if one additionally prescribes a smoothly varying inner product on the manifold'stangent spaces, then the result is what is called a Riemannian manifold. Put differently, a Riemannian manifold is aspace constructed by deforming and patching together Euclidean spaces. Such a space enjoys notions of distance andangle, but they behave in a curved, non-Euclidean manner. The simplest Riemannian manifold, consisting of Rn witha constant inner product, is essentially identical to Euclidean n-space itself.If one alters a Euclidean space so that its inner product becomes negative in one or more directions, then the result isa pseudo-Euclidean space. Smooth manifolds built from such spaces are called pseudo-Riemannian manifolds.Perhaps their most famous application is the theory of relativity, where empty spacetime with no matter isrepresented by the flat pseudo-Euclidean space called Minkowski space, spacetimes with matter in them form otherpseudo-Riemannian manifolds, and gravity corresponds to the curvature of such a manifold.Our universe, being subject to relativity, is not Euclidean. This becomes significant in theoretical considerations ofastronomy and cosmology, and also in some practical problems such as global positioning and airplane navigation.Nonetheless, a Euclidean model of the universe can still be used to solve many other practical problems withsufficient precision.



Orthonormality

In linear algebra, two vectors in an inner product space are orthonormal if they are orthogonal and both of unit length. A set of vectors forms an orthonormal set if all vectors in the set are mutually orthogonal and all of unit length. An orthonormal set which forms a basis is called an orthonormal basis.

Intuitive overview

The construction of orthogonality of vectors is motivated by a desire to extend the intuitive notion of perpendicular vectors to higher-dimensional spaces. In the Cartesian plane, two vectors are said to be perpendicular if the angle between them is 90° (i.e. if they form a right angle). This definition can be formalized in Cartesian space by defining the dot product and specifying that two vectors in the plane are orthogonal if their dot product is zero.

Similarly, the construction of the norm of a vector is motivated by a desire to extend the intuitive notion of the length of a vector to higher-dimensional spaces. In Cartesian space, the norm of a vector is the square root of the vector dotted with itself. That is,

$\|x\| = \sqrt{x \cdot x}.$

Many important results in linear algebra deal with collections of two or more orthogonal vectors. But often, it is easier to deal with vectors of unit length. That is, it often simplifies things to only consider vectors whose norm equals 1. The notion of restricting orthogonal pairs of vectors to only those of unit length is important enough to be given a special name. Two vectors which are orthogonal and of length 1 are said to be orthonormal.

Simple example

What does a pair of orthonormal vectors in 2-D Euclidean space look like?

Let u = (x1, y1) and v = (x2, y2). Consider the restrictions on x1, x2, y1, y2 required to make u and v form an orthonormal pair.
• From the orthogonality restriction, u • v = 0.
• From the unit length restriction on u, ||u|| = 1.
• From the unit length restriction on v, ||v|| = 1.

Expanding these terms gives 3 equations:

1. $x_1 x_2 + y_1 y_2 = 0$
2. $\sqrt{x_1^2 + y_1^2} = 1$
3. $\sqrt{x_2^2 + y_2^2} = 1$

Converting from Cartesian to polar coordinates, and considering Equation (2) and Equation (3), immediately gives the result r1 = r2 = 1. In other words, requiring the vectors be of unit length restricts the vectors to lie on the unit circle.

After substitution, Equation (1) becomes $\cos\theta_1 \cos\theta_2 + \sin\theta_1 \sin\theta_2 = 0$. Rearranging gives $\tan\theta_1 = -\cot\theta_2$. Using a trigonometric identity to convert the cotangent term gives

$\tan(\theta_1) = \tan\!\left(\theta_2 + \tfrac{\pi}{2}\right) \;\Rightarrow\; \theta_1 = \theta_2 + \tfrac{\pi}{2}.$


It is clear that in the plane, orthonormal vectors are simply radii of the unit circle whose difference in angles equals 90°.
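This conclusion can be restated concretely: any such pair has the form u = (cos θ, sin θ), v = (cos(θ + π/2), sin(θ + π/2)). A minimal numerical sketch follows, assuming Python with NumPy; the angle θ is an arbitrary sample value.

    import numpy as np

    theta = 0.6                                    # arbitrary angle
    u = np.array([np.cos(theta), np.sin(theta)])
    v = np.array([np.cos(theta + np.pi / 2), np.sin(theta + np.pi / 2)])

    assert np.isclose(u @ v, 0.0)                  # orthogonal
    assert np.isclose(u @ u, 1.0)                  # unit length
    assert np.isclose(v @ v, 1.0)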

Definition

Let V be an inner-product space. A set of vectors

$\{u_1, u_2, \ldots, u_n, \ldots\}$

is called orthonormal if and only if

$\langle u_i, u_j \rangle = \delta_{ij},$

where $\delta_{ij}$ is the Kronecker delta and $\langle \cdot, \cdot \rangle$ is the inner product defined over V.

Significance

Orthonormal sets are not especially significant on their own. However, they display certain features that make them fundamental in exploring the notion of diagonalizability of certain operators on vector spaces.

Properties

Orthonormal sets have certain very appealing properties, which make them particularly easy to work with.

• Theorem. If {e1, e2, ..., en} is an orthonormal list of vectors, then

$\|a_1 e_1 + a_2 e_2 + \cdots + a_n e_n\|^2 = |a_1|^2 + |a_2|^2 + \cdots + |a_n|^2.$

• Theorem. Every orthonormal list of vectors is linearly independent.

Existence

• Gram-Schmidt theorem. If {v1, v2, ..., vn} is a linearly independent list of vectors in an inner-product space V, then there exists an orthonormal list {e1, e2, ..., en} of vectors in V such that span(e1, e2, ..., en) = span(v1, v2, ..., vn).

Proof of the Gram-Schmidt theorem is constructive, and discussed at length elsewhere. The Gram-Schmidt theorem, together with the axiom of choice, guarantees that every vector space admits an orthonormal basis. This is possibly the most significant use of orthonormality, as this fact permits operators on inner-product spaces to be discussed in terms of their action on the space's orthonormal basis vectors. What results is a deep relationship between the diagonalizability of an operator and how it acts on the orthonormal basis vectors. This relationship is characterized by the Spectral Theorem.
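The constructive proof referred to here is the classical Gram-Schmidt procedure: subtract from each new vector its components along the vectors already produced, then normalize. A minimal sketch follows (assuming Python with NumPy; this is the classical rather than the more numerically stable modified variant, and the input vectors are arbitrary sample data).

    import numpy as np

    def gram_schmidt(vectors):
        basis = []
        for v in vectors:
            w = v - sum((v @ e) * e for e in basis)   # remove components along earlier e's
            basis.append(w / np.linalg.norm(w))       # normalize to unit length
        return basis

    vs = [np.array([1.0, 1.0, 0.0]),
          np.array([1.0, 0.0, 1.0]),
          np.array([0.0, 1.0, 1.0])]
    es = gram_schmidt(vs)

    G = np.array([[ei @ ej for ej in es] for ei in es])   # Gram matrix of the output
    assert np.allclose(G, np.eye(3))                      # orthonormal: <e_i, e_j> = delta_ij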

Examples

Standard basis

The standard basis for the coordinate space Fn is


{e1, e2, ..., en} where
e1 = (1, 0, ..., 0)
e2 = (0, 1, ..., 0)
⋮
en = (0, 0, ..., 1)

Any two vectors ei, ej where i ≠ j are orthogonal, and all vectors are clearly of unit length. So {e1, e2, ..., en} forms an orthonormal basis.

Real-valued functions

When referring to real-valued functions, usually the L² inner product is assumed unless otherwise stated. Two functions $\varphi_m$ and $\varphi_n$ are orthonormal over the interval [a, b] if

$\langle \varphi_m, \varphi_n \rangle = \int_a^b \varphi_m(x) \varphi_n(x)\, dx = \delta_{mn}.$

Fourier series

The Fourier series is a method of expressing a periodic function in terms of sinusoidal basis functions. Taking C[−π, π] to be the space of all real-valued functions continuous on the interval [−π, π] and taking the inner product to be

$\langle f, g \rangle = \int_{-\pi}^{\pi} f(x) g(x)\, dx,$

it can be shown that

$\left\{ \frac{1}{\sqrt{2\pi}},\; \frac{\sin(x)}{\sqrt{\pi}},\; \frac{\sin(2x)}{\sqrt{\pi}},\; \ldots,\; \frac{\cos(x)}{\sqrt{\pi}},\; \frac{\cos(2x)}{\sqrt{\pi}},\; \ldots \right\}$

forms an orthonormal set.

However, this is of little consequence, because C[−π, π] is infinite-dimensional, and a finite set of vectors cannot span it. But, removing the restriction that n be finite makes the set dense in C[−π, π] and therefore an orthonormal basis of C[−π, π].



Cauchy–Schwarz inequality

In mathematics, the Cauchy–Schwarz inequality (also known as the Bunyakovsky inequality, the Schwarz inequality, or the Cauchy–Bunyakovsky–Schwarz inequality) is a useful inequality encountered in many different settings, such as linear algebra, analysis, probability theory, and other areas. It is considered to be one of the most important inequalities in all of mathematics.[1] It has a number of generalizations, among them Hölder's inequality.

The inequality for sums was published by Augustin-Louis Cauchy (1821), while the corresponding inequality for integrals was first stated by Viktor Bunyakovsky (1859) and rediscovered by Hermann Amandus Schwarz (1888) (often misspelled "Schwartz").

Statement of the inequality

The Cauchy–Schwarz inequality states that for all vectors x and y of an inner product space,

$|\langle x, y \rangle|^2 \le \langle x, x \rangle \cdot \langle y, y \rangle,$

where $\langle \cdot, \cdot \rangle$ is the inner product. Equivalently, by taking the square root of both sides, and referring to the norms of the vectors, the inequality is written as

$|\langle x, y \rangle| \le \|x\| \cdot \|y\|.$

Moreover, the two sides are equal if and only if x and y are linearly dependent (or, in a geometrical sense, they are parallel or one of the vectors is equal to zero).

If $x_1, \ldots, x_n$ and $y_1, \ldots, y_n$ are any complex numbers and the inner product is the standard inner product then the inequality may be restated in a more explicit way as follows:

$|x_1 \bar{y}_1 + \cdots + x_n \bar{y}_n|^2 \le (|x_1|^2 + \cdots + |x_n|^2)(|y_1|^2 + \cdots + |y_n|^2).$

When viewed in this way the numbers x1, ..., xn, and y1, ..., yn are the components of x and y with respect to an orthonormal basis of V.

Even more compactly written:

$\Big| \sum_{i=1}^n x_i \bar{y}_i \Big|^2 \le \sum_{j=1}^n |x_j|^2 \sum_{k=1}^n |y_k|^2.$

Equality holds if and only if x and y are linearly dependent, that is, one is a scalar multiple of the other (which includes the case when one or both are zero).

The finite-dimensional case of this inequality for real vectors was proved by Cauchy in 1821, and in 1859 Cauchy's student Bunyakovsky noted that by taking limits one can obtain an integral form of Cauchy's inequality. The general result for an inner product space was obtained by Schwarz in 1885.

Proof

Let u, v be arbitrary vectors in a vector space V over F with an inner product, where F is the field of real or complex numbers. We prove the inequality

$|\langle u, v \rangle| \le \|u\|\, \|v\|.$

This inequality is trivial in the case v = 0, so we may assume from hereon that v is nonzero. In fact, as both sides of the inequality clearly multiply by the same factor when v is multiplied by a positive scaling factor, it suffices to consider only the case where v is normalized to have magnitude 1, as we shall assume for convenience in the rest of this section.


Any vector can be decomposed into a sum of components parallel and perpendicular to v; in particular, u can be decomposed into

$u = \langle u, v \rangle v + w,$

where w is a vector orthogonal to v (this orthogonality can be seen by noting that $\langle w, v \rangle = \langle u - \langle u, v \rangle v, v \rangle = \langle u, v \rangle - \langle u, v \rangle \langle v, v \rangle = 0$, so that $w = u - \langle u, v \rangle v$).

Accordingly, by the Pythagorean theorem (which is to say, by simply expanding out the calculation of $\langle u, u \rangle$), we find that

$\|u\|^2 = |\langle u, v \rangle|^2 + \|w\|^2 \ge |\langle u, v \rangle|^2$

(recall that ||v|| = 1), with equality if and only if w = 0 (i.e., in the case where u is a multiple of v). This establishes the theorem.

Notable special cases

Rn

In Euclidean space Rn with the standard inner product, the Cauchy–Schwarz inequality is

$\Big( \sum_{i=1}^n x_i y_i \Big)^2 \le \Big( \sum_{i=1}^n x_i^2 \Big) \Big( \sum_{i=1}^n y_i^2 \Big).$

To prove this form of the inequality, consider the following quadratic polynomial in z:

$p(z) = (x_1 z + y_1)^2 + \cdots + (x_n z + y_n)^2 = \Big( \sum_i x_i^2 \Big) z^2 + 2 \Big( \sum_i x_i y_i \Big) z + \sum_i y_i^2.$

Since it is nonnegative it has at most one real root in z, whence its discriminant is less than or equal to zero, that is,

$\Big( \sum_i x_i y_i \Big)^2 - \Big( \sum_i x_i^2 \Big) \Big( \sum_i y_i^2 \Big) \le 0,$

which yields the Cauchy–Schwarz inequality.

An equivalent proof for Rn starts with the summation below. Expanding the brackets and collecting together identical terms (albeit with different summation indices) we find:

$\sum_{i=1}^n \sum_{j=1}^n (x_i y_j - x_j y_i)^2 = 2 \Big( \sum_{i=1}^n x_i^2 \Big) \Big( \sum_{j=1}^n y_j^2 \Big) - 2 \Big( \sum_{i=1}^n x_i y_i \Big)^2.$

Because the left-hand side of the equation is a sum of the squares of real numbers it is greater than or equal to zero, thus:

$\Big( \sum_{i=1}^n x_i^2 \Big) \Big( \sum_{j=1}^n y_j^2 \Big) \ge \Big( \sum_{i=1}^n x_i y_i \Big)^2.$

This form is usually used when solving school math problems.

Yet another approach when n ≥ 2 (n = 1 is trivial) is to consider the plane containing x and y. More precisely, recoordinatize Rn with any orthonormal basis whose first two vectors span a subspace containing x and y. In this basis only $x_1, x_2$ and $y_1, y_2$ are nonzero, and the inequality reduces to the algebra of dot product in the plane, which is related to the angle between two vectors, from which we obtain the inequality:

$|x \cdot y| = \|x\|\, \|y\|\, |\cos\theta| \le \|x\|\, \|y\|.$

When n = 3 the Cauchy–Schwarz inequality can also be deduced from Lagrange's identity, which takes the form

$\langle x, x \rangle \cdot \langle y, y \rangle = |\langle x, y \rangle|^2 + |x \times y|^2,$

from which readily follows the Cauchy–Schwarz inequality.
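A quick numerical sketch of this deduction (assuming Python with NumPy; x and y are arbitrary sample vectors): the cross-product term is a squared norm, hence nonnegative, which is exactly what Cauchy-Schwarz asserts.

    import numpy as np

    rng = np.random.default_rng(4)
    x, y = rng.normal(size=3), rng.normal(size=3)

    cross = np.cross(x, y)
    lhs = (x @ x) * (y @ y)                 # <x, x> <y, y>
    rhs = (x @ y) ** 2 + cross @ cross      # <x, y>^2 + ||x cross y||^2

    assert np.isclose(lhs, rhs)             # Lagrange's identity for n = 3
    assert (x @ y) ** 2 <= lhs              # Cauchy-Schwarz follows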


L2

For the inner product space of square-integrable complex-valued functions, one has

$\left| \int f(x) \overline{g(x)}\, dx \right|^2 \le \int |f(x)|^2\, dx \cdot \int |g(x)|^2\, dx.$

A generalization of this is the Hölder inequality.

Use

The triangle inequality for the inner product is often shown as a consequence of the Cauchy–Schwarz inequality, as follows: given vectors x and y,

$\|x + y\|^2 = \langle x + y, x + y \rangle \le \|x\|^2 + 2 |\langle x, y \rangle| + \|y\|^2 \le \|x\|^2 + 2 \|x\|\, \|y\| + \|y\|^2 = (\|x\| + \|y\|)^2.$

Taking square roots gives the triangle inequality.

The Cauchy–Schwarz inequality allows one to extend the notion of "angle between two vectors" to any real inner product space, by defining:

$\cos \theta_{xy} = \frac{\langle x, y \rangle}{\|x\|\, \|y\|}.$

The Cauchy–Schwarz inequality proves that this definition is sensible, by showing that the right hand side lies in the interval [−1, 1], and justifies the notion that (real) Hilbert spaces are simply generalizations of the Euclidean space. It can also be used to define an angle in complex inner product spaces, by taking the absolute value of the right hand side, as is done when extracting a metric from quantum fidelity.

The Cauchy–Schwarz inequality is used to prove that the inner product is a continuous function with respect to the topology induced by the inner product itself.

The Cauchy–Schwarz inequality is usually used to show Bessel's inequality.

Probability theory

For the multivariate case,

$\operatorname{Var}(Y) \ge \operatorname{Cov}(Y, X)\, \operatorname{Var}(X)^{-1}\, \operatorname{Cov}(X, Y).$

For the univariate case, $\operatorname{Var}(Y) \ge \operatorname{Cov}(Y, X)^2 / \operatorname{Var}(X)$. Indeed, for random variables X and Y, the expectation of their product is an inner product. That is,

$\langle X, Y \rangle := E(XY),$

and so, by the Cauchy–Schwarz inequality,

$|E(XY)|^2 \le E(X^2)\, E(Y^2).$

Moreover, if μ = E(X) and ν = E(Y), then

$|\operatorname{Cov}(X, Y)|^2 \le \operatorname{Var}(X)\, \operatorname{Var}(Y),$

where Var denotes variance and Cov denotes covariance.
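The scalar inequality |Cov(X, Y)|² ≤ Var(X) Var(Y) can be observed on simulated data. An illustrative sketch, assuming Python with NumPy; the construction of the correlated samples is an arbitrary choice for the demonstration.

    import numpy as np

    rng = np.random.default_rng(5)
    X = rng.normal(size=100_000)
    Y = 0.5 * X + rng.normal(size=100_000)   # Y correlated with X

    C = np.cov(X, Y)                         # 2x2 sample covariance matrix
    cov_xy, var_x, var_y = C[0, 1], C[0, 0], C[1, 1]

    assert cov_xy ** 2 <= var_x * var_y      # Cauchy-Schwarz for covariances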


Generalizations

Various generalizations of the Cauchy–Schwarz inequality exist in the context of operator theory, e.g. for operator-convex functions, and operator algebras, where the domain and/or range of φ are replaced by a C*-algebra or W*-algebra. This section lists a few of such inequalities from the operator algebra setting, to give a flavor of results of this type.

Positive functionals on C*- and W*-algebras

One can discuss inner products as positive functionals. Given a Hilbert space L2(m), m being a finite measure, the inner product ⟨ · , · ⟩ gives rise to a positive functional φ by

$\varphi(g) = \langle g, 1 \rangle.$

Since ⟨ƒ, ƒ⟩ ≥ 0, φ(ƒ*ƒ) ≥ 0 for all ƒ in L2(m), where ƒ* is the pointwise conjugate of ƒ. So φ is positive. Conversely every positive functional φ gives a corresponding inner product ⟨ƒ, g⟩φ = φ(g*ƒ). In this language, the Cauchy–Schwarz inequality becomes

$|\varphi(g^* f)|^2 \le \varphi(f^* f)\, \varphi(g^* g),$

which extends verbatim to positive functionals on C*-algebras.

We now give an operator theoretic proof for the Cauchy–Schwarz inequality which passes to the C*-algebra setting. One can see from the proof that the Cauchy–Schwarz inequality is a consequence of the positivity and anti-symmetry inner-product axioms.

Consider the positive matrix

$M = \begin{pmatrix} f^* \\ g^* \end{pmatrix} \begin{pmatrix} f & g \end{pmatrix} = \begin{pmatrix} f^* f & f^* g \\ g^* f & g^* g \end{pmatrix}.$

Since φ is a positive linear map whose range, the complex numbers C, is a commutative C*-algebra, φ is completely positive. Therefore

$(\varphi \otimes I_2)(M) = \begin{pmatrix} \varphi(f^* f) & \varphi(f^* g) \\ \varphi(g^* f) & \varphi(g^* g) \end{pmatrix}$

is a positive 2 × 2 scalar matrix, which implies it has positive determinant:

$\varphi(f^* f)\, \varphi(g^* g) - |\varphi(g^* f)|^2 \ge 0.$

This is precisely the Cauchy–Schwarz inequality. If ƒ and g are elements of a C*-algebra, ƒ* and g* denote their respective adjoints.

We can also deduce from above that every positive linear functional is bounded, corresponding to the fact that the inner product is jointly continuous.

Positive maps

Positive functionals are special cases of positive maps. A linear map Φ between C*-algebras is said to be a positive map if a ≥ 0 implies Φ(a) ≥ 0. It is natural to ask whether inequalities of Schwarz-type exist for positive maps. In this more general setting, usually additional assumptions are needed to obtain such results.

Kadison-Schwarz inequality

The following theorem is named after Richard Kadison.

Theorem. If Φ is a unital positive map, then for every normal element a in its domain, we have Φ(a*a) ≥ Φ(a*)Φ(a) and Φ(a*a) ≥ Φ(a)Φ(a*).

This extends the fact φ(a*a) · 1 ≥ φ(a)*φ(a) = |φ(a)|², when φ is a linear functional. The case when a is self-adjoint, i.e. a = a*, is sometimes known as Kadison's inequality.


2-positive maps

When Φ is 2-positive, a stronger assumption than merely positive, one has something that looks very similar to the original Cauchy–Schwarz inequality:

Theorem (Modified Schwarz inequality for 2-positive maps). For a 2-positive map Φ between C*-algebras, for all a, b in its domain,

i) Φ(a)*Φ(a) ≤ ||Φ(1)|| Φ(a*a).
ii) ||Φ(a*b)||² ≤ ||Φ(a*a)|| · ||Φ(b*b)||.

A simple argument for ii) is as follows. Consider the positive matrix

$M = \begin{pmatrix} a^* & 0 \\ b^* & 0 \end{pmatrix} \begin{pmatrix} a & b \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} a^* a & a^* b \\ b^* a & b^* b \end{pmatrix}.$

By 2-positivity of Φ,

$(\Phi \otimes I_2)(M) = \begin{pmatrix} \Phi(a^* a) & \Phi(a^* b) \\ \Phi(b^* a) & \Phi(b^* b) \end{pmatrix}$

is positive. The desired inequality then follows from the properties of positive 2 × 2 (operator) matrices.

Part i) is analogous. One can replace the matrix by

$\begin{pmatrix} 1 & a \\ a^* & a^* a \end{pmatrix}.$

Physics

The general formulation of the Heisenberg uncertainty principle is derived using the Cauchy–Schwarz inequality in the Hilbert space of quantum observables.

References

In-line references

[1] The Cauchy-Schwarz Master Class: an Introduction to the Art of Mathematical Inequalities, Ch. 1 (http://www-stat.wharton.upenn.edu/~steele/Publications/Books/CSMC/CSMC_index.html) by J. Michael Steele.

General references

• Bityutskov, V.I. (2001), "Bunyakovskii inequality" (http://eom.springer.de/b/b017770.htm), in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104
• Bouniakowsky, V. (1859), "Sur quelques inegalités concernant les intégrales aux différences finies" (http://www-stat.wharton.upenn.edu/~steele/Publications/Books/CSMC/bunyakovsky.pdf) (PDF), Mem. Acad. Sci. St. Petersbourg 7 (1): 9
• Cauchy, A. (1821), Oeuvres 2, III, p. 373
• Dragomir, S. S. (2003), "A survey on Cauchy-Bunyakovsky-Schwarz type discrete inequalities" (http://jipam.vu.edu.au/article.php?sid=301), JIPAM. J. Inequal. Pure Appl. Math. 4 (3): 142 pp
• Kadison, R.V. (1952), "A generalized Schwarz inequality and algebraic invariants for operator algebras" (http://jstor.org/stable/1969657), Ann. of Math. 56 (3): 494, doi:10.2307/1969657
• Lohwater, Arthur (1982), Introduction to Inequalities (http://www.mediafire.com/?1mw1tkgozzu), online e-book in PDF format
• Paulsen, V. (2003), Completely Bounded Maps and Operator Algebras, Cambridge University Press
• Schwarz, H. A. (1888), "Über ein Flächen kleinsten Flächeninhalts betreffendes Problem der Variationsrechnung" (http://www-stat.wharton.upenn.edu/~steele/Publications/Books/CSMC/Schwarz.pdf) (PDF), Acta Societatis scientiarum Fennicae XV: 318
• Solomentsev, E.D. (2001), "Cauchy inequality" (http://eom.springer.de/C/c020880.htm), in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104


• Steele, J.M. (2004), The Cauchy–Schwarz Master Class (http://www-stat.wharton.upenn.edu/~steele/Publications/Books/CSMC/CSMC_index.html), Cambridge University Press, ISBN 052154677X

External links

• Earliest Uses: The entry on the Cauchy-Schwarz inequality has some historical information. (http://jeff560.tripod.com/c.html)

Orthonormal basis

In mathematics, particularly linear algebra, an orthonormal basis for an inner product space V with finite dimension is a basis for V whose vectors are orthonormal.[1] [2] [3] For example, the standard basis for a Euclidean space Rn is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for Rn arises in this fashion.

For a general inner product space V, an orthonormal basis can be used to define normalized orthogonal coordinates on V. Under these coordinates, the inner product becomes the dot product of vectors. Thus the presence of an orthonormal basis reduces the study of a finite-dimensional inner product space to the study of Rn under the dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be obtained from an arbitrary basis using the Gram–Schmidt process.

In functional analysis, the concept of an orthonormal basis can be generalized to arbitrary (infinite-dimensional) inner product spaces (or pre-Hilbert spaces).[4] Given a pre-Hilbert space H, an orthonormal basis for H is an orthonormal set of vectors with the property that every vector in H can be written as an infinite linear combination of the vectors in the basis. In this case, the orthonormal basis is sometimes called a Hilbert basis for H. Note that an orthonormal basis in this sense is not generally a Hamel basis, since infinite linear combinations are required. Specifically, the linear span of the basis must be dense in H, but it may not be the entire space.

Examples

• The set of vectors e1 = (1, 0, 0), e2 = (0, 1, 0), e3 = (0, 0, 1) (the standard basis) forms an orthonormal basis of R3.

Proof: A straightforward computation shows that the inner products of these vectors equal zero, ⟨e1, e2⟩ = ⟨e1, e3⟩ = ⟨e2, e3⟩ = 0, and that each of their magnitudes equals one, ||e1|| = ||e2|| = ||e3|| = 1. This means {e1, e2, e3} is an orthonormal set. All vectors (x, y, z) in R3 can be expressed as a sum of the basis vectors scaled,

$(x, y, z) = x e_1 + y e_2 + z e_3,$

so {e1, e2, e3} spans R3 and hence must be a basis. It may also be shown that the standard basis rotated about an axis through the origin or reflected in a plane through the origin forms an orthonormal basis of R3.

• The set {fn : n ∈ Z} with fn(x) = exp(2πinx) forms an orthonormal basis of the complex space L2([0, 1]). This is fundamental to the study of Fourier series.
• The set {eb : b ∈ B} with eb(c) = 1 if b = c and 0 otherwise forms an orthonormal basis of ℓ2(B).
• Eigenfunctions of a Sturm–Liouville eigenproblem.
• An orthogonal matrix is a matrix whose column vectors form an orthonormal set.


Basic formula

If B is an orthogonal basis of H, then every element x of H may be written as

$x = \sum_{b \in B} \frac{\langle x, b \rangle}{\|b\|^2}\, b.$

When B is orthonormal, we have instead

$x = \sum_{b \in B} \langle x, b \rangle\, b,$

and the norm of x can be given by

$\|x\|^2 = \sum_{b \in B} |\langle x, b \rangle|^2.$

Even if B is uncountable, only countably many terms in this sum will be non-zero, and the expression is therefore well-defined. This sum is also called the Fourier expansion of x, and the formula is usually known as Parseval's identity. See also Generalized Fourier series.

If B is an orthonormal basis of H, then H is isomorphic to ℓ2(B) in the following sense: there exists a bijective linear map Φ : H → ℓ2(B) such that

$\langle \Phi(x), \Phi(y) \rangle = \langle x, y \rangle$

for all x and y in H.
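In finite dimensions the basic formula and Parseval's identity can be verified directly. The sketch below assumes Python with NumPy; the orthonormal basis is obtained from a QR factorization of an arbitrary random matrix, purely for illustration.

    import numpy as np

    rng = np.random.default_rng(6)
    Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))   # columns of Q: an orthonormal basis of R^3
    basis = [Q[:, j] for j in range(3)]

    x = np.array([1.0, -2.0, 0.5])
    coeffs = [x @ b for b in basis]                 # <x, b> for each basis vector b

    recon = sum(c * b for c, b in zip(coeffs, basis))
    assert np.allclose(recon, x)                    # x = sum_b <x, b> b
    assert np.isclose(sum(c ** 2 for c in coeffs), x @ x)   # Parseval's identity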

Incomplete orthogonal sets

Given a Hilbert space H and a set S of mutually orthogonal vectors in H, we can take the smallest closed linear subspace V of H containing S. Then S will be an orthogonal basis of V; which may of course be smaller than H itself, being an incomplete orthogonal set, or be H, when it is a complete orthogonal set.

Existence

Using Zorn's lemma and the Gram–Schmidt process (or more simply well-ordering and transfinite recursion), one can show that every Hilbert space admits a basis and thus an orthonormal basis; furthermore, any two orthonormal bases of the same space have the same cardinality. A Hilbert space is separable if and only if it admits a countable orthonormal basis.

As a homogeneous space

The set of orthonormal bases for a space is a principal homogeneous space for the orthogonal group O(n), and is called the Stiefel manifold of orthonormal n-frames.

In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given an orthogonal space, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a given basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthogonal basis to any other orthogonal basis.

The other Stiefel manifolds $V_k(\mathbb{R}^n)$ for k < n of incomplete orthonormal bases (orthonormal k-frames) are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any k-frame can be taken to any other k-frame by an orthogonal map, but this map is not uniquely determined.


References

[1] Lay, David C. (2006). Linear Algebra and Its Applications (3rd ed.). Addison–Wesley. ISBN 0-321-28713-4.
[2] Strang, Gilbert (2006). Linear Algebra and Its Applications (4th ed.). Brooks Cole. ISBN 0-03-010567-6.
[3] Axler, Sheldon (2002). Linear Algebra Done Right (2nd ed.). Springer. ISBN 0-387-98258-2.
[4] Rudin, Walter (1987). Real & Complex Analysis. McGraw-Hill. ISBN 0-07-054234-1.

Vector space

Vector addition and scalar multiplication: a vector v (blue) is added to another vector w (red, upper illustration). Below, w is stretched by a factor of 2, yielding the sum v + 2·w.

A vector space is a mathematical structure formed by a collection of vectors: objects that may be added together and multiplied ("scaled") by numbers, called scalars in this context. Scalars are often taken to be real numbers, but one may also consider vector spaces with scalar multiplication by complex numbers, rational numbers, or even more general fields instead. The operations of vector addition and scalar multiplication have to satisfy certain requirements, called axioms, listed below. An example of a vector space is that of Euclidean vectors, which are often used to represent physical quantities such as forces: any two forces (of the same type) can be added to yield a third, and the multiplication of a force vector by a real factor is another force vector. In the same vein, but in more geometric parlance, vectors representing displacements in the plane or in three-dimensional space also form vector spaces.

Vector spaces are the subject of linear algebra and are well understood from this point of view, since vector spaces are characterized by their dimension, which, roughly speaking, specifies the number of independent directions in the space. The theory is further enhanced by introducing on a vector space some additional structure, such as a norm or inner product. Such spaces arise naturally in mathematical analysis, mainly in the guise of infinite-dimensional function spaces whose vectors are functions. Analytical problems call for the ability to decide if a sequence of vectors converges to a given vector. This is accomplished by considering vector spaces with additional data, mostly spaces endowed with a suitable topology, thus allowing the consideration of proximity and continuity issues. These topological vector spaces, in particular Banach spaces and Hilbert spaces, have a richer theory.

Historically, the first ideas leading to vector spaces can be traced back as far as 17th-century analytic geometry, matrices, systems of linear equations, and Euclidean vectors. The modern, more abstract treatment, first formulated by Giuseppe Peano in the late 19th century, encompasses more general objects than Euclidean space, but much of the theory can be seen as an extension of classical geometric ideas like lines, planes and their higher-dimensional analogs.

Today, vector spaces are applied throughout mathematics, science and engineering. They are the appropriate linear-algebraic notion to deal with systems of linear equations; offer a framework for Fourier expansion, which is employed in image compression routines; or provide an environment that can be used for solution techniques for partial differential equations. Furthermore, vector spaces furnish an abstract, coordinate-free way of dealing with geometrical and physical objects such as tensors. This in turn allows the examination of local properties of manifolds by linearization techniques. Vector spaces may be generalized in several directions, leading to more advanced notions in geometry and abstract algebra.



Introduction and definition

First example: arrows in the plane
The concept of vector space will first be explained by describing two particular examples. The first example of a vector space consists of arrows in a fixed plane, starting at one fixed point. This is used in physics to describe forces or velocities. Given any two such arrows, v and w, the parallelogram spanned by these two arrows contains one diagonal arrow that starts at the origin, too. This new arrow is called the sum of the two arrows and is denoted v + w. Another operation that can be done with arrows is scaling: given any positive real number a, the arrow that has the same direction as v, but is dilated or shrunk by multiplying its length by a, is called multiplication of v by a. It is denoted a · v. When a is negative, a · v is defined as the arrow pointing in the opposite direction, instead.

The following shows a few examples: if a = 2, the resulting vector a · w has the same direction as w, but is stretched to the double length of w (right image below). Equivalently, 2 · w is the sum w + w. Moreover, (−1) · v = −v has the opposite direction and the same length as v (blue vector pointing down in the right image).

Second example: ordered pairs of numbers
A second key example of a vector space is provided by pairs of real numbers x and y. (The order of the components x and y is significant, so such a pair is also called an ordered pair.) Such a pair is written as (x, y). The sum of two such pairs and multiplication of a pair with a number is defined as follows:

(x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)

and

a · (x, y) = (ax, ay).
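These componentwise definitions translate directly into code; the following short Python sketch (an illustrative addition, not from the original article) implements and exercises them:

def add(p, q):
    # Sum of two ordered pairs, componentwise.
    return (p[0] + q[0], p[1] + q[1])

def scale(a, p):
    # Multiplication of a pair by a number a.
    return (a * p[0], a * p[1])

assert add((1, 2), (3, 4)) == (4, 6)
assert scale(2, (3, 4)) == (6, 8)
# Commutativity of addition, one of the axioms discussed below:
assert add((1, 2), (3, 4)) == add((3, 4), (1, 2))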

Definition
A vector space over a field F is a set V together with two binary operations that satisfy the eight axioms listed below. Elements of V are called vectors. Elements of F are called scalars. In this article, vectors are differentiated from scalars by boldface.[1] In the two examples above, our set consists of the planar arrows with fixed starting point and of pairs of real numbers, respectively, while our field is the real numbers. The first operation, vector addition, takes any two vectors v and w and assigns to them a third vector which is commonly written as v + w, and called the sum of these two vectors. The second operation takes any scalar a and any vector v and gives another vector a · v. In view of the first example, where the multiplication is done by rescaling the vector v by a scalar a, the multiplication is called scalar multiplication of v by a.

To qualify as a vector space, the set V and the operations of addition and multiplication have to adhere to a number of requirements called axioms.[2] In the list below, let u, v, w be arbitrary vectors in V, and a, b be scalars in F.



• Associativity of addition: u + (v + w) = (u + v) + w.
• Commutativity of addition: v + w = w + v.
• Identity element of addition: There exists an element 0 ∈ V, called the zero vector, such that v + 0 = v for all v ∈ V.
• Inverse elements of addition: For all v ∈ V, there exists an element w ∈ V, called the additive inverse of v, such that v + w = 0. The additive inverse is denoted −v.
• Distributivity of scalar multiplication with respect to vector addition: a(v + w) = av + aw.
• Distributivity of scalar multiplication with respect to field addition: (a + b)v = av + bv.
• Compatibility of scalar multiplication with field multiplication: a(bv) = (ab)v.[3]
• Identity element of scalar multiplication: 1v = v, where 1 denotes the multiplicative identity in F.

These axioms generalize properties of the vectors introduced in the above examples. Indeed, the result of addition of two ordered pairs (as in the second example above) does not depend on the order of the summands:

(xv, yv) + (xw, yw) = (xw, yw) + (xv, yv).

Likewise, in the geometric example of vectors as arrows, v + w = w + v, since the parallelogram defining the sum of the vectors is independent of the order of the vectors. All other axioms can be checked in a similar manner in both examples. Thus, by disregarding the concrete nature of the particular type of vectors, the definition incorporates these two and many more examples in one notion of vector space.

Subtraction of two vectors and division by a (non-zero) scalar can be performed via

v − w = v + (−w),
v / a = (1 / a) · v.

The concept introduced above is called a real vector space. The word "real" refers to the fact that vectors can be multiplied by real numbers, as opposed to, say, complex numbers. When scalar multiplication is defined for complex numbers, the denomination complex vector space is used. These two cases are the ones used most often in engineering. The most general definition of a vector space allows scalars to be elements of a fixed field F; the notion is then known as an F-vector space or a vector space over F. A field is, essentially, a set of numbers possessing addition, subtraction, multiplication and division operations.[4] For example, rational numbers also form a field.

In contrast to the intuition stemming from vectors in the plane and higher-dimensional cases, there is, in general vector spaces, no notion of nearness, angles or distances. To deal with such matters, particular types of vector spaces are introduced; see below.

Alternative formulations and elementary consequences
The requirement that vector addition and scalar multiplication be binary operations includes (by definition of binary operations) a property called closure: that u + v and av are in V for all a in F, and u, v in V. Some older sources mention these properties as separate axioms.[5]

In the parlance of abstract algebra, the first four axioms can be subsumed by requiring the set of vectors to be an abelian group under addition. The remaining axioms give this group an F-module structure. In other words, there is a ring homomorphism ƒ from the field F into the endomorphism ring of the group of vectors. Then scalar multiplication av is defined as (ƒ(a))(v).[6]

There are a number of direct consequences of the vector space axioms. Some of them derive from elementary group theory, applied to the additive group of vectors: for example, the zero vector 0 of V and the additive inverse −v of any vector v are unique. Other properties follow from the distributive law: for example, av equals 0 if and only if a equals 0 or v equals 0.

History
Vector spaces stem from affine geometry, via the introduction of coordinates in the plane or three-dimensional space. Around 1636, Descartes and Fermat founded analytic geometry by equating solutions to an equation of two variables with points on a plane curve.[7] To achieve geometric solutions without using coordinates, Bolzano introduced, in 1804, certain operations on points, lines and planes, which are predecessors of vectors.[8] This work was made use of in the conception of barycentric coordinates by Möbius in 1827.[9] The foundation of the definition of vectors was Bellavitis' notion of the bipoint, an oriented segment one of whose ends is the origin and the other one a target. Vectors were reconsidered with the presentation of complex numbers by Argand and Hamilton and the inception of quaternions and biquaternions by the latter.[10] They are elements in R2, R4, and R8; treating them using linear combinations goes back to Laguerre in 1867, who also defined systems of linear equations.

In 1857, Cayley introduced the matrix notation which allows for a harmonization and simplification of linear maps. Around the same time, Grassmann studied the barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with operations.[11] In his work, the concepts of linear independence and dimension, as well as scalar products, are present. Actually Grassmann's 1844 work exceeds the framework of vector spaces, since his considering multiplication, too, led him to what are today called algebras. Peano was the first to give the modern definition of vector spaces and linear maps in 1888.[12]

An important development of vector spaces is due to the construction of function spaces by Lebesgue. This was later formalized by Banach and Hilbert, around 1920.[13] At that time, algebra and the new field of functional analysis began to interact, notably with key concepts such as spaces of p-integrable functions and Hilbert spaces.[14] Vector spaces, including infinite-dimensional ones, then became a firmly established notion, and many mathematical branches started making use of this concept.

Examples

Coordinate and function spaces
The first example of a vector space over a field F is the field itself, equipped with its standard addition and multiplication. This is the case n = 1 of a vector space usually denoted Fn, known as the coordinate space, whose elements are n-tuples (sequences of length n):

(a1, a2, ..., an), where the ai are elements of F.[15]

The case F = R and n = 2 was discussed in the introduction above. Infinite coordinate sequences, and more generally functions from any fixed set Ω to a field F, also form vector spaces, by performing addition and scalar multiplication pointwise. That is, the sum of two functions ƒ and g is given by

(ƒ + g)(w) = ƒ(w) + g(w)

and similarly for multiplication. Such function spaces occur in many geometric situations, when Ω is the real line or an interval, or other subsets of Rn. Many notions in topology and analysis, such as continuity, integrability or differentiability, are well-behaved with respect to linearity: sums and scalar multiples of functions possessing such a property still have that property.[16] Therefore, the sets of such functions are vector spaces. They are studied in greater detail using the methods of functional analysis, see below. Algebraic constraints also yield vector spaces: the vector space F[x] is given by polynomial functions:

ƒ(x) = r0 + r1x + ... + rn−1xⁿ⁻¹ + rnxⁿ, where the coefficients r0, ..., rn are in F.[17]



Linear equations
Systems of homogeneous linear equations are closely tied to vector spaces.[18] For example, the solutions of

a + 3b + c = 0

4a + 2b + 2c = 0

are given by triples with arbitrary a, b = a/2, and c = −5a/2. They form a vector space: sums and scalar multiples of such triples still satisfy the same ratios of the three variables; thus they are solutions, too. Matrices can be used to condense multiple linear equations as above into one vector equation, namely

Ax = 0,

where

A = [ 1 3 1
      4 2 2 ]

is the matrix containing the coefficients of the given equations, x is the vector (a, b, c), Ax denotes the matrix product, and 0 = (0, 0) is the zero vector. In a similar vein, the solutions of homogeneous linear differential equations form vector spaces. For example,

ƒ''(x) + 2ƒ'(x) + ƒ(x) = 0

yields ƒ(x) = a e⁻ˣ + bx e⁻ˣ, where a and b are arbitrary constants, and eˣ is the natural exponential function.
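The solution space of the linear system can be checked numerically. The following NumPy sketch (an illustrative addition; the rank tolerance 1e-12 is an arbitrary choice) computes the null space of the coefficient matrix A above and confirms the ratios b = a/2 and c = −5a/2:

import numpy as np

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])

# The solution space is the null space of A; read it off from the SVD.
_, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-12))
null_basis = Vt[rank:]          # rows spanning the space of solutions

a, b, c = null_basis[0]
assert np.isclose(b, a / 2) and np.isclose(c, -5 * a / 2)
assert np.allclose(A @ null_basis.T, 0.0)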

Field extensions
Field extensions F / E ("F over E") provide another class of examples of vector spaces, particularly in algebra and algebraic number theory: a field F containing a smaller field E becomes an E-vector space, by the given multiplication and addition operations of F.[19] For example, the complex numbers are a vector space over R. A particularly interesting type of field extension in number theory is Q(α), the extension of the rational numbers Q by a fixed complex number α; it is the smallest field containing the rationals and α. Its dimension as a vector space over Q depends on the choice of α.

Bases and dimension

A vector v in R2 (blue) expressed in terms of different bases: using the standard basis of R2, v = xe1 + ye2 (black), and using a different, non-orthogonal basis: v = f1 + f2 (red).

Bases reveal the structure of vector spaces in a concise way. A basis is defined as a (finite or infinite) set B = {vi}i∈I of vectors vi indexed by some index set I that spans the whole space and is minimal with this property. The former means that any vector v can be expressed as a finite sum (called a linear combination) of the basis elements

v = a1vi1 + a2vi2 + ... + anvin,

Page 58: Vector and Matrices - Some Articles

Vector space 55

where the ak are scalars and vik (k = 1, ..., n) elements of the basis B. Minimality, on the other hand, is made formal by requiring B to be linearly independent. A set of vectors is said to be linearly independent if none of its elements can be expressed as a linear combination of the remaining ones. Equivalently, an equation

a1vi1 + a2vi2 + ... + anvin = 0

can only hold if all scalars a1, ..., an equal zero. Linear independence ensures that the representation of any vector in terms of basis vectors, the existence of which is guaranteed by the requirement that the basis span V, is unique.[20]

This is referred to as the coordinatized viewpoint of vector spaces, by viewing basis vectors as generalizations of coordinate vectors x, y, z in R3 and similarly in higher-dimensional cases. The coordinate vectors e1 = (1, 0, ..., 0), e2 = (0, 1, 0, ..., 0), to en = (0, 0, ..., 0, 1) form a basis of Fn, called the standard basis, since any vector (x1, x2, ..., xn) can be uniquely expressed as a linear combination of these vectors:

(x1, x2, ..., xn) = x1(1, 0, ..., 0) + x2(0, 1, 0, ..., 0) + ... + xn(0, ..., 0, 1) = x1e1 + x2e2 + ... + xnen.

Every vector space has a basis. This follows from Zorn's lemma, an equivalent formulation of the axiom of choice.[21] Given the other axioms of Zermelo-Fraenkel set theory, the existence of bases is equivalent to the axiom of choice.[22] The ultrafilter lemma, which is weaker than the axiom of choice, implies that all bases of a given vector space have the same number of elements, or cardinality.[23] It is called the dimension of the vector space, denoted dim V. If the space is spanned by finitely many vectors, the above statements can be proven without such fundamental input from set theory.[24]

The dimension of the coordinate space Fn is n, by the basis exhibited above. The dimension of the polynomial ring F[x] introduced above is countably infinite; a basis is given by 1, x, x², ... A fortiori, the dimension of more general function spaces, such as the space of functions on some (bounded or unbounded) interval, is infinite.[25] Under suitable regularity assumptions on the coefficients involved, the dimension of the solution space of a homogeneous ordinary differential equation equals the degree of the equation.[26] For example, the solution space for the above equation is generated by e⁻ˣ and xe⁻ˣ. These two functions are linearly independent over R, so the dimension of this space is two, as is the degree of the equation.

The dimension (or degree) of the field extension Q(α) over Q depends on α. If α satisfies some polynomial equation

qnαⁿ + qn−1αⁿ⁻¹ + ... + q0 = 0, with rational coefficients qn, ..., q0

("α is algebraic"), the dimension is finite. More precisely, it equals the degree of the minimal polynomial having α as a root.[27] For example, the complex numbers C are a two-dimensional real vector space, generated by 1 and the imaginary unit i. The latter satisfies i² + 1 = 0, an equation of degree two. Thus, C is a two-dimensional R-vector space (and, as any field, one-dimensional as a vector space over itself, C). If α is not algebraic, the dimension of Q(α) over Q is infinite. For instance, for α = π there is no such equation; in other words, π is transcendental.[28]
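For algebraic α, the minimal polynomial (and hence the dimension of Q(α) over Q) can be computed symbolically; the following sketch uses the SymPy library (an illustrative addition; the two sample values of α are arbitrary):

from sympy import sqrt, minimal_polynomial
from sympy.abc import x

# alpha = sqrt(2) is algebraic of degree 2, so Q(sqrt(2)) has dimension 2 over Q.
print(minimal_polynomial(sqrt(2), x))            # x**2 - 2

# alpha = sqrt(2) + sqrt(3) has degree 4, so the dimension is 4.
print(minimal_polynomial(sqrt(2) + sqrt(3), x))  # x**4 - 10*x**2 + 1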

Linear maps and matrices
The relation of two vector spaces can be expressed by linear maps or linear transformations. They are functions that reflect the vector space structure, i.e., they preserve sums and scalar multiplication:

ƒ(x + y) = ƒ(x) + ƒ(y) and ƒ(a · x) = a · ƒ(x) for all x and y in V, all a in F.[29]

An isomorphism is a linear map ƒ : V → W such that there exists an inverse map g : W → V, which is a map such that the two possible compositions ƒ ∘ g : W → W and g ∘ ƒ : V → V are identity maps. Equivalently, ƒ is both one-to-one (injective) and onto (surjective).[30] If there exists an isomorphism between V and W, the two spaces are said to be isomorphic; they are then essentially identical as vector spaces, since all identities holding in V are, via ƒ, transported to similar ones in W, and vice versa via g.



Describing an arrow vector v by its coordinates x and y yields an isomorphism of vector spaces.

For example, the vector spaces in the introduction are isomorphic: a planar arrow v departing at the origin of some (fixed) coordinate system can be expressed as an ordered pair by considering the x- and y-component of the arrow, as shown in the image at the right. Conversely, given a pair (x, y), the arrow going by x to the right (or to the left, if x is negative), and y up (down, if y is negative) gives back the arrow v.

Linear maps V → W between two fixed vector spaces form a vector space HomF(V, W), also denoted L(V, W).[31] The space of linear maps from V to F is called the dual vector space, denoted V∗.[32] Via the injective natural map V → V∗∗, any vector space can be embedded into its bidual; the map is an isomorphism if and only if the space is finite-dimensional.[33]

Once a basis of V is chosen, linear maps ƒ : V → W are completely determined by specifying the images of the basis vectors, because any element of V is expressed uniquely as a linear combination of them.[34] If dim V = dim W, a 1-to-1 correspondence between fixed bases of V and W gives rise to a linear map that maps any basis element of V to the corresponding basis element of W. It is an isomorphism, by its very definition.[35] Therefore, two vector spaces are isomorphic if their dimensions agree and vice versa. Another way to express this is that any vector space is completely classified (up to isomorphism) by its dimension, a single number. In particular, any n-dimensional F-vector space V is isomorphic to Fn. There is, however, no "canonical" or preferred isomorphism; actually an isomorphism φ: Fn → V is equivalent to the choice of a basis of V, by mapping the standard basis of Fn to V, via φ. Appending an automorphism, i.e. an isomorphism ψ: V → V, yields another isomorphism ψ∘φ: Fn → V, the composition of ψ and φ, and therefore a different basis of V. The freedom of choosing a convenient basis is particularly useful in the infinite-dimensional context, see below.

Matrices

A typical matrix

Matrices are a useful notion to encode linear maps.[36] They are written as a rectangular array of scalars as in the image at the right. Any m-by-n matrix A gives rise to a linear map from Fn to Fm, by the following

x = (x1, x2, ..., xn) ↦ (Σj a1jxj, Σj a2jxj, ..., Σj amjxj), where Σ denotes summation,

or, using the matrix multiplication of the matrix A with the coordinate vector x:

x ↦ Ax.

Moreover, after choosing bases of V and W, any linear map ƒ : V → W is uniquely represented by a matrix via this assignment.[37]
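The linearity of the map x ↦ Ax is easy to confirm numerically; a brief NumPy sketch (an illustrative addition; the matrix and vectors are arbitrary choices):

import numpy as np

# A fixed 2-by-3 matrix defines a linear map from F^3 to F^2 via x -> Ax.
A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])
x = np.array([1.0, 0.5, -2.5])
y = np.array([0.0, 1.0, 2.0])

assert np.allclose(A @ (x + y), A @ x + A @ y)    # preserves sums
assert np.allclose(A @ (3.0 * x), 3.0 * (A @ x))  # preserves scalar multiples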



The volume of this parallelepiped is the absolute value of the determinant of the 3-by-3 matrix formed by the vectors r1, r2, and r3.

The determinant det(A) of a square matrix A is a scalar that tells whether the associated map is an isomorphism or not: for this it is sufficient and necessary that the determinant is nonzero.[38] The linear transformation of Rn corresponding to a real n-by-n matrix is orientation preserving if and only if the determinant is positive.

Eigenvalues and eigenvectors

Endomorphisms, linear maps ƒ : V → V, are particularly important since in this case vectors v can be compared with their image under ƒ, ƒ(v). Any nonzero vector v satisfying λv = ƒ(v), where λ is a scalar, is called an eigenvector of ƒ with eigenvalue λ.[39] [40] Equivalently, v is an element of the kernel of the difference ƒ − λ · Id (where Id is the identity map V → V). If V is finite-dimensional, this can be rephrased using determinants: ƒ having eigenvalue λ is equivalent to

det(ƒ − λ · Id) = 0.

By spelling out the definition of the determinant, the expression on the left hand side can be seen to be a polynomial function in λ, called the characteristic polynomial of ƒ.[41] If the field F is large enough to contain a zero of this polynomial (which automatically happens for F algebraically closed, such as F = C), any linear map has at least one eigenvector. The vector space V may or may not possess an eigenbasis, a basis consisting of eigenvectors. This phenomenon is governed by the Jordan canonical form of the map.[42] The set of all eigenvectors corresponding to a particular eigenvalue of ƒ forms a vector space known as the eigenspace corresponding to the eigenvalue (and ƒ) in question. To achieve the spectral theorem, the corresponding statement in the infinite-dimensional case, the machinery of functional analysis is needed, see below.
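Both characterizations can be checked numerically for a map given by a matrix; the following NumPy sketch (an illustrative addition, using an arbitrarily chosen symmetric 2-by-2 matrix):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
eigenvalues, eigenvectors = np.linalg.eig(A)  # columns are eigenvectors

for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)        # f(v) = lambda * v

# Each eigenvalue is a zero of the characteristic polynomial det(A - lambda * Id).
for lam in eigenvalues:
    assert np.isclose(np.linalg.det(A - lam * np.eye(2)), 0.0)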

Basic constructions
In addition to the above concrete examples, there are a number of standard linear algebraic constructions that yield vector spaces related to given ones. In addition to the definitions given below, they are also characterized by universal properties, which determine an object X by specifying the linear maps from X to any other vector space.

Subspaces and quotient spaces



A line passing through the origin (blue, thick) in R3 is a linear subspace. It is the intersection of two planes (green and yellow).

A nonempty subset W of a vector space V that is closed under addition and scalar multiplication (and therefore contains the 0-vector of V) is called a subspace of V.[43] Subspaces of V are vector spaces (over the same field) in their own right. The intersection of all subspaces containing a given set S of vectors is called its span, and is the smallest subspace of V containing the set S. Expressed in terms of elements, the span is the subspace consisting of all the linear combinations of elements of S.[44]

The counterpart to subspaces are quotient vector spaces.[45] Given any subspace W ⊂ V, the quotient space V/W ("V modulo W") is defined as follows: as a set, it consists of v + W = {v + w : w ∈ W}, where v is an arbitrary vector in V. The sum of two such elements v1 + W and v2 + W is (v1 + v2) + W, and scalar multiplication is given by a · (v + W) = (a · v) + W. The key point in this definition is that v1 + W = v2 + W if and only if the difference of v1 and v2 lies in W.[46] This way, the quotient space "forgets" information that is contained in the subspace W.

The kernel ker(ƒ) of a linear map ƒ: V → W consists of vectors v that are mapped to 0 in W.[47] Both kernel and image im(ƒ) = {ƒ(v) : v ∈ V} are subspaces of V and W, respectively.[48] The existence of kernels and images is part of the statement that the category of vector spaces (over a fixed field F) is an abelian category, i.e. a corpus of mathematical objects and structure-preserving maps between them (a category) that behaves much like the category of abelian groups.[49] Because of this, many statements such as the first isomorphism theorem (also called rank-nullity theorem in matrix-related terms)

V / ker(ƒ) ≅ im(ƒ)

and the second and third isomorphism theorems can be formulated and proven in a way very similar to the corresponding statements for groups.

An important example is the kernel of a linear map x ↦ Ax for some fixed matrix A, as above. The kernel of this map is the subspace of vectors x such that Ax = 0, which is precisely the set of solutions to the system of homogeneous linear equations belonging to A. This concept also extends to linear differential equations

a0ƒ + a1 (dƒ/dx) + a2 (d²ƒ/dx²) + ... + an (dⁿƒ/dxⁿ) = 0,

where the coefficients ai are functions in x, too. In the corresponding map

ƒ ↦ D(ƒ) = a0ƒ + a1 (dƒ/dx) + ... + an (dⁿƒ/dxⁿ),

the derivatives of the function ƒ appear linearly (as opposed to ƒ''(x)², for example). Since differentiation is a linear procedure (i.e., (ƒ + g)' = ƒ' + g' and (c·ƒ)' = c·ƒ' for a constant c), this assignment is linear, called a linear differential operator. In particular, the solutions to the differential equation D(ƒ) = 0 form a vector space (over R or C).



Direct product and direct sum

The direct product ∏i∈I Vi of a family of vector spaces Vi consists of the set of all tuples (vi)i∈I, which specify for each index i in some index set I an element vi of Vi.[50] Addition and scalar multiplication are performed componentwise. A variant of this construction is the direct sum ⊕i∈I Vi (also called coproduct), where only tuples with finitely many nonzero vectors are allowed. If the index set I is finite, the two constructions agree, but they differ otherwise.

Tensor product
The tensor product V ⊗F W, or simply V ⊗ W, of two vector spaces V and W is one of the central notions of multilinear algebra, which deals with extending notions such as linear maps to several variables. A map g: V × W → X is called bilinear if g is linear in both variables v and w. That is to say, for fixed w the map v ↦ g(v, w) is linear in the sense above, and likewise for fixed v.

The tensor product is a particular vector space that is a universal recipient of bilinear maps g, as follows. It is defined as the vector space consisting of finite (formal) sums of symbols called tensors

v1 ⊗ w1 + v2 ⊗ w2 + ... + vn ⊗ wn,

subject to the rules
• a · (v ⊗ w) = (a · v) ⊗ w = v ⊗ (a · w), where a is a scalar,
• (v1 + v2) ⊗ w = v1 ⊗ w + v2 ⊗ w, and
• v ⊗ (w1 + w2) = v ⊗ w1 + v ⊗ w2.[51]

Commutative diagram depicting the universal property of the tensor product.

These rules ensure that the map ƒ from V × W to V ⊗ W that maps a tuple (v, w) to v ⊗ w is bilinear. The universality states that given any vector space X and any bilinear map g: V × W → X, there exists a unique map u, shown in the diagram with a dotted arrow, whose composition with ƒ equals g: u(v ⊗ w) = g(v, w).[52] This is called the universal property of the tensor product, an instance of the method, much used in advanced abstract algebra, to indirectly define objects by specifying maps from or to this object.
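In coordinates, a simple tensor v ⊗ w of two coordinate vectors can be represented by their outer product, and the rules above become identities of matrices; a short NumPy sketch (an illustrative addition with arbitrary vectors and scalar):

import numpy as np

v = np.array([1.0, 2.0])
v2 = np.array([0.5, -1.0])
w = np.array([3.0, 4.0, 5.0])
a = 2.0

# a*(v (x) w) = (a*v) (x) w = v (x) (a*w)
assert np.allclose(a * np.outer(v, w), np.outer(a * v, w))
assert np.allclose(np.outer(a * v, w), np.outer(v, a * w))
# (v1 + v2) (x) w = v1 (x) w + v2 (x) w
assert np.allclose(np.outer(v + v2, w), np.outer(v, w) + np.outer(v2, w))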

Vector spaces with additional structure
From the point of view of linear algebra, vector spaces are completely understood insofar as any vector space is characterized, up to isomorphism, by its dimension. However, vector spaces as such do not offer a framework to deal with the question, crucial to analysis, of whether a sequence of functions converges to another function. Likewise, linear algebra is not adapted to deal with infinite series, since the addition operation allows only finitely many terms to be added. Therefore, the needs of functional analysis require considering additional structures. Much the same way the axiomatic treatment of vector spaces reveals their essential algebraic features, studying vector spaces with additional data abstractly turns out to be advantageous, too.

A first example of an additional datum is an order ≤, a token by which vectors can be compared.[53] For example, n-dimensional real space Rn can be ordered by comparing its vectors componentwise. Ordered vector spaces, for example Riesz spaces, are fundamental to Lebesgue integration, which relies on the ability to express a function as a difference of two positive functions

ƒ = ƒ+ − ƒ−,



where ƒ+ denotes the positive part of ƒ and ƒ− the negative part.[54]

Normed vector spaces and inner product spaces
"Measuring" vectors is done by specifying a norm, a datum which measures lengths of vectors, or by an inner product, which measures angles between vectors. Norms and inner products are denoted ||v|| and <v, w>, respectively. The datum of an inner product entails that lengths of vectors can be defined too, by defining the associated norm ||v|| = √<v, v>. Vector spaces endowed with such data are known as normed vector spaces and inner product spaces, respectively.[55]

Coordinate space Fn can be equipped with the standard dot product:

<x, y> = x · y = x1y1 + ... + xnyn.

In R2, this reflects the common notion of the angle between two vectors x and y, by the law of cosines:

x · y = cos(∠(x, y)) · |x| · |y|.

Because of this, two vectors satisfying <x, y> = 0 are called orthogonal. An important variant of the standard dot product is used in Minkowski space: R4 endowed with the Lorentz product

<x|y> = x1y1 + x2y2 + x3y3 − x4y4.[56]

In contrast to the standard dot product, it is not positive definite: <x|x> also takes negative values, for example <x|x> = −1 for x = (0, 0, 0, 1). Singling out the fourth coordinate, corresponding to time, as opposed to three space-dimensions, makes it useful for the mathematical treatment of special relativity.
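Both products are immediate to compute; the following NumPy sketch (an illustrative addition; the example vectors are arbitrary) exhibits the negative value of the Lorentz product and, for contrast, the angle recovered from the standard dot product:

import numpy as np

def lorentz(x, y):
    # <x|y> = x1*y1 + x2*y2 + x3*y3 - x4*y4, with the sign convention above
    return x[0]*y[0] + x[1]*y[1] + x[2]*y[2] - x[3]*y[3]

x = np.array([0.0, 0.0, 0.0, 1.0])
assert lorentz(x, x) == -1.0  # not positive definite

u, v = np.array([1.0, 0.0]), np.array([1.0, 1.0])
cos_angle = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
assert np.isclose(np.degrees(np.arccos(cos_angle)), 45.0)  # law of cosines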

Topological vector spaces
Convergence questions are treated by considering vector spaces V carrying a compatible topology, a structure that allows one to talk about elements being close to each other.[57] [58] Compatible here means that addition and scalar multiplication have to be continuous maps. Roughly, if x and y in V, and a in F vary by a bounded amount, then so do x + y and ax.[59] To make sense of specifying the amount a scalar changes, the field F also has to carry a topology in this context; a common choice is the reals or the complex numbers.

In such topological vector spaces one can consider series of vectors. The infinite sum

ƒ1 + ƒ2 + ƒ3 + ...

denotes the limit of the corresponding finite partial sums of the sequence (ƒi)i∈N of elements of V. For example, the ƒi could be (real or complex) functions belonging to some function space V, in which case the series is a function series. The mode of convergence of the series depends on the topology imposed on the function space. In such cases, pointwise convergence and uniform convergence are two prominent examples.



Unit "spheres" in R2 consist of plane vectors of norm 1. Depicted arethe unit spheres in different p-norms, for p = 1, 2, and ∞. The bigger

diamond depicts points of 1-norm equal to .

A way to ensure the existence of limits of certain infinite series is to restrict attention to spaces where any Cauchy sequence has a limit; such a vector space is called complete. Roughly, a vector space is complete provided that it contains all necessary limits. For example, the vector space of polynomials on the unit interval [0,1], equipped with the topology of uniform convergence, is not complete because any continuous function on [0,1] can be uniformly approximated by a sequence of polynomials, by the Weierstrass approximation theorem.[60] In contrast, the space of all continuous functions on [0,1] with the same topology is complete.[61] A norm gives rise to a topology by defining that a sequence of vectors vn converges to v if and only if

||vn − v|| → 0 as n → ∞.

Banach and Hilbert spaces are complete topological vector spaces whose topologies are given, respectively, by a norm and an inner product. Their study, a key piece of functional analysis, focusses on infinite-dimensional vector spaces, since all norms on finite-dimensional topological vector spaces give rise to the same notion of convergence.[62] The image at the right shows the equivalence of the 1-norm and ∞-norm on R2: as the unit "balls" enclose each other, a sequence converges to zero in one norm if and only if it so does in the other norm. In the infinite-dimensional case, however, there will generally be inequivalent topologies, which makes the study of topological vector spaces richer than that of vector spaces without additional data.

From a conceptual point of view, all notions related to topological vector spaces should match the topology. For example, instead of considering all linear maps (also called functionals) V → W, maps between topological vector spaces are required to be continuous.[63] In particular, the (topological) dual space V∗ consists of continuous functionals V → R (or C). The fundamental Hahn–Banach theorem is concerned with separating subspaces of appropriate topological vector spaces by continuous functionals.[64]

Banach spaces

Banach spaces, introduced by Stefan Banach, are complete normed vector spaces.[65] A first example is the vector space ℓp consisting of infinite vectors with real entries x = (x1, x2, ...) whose p-norm (1 ≤ p ≤ ∞), given by

||x||p = (Σi |xi|^p)^(1/p) for p < ∞ and ||x||∞ = supi |xi|,

is finite. The topologies on the infinite-dimensional space ℓp are inequivalent for different p. For example, the sequence of vectors xn = (2⁻ⁿ, 2⁻ⁿ, ..., 2⁻ⁿ, 0, 0, ...), whose first 2ⁿ components are 2⁻ⁿ and whose following ones are 0, converges to the zero vector for p = ∞, but does not for p = 1:

||xn||∞ = 2⁻ⁿ → 0, but ||xn||1 = 2ⁿ · 2⁻ⁿ = 1 for all n.
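Truncating the sequences to finite arrays, the contrast is visible numerically; a small Python sketch (an illustrative addition, printing the two norms for the first few n):

import numpy as np

for n in range(1, 6):
    x_n = np.full(2 ** n, 2.0 ** (-n))  # first 2^n entries are 2^(-n); the rest would be 0
    norm_inf = np.max(np.abs(x_n))      # sup-norm: 2^(-n), tends to 0
    norm_1 = np.sum(np.abs(x_n))        # 1-norm: 2^n * 2^(-n) = 1 for every n
    print(n, norm_inf, norm_1)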



More generally than sequences of real numbers, functions ƒ: Ω → R are endowed with a norm that replaces the above sum by the Lebesgue integral

||ƒ||p = (∫Ω |ƒ(x)|^p dx)^(1/p).

The space of integrable functions on a given domain Ω (for example an interval) satisfying ||ƒ||p < ∞, and equipped with this norm, is called a Lebesgue space, denoted Lp(Ω).[66] These spaces are complete.[67] (If one uses the Riemann integral instead, the space is not complete, which may be seen as a justification for Lebesgue's integration theory.[68]) Concretely this means that for any sequence of Lebesgue-integrable functions ƒ1, ƒ2, ... with ||ƒn||p < ∞, satisfying the condition

limk,n→∞ ∫Ω |ƒk(x) − ƒn(x)|^p dx = 0,

there exists a function ƒ(x) belonging to the vector space Lp(Ω) such that

limn→∞ ∫Ω |ƒ(x) − ƒn(x)|^p dx = 0.

Imposing boundedness conditions not only on the function, but also on its derivatives, leads to Sobolev spaces.[69]

Hilbert spaces

The succeeding snapshots show summation of 1 to 5 terms in approximating a periodic function (blue) by a finite sum of sine functions (red).

Complete inner product spaces are known as Hilbert spaces, in honor of David Hilbert.[70] A key case is the Hilbert space L2(Ω), with inner product given by

<ƒ, g> = ∫Ω ƒ(x) g(x)* dx,

where g(x)* denotes the complex conjugate of g(x).[71] [72]

By definition, in a Hilbert space any Cauchy sequence converges to a limit. Conversely, finding a sequence of functions ƒn with desirable properties that approximates a given limit function is equally crucial. Early analysis, in the guise of the Taylor approximation, established an approximation of differentiable functions ƒ by polynomials.[73] By the Stone–Weierstrass theorem, every continuous function on [a, b] can be approximated as closely as desired by a polynomial.[74] A similar approximation technique by trigonometric functions is commonly called Fourier expansion, and is much applied in engineering, see below. More generally, and more conceptually, the theorem yields a simple description of what "basic functions", or, in abstract Hilbert spaces, what basic vectors suffice to generate a Hilbert space H, in the sense that the closure of their span (i.e., finite linear combinations and limits of those) is the whole space. Such a set of functions is called a basis of H; its cardinality is known as the Hilbert dimension.[75] Not only does the theorem exhibit suitable basis functions as sufficient for approximation purposes; together with the Gram-Schmidt process it also allows one to construct a basis of orthogonal vectors.[76] Such orthogonal bases are the Hilbert space generalization of the coordinate axes in finite-dimensional Euclidean space.

The solutions to various differential equations can be interpreted in terms of Hilbert spaces. For example, a great many fields in physics and engineering lead to such equations, and frequently solutions with particular physical properties are used as basis functions, often orthogonal.[77] As an example from physics, the time-dependent Schrödinger equation in quantum mechanics describes the change of physical properties in time, by means of a partial differential equation whose solutions are called wavefunctions.[78] Definite values for physical properties such as energy or momentum correspond to eigenvalues of a certain (linear) differential operator, and the associated wavefunctions are called eigenstates. The spectral theorem decomposes a linear compact operator acting on functions in terms of these eigenfunctions and their eigenvalues.[79]

Algebras over fields

A hyperbola, given by the equation x · y = 1. The coordinate ring of functions on this hyperbola is given by R[x, y] / (x · y − 1), an infinite-dimensional vector space over R.

General vector spaces do not possess a multiplication operation. A vector space equipped with an additional bilinear operator defining the multiplication of two vectors is an algebra over a field.[80] Many algebras stem from functions on some geometrical object: since functions with values in a field can be multiplied, these entities form algebras. The Stone–Weierstrass theorem mentioned above, for example, relies on Banach algebras, which are both Banach spaces and algebras.

Commutative algebra makes great use of rings of polynomials in one or several variables, introduced above. Their multiplication is both commutative and associative. These rings and their quotients form the basis of algebraic geometry, because they are rings of functions of algebraic geometric objects.[81]

Other crucial examples are Lie algebras, which are neither commutative nor associative, but the failure to be so is limited by the constraints ([x, y] denotes the product of x and y):

• [x, y] = −[y, x] (anticommutativity), and
• [x, [y, z]] + [y, [z, x]] + [z, [x, y]] = 0 (Jacobi identity).[82]

Examples include the vector space of n-by-n matrices, with [x, y] = xy − yx, the commutator of two matrices, and R3, endowed with the cross product.
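For the matrix example, anticommutativity and the Jacobi identity can be verified numerically; a brief NumPy sketch (an illustrative addition, using randomly generated 3-by-3 matrices with a fixed seed):

import numpy as np

def bracket(x, y):
    # Commutator of two matrices.
    return x @ y - y @ x

rng = np.random.default_rng(0)
x, y, z = (rng.standard_normal((3, 3)) for _ in range(3))

assert np.allclose(bracket(x, y), -bracket(y, x))  # anticommutativity
jacobi = bracket(x, bracket(y, z)) + bracket(y, bracket(z, x)) + bracket(z, bracket(x, y))
assert np.allclose(jacobi, 0.0)                    # Jacobi identity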

The tensor algebra T(V) is a formal way of adding products to any vector space V to obtain an algebra.[83] As a vector space, it is spanned by symbols, called simple tensors

v1 ⊗ v2 ⊗ ... ⊗ vn, where the degree n varies.

The multiplication is given by concatenating such symbols, imposing the distributive law under addition, and requiring that scalar multiplication commute with the tensor product ⊗, much the same way as with the tensor product of two vector spaces introduced above. In general, there are no relations between v1 ⊗ v2 and v2 ⊗ v1. Forcing two such elements to be equal leads to the symmetric algebra, whereas forcing v1 ⊗ v2 = − v2 ⊗ v1 yields the exterior algebra.[84] When the field F is explicitly stated, a common term used is F-algebra.

Applications
Vector spaces have manifold applications as they occur in many circumstances, namely wherever functions with values in some field are involved. They provide a framework to deal with analytical and geometrical problems, or are used in the Fourier transform. This list is not exhaustive: many more applications exist, for example in optimization. The minimax theorem of game theory, stating the existence of a unique payoff when all players play optimally, can be formulated and proven using vector space methods.[85] Representation theory fruitfully transfers the good understanding of linear algebra and vector spaces to other mathematical domains such as group theory.[86]

Distributions
A distribution (or generalized function) is a linear map assigning a number to each "test" function, typically a smooth function with compact support, in a continuous way: in the above terminology, the space of distributions is the (continuous) dual of the test function space.[87] The latter space is endowed with a topology that takes into account not only ƒ itself, but also all its higher derivatives. A standard example is the result of integrating a test function ƒ over some domain Ω:

ƒ ↦ ∫Ω ƒ(x) dx.

When Ω = {p}, the set consisting of a single point, this reduces to the Dirac distribution, denoted by δ, which associates to a test function ƒ its value at the point p: δ(ƒ) = ƒ(p). Distributions are a powerful instrument to solve differential equations. Since all standard analytic notions such as derivatives are linear, they extend naturally to the space of distributions. Therefore the equation in question can be transferred to a distribution space, which is bigger than the underlying function space, so that more flexible methods are available for solving the equation. For example, Green's functions and fundamental solutions are usually distributions rather than proper functions, and can then be used to find solutions of the equation with prescribed boundary conditions. The solution found can then in some cases be proven to be actually a true function, and a solution to the original equation (e.g., using the Lax–Milgram theorem, a consequence of the Riesz representation theorem).[88]

Fourier analysis

The heat equation describes the dissipation of physical properties over time, such as the decline of the temperature of a hot body placed in a colder environment (yellow depicts hotter regions than red).

Resolving a periodic function into a sum of trigonometric functions forms a Fourier series, a technique much used in physics and engineering.[89] [90] The underlying vector space is usually the Hilbert space L2(0, 2π), for which the functions sin mx and cos mx (m an integer) form an orthogonal basis.[91] The Fourier expansion of an L2 function ƒ is

ƒ(x) = a0/2 + Σm=1..∞ (am cos(mx) + bm sin(mx)).

The coefficients am and bm are called Fourier coefficients of ƒ, and are calculated by the formulas[92]

am = (1/π) ∫[0,2π] ƒ(t) cos(mt) dt,   bm = (1/π) ∫[0,2π] ƒ(t) sin(mt) dt.

In physical terms, the function is represented as a superposition of sine waves, and the coefficients give information about the function's frequency spectrum.[93] A complex-number form of Fourier series is also commonly used.[92]
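The coefficient formulas can be evaluated numerically; the following sketch (an illustrative addition using SciPy's quad integrator; the sample function ƒ(t) = t is an arbitrary choice) computes the first few coefficients:

import numpy as np
from scipy.integrate import quad

f = lambda t: t  # sample function on (0, 2*pi)
a = [quad(lambda t: f(t) * np.cos(m * t), 0, 2 * np.pi)[0] / np.pi for m in range(1, 4)]
b = [quad(lambda t: f(t) * np.sin(m * t), 0, 2 * np.pi)[0] / np.pi for m in range(1, 4)]
print(np.round(a, 10))  # ~[0, 0, 0]
print(np.round(b, 10))  # ~[-2, -1, -2/3]; indeed f(t) = pi - 2*sum(sin(mt)/m) on (0, 2*pi)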

The concrete formulae above are consequences of a more general mathematical duality called Pontryagin duality.[94]

Applied to the group R, it yields the classical Fourier transform; an application in physics are reciprocal lattices,where the underlying group is a finite-dimensional real vector space endowed with the additional datum of a latticeencoding positions of atoms in crystals.[95]



Fourier series are used to solve boundary value problems in partial differential equations.[96] In 1822, Fourier first used this technique to solve the heat equation.[97] A discrete version of the Fourier series can be used in sampling applications where the function value is known only at a finite number of equally spaced points. In this case the Fourier series is finite and its value is equal to the sampled values at all points.[98] The set of coefficients is known as the discrete Fourier transform (DFT) of the given sample sequence. The DFT is one of the key tools of digital signal processing, a field whose applications include radar, speech encoding, and image compression.[99] The JPEG image format is an application of the closely related discrete cosine transform.[100]

The fast Fourier transform is an algorithm for rapidly computing the discrete Fourier transform.[101] It is used not only for calculating the Fourier coefficients but, using the convolution theorem, also for computing the convolution of two finite sequences.[102] These in turn are applied in digital filters[103] and as a rapid multiplication algorithm for polynomials and large integers (Schönhage-Strassen algorithm).[104] [105]
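A minimal NumPy sketch of this use of the convolution theorem (an illustrative addition; the two small polynomials are arbitrary examples):

import numpy as np

# Multiply (1 + 2x) by (3 + 4x) by convolving coefficient sequences via the FFT.
p, q = [1, 2], [3, 4]
n = len(p) + len(q) - 1          # length bound for the product's coefficients
product = np.fft.ifft(np.fft.fft(p, n) * np.fft.fft(q, n)).real
print(np.round(product))         # [3. 10. 8.], i.e. 3 + 10x + 8x^2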

Differential geometry

The tangent space to the 2-sphere at some point is the infinite plane touching the sphere at this point.

The tangent plane to a surface at a point is naturally a vector space whose origin is identified with the point of contact. The tangent plane is the best linear approximation, or linearization, of a surface at a point.[106] Even in a three-dimensional Euclidean space, there is typically no natural way to prescribe a basis of the tangent plane, and so it is conceived of as an abstract vector space rather than a real coordinate space. The tangent space is the generalization to higher-dimensional differentiable manifolds.[107]

Riemannian manifolds are manifolds whose tangent spaces are endowed with a suitable inner product.[108] Derived therefrom, the Riemann curvature tensor encodes all curvatures of a manifold in one object, which finds applications in general relativity, for example, where the Einstein curvature tensor describes the matter and energy content of space-time.[109] [110] The tangent space of a Lie group can be given naturally the structure of a Lie algebra and can be used to classify compact Lie groups.[111]



Generalizations

Vector bundles

A Möbius strip. Locally, it looks like U × R.

A vector bundle is a family of vector spaces parametrized continuously by a topological space X.[107] More precisely, a vector bundle over X is a topological space E equipped with a continuous map

π : E → X

such that for every x in X, the fiber π⁻¹(x) is a vector space. The case dim V = 1 is called a line bundle. For any vector space V, the projection X × V → X makes the product X × V into a "trivial" vector bundle. Vector bundles over X are required to be locally a product of X and some (fixed) vector space V: for every x in X, there is a neighborhood U of x such that the restriction of π to π⁻¹(U) is isomorphic[112] to the trivial bundle U × V → U. Despite their locally trivial character, vector bundles may (depending on the shape of the underlying space X) be "twisted" in the large, i.e., the bundle need not be (globally isomorphic to) the trivial bundle X × V. For example, the Möbius strip can be seen as a line bundle over the circle S1 (by identifying open intervals with the real line). It is, however, different from the cylinder S1 × R, because the latter is orientable whereas the former is not.[113]

Properties of certain vector bundles provide information about the underlying topological space. For example, the tangent bundle consists of the collection of tangent spaces parametrized by the points of a differentiable manifold. The tangent bundle of the circle S1 is globally isomorphic to S1 × R, since there is a global nonzero vector field on S1.[114] In contrast, by the hairy ball theorem, there is no (tangent) vector field on the 2-sphere S2 which is everywhere nonzero.[115] K-theory studies the isomorphism classes of all vector bundles over some topological space.[116] In addition to deepening topological and geometrical insight, it has purely algebraic consequences, such as the classification of finite-dimensional real division algebras: R, C, the quaternions H and the octonions.

The cotangent bundle of a differentiable manifold consists, at every point of the manifold, of the dual of the tangent space, the cotangent space. Sections of that bundle are known as differential forms. They are used to do integration on manifolds.

Modules
Modules are to rings what vector spaces are to fields. The very same axioms, applied to a ring R instead of a field F, yield modules.[117] The theory of modules, compared to vector spaces, is complicated by the presence of ring elements that do not have multiplicative inverses. For example, modules need not have bases, as the Z-module (i.e., abelian group) Z/2Z shows; those modules that do (including all vector spaces) are known as free modules. Nevertheless, a vector space can be compactly defined as a module over a ring which is a field, with the elements being called vectors. The algebro-geometric interpretation of commutative rings via their spectrum allows the development of concepts such as locally free modules, the algebraic counterpart to vector bundles.



Affine and projective spaces

An affine plane (light blue) in R3. It is a two-dimensional subspace shifted by a vector x (red).

Roughly, affine spaces are vector spaces whose origin is not specified.[118] More precisely, an affine space is a set with a free transitive vector space action. In particular, a vector space is an affine space over itself, by the map

V × V → V, (v, a) ↦ a + v.

If W is a vector space, then an affine subspace is a subset of W obtained by translating a linear subspace V by a fixed vector x ∈ W; this space is denoted by x + V (it is a coset of V in W) and consists of all vectors of the form x + v for v ∈ V. An important example is the space of solutions of a system of inhomogeneous linear equations

Ax = b,

generalizing the homogeneous case b = 0 above.[119] The space of solutions is the affine subspace x + V where x is a particular solution of the equation, and V is the space of solutions of the homogeneous equation (the nullspace of A).
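The affine structure of the solution set can be checked numerically; a brief NumPy sketch (an illustrative addition, reusing the matrix A from the linear-equations example with an arbitrarily chosen right-hand side b):

import numpy as np

A = np.array([[1.0, 3.0, 1.0],
              [4.0, 2.0, 2.0]])
b = np.array([5.0, 8.0])

# A particular solution x (least squares returns an exact one, since A has full row rank).
x = np.linalg.lstsq(A, b, rcond=None)[0]
assert np.allclose(A @ x, b)

# Translating x by any solution v of Av = 0 gives another solution: the coset x + V.
_, s, Vt = np.linalg.svd(A)
v = Vt[-1]                       # spans the null space of A
assert np.allclose(A @ (x + 2.0 * v), b)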

The set of one-dimensional subspaces of a fixed finite-dimensional vector space V is known as projective space; it may be used to formalize the idea of parallel lines intersecting at infinity.[120] Grassmannians and flag manifolds generalize this by parametrizing linear subspaces of fixed dimension k and flags of subspaces, respectively.

Convex analysis

The n-simplex is the standard convex set that maps to every polytope, and is the intersection of the standard (n + 1) affine hyperplane (standard affine space) and the standard (n + 1) orthant (standard cone).

Over an ordered field, notably the real numbers, there are the added notions of convex analysis: most basically a cone, which allows only non-negative linear combinations, and a convex set, which allows only non-negative linear combinations that sum to 1. A convex set can be seen as the combination of the axioms for an affine space and a cone, which is reflected in the standard space for it, the n-simplex, being the intersection of the affine hyperplane and orthant. Such spaces are particularly used in linear programming.

In the language of universal algebra, a vector space is an algebra over the universal vector space K∞ of finite sequences of coefficients, corresponding to finite sums of vectors, while an affine space is an algebra over the universal affine hyperplane in this space (of finite sequences summing to 1), a cone is an algebra over the universal orthant, and a convex set is an algebra over the universal simplex. This geometrizes the axioms in terms of "sums with (possible) restrictions on the coordinates".

Many concepts in linear algebra have analogs in convex analysis, including basic ones such as basis and span (such as in the form of convex hull), and notably including duality (in the form of dual polyhedron, dual cone, dual problem). Unlike linear algebra, however, where every vector space or affine space is isomorphic to the standard spaces, not every convex set or cone is isomorphic to the simplex or orthant. Rather, there is always a map from the simplex onto a polytope, given by generalized barycentric coordinates, and a dual map from a polytope into the orthant (of dimension equal to the number of faces) given by slack variables, but these are rarely isomorphisms: most polytopes are not a simplex or an orthant.

Notes
[1] It is also common, especially in physics, to denote vectors with an arrow on top.
[2] Roman 2005, ch. 1, p. 27
[3] This axiom is not asserting the associativity of an operation, since there are two operations in question, scalar multiplication: bv; and field multiplication: ab.
[4] Some authors (such as Brown 1991) restrict attention to the fields R or C, but most of the theory is unchanged over an arbitrary field.
[5] van der Waerden 1993, Ch. 19
[6] Bourbaki 1998, Section II.1.1. Bourbaki calls the group homomorphisms ƒ(a) homotheties.
[7] Bourbaki 1969, ch. "Algèbre linéaire et algèbre multilinéaire", pp. 78–91
[8] Bolzano 1804
[9] Möbius 1827
[10] Hamilton 1853
[11] Grassmann 2000
[12] Peano 1888, ch. IX
[13] Banach 1922
[14] Dorier 1995, Moore 1995
[15] Lang 1987, ch. I.1
[16] e.g. Lang 1993, ch. XII.3, p. 335
[17] Lang 1987, ch. IX.1
[18] Lang 1987, ch. VI.3
[19] Lang 2002, ch. V.1
[20] Lang 1987, ch. II.2, pp. 47–48
[21] Roman 2005, Theorem 1.9, p. 43
[22] Blass 1984
[23] Halpern 1966, pp. 670–673
[24] Artin 1991, Theorem 3.3.13
[25] The indicator functions of intervals (of which there are infinitely many) are linearly independent, for example.
[26] Braun 1993, Th. 3.4.5, p. 291
[27] Stewart 1975, Proposition 4.3, p. 52
[28] Stewart 1975, Theorem 6.5, p. 74
[29] Roman 2005, ch. 2, p. 45
[30] Lang 1987, ch. IV.4, Corollary, p. 106
[31] Lang 1987, Example IV.2.6
[32] Lang 1987, ch. VI.6
[33] Halmos 1974, p. 28, Ex. 9
[34] Lang 1987, Theorem IV.2.1, p. 95
[35] Roman 2005, Th. 2.5 and 2.6, p. 49
[36] Lang 1987, ch. V.1
[37] Lang 1987, ch. V.3, Corollary, p. 106
[38] Lang 1987, Theorem VII.9.8, p. 198
[39] The nomenclature derives from German "eigen", which means own or proper.
[40] Roman 2005, ch. 8, pp. 135–156
[41] Lang 1987, ch. IX.4
[42] Roman 2005, ch. 8, p. 140. See also Jordan–Chevalley decomposition.
[43] Roman 2005, ch. 1, p. 29
[44] Roman 2005, ch. 1, p. 35
[45] Roman 2005, ch. 3, p. 64
[46] Some authors (such as Roman 2005) choose to start with this equivalence relation and derive the concrete shape of V/W from this.
[47] Lang 1987, ch. IV.3
[48] Roman 2005, ch. 2, p. 48
[49] Mac Lane 1998
[50] Roman 2005, ch. 1, pp. 31–32
[51] Lang 2002, ch. XVI.1
[52] Roman 2005, Th. 14.3. See also Yoneda lemma.
[53] Schaefer & Wolff 1999, pp. 204–205
[54] Bourbaki 2004, ch. 2, p. 48



[55] Roman 2005, ch. 9
[56] Naber 2003, ch. 1.2
[57] Treves 1967
[58] Bourbaki 1987
[59] This requirement implies that the topology gives rise to a uniform structure, Bourbaki 1989, ch. II
[60] Kreyszig 1989, §4.11-5
[61] Kreyszig 1989, §1.5-5
[62] Choquet 1966, Proposition III.7.2
[63] Treves 1967, pp. 34–36
[64] Lang 1983, Cor. 4.1.2, p. 69
[65] Treves 1967, ch. 11
[66] The triangle inequality for |−|p is provided by the Minkowski inequality. For technical reasons, in the context of functions one has to identify functions that agree almost everywhere to get a norm, and not only a seminorm.
[67] Treves 1967, Theorem 11.2, p. 102
[68] "Many functions in L2 of Lebesgue measure, being unbounded, cannot be integrated with the classical Riemann integral. So spaces of Riemann integrable functions would not be complete in the L2 norm, and the orthogonal decomposition would not apply to them. This shows one of the advantages of Lebesgue integration.", Dudley 1989, sect. 5.3, p. 125
[69] Evans 1998, ch. 5
[70] Treves 1967, ch. 12
[71] Dennery 1996, p. 190
[72] For p ≠ 2, Lp(Ω) is not a Hilbert space.
[73] Lang 1993, Th. XIII.6, p. 349
[74] Lang 1993, Th. III.1.1
[75] A basis of a Hilbert space is not the same thing as a basis in the sense of linear algebra above. For distinction, the latter is then called a Hamel basis.
[76] Choquet 1966, Lemma III.16.11
[77] Kreyszig 1999, Chapter 11
[78] Griffiths 1995, Chapter 1
[79] Lang 1993, ch. XVII.3
[80] Lang 2002, ch. III.1, p. 121
[81] Eisenbud 1995, ch. 1.6
[82] Varadarajan 1974
[83] Lang 2002, ch. XVI.7
[84] Lang 2002, ch. XVI.8
[85] Luenberger 1997, Section 7.13
[86] See representation theory and group representation.
[87] Lang 1993, Ch. XI.1
[88] Evans 1998, Th. 6.2.1
[89] Although the Fourier series is periodic, the technique can be applied to any L2 function on an interval by considering the function to be continued periodically outside the interval. See Kreyszig 1988, p. 601
[90] Folland 1992, p. 349 ff
[91] Gasquet & Witomski 1999, p. 150
[92] Gasquet & Witomski 1999, §4.5
[93] Gasquet & Witomski 1999, p. 57
[94] Loomis 1953, Ch. VII
[95] Ashcroft & Mermin 1976, Ch. 5
[96] Kreyszig 1988, p. 667
[97] Fourier 1822
[98] Gasquet & Witomski 1999, p. 67
[99] Ifeachor & Jervis 2002, pp. 3–4, 11
[100] Wallace Feb 1992
[101] Ifeachor & Jervis 2002, p. 132
[102] Gasquet & Witomski 1999, §10.2
[103] Ifeachor & Jervis 2002, pp. 307–310
[104] Gasquet & Witomski 1999, §10.3
[105] Schönhage & Strassen 1971
[106] That is to say (BSE-3 2001), the plane passing through the point of contact P such that the distance from a point P1 on the surface to the plane is infinitesimally small compared to the distance from P1 to P in the limit as P1 approaches P along the surface.
[107] Spivak 1999, ch. 3


[108] Jost 2005. See also Lorentzian manifold.
[109] Misner, Thorne & Wheeler 1973, ch. 1.8.7, p. 222 and ch. 2.13.5, p. 325
[110] Jost 2005, ch. 3.1
[111] Varadarajan 1974, ch. 4.3, Theorem 4.3.27
[112] That is, there is a homeomorphism from π−1(U) to V × U which restricts to linear isomorphisms between fibers.
[113] Kreyszig 1991, §34, p. 108
[114] A line bundle, such as the tangent bundle of S1, is trivial if and only if there is a section that vanishes nowhere, see Husemoller 1994, Corollary 8.3. The sections of the tangent bundle are just vector fields.
[115] Eisenberg & Guy 1979
[116] Atiyah 1989
[117] Artin 1991, ch. 12
[118] Meyer 2000, Example 5.13.5, p. 436
[119] Meyer 2000, Exercise 5.13.15–17, p. 442
[120] Coxeter 1987


References

Linear algebra

• Artin, Michael (1991), Algebra, Prentice Hall, ISBN 978-0-89871-510-1
• Blass, Andreas (1984), "Existence of bases implies the axiom of choice", Axiomatic set theory (Boulder, Colorado, 1983), Contemporary Mathematics, 31, Providence, R.I.: American Mathematical Society, pp. 31–33, MR763890
• Brown, William A. (1991), Matrices and vector spaces, New York: M. Dekker, ISBN 978-0-8247-8419-5
• Lang, Serge (1987), Linear algebra, Berlin, New York: Springer-Verlag, ISBN 978-0-387-96412-6
• Lang, Serge (2002), Algebra, Graduate Texts in Mathematics, 211 (Revised third ed.), New York: Springer-Verlag, ISBN 978-0-387-95385-4, MR1878556
• Meyer, Carl D. (2000), Matrix Analysis and Applied Linear Algebra (http://www.matrixanalysis.com/), SIAM, ISBN 978-0-89871-454-8
• Roman, Steven (2005), Advanced Linear Algebra, Graduate Texts in Mathematics, 135 (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-24766-3
• Spindler, Karlheinz (1993), Abstract Algebra with Applications: Volume 1: Vector spaces and groups, CRC, ISBN 978-0-82479-144-5
• (German) van der Waerden, Bartel Leendert (1993), Algebra (9th ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-56799-8

Analysis

• Bourbaki, Nicolas (1987), Topological vector spaces, Elements of mathematics, Berlin, New York: Springer-Verlag, ISBN 978-3-540-13627-9
• Bourbaki, Nicolas (2004), Integration I, Berlin, New York: Springer-Verlag, ISBN 978-3-540-41129-1
• Braun, Martin (1993), Differential equations and their applications: an introduction to applied mathematics, Berlin, New York: Springer-Verlag, ISBN 978-0-387-97894-9
• BSE-3 (2001), "Tangent plane" (http://eom.springer.de/T/t092180.htm), in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104
• Choquet, Gustave (1966), Topology, Boston, MA: Academic Press
• Dennery, Philippe; Krzywicki, Andre (1996), Mathematics for Physicists, Courier Dover Publications, ISBN 978-0-486-69193-0
• Dudley, Richard M. (1989), Real analysis and probability, The Wadsworth & Brooks/Cole Mathematics Series, Pacific Grove, CA: Wadsworth & Brooks/Cole Advanced Books & Software, ISBN 978-0-534-10050-6
• Dunham, William (2005), The Calculus Gallery, Princeton University Press, ISBN 978-0-691-09565-3
• Evans, Lawrence C. (1998), Partial differential equations, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-0772-9
• Folland, Gerald B. (1992), Fourier Analysis and Its Applications, Brooks-Cole, ISBN 978-0-534-17094-3
• Gasquet, Claude; Witomski, Patrick (1999), Fourier Analysis and Applications: Filtering, Numerical Computation, Wavelets, Texts in Applied Mathematics, New York: Springer-Verlag, ISBN 0-387-98485-2
• Ifeachor, Emmanuel C.; Jervis, Barrie W. (2001), Digital Signal Processing: A Practical Approach (2nd ed.), Harlow, Essex, England: Prentice-Hall (published 2002), ISBN 0-201-59619-9
• Krantz, Steven G. (1999), A Panorama of Harmonic Analysis, Carus Mathematical Monographs, Washington, DC: Mathematical Association of America, ISBN 0-88385-031-1
• Kreyszig, Erwin (1988), Advanced Engineering Mathematics (6th ed.), New York: John Wiley & Sons, ISBN 0-471-85824-2
• Kreyszig, Erwin (1989), Introductory functional analysis with applications, Wiley Classics Library, New York: John Wiley & Sons, ISBN 978-0-471-50459-7, MR992618
• Lang, Serge (1983), Real analysis, Addison-Wesley, ISBN 978-0-201-14179-5
• Lang, Serge (1993), Real and functional analysis, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94001-4
• Loomis, Lynn H. (1953), An introduction to abstract harmonic analysis, Toronto-New York-London: D. Van Nostrand Company, Inc., pp. x+190
• Schaefer, Helmut H.; Wolff, M.P. (1999), Topological vector spaces (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98726-2
• Treves, François (1967), Topological vector spaces, distributions and kernels, Boston, MA: Academic Press

Historical references

• (French) Banach, Stefan (1922), "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales (On operations in abstract sets and their application to integral equations)" (http://matwbn.icm.edu.pl/ksiazki/fm/fm3/fm3120.pdf), Fundamenta Mathematicae 3, ISSN 0016-2736
• (German) Bolzano, Bernard (1804), Betrachtungen über einige Gegenstände der Elementargeometrie (Considerations of some aspects of elementary geometry) (http://dml.cz/handle/10338.dmlcz/400338)
• (French) Bourbaki, Nicolas (1969), Éléments d'histoire des mathématiques (Elements of history of mathematics), Paris: Hermann
• Dorier, Jean-Luc (1995), "A general outline of the genesis of vector space theory", Historia Mathematica 22 (3): 227–261, doi:10.1006/hmat.1995.1024, MR1347828
• (French) Fourier, Jean Baptiste Joseph (1822), Théorie analytique de la chaleur (http://books.google.com/?id=TDQJAAAAIAAJ), Chez Firmin Didot, père et fils
• (German) Grassmann, Hermann (1844), Die Lineale Ausdehnungslehre - Ein neuer Zweig der Mathematik (http://books.google.com/?id=bKgAAAAAMAAJ), O. Wigand; reprint: Kannenberg, L.C., ed. (2000), Extension Theory, translated by Lloyd C. Kannenberg, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2031-5
• Hamilton, William Rowan (1853), Lectures on Quaternions (http://historical.library.cornell.edu/cgi-bin/cul.math/docviewer?did=05230001&seq=9), Royal Irish Academy
• (German) Möbius, August Ferdinand (1827), Der Barycentrische Calcul: ein neues Hülfsmittel zur analytischen Behandlung der Geometrie (Barycentric calculus: a new utility for an analytic treatment of geometry) (http://mathdoc.emath.fr/cgi-bin/oeitem?id=OE_MOBIUS__1_1_0)
• Moore, Gregory H. (1995), "The axiomatization of linear algebra: 1875–1940", Historia Mathematica 22 (3): 262–303, doi:10.1006/hmat.1995.1025
• (Italian) Peano, Giuseppe (1888), Calcolo Geometrico secondo l'Ausdehnungslehre di H. Grassmann preceduto dalle Operazioni della Logica Deduttiva, Turin

Further references

• Ashcroft, Neil; Mermin, N. David (1976), Solid State Physics, Toronto: Thomson Learning, ISBN 978-0-03-083993-1
• Atiyah, Michael Francis (1989), K-theory, Advanced Book Classics (2nd ed.), Addison-Wesley, ISBN 978-0-201-09394-0, MR1043170
• Bourbaki, Nicolas (1998), Elements of Mathematics: Algebra I Chapters 1-3, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64243-5
• Bourbaki, Nicolas (1989), General Topology. Chapters 1-4, Berlin, New York: Springer-Verlag, ISBN 978-3-540-64241-1
• Coxeter, Harold Scott MacDonald (1987), Projective Geometry (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-96532-1
• Eisenberg, Murray; Guy, Robert (1979), "A proof of the hairy ball theorem", The American Mathematical Monthly (Mathematical Association of America) 86 (7): 572–574, doi:10.2307/2320587, JSTOR 2320587
• Eisenbud, David (1995), Commutative algebra, Graduate Texts in Mathematics, 150, Berlin, New York: Springer-Verlag, ISBN 978-0-387-94268-1; 978-0-387-94269-8, MR1322960
• Goldrei, Derek (1996), Classic Set Theory: A guided independent study (1st ed.), London: Chapman and Hall, ISBN 0-412-60610-0
• Griffiths, David J. (1995), Introduction to Quantum Mechanics, Upper Saddle River, NJ: Prentice Hall, ISBN 0-13-124405-1
• Halmos, Paul R. (1974), Finite-dimensional vector spaces, Berlin, New York: Springer-Verlag, ISBN 978-0-387-90093-3
• Halpern, James D. (Jun 1966), "Bases in Vector Spaces and the Axiom of Choice", Proceedings of the American Mathematical Society (American Mathematical Society) 17 (3): 670–673, doi:10.2307/2035388, JSTOR 2035388
• Husemoller, Dale (1994), Fibre Bundles (3rd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-94087-8
• Jost, Jürgen (2005), Riemannian Geometry and Geometric Analysis (4th ed.), Berlin, New York: Springer-Verlag, ISBN 978-3-540-25907-7
• Kreyszig, Erwin (1991), Differential geometry, New York: Dover Publications, pp. xiv+352, ISBN 978-0-486-66721-8
• Kreyszig, Erwin (1999), Advanced Engineering Mathematics (8th ed.), New York: John Wiley & Sons, ISBN 0-471-15496-2
• Luenberger, David (1997), Optimization by vector space methods, New York: John Wiley & Sons, ISBN 978-0-471-18117-0
• Mac Lane, Saunders (1998), Categories for the Working Mathematician (2nd ed.), Berlin, New York: Springer-Verlag, ISBN 978-0-387-98403-2
• Misner, Charles W.; Thorne, Kip; Wheeler, John Archibald (1973), Gravitation, W. H. Freeman, ISBN 978-0-7167-0344-0
• Naber, Gregory L. (2003), The geometry of Minkowski spacetime, New York: Dover Publications, ISBN 978-0-486-43235-9, MR2044239
• (German) Schönhage, A.; Strassen, Volker (1971), "Schnelle Multiplikation großer Zahlen (Fast multiplication of big numbers)" (http://www.springerlink.com/content/y251407745475773/fulltext.pdf), Computing 7: 281–292, ISSN 0010-485X
• Spivak, Michael (1999), A Comprehensive Introduction to Differential Geometry (Volume Two), Houston, TX: Publish or Perish
• Stewart, Ian (1975), Galois Theory, Chapman and Hall Mathematics Series, London: Chapman and Hall, ISBN 0-412-10800-3
• Varadarajan, V. S. (1974), Lie groups, Lie algebras, and their representations, Prentice Hall, ISBN 978-0-13-535732-3
• Wallace, G.K. (Feb 1992), "The JPEG still picture compression standard", IEEE Transactions on Consumer Electronics 38 (1): xviii–xxxiv, ISSN 0098-3063
• Weibel, Charles A. (1994), An introduction to homological algebra, Cambridge Studies in Advanced Mathematics, 38, Cambridge University Press, ISBN 978-0-521-55987-4, OCLC 36131259, MR1269324

External links

• A lecture about fundamental concepts related to vector spaces, given at MIT (http://ocw.mit.edu/courses/mathematics/18-06-linear-algebra-spring-2010/video-lectures/lecture-9-independence-basis-and-dimension/)
• A graphical simulator for the concepts of span, linear dependency, base and dimension (http://code.google.com/p/esla/)


Matrix multiplication

In mathematics, matrix multiplication is a binary operation that takes a pair of matrices and produces another matrix.

Matrix product

Non-Technical Details

Each element of the result of a matrix multiplication is found by pairing one row of the first matrix with one column of the second matrix: multiply the first entry of that row by the first entry of that column, add the product of the second entry of the row and the second entry of the column, and so on, until the last entry of the row is multiplied by the last entry of the column and added to the sum. The total is the entry of the resultant matrix at that row and column.

Non-Technical Example

If $A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$ and $B = \begin{pmatrix} 5 & 6 \\ 7 & 8 \end{pmatrix}$, then $AB = \begin{pmatrix} 1\cdot 5 + 2\cdot 7 & 1\cdot 6 + 2\cdot 8 \\ 3\cdot 5 + 4\cdot 7 & 3\cdot 6 + 4\cdot 8 \end{pmatrix} = \begin{pmatrix} 19 & 22 \\ 43 & 50 \end{pmatrix}.$[1]
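The row-times-column procedure translates directly into code. The following is a minimal Python sketch (not from the original article; the name mat_mul is illustrative):

```python
def mat_mul(A, B):
    """Multiply two matrices, given as lists of rows, by the
    row-times-column rule described above."""
    m, p = len(A), len(A[0])
    assert len(B) == p, "columns of A must match rows of B"
    n = len(B[0])
    # Entry (i, j) is the sum over k of A[i][k] * B[k][j].
    return [[sum(A[i][k] * B[k][j] for k in range(p)) for j in range(n)]
            for i in range(m)]

# The 2x2 example above:
assert mat_mul([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```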

Technical Details

The matrix product is the most commonly used type of product of matrices. Matrices offer a concise way of representing linear transformations between vector spaces, and matrix multiplication corresponds to the composition of linear transformations. The matrix product of two matrices can be defined when their entries belong to the same ring, and hence can be added and multiplied, and, additionally, the number of columns of the first matrix matches the number of rows of the second matrix. The product of an m×p matrix A with a p×n matrix B is an m×n matrix denoted AB whose entries are

$(AB)_{ij} = \sum_{k=1}^{p} a_{ik} b_{kj},$

where 1 ≤ i ≤ m is the row index and 1 ≤ j ≤ n is the column index. This definition can be restated by postulating that the matrix product is left and right distributive and that the matrix units are multiplied according to the rule

$E_{ik} E_{lj} = \delta_{kl} E_{ij},$

where the first factor is the m×p matrix with 1 at the intersection of the ith row and the kth column and zeros elsewhere, and the second factor is the p×n matrix with 1 at the intersection of the lth row and the jth column and zeros elsewhere.


Application Example

A company sells cement, chalk and plaster in bags weighing 25, 10, and 5 kg respectively. Four construction firms, Arcen, Build, Construct and Demolish, buy these products from this company. The number of bags the clients buy in a specific year may be arranged in a 4×3 matrix A, with columns for the products and rows representing the clients. We see for instance that the entry of A in the third row and second column is 12, indicating that client Construct has bought 12 bags of chalk that year.

A bag of cement costs $12, a bag of chalk $9 and a bag of plaster $8. The 3×2 matrix B lists the price and the weight of one bag of each product, with a column for prices and a column for weights.

To find the total amount firm Arcen has spent that year, we multiply the numbers of bags Arcen bought by the corresponding prices and add up the results, in which we recognize the product of the first row of the matrix A (Arcen) with the first column of the matrix B (prices). The total weight of the products bought by Arcen is calculated in a similar manner, in which we now recognize the first row of the matrix A (Arcen) and the second column of the matrix B (weights). We can make similar calculations for the other clients; together these totals form the matrix AB, the matrix product of the matrices A and B.

Properties

In general, matrix multiplication is not commutative. More precisely, AB and BA need not be simultaneously defined; if they are, they may have different dimensions; and even if A and B are square matrices of the same order n, so that AB and BA are also square matrices of order n, then for n greater than or equal to 2 the product AB need not equal BA. For example,

$\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix},$

whereas

$\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}.$

However, if A and B are both diagonal square matrices of the same order, then AB = BA.

Matrix multiplication is associative:

$A(BC) = (AB)C.$

Matrix multiplication is distributive over matrix addition:

$A(B + C) = AB + AC, \qquad (A + B)C = AC + BC,$

provided that the expression on either side of each identity is defined. The matrix product is compatible with scalar multiplication:

$c(AB) = (cA)B \quad\text{and}\quad c(AB) = A(cB),$


where c is a scalar (for the second identity to hold, c must belong to the center of the ground ring; this condition is automatically satisfied if the ground ring is commutative, in particular for matrices over a field).

If A and B are both n×n matrices with entries in a field, then the determinant of their product is the product of their determinants:

$\det(AB) = \det(A)\det(B).$

In particular, the determinants of AB and BA coincide.

Let U, V, and W be vector spaces over the same field with chosen bases, let S: V → W and T: U → V be linear transformations, and let ST: U → W be their composition. Suppose that A, B, and C are the matrices of T, S, and ST with respect to the given bases. Then

$C = BA.$

Thus the matrix of the composition (or the product) of linear transformations is the product of their matrices with respect to the given bases.

Illustration

The product of two matrices A and B can be pictured as a grid in which each entry of the product corresponds to the intersection of a row of A and a column of B. The size of the output matrix is always the largest possible, i.e. for each row of A and for each column of B there is a corresponding intersection in the product matrix. The product matrix AB consists of all combinations of dot products of rows of A and columns of B.


The first coordinate in matrix notation denotes the row and the second the column; this order is used both in indexing and in giving the dimensions. The element at the intersection of row $i$ and column $j$ of the product matrix is the dot product (or scalar product) of row $i$ of the first matrix and column $j$ of the second matrix. This explains why the number of columns of the first matrix must match the number of rows of the second: otherwise the dot product is not defined.


Product of several matrices

The matrix product can be extended to the case of several matrices, provided that their dimensions match, and it is associative, i.e. the result does not depend on the way the factors are grouped together. If A, B, C, and D are, respectively, m×p, p×q, q×r, and r×n matrices, then there are 5 ways of grouping them without changing their order, and

$((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD))$

is an m×n matrix denoted ABCD.

Alternative descriptions

The Euclidean inner product and outer product are the simplest special cases of the matrix product. The inner product of two column vectors $\mathbf{x}$ and $\mathbf{y}$ is $\mathbf{x}^{\mathsf{T}}\mathbf{y}$, where T denotes the matrix transpose. More explicitly,

$\mathbf{x}^{\mathsf{T}}\mathbf{y} = \begin{pmatrix} x_1 & \cdots & x_n \end{pmatrix} \begin{pmatrix} y_1 \\ \vdots \\ y_n \end{pmatrix} = x_1 y_1 + \cdots + x_n y_n.$

The outer product is $\mathbf{x}\mathbf{y}^{\mathsf{T}}$, where

$\mathbf{x}\mathbf{y}^{\mathsf{T}} = \begin{pmatrix} x_1 y_1 & \cdots & x_1 y_n \\ \vdots & \ddots & \vdots \\ x_n y_1 & \cdots & x_n y_n \end{pmatrix}.$

Matrix multiplication can be viewed in terms of these two operations by considering the effect of the matrix product on block matrices. Suppose that the first factor, A, is decomposed into its rows, which are row vectors $\mathbf{a}_1, \ldots, \mathbf{a}_m$, and the second factor, B, is decomposed into its columns, which are column vectors $\mathbf{b}_1, \ldots, \mathbf{b}_n$. Then

$AB = \begin{pmatrix} \mathbf{a}_1 \\ \vdots \\ \mathbf{a}_m \end{pmatrix} \begin{pmatrix} \mathbf{b}_1 & \cdots & \mathbf{b}_n \end{pmatrix} = \begin{pmatrix} \mathbf{a}_1 \mathbf{b}_1 & \cdots & \mathbf{a}_1 \mathbf{b}_n \\ \vdots & \ddots & \vdots \\ \mathbf{a}_m \mathbf{b}_1 & \cdots & \mathbf{a}_m \mathbf{b}_n \end{pmatrix},$

which is the method described in the introduction: each entry is the inner product of a row of A with a column of B.


This is an outer product in which the product inside is replaced with the inner product. In general, block matrix multiplication works exactly like ordinary matrix multiplication, but with the scalar product inside replaced with the matrix product. An alternative method results when the decomposition is done the other way around, i.e. the first factor, A, is decomposed into column vectors $\mathbf{a}^1, \ldots, \mathbf{a}^p$ and the second factor, B, is decomposed into row vectors $\mathbf{b}^1, \ldots, \mathbf{b}^p$:

$AB = \mathbf{a}^1 \mathbf{b}^1 + \cdots + \mathbf{a}^p \mathbf{b}^p,$

a sum of p rank-1 outer products. This method emphasizes the effect of individual column/row pairs on the result, which is a useful point of view with e.g. covariance matrices, where each such pair corresponds to the effect of a single sample point. An example for a small matrix appears in the sketch below.
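The following short NumPy check (illustrative only, not from the original article) confirms that the sum of the column-times-row outer products reproduces the ordinary matrix product:

```python
import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[5, 6],
              [7, 8]])

# Sum over k of (k-th column of A) times (k-th row of B); each term is a
# rank-1 outer product, and the sum equals the ordinary matrix product.
outer_sum = sum(np.outer(A[:, k], B[k, :]) for k in range(A.shape[1]))

assert np.array_equal(outer_sum, A @ B)
```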

One more description of the matrix product is obtained when the second factor, B, is decomposed into its columns and the first factor, A, is viewed as a whole: then A acts on each column of B separately. If $\mathbf{x}$ is a vector and A is decomposed into columns $\mathbf{a}^1, \ldots, \mathbf{a}^p$, then

$A\mathbf{x} = x_1 \mathbf{a}^1 + \cdots + x_p \mathbf{a}^p.$

Algorithms for efficient matrix multiplication

The running time of square matrix multiplication, if carried out naively, is $O(n^3)$. The running time for multiplying rectangular matrices (one m×p matrix with one p×n matrix) is O(mnp). But more efficient algorithms do exist. Strassen's algorithm, devised by Volker Strassen in 1969 and often referred to as "fast matrix multiplication", is based on a clever way of multiplying two 2 × 2 matrices which requires only 7 multiplications (instead of the usual 8), at the expense of several additional addition and subtraction operations. Applying this trick recursively gives an algorithm with a multiplicative cost of $O(n^{\log_2 7}) \approx O(n^{2.807})$. Strassen's algorithm is awkward to implement, compared to the naive algorithm, and it lacks numerical stability. Nevertheless, it is beginning to appear in libraries such as BLAS, where it is computationally interesting for matrices with dimensions n > 100,[2] and is very useful for large matrices over exact domains such as finite fields, where numerical stability is not an issue.

The algorithm with the lowest known exponent is the Coppersmith–Winograd algorithm. It was presented by Don Coppersmith and Shmuel Winograd in 1990 and has an asymptotic complexity of $O(n^{2.376})$. It is similar to Strassen's algorithm: a clever way is devised for multiplying two k × k matrices with fewer than $k^3$ multiplications, and this technique is applied recursively. However, the constant coefficient hidden by the big O notation is so large that the Coppersmith–Winograd algorithm is only worthwhile for matrices that are too large to handle on present-day computers.[3]

Since any algorithm for multiplying two n × n matrices has to process all 2n² entries, there is an asymptotic lower bound of $\Omega(n^2)$ operations. Raz (2002) proves a lower bound of $\Omega(n^2 \log n)$ for bounded coefficient arithmetic circuits over the real or complex numbers.

Cohn et al. (2003, 2005) put methods such as the Strassen and Coppersmith–Winograd algorithms in an entirely different group-theoretic context. They show that if families of wreath products of Abelian groups with symmetric groups satisfying certain conditions exist, then there are matrix multiplication algorithms with essentially quadratic complexity. Most researchers believe that this is indeed the case;[4] a lengthy attempt at proving this was undertaken by the late Jim Eve.[5]
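A minimal sketch of Strassen's recursion follows (illustrative, assuming for simplicity square matrices whose size is a power of two); the seven products M1–M7 replace the usual eight block products:

```python
import numpy as np

def strassen(A, B):
    """Strassen's recursive scheme for n-by-n matrices, n a power of two."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Seven recursive products instead of eight.
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

# Check against the built-in product on a random 4x4 instance:
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, (4, 4))
B = rng.integers(-5, 5, (4, 4))
assert np.array_equal(strassen(A, B), A @ B)
```

In practice one would stop the recursion well above 1×1 blocks and fall back to the naive product, which is faster for small sizes.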


Because of the nature of matrix operations and the layout of matrices in memory, it is typically possible to obtain substantial performance improvements through parallelisation and vectorization. Some algorithms with lower time complexity on paper may therefore carry hidden costs on real machines.

Scalar multiplication

The scalar multiplication of a matrix A = (aij) and a scalar r gives a product rA of the same size as A. The entries of rA are given by

$(rA)_{ij} = r\,a_{ij}.$

For example, if

$A = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix} \quad\text{and}\quad r = 2,$

then

$rA = \begin{pmatrix} 2 & 4 \\ 6 & 8 \end{pmatrix}.$

If we are concerned with matrices over a more general ring, then the above multiplication is the left multiplication of the matrix A by the scalar r, while the right multiplication is defined to be

$(Ar)_{ij} = a_{ij}\,r.$

When the underlying ring is commutative, for example the real or complex number field, the two multiplications are the same. However, if the ring is not commutative, such as the quaternions, they may be different. For example, with the scalar i and the 1×1 matrix (j), left multiplication gives (ij) = (k), while right multiplication gives (ji) = (−k), as the sketch below also illustrates.
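A small Python sketch of this difference (the tuple representation and the helper qmul are illustrative, not from the original article):

```python
def qmul(p, q):
    """Hamilton product of quaternions given as (w, x, y, z) tuples."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

i = (0, 1, 0, 0)
j = (0, 0, 1, 0)

# Left and right scalar multiples of the 1x1 matrix (j) by the scalar i
# differ over the quaternions, since ij = k while ji = -k.
left = qmul(i, j)    # (0, 0, 0, 1),  i.e.  k
right = qmul(j, i)   # (0, 0, 0, -1), i.e. -k
assert left != right
```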

Hadamard product

For two matrices of the same dimensions, we have the Hadamard product (named after French mathematician Jacques Hadamard), also known as the entrywise product and the Schur product.[6] Formally, for two m×n matrices A and B, the Hadamard product A ∘ B is the m×n matrix with elements given by

$(A \circ B)_{ij} = a_{ij}\,b_{ij}.$

Note that the Hadamard product is a principal submatrix of the Kronecker product. The Hadamard product is commutative, associative and distributive over addition. The Hadamard product of two positive-(semi)definite matrices is positive-(semi)definite.[7]

For vectors $\mathbf{x}$ and $\mathbf{y}$, and corresponding diagonal matrices $D_x$ and $D_y$ with these vectors as their leading diagonals, the following identity holds:[8]

$\mathbf{x}^* (A \circ B)\, \mathbf{y} = \operatorname{tr}\!\left(D_x^* A D_y B^{\mathsf{T}}\right),$

where $\mathbf{x}^*$ denotes the conjugate transpose of $\mathbf{x}$. In particular, using vectors of ones, this shows that the sum of all elements in the Hadamard product is the trace of $A B^{\mathsf{T}}$.


A related result for square A and B is that the row sums of their Hadamard product are the diagonal elements of $A B^{\mathsf{T}}$:[9]

$\sum_{j} (A \circ B)_{ij} = \left(A B^{\mathsf{T}}\right)_{ii}.$

The Hadamard product appears in lossy compression algorithms such as JPEG.
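A short NumPy check of the "sum of all elements" identity above (illustrative only):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

hadamard = A * B  # entrywise (Hadamard) product

# With vectors of ones, the identity reduces to: the sum of all entries of
# the Hadamard product equals tr(A B^T).
ones = np.ones(2)
assert np.isclose(ones @ hadamard @ ones, np.trace(A @ B.T))
```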

Frobenius inner product

The Frobenius inner product, sometimes denoted A : B, is the component-wise inner product of two matrices viewed as vectors. In other words, it is the sum of the entries of the Hadamard product, that is,

$A : B = \sum_{i,j} a_{ij}\,b_{ij} = \operatorname{tr}\!\left(A^{\mathsf{T}} B\right).$

This inner product induces the Frobenius norm.
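Both claims can be checked numerically; a small sketch (names are illustrative):

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 4.0]])
B = np.array([[5.0, 6.0], [7.0, 8.0]])

# Frobenius inner product: sum of the entrywise product, equal to tr(A^T B).
frobenius = np.sum(A * B)
assert np.isclose(frobenius, np.trace(A.T @ B))

# The induced norm is the Frobenius norm: sqrt(A : A).
assert np.isclose(np.sqrt(np.sum(A * A)), np.linalg.norm(A, 'fro'))
```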

Powers of matrices

Square matrices can be multiplied by themselves repeatedly in the same way that ordinary numbers can. This repeated multiplication can be described as a power of the matrix. Using the ordinary notion of matrix multiplication, the following identities hold for an n-by-n matrix A, a positive integer k, and a scalar c:

$A^0 = I_n, \qquad A^{k} = A\,A^{k-1}, \qquad (cA)^k = c^k A^k.$

The naive computation of matrix powers is to multiply the matrix A into the result k times, starting with the identity matrix, just as in the scalar case. This can be improved using the binary representation of k, a method commonly used for scalars. An even better method is to use the eigenvalue decomposition of A. Calculating high powers of matrices can be very time-consuming, but the complexity of the calculation can be dramatically decreased by using the Cayley–Hamilton theorem, which takes advantage of an identity found using the matrices' characteristic polynomial and gives a much more effective equation for Ak, which raises a scalar to the required power rather than a matrix.
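The binary-representation method is exponentiation by squaring; a minimal sketch (mat_pow is an illustrative name, not a library routine):

```python
import numpy as np

def mat_pow(A, k):
    """Raise a square matrix to a non-negative integer power using the
    binary representation of k: O(log k) matrix products instead of k."""
    result = np.eye(A.shape[0], dtype=A.dtype)
    while k > 0:
        if k & 1:           # current binary digit of k is 1
            result = result @ A
        A = A @ A           # square for the next binary digit
        k >>= 1
    return result

A = np.array([[1, 1], [1, 0]])   # powers of this matrix generate Fibonacci numbers
assert np.array_equal(mat_pow(A, 10), np.linalg.matrix_power(A, 10))
```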

Powers of diagonal matrices

The kth power of a diagonal matrix D is the diagonal matrix whose diagonal entries are the kth powers of the corresponding entries of the original matrix D:

$\operatorname{diag}(d_1, \ldots, d_n)^k = \operatorname{diag}(d_1^k, \ldots, d_n^k).$

When raising an arbitrary matrix (not necessarily a diagonal matrix) to a power, it is often helpful to diagonalize the matrix first.


Notes

[1] Mary L. Boas, Mathematical Methods in the Physical Sciences, Third Edition, Wiley, 2006, p. 115
[2] Press 2007, p. 108.
[3] Robinson, Sara (2005), "Toward an Optimal Algorithm for Matrix Multiplication" (http://www.siam.org/pdf/news/174.pdf), SIAM News 38 (9).
[4] Robinson, 2005.
[5] Eve, 2009.
[6] (Horn & Johnson 1985, Ch. 5).
[7] (Styan 1973)
[8] (Horn & Johnson 1991)
[9] (Styan 1973)

References

• Henry Cohn, Robert Kleinberg, Balazs Szegedy, and Chris Umans, "Group-theoretic Algorithms for Matrix Multiplication", arXiv:math.GR/0511460, Proceedings of the 46th Annual Symposium on Foundations of Computer Science, 23–25 October 2005, Pittsburgh, PA, IEEE Computer Society, pp. 379–388.
• Henry Cohn, Chris Umans, "A Group-theoretic Approach to Fast Matrix Multiplication", arXiv:math.GR/0307321, Proceedings of the 44th Annual IEEE Symposium on Foundations of Computer Science, 11–14 October 2003, Cambridge, MA, IEEE Computer Society, pp. 438–449.
• Coppersmith, D.; Winograd, S., "Matrix multiplication via arithmetic progressions", J. Symbolic Comput. 9, pp. 251–280, 1990.
• Eve, James, On O(n^2 log n) algorithms for n x n matrix operations, Technical Report No. 1169, School of Computing Science, Newcastle University, August 2009. PDF (http://www.cs.ncl.ac.uk/publications/trs/papers/1169.pdf)
• Horn, Roger A.; Johnson, Charles R. (1985), Matrix Analysis, Cambridge University Press, ISBN 978-0-521-38632-6
• Horn, Roger A.; Johnson, Charles R. (1991), Topics in Matrix Analysis, Cambridge University Press, ISBN 978-0-521-46713-1
• Knuth, D.E., The Art of Computer Programming, Volume 2: Seminumerical Algorithms, Addison-Wesley Professional, 3rd edition (November 14, 1997), ISBN 978-0201896848, p. 501.
• Press, William H.; Flannery, Brian P.; Teukolsky, Saul A.; Vetterling, William T. (2007), Numerical Recipes: The Art of Scientific Computing (3rd ed.), Cambridge University Press, ISBN 978-0-521-88068-8.
• Ran Raz, "On the complexity of matrix product", in Proceedings of the thirty-fourth annual ACM symposium on Theory of computing, ACM Press, 2002, doi:10.1145/509907.509932.
• Robinson, Sara, "Toward an Optimal Algorithm for Matrix Multiplication", SIAM News 38 (9), November 2005. PDF (http://www.siam.org/pdf/news/174.pdf)
• Strassen, Volker, "Gaussian Elimination is not Optimal", Numer. Math. 13, pp. 354–356, 1969.
• Styan, George P. H. (1973), "Hadamard Products and Multivariate Statistical Analysis", Linear Algebra and its Applications 6: 217–240, doi:10.1016/0024-3795(73)90023-2


External links

• The Simultaneous Triple Product Property and Group-theoretic Results for the Exponent of Matrix Multiplication (http://arxiv.org/abs/cs/0703145)
• WIMS Online Matrix Multiplier (http://wims.unice.fr/~wims/en_tool~linear~matmult.html)
• Matrix Multiplication Problems (http://ceee.rice.edu/Books/LA/mult/mult4.html#TOP)
• Block Matrix Multiplication Problems (http://www.gordon-taft.net/MatrixMultiplication.html)
• Matrix Multiplication in C (http://www.edcc.edu/faculty/paul.bladek/Cmpsc142/matmult.htm)
• Wijesuriya, Viraj B., Daniweb: Sample Code for Matrix Multiplication using MPI Parallel Programming Approach (http://www.daniweb.com/forums/post1428830.html#post1428830), retrieved 2010-12-29
• Linear algebra: matrix operations (http://www.umat.feec.vutbr.cz/~novakm/algebra_matic/en): multiply or add matrices of a type and with coefficients you choose and see how the result was computed.
• Visual Matrix Multiplication (http://www.wefoundland.com/project/Visual_Matrix_Multiplication): an interactive app for learning matrix multiplication.
• Online Matrix Calculator (http://www.numberempire.com/matrixbinarycalculator.php)
• Matrix Multiplication in Java – Dr. P. Viry (http://www.ateji.com/px/whitepapers/Ateji PX MatMult Whitepaper v1.2.pdf?phpMyAdmin=95wsvAC1wsqrAq3j,M3duZU3UJ7)

Determinant

In linear algebra, the determinant is a value associated with a square matrix. The determinant provides important information when the matrix is that of the coefficients of a system of linear equations, or when it corresponds to a linear transformation of a vector space: in the former case the system has a unique solution if and only if the determinant is nonzero; in the latter case that same condition means that the transformation has an inverse operation. An intuitive interpretation can be given to the value of the determinant of a square matrix with real entries: the absolute value of the determinant gives the scale factor by which area or volume is multiplied under the associated linear transformation, while its sign indicates whether the transformation preserves orientation. Thus a 2 × 2 matrix with determinant −2, when applied to a region of the plane with finite area, will transform that region into one with twice the area, while reversing its orientation.

Determinants occur throughout mathematics. They appear in calculus as the Jacobian determinant in the substitution rule for integrals of functions of several variables. They are used to define the characteristic polynomial of a matrix, which is an essential tool in eigenvalue problems in linear algebra. In some cases they are used just as a compact notation for expressions that would otherwise be unwieldy to write down.

The determinant of a matrix A is denoted det(A), or without parentheses: det A. An alternative notation, used for compactness, especially in the case where the matrix entries are written out in full, is to denote the determinant of a matrix by surrounding the matrix entries by vertical bars instead of the usual brackets or parentheses. For instance, the determinant of the matrix

$\begin{pmatrix} a & b \\ c & d \end{pmatrix}$

is written

$\begin{vmatrix} a & b \\ c & d \end{vmatrix}$

and has the value ad − bc.

Although most often used for matrices whose entries are real or complex numbers, the definition of the determinant only involves addition, subtraction and multiplication, and so it can be defined for square matrices with entries taken from any commutative ring. Thus, for instance, the determinant of a matrix with integer coefficients will be an integer, and the matrix has an inverse with integer coefficients if and only if this determinant is 1 or −1 (these being the only invertible elements of the integers). For square matrices with entries in a non-commutative ring, for instance the quaternions, there is no unique definition for the determinant, and no definition that has all the usual properties of determinants over commutative rings.

Definition

The determinant of a square matrix A, one with the same number of rows and columns, is a value that can be obtained by multiplying certain sets of entries of A, and adding and subtracting such products, according to a given rule: it is a polynomial expression of the matrix entries. This expression grows rapidly with the size of the matrix (an n-by-n matrix contributes n! terms), so it will first be given explicitly for the case of 2-by-2 matrices and 3-by-3 matrices, followed by the rule for arbitrary-size matrices, which subsumes these two cases.

Assume A is a square matrix with n rows and n columns, so that it can be written as

$A = \begin{pmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{pmatrix}.$

The entries can be numbers or expressions (as happens when the determinant is used to define a characteristic polynomial); the definition of the determinant depends only on the fact that they can be added and multiplied together in a commutative manner. The determinant of A is denoted as det(A), or it can be denoted directly in terms of the matrix entries by writing enclosing bars instead of brackets:

$\begin{vmatrix} a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\ a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n,1} & a_{n,2} & \cdots & a_{n,n} \end{vmatrix}.$

2-by-2 matrices

The area of the parallelogram is the absolute value of the determinant of the matrix formed by the vectors representing the parallelogram's sides.

The determinant of a 2×2 matrix is defined by

$\det \begin{pmatrix} a & b \\ c & d \end{pmatrix} = \begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc.$


If the matrix entries are real numbers, the matrix A can be used to represent two linear mappings: one that maps the standard basis vectors to the rows of A, and one that maps them to the columns of A. In either case, the images of the basis vectors form a parallelogram that represents the image of the unit square under the mapping. The parallelogram defined by the rows of the above matrix is the one with vertices at (0,0), (a,b), (a + c, b + d), and (c,d), as shown in the accompanying diagram. The absolute value of ad − bc is the area of the parallelogram, and thus represents the scale factor by which areas are transformed by A. (The parallelogram formed by the columns of A is in general a different parallelogram, but since the determinant is symmetric with respect to rows and columns, the area will be the same.)

The absolute value of the determinant together with the sign becomes the oriented area of the parallelogram. The oriented area is the same as the usual area, except that it is negative when the angle from the first to the second vector defining the parallelogram turns in a clockwise direction (which is opposite to the direction one would get for the identity matrix).

Thus the determinant gives the scaling factor and the orientation induced by the mapping represented by A. When the determinant is equal to one, the linear mapping defined by the matrix is equi-areal and orientation-preserving.
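A small numeric sketch of the 2×2 formula and its area interpretation (illustrative only):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# ad - bc: the oriented area of the parallelogram spanned by the rows of A.
det = A[0, 0] * A[1, 1] - A[0, 1] * A[1, 0]   # 3*2 - 1*1 = 5
assert np.isclose(det, np.linalg.det(A))

# The unit square (area 1) is mapped to a parallelogram of area |det| = 5.
```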

3-by-3 matrices

The volume of this parallelepiped is the absolute value of the determinant of the matrix formed by the rows r1, r2, and r3.

The determinant of a 3×3 matrix is given by

$\det \begin{pmatrix} a & b & c \\ d & e & f \\ g & h & i \end{pmatrix} = aei + bfg + cdh - ceg - bdi - afh.$


The determinant of a 3×3 matrix can be calculated by its diagonals.

The rule of Sarrus is a mnemonic for this formula: the sum of the products of three diagonal north-west to south-east lines of matrix elements, minus the sum of the products of three diagonal south-west to north-east lines of elements, when the copies of the first two columns of the matrix are written beside it as in the illustration at the right.

This formula does not carry over into higher dimensions. For example, the determinant of

$A = \begin{pmatrix} -2 & 2 & -3 \\ -1 & 1 & 3 \\ 2 & 0 & -1 \end{pmatrix}$

is calculated using this rule:

$\det(A) = (-2)\cdot 1\cdot(-1) + 2\cdot 3\cdot 2 + (-3)\cdot(-1)\cdot 0 - (-3)\cdot 1\cdot 2 - 2\cdot(-1)\cdot(-2)\cdot 0 \ldots = 2 + 12 + 0 + 6 - 2 - 0 = 18.$

n-by-n matrices

The determinant of a matrix of arbitrary size can be defined by the Leibniz formula or the Laplace formula. The Leibniz formula for the determinant of an n-by-n matrix A is

$\det(A) = \sum_{\sigma \in S_n} \operatorname{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma_i}.$

Here the sum is computed over all permutations σ of the set {1, 2, ..., n}. A permutation is a function that reorders this set of integers. The position of the element i after the reordering σ is denoted σi. For example, for n = 3, the original sequence 1, 2, 3 might be reordered to σ = [2, 3, 1], with σ1 = 2, σ2 = 3, σ3 = 1. The set of all such permutations (also known as the symmetric group on n elements) is denoted Sn. For each permutation σ, sgn(σ) denotes the signature of σ; it is +1 for even σ and −1 for odd σ. Evenness or oddness can be defined as follows: the permutation is even (odd) if the new sequence can be obtained by an even number (odd, respectively) of switches of numbers. For example, starting from [1, 2, 3] and switching the positions of 2 and 3 yields [1, 3, 2], switching once more yields [3, 1, 2], and finally, after a total of three (an odd number) switches, [3, 2, 1] results. Therefore [3, 2, 1] is an odd permutation. Similarly, the permutation [2, 3, 1] is even ([1, 2, 3] → [2, 1, 3] → [2, 3, 1], with an even number of switches). It is explained in the article on parity of a permutation why a permutation cannot be simultaneously even and odd.

In any of the summands, the term

$\prod_{i=1}^{n} a_{i,\sigma_i}$

is notation for the product of the entries at positions (i, σi), where i ranges from 1 to n:

$a_{1,\sigma_1} \cdot a_{2,\sigma_2} \cdots a_{n,\sigma_n}.$

For example, the determinant of a 3-by-3 matrix A (n = 3) is

$\det(A) = a_{1,1}a_{2,2}a_{3,3} - a_{1,1}a_{2,3}a_{3,2} - a_{1,2}a_{2,1}a_{3,3} + a_{1,2}a_{2,3}a_{3,1} + a_{1,3}a_{2,1}a_{3,2} - a_{1,3}a_{2,2}a_{3,1}.$


This agrees with the rule of Sarrus given in the previous section. The formal extension to arbitrary dimensions was made by Tullio Levi-Civita using a pseudo-tensor symbol (see Levi-Civita symbol).
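The Leibniz formula translates directly into code, though at a cost of O(n · n!) it is practical only for tiny matrices. A sketch (the helper names are illustrative):

```python
import math
from itertools import permutations

def sign(perm):
    """Signature of a permutation: +1 if even, -1 if odd (counted via inversions)."""
    s = 1
    for i in range(len(perm)):
        for j in range(i + 1, len(perm)):
            if perm[i] > perm[j]:
                s = -s
    return s

def det_leibniz(A):
    """Determinant by the Leibniz formula: a signed sum over all n! permutations."""
    n = len(A)
    return sum(sign(p) * math.prod(A[i][p[i]] for i in range(n))
               for p in permutations(range(n)))

# Agrees with the rule of Sarrus on the 3-by-3 example above:
assert det_leibniz([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]) == 18
```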

Levi-Civita symbol

The determinant of an n-by-n matrix A can be expressed in terms of the totally antisymmetric Levi-Civita symbol as follows:

$\det(A) = \sum_{i_1, i_2, \ldots, i_n} \varepsilon_{i_1 i_2 \cdots i_n}\, a_{1,i_1} a_{2,i_2} \cdots a_{n,i_n}.$

Properties characterizing the determinant

The determinant has the following properties:

1. If A is a triangular matrix, i.e. ai,j = 0 whenever i > j or, alternatively, whenever i < j, then

$\det(A) = a_{1,1}\, a_{2,2} \cdots a_{n,n},$

the product of the diagonal entries of A. For example, the determinant of the identity matrix is one.

2. If B results from A by interchanging two rows or two columns, then det(B) = −det(A). The determinant is called alternating (as a function of the rows or columns of the matrix).

3. If B results from A by multiplying one row or column by a number c, then det(B) = c · det(A). As a consequence, multiplying the whole matrix by c yields

$\det(cA) = c^n \det(A).$

4. If B results from A by adding a multiple of one row to another row, or a multiple of one column to another column, then det(B) = det(A).

These properties can be shown by inspecting the definition via the Leibniz formula. For example, the first property holds because, for triangular matrices, the product $a_{1,\sigma_1} \cdots a_{n,\sigma_n}$ is zero for any permutation σ different from the identity permutation (the one not changing the order of the numbers 1, 2, ..., n), since then at least one ai,σ(i) is zero.

These four properties can be used to compute determinants of any matrix, using Gaussian elimination. This is an algorithm that transforms any given matrix to a triangular matrix, using only the operations in the last three items. Since the effect of these operations on the determinant can be traced, the determinant of the original matrix is known once Gaussian elimination is performed. For example, the determinant of the matrix A above can be computed using the following matrices:

$B = \begin{pmatrix} -2 & 2 & -3 \\ 0 & 0 & 4.5 \\ 2 & 0 & -1 \end{pmatrix}, \quad C = \begin{pmatrix} -2 & 2 & -3 \\ 0 & 0 & 4.5 \\ 0 & 2 & -4 \end{pmatrix}, \quad D = \begin{pmatrix} -2 & 2 & -3 \\ 0 & 2 & -4 \\ 0 & 0 & 4.5 \end{pmatrix}.$

Here, B is obtained from A by adding −1/2 × the first row to the second, so that det(A) = det(B). C is obtained from B by adding the first row to the third, so that det(C) = det(B). Finally, D is obtained from C by exchanging the second and third rows, so that det(D) = −det(C). The determinant of the (upper) triangular matrix D is the product of its entries on the main diagonal: (−2) · 2 · 4.5 = −18. Therefore det(A) = +18.
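The following sketch mechanizes this elimination-based computation, tracking the sign flips from row exchanges (partial pivoting is added for numerical robustness; it is not part of the hand computation above):

```python
import numpy as np

def det_by_elimination(A):
    """Reduce A to upper-triangular form with row operations, tracking how
    each operation changes the determinant: a row swap flips the sign,
    while adding a multiple of one row to another leaves it unchanged."""
    A = A.astype(float).copy()
    n = len(A)
    det_sign = 1.0
    for col in range(n):
        # Partial pivoting: pick the largest remaining entry in this column.
        pivot = max(range(col, n), key=lambda r: abs(A[r, col]))
        if A[pivot, col] == 0:
            return 0.0
        if pivot != col:
            A[[col, pivot]] = A[[pivot, col]]
            det_sign = -det_sign
        for r in range(col + 1, n):
            A[r] -= (A[r, col] / A[col, col]) * A[col]
    return det_sign * np.prod(np.diag(A))

A = np.array([[-2, 2, -3],
              [-1, 1,  3],
              [ 2, 0, -1]])
assert np.isclose(det_by_elimination(A), 18.0)
```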

Further properties

In addition to the above-mentioned properties characterizing the determinant, there are a number of further basic properties. For example, a matrix and its transpose have the same determinant:

$\det(A^{\mathsf{T}}) = \det(A).$

These properties are chiefly important from a theoretical point of view. For instance, the relation of the determinant and eigenvalues is not typically used to numerically compute either one, especially for large matrices, because of efficiency and numerical stability considerations.

In this section all matrices are assumed to be n-by-n matrices.

Multiplicativity and matrix groups

The determinant of a matrix product of square matrices equals the product of their determinants:

$\det(AB) = \det(A)\det(B).$

Thus the determinant is a multiplicative map. This property is a consequence of the characterization given above of the determinant as the unique n-linear alternating function of the columns with value 1 on the identity matrix, since the function Mn(K) → K that maps M ↦ det(AM) can easily be seen to be n-linear and alternating in the columns of M, and takes the value det(A) at the identity. The formula can be generalized to (square) products of rectangular matrices, giving the Cauchy–Binet formula, which also provides an independent proof of the multiplicative property.

The determinant det(A) of a matrix A is non-zero if and only if A is invertible or, equivalently, if its rank equals the size of the matrix. If so, the determinant of the inverse matrix is given by

$\det(A^{-1}) = \frac{1}{\det(A)}.$

In particular, products and inverses of matrices with determinant one still have this property. Thus, the set of such matrices (of fixed size n) forms a group known as the special linear group. More generally, the word "special" indicates the subgroup of another matrix group consisting of the matrices of determinant one. Examples include the special orthogonal group (which if n is 2 or 3 consists of all rotation matrices), and the special unitary group.


Relation to eigenvalues and trace

Determinants can be used to find the eigenvalues of the matrix A: they are the solutions of the characteristic equation

$\det(A - \lambda I) = 0,$

where I is the identity matrix of the same dimension as A. Conversely, det(A) is the product of the eigenvalues of A, counted with their algebraic multiplicities. The product of all non-zero eigenvalues is referred to as the pseudo-determinant.

A Hermitian matrix is positive definite if all its eigenvalues are positive. Sylvester's criterion asserts that this is equivalent to the determinants of the upper-left k×k submatrices of A being positive, for all k between 1 and n.

The trace tr(A) is by definition the sum of the diagonal entries of A and also equals the sum of the eigenvalues. Thus, for complex matrices A,

$\det(\exp(A)) = \exp(\operatorname{tr}(A)),$

or, for real matrices A,

$\operatorname{tr}(A) = \log(\det(\exp(A))).$

Here exp(A) denotes the matrix exponential of A, because every eigenvalue λ of A corresponds to the eigenvalue exp(λ) of exp(A). In particular, given any logarithm of A, that is, any matrix L satisfying

$\exp(L) = A,$

the determinant of A is given by

$\det(A) = \exp(\operatorname{tr}(L)).$

For example, for n = 2 and n = 3, respectively,

$\det(A) = \tfrac{1}{2}\!\left[(\operatorname{tr}(A))^2 - \operatorname{tr}(A^2)\right],$

$\det(A) = \tfrac{1}{6}\!\left[(\operatorname{tr}(A))^3 - 3\operatorname{tr}(A)\operatorname{tr}(A^2) + 2\operatorname{tr}(A^3)\right].$

These formulae are closely related to Newton's identities. A generalization of the above identities can be obtained from the following Taylor series expansion of the determinant:

$\det(I + A) = \exp\!\left(\sum_{j=1}^{\infty} \frac{(-1)^{j+1}}{j}\operatorname{tr}(A^j)\right),$

where I is the identity matrix.


Laplace's formula and the adjugate matrix

Laplace's formula expresses the determinant of a matrix in terms of its minors. The minor Mi,j is defined to be the determinant of the (n−1)×(n−1) matrix that results from A by removing the i-th row and the j-th column. The expression (−1)i+jMi,j is known as a cofactor. The determinant of A is given by

$\det(A) = \sum_{j=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j} = \sum_{i=1}^{n} (-1)^{i+j} a_{i,j} M_{i,j}.$

Calculating det(A) by means of this formula is referred to as expanding the determinant along a row or column. For the example 3-by-3 matrix A above, Laplace expansion along the second column (j = 2, the sum runs over i) yields

$\det(A) = -2 \cdot \begin{vmatrix} -1 & 3 \\ 2 & -1 \end{vmatrix} + 1 \cdot \begin{vmatrix} -2 & -3 \\ 2 & -1 \end{vmatrix} - 0 \cdot \begin{vmatrix} -2 & -3 \\ -1 & 3 \end{vmatrix} = -2\cdot(-5) + 1\cdot 8 - 0 = 18.$

However, Laplace expansion is efficient for small matrices only.

The adjugate matrix adj(A) is the transpose of the matrix consisting of the cofactors, i.e.,

$(\operatorname{adj}(A))_{i,j} = (-1)^{i+j} M_{j,i}.$
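A minimal recursive sketch of Laplace expansion along the first row (illustrative; O(n!) as noted above):

```python
def det_laplace(A):
    """Determinant by recursive cofactor expansion along the first row."""
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j+1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det_laplace(minor)
    return total

assert det_laplace([[-2, 2, -3], [-1, 1, 3], [2, 0, -1]]) == 18
```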

Cramer's rule

For a matrix equation

$A\mathbf{x} = \mathbf{b},$

the solution is given by Cramer's rule:

$x_i = \frac{\det(A_i)}{\det(A)}, \qquad i = 1, \ldots, n,$

where Ai is the matrix formed by replacing the i-th column of A by the column vector b. This fact is implied by the identity

$A \cdot \operatorname{adj}(A) = \operatorname{adj}(A) \cdot A = \det(A)\, I_n.$

It has recently been shown that Cramer's rule can be implemented in $O(n^3)$ time, which is comparable to more common methods of solving systems of linear equations, such as LU, QR, or singular value decomposition.
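A short sketch of Cramer's rule (illustrative only; in practice one would solve by elimination instead):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_i = det(A_i) / det(A), where A_i
    is A with its i-th column replaced by b."""
    det_A = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / det_A
    return x

A = np.array([[2.0, 1.0], [1.0, 3.0]])
b = np.array([3.0, 5.0])
assert np.allclose(cramer_solve(A, b), np.linalg.solve(A, b))
```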

Sylvester's determinant theorem

Sylvester's determinant theorem states that for A, an m-by-n matrix, and B, an n-by-m matrix,

$\det(I_m + AB) = \det(I_n + BA),$

where $I_m$ and $I_n$ are the m-by-m and n-by-n identity matrices, respectively. For the case of a column vector c and a row vector r, each with m components, the formula allows the quick calculation of the determinant of a matrix that differs from the identity matrix by a matrix of rank 1:

$\det(I_m + c\,r) = 1 + r\,c.$

More generally, for any invertible m-by-m matrix X,[1]

$\det(X + AB) = \det(X)\,\det(I_n + BX^{-1}A),$

and

$\det(X + c\,r) = \det(X)\,(1 + r\,X^{-1}c).$
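The theorem is easy to check numerically; a sketch (the random matrices are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 3, 5
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, m))

# det(I_m + AB) = det(I_n + BA), even though the two identity matrices
# (and the two products) have different sizes.
lhs = np.linalg.det(np.eye(m) + A @ B)
rhs = np.linalg.det(np.eye(n) + B @ A)
assert np.isclose(lhs, rhs)
```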

Block matrices

Suppose A, B, C, and D are n×n, n×m, m×n, and m×m matrices, respectively. Then

$\det\begin{pmatrix} A & 0 \\ C & D \end{pmatrix} = \det(A)\det(D) = \det\begin{pmatrix} A & B \\ 0 & D \end{pmatrix}.$

This can be seen from the Leibniz formula or by induction on n. When A is invertible, employing the identity

$\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \begin{pmatrix} A & 0 \\ C & I_m \end{pmatrix} \begin{pmatrix} I_n & A^{-1}B \\ 0 & D - CA^{-1}B \end{pmatrix}$

leads to

$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(A)\det(D - CA^{-1}B).$

When D is invertible, a similar identity with $\det(D)$ factored out can be derived analogously,[2] that is,

$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(D)\det(A - BD^{-1}C).$

When the blocks are square matrices of the same order, further formulas hold. For example, if C and D commute (i.e., CD = DC), then the following formula, comparable to the determinant of a 2-by-2 matrix, holds:[3]

$\det\begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det(AD - BC).$

Derivative

By definition, e.g., using the Leibniz formula, the determinant of real (or analogously complex) square matrices is a polynomial function from Rn×n to R. As such it is everywhere differentiable. Its derivative can be expressed using Jacobi's formula:

$\frac{d \det(A)}{d\alpha} = \operatorname{tr}\!\left(\operatorname{adj}(A)\, \frac{dA}{d\alpha}\right),$

where adj(A) denotes the adjugate of A. In particular, if A is invertible, we have

$\frac{d \det(A)}{d\alpha} = \det(A)\, \operatorname{tr}\!\left(A^{-1} \frac{dA}{d\alpha}\right).$

Expressed in terms of the entries of A, these are

$\frac{\partial \det(A)}{\partial a_{ij}} = \operatorname{adj}(A)_{ji} = \det(A)\,(A^{-1})_{ji}.$

Yet another equivalent formulation is

$\det(A + \epsilon X) - \det(A) = \operatorname{tr}(\operatorname{adj}(A)\, X)\,\epsilon + O(\epsilon^2),$

using big O notation. The special case where A = I, the identity matrix, yields

$\det(I + \epsilon X) = 1 + \operatorname{tr}(X)\,\epsilon + O(\epsilon^2).$

This identity is used in describing the tangent space of certain matrix Lie groups.

If the matrix A is written as $A = [\mathbf{a}\ \mathbf{b}\ \mathbf{c}]$, where a, b, c are column vectors of length 3, then the gradient over one of the three vectors may be written as the cross product of the other two:

$\nabla_{\mathbf{a}} \det(A) = \mathbf{b} \times \mathbf{c}, \qquad \nabla_{\mathbf{b}} \det(A) = \mathbf{c} \times \mathbf{a}, \qquad \nabla_{\mathbf{c}} \det(A) = \mathbf{a} \times \mathbf{b}.$
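Jacobi's formula can be verified against a finite-difference approximation; a sketch (sizes and tolerances are arbitrary, illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
dA = rng.standard_normal((4, 4))
eps = 1e-6

# Jacobi's formula: d/dt det(A + t dA) at t = 0 equals tr(adj(A) dA);
# for invertible A, adj(A) = det(A) * inv(A).
adj_A = np.linalg.det(A) * np.linalg.inv(A)
analytic = np.trace(adj_A @ dA)
numeric = (np.linalg.det(A + eps * dA) - np.linalg.det(A - eps * dA)) / (2 * eps)
assert np.isclose(analytic, numeric, rtol=1e-4)
```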


Abstract algebraic aspects

Determinant of an endomorphism

The above identities concerning the determinant of products and inverses of matrices imply that similar matrices have the same determinant: two matrices A and B are similar if there exists an invertible matrix X such that A = X−1BX. Indeed, repeatedly applying the above identities yields

$\det(A) = \det(X)^{-1} \det(B) \det(X) = \det(B).$

The determinant is therefore also called a similarity invariant. The determinant of a linear transformation

$T : V \to V$

for some finite-dimensional vector space V is defined to be the determinant of the matrix describing it, with respect to an arbitrary choice of basis in V. By similarity invariance, this determinant is independent of the choice of the basis for V and therefore only depends on the endomorphism T.

Exterior algebra

The determinant can also be characterized as the unique function

$D : M_n(K) \to K$

from the set of all n-by-n matrices with entries in a field K to this field satisfying the following three properties. First, D is an n-linear function: considering all but one column of A fixed, the determinant is linear in the remaining column, that is

$D(v_1, \ldots, v_{i-1}, a v_i + b w, v_{i+1}, \ldots, v_n) = a\,D(v_1, \ldots, v_n) + b\,D(v_1, \ldots, v_{i-1}, w, v_{i+1}, \ldots, v_n)$

for any column vectors v1, ..., vn, and w and any scalars (elements of K) a and b. Second, D is an alternating function: for any matrix A with two identical columns, D(A) = 0. Finally, D(In) = 1, where In is the identity matrix.

This fact also implies that every other n-linear alternating function F: Mn(K) → K satisfies

$F(M) = F(I)\,D(M).$

The last part in fact follows from the preceding statement: one easily sees that if F is nonzero it satisfies F(I) ≠ 0, and the function that associates F(M)/F(I) to M satisfies all conditions of the theorem. The importance of stating this part is mainly that it remains valid[4] if K is any commutative ring rather than a field, in which case the given argument does not apply.

The determinant of a linear transformation A : V → V of an n-dimensional vector space V can be formulated in a coordinate-free manner by considering the n-th exterior power ΛnV of V. A induces a linear map

$\Lambda^n A : \Lambda^n V \to \Lambda^n V, \qquad v_1 \wedge v_2 \wedge \cdots \wedge v_n \mapsto A v_1 \wedge A v_2 \wedge \cdots \wedge A v_n.$

As ΛnV is one-dimensional, the map ΛnA is given by multiplication by some scalar. This scalar coincides with the determinant of A, that is to say

$\Lambda^n A = \det(A) \cdot \operatorname{id}.$

This definition agrees with the more concrete coordinate-dependent definition, which follows from the characterization of the determinant given above. For example, switching two columns changes the parity of the determinant; likewise, permuting the vectors in the exterior product v1 ∧ v2 ∧ ... ∧ vn to v2 ∧ v1 ∧ v3 ∧ ... ∧ vn, say, also alters the parity.


For this reason, the highest non-zero exterior power Λn(V) is sometimes also called the determinant of V, and similarly for more involved objects such as vector bundles or chain complexes of vector spaces. Minors of a matrix can also be cast in this setting, by considering lower alternating forms ΛkV with k < n.

Square matrices over commutative rings and abstract properties

The determinant of a matrix can be defined, for example using the Leibniz formula, for matrices with entries in any commutative ring. Briefly, a ring is a structure where addition, subtraction, and multiplication are defined. The commutativity requirement means that the product does not depend on the order of the two factors, i.e.,

$rs = sr$

is supposed to hold for all elements r and s of the ring. For example, the integers form a commutative ring.

Many of the above statements and notions carry over mutatis mutandis to determinants of these more general matrices: the determinant is multiplicative in this more general situation, and Cramer's rule also holds. A square matrix over a commutative ring R is invertible if and only if its determinant is a unit in R, that is, an element having a (multiplicative) inverse. (If R is a field, this latter condition is equivalent to the determinant being nonzero, thus giving back the above characterization.) For example, a matrix A with entries in Z, the integers, is invertible (in the sense that the inverse matrix has again integer entries) if the determinant is +1 or −1. Such a matrix is called unimodular.

The determinant defines a mapping

$\det : GL_n(R) \to R^{\times}$

between the group of invertible n×n matrices with entries in R and the multiplicative group of units in R. Since it respects the multiplication in both groups, this map is a group homomorphism. Secondly, given a ring homomorphism f: R → S, there is a map GLn(R) → GLn(S) given by replacing all entries in R by their images under f. The determinant respects these maps, i.e., given a matrix A = (ai,j) with entries in R, the identity

$f(\det(A)) = \det(f(A))$

holds. For example, the determinant of the complex conjugate of a complex matrix (which is also the determinant of its conjugate transpose) is the complex conjugate of its determinant, and for integer matrices: the reduction modulo m of the determinant of such a matrix is equal to the determinant of the matrix reduced modulo m (the latter determinant being computed using modular arithmetic). In the more high-brow parlance of category theory, the determinant is a natural transformation between the two functors GLn and (⋅)×.[5] Adding yet another layer of abstraction, this is captured by saying that the determinant is a morphism of algebraic groups, from the general linear group to the multiplicative group.

Generalizations and related notions

Infinite matrices

For matrices with an infinite number of rows and columns, the above definitions of the determinant do not carry over directly. For example, in the Leibniz formula, an infinite sum (all of whose terms are infinite products) would have to be calculated. Functional analysis provides different extensions of the determinant for such infinite-dimensional situations, which however only work for particular kinds of operators. The Fredholm determinant defines the determinant for operators known as trace class operators by an appropriate generalization of the formula

$\det(I + A) = \exp(\operatorname{tr}(\log(I + A))).$

Another infinite-dimensional notion of determinant is the functional determinant.


Notions of determinant over non-commutative rings

For square matrices with entries in a non-commutative ring, there are various difficulties in defining determinants in a manner analogous to that for commutative rings. A meaning can be given to the Leibniz formula provided the order for the product is specified, and similarly for other ways to define the determinant, but non-commutativity then leads to the loss of many fundamental properties of the determinant, for instance the multiplicative property or the fact that the determinant is unchanged under transposition of the matrix. Over non-commutative rings, there is no reasonable notion of a multilinear form (if a bilinear form exists with a regular element of R as value on some pair of arguments, it can be used to show that all elements of R commute). Nevertheless, various notions of non-commutative determinant have been formulated, which preserve some of the properties of determinants, notably quasideterminants and the Dieudonné determinant.

Further variants

Determinants of matrices in superrings (that is, Z/2-graded rings) are known as Berezinians or superdeterminants.[6]

The permanent of a matrix is defined as the determinant, except that the factors sgn(σ) occurring in Leibniz' rule are omitted. The immanant generalizes both by introducing a character of the symmetric group Sn in Leibniz' rule.

Calculation

Naive methods of implementing an algorithm to compute the determinant include using the Leibniz formula or Laplace's formula. Both these approaches are extremely inefficient for large matrices, though, since the number of required operations grows very quickly: it is of order n! (n factorial) for an n×n matrix M. For example, the Leibniz formula requires calculating n! products. Therefore, more involved techniques have been developed for calculating determinants.

Decomposition methods

Given a matrix A, some methods compute its determinant by writing A as a product of matrices whose determinants can be more easily computed. Such techniques are referred to as decomposition methods. Examples include the LU decomposition, the Cholesky decomposition and the QR decomposition. These methods are of order $O(n^3)$, which is a significant improvement over O(n!).

The LU decomposition expresses A in terms of a lower triangular matrix L, an upper triangular matrix U and a permutation matrix P:

$A = PLU.$

The determinants of L and U can be quickly calculated, since they are the products of the respective diagonal entries. The determinant of P is just the sign of the corresponding permutation. The determinant of A is then

$\det(A) = \det(P)\det(L)\det(U).$

Moreover, the decomposition can be chosen such that L is a unitriangular matrix and therefore has determinant 1, in which case the formula further simplifies to

$\det(A) = \det(P)\det(U) = \pm \det(U).$
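A sketch of this method using SciPy's LU factorization (assuming SciPy is available; det_via_lu is an illustrative name):

```python
import numpy as np
from scipy.linalg import lu

def det_via_lu(A):
    """det(A) = det(P) det(L) det(U); here L is unit lower triangular, so
    its factor is 1, and det(P) is the sign of the permutation."""
    P, L, U = lu(A)            # A = P @ L @ U
    sign = np.linalg.det(P)    # +/- 1
    return sign * np.prod(np.diag(U))

A = np.array([[-2.0, 2.0, -3.0],
              [-1.0, 1.0,  3.0],
              [ 2.0, 0.0, -1.0]])
assert np.isclose(det_via_lu(A), 18.0)
```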


Further methods

If the determinant of A and the inverse of A have already been computed, the matrix determinant lemma allows one to quickly calculate the determinant of A + uvT, where u and v are column vectors.

Since the definition of the determinant does not need divisions, a question arises: do fast algorithms exist that do not need divisions? This is especially interesting for matrices over rings. Indeed, algorithms with run-time proportional to n4 exist. An algorithm of Mahajan and Vinay, and Berkowitz[7] is based on closed ordered walks (short: clow). It computes more products than the determinant definition requires, but some of these products cancel and the sum of these products can be computed more efficiently. The final algorithm looks very much like an iterated product of triangular matrices.

If two matrices of order n can be multiplied in time M(n), where M(n) ≥ na for some a > 2, then the determinant can be computed in time O(M(n)).[8] This means, for example, that an $O(n^{2.376})$ algorithm exists based on the Coppersmith–Winograd algorithm.

Algorithms can also be assessed according to their bit complexity, i.e., how many bits of accuracy are needed to store intermediate values occurring in the computation. For example, the Gaussian elimination (or LU decomposition) method is of order $O(n^3)$, but the bit length of intermediate values can become exponentially long.[9] The Bareiss algorithm, on the other hand, is an exact-division method based on Sylvester's identity that is also of order n3, but whose bit complexity is roughly the bit size of the original entries in the matrix times n.

History
Historically, determinants were considered without reference to matrices: originally, a determinant was defined as a property of a system of linear equations. The determinant "determines" whether the system has a unique solution (which occurs precisely if the determinant is non-zero). In this sense, determinants were first used in the Chinese mathematics textbook The Nine Chapters on the Mathematical Art (九章算術, Chinese scholars, around the 3rd century BC). In Europe, two-by-two determinants were considered by Cardano at the end of the 16th century and larger ones by Leibniz.[10] [11] [12] [13]

In Europe, Cramer (1750) added to the theory, treating the subject in relation to sets of equations. The recurrent law was first announced by Bézout (1764).

It was Vandermonde (1771) who first recognized determinants as independent functions.[10] Laplace (1772)[14] [15] gave the general method of expanding a determinant in terms of its complementary minors: Vandermonde had already given a special case. Immediately following, Lagrange (1773) treated determinants of the second and third order. Lagrange was the first to apply determinants to questions of elimination theory; he proved many special cases of general identities.

Gauss (1801) made the next advance. Like Lagrange, he made much use of determinants in the theory of numbers. He introduced the word "determinant" (Laplace had used "resultant"), though not in the present signification, but rather as applied to the discriminant of a quantic. Gauss also arrived at the notion of reciprocal (inverse) determinants, and came very near the multiplication theorem.

The next contributor of importance is Binet (1811, 1812), who formally stated the theorem relating to the product of two matrices of m columns and n rows, which for the special case of m = n reduces to the multiplication theorem. On the same day (November 30, 1812) that Binet presented his paper to the Academy, Cauchy also presented one on the subject. (See Cauchy–Binet formula.) In this he used the word determinant in its present sense,[16] [17] summarized and simplified what was then known on the subject, improved the notation, and gave the multiplication theorem with a proof more satisfactory than Binet's.[10] [18] With him begins the theory in its generality.

The next important figure was Jacobi[11] (from 1827). He early used the functional determinant which Sylvester later called the Jacobian, and in his memoirs in Crelle for 1841 he specially treats this subject, as well as the class of alternating functions which Sylvester has called alternants. About the time of Jacobi's last memoirs, Sylvester (1839) and Cayley began their work.[19] [20]

The study of special forms of determinants has been the natural result of the completion of the general theory. Axisymmetric determinants have been studied by Lebesgue, Hesse, and Sylvester; persymmetric determinants by Sylvester and Hankel; circulants by Catalan, Spottiswoode, Glaisher, and Scott; skew determinants and Pfaffians, in connection with the theory of orthogonal transformation, by Cayley; continuants by Sylvester; Wronskians (so called by Muir) by Christoffel and Frobenius; compound determinants by Sylvester, Reiss, and Picquet; Jacobians and Hessians by Sylvester; and symmetric gauche determinants by Trudi. Of the text-books on the subject Spottiswoode's was the first. In America, Hanus (1886), Weld (1893), and Muir/Metzler (1933) published treatises.

Axler in 1995 attacked the determinant's place in linear algebra. He saw it as something to be derived from the core principles of linear algebra, not something to be used to derive them.[21]

Applications

Orientation of a basis
One often thinks of the determinant as assigning a number to every sequence of n vectors in R^n, by using the square matrix whose columns are the given vectors. For instance, an orthogonal matrix with real entries represents an orthonormal basis in Euclidean space. The determinant of such a matrix determines whether the orientation of the basis is consistent with or opposite to the orientation of the standard basis. Namely, if the determinant is +1, the basis has the same orientation. If it is −1, the basis has the opposite orientation.

More generally, if the determinant of A is positive, A represents an orientation-preserving linear transformation (if A is an orthogonal 2×2 or 3×3 matrix, this is a rotation), while if it is negative, A switches the orientation of the basis.
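As a small numerical illustration (a sketch with numpy): a rotation has determinant +1 while a reflection has determinant −1:

```python
import numpy as np

rotation = np.array([[0., -1.], [1., 0.]])    # 90-degree rotation: orientation-preserving
reflection = np.array([[1., 0.], [0., -1.]])  # mirror across the x-axis

print(np.linalg.det(rotation))    # +1.0
print(np.linalg.det(reflection))  # -1.0
```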

Volume
Determinants are used to calculate volumes in vector calculus: the absolute value of the determinant of n real vectors is equal to the volume of the parallelepiped spanned by those vectors. As a consequence, if the linear map f : R^n → R^n is represented by the matrix A, and S is any measurable subset of R^n, then the volume of f(S) is given by |det(A)| · volume(S). More generally, if the linear map f : R^n → R^m is represented by the m-by-n matrix A, and S is any measurable subset of R^n, then the n-dimensional volume of f(S) is given by sqrt(det(A^T A)) · volume(S). By calculating the volume of the tetrahedron bounded by four points, determinants can be used to identify skew lines.

The volume of any tetrahedron, given its vertices a, b, c, and d, is (1/6)·|det(a − b, b − c, c − d)|, or any other combination of pairs of vertices that would form a spanning tree over the vertices.
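A sketch of the tetrahedron formula with numpy, checked against the standard simplex with vertices 0, e1, e2, e3, whose volume is 1/6:

```python
import numpy as np

def tetrahedron_volume(a, b, c, d):
    """Volume = (1/6) * |det of three edge vectors forming a spanning tree|."""
    return abs(np.linalg.det(np.column_stack([a - b, b - c, c - d]))) / 6.0

a = np.array([0., 0., 0.])
b = np.array([1., 0., 0.])
c = np.array([0., 1., 0.])
d = np.array([0., 0., 1.])
print(tetrahedron_volume(a, b, c, d))  # 0.1666... = 1/6
```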

Jacobian determinant
Given a differentiable function f : R^n → R^n, the n-by-n matrix J_f whose entries are given by

    (J_f)_{i,j} = ∂f_i / ∂x_j

is called the Jacobian matrix of f. Its determinant, the Jacobian determinant, appears in the higher-dimensional version of integration by substitution. It also occurs in the inverse function theorem.
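For instance, for the polar-coordinate map (r, θ) ↦ (r cos θ, r sin θ) the Jacobian determinant is r, which is why dx dy becomes r dr dθ under substitution. A sketch with sympy:

```python
from sympy import symbols, cos, sin, simplify, Matrix

r, theta = symbols('r theta', positive=True)
f = Matrix([r * cos(theta), r * sin(theta)])   # the polar-coordinate map

J = f.jacobian(Matrix([r, theta]))             # 2x2 matrix of partial derivatives
print(simplify(J.det()))                       # r
```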


Notes
[1] Proofs can be found in http://web.archive.org/web/20080113084601/http://www.ee.ic.ac.uk/hp/staff/www/matrix/proof003.html
[2] These identities were taken from http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/proof003.html
[3] Proofs are given at http://www.mth.kcl.ac.uk/~jrs/gazette/blocks.pdf
[4] Roger Godement, Cours d'Algèbre, seconde édition, Hermann (1966), §23, Théorème 5, p. 303
[5] Mac Lane, Saunders (1998), Categories for the Working Mathematician, Graduate Texts in Mathematics 5 (2nd ed.), Springer-Verlag, ISBN 0-387-98403-8
[6] (http://books.google.de/books?id=sZ1-G4hQgIIC&pg=PA116&dq=Berezinian&hl=de&ei=chHHTcefLJGdOpajsPYB&sa=X&oi=book_result&ct=result&resnum=1&ved=0CC4Q6AEwAA#v=onepage&q=Berezinian&f=false)
[7] http://page.inf.fu-berlin.de/~rote/Papers/pdf/Division-free+algorithms.pdf
[8] J.R. Bunch and J.E. Hopcroft, Triangular factorization and inversion by fast matrix multiplication, Mathematics of Computation, 28 (1974) 231–236.
[9] Fang, Xin Gui; Havas, George (1997). "On the worst-case complexity of integer Gaussian elimination" (http://perso.ens-lyon.fr/gilles.villard/BIBLIOGRAPHIE/PDF/ft_gateway.cfm.pdf). Proceedings of the 1997 International Symposium on Symbolic and Algebraic Computation. ISSAC '97. Kihei, Maui, Hawaii, United States: ACM. pp. 28–31. doi:10.1145/258726.258740. ISBN 0-89791-875-4.
[10] Campbell, H: "Linear Algebra With Applications", pages 111–112. Appleton Century Crofts, 1971.
[11] Eves, H: "An Introduction to the History of Mathematics", pages 405, 493–494, Saunders College Publishing, 1990.
[12] A Brief History of Linear Algebra and Matrix Theory: http://darkwing.uoregon.edu/~vitulli/441.sp04/LinAlgHistory.html
[13] Cajori, F. A History of Mathematics, p. 80 (http://books.google.com/books?id=bBoPAAAAIAAJ&pg=PA80#v=onepage&f=false)
[14] Expansion of determinants in terms of minors: Laplace, Pierre-Simon (de) "Recherches sur le calcul intégral et sur le système du monde," Histoire de l'Académie Royale des Sciences (Paris), seconde partie, pages 267–376 (1772).
[15] Muir, Sir Thomas, The Theory of Determinants in the Historical Order of Development [London, England: Macmillan and Co., Ltd., 1906].
[16] The first use of the word "determinant" in the modern sense appeared in: Cauchy, Augustin-Louis, "Mémoire sur les fonctions qui ne peuvent obtenir que deux valeurs égales et de signes contraires par suite des transpositions opérées entre les variables qu'elles renferment," which was first read at the Institut de France in Paris on November 30, 1812, and which was subsequently published in the Journal de l'École Polytechnique, Cahier 17, Tome 10, pages 29–112 (1815).
[17] Origins of mathematical terms: http://jeff560.tripod.com/d.html
[18] History of matrices and determinants: http://www-history.mcs.st-and.ac.uk/history/HistTopics/Matrices_and_determinants.html
[19] The first use of vertical lines to denote a determinant appeared in: Cayley, Arthur, "On a theorem in the geometry of position," Cambridge Mathematical Journal, vol. 2, pages 267–271 (1841).
[20] History of matrix notation: http://jeff560.tripod.com/matrices.html
[21] Down with Determinants: http://www.axler.net/DwD.html

References
• Axler, Sheldon Jay (1997), Linear Algebra Done Right (2nd ed.), Springer-Verlag, ISBN 0387982590
• de Boor, Carl (1990), "An empty exercise" (http://ftp.cs.wisc.edu/Approx/empty.pdf), ACM SIGNUM Newsletter 25 (2): 3–7, doi:10.1145/122272.122273
• Lay, David C. (August 22, 2005), Linear Algebra and Its Applications (3rd ed.), Addison Wesley, ISBN 978-0321287137
• Meyer, Carl D. (February 15, 2001), Matrix Analysis and Applied Linear Algebra (http://www.matrixanalysis.com/DownloadChapters.html), Society for Industrial and Applied Mathematics (SIAM), ISBN 978-0898714548
• Poole, David (2006), Linear Algebra: A Modern Introduction (2nd ed.), Brooks/Cole, ISBN 0-534-99845-3
• Anton, Howard (2005), Elementary Linear Algebra (Applications Version) (9th ed.), Wiley International
• Leon, Steven J. (2006), Linear Algebra With Applications (7th ed.), Pearson Prentice Hall


External links
• WebApp to calculate determinants and descriptively solve systems of linear equations (http://sole.ooz.ie/en)
• Determinant Interactive Program and Tutorial (http://people.revoledu.com/kardi/tutorial/LinearAlgebra/MatrixDeterminant.html)
• Online Matrix Calculator (http://matri-tri-ca.narod.ru/en.index.html)
• Linear algebra: determinants (http://www.umat.feec.vutbr.cz/~novakm/determinanty/en/). Compute determinants of matrices up to order 6 using a Laplace expansion you choose.
• Matrices and Linear Algebra on the Earliest Uses Pages (http://www.economics.soton.ac.uk/staff/aldrich/matrices.htm)
• Determinants explained in an easy fashion in the 4th chapter as a part of a Linear Algebra course (http://algebra.math.ust.hk/course/content.shtml)
• Instructional video on taking the determinant of an n×n matrix (Khan Academy) (http://khanexercises.appspot.com/video?v=H9BWRYJNIv4)
• Online matrix calculator (determinant, trace, inverse, adjoint, transpose) (http://www.stud.feec.vutbr.cz/~xvapen02/vypocty/matreg.php?language=english). Compute determinants of matrices up to order 8.

Exterior algebra

[Figure] The cross product (blue vector) in relation to the exterior product (light blue parallelogram). The length of the cross product is to the length of the parallel unit vector (red) as the size of the exterior product is to the size of the reference parallelogram (light red).

In mathematics, the exterior product or wedge product of vectors is an algebraic construction used in Euclidean geometry to study areas, volumes, and their higher-dimensional analogs. The exterior product of two vectors u and v, denoted by u ∧ v, lies in a space called the exterior square, a different geometrical space (vector space) than the original space of vectors. The magnitude[1] of u ∧ v can be interpreted as the area of the parallelogram with sides u and v, which can also be computed using the cross product of the two vectors. Also like the cross product, the exterior product is anticommutative, meaning that u ∧ v = −v ∧ u for all vectors u and v. One way to visualize the exterior product of two vectors is as a family of parallelograms all lying in the same plane, having the same area, and with the same orientation of their boundaries (a choice of clockwise or counterclockwise). When thought of in this manner (common in geometric algebra) the exterior product of two vectors is called a 2-blade. More generally, the exterior product of any number k of vectors can be defined and is sometimes called a k-blade. It lives in a geometrical space known as the k-th exterior power. The magnitude of the resulting k-blade is the volume of the k-dimensional parallelotope whose sides are the given vectors, just as the magnitude of the scalar triple product of vectors in three dimensions gives the volume of the parallelepiped spanned by those vectors.

The exterior algebra (also known as the Grassmann algebra, after Hermann Grassmann[2]) is the algebraic system whose product is the exterior product. The exterior algebra provides an algebraic setting in which to answer geometric questions. For instance, whereas blades have a concrete geometrical interpretation, objects in the exterior algebra can be manipulated according to a set of unambiguous rules. The exterior algebra contains objects that are not just k-blades, but sums of k-blades. The k-blades, because they are simple products of vectors, are called the simple elements of the algebra. The rank of any element of the exterior algebra is defined to be the smallest number of simple elements of which it is a sum. An example when k = 2 is a symplectic form, which is an element of the exterior square whose rank is maximal. The exterior product extends to the full exterior algebra, so that it makes sense to multiply any two elements of the algebra. Equipped with this product, the exterior algebra is an associative algebra, which means that α ∧ (β ∧ γ) = (α ∧ β) ∧ γ for any elements α, β, γ. The elements of the algebra that are sums of k-blades are called the degree-k elements, and when elements of different degrees are multiplied, the degrees add (like multiplication of polynomials). This means that the exterior algebra is a graded algebra.

In a precise sense (given by what is known as a universal construction), the exterior algebra is the largest algebra that supports an alternating product on vectors, and can be easily defined in terms of other known objects such as tensors. The definition of the exterior algebra makes sense for spaces not just of geometric vectors, but of other vector-like objects such as vector fields or functions. In full generality, the exterior algebra can be defined for modules over a commutative ring, and for other structures of interest in abstract algebra. It is in one of these more general constructions that the exterior algebra finds one of its most important applications, where it appears as the algebra of differential forms that is fundamental in areas that use differential geometry. Differential forms are mathematical objects that represent infinitesimal areas of infinitesimal parallelograms (and higher-dimensional bodies), and so can be integrated over surfaces and higher-dimensional manifolds in a way that generalizes the line integrals from calculus. The exterior algebra also has many algebraic properties that make it a convenient tool in algebra itself. The association of the exterior algebra to a vector space is a type of functor on vector spaces, which means that it is compatible in a certain way with linear transformations of vector spaces. The exterior algebra is one example of a bialgebra, meaning that its dual space also possesses a product, and this dual product is compatible with the wedge product. This dual algebra is precisely the algebra of alternating multilinear forms on V, and the pairing between the exterior algebra and its dual is given by the interior product.

Motivating examples

Areas in the plane

[Figure] The area of a parallelogram in terms of the determinant of the matrix of coordinates of two of its vertices.

The Cartesian plane R2 is a vector space equipped with a basis consisting of a pair of unit vectors

    e1 = (1, 0),   e2 = (0, 1).

Suppose that

    v = a e1 + b e2,   w = c e1 + d e2

are a pair of given vectors in R2, written in components. There is a unique parallelogram having v and w as two of its sides. The area of this parallelogram is given by the standard determinant formula:

    Area = |det [v w]| = |ad − bc|.


Consider now the exterior product of v and w:

    v ∧ w = (a e1 + b e2) ∧ (c e1 + d e2)
          = ac e1 ∧ e1 + ad e1 ∧ e2 + bc e2 ∧ e1 + bd e2 ∧ e2
          = (ad − bc) e1 ∧ e2,

where the first step uses the distributive law for the wedge product, and the last uses the fact that the wedge product is alternating, and in particular e2 ∧ e1 = −e1 ∧ e2. Note that the coefficient in this last expression is precisely the determinant of the matrix [v w]. The fact that this may be positive or negative has the intuitive meaning that v and w may be oriented in a counterclockwise or clockwise sense as the vertices of the parallelogram they define. Such an area is called the signed area of the parallelogram: the absolute value of the signed area is the ordinary area, and the sign determines its orientation.

The fact that this coefficient is the signed area is not an accident. In fact, it is relatively easy to see that the exterior product should be related to the signed area if one tries to axiomatize this area as an algebraic construct. In detail, if A(v, w) denotes the signed area of the parallelogram determined by the pair of vectors v and w, then A must satisfy the following properties:
1. A(av, bw) = ab A(v, w) for any real numbers a and b, since rescaling either of the sides rescales the area by the same amount (and reversing the direction of one of the sides reverses the orientation of the parallelogram).
2. A(v, v) = 0, since the area of the degenerate parallelogram determined by v (i.e., a line segment) is zero.
3. A(w, v) = −A(v, w), since interchanging the roles of v and w reverses the orientation of the parallelogram.
4. A(v + aw, w) = A(v, w), since adding a multiple of w to v affects neither the base nor the height of the parallelogram and consequently preserves its area.
5. A(e1, e2) = 1, since the area of the unit square is one.

With the exception of the last property, the wedge product satisfies the same formal properties as the area. In a certain sense, the wedge product generalizes the final property by allowing the area of a parallelogram to be compared to that of any "standard" chosen parallelogram. In other words, the exterior product in two dimensions is a basis-independent formulation of area.[3]
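A quick numerical check of the signed area and axioms 1–5 (a Python sketch; signed_area is a hypothetical helper):

```python
import numpy as np

def signed_area(v, w):
    """Signed area of the parallelogram with sides v and w:
    the coefficient of e1 ∧ e2 in v ∧ w."""
    return v[0] * w[1] - v[1] * w[0]

e1, e2 = np.array([1., 0.]), np.array([0., 1.])
v, w = np.array([2., 1.]), np.array([-1., 3.])

assert np.isclose(signed_area(3*v, 2*w), 6 * signed_area(v, w))  # axiom 1
assert signed_area(v, v) == 0                                    # axiom 2
assert signed_area(w, v) == -signed_area(v, w)                   # axiom 3
assert np.isclose(signed_area(v + 5*w, w), signed_area(v, w))    # axiom 4
assert signed_area(e1, e2) == 1                                  # axiom 5
```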

Cross and triple products
For vectors in R3, the exterior algebra is closely related to the cross product and triple product. Using the standard basis e1, e2, e3, the wedge product of a pair of vectors

    u = u1 e1 + u2 e2 + u3 e3

and

    v = v1 e1 + v2 e2 + v3 e3

is

    u ∧ v = (u1 v2 − u2 v1) e1 ∧ e2 + (u3 v1 − u1 v3) e3 ∧ e1 + (u2 v3 − u3 v2) e2 ∧ e3,

where e1 ∧ e2, e3 ∧ e1, e2 ∧ e3 is the basis for the three-dimensional space Λ2(R3). This imitates the usual definition of the cross product of vectors in three dimensions.

Bringing in a third vector

    w = w1 e1 + w2 e2 + w3 e3,

the wedge product of three vectors is

    u ∧ v ∧ w = (u1 v2 w3 + u2 v3 w1 + u3 v1 w2 − u1 v3 w2 − u2 v1 w3 − u3 v2 w1) e1 ∧ e2 ∧ e3,

where e1 ∧ e2 ∧ e3 is the basis vector for the one-dimensional space Λ3(R3). This imitates the usual definition of the triple product.


The cross product and triple product in three dimensions each admit both geometric and algebraic interpretations. The cross product u×v can be interpreted as a vector which is perpendicular to both u and v and whose magnitude is equal to the area of the parallelogram determined by the two vectors. It can also be interpreted as the vector consisting of the minors of the matrix with columns u and v. The triple product of u, v, and w is geometrically a (signed) volume. Algebraically, it is the determinant of the matrix with columns u, v, and w. The exterior product in three dimensions allows for similar interpretations. In fact, in the presence of a positively oriented orthonormal basis, the exterior product generalizes these notions to higher dimensions.

Formal definitions and algebraic properties
The exterior algebra Λ(V) over a vector space V over a field K is defined as the quotient algebra of the tensor algebra by the two-sided ideal I generated by all elements of the form x ⊗ x such that x ∈ V.[4] Symbolically,

    Λ(V) := T(V) / I.

The wedge product ∧ of two elements of Λ(V) is defined by

    α ∧ β = α ⊗ β  (mod I).

Anticommutativity of the wedge product
The wedge product is alternating on elements of V, which means that x ∧ x = 0 for all x ∈ V. It follows that the product is also anticommutative on elements of V, for supposing that x, y ∈ V,

    0 = (x + y) ∧ (x + y) = x ∧ x + x ∧ y + y ∧ x + y ∧ y = x ∧ y + y ∧ x,

hence

    x ∧ y = −y ∧ x.

Conversely, it follows from the anticommutativity of the product that the product is alternating, unless K has characteristic two.

More generally, if x1, x2, ..., xk are elements of V, and σ is a permutation of the integers [1,...,k], then

    x_{σ(1)} ∧ x_{σ(2)} ∧ … ∧ x_{σ(k)} = sgn(σ) x1 ∧ x2 ∧ … ∧ xk,

where sgn(σ) is the signature of the permutation σ.[5]

The exterior power
The kth exterior power of V, denoted Λk(V), is the vector subspace of Λ(V) spanned by elements of the form

    x1 ∧ x2 ∧ … ∧ xk,   xi ∈ V, i = 1, 2, …, k.

If α ∈ Λk(V), then α is said to be a k-multivector. If, furthermore, α can be expressed as a wedge product of k elements of V, then α is said to be decomposable. Although decomposable multivectors span Λk(V), not every element of Λk(V) is decomposable. For example, in R4, the following 2-multivector is not decomposable:

    α = e1 ∧ e2 + e3 ∧ e4.

(This is in fact a symplectic form, since α ∧ α ≠ 0.[6])


Basis and dimension

If the dimension of V is n and e1, …, en is a basis of V, then the set

    { e_{i1} ∧ e_{i2} ∧ … ∧ e_{ik} : 1 ≤ i1 < i2 < … < ik ≤ n }

is a basis for Λk(V). The reason is the following: given any wedge product of the form

    v1 ∧ v2 ∧ … ∧ vk,

every vector vj can be written as a linear combination of the basis vectors ei; using the bilinearity of the wedge product, this can be expanded to a linear combination of wedge products of those basis vectors. Any wedge product in which the same basis vector appears more than once is zero; any wedge product in which the basis vectors do not appear in the proper order can be reordered, changing the sign whenever two basis vectors change places. In general, the resulting coefficients of the basis k-vectors can be computed as the minors of the matrix that describes the vectors vj in terms of the basis ei.

By counting the basis elements, the dimension of Λk(V) is equal to a binomial coefficient:

    dim Λk(V) = n! / (k! (n − k)!).

In particular, Λk(V) = 0 for k > n.

Any element of the exterior algebra can be written as a sum of multivectors. Hence, as a vector space the exterior algebra is a direct sum

    Λ(V) = Λ0(V) ⊕ Λ1(V) ⊕ Λ2(V) ⊕ … ⊕ Λn(V)

(where by convention Λ0(V) = K and Λ1(V) = V), and therefore its dimension is equal to the sum of the binomial coefficients, which is 2^n.
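These counts are easy to check by brute force, as in this Python sketch, which enumerates the increasing index tuples labelling the basis k-vectors:

```python
from itertools import combinations
from math import comb

n = 4  # dimension of V
for k in range(n + 1):
    basis = list(combinations(range(1, n + 1), k))  # indices i1 < i2 < ... < ik
    assert len(basis) == comb(n, k)                 # dim of the k-th exterior power
    print(k, basis)

# Total dimension of the exterior algebra: sum of binomials = 2^n.
assert sum(comb(n, k) for k in range(n + 1)) == 2**n
```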

Rank of a multivector

If α ∈ Λk(V), then it is possible to express α as a linear combination of decomposable multivectors:

    α = α(1) + α(2) + … + α(s),

where each α(i) is decomposable, say

    α(i) = α1(i) ∧ α2(i) ∧ … ∧ αk(i),   i = 1, 2, …, s.

The rank of the multivector α is the minimal number of decomposable multivectors in such an expansion of α. This is similar to the notion of tensor rank.

Rank is particularly important in the study of 2-multivectors (Sternberg 1964, §III.6) (Bryant et al. 1991). The rank of a 2-multivector α can be identified with half the rank of the matrix of coefficients of α in a basis. Thus if ei is a basis for V, then α can be expressed uniquely as

    α = Σ_{i,j} a_{ij} e_i ∧ e_j,

where a_{ij} = −a_{ji} (the matrix of coefficients is skew-symmetric). The rank of the matrix a_{ij} is therefore even, and is twice the rank of the form α.

In characteristic 0, the 2-multivector α has rank p if and only if

    α ∧ … ∧ α (p factors) ≠ 0   and   α ∧ … ∧ α (p + 1 factors) = 0.


Graded structure
The wedge product of a k-multivector with a p-multivector is a (k+p)-multivector, once again invoking bilinearity. As a consequence, the direct sum decomposition of the preceding section

    Λ(V) = Λ0(V) ⊕ Λ1(V) ⊕ Λ2(V) ⊕ … ⊕ Λn(V)

gives the exterior algebra the additional structure of a graded algebra. Symbolically,

    Λk(V) ∧ Λp(V) ⊂ Λk+p(V).

Moreover, the wedge product is graded anticommutative, meaning that if α ∈ Λk(V) and β ∈ Λp(V), then

    α ∧ β = (−1)^{kp} β ∧ α.

In addition to studying the graded structure on the exterior algebra, Bourbaki (1989) studies additional graded structures on exterior algebras, such as those on the exterior algebra of a graded module (a module that already carries its own gradation).

Universal property
Let V be a vector space over the field K. Informally, multiplication in Λ(V) is performed by manipulating symbols and imposing a distributive law, an associative law, and using the identity v ∧ v = 0 for v ∈ V. Formally, Λ(V) is the "most general" algebra in which these rules hold for the multiplication, in the sense that any unital associative K-algebra containing V with alternating multiplication on V must contain a homomorphic image of Λ(V). In other words, the exterior algebra has the following universal property:[7]

Given any unital associative K-algebra A and any K-linear map j : V → A such that j(v)j(v) = 0 for every v in V, there exists precisely one unital algebra homomorphism f : Λ(V) → A such that j(v) = f(i(v)) for all v in V.

To construct the most general algebra that contains V and whose multiplication is alternating on V, it is natural to start with the most general algebra that contains V, the tensor algebra T(V), and then enforce the alternating property by taking a suitable quotient. We thus take the two-sided ideal I in T(V) generated by all elements of the form v ⊗ v for v in V, and define Λ(V) as the quotient

    Λ(V) = T(V) / I

(and use ∧ as the symbol for multiplication in Λ(V)). It is then straightforward to show that Λ(V) contains V and satisfies the above universal property.

As a consequence of this construction, the operation of assigning to a vector space V its exterior algebra Λ(V) is a functor from the category of vector spaces to the category of algebras.

Rather than defining Λ(V) first and then identifying the exterior powers Λk(V) as certain subspaces, one may alternatively define the spaces Λk(V) first and then combine them to form the algebra Λ(V). This approach is often used in differential geometry and is described in the next section.


Generalizations
Given a commutative ring R and an R-module M, we can define the exterior algebra Λ(M) just as above, as a suitable quotient of the tensor algebra T(M). It will satisfy the analogous universal property. Many of the properties of Λ(M) also require that M be a projective module. Where finite-dimensionality is used, the properties further require that M be finitely generated and projective. Generalizations to the most common situations can be found in (Bourbaki 1989).

Exterior algebras of vector bundles are frequently considered in geometry and topology. There are no essential differences between the algebraic properties of the exterior algebra of finite-dimensional vector bundles and those of the exterior algebra of finitely-generated projective modules, by the Serre–Swan theorem. More general exterior algebras can be defined for sheaves of modules.

Duality

Alternating operators
Given two vector spaces V and X, an alternating operator (or anti-symmetric operator) from Vk to X is a multilinear map

    f : Vk → X

such that whenever v1, …, vk are linearly dependent vectors in V, then

    f(v1, …, vk) = 0.

A well-known example is the determinant, an alternating operator from (Kn)n to K.

The map

    w : Vk → Λk(V),

which associates to k vectors from V their wedge product, i.e. their corresponding k-vector, is also alternating. In fact, this map is the "most general" alternating operator defined on Vk: given any other alternating operator f : Vk → X, there exists a unique linear map φ : Λk(V) → X with f = φ ∘ w. This universal property characterizes the space Λk(V) and can serve as its definition.

Alternating multilinear forms
The above discussion specializes to the case when X = K, the base field. In this case an alternating multilinear function

    f : Vk → K

is called an alternating multilinear form. The set of all alternating multilinear forms is a vector space, as the sum of two such maps, or the product of such a map with a scalar, is again alternating. By the universal property of the exterior power, the space of alternating forms of degree k on V is naturally isomorphic with the dual vector space (ΛkV)∗. If V is finite-dimensional, then the latter is naturally isomorphic to Λk(V∗). In particular, the dimension of the space of anti-symmetric maps from Vk to K is the binomial coefficient n choose k.

Under this identification, the wedge product takes a concrete form: it produces a new anti-symmetric map from two given ones. Suppose ω : Vk → K and η : Vm → K are two anti-symmetric maps. As in the case of tensor products of multilinear maps, the number of variables of their wedge product is the sum of the numbers of their variables. It is defined as follows:

    ω ∧ η = ((k + m)! / (k! m!)) Alt(ω ⊗ η),

where the alternation Alt of a multilinear map is defined to be the signed average of the values over all the permutations of its variables:

    Alt(ω)(x1, …, xk) = (1/k!) Σ_{σ ∈ S_k} sgn(σ) ω(x_{σ(1)}, …, x_{σ(k)}).


This definition of the wedge product is well defined even if the field K has finite characteristic, if one considers an equivalent version of the above that does not use factorials or any constants:

    (ω ∧ η)(x1, …, x_{k+m}) = Σ_{σ ∈ Sh_{k,m}} sgn(σ) ω(x_{σ(1)}, …, x_{σ(k)}) η(x_{σ(k+1)}, …, x_{σ(k+m)}),

where here Sh_{k,m} ⊂ S_{k+m} is the subset of (k,m) shuffles: permutations σ of the set {1, 2, …, k+m} such that σ(1) < σ(2) < … < σ(k) and σ(k+1) < σ(k+2) < … < σ(k+m).[8]
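The shuffle formula translates directly into code. A Python sketch (wedge and perm_sign are hypothetical helpers; forms are modeled as plain functions of k vector arguments):

```python
from itertools import combinations

def perm_sign(positions):
    """Sign of the (k,m) shuffle sending slots 0..k-1 to the sorted positions."""
    # Moving the i-th chosen position back to slot i costs (positions[i] - i) swaps.
    return (-1) ** sum(p - i for i, p in enumerate(positions))

def wedge(omega, k, eta, m):
    """Wedge of an alternating k-form and m-form via the shuffle formula."""
    def result(*xs):
        assert len(xs) == k + m
        total = 0
        for first in combinations(range(k + m), k):  # sigma(1) < ... < sigma(k)
            rest = [i for i in range(k + m) if i not in first]
            total += (perm_sign(first)
                      * omega(*[xs[i] for i in first])
                      * eta(*[xs[i] for i in rest]))
        return total
    return result

# Example: dx ∧ dy on R^3 recovers the familiar 2x2 minor u_x v_y - u_y v_x.
dx = lambda v: v[0]
dy = lambda v: v[1]
area = wedge(dx, 1, dy, 1)
print(area((1, 0, 0), (0, 1, 0)))  # 1
print(area((2, 1, 0), (3, 4, 0)))  # 2*4 - 1*3 = 5
```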

Bialgebra structure
In formal terms, there is a correspondence between the graded dual of the graded algebra Λ(V) and alternating multilinear forms on V. The wedge product of multilinear forms defined above is dual to a coproduct defined on Λ(V), giving Λ(V) the structure of a coalgebra.

The coproduct is a linear function Δ : Λ(V) → Λ(V) ⊗ Λ(V) given on decomposable elements by

    Δ(x1 ∧ … ∧ xk) = Σ_{p=0}^{k} Σ_{σ ∈ Sh_{p,k−p}} sgn(σ) (x_{σ(1)} ∧ … ∧ x_{σ(p)}) ⊗ (x_{σ(p+1)} ∧ … ∧ x_{σ(k)}).

For example,

    Δ(x1) = 1 ⊗ x1 + x1 ⊗ 1,
    Δ(x1 ∧ x2) = 1 ⊗ (x1 ∧ x2) + x1 ⊗ x2 − x2 ⊗ x1 + (x1 ∧ x2) ⊗ 1.

This extends by linearity to an operation defined on the whole exterior algebra. In terms of the coproduct, the wedge product on the dual space is just the graded dual of the coproduct:

    (α ∧ β)(x1 ∧ … ∧ x_{k+m}) = (α ⊗ β)(Δ(x1 ∧ … ∧ x_{k+m})),

where the tensor product on the right-hand side is of multilinear linear maps (extended by zero on elements of incompatible homogeneous degree: more precisely, α ∧ β = ε ∘ (α ⊗ β) ∘ Δ, where ε is the counit, as defined presently).

The counit is the homomorphism ε : Λ(V) → K which returns the 0-graded component of its argument. The coproduct and counit, along with the wedge product, define the structure of a bialgebra on the exterior algebra.

With an antipode defined on homogeneous elements by S(x) = (−1)^{deg x} x, the exterior algebra is furthermore a Hopf algebra.[9]

Interior product
Suppose that V is finite-dimensional. If V∗ denotes the dual space to the vector space V, then for each α ∈ V∗, it is possible to define an antiderivation on the algebra Λ(V),

    iα : Λk(V) → Λk−1(V).

This derivation is called the interior product with α, or sometimes the insertion operator, or contraction by α.

Suppose that w ∈ ΛkV. Then w is a multilinear mapping of V∗ to K, so it is defined by its values on the k-fold Cartesian product V∗ × V∗ × … × V∗. If u1, u2, …, uk−1 are k−1 elements of V∗, then define

    (iαw)(u1, u2, …, uk−1) = w(α, u1, u2, …, uk−1).

Additionally, let iαf = 0 whenever f is a pure scalar (i.e., belonging to Λ0V).


Axiomatic characterization and properties

The interior product satisfies the following properties:
1. For each k and each α ∈ V∗,

    iα : Λk(V) → Λk−1(V).

(By convention, Λ−1(V) = 0.)
2. If v is an element of V ( = Λ1(V)), then iαv = α(v) is the dual pairing between elements of V and elements of V∗.
3. For each α ∈ V∗, iα is a graded derivation of degree −1:

    iα(a ∧ b) = (iαa) ∧ b + (−1)^{deg a} a ∧ (iαb).

In fact, these three properties are sufficient to characterize the interior product as well as define it in the general infinite-dimensional case.

Further properties of the interior product include:

• iα ∘ iα = 0.
• iα ∘ iβ = −iβ ∘ iα.

Hodge duality
Suppose that V has finite dimension n. Then the interior product induces a canonical isomorphism of vector spaces

    Λk(V∗) ⊗ Λn(V) → Λn−k(V).

In the geometrical setting, a non-zero element of the top exterior power Λn(V) (which is a one-dimensional vector space) is sometimes called a volume form (or orientation form, although this term may sometimes lead to ambiguity). Relative to a given volume form σ, the isomorphism is given explicitly by

    α ↦ iα σ.

If, in addition to a volume form, the vector space V is equipped with an inner product identifying V with V∗, then the resulting isomorphism is called the Hodge dual (or more commonly the Hodge star operator)

    ∗ : Λk(V) → Λn−k(V).

The composite of ∗ with itself maps Λk(V) → Λk(V) and is always a scalar multiple of the identity map. In most applications, the volume form is compatible with the inner product in the sense that it is a wedge product of an orthonormal basis of V. In this case,

    ∗∗ = (−1)^{k(n−k)+q} I,

where I is the identity, and the inner product has metric signature (p, q) — p plusses and q minuses.

Inner product
For V a finite-dimensional space, an inner product on V defines an isomorphism of V with V∗, and so also an isomorphism of ΛkV with (ΛkV)∗. The pairing between these two spaces also takes the form of an inner product. On decomposable k-multivectors,

    ⟨v1 ∧ … ∧ vk, w1 ∧ … ∧ wk⟩ = det(⟨vi, wj⟩),

the determinant of the matrix of inner products. In the special case vi = wi, the inner product is the square norm of the multivector, given by the determinant of the Gramian matrix (⟨vi, vj⟩). This is then extended bilinearly (or sesquilinearly in the complex case) to a non-degenerate inner product on ΛkV. If ei, i = 1, 2, …, n, form an orthonormal basis of V, then the vectors of the form

    e_{i1} ∧ … ∧ e_{ik},   i1 < … < ik,

constitute an orthonormal basis for Λk(V).


With respect to the inner product, exterior multiplication and the interior product are mutually adjoint. Specifically, for v ∈ Λk−1(V), w ∈ Λk(V), and x ∈ V,

    ⟨x ∧ v, w⟩ = ⟨v, i_{x♭} w⟩,

where x♭ ∈ V∗ is the linear functional defined by

    x♭(y) = ⟨x, y⟩

for all y ∈ V. This property completely characterizes the inner product on the exterior algebra.

Functoriality
Suppose that V and W are a pair of vector spaces and f : V → W is a linear transformation. Then, by the universal construction, there exists a unique homomorphism of graded algebras

    Λ(f) : Λ(V) → Λ(W)

such that

    Λ(f)|Λ1(V) = f : V = Λ1(V) → W = Λ1(W).

In particular, Λ(f) preserves homogeneous degree. The k-graded components of Λ(f) are given on decomposable elements by

    Λ(f)(x1 ∧ … ∧ xk) = f(x1) ∧ … ∧ f(xk).

Let

    Λk(f) = Λ(f)|Λk(V) : Λk(V) → Λk(W).

The components of the transformation Λk(f) relative to a basis of V and W form the matrix of k × k minors of f. In particular, if V = W and V is of finite dimension n, then Λn(f) is a mapping of a one-dimensional vector space Λn(V) to itself, and is therefore given by a scalar: the determinant of f.

Exactness
If

    0 → U → V → W → 0

is a short exact sequence of vector spaces, then

    0 → Λ1(U) ∧ Λ(V) → Λ(V) → Λ(W) → 0

is an exact sequence of graded vector spaces,[10] as is[11]

    0 → Λ(U) → Λ(V).

Direct sums
In particular, the exterior algebra of a direct sum is isomorphic to the tensor product of the exterior algebras:

    Λ(V ⊕ W) ≅ Λ(V) ⊗ Λ(W).

This is a graded isomorphism; i.e.,

    Λk(V ⊕ W) ≅ ⊕_{p+q=k} Λp(V) ⊗ Λq(W).

Slightly more generally, if

    0 → U → V → W → 0

is a short exact sequence of vector spaces, then Λk(V) has a filtration

    0 = F0 ⊆ F1 ⊆ … ⊆ Fk ⊆ Fk+1 = Λk(V)

with quotients

    Fp+1 / Fp ≅ Λk−p(U) ⊗ Λp(W).

In particular, if U is 1-dimensional then

    0 → U ⊗ Λk−1(W) → Λk(V) → Λk(W) → 0

is exact, and if W is 1-dimensional then

    0 → Λk(U) → Λk(V) → Λk−1(U) ⊗ W → 0

is exact.[12]

The alternating tensor algebra
If K is a field of characteristic 0,[13] then the exterior algebra of a vector space V can be canonically identified with the vector subspace of T(V) consisting of antisymmetric tensors. Recall that the exterior algebra is the quotient of T(V) by the ideal I generated by x ⊗ x.

Let Tr(V) be the space of homogeneous tensors of degree r. This is spanned by decomposable tensors

    v1 ⊗ v2 ⊗ … ⊗ vr,   vi ∈ V.

The antisymmetrization (or sometimes the skew-symmetrization) of a decomposable tensor is defined by

    Alt(v1 ⊗ v2 ⊗ … ⊗ vr) = (1/r!) Σ_{σ ∈ S_r} sgn(σ) v_{σ(1)} ⊗ v_{σ(2)} ⊗ … ⊗ v_{σ(r)},

where the sum is taken over the symmetric group of permutations on the symbols 1, …, r. This extends by linearity and homogeneity to an operation, also denoted by Alt, on the full tensor algebra T(V). The image Alt(T(V)) is the alternating tensor algebra, denoted A(V). This is a vector subspace of T(V), and it inherits the structure of a graded vector space from that on T(V). It carries an associative graded product defined by

    t ∧ s = Alt(t ⊗ s).

Although this product differs from the tensor product, the kernel of Alt is precisely the ideal I (again, assuming that K has characteristic 0), and there is a canonical isomorphism

    A(V) ≅ Λ(V).

Index notation
Suppose that V has finite dimension n, and that a basis e1, …, en of V is given. Then any alternating tensor t ∈ Ar(V) ⊂ Tr(V) can be written in index notation as

    t = t^{i1 i2 … ir} e_{i1} ⊗ e_{i2} ⊗ … ⊗ e_{ir},

where t^{i1 … ir} is completely antisymmetric in its indices.

The wedge product of two alternating tensors t and s of ranks r and p is given by

    (t ∧ s)^{i1 … i_{r+p}} = ((r + p)! / (r! p!)) t^{[i1 … ir} s^{i_{r+1} … i_{r+p}]}.

The components of this tensor are precisely the skew part of the components of the tensor product s ⊗ t, denoted by square brackets on the indices, as above.

The interior product may also be described in index notation as follows. Let t = t^{i0 i1 … i_{r−1}} be an antisymmetric tensor of rank r. Then, for α ∈ V∗, iαt is an alternating tensor of rank r − 1, given by

    (iαt)^{i1 … i_{r−1}} = r Σ_{j=1}^{n} α_j t^{j i1 … i_{r−1}},

where n is the dimension of V.


Applications

Linear algebra
In applications to linear algebra, the exterior product provides an abstract algebraic manner for describing the determinant and the minors of a matrix. For instance, it is well known that the magnitude of the determinant of a square matrix is equal to the volume of the parallelotope whose sides are the columns of the matrix. This suggests that the determinant can be defined in terms of the exterior product of the column vectors. Likewise, the k×k minors of a matrix can be defined by looking at the exterior products of column vectors chosen k at a time. These ideas can be extended not just to matrices but to linear transformations as well: the magnitude of the determinant of a linear transformation is the factor by which it scales the volume of any given reference parallelotope. So the determinant of a linear transformation can be defined in terms of what the transformation does to the top exterior power. The action of a transformation on the lesser exterior powers gives a basis-independent way to talk about the minors of the transformation.

Linear geometry
The decomposable k-vectors have geometric interpretations: the bivector u ∧ v represents the plane spanned by the vectors, "weighted" with a number, given by the area of the oriented parallelogram with sides u and v. Analogously, the 3-vector u ∧ v ∧ w represents the spanned 3-space weighted by the volume of the oriented parallelepiped with edges u, v, and w.

Projective geometry
Decomposable k-vectors in ΛkV correspond to weighted k-dimensional subspaces of V. In particular, the Grassmannian of k-dimensional subspaces of V, denoted Grk(V), can be naturally identified with an algebraic subvariety of the projective space P(ΛkV). This is called the Plücker embedding.

Differential geometry
The exterior algebra has notable applications in differential geometry, where it is used to define differential forms. A differential form at a point of a differentiable manifold is an alternating multilinear form on the tangent space at the point. Equivalently, a differential form of degree k is a linear functional on the k-th exterior power of the tangent space. As a consequence, the wedge product of multilinear forms defines a natural wedge product for differential forms. Differential forms play a major role in diverse areas of differential geometry.

In particular, the exterior derivative gives the exterior algebra of differential forms on a manifold the structure of a differential algebra. The exterior derivative commutes with pullback along smooth mappings between manifolds, and it is therefore a natural differential operator. The exterior algebra of differential forms, equipped with the exterior derivative, is a differential complex whose cohomology is called the de Rham cohomology of the underlying manifold and plays a vital role in the algebraic topology of differentiable manifolds.


Representation theory
In representation theory, the exterior algebra is one of the two fundamental Schur functors on the category of vector spaces, the other being the symmetric algebra. Together, these constructions are used to generate the irreducible representations of the general linear group; see fundamental representation.

Physics
The exterior algebra is an archetypal example of a superalgebra, which plays a fundamental role in physical theories pertaining to fermions and supersymmetry. For a physical discussion, see Grassmann number. For various other applications of related ideas to physics, see superspace and supergroup (physics).

Lie algebra homology
Let L be a Lie algebra over a field k. Then it is possible to define the structure of a chain complex on the exterior algebra of L. This is a k-linear mapping

    ∂ : Λp+1(L) → Λp(L)

defined on decomposable elements by

    ∂(x1 ∧ … ∧ xp+1) = Σ_{i<j} (−1)^{i+j} [xi, xj] ∧ x1 ∧ … ∧ x̂i ∧ … ∧ x̂j ∧ … ∧ xp+1

(where the hats denote omitted factors). The Jacobi identity holds if and only if ∂∂ = 0, and so this is a necessary and sufficient condition for an anticommutative nonassociative algebra L to be a Lie algebra. Moreover, in that case ΛL is a chain complex with boundary operator ∂. The homology associated to this complex is the Lie algebra homology.

Homological algebra
The exterior algebra is the main ingredient in the construction of the Koszul complex, a fundamental object in homological algebra.

History
The exterior algebra was first introduced by Hermann Grassmann in 1844 under the blanket term of Ausdehnungslehre, or Theory of Extension.[14] This referred more generally to an algebraic (or axiomatic) theory of extended quantities and was one of the early precursors to the modern notion of a vector space. Saint-Venant also published similar ideas of exterior calculus for which he claimed priority over Grassmann.[15]

The algebra itself was built from a set of rules, or axioms, capturing the formal aspects of Cayley and Sylvester's theory of multivectors. It was thus a calculus, much like the propositional calculus, except focused exclusively on the task of formal reasoning in geometrical terms.[16] In particular, this new development allowed for an axiomatic characterization of dimension, a property that had previously only been examined from the coordinate point of view.

The import of this new theory of vectors and multivectors was lost to mid-19th-century mathematicians,[17] until being thoroughly vetted by Giuseppe Peano in 1888. Peano's work also remained somewhat obscure until the turn of the century, when the subject was unified by members of the French geometry school (notably Henri Poincaré, Élie Cartan, and Gaston Darboux) who applied Grassmann's ideas to the calculus of differential forms.

A short while later, Alfred North Whitehead, borrowing from the ideas of Peano and Grassmann, introduced his universal algebra. This then paved the way for the 20th-century developments of abstract algebra by placing the axiomatic notion of an algebraic system on a firm logical footing.


Notes
[1] Strictly speaking, the magnitude depends on some additional structure, namely that the vectors be in a Euclidean space. We do not generally assume that this structure is available, except where it is helpful to develop intuition on the subject.
[2] Grassmann (1844) introduced these as extended algebras (cf. Clifford 1878). He used the word äußere (literally translated as outer, or exterior) only to indicate the product he defined, which is nowadays conventionally called the exterior product, probably to distinguish it from the outer product as defined in modern linear algebra.
[3] This axiomatization of areas is due to Leopold Kronecker and Karl Weierstrass; see Bourbaki (1989, Historical Note). For a modern treatment, see MacLane & Birkhoff (1999, Theorem IX.2.2). For an elementary treatment, see Strang (1993, Chapter 5).
[4] This definition is a standard one. See, for instance, MacLane & Birkhoff (1999).
[5] A proof of this can be found in more generality in Bourbaki (1989).
[6] See Sternberg (1964, §III.6).
[7] See Bourbaki (1989, III.7.1), and MacLane & Birkhoff (1999, Theorem XVI.6.8). More detail on universal properties in general can be found in MacLane & Birkhoff (1999, Chapter VI), and throughout the works of Bourbaki.
[8] Some conventions, particularly in physics, define the wedge product as ω ∧ η = Alt(ω ⊗ η). This convention is not adopted here, but is discussed in connection with alternating tensors.
[9] Indeed, the exterior algebra of V is the enveloping algebra of the abelian Lie superalgebra structure on V.
[10] This part of the statement also holds in greater generality if V and W are modules over a commutative ring: that Λ converts epimorphisms to epimorphisms. See Bourbaki (1989, Proposition 3, III.7.2).
[11] This statement generalizes only to the case where V and W are projective modules over a commutative ring. Otherwise, it is generally not the case that Λ converts monomorphisms to monomorphisms. See Bourbaki (1989, Corollary to Proposition 12, III.7.9).
[12] Such a filtration also holds for vector bundles, and projective modules over a commutative ring. This is thus more general than the result quoted above for direct sums, since not every short exact sequence splits in other abelian categories.
[13] See Bourbaki (1989, III.7.5) for generalizations.
[14] Kannenberg (2000) published a translation of Grassmann's work in English; he translated Ausdehnungslehre as Extension Theory.
[15] J. Itard, Biography in Dictionary of Scientific Biography (New York 1970–1990).
[16] Authors have in the past referred to this calculus variously as the calculus of extension (Whitehead 1898; Forder 1941), or extensive algebra (Clifford 1878), and recently as extended vector algebra (Browne 2007).
[17] Bourbaki 1989, p. 661.

References

Mathematical references
• Bishop, R.; Goldberg, S.I. (1980), Tensor analysis on manifolds, Dover, ISBN 0-486-64039-6
  Includes a treatment of alternating tensors and alternating forms, as well as a detailed discussion of Hodge duality from the perspective adopted in this article.
• Bourbaki, Nicolas (1989), Elements of mathematics, Algebra I, Springer-Verlag, ISBN 3-540-64243-9
  This is the main mathematical reference for the article. It introduces the exterior algebra of a module over a commutative ring (although this article specializes primarily to the case when the ring is a field), including a discussion of the universal property, functoriality, duality, and the bialgebra structure. See chapters III.7 and III.11.
• Bryant, R.L.; Chern, S.S.; Gardner, R.B.; Goldschmidt, H.L.; Griffiths, P.A. (1991), Exterior differential systems, Springer-Verlag
  This book contains applications of exterior algebras to problems in partial differential equations. Rank and related concepts are developed in the early chapters.
• MacLane, S.; Birkhoff, G. (1999), Algebra, AMS Chelsea, ISBN 0-8218-1646-2
  Chapter XVI sections 6–10 give a more elementary account of the exterior algebra, including duality, determinants and minors, and alternating forms.
• Sternberg, Shlomo (1964), Lectures on Differential Geometry, Prentice Hall
  Contains a classical treatment of the exterior algebra as alternating tensors, and applications to differential geometry.

Historical references
• Bourbaki, Nicolas (1989), "Historical note on chapters II and III", Elements of mathematics, Algebra I, Springer-Verlag
• Clifford, W. (1878), "Applications of Grassmann's Extensive Algebra", American Journal of Mathematics (The Johns Hopkins University Press) 1 (4): 350–358, doi:10.2307/2369379, JSTOR 2369379
• Forder, H. G. (1941), The Calculus of Extension, Cambridge University Press
• Grassmann, Hermann (1844), Die Lineale Ausdehnungslehre - Ein neuer Zweig der Mathematik (http://books.google.com/books?id=bKgAAAAAMAAJ&pg=PA1&dq=Die+Lineale+Ausdehnungslehre+ein+neuer+Zweig+der+Mathematik) (The Linear Extension Theory - A new Branch of Mathematics); alternative reference (http://resolver.sub.uni-goettingen.de/purl?PPN534901565)
• Kannenberg, Lloyd (2000), Extension Theory (translation of Grassmann's Ausdehnungslehre), American Mathematical Society, ISBN 0821820311
• Peano, Giuseppe (1888), Calcolo Geometrico secondo l'Ausdehnungslehre di H. Grassmann preceduto dalle Operazioni della Logica Deduttiva; Kannenberg, Lloyd (1999), Geometric calculus: According to the Ausdehnungslehre of H. Grassmann, Birkhäuser, ISBN 978-0817641269
• Whitehead, Alfred North (1898), A Treatise on Universal Algebra, with Applications (http://historical.library.cornell.edu/cgi-bin/cul.math/docviewer?did=01950001&seq=5), Cambridge

Other references and further reading
• Browne, J.M. (2007), Grassmann algebra - Exploring applications of Extended Vector Algebra with Mathematica, published on line (http://www.grassmannalgebra.info/grassmannalgebra/book/index.htm)
  An introduction to the exterior algebra, and geometric algebra, with a focus on applications. Also includes a history section and bibliography.
• Spivak, Michael (1965), Calculus on manifolds, Addison-Wesley, ISBN 978-0805390216
  Includes applications of the exterior algebra to differential forms, specifically focused on integration and Stokes's theorem. The notation ΛkV in this text is used to mean the space of alternating k-forms on V; i.e., for Spivak ΛkV is what this article would call ΛkV∗. Spivak discusses this in Addendum 4.
• Strang, G. (1993), Introduction to linear algebra, Wellesley-Cambridge Press, ISBN 978-0961408855
  Includes an elementary treatment of the axiomatization of determinants as signed areas, volumes, and higher-dimensional volumes.
• Onishchik, A.L. (2001), "Exterior algebra" (http://eom.springer.de/E/e037080.htm), in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer, ISBN 978-1556080104
• Wendell H. Fleming (1965) Functions of Several Variables, Addison-Wesley.
  Chapter 6: Exterior algebra and differential calculus, pages 205–238. This textbook in multivariate calculus introduces the exterior algebra of differential forms adroitly into the calculus sequence for colleges.
• Winitzki, S. (2010), Linear Algebra via Exterior Products, published on line (http://sites.google.com/site/winitzki/linalg)
  An introduction to the coordinate-free approach in basic finite-dimensional linear algebra, using exterior products.


Geometric algebra

Geometric algebra (along with an associated Geometric calculus, Spacetime algebra and Conformal Geometric algebra, together GA) provides an alternative and comprehensive approach to the algebraic representation of classical, computational and relativistic geometry. GA now finds application in all of physics, in graphics and robotics, as well as the mathematics of its formal parents, the Grassmann and Clifford algebras.

A distinguishing characteristic of GA is that its products are used and interpreted geometrically due to the natural correspondence between geometric entities and the elements of the algebra. GA allows one to manipulate subspaces directly and is a coordinate-free formalism.

Proponents argue it provides compact and intuitive descriptions in many areas including classical and quantum mechanics, electromagnetic theory and relativity among others. They strive to work with real algebras wherever that is possible, and they argue that it is generally possible and usually enlightening to identify the presence of an imaginary unit in a physical equation. Such units arise from one of the many quantities in the real algebra that square to −1, and these have geometric significance because of the properties of the algebra and the interaction of its various subspaces.

Definition and Notation
We begin with an n-dimensional real vector space and introduce three products of two vectors a and b:
• Characterized by the rule

    a² ∈ R

(the square of a vector is a scalar) we define the non-degenerate inner product

    a · b = ½(ab + ba),

which is a scalar (0-vector) and equal to the usual Euclidean inner product if both vectors have positive signature.
• The outer product

    a ∧ b = ½(ab − ba),

which is a bivector (2-vector);
• We add together the last two equations to yield a single geometric product,

    ab = a · b + a ∧ b,

a multivector (a sum of k-vectors of different grades, here a 0-vector plus a 2-vector).

The outer product can be generalized to the antisymmetrized geometric product of any number of vectors to give a k-blade (k-vector):

    a1 ∧ a2 ∧ … ∧ ak = (1/k!) Σ_{σ ∈ S_k} sgn(σ) a_{σ(1)} a_{σ(2)} … a_{σ(k)},

where the integer k is called the grade of A. The outer product of n vectors produces a pseudoscalar I.

The inner product can also be generalised:

    a · A_k = ⟨a A_k⟩_{k−1},   a ∧ A_k = ⟨a A_k⟩_{k+1},

decomposing the geometric product of a vector with a k-blade A_k into a (k−1)-blade and a (k+1)-blade.

From now on, a vector means a grade-1 element of the algebra. Vectors will be represented by small case letters (e.g. a), and multivectors by upper case letters (e.g. A). Scalars will be represented by Greek characters.


Here is an axiomatic description in terms of signature: the geometric algebra Cl(p,q) is said to have signature (p, q) when the first p unit basis vectors square to +1 and the next q unit basis vectors square to −1. Thus Cl(3,0) would be 3D Euclidean space, Cl(1,3) relativistic spacetime and Cl(4,1) would be 3D Conformal Geometric algebra, for example.

If an orthogonal basis set is given by {e1, …, e_{p+q}}, the basis of the geometric algebra or multivector space is formed from the geometric products of the basis vectors; the number of basis blades of each grade is given by the binomial expansion, so that the total dimension of the multivector space is 2^{p+q}.

A multivector may be decomposed with the grade projection operator as:

    A = Σ_k ⟨A⟩_k.

For example, ab = ⟨ab⟩_0 + ⟨ab⟩_2. In general, a multivector has grade-0 scalar, grade-1 vector, grade-2 bivector, ..., grade-(p+q) pseudoscalar parts.

The definition and the associativity of the geometric product entail the concept of the inverse of a vector (or division by a vector), expressed by

    a⁻¹ = a / a².

Although not all the elements of the algebra are invertible, the inversion concept extends to the geometric product and multivectors.
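As a concrete illustration, here is a minimal Python sketch of the geometric product in Cl(2,0) (the Euclidean plane), with multivectors stored as coefficient lists on the basis (1, e1, e2, e1e2); the table and the helper name gp are illustrative, not from any particular GA library:

```python
# Basis blades indexed 0:1, 1:e1, 2:e2, 3:e1e2, with e1^2 = e2^2 = +1.
# TABLE[i][j] = (k, sign) means (blade i)(blade j) = sign * (blade k).
TABLE = [
    [(0, 1), (1, 1), (2, 1), (3, 1)],
    [(1, 1), (0, 1), (3, 1), (2, 1)],
    [(2, 1), (3, -1), (0, 1), (1, -1)],
    [(3, 1), (2, -1), (1, 1), (0, -1)],
]

def gp(A, B):
    """Geometric product of multivectors A, B given as 4-element lists."""
    out = [0.0] * 4
    for i, a in enumerate(A):
        for j, b in enumerate(B):
            k, sign = TABLE[i][j]
            out[k] += sign * a * b
    return out

a = [0.0, 2.0, 1.0, 0.0]   # a = 2 e1 + e2
b = [0.0, 1.0, 3.0, 0.0]   # b = e1 + 3 e2
print(gp(a, b))            # [5.0, 0.0, 0.0, 5.0]
```

The scalar part 5 is a · b and the e1e2 part 5 is the coefficient of a ∧ b, matching the decomposition ab = a · b + a ∧ b above.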

Relationship with other formalisms
Here is a comparison of vector algebra and geometric algebra.

There is a one-to-one correspondence between the even subalgebra of Cl(2,0) and the complex numbers. Writing a vector in terms of its components, x = x1 e1 + x2 e2, and left-multiplying by the unit vector e1 yields

    e1 x = x1 + x2 e1 e2

and

    (e1 e2)² = e1 e2 e1 e2 = −1,

so the even element x1 + x2 e1 e2 behaves as the complex number x1 + x2 i.

Similarly, the even subalgebra of Cl(3,0), with basis {1, e2 e3, e3 e1, e1 e2}, is isomorphic to the quaternions if, for example, we identify i = e3 e2, j = e1 e3, k = e2 e1.

We know that every associative algebra has a matrix representation, and it turns out that the Pauli matrices are a representation of Cl(3,0) and the Dirac matrices of Cl(1,3), a matter of some interest to physicists.

Rotations
If we have a product of vectors R = a1 a2 … ar, then we denote the reverse as

    R† = ar … a2 a1.

Now assume that the ai are unit vectors, so that

    R R† = a1 a2 … ar ar … a2 a1 = a1² a2² … ar² = ±1.

Setting R R† = 1, then for any vector x,

    (R x R†)² = R x R† R x R† = R x² R† = x²,

so the map x ↦ R x R† leaves the length of x unchanged. We can also show that

    (R a R†) · (R b R†) = a · b,

so the transformation x ↦ R x R† preserves both length and angle. It therefore can be identified as a rotation; R is called a rotor and is an instance of what is known in GA as a versor (presumably for historical reasons; see Versor).


There is a general method for rotating a vector involving the formation of a multivector of the form

    R = e^{−Bθ/2},

which produces an anticlockwise rotation through the angle θ in the plane defined by the unit bivector B.

Rotors can be seen as the generalization of quaternions to n-dimensional spaces. For more about reflections, rotations and "sandwiching" products like R x R†, see Plane of Rotation.

Geometric Calculus
Geometric Calculus extends the formalism to include differentiation and integration, including differential geometry and differential forms.[1]

Essentially, the vector derivative ∇ is defined so that the GA version of Green's theorem is true, and then one can write

    ∇F = ∇ · F + ∇ ∧ F

as a geometric product, effectively generalizing Stokes' theorem (including the differential forms version of it).

In one dimension, when A is a curve with endpoints a and b, the GA version of Green's theorem reduces to

    ∫_a^b dx · ∇F = F(b) − F(a),

or the fundamental theorem of integral calculus.

Also developed are the concept of vector manifold and geometric integration theory (which generalizes Cartan's differential forms).

Conformal Geometric Algebra (CGA)
A compact description of the current state of the art is provided by Bayro-Corrochano and Scheuermann (2010),[2] which also includes further references, in particular to Dorst et al (2007).[3] Another useful reference is Li (2008).[4]

This again extends the language of GA: the conformal model of 3D Euclidean space is embedded in the CGA Cl(4,1) via the identification of Euclidean points with vectors in the null cone, adding a point at infinity and normalizing all points to a fixed hyperplane. This allows all of conformal algebra to be done by combinations of rotations and reflections, and the language is covariant, permitting the extension of incidence relations of projective geometry to circles and spheres.

Specifically, we add basis vectors e+ such that (e+)² = +1 and e− such that (e−)² = −1 to the orthonormal basis of R³, allowing the creation of

    n∞ = e− + e+

representing an ideal point (point at infinity) (see Compactification),


and

    no = ½(e− − e+)

representing the origin, where

    n∞ · no = −1,

and where

    E = n∞ ∧ no

is a unit pseudoscalar representing the Minkowski plane.

This procedure has some similarities to the procedure for working with homogeneous coordinates in projective geometry, and in this case allows the modeling of Euclidean transformations as orthogonal transformations. A fast-changing and fluid area of GA, CGA is also being investigated for applications to relativistic physics.

SoftwareGA is a very practically oriented subject but there is a reasonably steep initial learning curve associated with it, thatcan be eased somewhat by the use of applicable software.The following is a list of freely available software that does not require ownership of commercial software orpurchase of any commercial products for this purpose:• GA Viewer Fontijne, Dorst, Bouma & Mann [5]

The link provides a manual, introduction to GA and sample material as well as the software.• CLUViz Perwass [6]

Software allowing script creation and including sample visualizations, manual and GA introduction.• Gaigen Fontijne [7]

For programmers,this is a code generator with support for C,C++,C# and Java.• Cinderella Visualizations Hitzer [8] and Dorst [9].

Applications

Algebraic Examples

Area of parallelogram spanned by two vectors

If is a -blade then a vector has a projection or parallel component onto ,

and a rejection or perpendicular component

So for vectors and in 2D we have

or and we have that is the product of the "altitude" and the "base" of the -parallelogram, that is, its area.

Page 119: Vector and Matrices - Some Articles

Geometric algebra 116

Intersection of a line and a plane

Consider a line L defined by points T and P (which we seek) and a plane defined by a bivector B containing points Pand Q.We may define the line parametrically by where p and t are position vectors for points T and P and vis the direction vector for the line.Then

and so

and

.

Electrodynamics and special relativity

In physics, the main applications are the geometric algebra of Euclidean 3-space, Cl3, called the Algebra of physicalspace (APS), and the geometric algebra of Minkowski 3+1 spacetime, Cl3,1, called spacetime algebra (STA).[10]

In APS, points of (3+1)-dimensional space-time are represented by paravectors: a 3-dimensional vector (space) plusa 1-dimensional scalar (time), while in STA points of space-time are represented simply by vectors.In spacetime algebra the electromagnetic field tensor has a bivector representation where theimaginary unit is the volume element, and where Maxwell's equations simplify to one equation:

Boosts in this Lorenzian metric space have the same expression as rotation in Euclidean space, where is thebivector generated by the time and the space directions involved, whereas in the Euclidean case it is the bivectorgenerated by the two space directions, strengthening the "analogy" to almost identity.

Page 120: Vector and Matrices - Some Articles

Geometric algebra 117

Rotational Systems

The mathematical description of rotational forces such as Torque and angular momentum make use of the Crossproduct.

The cross product in relation to the outer product.In red are the unit normal vector, and the

"parallel" unit bivector.

The cross product can be viewed in terms of the outer product allowinga more natural geometric interpretation of the cross product as abivector using the dual relationship

For example,torque is generally defined as the magnitude of the perpendicular force component times distance, orwork per unit angle.Suppose a circular path in an arbitrary plane containing orthonormal vectors and is parameterized by angle.

By designating the unit bivector of this plane as the imaginary number

this path vector can be conveniently written in complex exponential form

and the derivative with respect to angle is

So the torque, the rate of change of work W, due to a forceF, is

Unlike the cross product description of torque, , the geometric-algebra description does not introduce avector in the normal direction; a vector that does not exist in two and that is not unique in greater than threedimensions. The unit bivector describes the plane and the orientation of the rotation, and the sense of the rotation isrelative to the angle between the vectors and .

HistoryAlthough the connection of geometry with algebra dates as far back at least to Euclid's Elements in the 3rd century B.C. (see Greek geometric algebra), GA in the sense used in this article was not developed until 1844, when it was used in a systematic way to describe the geometrical properties and transformations of a space. In that year, Hermann Grassmann introduced the idea of a geometrical algebra in full generality as a certain calculus (analogous to the propositional calculus) which encoded all of the geometrical information of a space.[11] Grassmann's algebraic

Page 121: Vector and Matrices - Some Articles

Geometric algebra 118

system could be applied to a number of different kinds of spaces: the chief among them being Euclidean space,affine space, and projective space. Following Grassmann, in 1878 William Kingdon Clifford examined Grassmann'salgebraic system alongside the quaternions of William Rowan Hamilton in (Clifford 1878). From his point of view,the quaternions described certain transformations (which he called rotors), whereas Grassmann's algebra describedcertain properties (or Strecken such as length, area, and volume). His contribution was to define a new product —the geometric product — on an existing Grassmann algebra, which realized the quaternions as living within thatalgebra. Subsequently Rudolf Lipschitz in 1886 generalized Clifford's interpretation of the quaternions and appliedthem to the geometry of rotations in n dimensions. Later these developments would lead other 20th-centurymathematicians to formalize and explore the properties of the Clifford algebra.Nevertheless, another revolutionary development of the 19th-century would completely overshadow the geometricalgebras: that of vector analysis, developed independently by Josiah Willard Gibbs and Oliver Heaviside. Vectoranalysis was motivated by James Clerk Maxwell's studies of electromagnetism, and specifically the need to expressand manipulate conveniently certain differential equations. Vector analysis had a certain intuitive appeal comparedto the rigors of the new algebras. Physicists and mathematicians alike readily adopted it as their geometrical toolkitof choice, particularly following the influential 1901 textbook Vector Analysis by Edwin Bidwell Wilson, followinglectures of Gibbs.In more detail, there have been three approaches to geometric algebra: quaternionic analysis, initiated by Hamilton in1843 and geometrized as rotors by Clifford in 1878; geometric algebra, initiated by Grassmann in 1844; and vectoranalysis, developed out of quaternionic analysis in the late 19th century by Gibbs and Heaviside. The legacy ofquaternionic analysis in vector analysis can be seen in the use of to indicate the basis vectors of it isbeing thought of as the purely imaginary quaternions. From the perspective of geometric algebra, quaternions can beidentified as Cℓ+

3,0(R), the even part of the Clifford algebra on Euclidean 3-space, which unifies the threeapproaches.

20th century and PresentProgress on the study of Clifford algebras quietly advanced through the twentieth century, although largely due tothe work of abstract algebraists such as Hermann Weyl and Claude Chevalley. The geometrical approach togeometric algebras has seen a number of 20th-century revivals. In mathematics, Emil Artin's Geometric Algebradiscusses the algebra associated with each of a number of geometries, including affine geometry, projectivegeometry, symplectic geometry, and orthogonal geometry. In physics, geometric algebras have been revived as a"new" way to do classical mechanics and electromagnetism, together with more advanced topics such as quantummechanics and gauge theory.[12] David Hestenes reinterpreted the Pauli and Dirac matrices as vectors in ordinaryspace and spacetime, respectively, and has been a primary contemporary advocate for the use of geometric algebra.In computer graphics, geometric algebras have been revived in order to represent efficiently rotations (and othertransformations) on computer hardware.

Page 122: Vector and Matrices - Some Articles

Geometric algebra 119

References[1] Clifford Algebra to Geometric Calculus, a Unified Language for mathematics and Physics (Dordrecht/Boston:G.Reidel Publ.Co.,1984[2] Geometric Algebra Computing in Engineering and Computer Science, E.Bayro-Corrochano & Gerik Scheuermann (Eds),Springer 2010.

Extract online at http:/ / geocalc. clas. asu. edu/ html/ UAFCG. html #5 New Tools for Computational Geometry and rejuvenation of ScrewTheory

[3] Dorst, Leo; Fontijne, Daniel; Mann, Stephen (2007). Geometric algebra for computer science: an object-oriented approach to geometry(http:/ / www. geometricalgebra. net/ ). Amsterdam: Elsevier/Morgan Kaufmann. ISBN 978-0-12-369465-2. OCLC 132691969. .

[4] Hongbo Li (2008) Invariant Algebras and Geometric Reasoning, Singapore: World Scientific. Extract online at http:/ / www. worldscibooks.com/ etextbook/ 6514/ 6514_chap01. pdf

[5] http:/ / www. geometricalgebra. net/ downloads. html[6] http:/ / www. clucalc. info/[7] http:/ / sourceforge. net/ projects/ g25/[8] http:/ / sinai. apphy. u-fukui. ac. jp/ gcj/ software/ GAcindy-1. 4/ GAcindy. htm[9] http:/ / staff. science. uva. nl/ ~leo/ cinderella/[10] Hestenes, David (1966). Space-time Algebra. New York: Gordon and Breach. ISBN 0677013906. OCLC 996371.[11] Grassmann, Hermann (1844). Die lineale Ausdehnungslehre ein neuer Zweig der Mathematik: dargestellt und durch Anwendungen auf die

übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die Krystallonomie erläutert (http:/ /books. google. com/ ?id=bKgAAAAAMAAJ). Leipzig: O. Wigand. OCLC 20521674. .

[12] Doran, Chris J. L. (February 1994). Geometric Algebra and its Application to Mathematical Physics (http:/ / www. mrao. cam. ac. uk/~clifford/ publications/ abstracts/ chris_thesis. html) (Ph.D. thesis). University of Cambridge. OCLC 53604228. .

Further reading• Baylis, W. E., ed., 1996. Clifford (Geometric) Algebra with Applications to Physics, Mathematics, and

Engineering. Boston: Birkhäuser.• Baylis, W. E., 2002. Electrodynamics: A Modern Geometric Approach, 2nd ed. Birkhäuser. ISBN 0-8176-4025-8• Nicolas Bourbaki, 1980. Eléments de Mathématique. Algèbre. Chpt. 9, "Algèbres de Clifford". Paris: Hermann.• Hestenes, D., 1999. New Foundations for Classical Mechanics, 2nd ed. Springer Verlag ISBN 0-7923-5302-1• Lasenby, J., Lasenby, A. N., and Doran, C. J. L., 2000, " A Unified Mathematical Language for Physics and

Engineering in the 21st Century (http:/ / www. mrao. cam. ac. uk/ ~clifford/ publications/ ps/ dll_millen. pdf),"Philosophical Transactions of the Royal Society of London A 358: 1-18.

• Chris Doran & Anthony Lasenby (2003). Geometric algebra for physicists (http:/ / assets. cambridge. org/052148/ 0221/ sample/ 0521480221WS. pdf). Cambridge University Press. ISBN 978-0-521-71595-9.

• J Bain (2006). "Spacetime structuralism: §5 Manifolds vs. geometric algebra" (http:/ / books. google. com/?id=OI5BySlm-IcC& pg=PT72). In Dennis Geert Bernardus Johan Dieks. The ontology of spacetime. Elsevier.p. 54 ff. ISBN 0444527680.

External links• Imaginary Numbers are not Real - the Geometric Algebra of Spacetime (http:/ / www. mrao. cam. ac. uk/

~clifford/ introduction/ intro/ intro. html). Introduction (Cambridge GA group).• Physical Applications of Geometric Algebra (http:/ / www. mrao. cam. ac. uk/ ~clifford/ ptIIIcourse/ ). Final-year

undergraduate course by Chris Doran and Anthony Lasenby (Cambridge GA group; see also 1999 version (http:/ /www. mrao. cam. ac. uk/ ~clifford/ ptIIIcourse/ course99/ )).

• Maths for (Games) Programmers: 5 - Multivector methods (http:/ / www. iancgbell. clara. net/ maths/ ).Comprehensive introduction and reference for programmers, from Ian Bell.

• Geometric Algebra (http:/ / planetmath. org/ ?op=getobj& amp;from=objects& amp;id=3770) on PlanetMath• Clifford algebra, geometric algebra, and applications (http:/ / arxiv. org/ abs/ 0907. 5356) Douglas Lundholm,

Lars Svensson Lecture notes for a course on the theory of Clifford algebras, with special emphasis on their widerange of applications in mathematics and physics.

• IMPA SUmmer School 2010 (http:/ / www. visgraf. impa. br/ Courses/ ga/ ) Fernandes Oliveira Intro and Slides.

Page 123: Vector and Matrices - Some Articles

Geometric algebra 120

• University of Fukui (http:/ / sinai. apphy. u-fukui. ac. jp/ gcj/ pubs. html) E.S.M. Hitzer and Japan GApublications.

• Google Group for GA (http:/ / groups. google. com/ group/ geometric_algebra)

Research groups• Geometric Calculus International (http:/ / sinai. apphy. u-fukui. ac. jp/ gcj/ gc_int. html). Links to Research

groups, Software, and Conferences, worldwide.• Cambridge Geometric Algebra group (http:/ / www. mrao. cam. ac. uk/ ~clifford/ ). Full-text online publications,

and other material.• University of Amsterdam group (http:/ / www. science. uva. nl/ ga/ )• Geometric Calculus research & development (http:/ / geocalc. clas. asu. edu/ ) (Arizona State University).• GA-Net blog (http:/ / gaupdate. wordpress. com/ ) and newsletter archive (http:/ / sinai. apphy. u-fukui. ac. jp/

GA-Net/ archive/ index. html). Geometric Algebra/Clifford Algebra development news.

Levi-Civita symbolThe Levi-Civita symbol, also called the permutation symbol, antisymmetric symbol, or alternating symbol, is amathematical symbol used in particular in tensor calculus. It is named after the Italian mathematician and physicistTullio Levi-Civita.

Definition

Values of the Levi-Civita-Symbol for a right-handed coordinatesystem.

In three dimensions, the Levi-Civita symbol is definedas follows:

i.e. is 1 if (i, j, k) is an even permutation of (1,2,3), −1 if it is an odd permutation, and 0 if any index is repeated.The formula for the three dimensional Levi-Civita symbol is:

Page 124: Vector and Matrices - Some Articles

Levi-Civita symbol 121

The formula in four dimensions is:

Visualization of the Levi-Civita symbol as a 3×3×3 matrix.

Corresponding visualization of the Levi-Civita-Symbol for aleft-handed coordinate system. Empty cubes mean 0, red ones +1,

and blue ones -1.

For example, in linear algebra, the determinant of a 3×3matrix A can be written

(and similarly for a square matrix of general size, see below)and the cross product of two vectors can be written as a determinant:

or more simply:

According to the Einstein notation, the summation symbols may be omitted.The tensor whose components in an orthonormal basis are given by the Levi-Civita symbol (a tensor of covariant rank n) is sometimes called the permutation tensor. It is actually a pseudotensor because under an orthogonal

Page 125: Vector and Matrices - Some Articles

Levi-Civita symbol 122

transformation of jacobian determinant −1 (i.e., a rotation composed with a reflection), it acquires a minus sign.Because the Levi-Civita symbol is a pseudotensor, the result of taking a cross product is a pseudovector, not avector.Note that under a general coordinate change, the components of the permutation tensor get multiplied by thejacobian of the transformation matrix. This implies that in coordinate frames different from the one in which thetensor was defined, its components can differ from those of the Levi-Civita symbol by an overall factor. If the frameis orthonormal, the factor will be ±1 depending on whether the orientation of the frame is the same or not.

Relation to Kronecker deltaThe Levi-Civita symbol is related to the Kronecker delta. In three dimensions, the relationship is given by thefollowing equations:

("contracted epsilon identity")

(In Einstein notation, the duplication of the i index implies the sum on i. The previous is then noted:)

Generalization to n dimensionsThe Levi-Civita symbol can be generalized to higher dimensions:

Thus, it is the sign of the permutation in the case of a permutation, and zero otherwise.The generalized formula is:

where n is the dimension (rank).For any n the property

follows from the facts that (a) every permutation is either even or odd, (b) (+1)2 = (-1)2 = 1, and (c) the permutationsof any n-element set number exactly n!.In index-free tensor notation, the Levi-Civita symbol is replaced by the concept of the Hodge dual.In general n dimensions one can write the product of two Levi-Civita symbols as:

Page 126: Vector and Matrices - Some Articles

Levi-Civita symbol 123

.

Properties(in these examples, superscripts should be considered equivalent with subscripts)

1. In two dimensions, when all are in ,

2. In three dimensions, when all are in

3. In n dimensions, when all are in :

Proofs

For equation 1, both sides are antisymmetric with respect of and . We therefore only need to consider thecase and . By substitution, we see that the equation holds for , i.e., for and

. (Both sides are then one). Since the equation is antisymmetric in and , any set of values forthese can be reduced to the above case (which holds). The equation thus holds for all values of and . Usingequation 1, we have for equation 2

.

Here we used the Einstein summation convention with going from to . Equation 3 follows similarly fromequation 2. To establish equation 4, let us first observe that both sides vanish when . Indeed, if , thenone can not choose and such that both permutation symbols on the left are nonzero. Then, with fixed,there are only two ways to choose and from the remaining two indices. For any such indices, we have

(no summation), and the result follows. Property (5) follows since and for anydistinct indices in , we have (no summation).

Page 127: Vector and Matrices - Some Articles

Levi-Civita symbol 124

Examples1. The determinant of an matrix can be written as

where each should be summed over Equivalently, it may be written as

where now each and each should be summed over .

2. If and are vectors in (represented in some right hand orientedorthonormal basis), then the th component of their cross product equals

For instance, the first component of is . From the above expression for the cross product,it is clear that . Further, if is a vector like and , then the triplescalar product equals

From this expression, it can be seen that the triple scalar product is antisymmetric when exchanging any adjacentarguments. For example, .

3. Suppose is a vector field defined on some open set of with Cartesian coordinates. Then the th component of the curl of equals

NotationA shorthand notation for anti-symmetrization is denoted by a pair of square brackets. For example, in arbitrarydimensions, for a rank 2 covariant tensor M,

and for a rank 3 covariant tensor T,

In three dimensions, these are equivalent to

While in four dimensions, these are equivalent to

More generally, in n dimensions

Page 128: Vector and Matrices - Some Articles

Levi-Civita symbol 125

Tensor densityIn any arbitrary curvilinear coordinate system and even in the absence of a metric on the manifold, the Levi-Civitasymbol as defined above may be considered to be a tensor density field in two different ways. It may be regarded asa contravariant tensor density of weight +1 or as a covariant tensor density of weight -1. In four dimensions,

Notice that the value, and in particular the sign, does not change.

Ordinary tensorIn the presence of a metric tensor field, one may define an ordinary contravariant tensor field which agrees with theLevi-Civita symbol at each event whenever the coordinate system is such that the metric is orthonormal at that event.Similarly, one may also define an ordinary covariant tensor field which agrees with the Levi-Civita symbol at eachevent whenever the coordinate system is such that the metric is orthonormal at that event. These ordinary tensorfields should not be confused with each other, nor should they be confused with the tensor density fields mentionedabove. One of these ordinary tensor fields may be converted to the other by raising or lowering the indices with themetric as is usual, but a minus sign is needed if the metric signature contains an odd number of negatives. Forexample, in Minkowski space (the four dimensional spacetime of special relativity)

Notice the minus sign.

References• Charles W. Misner, Kip S. Thorne, John Archibald Wheeler, Gravitation, (1970) W.H. Freeman, New York;

ISBN 0-7167-0344-0. (See section 3.5 for a review of tensors in general relativity).

This article incorporates material from Levi-Civita permutation symbol on PlanetMath, which is licensed under theCreative Commons Attribution/Share-Alike License.

Page 129: Vector and Matrices - Some Articles

Jacobi triple product 126

Jacobi triple productIn mathematics, the Jacobi triple product is the mathematical identity:

for complex numbers x and y, with |x| < 1 and y ≠ 0.It is attributed to Carl Gustav Jacob Jacobi, who proved it in 1829 in his work Fundamenta Nova TheoriaeFunctionum Ellipticarum.[1]

The basis of Jacobi's proof relies on Euler's pentagonal number theorem, which is itself a specific case of the JacobiTriple Product Identity.

Let and . Then we have

The Jacobi Triple Product also allows the Jacobi theta function to be written as an infinite product as follows:

Let and .Then the Jacobi theta function

can be written in the form

Using the Jacobi Triple Product Identity we can then write the theta function as the product

There are many different notations used to express the Jacobi triple product. It takes on a concise form whenexpressed in terms of q-Pochhammer symbols:

Where is the infinite q-Pochhammer symbol.It enjoys a particularly elegant form when expressed in terms of the Ramanujan theta function. For it canbe written as

ProofThis proof uses a simplified model of the Dirac sea and follows the proof in Cameron (13.3) which is attributed toRichard Borcherds. It treats the case where the power series are formal. For the analytic case, see Apostol. TheJacobi triple product identity can be expressed as

A level is a half-integer. The vacuum state is the set of all negative levels. A state is a set of levels whose symmetricdifference with the vacuum state is finite. The energy of the state is

Page 130: Vector and Matrices - Some Articles

Jacobi triple product 127

and the particle number of is

An unordered choice of the presence of finitely many positive levels and the absence of finitely many negative levels(relative to the vacuum) corresponds to a state, so the generating function for the number

of states of energy with particles can be expressed as

On the other hand, any state with particles can be obtained from the lowest energy particle state,, by rearranging particles: take a partition of and move the top particle up

by levels, the next highest particle up by levels, etc.... The resulting state has energy , so the

generating function can also be written as

where is the partition function. The uses of random partitions [2] by Andrei Okounkov contains a picture of apartition exciting the vacuum.

Notes[1] Remmert, R. (1998). Classical Topics in Complex Function Theory (pp. 28-30). New York: Springer.[2] http:/ / arxiv. org/ abs/ math-ph/ 0309015

References• See chapter 14, theorem 14.6 of Apostol, Tom M. (1976), Introduction to analytic number theory, Undergraduate

Texts in Mathematics, New York-Heidelberg: Springer-Verlag, ISBN 978-0-387-90163-3, MR0434929• Peter J. Cameron, Combinatorics: Topics, Techniques, Algorithms, (1994) Cambridge University Press, ISBN

0-521-45761-0

Page 131: Vector and Matrices - Some Articles

Rule of Sarrus 128

Rule of Sarrus

Sarrus rule: solid diagonals - dashed diagonals

Sarrus' rule or Sarrus' scheme is a method and amemorization scheme to compute the determinant of a 3×3matrix. It is named after the French mathematician PierreFrédéric Sarrus.

Consider a 3×3 matrix

then its determinant can be computed by the following scheme:Write out the first 2 columns of the matrix to the right of the 3rd column, so that you have 5 columns in a row. Thenadd the products of the diagonals going from top to bottom (solid) and subtract the products of the diagonals goingfrom bottom to top (dashed). This yields:

A similar scheme based on diagonals works for 2x2 matrices:

Both are special cases of the Leibniz formula, which however does not yield similar memorization schemes forlarger matrices. Sarrus' rule can also be derived by looking at the Laplace expansion of a 3x3 matrix.

References• Khattar, Dinesh (2010). The Pearson Guide to Complete Mathematics for AIEEE [1] (3rd ed.). Pearson Education

India. p. 6-2. ISBN 9788131721261.• Fischer, Gerd (1985) (in German). Analytische Geometrie (4th ed.). Wiesbaden: Vieweg. p. 145.

ISBN 3528372354.

External links• Sarrus' rule at Planetmath [2]

• Linear Algebra: Rule of Sarrus of Determinants [3] at khanacademy.org

Page 132: Vector and Matrices - Some Articles

Rule of Sarrus 129

References[1] http:/ / books. google. de/ books?id=7cwSfkQYJ_EC& pg=SA6-PA2[2] http:/ / planetmath. org/ encyclopedia/ RuleOfSarrus. html[3] http:/ / www. youtube. com/ watch?v=4xFIi0JF2AM

Laplace expansionIn linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is anexpression for the determinant |B| of an n × n square matrix B that is a weighted sum of the determinants of nsub-matrices of B, each of size (n–1) × (n–1). The Laplace expansion is of theoretical interest as one of several waysto view the determinant, as well as of practical use in determinant computation.The i,j cofactor of B is the scalar Cij defined by

where Mij is the i,j minor matrix of B, that is, the (n–1) × (n–1) matrix that results from deleting the i-th row and thej-th column of B.Then the Laplace expansion is given by the followingTheorem. Suppose B = (bij) is an n × n matrix and i,j ∈ 1, 2, ...,n.Then its determinant |B| is given by:

ExamplesConsider the matrix

The determinant of this matrix can be computed by using the Laplace expansion along the first row:

Alternatively, Laplace expansion along the second column yields

It is easy to see that the result is correct: the matrix is singular because the sum of its first and third column is twicethe second column, and hence its determinant is zero.

Page 133: Vector and Matrices - Some Articles

Laplace expansion 130

ProofSuppose is an n × n matrix and For clarity we also label the entries of that composeits minor matrix as

for Consider the terms in the expansion of that have as a factor. Each has the form

for some permutation τ ∈ Sn with , and a unique and evidently related permutation whichselects the same minor entries as Similarly each choice of determines a corresponding i.e. thecorrespondence is a bijection between and The permutation can bederived from as follows.Define by for and . Then and

Since the two cycles can be written respectively as and transpositions,

And since the map is bijective,

from which the result follows.

References• David Poole: Linear Algebra. A Modern Introduction. Cengage Learning 2005, ISBN 0534998453, p. 265-267 (

restricted online copy [1] at Google Books)• Harvey E. Rose: Linear Algebra. A Pure Mathematical Approach. Springer 2002, ISBN 3764369051, p. 57-60 (

restricted online copy [2] at Google Books)

External links• Laplace expansion [3] at PlanetMath

References[1] http:/ / books. google. com/ books?id=oBk3u2fDFc8C& pg=PA265[2] http:/ / books. google. com/ books?id=mTdAj-Yn4L4C& pg=PA57[3] http:/ / planetmath. org/ encyclopedia/ LaplaceExpansion2. html

Page 134: Vector and Matrices - Some Articles

Lie algebra 131

Lie algebraIn mathematics, a Lie algebra ( /ˈliː/, not /ˈlaɪ/) is an algebraic structure whose main use is in studying geometricobjects such as Lie groups and differentiable manifolds. Lie algebras were introduced to study the concept ofinfinitesimal transformations. The term "Lie algebra" (after Sophus Lie) was introduced by Hermann Weyl in the1930s. In older texts, the name "infinitesimal group" is used.

Definition and first propertiesA Lie algebra is a vector space over some field F together with a binary operation [·, ·]

called the Lie bracket, which satisfies the following axioms:• Bilinearity:

for all scalars a, b in F and all elements x, y, z in .• Alternating on :

for all x in .• The Jacobi identity:

for all x, y, z in .

Note that the bilinearity and alternating properties imply anticommutativity, i.e., for all elementsx, y in , while anticommutativity only implies the alternating property if the field's characteristic is not 2.[1]

For any associative algebra A with multiplication , one can construct a Lie algebra L(A). As a vector space, L(A) isthe same as A. The Lie bracket of two elements of L(A) is defined to be their commutator in A:

The associativity of the multiplication * in A implies the Jacobi identity of the commutator in L(A). In particular, theassociative algebra of n × n matrices over a field F gives rise to the general linear Lie algebra Theassociative algebra A is called an enveloping algebra of the Lie algebra L(A). It is known that every Lie algebra canbe embedded into one that arises from an associative algebra in this fashion. See universal enveloping algebra.

Homomorphisms, subalgebras, and ideals

The Lie bracket is not an associative operation in general, meaning that need not equal .Nonetheless, much of the terminology that was developed in the theory of associative rings or associative algebras iscommonly applied to Lie algebras. A subspace that is closed under the Lie bracket is called a Lie

subalgebra. If a subspace satisfies a stronger condition that

then I is called an ideal in the Lie algebra .[2] A Lie algebra in which the commutator is not identically zero andwhich has no proper ideals is called simple. A homomorphism between two Lie algebras (over the same groundfield) is a linear map that is compatible with the commutators:

for all elements x and y in . As in the theory of associative rings, ideals are precisely the kernels of homomorphisms, given a Lie algebra and an ideal I in it, one constructs the factor algebra , and the first

Page 135: Vector and Matrices - Some Articles

Lie algebra 132

isomorphism theorem holds for Lie algebras. Given two Lie algebras and , their direct sum is the vector space consisting of the pairs , with the operation

Examples• Any vector space V endowed with the identically zero Lie bracket becomes a Lie algebra. Such Lie algebras are

called abelian, cf. below. Any one-dimensional Lie algebra over a field is abelian, by the antisymmetry of the Liebracket.

• The three-dimensional Euclidean space R3 with the Lie bracket given by the cross product of vectors becomes athree-dimensional Lie algebra.

• The Heisenberg algebra is a three-dimensional Lie algebra with generators (see also the definition at Generatingset):

whose commutation relations are

It is explicitly exhibited as the space of 3×3 strictly upper-triangular matrices.

• The subspace of the general linear Lie algebra consisting of matrices of trace zero is a subalgebra,[3] thespecial linear Lie algebra, denoted

• Any Lie group G defines an associated real Lie algebra . The definition in general is somewhattechnical, but in the case of real matrix groups, it can be formulated via the exponential map, or the matrixexponent. The Lie algebra consists of those matrices X for which

for all real numbers t. The Lie bracket of is given by the commutator of matrices. As a concrete example,consider the special linear group SL(n,R), consisting of all n × n matrices with real entries and determinant 1.This is a matrix Lie group, and its Lie algebra consists of all n × n matrices with real entries and trace 0.

• The real vector space of all n × n skew-hermitian matrices is closed under the commutator and forms a real Liealgebra denoted . This is the Lie algebra of the unitary group U(n).

• An important class of infinite-dimensional real Lie algebras arises in differential topology. The space of smoothvector fields on a differentiable manifold M forms a Lie algebra, where the Lie bracket is defined to be thecommutator of vector fields. One way of expressing the Lie bracket is through the formalism of Lie derivatives,which identifies a vector field X with a first order partial differential operator LX acting on smooth functions byletting LX(f) be the directional derivative of the function f in the direction of X. The Lie bracket [X,Y] of twovector fields is the vector field defined through its action on functions by the formula:

This Lie algebra is related to the pseudogroup of diffeomorphisms of M.• The commutation relations between the x, y, and z components of the angular momentum operator in quantum

mechanics form a representation of a complex three-dimensional Lie algebra, which is the complexification of theLie algebra so(3) of the three-dimensional rotation group:

Page 136: Vector and Matrices - Some Articles

Lie algebra 133

• Kac–Moody algebra is an example of an infinite-dimensional Lie algebra.

Structure theory and classificationEvery finite-dimensional real or complex Lie algebra has a faithful representation by matrices (Ado's theorem). Lie'sfundamental theorems describe a relation between Lie groups and Lie algebras. In particular, any Lie group givesrise to a canonically determined Lie algebra (concretely, the tangent space at the identity), and conversely, for anyLie algebra there is a corresponding connected Lie group (Lie's third theorem). This Lie group is not determineduniquely, however, any two connected Lie groups with the same Lie algebra are locally isomorphic, and inparticular, have the same universal cover. For instance, the special orthogonal group SO(3) and the special unitarygroup SU(2) give rise to the same Lie algebra, which is isomorphic to R3 with the cross-product, and SU(2) is asimply-connected twofold cover of SO(3). Real and complex Lie algebras can be classified to some extent, and thisis often an important step toward the classification of Lie groups.

Abelian, nilpotent, and solvableAnalogously to abelian, nilpotent, and solvable groups, defined in terms of the derived subgroups, one can defineabelian, nilpotent, and solvable Lie algebras.A Lie algebra is abelian if the Lie bracket vanishes, i.e. [x,y] = 0, for all x and y in . Abelian Lie algebrascorrespond to commutative (or abelian) connected Lie groups such as vector spaces or tori and are all ofthe form meaning an n-dimensional vector space with the trivial Lie bracket.A more general class of Lie algebras is defined by the vanishing of all commutators of given length. A Lie algebra is nilpotent if the lower central series

becomes zero eventually. By Engel's theorem, a Lie algebra is nilpotent if and only if for every u in the adjointendomorphism

is nilpotent.More generally still, a Lie algebra is said to be solvable if the derived series:

becomes zero eventually.Every finite-dimensional Lie algebra has a unique maximal solvable ideal, called its radical. Under the Liecorrespondence, nilpotent (respectively, solvable) connected Lie groups correspond to nilpotent (respectively,solvable) Lie algebras.

Simple and semisimpleA Lie algebra is "simple" if it has no non-trivial ideals and is not abelian. A Lie algebra is called semisimple if itsradical is zero. Equivalently, is semisimple if it does not contain any non-zero abelian ideals. In particular, asimple Lie algebra is semisimple. Conversely, it can be proven that any semisimple Lie algebra is the direct sum ofits minimal ideals, which are canonically determined simple Lie algebras.The concept of semisimplicity for Lie algebras is closely related with the complete reducibility of theirrepresentations. When the ground field F has characteristic zero, semisimplicity of a Lie algebra over F isequivalent to the complete reducibility of all finite-dimensional representations of An early proof of thisstatement proceeded via connection with compact groups (Weyl's unitary trick), but later entirely algebraic proofswere found.

Page 137: Vector and Matrices - Some Articles

Lie algebra 134

ClassificationIn many ways, the classes of semisimple and solvable Lie algebras are at the opposite ends of the full spectrum ofthe Lie algebras. The Levi decomposition expresses an arbitrary Lie algebra as a semidirect sum of its solvableradical and a semisimple Lie algebra, almost in a canonical way. Semisimple Lie algebras over an algebraicallyclosed field have been completely classified through their root systems. The classification of solvable Lie algebras isa 'wild' problem, and cannot be accomplished in general.Cartan's criterion gives conditions for a Lie algebra to be nilpotent, solvable, or semisimple. It is based on the notionof the Killing form, a symmetric bilinear form on defined by the formula

where tr denotes the trace of a linear operator. A Lie algebra is semisimple if and only if the Killing form isnondegenerate. A Lie algebra is solvable if and only if

Relation to Lie groupsAlthough Lie algebras are often studied in their own right, historically they arose as a means to study Lie groups.Given a Lie group, a Lie algebra can be associated to it either by endowing the tangent space to the identity with thedifferential of the adjoint map, or by considering the left-invariant vector fields as mentioned in the examples. Thisassociation is functorial, meaning that homomorphisms of Lie groups lift to homomorphisms of Lie algebras, andvarious properties are satisfied by this lifting: it commutes with composition, it maps Lie subgroups, kernels,quotients and cokernels of Lie groups to subalgebras, kernels, quotients and cokernels of Lie algebras, respectively.The functor which takes each Lie group to its Lie algebra and each homomorphism to its differential is a faithful andexact functor. This functor is not invertible; different Lie groups may have the same Lie algebra, for example SO(3)and SU(2) have isomorphic Lie algebras. Even worse, some Lie algebras need not have any associated Lie group.Nevertheless, when the Lie algebra is finite-dimensional, there is always at least one Lie group whose Lie algebra isthe one under discussion, and a preferred Lie group can be chosen. Any finite-dimensional connected Lie group hasa universal cover. This group can be constructed as the image of the Lie algebra under the exponential map. Moregenerally, we have that the Lie algebra is homeomorphic to a neighborhood of the identity. But globally, if the Liegroup is compact, the exponential will not be injective, and if the Lie group is not connected, simply connected orcompact, the exponential map need not be surjective.If the Lie algebra is infinite-dimensional, the issue is more subtle. In many instances, the exponential map is not evenlocally a homeomorphism (for example, in Diff(S1), one may find diffeomorphisms arbitrarily close to the identitywhich are not in the image of exp). Furthermore, some infinite-dimensional Lie algebras are not the Lie algebra ofany group.The correspondence between Lie algebras and Lie groups is used in several ways, including in the classification ofLie groups and the related matter of the representation theory of Lie groups. Every representation of a Lie algebralifts uniquely to a representation of the corresponding connected, simply connected Lie group, and conversely everyrepresentation of any Lie group induces a representation of the group's Lie algebra; the representations are in one toone correspondence. Therefore, knowing the representations of a Lie algebra settles the question of representationsof the group. As for classification, it can be shown that any connected Lie group with a given Lie algebra isisomorphic to the universal cover mod a discrete central subgroup. So classifying Lie groups becomes simply amatter of counting the discrete subgroups of the center, once the classification of Lie algebras is known (solved byCartan et al. in the semisimple case).

Page 138: Vector and Matrices - Some Articles

Lie algebra 135

Category theoretic definitionUsing the language of category theory, a Lie algebra can be defined as an object A in Vec, the category of vectorspaces together with a morphism [.,.]: A ⊗ A → A, where ⊗ refers to the monoidal product of Vec, such that

••where τ (a ⊗ b) := b ⊗ a and σ is the cyclic permutation braiding (id ⊗ τA,A) ° (τA,A ⊗ id). In diagrammatic form:

Notes[1] Humpfrey p. 1[2] Due to the anticommutativity of the commutator, the notions of a left and right ideal in a Lie algebra coincide.[3] Humphreys p.2

References• Hall, Brian C. Lie Groups, Lie Algebras, and Representations: An Elementary Introduction, Springer, 2003. ISBN

0-387-40122-9• Erdmann, Karin & Wildon, Mark. Introduction to Lie Algebras, 1st edition, Springer, 2006. ISBN 1-84628-040-0• Humphreys, James E. Introduction to Lie Algebras and Representation Theory, Second printing, revised.

Graduate Texts in Mathematics, 9. Springer-Verlag, New York, 1978. ISBN 0-387-90053-5• Jacobson, Nathan, Lie algebras, Republication of the 1962 original. Dover Publications, Inc., New York, 1979.

ISBN 0-486-63832-4• Kac, Victor G. et al. Course notes for MIT 18.745: Introduction to Lie Algebras, http:/ / www-math. mit. edu/

~lesha/ 745lec/• O'Connor, J. J. & Robertson, E.F. Biography of Sophus Lie, MacTutor History of Mathematics Archive, http:/ /

www-history. mcs. st-and. ac. uk/ Biographies/ Lie. html• O'Connor, J. J. & Robertson, E.F. Biography of Wilhelm Killing, MacTutor History of Mathematics Archive,

http:/ / www-history. mcs. st-and. ac. uk/ Biographies/ Killing. html• Steeb, W.-H. Continuous Symmetries, Lie Algebras, Differential Equations and Computer Algebra, second

edition, World Scientific, 2007, ISBN 978-981-270-809-0• Varadarajan, V. S. Lie Groups, Lie Algebras, and Their Representations, 1st edition, Springer, 2004. ISBN

0-387-90969-9

Page 139: Vector and Matrices - Some Articles

Orthogonal group 136

Orthogonal groupIn mathematics, the orthogonal group of degree n over a field F (written as O(n,F)) is the group of n-by-northogonal matrices with entries from F, with the group operation of matrix multiplication. This is a subgroup of thegeneral linear group GL(n,F) given by

where QT is the transpose of Q. The classical orthogonal group over the real numbers is usually just written O(n).More generally the orthogonal group of a non-singular quadratic form over F is the group of linear operatorspreserving the form – the above group O(n, F) is then the orthogonal group of the sum-of-n-squares quadraticform.[1] The Cartan–Dieudonné theorem describes the structure of the orthogonal group for non-singular form. Thisarticle only discusses definite forms – the orthogonal group of the positive definite form (equivalent to sum of nsquares) and negative definite forms (equivalent to the negative sum of n squares) are identical –

– though the associated Pin groups differ; for other non-singular forms O(p,q), see indefiniteorthogonal group.Every orthogonal matrix has determinant either 1 or −1. The orthogonal n-by-n matrices with determinant 1 form anormal subgroup of O(n,F) known as the special orthogonal group, SO(n,F). (More precisely, SO(n,F) is the kernelof the Dickson invariant, discussed below.) By analogy with GL/SL (general linear group, special linear group), theorthogonal group is sometimes called the general orthogonal group and denoted GO, though this term is alsosometimes used for indefinite orthogonal groups O(p,q).The derived subgroup Ω(n,F) of O(n,F) is an often studied object because when F is a finite field Ω(n,F) is often acentral extension of a finite simple group.Both O(n,F) and SO(n,F) are algebraic groups, because the condition that a matrix be orthogonal, i.e. have its owntranspose as inverse, can be expressed as a set of polynomial equations in the entries of the matrix.

Over the real number fieldOver the field R of real numbers, the orthogonal group O(n,R) and the special orthogonal group SO(n,R) are oftensimply denoted by O(n) and SO(n) if no confusion is possible. They form real compact Lie groups of dimensionn(n − 1)/2. O(n,R) has two connected components, with SO(n,R) being the identity component, i.e., the connectedcomponent containing the identity matrix.The real orthogonal and real special orthogonal groups have the following geometric interpretations:O(n,R) is a subgroup of the Euclidean group E(n), the group of isometries of Rn; it contains those that leave theorigin fixed – O(n,R) = E(n) ∩ GL(n,R). It is the symmetry group of the sphere (n = 3) or hypersphere and allobjects with spherical symmetry, if the origin is chosen at the center.SO(n,R) is a subgroup of E+(n), which consists of direct isometries, i.e., isometries preserving orientation; it containsthose that leave the origin fixed – It is therotation group of the sphere and all objects with spherical symmetry, if the origin is chosen at the center. I, −I is a normal subgroup and even a characteristic subgroup of O(n,R), and, if n is even, also of SO(n,R). If n isodd, O(n,R) is the direct product of SO(n,R) and I, −I . The cyclic group of k-fold rotations Ck is for everypositive integer k a normal subgroup of O(2,R) and SO(2,R).Relative to suitable orthogonal bases, the isometries are of the form:

Page 140: Vector and Matrices - Some Articles

Orthogonal group 137

where the matrices R1,...,Rk are 2-by-2 rotation matrices in orthogonal planes of rotation. As a special case, known asEuler's rotation theorem, any (non-identity) element of SO(3,R) is rotation about a uniquely defined axis.The orthogonal group is generated by reflections (two reflections give a rotation), as in a Coxeter group,[2] andelements have length at most n (require at most n reflections to generate; this follows from the above classification,noting that a rotation is generated by 2 reflections, and is true more generally for indefinite orthogonal groups, by theCartan–Dieudonné theorem). A longest element (element needing the most reflections) is reflection through theorigin (the map ), though so are other maximal combinations of rotations (and a reflection, in odddimension).The symmetry group of a circle is O(2,R), also called Dih (S1), where S1 denotes the multiplicative group of complexnumbers of absolute value 1.SO(2,R) is isomorphic (as a Lie group) to the circle S1 (circle group). This isomorphism sends the complex numberexp(φi) = cos(φ) + i sin(φ) to the orthogonal matrix

The group SO(3,R), understood as the set of rotations of 3-dimensional space, is of major importance in the sciencesand engineering. See rotation group and the general formula for a 3 × 3 rotation matrix in terms of the axis and theangle.In terms of algebraic topology, for n > 2 the fundamental group of SO(n,R) is cyclic of order 2, and the spinor groupSpin(n) is its universal cover. For n = 2 the fundamental group is infinite cyclic and the universal cover correspondsto the real line (the spinor group Spin(2) is the unique 2-fold cover).

Even and odd dimensionThe structure of the orthogonal group differs in certain respects between even and odd dimensions – for example,

(reflection through the origin) is orientation-preserving in even dimension, but orientation-reversing in odddimension. When this distinction wishes to be emphasized, the groups are generally denoted O(2k) and O(2k+1),reserving n for the dimension of the space ( or ). The letters p or r are also used, indicatingthe rank of the corresponding Lie algebra; in odd dimension the corresponding Lie algebra is whilein even dimension the Lie algebra is

Page 141: Vector and Matrices - Some Articles

Orthogonal group 138

Lie algebraThe Lie algebra associated to the Lie groups O(n,R) and SO(n,R) consists of the skew-symmetric real n-by-nmatrices, with the Lie bracket given by the commutator. This Lie algebra is often denoted by o(n,R) or by so(n,R),and called the orthogonal Lie algebra or special orthogonal Lie algebra. These Lie algebras are the compact realforms of two of the four families of semisimple Lie algebras: in odd dimension while in evendimension More intrinsically, given a vector space with an inner product, the special orthogonal Lie algebra is given by thebivectors on the space, which are sums of simple bivectors (2-blades) . The correspondence is given by themap where is the covector dual to the vector v; in coordinates these are exactlythe elementary skew-symmetric matrices.This characterization is used in interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotationor "curl", hence the name. Generalizing the inner product with a nondegenerate form yields the indefinite orthogonalLie algebras The representation theory of the orthogonal Lie algebras includes both representations corresponding to linearrepresentations of the orthogonal groups, and representations corresponding to projective representations of theorthogonal groups (linear representations of spin groups), the so-called spin representation, which are important inphysics.

3D isometries that leave the origin fixedIsometries of R3 that leave the origin fixed, forming the group O(3,R), can be categorized as:• SO(3,R):

• identity• rotation about an axis through the origin by an angle not equal to 180°• rotation about an axis through the origin by an angle of 180°

• the same with inversion in the origin (x is mapped to −x), i.e. respectively:• inversion in the origin• rotation about an axis by an angle not equal to 180°, combined with reflection in the plane through the origin

perpendicular to the axis• reflection in a plane through the origin.

The 4th and 5th in particular, and in a wider sense the 6th also, are called improper rotations.See also the similar overview including translations.

Conformal groupBeing isometries (preserving distances), orthogonal transforms also preserve angles, and are thus conformal maps,though not all conformal linear transforms are orthogonal. In classical terms this is the difference betweencongruence and similarity, as exemplified by SSS (Side-Side-Side) congruence of triangles and AAA(Angle-Angle-Angle) similarity of triangles. The group of conformal linear maps of Rn is denoted CO(n) for theconformal orthogonal group, and consists of the product of the orthogonal group with the group of dilations. If n isodd, these two subgroups do not intersect, and they are a direct product: ,while if n is even, these subgroups intersect in , so this is not a direct product, but it is a direct product with thesubgroup of dilation by a positive scalar: .Similarly one can define CSO(n); note that this is always : .

Page 142: Vector and Matrices - Some Articles

Orthogonal group 139

Over the complex number fieldOver the field C of complex numbers, O(n,C) and SO(n,C) are complex Lie groups of dimension n(n − 1)/2 over C(which means the dimension over R is twice that). O(n,C) has two connected components, and SO(n,C) is theconnected component containing the identity matrix. For n ≥ 2 these groups are noncompact.Just as in the real case SO(n,C) is not simply connected. For n > 2 the fundamental group of SO(n,C) is cyclic oforder 2 whereas the fundamental group of SO(2,C) is infinite cyclic.The complex Lie algebra associated to O(n,C) and SO(n,C) consists of the skew-symmetric complex n-by-nmatrices, with the Lie bracket given by the commutator.

Topology

Low dimensionalThe low dimensional (real) orthogonal groups are familiar spaces:

The group is double covered by .There are numerous charts on SO(3), due to the importance of 3-dimensional rotations in engineering applications.Here Sn denotes the n-dimensional sphere, RPn the n-dimensional real projective space, and SU(n) the special unitarygroup of degree n.

Homotopy groupsThe homotopy groups of the orthogonal group are related to homotopy groups of spheres, and thus are in generalhard to compute.However, one can compute the homotopy groups of the stable orthogonal group (aka the infinite orthogonal group),defined as the direct limit of the sequence of inclusions

(as the inclusions are all closed inclusions, hence cofibrations, this can also be interpreted as a union).

is a homogeneous space for , and one has the following fiber bundle:

which can be understood as "The orthogonal group acts transitively on the unit sphere , and thestabilizer of a point (thought of as a unit vector) is the orthogonal group of the perpendicular complement, which isan orthogonal group one dimension lower". The map is the natural inclusion.Thus the inclusion is (n − 1)-connected, so the homotopy groups stabilize, and

for : thus the homotopy groups of the stable space equal the lower homotopygroups of the unstable spaces.Via Bott periodicity, , thus the homotopy groups of O are 8-fold periodic, meaning ,and one needs only to compute the lower 8 homotopy groups to compute them all.

Page 143: Vector and Matrices - Some Articles

Orthogonal group 140

Relation to KO-theory

Via the clutching construction, homotopy groups of the stable space O are identified with stable vector bundles onspheres (up to isomorphism), with a dimension shift of 1: .

Setting (to make fit into the periodicity), one obtains:

Computation and Interpretation of homotopy groups

Low-dimensional groups

The first few homotopy groups can be calculated by using the concrete descriptions of low-dimensional groups.

• from orientation-preserving/reversing (this class survives to and hencestably)

yields• , which is spin• , which surjects onto ; this latter thus vanishes.

Lie groups

From general facts about Lie groups, always vanishes, and is free (free abelian).

Vector bundles

From the vector bundle point of view, is vector bundles over , which is two points. Thus over eachpoint, the bundle is trivial, and the non-triviality of the bundle is the difference between the dimensions of the vectorspaces over the two points, so

is dimension.

Page 144: Vector and Matrices - Some Articles

Orthogonal group 141

Loop spaces

Using concrete descriptions of the loop spaces in Bott periodicity, one can interpret higher homotopy of O as lowerhomotopy of simple to analyze spaces. Using , O and O/U have two components, and

have components, and the rest are connected.

Interpretation of homotopy groups

In a nutshell:[3]

• is dimension• is orientation• is spin• is topological quantum field theory.Let , and let be the tautological line bundle over the projective line , and itsclass in K-theory. Noting that , these yield vector bundlesover the corresponding spheres, and• is generated by • is generated by • is generated by • is generated by From the point of view of symplectic geometry, can be interpreted as the Maslovindex, thinking of it as the fundamental group of the stable Lagrangian Grassmannian as

so

Over finite fieldsOrthogonal groups can also be defined over finite fields , where is a power of a prime . When defined oversuch fields, they come in two types in even dimension: and ; and one type in odddimension: .If is the vector space on which the orthogonal group acts, it can be written as a direct orthogonal sum asfollows:

where are hyperbolic lines and contains no singular vectors. If , then is of plus type. Ifthen has odd dimension. If has dimension 2, is of minus type.

In the special case where n = 1, is a dihedral group of order .We have the following formulas for the order of these groups, O(n,q) = A in GL(n,q) : A·At=I , when thecharacteristic is greater than two

If −1 is a square in

If −1 is a nonsquare in

Page 145: Vector and Matrices - Some Articles

Orthogonal group 142

The Dickson invariantFor orthogonal groups, the Dickson invariant is a homomorphism from the orthogonal group to Z/2Z, and is 0 or 1depending on whether an element is the product of an even or odd number of reflections. More concretely, theDickson invariant can be defined as where I is the identity (Taylor 1992,Theorem 11.43). Over fields that are not of characteristic 2 it is equivalent to the determinant: the determinant is −1to the power of the Dickson invariant. Over fields of characteristic 2, the determinant is always 1, so the Dicksoninvariant gives extra information.The special orthogonal group is the kernel of the Dickson invariant and usually has index 2 in O(n,F).[4] When thecharacteristic of F is not 2, the Dickson Invariant is 0 whenever the determinant is 1. Thus when the characteristic isnot 2, SO(n,F) is commonly defined to be the elements of O(n,F) with determinant 1. Each element in O(n,F) hasdeterminant −1 or 1. Thus in characteristic 2, the determinant is always 1.The Dickson invariant can also be defined for Clifford groups and Pin groups in a similar way (in all dimensions).

Orthogonal groups of characteristic 2

Over fields of characteristic 2 orthogonal groups often behave differently. This section lists some of the differences. Traditionally these groups have been known as the hypoabelian groups, but this term is no longer used for these groups.

• Any orthogonal group over any field is generated by reflections, except for a unique example where the vector space is 4-dimensional over the field with 2 elements and the Witt index is 2.[5] Note that a reflection in characteristic two has a slightly different definition. In characteristic two, the reflection orthogonal to a vector u takes a vector v to v + B(v, u)/Q(u)·u, where B is the bilinear form and Q is the quadratic form associated to the orthogonal geometry. Compare this to the Householder reflection of odd characteristic or characteristic zero, which takes v to v − 2·B(v, u)/Q(u)·u.

• The center of the orthogonal group usually has order 1 in characteristic 2, rather than 2, since −I = I.
• In odd dimensions 2n+1 in characteristic 2, orthogonal groups over perfect fields are the same as symplectic groups in dimension 2n. In fact the symmetric form is alternating in characteristic 2, and as the dimension is odd it must have a kernel of dimension 1, and the quotient by this kernel is a symplectic space of dimension 2n, acted upon by the orthogonal group.
• In even dimensions in characteristic 2 the orthogonal group is a subgroup of the symplectic group, because the symmetric bilinear form of the quadratic form is also an alternating form.

The spinor norm

The spinor norm is a homomorphism from an orthogonal group over a field F to

F*/F*²,

the multiplicative group of the field F up to square elements, that takes reflection in a vector of norm n to the image of n in F*/F*².

For the usual orthogonal group over the reals it is trivial, but it is often non-trivial over other fields, or for the orthogonal group of a quadratic form over the reals that is not positive definite.


Galois cohomology and orthogonal groups

In the theory of Galois cohomology of algebraic groups, some further points of view are introduced. They have explanatory value, in particular in relation with the theory of quadratic forms; but were for the most part post hoc, as far as the discovery of the phenomena is concerned. The first point is that quadratic forms over a field can be identified as a Galois H1, or twisted forms (torsors) of an orthogonal group. As an algebraic group, an orthogonal group is in general neither connected nor simply-connected; the latter point brings in the spin phenomena, while the former is related to the discriminant.

The 'spin' name of the spinor norm can be explained by a connection to the spin group (more accurately a pin group). This may now be explained quickly by Galois cohomology (which however postdates the introduction of the term by more direct use of Clifford algebras). The spin covering of the orthogonal group provides a short exact sequence of algebraic groups:

1 → μ2 → Pin_V → O_V → 1.

Here μ2 is the algebraic group of square roots of 1; over a field of characteristic not 2 it is roughly the same as a two-element group with trivial Galois action. The connecting homomorphism from H0(O_V), which is simply the group O_V(F) of F-valued points, to H1(μ2) is essentially the spinor norm, because H1(μ2) is isomorphic to the multiplicative group of the field modulo squares.

There is also the connecting homomorphism from H1 of the orthogonal group to the H2 of the kernel of the spin covering. The cohomology is non-abelian, so that this is as far as we can go, at least with the conventional definitions.

Related groups

The orthogonal groups and special orthogonal groups have a number of important subgroups, supergroups, quotient groups, and covering groups. These are listed below.

The inclusions O(n) ⊂ U(n) ⊂ Sp(n) = USp(2n) and USp(n) ⊂ U(n) ⊂ O(2n) are part of a sequence of 8 inclusions used in a geometric proof of the Bott periodicity theorem, and the corresponding quotient spaces are symmetric spaces of independent interest – for example, U(n)/O(n) is the Lagrangian Grassmannian.

Lie subgroups

In physics, particularly in the areas of Kaluza–Klein compactification, it is important to find out the subgroups of the orthogonal group. The main ones are:

• O(n−1) – preserves an axis
• U(n/2), SU(n/2) – U(n) are those that preserve a compatible complex structure or a compatible symplectic structure – see 2-out-of-3 property; SU(n) also preserves a complex orientation.


Lie supergroups

The orthogonal group O(n) is also an important subgroup of various Lie groups.

Discrete subgroups

As the orthogonal group is compact, discrete subgroups are equivalent to finite subgroups.[6] These subgroups are known as point groups and can be realized as the symmetry groups of polytopes. A very important class of examples are the finite Coxeter groups, which include the symmetry groups of regular polytopes.

Dimension 3 is particularly studied – see point groups in three dimensions, polyhedral groups, and list of spherical symmetry groups. In 2 dimensions, the finite groups are either cyclic or dihedral – see point groups in two dimensions.

Other finite subgroups include:
• Permutation matrices (the Coxeter group An)
• Signed permutation matrices (the Coxeter group Bn); this also equals the intersection of the orthogonal group with the integer matrices.[7]

Covering and quotient groups

The orthogonal group is neither simply connected nor centerless, and thus has both a covering group and a quotient group, respectively:
• Two covering Pin groups, Pin+(n) → O(n) and Pin−(n) → O(n),
• The quotient projective orthogonal group, O(n) → PO(n).
These are all 2-to-1 covers.

For the special orthogonal group, the corresponding groups are:
• Spin group, Spin(n) → SO(n),
• Projective special orthogonal group, SO(n) → PSO(n).
Spin is a 2-to-1 cover, while in even dimension, PSO(2k) is a 2-to-1 cover, and in odd dimension PSO(2k+1) is a 1-to-1 cover, i.e., isomorphic to SO(2k+1). These groups, Spin(n), SO(n), and PSO(n), are Lie group forms of the compact special orthogonal Lie algebra so(n) – Spin is the simply connected form, while PSO is the centerless form, and SO is in general neither.[8]

In dimension 3 and above these are the covers and quotients, while dimension 2 and below are somewhat degenerate; see specific articles for details.


Applications to string theory

The group O(10) is of special importance in superstring theory because it is the symmetry group of 10-dimensional space-time.

Principal homogeneous space: Stiefel manifold

The principal homogeneous space for the orthogonal group O(n) is the Stiefel manifold Vn(Rn) of orthonormal bases (orthonormal n-frames).

In other words, the space of orthonormal bases is like the orthogonal group, but without a choice of base point: given an orthogonal space, there is no natural choice of orthonormal basis, but once one is given one, there is a one-to-one correspondence between bases and the orthogonal group. Concretely, a linear map is determined by where it sends a basis: just as an invertible map can take any basis to any other basis, an orthogonal map can take any orthogonal basis to any other orthogonal basis.

The other Stiefel manifolds Vk(Rn) for k < n of incomplete orthonormal bases (orthonormal k-frames) are still homogeneous spaces for the orthogonal group, but not principal homogeneous spaces: any k-frame can be taken to any other k-frame by an orthogonal map, but this map is not uniquely determined.

Notes

[1] Away from 2, it is equivalent to use bilinear forms or quadratic forms, but at 2 these differ – notably in characteristic 2, but also when generalizing to rings where 2 is not invertible, most significantly the integers, where the notions of even and odd quadratic forms arise.
[2] The analogy is stronger: Weyl groups, a class of (representations of) Coxeter groups, can be considered as simple algebraic groups over the field with one element, and there are a number of analogies between algebraic groups and vector spaces on the one hand, and Weyl groups and sets on the other.
[3] John Baez, "This Week's Finds in Mathematical Physics" week 105 (http://math.ucr.edu/home/baez/week105.html)
[4] (Taylor 1992, page 160)
[5] (Grove 2002, Theorem 6.6 and 14.16)
[6] Infinite subsets of a compact space have an accumulation point and are not discrete.
[7] O(n) ∩ GL(n, Z) equals the signed permutation matrices because an integer vector of norm 1 must have a single non-zero entry, which must be ±1 (if it has two non-zero entries or a larger entry, the norm will be larger than 1), and in an orthogonal matrix these entries must be in different coordinates, which is exactly the signed permutation matrices.
[8] In odd dimension, SO(2k+1) ≅ PSO(2k+1) is centerless (but not simply connected), while in even dimension SO(2k) is neither centerless nor simply connected.

References

• Grove, Larry C. (2002), Classical Groups and Geometric Algebra, Graduate Studies in Mathematics, 39, Providence, R.I.: American Mathematical Society, ISBN 978-0-8218-2019-3, MR1859189
• Taylor, Donald E. (1992), The Geometry of the Classical Groups, 9, Berlin: Heldermann Verlag, ISBN 3-88538-009-9, MR1189139

External links

• John Baez, "This Week's Finds in Mathematical Physics" week 105 (http://math.ucr.edu/home/baez/week105.html)
• John Baez on Octonions (http://math.ucr.edu/home/baez/octonions/node10.html)
• (Italian) n-dimensional Special Orthogonal Group parametrization (http://ansi.altervista.org)


Rotation group

In mechanics and geometry, the rotation group is the group of all rotations about the origin of three-dimensional Euclidean space R3 under the operation of composition.[1] By definition, a rotation about the origin is a linear transformation that preserves the length of vectors (it is an isometry) and preserves the orientation (i.e. handedness) of space. A length-preserving transformation which reverses orientation is an improper rotation, that is a reflection or, more generally, a rotoinversion.

Composing two rotations results in another rotation; every rotation has a unique inverse rotation; and the identity map satisfies the definition of a rotation. Owing to the above properties, the set of all rotations is a group under composition. Moreover, the rotation group has a natural manifold structure for which the group operations are smooth, so it is in fact a Lie group. The rotation group is often denoted SO(3) for reasons explained below.

Length and angle

Besides just preserving length, rotations also preserve the angles between vectors. This follows from the fact that the standard dot product between two vectors u and v can be written purely in terms of length:

u · v = ½ (‖u + v‖² − ‖u‖² − ‖v‖²).

It follows that any length-preserving transformation in R3 preserves the dot product, and thus the angle between vectors. Rotations are often defined as linear transformations that preserve the inner product on R3. This is equivalent to requiring them to preserve length.
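This polarization identity is easy to test numerically. The following is a minimal sketch in NumPy (the vectors and the sample rotation are arbitrary choices, not from the article):

    import numpy as np

    # Polarization identity: the dot product written purely in terms of lengths,
    # u . v = (|u + v|^2 - |u|^2 - |v|^2) / 2
    u = np.array([1.0, 2.0, 3.0])
    v = np.array([-2.0, 0.5, 1.0])

    lhs = u @ v
    rhs = (np.linalg.norm(u + v)**2 - np.linalg.norm(u)**2 - np.linalg.norm(v)**2) / 2
    assert np.isclose(lhs, rhs)

    # Hence any length-preserving linear map preserves angles: rotate both
    # vectors by a quarter turn about the z-axis and compare dot products.
    R = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
    assert np.isclose((R @ u) @ (R @ v), u @ v)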

Orthogonal and rotation matrices

Every rotation maps an orthonormal basis of R3 to another orthonormal basis. Like any linear transformation of finite-dimensional vector spaces, a rotation can always be represented by a matrix. Let R be a given rotation. With respect to the standard basis e1, e2, e3 of R3, the columns of R are given by (Re1, Re2, Re3). Since the standard basis is orthonormal, the columns of R form another orthonormal basis. This orthonormality condition can be expressed in the form

Rᵀ R = R Rᵀ = I,

where Rᵀ denotes the transpose of R and I is the 3 × 3 identity matrix. Matrices for which this property holds are called orthogonal matrices. The group of all 3 × 3 orthogonal matrices is denoted O(3), and consists of all proper and improper rotations.

In addition to preserving length, proper rotations must also preserve orientation. A matrix will preserve or reverse orientation according to whether the determinant of the matrix is positive or negative. For an orthogonal matrix R, note that det Rᵀ = det R implies (det R)² = 1, so that det R = ±1. The subgroup of orthogonal matrices with determinant +1 is called the special orthogonal group, denoted SO(3).

Thus every rotation can be represented uniquely by an orthogonal matrix with unit determinant. Moreover, since composition of rotations corresponds to matrix multiplication, the rotation group is isomorphic to the special orthogonal group SO(3).

Improper rotations correspond to orthogonal matrices with determinant −1, and they do not form a group because the product of two improper rotations is a proper rotation.
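Membership in SO(3) can therefore be checked by testing exactly these two conditions. A minimal NumPy sketch (the helper name is_rotation and the sample matrices are illustrative assumptions):

    import numpy as np

    def is_rotation(R, tol=1e-10):
        """Check the two defining conditions: R^T R = I (orthogonality)
        and det R = +1 (orientation preserved)."""
        R = np.asarray(R, dtype=float)
        orthogonal = np.allclose(R.T @ R, np.eye(3), atol=tol)
        proper = np.isclose(np.linalg.det(R), 1.0, atol=tol)
        return orthogonal and proper

    # A proper rotation (quarter turn about z) ...
    Rz = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]])
    # ... and an improper one (reflection in the xy-plane, det = -1).
    S = np.diag([1.0, 1.0, -1.0])

    print(is_rotation(Rz))  # True
    print(is_rotation(S))   # False: orthogonal, but det = -1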


Group structure

The rotation group is a group under function composition (or equivalently the product of linear transformations). It is a subgroup of the general linear group consisting of all invertible linear transformations of Euclidean space.

Furthermore, the rotation group is nonabelian. That is, the order in which rotations are composed makes a difference. For example, a quarter turn around the positive x-axis followed by a quarter turn around the positive y-axis is a different rotation than the one obtained by first rotating around y and then x.

The orthogonal group, consisting of all proper and improper rotations, is generated by reflections. Every proper rotation is the composition of two reflections, a special case of the Cartan–Dieudonné theorem.

Axis of rotation

Every nontrivial proper rotation in 3 dimensions fixes a unique 1-dimensional linear subspace of R3 which is called the axis of rotation (this is Euler's rotation theorem). Each such rotation acts as an ordinary 2-dimensional rotation in the plane orthogonal to this axis. Since every 2-dimensional rotation can be represented by an angle φ, an arbitrary 3-dimensional rotation can be specified by an axis of rotation together with an angle of rotation about this axis. (Technically, one needs to specify an orientation for the axis and whether the rotation is taken to be clockwise or counterclockwise with respect to this orientation.)

For example, counterclockwise rotation about the positive z-axis by angle φ is given by

R_z(φ) =
[ cos φ   −sin φ   0 ]
[ sin φ    cos φ   0 ]
[ 0        0       1 ]

Given a unit vector n in R3 and an angle φ, let R(φ, n) represent a counterclockwise rotation about the axis through n (with orientation determined by n). Then
• R(0, n) is the identity transformation for any n
• R(φ, n) = R(−φ, −n)
• R(π + φ, n) = R(π − φ, −n).
Using these properties one can show that any rotation can be represented by a unique angle φ in the range 0 ≤ φ ≤ π and a unit vector n such that
• n is arbitrary if φ = 0
• n is unique if 0 < φ < π
• n is unique up to a sign if φ = π (that is, the rotations R(π, ±n) are identical).
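The axis-angle description can be turned into a matrix via Rodrigues' rotation formula, a standard construction not spelled out in the text above. A hedged NumPy sketch; axis_angle_matrix is a name chosen here for illustration:

    import numpy as np

    def axis_angle_matrix(phi, n):
        """R(phi, n): counterclockwise rotation by phi about the unit axis n,
        built from Rodrigues' formula R = I + sin(phi) K + (1 - cos(phi)) K^2,
        where K is the skew-symmetric 'cross product by n' matrix."""
        n = np.asarray(n, dtype=float)
        n = n / np.linalg.norm(n)
        K = np.array([[0.0, -n[2], n[1]],
                      [n[2], 0.0, -n[0]],
                      [-n[1], n[0], 0.0]])
        return np.eye(3) + np.sin(phi) * K + (1 - np.cos(phi)) * K @ K

    # The symmetry R(phi, n) = R(-phi, -n) from the list above:
    n = np.array([1.0, 1.0, 0.0]) / np.sqrt(2)
    assert np.allclose(axis_angle_matrix(0.3, n), axis_angle_matrix(-0.3, -n))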

Topology

Consider the solid ball in R3 of radius π (that is, all points of R3 of distance π or less from the origin). Given the above, for every point in this ball there is a rotation, with axis through the point and the origin, and rotation angle equal to the distance of the point from the origin. The identity rotation corresponds to the point at the center of the ball. Rotations through angles between 0 and −π correspond to points on the same axis and at the same distance from the origin but on the opposite side of the origin. The one remaining issue is that the two rotations through π and through −π are the same. So we identify (or "glue together") antipodal points on the surface of the ball. After this identification, we arrive at a topological space homeomorphic to the rotation group.

Indeed, the ball with antipodal surface points identified is a smooth manifold, and this manifold is diffeomorphic to the rotation group. It is also diffeomorphic to the real 3-dimensional projective space RP3, so the latter can also serve as a topological model for the rotation group.

These identifications illustrate that SO(3) is connected but not simply connected. As to the latter, in the ball with antipodal surface points identified, consider the path running from the "north pole" straight through the interior down to the south pole. This is a closed loop, since the north pole and the south pole are identified. This loop cannot be shrunk to a point, since no matter how you deform the loop, the start and end point have to remain antipodal, or else the loop will "break open". In terms of rotations, this loop represents a continuous sequence of rotations about the z-axis starting and ending at the identity rotation (i.e. a series of rotations through an angle φ where φ runs from 0 to 2π).

Surprisingly, if you run through the path twice, i.e., run from north pole down to south pole, jump back to the north pole (using the fact that north and south poles are identified), and then again run from north pole down to south pole, so that φ runs from 0 to 4π, you get a closed loop which can be shrunk to a single point: first move the paths continuously to the ball's surface, still connecting north pole to south pole twice. The second half of the path can then be mirrored over to the antipodal side without changing the path at all. Now we have an ordinary closed loop on the surface of the ball, connecting the north pole to itself along a great circle. This circle can be shrunk to the north pole without problems.

The same argument can be performed in general, and it shows that the fundamental group of SO(3) is the cyclic group of order 2. In physics applications, the non-triviality of the fundamental group allows for the existence of objects known as spinors, and is an important tool in the development of the spin-statistics theorem.

The universal cover of SO(3) is a Lie group called Spin(3). The group Spin(3) is isomorphic to the special unitary group SU(2); it is also diffeomorphic to the unit 3-sphere S3 and can be understood as the group of unit quaternions (i.e. those with absolute value 1). The connection between quaternions and rotations, commonly exploited in computer graphics, is explained in quaternions and spatial rotations. The map from S3 onto SO(3) that identifies antipodal points of S3 is a surjective homomorphism of Lie groups, with kernel {±1}. Topologically, this map is a two-to-one covering map.

Lie algebra

Since SO(3) is a Lie subgroup of the general linear group GL(3), its Lie algebra can be identified with a Lie subalgebra of gl(3), the algebra of 3×3 matrices, with the commutator given by

[A, B] = AB − BA.

The condition that a matrix A belong to SO(3) is that

(*)  A Aᵀ = I,  det A = 1.

If A(t) is a one-parameter subgroup of SO(3) parametrised by t, then differentiating (*) with respect to t gives

A′(t) A(t)ᵀ + A(t) A′(t)ᵀ = 0,

which at t = 0 (where A(0) = I) reads A′(0) + A′(0)ᵀ = 0, and so the Lie algebra so(3) consists of all skew-symmetric 3×3 matrices.
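Conversely, exponentiating a skew-symmetric matrix lands back in SO(3). A small sketch using SciPy's matrix exponential (the particular matrix A is an arbitrary example):

    import numpy as np
    from scipy.linalg import expm

    # A skew-symmetric matrix A (A^T = -A), i.e. an element of so(3) ...
    A = np.array([[0.0, -0.5, 0.2],
                  [0.5, 0.0, -0.3],
                  [-0.2, 0.3, 0.0]])

    # ... exponentiates to an element of SO(3):
    R = expm(A)
    assert np.allclose(R.T @ R, np.eye(3))    # orthogonal
    assert np.isclose(np.linalg.det(R), 1.0)  # determinant +1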

Representations of rotations

We have seen that there are a variety of ways to represent rotations:
• as orthogonal matrices with determinant 1,
• by axis and rotation angle,
• in quaternion algebra with versors and the map S3 → SO(3) (see quaternions and spatial rotations).
Another method is to specify an arbitrary rotation by a sequence of rotations about some fixed axes. See:
• Euler angles
See charts on SO(3) for further discussion.


Generalizations

The rotation group generalizes quite naturally to n-dimensional Euclidean space, Rn. The group of all proper and improper rotations in n dimensions is called the orthogonal group, O(n), and the subgroup of proper rotations is called the special orthogonal group, SO(n).

In special relativity, one works in a 4-dimensional vector space, known as Minkowski space, rather than 3-dimensional Euclidean space. Unlike Euclidean space, Minkowski space has an inner product with an indefinite signature. However, one can still define generalized rotations which preserve this inner product. Such generalized rotations are known as Lorentz transformations, and the group of all such transformations is called the Lorentz group.

The rotation group SO(3) can be described as a subgroup of E+(3), the Euclidean group of direct isometries of R3. This larger group is the group of all motions of a rigid body: each of these is a combination of a rotation about an arbitrary axis and a translation along the axis, or put differently, a combination of an element of SO(3) and an arbitrary translation.

In general, the rotation group of an object is the symmetry group within the group of direct isometries; in other words, the intersection of the full symmetry group and the group of direct isometries. For chiral objects it is the same as the full symmetry group.

Notes

[1] Jacobson (2009), p. 34, Ex. 14.

References

• A. W. Joshi, Elements of Group Theory for Physicists (New Age International, 2007), pp. 111ff.
• Weisstein, Eric W., "Rotation Group" (http://mathworld.wolfram.com/RotationGroup.html) from MathWorld.
• Mathematical Methods in the Physical Sciences by Mary L. Boas, pp. 120, 127, 129, 155ff and 535.
• Jacobson, Nathan (2009), Basic Algebra, 1 (2nd ed.), Dover, ISBN 978-0-486-47189-1


Vector-valued function

[Figure: A graph of the vector-valued function r(t) = <2 cos t, 4 sin t, t>, indicating a range of solutions and the vector when evaluated near t = 19.5.]

A vector-valued function, also referred to as a vector function, is a mathematical function of one or more variables whose range is a set of multidimensional vectors or infinite-dimensional vectors. Often the input of a vector-valued function is a scalar, but in general the input can be a vector of either complex or real variables.

Example

A common example of a vector-valued function is one that depends on a single real number parameter t, often representing time, producing a vector v(t) as the result. In terms of the standard unit vectors i, j, k of Cartesian 3-space, these specific types of vector-valued functions are given by expressions such as

• r(t) = f(t)i + g(t)j + h(t)k,

where f(t), g(t) and h(t) are the coordinate functions of the parameter t. The vector r(t) has its tail at the origin and its head at the coordinates evaluated by the function.

The vector shown in the graph to the right is the evaluation of the function near t = 19.5 (between 6π and 6.5π; i.e., somewhat more than 3 rotations). The spiral is the path traced by the tip of the vector as t increases from zero through 8π.

Vector functions can also be referred to in a different notation:

• r(t) = ⟨f(t), g(t), h(t)⟩


Properties

The domain of a vector-valued function is the intersection of the domains of the functions f, g, and h.

Derivative of a three-dimensional vector function

Many vector-valued functions, like scalar-valued functions, can be differentiated by simply differentiating the components in the Cartesian coordinate system. Thus, if

r(t) = f(t)i + g(t)j + h(t)k

is a vector-valued function, then

r′(t) = f′(t)i + g′(t)j + h′(t)k.

The vector derivative admits the following physical interpretation: if r(t) represents the position of a particle, then the derivative is the velocity of the particle,

v(t) = r′(t).

Likewise, the derivative of the velocity is the acceleration,

a(t) = v′(t).
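Componentwise differentiation can be carried out symbolically. A minimal SymPy sketch, using the helix r(t) = <2 cos t, 4 sin t, t> from the figure above:

    import sympy as sp

    t = sp.symbols('t')
    # The position function graphed earlier: r(t) = <2 cos t, 4 sin t, t>
    r = sp.Matrix([2*sp.cos(t), 4*sp.sin(t), t])

    v = r.diff(t)   # velocity: <-2 sin t, 4 cos t, 1>
    a = v.diff(t)   # acceleration: <-2 cos t, -4 sin t, 0>
    print(v.T)
    print(a.T)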

Partial derivative

The partial derivative of a vector function a with respect to a scalar variable q is defined as[1]

∂a/∂q = Σi (∂ai/∂q) ei,

where ai is the scalar component of a in the direction of ei. It is also called the direction cosine of a and ei or their dot product. The vectors e1, e2, e3 form an orthonormal basis fixed in the reference frame in which the derivative is being taken.

Ordinary derivative

If a is regarded as a vector function of a single scalar variable, such as time t, then the equation above reduces to the first ordinary time derivative of a with respect to t,[1]

da/dt = Σi (dai/dt) ei.

Total derivative

If the vector a is a function of a number n of scalar variables qr (r = 1, ..., n), and each qr is only a function of time t, then the ordinary derivative of a with respect to t can be expressed, in a form known as the total derivative, as[1]

da/dt = Σr (∂a/∂qr)(dqr/dt) + ∂a/∂t.

Some authors prefer to use capital D to indicate the total derivative operator, as in D/Dt. The total derivative differs from the partial time derivative in that the total derivative accounts for changes in a due to the time variance of the variables qr.


Reference frames

Whereas for scalar-valued functions there is only a single possible reference frame, to take the derivative of a vector-valued function requires the choice of a reference frame (at least when a fixed Cartesian coordinate system is not implied as such). Once a reference frame has been chosen, the derivative of a vector-valued function can be computed using techniques similar to those for computing derivatives of scalar-valued functions. A different choice of reference frame will, in general, produce a different derivative function. The derivative functions in different reference frames have a specific kinematical relationship.

Derivative of a vector function with nonfixed bases

The above formulas for the derivative of a vector function rely on the assumption that the basis vectors e1, e2, e3 are constant, that is, fixed in the reference frame in which the derivative of a is being taken, and therefore e1, e2, e3 each has a derivative of identically zero. This often holds true for problems dealing with vector fields in a fixed coordinate system, or for simple problems in physics. However, many complex problems involve the derivative of a vector function in multiple moving reference frames, which means that the basis vectors will not necessarily be constant. In such a case, where the basis vectors e1, e2, e3 are fixed in reference frame E but not in reference frame N, the more general formula for the ordinary time derivative of a vector in reference frame N is[1]

Nda/dt = Σi (dai/dt) ei + Σi ai Ndei/dt,

where the superscript N to the left of the derivative operator indicates the reference frame in which the derivative is taken. As shown previously, the first term on the right hand side is equal to the derivative of a in the reference frame where e1, e2, e3 are constant, reference frame E. It also can be shown that the second term on the right hand side is equal to the relative angular velocity of the two reference frames cross multiplied with the vector a itself.[1] Thus, after substitution, the formula relating the derivative of a vector function in two reference frames is[1]

Nda/dt = Eda/dt + NωE × a,

where NωE is the angular velocity of the reference frame E relative to the reference frame N.

One common example where this formula is used is to find the velocity of a space-borne object, such as a rocket, in the inertial reference frame using measurements of the rocket's velocity relative to the ground. The velocity NvR in inertial reference frame N of a rocket R located at position rR can be found using the formula

Nd(rR)/dt = Ed(rR)/dt + NωE × rR,

where NωE is the angular velocity of the Earth relative to the inertial frame N. Since velocity is the derivative of position, NvR and EvR are the derivatives of rR in reference frames N and E, respectively. By substitution,

NvR = EvR + NωE × rR,

where EvR is the velocity vector of the rocket as measured from a reference frame E that is fixed to the Earth.
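As a rough numerical illustration of this last formula (all figures here are made-up sample values, except Earth's rotation rate):

    import numpy as np

    # Angular velocity of the Earth-fixed frame E relative to the inertial
    # frame N (rad/s, about the z-axis), and illustrative rocket data.
    omega_NE = np.array([0.0, 0.0, 7.292e-5])   # Earth's rotation rate
    r_R = np.array([6.4e6, 0.0, 0.0])           # rocket position (m), sample value
    v_E = np.array([0.0, 100.0, 50.0])          # velocity measured in frame E (m/s)

    # NvR = EvR + NωE × rR
    v_N = v_E + np.cross(omega_NE, r_R)
    print(v_N)   # the ground-measured velocity plus the frame-rotation term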


Derivative and vector multiplication

The derivative of the products of vector functions behaves similarly to the derivative of the products of scalar functions.[2] Specifically, in the case of scalar multiplication of a vector, if p is a scalar variable function of q,[1]

d(p a)/dq = (dp/dq) a + p (da/dq).

In the case of dot multiplication, for two vectors a and b that are both functions of q,[1]

d(a · b)/dq = (da/dq) · b + a · (db/dq).

Similarly, the derivative of the cross product of two vector functions is[1]

d(a × b)/dq = (da/dq) × b + a × (db/dq).
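These product rules can be checked symbolically. A minimal SymPy sketch with arbitrarily chosen component functions:

    import sympy as sp

    q = sp.symbols('q')
    a = sp.Matrix([q, q**2, sp.sin(q)])
    b = sp.Matrix([sp.cos(q), 1, q])

    # d/dq (a . b) = (da/dq) . b + a . (db/dq)
    lhs = sp.diff(a.dot(b), q)
    rhs = a.diff(q).dot(b) + a.dot(b.diff(q))
    assert sp.simplify(lhs - rhs) == 0

    # d/dq (a x b) = (da/dq) x b + a x (db/dq), order of factors preserved
    lhs = a.cross(b).diff(q)
    rhs = a.diff(q).cross(b) + a.cross(b.diff(q))
    assert sp.simplify(lhs - rhs) == sp.zeros(3, 1)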

Derivative of an n-dimensional vector function

A function f of a real number t with values in the space Rn can be written as f(t) = (f1(t), f2(t), ..., fn(t)). Its derivative equals

f′(t) = (f1′(t), f2′(t), ..., fn′(t)).

If f is a function of several variables, say of t ∈ Rm, then the partial derivatives of the components of f form an n × m matrix, called the Jacobian matrix of f.

Infinite-dimensional vector functions

If the values of a function f lie in an infinite-dimensional vector space X, such as a Hilbert space, then f may be called an infinite-dimensional vector function.

Functions with values in a Hilbert space

If the argument of f is a real number and X is a Hilbert space, then the derivative of f at a point t can be defined as in the finite-dimensional case:

f′(t) = lim_{h→0} (f(t + h) − f(t)) / h.

Most results of the finite-dimensional case also hold in the infinite-dimensional case, mutatis mutandis. Differentiation can also be defined for functions of several variables (e.g., t ∈ Rn or even t ∈ Y, where Y is an infinite-dimensional vector space).

N.B. If X is a Hilbert space, then one can easily show that any derivative (and any other limit) can be computed componentwise: if

f = (f1, f2, f3, ...)

(i.e., f = Σi fi ei, where e1, e2, e3, ... is an orthonormal basis of the space X), and f′(t) exists, then

f′(t) = (f1′(t), f2′(t), f3′(t), ...).

However, the existence of a componentwise derivative does not guarantee the existence of a derivative, as componentwise convergence in a Hilbert space does not guarantee convergence with respect to the actual topology of the Hilbert space.


Other infinite-dimensional vector spaces

Most of the above holds for other topological vector spaces X too. However, not as many classical results hold in the Banach space setting; for example, an absolutely continuous function with values in a suitable Banach space need not have a derivative anywhere. Moreover, in most Banach space settings there are no orthonormal bases.

Notes

[1] Kane & Levinson 1996, pp. 29–37
[2] In fact, these relations are derived by applying the product rule componentwise.

References

• Kane, Thomas R.; Levinson, David A. (1996), "1–9 Differentiation of Vector Functions", Dynamics Online, Sunnyvale, California: OnLine Dynamics, Inc., pp. 29–37

External links

• Vector-valued functions and their properties (from Lake Tahoe Community College) (http://ltcconline.net/greenl/courses/202/vectorFunctions/vectorFunctions.htm)
• Weisstein, Eric W., "Vector Function" (http://mathworld.wolfram.com/VectorFunction.html) from MathWorld.
• Everything2 article (http://www.everything2.com/index.pl?node_id=1525585)
• 3 Dimensional vector-valued functions (from East Tennessee State University) (http://math.etsu.edu/MultiCalc/Chap1/Chap1-6/part1.htm)

Gramian matrix

In linear algebra, the Gramian matrix (or Gram matrix or Gramian) of a set of vectors v1, ..., vn in an inner product space is the Hermitian matrix of inner products, whose entries are given by Gij = ⟨vi, vj⟩.

An important application is to compute linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero.

It is named after Jørgen Pedersen Gram.

Examples

Most commonly, the vectors are elements of a Euclidean space, or are functions in an L² space, such as continuous functions on a compact interval [a, b] (which are a subspace of L²([a, b])).

Given real-valued functions f1, ..., fn on the interval [t0, tf], the Gram matrix G = [Gij] is given by the standard inner product on functions:

Gij = ∫_{t0}^{tf} fi(t) fj(t) dt.

Given a real matrix A, the matrix AᵀA is a Gram matrix (of the columns of A), while the matrix AAᵀ is the Gram matrix of the rows of A.

For a general bilinear form B on a finite-dimensional vector space over any field we can define a Gram matrix G attached to a set of vectors v1, ..., vn by Gij = B(vi, vj). The matrix will be symmetric if the bilinear form B is symmetric.
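A small NumPy sketch of the AᵀA construction and the linear-independence test (the matrix A is an arbitrary example with a deliberately dependent column):

    import numpy as np

    # Gram matrix G_ij = <v_i, v_j> of the columns of A is A^T A.
    A = np.array([[1.0, 0.0, 2.0],
                  [0.0, 1.0, 2.0],
                  [1.0, 1.0, 4.0]])   # third column = 2*(first + second)
    G = A.T @ A

    # The Gram determinant vanishes exactly when the vectors are dependent.
    print(np.linalg.det(G))   # ~0: the columns are linearly dependent

    B = np.eye(3)             # an orthonormal basis ...
    print(B.T @ B)            # ... has the identity as its Gram matrix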


Applications

• If the vectors are centered random variables, the Gramian is proportional to the covariance matrix, with the scaling determined by the number of elements in the vector.
• In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
• In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.
• Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).
• In the finite element method, the Gram matrix arises from approximating a function from a finite-dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite-dimensional subspace.
• In machine learning, kernel functions are often represented as Gram matrices.[1]

Properties

Positive semidefinite

The Gramian matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. This set of vectors is not in general unique: the Gramian matrix of any orthonormal basis is the identity matrix.

The infinite-dimensional analog of this statement is Mercer's theorem.

Change of basis

Under a change of basis represented by an invertible matrix P, the Gram matrix will change by a matrix congruence to PᵀGP.

Gram determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

G(v1, ..., vn) = det [⟨vi, vj⟩].

Geometrically, the Gram determinant is the square of the volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the Gram determinant is nonzero (if and only if the Gram matrix is nonsingular).

The Gram determinant can also be expressed in terms of the exterior product of vectors by

G(v1, ..., vn) = ‖v1 ∧ v2 ∧ ⋯ ∧ vn‖².


References

[1] Lanckriet, G. R. G., N. Cristianini, P. Bartlett, L. E. Ghaoui, and M. I. Jordan. "Learning the kernel matrix with semidefinite programming." The Journal of Machine Learning Research 5 (2004): 29.

• Barth, Nils (1999). "The Gramian and K-Volume in N-Space: Some Classical Results in Linear Algebra" (http://www.jyi.org/volumes/volume2/issue1/articles/barth.html). Journal of Young Investigators 2.

External links

• Volumes of parallelograms (http://www.owlnet.rice.edu/~fjones/chap8.pdf) by Frank Jones

Lagrange's identity

In algebra, Lagrange's identity, named after Joseph Louis Lagrange, is:[1] [2]

( Σ_{k=1..n} ak² ) ( Σ_{k=1..n} bk² ) − ( Σ_{k=1..n} ak bk )² = Σ_{1≤i<j≤n} (ai bj − aj bi)²,

which applies to any two sets {a1, a2, ..., an} and {b1, b2, ..., bn} of real or complex numbers (or more generally, elements of a commutative ring). This identity is a special form of the Binet–Cauchy identity.

In a more compact vector notation, Lagrange's identity is expressed as:[3]

‖a‖² ‖b‖² − (a · b)² = Σ_{1≤i<j≤n} (ai bj − aj bi)²,

where a and b are n-dimensional vectors with components that are real numbers. The extension to complex numbers requires the interpretation of the dot product as an inner product or Hermitian dot product. Explicitly, for complex numbers, Lagrange's identity can be written in the form:[4]

( Σ_{k=1..n} |ak|² ) ( Σ_{k=1..n} |bk|² ) − | Σ_{k=1..n} ak b̄k |² = Σ_{1≤i<j≤n} |ai bj − aj bi|²,

involving the absolute value.[5]

Since the right-hand side of the identity is clearly non-negative, it implies Cauchy's inequality in the finite-dimensional real coordinate space ℝn and its complex counterpart ℂn.
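A quick numerical check of the identity, and of the resulting Cauchy inequality, in NumPy (random sample vectors):

    import numpy as np

    rng = np.random.default_rng(0)
    a = rng.standard_normal(5)
    b = rng.standard_normal(5)

    # (sum a_i^2)(sum b_i^2) - (sum a_i b_i)^2 == sum_{i<j} (a_i b_j - a_j b_i)^2
    lhs = (a @ a) * (b @ b) - (a @ b)**2
    rhs = sum((a[i]*b[j] - a[j]*b[i])**2
              for i in range(5) for j in range(i + 1, 5))
    assert np.isclose(lhs, rhs)
    # The right side is manifestly >= 0, which is Cauchy's inequality.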

Lagrange's identity and exterior algebra

In terms of the wedge product, Lagrange's identity can be written

‖a ∧ b‖² = ‖a‖² ‖b‖² − (a · b)².

Hence, it can be seen as a formula which gives the length of the wedge product of two vectors, which is the area of the parallelogram they define, in terms of the dot products of the two vectors, as

‖a ∧ b‖ = √( ‖a‖² ‖b‖² − (a · b)² ).


Lagrange's identity and vector calculus

In three dimensions, Lagrange's identity asserts that the square of the area of a parallelogram in space is equal to the sum of the squares of its projections onto the Cartesian coordinate planes. Algebraically, if a and b are vectors in ℝ3 with lengths |a| and |b|, then Lagrange's identity can be written in terms of the cross product and dot product:[6] [7]

|a|² |b|² − (a · b)² = |a × b|².

Using the definition of angle based upon the dot product (see also Cauchy–Schwarz inequality), the left-hand side is

|a|² |b|² (1 − cos² θ) = |a|² |b|² sin² θ,

where θ is the angle formed by the vectors a and b. The area of a parallelogram with sides |a| and |b| and angle θ is known in elementary geometry to be

|a| |b| sin θ,

so the left-hand side of Lagrange's identity is the squared area of the parallelogram. The cross product appearing on the right-hand side is defined by

a × b = (a2 b3 − a3 b2) i + (a3 b1 − a1 b3) j + (a1 b2 − a2 b1) k,

which is a vector whose components are equal in magnitude to the areas of the projections of the parallelogram onto the yz, zx, and xy planes, respectively.

Seven dimensions

For a and b as vectors in ℝ7, Lagrange's identity takes on the same form as in the case of ℝ3:[8]

|a × b|² = |a|² |b|² − (a · b)².

However, the cross product in 7 dimensions does not share all the properties of the cross product in 3 dimensions. For example, the direction of a × b in 7 dimensions may be the same as c × d even though c and d are linearly independent of a and b. Also the seven-dimensional cross product is not compatible with the Jacobi identity.[8]

Quaternions

A quaternion p is defined as the sum of a scalar t and a vector v:

p = t + v.

The product of two quaternions p = t + v and q = s + w is defined by

pq = st − v · w + sv + tw + v × w.

The quaternionic conjugate of q is defined by

q̄ = s − w,

and the norm squared is

|q|² = q q̄ = s² + |w|².

The multiplicativity of the norm in the quaternion algebra provides, for quaternions p and q:[9]

|pq| = |p| |q|.

The quaternions p and q are called imaginary if their scalar part is zero; equivalently, if

p = v,  q = w.

Lagrange's identity is just the multiplicativity of the norm of imaginary quaternions,

|vw| = |v| |w|,

since, by definition,

|vw|² = (v · w)² + |v × w|².


Proof of algebraic form

The vector form follows from the Binet–Cauchy identity by setting ci = ai and di = bi. The second version follows by letting ci and di denote the complex conjugates of ai and bi, respectively.

Here is also a direct proof.[10] The expansion of the first term on the left side is:

(1)  ( Σ_i ai² )( Σ_j bj² ) = Σ_i ai² bi² + Σ_{i<j} (ai² bj² + aj² bi²),

which means that the product of a column of as and a row of bs yields (a sum of elements of) a square of abs, which can be broken up into a diagonal and a pair of triangles on either side of the diagonal.

The second term on the left side of Lagrange's identity can be expanded as:

(2)  ( Σ_i ai bi )² = Σ_i ai² bi² + 2 Σ_{i<j} ai bi aj bj,

which means that a symmetric square can be broken up into its diagonal and a pair of equal triangles on either side of the diagonal.

To expand the summation on the right side of Lagrange's identity, first expand the square within the summation:

Σ_{i<j} (ai bj − aj bi)² = Σ_{i<j} (ai² bj² − 2 ai bj aj bi + aj² bi²).

Distribute the summation on the right side,

Σ_{i<j} ai² bj² + Σ_{i<j} aj² bi² − 2 Σ_{i<j} ai bj aj bi.

Now exchange the indices i and j of the second term on the right side, and permute the b factors of the third term, yielding:

(3)  Σ_{i<j} (ai² bj² + aj² bi²) − 2 Σ_{i<j} ai bi aj bj.

Back to the left side of Lagrange's identity: it has two terms, given in expanded form by Equations (1) and (2). The first term on the right side of Equation (2) ends up canceling out the first term on the right side of Equation (1), yielding

(1) − (2) = Σ_{i<j} (ai² bj² + aj² bi²) − 2 Σ_{i<j} ai bi aj bj,

which is the same as Equation (3), so Lagrange's identity is indeed an identity, Q.E.D.

References

[1] Eric W. Weisstein (2003). CRC Concise Encyclopedia of Mathematics (http://books.google.com/?id=8LmCzWQYh_UC&pg=PA228) (2nd ed.). CRC Press. ISBN 1584883472.
[2] Robert E. Greene and Steven G. Krantz (2006). "Exercise 16". Function Theory of One Complex Variable (3rd ed.). American Mathematical Society. p. 22. ISBN 0821839624.
[3] Vladimir A. Boichenko, Gennadiĭ Alekseevich Leonov, Volker Reitmann (2005). Dimension Theory for Ordinary Differential Equations (http://books.google.com/?id=9bN1-b_dSYsC&pg=PA26). Vieweg+Teubner Verlag. p. 26. ISBN 3519004372.
[4] J. Michael Steele (2004). "Exercise 4.4: Lagrange's identity for complex numbers" (http://books.google.com/?id=bvgBdZKEYAEC&pg=PA68). The Cauchy-Schwarz Master Class: An Introduction to the Art of Mathematical Inequalities. Cambridge University Press. pp. 68–69. ISBN 052154677X.
[5] Greene, Robert E.; Krantz, Steven G. (2002). Function Theory of One Complex Variable. Providence, R.I.: American Mathematical Society. p. 22, Exercise 16. ISBN 978-0-8218-2905-9; Palka, Bruce P. (1991). An Introduction to Complex Function Theory. Berlin, New York: Springer-Verlag. p. 27, Exercise 4.22. ISBN 978-0-387-97427-9.
[6] Howard Anton, Chris Rorres (2010). "Relationships between dot and cross products" (http://books.google.com/?id=1PJ-WHepeBsC&pg=PA162). Elementary Linear Algebra: Applications Version (10th ed.). John Wiley and Sons. p. 162. ISBN 0470432055.
[7] Pertti Lounesto (2001). Clifford Algebras and Spinors (http://books.google.com/?id=kOsybQWDK4oC&pg=PA94) (2nd ed.). Cambridge University Press. p. 94. ISBN 0521005515.
[8] Pertti Lounesto (2001). Clifford Algebras and Spinors (http://books.google.com/?id=kOsybQWDK4oC&printsec=frontcover) (2nd ed.). Cambridge University Press. ISBN 0521005515. See particularly § 7.4 Cross products in ℝ7, p. 96.
[9] Jack B. Kuipers (2002). "§5.6 The norm" (http://books.google.com/?id=_2sS4mC0p-EC&pg=PA111). Quaternions and Rotation Sequences: A Primer with Applications to Orbits. Princeton University Press. p. 111. ISBN 0691102988.
[10] See, for example, Frank Jones, Rice University (http://www.owlnet.rice.edu/~fjones/), page 4 in Chapter 7 of a book still to be published.

Quaternion

[Figure: Graphical representation of quaternion unit products as 90° rotations in 4D space: ij = k, ji = −k, ij = −ji.]

In mathematics, the quaternions are a number system that extends the complex numbers. They were first described by Irish mathematician Sir William Rowan Hamilton in 1843 and applied to mechanics in three-dimensional space. A striking feature of quaternions is that the product of two quaternions is noncommutative, meaning that the product of two quaternions depends on which factor is to the left of the multiplication sign and which factor is to the right. Hamilton defined a quaternion as the quotient of two directed lines in a three-dimensional space[1] or equivalently as the quotient of two vectors.[2] Quaternions can also be represented as the sum of a scalar and a vector.

Quaternions find uses in both theoretical and applied mathematics, in particular for calculations involving three-dimensional rotations such as in three-dimensional computer graphics and computer vision. They can be used alongside other methods, such as Euler angles and matrices, or as an alternative to them depending on the application.

In modern language, quaternions form a four-dimensional associative normed division algebra over the real numbers, and thus also form a domain. In fact, the quaternions were the first noncommutative division algebra to be discovered.[3] The algebra of quaternions is often denoted by H (for Hamilton), or in blackboard bold (Unicode U+210D, ℍ). It can also be given by the Clifford algebra classifications Cℓ0,2(R) = Cℓ⁰3,0(R). The algebra H holds a special place in analysis since, according to the Frobenius theorem, it is one of only two finite-dimensional division rings containing the real numbers as a proper subring, the other being the complex numbers.


The unit quaternions can therefore be thought of as a choice of a group structure on the 3-sphere S3, the group Spin(3), the group SU(2), or the universal cover of SO(3).

History

[Figure: Quaternion plaque on Brougham (Broom) Bridge, Dublin, which says: "Here as he walked by on the 16th of October 1843 Sir William Rowan Hamilton in a flash of genius discovered the fundamental formula for quaternion multiplication i² = j² = k² = ijk = −1 & cut it on a stone of this bridge".]

Quaternion algebra was introduced by Irish mathematician Sir William Rowan Hamilton in 1843.[4] Important precursors to this work included Euler's four-square identity (1748) and Olinde Rodrigues' parameterization of the general rotation by four parameters (1840), but neither of these authors treated the four-parameter rotations as an algebra.[5] [6] Gauss had also discovered quaternions in 1819, but this work was only published in 1900.[7]

Hamilton knew that the complex numbers could be viewed as points in a plane, and he was looking for a way to do the same for points in space. Points in space can be represented by their coordinates, which are triples of numbers, and for many years Hamilton had known how to add and subtract triples of numbers. But he had been stuck on the problem of multiplication and division: he did not know how to take the quotient of two points in space.

The breakthrough finally came on Monday 16 October 1843 in Dublin, when Hamilton was on his way to the Royal Irish Academy where he was going to preside at a council meeting. While walking along the towpath of the Royal Canal with his wife, the concept behind quaternions was taking shape in his mind. Hamilton could not resist the impulse to carve the formula for the quaternions,

i² = j² = k² = ijk = −1,

into the stone of Brougham Bridge as he passed by it.

On the following day, he wrote a letter to his friend and fellow mathematician John T. Graves, describing the train of thought that led to his discovery. This letter was subsequently published in the London, Edinburgh and Dublin Philosophical Magazine and Journal of Science, vol. xxv (1844), pp. 489–95. In the letter, Hamilton states:

And here there dawned on me the notion that we must admit, in some sense, a fourth dimension of space for the purpose of calculating with triples ... An electric circuit seemed to close, and a spark flashed forth.

Hamilton called a quadruple with these rules of multiplication a quaternion, and he devoted the remainder of his life to studying and teaching them. He founded a school of "quaternionists" and popularized them in several books. The last and longest, Elements of Quaternions, had 800 pages and was published shortly after his death.

After Hamilton's death, his pupil Peter Tait continued promoting quaternions. At this time, quaternions were a mandatory examination topic in Dublin. Topics in physics and geometry that would now be described using vectors, such as kinematics in space and Maxwell's equations, were described entirely in terms of quaternions. There was even a professional research association, the Quaternion Society, devoted to the study of quaternions and other hypercomplex number systems.

From the mid-1880s, quaternions began to be displaced by vector analysis, which had been developed by Josiah Willard Gibbs and Oliver Heaviside. Vector analysis described the same phenomena as quaternions, so it borrowed ideas and terms liberally from the classical quaternion literature. However, vector analysis was conceptually simpler and notationally cleaner, and eventually quaternions were relegated to a minor role in mathematics and physics. A side effect of this transition is that Hamilton's work is difficult to comprehend for many modern readers because Hamilton's original definitions are unfamiliar and his writing style is prolix and opaque.


However, quaternions have had a revival since the late 20th century, primarily due to their utility in describing spatial rotations. Representations of rotations by quaternions are more compact and faster to compute than representations by matrices, and unlike Euler angles they are not susceptible to gimbal lock. For this reason, quaternions are used in computer graphics,[8] computer vision, robotics, control theory, signal processing, attitude control, physics, bioinformatics, molecular dynamics computer simulation and orbital mechanics. For example, it is common for spacecraft attitude-control systems to be commanded in terms of quaternions. Quaternions have received another boost from number theory because of their relation to quadratic forms.

Since 1989, the Department of Mathematics of the National University of Ireland, Maynooth has organized a pilgrimage, where scientists (including physicists Murray Gell-Mann in 2002, Steven Weinberg in 2005, and mathematician Andrew Wiles in 2003) take a walk from Dunsink Observatory to the Royal Canal bridge where, unfortunately, no trace of Hamilton's carving remains.

Definition

As a set, the quaternions H are equal to R4, a four-dimensional vector space over the real numbers. H has three operations: addition, scalar multiplication, and quaternion multiplication. The sum of two elements of H is defined to be their sum as elements of R4. Similarly the product of an element of H by a real number is defined to be the same as the product in R4. To define the product of two elements in H requires a choice of basis for R4. The elements of this basis are customarily denoted as 1, i, j, and k. Every element of H can be uniquely written as a linear combination of these basis elements, that is, as a1 + bi + cj + dk, where a, b, c, and d are real numbers. The basis element 1 will be the identity element of H, meaning that multiplication by 1 does nothing, and for this reason, elements of H are usually written a + bi + cj + dk, suppressing the basis element 1. Given this basis, associative quaternion multiplication is defined by first defining the products of basis elements and then defining all other products using the distributive law.

Multiplication of basis elements

The equations

i² = j² = k² = ijk = −1,

where i, j, and k are basis elements of H, determine all the possible products of i, j, and k. For example, since

ijk = −1,

right-multiplying both sides by k gives

ijk² = −k,  −ij = −k,  ij = k.

All the other possible products can be determined by similar methods, resulting in

ij = k,  ji = −k,  jk = i,  kj = −i,  ki = j,  ik = −j,

which can be arranged as a table whose rows represent the left factor of the product and whose columns represent the right factor:


Quaternion multiplication

×  |  1  |  i  |  j  |  k
1  |  1  |  i  |  j  |  k
i  |  i  | −1  |  k  | −j
j  |  j  | −k  | −1  |  i
k  |  k  |  j  | −i  | −1

Hamilton product

For two elements a1 + b1i + c1j + d1k and a2 + b2i + c2j + d2k, their Hamilton product (a1 + b1i + c1j + d1k)(a2 + b2i + c2j + d2k) is determined by the products of the basis elements and the distributive law. The distributive law makes it possible to expand the product so that it is a sum of products of basis elements. This gives the following expression:

a1a2 + a1b2 i + a1c2 j + a1d2 k + b1a2 i + b1b2 i² + b1c2 ij + b1d2 ik + c1a2 j + c1b2 ji + c1c2 j² + c1d2 jk + d1a2 k + d1b2 ki + d1c2 kj + d1d2 k².

Now the basis elements can be multiplied using the rules given above to get:[4]

a1a2 − b1b2 − c1c2 − d1d2
+ (a1b2 + b1a2 + c1d2 − d1c2) i
+ (a1c2 − b1d2 + c1a2 + d1b2) j
+ (a1d2 + b1c2 − c1b2 + d1a2) k.
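The Hamilton product is straightforward to implement directly from this formula. A minimal Python sketch, representing a quaternion as a 4-tuple (a, b, c, d); hamilton is a name chosen here for illustration:

    def hamilton(p, q):
        """Hamilton product of quaternions given as (a, b, c, d) = a + bi + cj + dk,
        expanded with the distributive law and the basis-element table above."""
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
    assert hamilton(i, j) == k              # ij = k
    assert hamilton(j, i) == (0, 0, 0, -1)  # ji = -k: noncommutative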

Ordered list form

Using the basis 1, i, j, k of H makes it possible to write H as a set of quadruples:

H = { (a, b, c, d) : a, b, c, d ∈ R }.

Then the basis elements are:

1 = (1, 0, 0, 0),  i = (0, 1, 0, 0),  j = (0, 0, 1, 0),  k = (0, 0, 0, 1),

and the formulas for addition and multiplication are:

(a1, b1, c1, d1) + (a2, b2, c2, d2) = (a1 + a2, b1 + b2, c1 + c2, d1 + d2),
(a1, b1, c1, d1)(a2, b2, c2, d2) = (a1a2 − b1b2 − c1c2 − d1d2, a1b2 + b1a2 + c1d2 − d1c2, a1c2 − b1d2 + c1a2 + d1b2, a1d2 + b1c2 − c1b2 + d1a2).


Remarks

Scalar and vector parts

A number of the form a + 0i + 0j + 0k, where a is a real number, is called real, and a number of the form 0 + bi + cj + dk, where b, c, and d are real numbers, is called pure imaginary. If a + bi + cj + dk is any quaternion, then a is called its scalar part and bi + cj + dk is called its vector part. The scalar part of a quaternion is always real, and the vector part is always pure imaginary. Even though every quaternion is a vector in a four-dimensional vector space, it is common to define a vector to mean a pure imaginary quaternion. With this convention, a vector is the same as an element of the vector space R3.

Hamilton called pure imaginary quaternions right quaternions[9] [10] and real numbers (considered as quaternions with zero vector part) scalar quaternions.

Noncommutative

Unlike multiplication of real or complex numbers, multiplication of quaternions is not commutative: for example, ij = k, while ji = −k. The noncommutativity of multiplication has some unexpected consequences, among them that polynomial equations over the quaternions can have more distinct solutions than the degree of the polynomial. The equation q² = −1, for instance, has infinitely many quaternion solutions q = bi + cj + dk with b² + c² + d² = 1, so that these solutions lie on the two-dimensional surface of a sphere centered on zero in the three-dimensional subspace of quaternions with zero real part. This sphere intersects the complex plane at the two poles i and −i.

Historical impact on physics

In 1984 the European Journal of Physics (5:25–32) published P.R. Girard's essay "The quaternion group and modern physics". It "shows how various physical covariance groups: SO(3), the Lorentz group, the general relativity group, the Clifford algebra SU(2) and the conformal group can be easily related to the quaternion group". Girard begins by discussing group representations and by representing some space groups of crystallography. He proceeds to kinematics of rigid body motion. Next he uses complex quaternions (biquaternions) to represent the Lorentz group of special relativity, including the Thomas precession. He cites five authors, beginning with Ludwik Silberstein, that use a potential function of one quaternion variable to express Maxwell's equations in a single differential equation. Concerning general relativity, he expresses the Runge–Lenz vector. He mentions the Clifford biquaternions (split-biquaternions) as an instance of Clifford algebra. Finally, invoking the reciprocal of a biquaternion, Girard describes conformal maps on spacetime. Among the fifty references he includes Alexander Macfarlane and his Bulletin of the Quaternion Society.

A more personal view was written by Jim Lambek in 1995. In the Mathematical Intelligencer (17(4):7) he contributed "If Hamilton Had Prevailed: Quaternions in Physics", which recalls the use of biquaternions: "My own interest as a graduate student was raised by the inspiring book by Silberstein". He concludes saying "I firmly believe that quaternions can supply a shortcut for pure mathematicians who wish to familiarize themselves with certain aspects of theoretical physics."

Sums of four squares

Quaternions are also used in one of the proofs of Lagrange's four-square theorem in number theory, which states that every nonnegative integer is the sum of four integer squares. As well as being an elegant theorem in its own right, Lagrange's four-square theorem has useful applications in areas of mathematics outside number theory, such as combinatorial design theory. The quaternion-based proof uses Hurwitz quaternions, a subring of the ring of all quaternions for which there is an analog of the Euclidean algorithm.


Conjugation, the norm, and reciprocal

Conjugation of quaternions is analogous to conjugation of complex numbers and to transposition (also known as reversal) of elements of Clifford algebras. To define it, let q = a + bi + cj + dk be a quaternion. The conjugate of q is the quaternion a − bi − cj − dk. It is denoted by q*, q̄,[4] qᵗ, or q̃. Conjugation is an involution, meaning that it is its own inverse, so conjugating an element twice returns the original element. The conjugate of a product of two quaternions is the product of the conjugates in the reverse order. That is, if p and q are quaternions, then (pq)* = q*p*, not p*q*.

Unlike the situation in the complex plane, the conjugation of a quaternion can be expressed entirely with multiplication and addition:

q* = −½ (q + iqi + jqj + kqk).

Conjugation can be used to extract the scalar and vector parts of a quaternion. The scalar part of p is (p + p*)/2, and the vector part of p is (p − p*)/2.

The square root of the product of a quaternion with its conjugate is called its norm and is denoted ||q||. (Hamilton called this quantity the tensor of q, but this conflicts with modern usage. See tensor.) It has the formula

||q|| = √(qq*) = √(q*q) = √(a² + b² + c² + d²).

This is always a non-negative real number, and it is the same as the Euclidean norm on H considered as the vector space R4. Multiplying a quaternion by a real number scales its norm by the absolute value of the number. That is, if α is real, then

||αq|| = |α| ||q||.

This is a special case of the fact that the norm is multiplicative, meaning that

||pq|| = ||p|| ||q||

for any two quaternions p and q. Multiplicativity is a consequence of the formula for the conjugate of a product. Alternatively multiplicativity follows directly from the corresponding property of determinants of square matrices and the formula

||q||² = det [ a + bi   c + di ]
             [ −c + di  a − bi ],

where i denotes the usual imaginary unit.

This norm makes it possible to define the distance d(p, q) between p and q as the norm of their difference:

d(p, q) = ||p − q||.

This makes H into a metric space. Addition and multiplication are continuous in the metric topology.

A unit quaternion is a quaternion of norm one. Dividing a non-zero quaternion q by its norm produces a unit quaternion Uq called the versor of q:

Uq = q / ||q||.

Every quaternion has a polar decomposition q = ||q|| Uq.

Using conjugation and the norm makes it possible to define the reciprocal of a quaternion. The product of a quaternion with its reciprocal should equal 1, and the considerations above imply that the product of q and q*/||q||² (in either order) is 1. So the reciprocal of q is defined to be

q⁻¹ = q* / ||q||².

This makes it possible to divide two quaternions p and q in two different ways. That is, their quotient can be either pq⁻¹ or q⁻¹p. The notation p/q is ambiguous because it does not specify whether q divides on the left or the right.
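Continuing the hamilton sketch above, the conjugate, norm, and reciprocal translate directly into code (conj, norm, and inverse are illustrative names):

    import math

    def conj(q):
        a, b, c, d = q
        return (a, -b, -c, -d)

    def norm(q):
        # ||q|| = sqrt(q q*) = sqrt(a^2 + b^2 + c^2 + d^2)
        return math.sqrt(sum(x*x for x in q))

    def inverse(q):
        # q^{-1} = q* / ||q||^2
        n2 = sum(x*x for x in q)
        return tuple(x / n2 for x in conj(q))

    q = (1.0, 2.0, 3.0, 4.0)
    print(hamilton(q, inverse(q)))           # ~(1, 0, 0, 0)
    print(norm(hamilton(q, q)), norm(q)**2)  # multiplicativity: both equal 30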


Algebraic properties

[Figure: Cayley graph of Q8. The red arrows represent multiplication on the right by i, and the green arrows represent multiplication on the right by j.]

The set H of all quaternions is a vector space over the real numberswith dimension 4. (In comparison, the real numbers have dimension 1,the complex numbers have dimension 2, and the octonions havedimension 8.) The quaternions have a multiplication that is associativeand that distributes over vector addition, but which is not commutative.Therefore the quaternions H are a non-commutative associative algebraover the real numbers. Even though H contains copies of the complexnumbers, it is not an associative algebra over the complex numbers.

Because it is possible to divide quaternions, they form a division algebra. This is a structure similar to a field except for the commutativity of multiplication. Finite-dimensional associative division algebras over the real numbers are very rare. The Frobenius theorem states that there are exactly three: R, C, and H. The norm makes the quaternions into a normed algebra, and normed division algebras over the reals are also very rare: Hurwitz's theorem says that there are only four: R, C, H, and O (the octonions). The quaternions are also an example of a composition algebra and of a unital Banach algebra.

Because the product of any two basis vectors is plus or minus another basis vector, the set {±1, ±i, ±j, ±k} forms a group under multiplication. This group is called the quaternion group and is denoted Q8.[11] The real group ring of Q8 is a ring RQ8 which is also an eight-dimensional vector space over R. It has one basis vector for each element of Q8. The quaternions are the quotient ring of RQ8 by the ideal generated by the elements 1 + (−1), i + (−i), j + (−j), and k + (−k). Here the first term in each of the differences is one of the basis elements 1, i, j, and k, and the second term is one of the basis elements −1, −i, −j, and −k, not the additive inverses of 1, i, j, and k.

Quaternions and the geometry of R3

Because the vector part of a quaternion is a vector in R3, the geometry of R3 is reflected in the algebraic structure of the quaternions. Many operations on vectors can be defined in terms of quaternions, and this makes it possible to apply quaternion techniques wherever spatial vectors arise. For instance, this is true in electrodynamics and 3D computer graphics.

For the remainder of this section, i, j, and k will denote both imaginary[12] basis vectors of H and a basis for R3. Notice that replacing i by −i, j by −j, and k by −k sends a vector to its additive inverse, so the additive inverse of a vector is the same as its conjugate as a quaternion. For this reason, conjugation is sometimes called the spatial inverse.

Choose two imaginary quaternions p = b1i + c1j + d1k and q = b2i + c2j + d2k. Their dot product is

p · q = b1b2 + c1c2 + d1d2.

This is equal to the scalar parts of p*q, qp*, pq*, and q*p. (Note that the vector parts of these four products are different.) It also has the formulas

p · q = ½ (p*q + q*p) = ½ (pq* + qp*).

The cross product of p and q relative to the orientation determined by the ordered basis i, j, and k is

p × q = (c1d2 − d1c2) i + (d1b2 − b1d2) j + (b1c2 − c1b2) k.

(Recall that the orientation is necessary to determine the sign.) This is equal to the vector part of the product pq (as quaternions), as well as the vector part of −q*p*. It also has the formula

p × q = ½ (pq − qp).

In general, let p and q be quaternions (possibly non-imaginary), and write

p = ps + vp,   q = qs + vq,

where ps and qs are the scalar parts of p and q, and vp and vq are the vector parts of p and q. Then we have the formula

pq = (ps qs − vp · vq) + ps vq + qs vp + vp × vq.

This shows that the noncommutativity of quaternion multiplication comes from the multiplication of pure imaginary quaternions. It also shows that two quaternions commute if and only if their vector parts are collinear.
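As a numerical sanity check, the following short Python/NumPy sketch (ours, not from the article) confirms that for pure-imaginary p and q the scalar part of pq is −(p · q) and the vector part is p × q:

    import numpy as np

    def qmul(p, q):
        # Hamilton product, quaternions as arrays [a, b, c, d]
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                         a1*b2 + b1*a2 + c1*d2 - d1*c2,
                         a1*c2 - b1*d2 + c1*a2 + d1*b2,
                         a1*d2 + b1*c2 - c1*b2 + d1*a2])

    p = np.array([0.0, 1.0, 2.0, 3.0])    # p = i + 2j + 3k
    q = np.array([0.0, -1.0, 4.0, 0.5])   # q = -i + 4j + 0.5k
    pq = qmul(p, q)
    assert np.isclose(pq[0], -(p[1:] @ q[1:]))          # scalar part = -(p . q)
    assert np.allclose(pq[1:], np.cross(p[1:], q[1:]))  # vector part = p x q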

Matrix representations

Just as complex numbers can be represented as matrices, so can quaternions. There are at least two ways of representing quaternions as matrices in such a way that quaternion addition and multiplication correspond to matrix addition and matrix multiplication. One is to use 2×2 complex matrices, and the other is to use 4×4 real matrices. In the terminology of abstract algebra, these are injective homomorphisms from H to the matrix rings M2(C) and M4(R), respectively.

Using 2×2 complex matrices, the quaternion a + bi + cj + dk can be represented as

[  a + bi    c + di ]
[ −c + di    a − bi ]

This representation has the following properties:
• Complex numbers (c = d = 0) correspond to diagonal matrices.
• The norm of a quaternion (the square root of a product with its conjugate, as with complex numbers) is the square root of the determinant of the corresponding matrix.[13]
• The conjugate of a quaternion corresponds to the conjugate transpose of the matrix.
• Restricted to unit quaternions, this representation provides an isomorphism between S3 and SU(2). The latter group is important for describing spin in quantum mechanics; see Pauli matrices.

Using 4×4 real matrices, that same quaternion can be written as

[ a   −b   −c   −d ]
[ b    a   −d    c ]
[ c    d    a   −b ]
[ d   −c    b    a ]

In this representation, the conjugate of a quaternion corresponds to the transpose of the matrix. The fourth power of the norm of a quaternion is the determinant of the corresponding matrix. Complex numbers are block diagonal matrices with two 2×2 blocks.
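A short NumPy sketch (ours; M and qmul are hypothetical helper names) can confirm that the 2×2 complex representation turns quaternion multiplication into matrix multiplication, the conjugate into the conjugate transpose, and the squared norm into the determinant:

    import numpy as np

    def M(q):
        # 2x2 complex matrix representing a + bi + cj + dk
        a, b, c, d = q
        return np.array([[ a + b*1j,  c + d*1j],
                         [-c + d*1j,  a - b*1j]])

    def qmul(p, q):
        # Hamilton product on (a, b, c, d) tuples
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
                a1*b2 + b1*a2 + c1*d2 - d1*c2,
                a1*c2 - b1*d2 + c1*a2 + d1*b2,
                a1*d2 + b1*c2 - c1*b2 + d1*a2)

    p, q = (1, 2, 3, 4), (0.5, -1, 0, 2)
    assert np.allclose(M(p) @ M(q), M(qmul(p, q)))   # multiplication is preserved
    qc = (q[0], -q[1], -q[2], -q[3])                 # the conjugate of q
    assert np.allclose(M(qc), M(q).conj().T)         # conjugate <-> conjugate transpose
    assert np.isclose(np.linalg.det(M(q)).real, sum(x*x for x in q))  # det = ||q||^2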


Quaternions as pairs of complex numbers

Quaternions can be represented as pairs of complex numbers. From this perspective, quaternions are the result of applying the Cayley–Dickson construction to the complex numbers. This is a generalization of the construction of the complex numbers as pairs of real numbers.

Let C2 be a two-dimensional vector space over the complex numbers. Choose a basis consisting of two elements 1 and j. A vector in C2 can be written in terms of the basis elements 1 and j as

(a + bi) 1 + (c + di) j.

If we define j2 = −1 and ij = −ji, then we can multiply two vectors using the distributive law. Writing k in place of the product ij leads to the same rules for multiplication as the usual quaternions. Therefore the above vector of complex numbers corresponds to the quaternion a + bi + cj + dk. If we write the elements of C2 as ordered pairs and quaternions as quadruples, then the correspondence is

(a + bi, c + di) ↔ (a, b, c, d).
Square roots of −1

In the complex numbers there are just two numbers, i and −i, whose square is −1. In H there are infinitely many square roots of minus one: the quaternion square roots of −1 form the surface of the unit sphere in 3-space. To see this, let q = a + bi + cj + dk be a quaternion, and assume that its square is −1. In terms of a, b, c, and d, this means

a2 − b2 − c2 − d2 = −1,   2ab = 0,   2ac = 0,   2ad = 0.

To satisfy the last three equations, either a = 0 or b, c, and d are all 0. The latter is impossible because a is a real number and the first equation would imply that a2 = −1. Therefore a = 0 and b2 + c2 + d2 = 1. In other words, a quaternion squares to −1 if and only if it is a vector (that is, pure imaginary) with norm 1. By definition, the set of all such vectors forms the unit sphere.

Only negative real quaternions have an infinite number of square roots. All others have just two (or one in the case of 0).

The identification of the square roots of minus one in H was given by Hamilton[14] but was frequently omitted in other texts. By 1971 the sphere was included by Sam Perlis in his three-page exposition included in Historical Topics in Algebra (page 39), published by the National Council of Teachers of Mathematics. More recently, the sphere of square roots of minus one is described in Ian R. Porteous's book Clifford Algebras and the Classical Groups (Cambridge, 1995) in proposition 8.13 on page 60. Also in Conway (2003), On Quaternions and Octonions, we read on page 40: "any imaginary unit may be called i, and perpendicular one j, and their product k", another statement of the sphere.
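Numerically, the sphere of square roots is immediate from the product formula for pure quaternions: for q = v with zero real part, q2 = −(v · v) + v × v, and v × v = 0. A tiny NumPy sketch (ours):

    import numpy as np

    rng = np.random.default_rng(0)
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)            # any point on the unit sphere
    # q = v is pure imaginary with norm 1; its square is -(v . v) + v x v = -1
    print(-(v @ v), np.cross(v, v))   # -> -1.0 and [0. 0. 0.]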


H as a union of complex planes

Each square root of −1 creates a distinct copy of the complex numbers inside the quaternions. If q2 = −1, then the copy is determined by the function

a + bi ↦ a + bq.

In the language of abstract algebra, each is an injective ring homomorphism from C to H. The images of the embeddings corresponding to q and −q are identical.

Every non-real quaternion lies in a unique copy of C. Write q as the sum of its scalar part and its vector part:

q = qs + v.

Decompose the vector part further as the product of its norm and its versor:

v = ||v|| · Uv.

(Note that this is not the same as qs + ||q|| · Uq.) The versor of the vector part of q, Uv, is a pure imaginary unit quaternion, so its square is −1. Therefore it determines a copy of the complex numbers by the function

a + bi ↦ a + b Uv.

Under this function, q is the image of the complex number qs + ||v|| i. Thus H is the union of complex planes intersecting in a common real line, where the union is taken over the sphere of square roots of minus one.

Commutative subrings

The relationship of quaternions to each other within the complex subplanes of H can also be identified and expressed in terms of commutative subrings. Specifically, since two quaternions p and q commute (pq = qp) only if they lie in the same complex subplane of H, the profile of H as a union of complex planes arises when one seeks to find all commutative subrings of the quaternion ring. This method of commutative subrings is also used to profile the coquaternions and 2 × 2 real matrices.

Functions of a quaternion variable

Like functions of a complex variable, functions of a quaternion variable suggest useful physical models. For example, the original electric and magnetic fields described by Maxwell were functions of a quaternion variable.

Exponential, logarithm, and power

The exponential and logarithm of a quaternion are relatively inexpensive to compute, particularly compared with the cost of those operations for other charts on SO(3) such as rotation matrices, which require computing the matrix exponential and matrix logarithm respectively. Given a quaternion,

q = a + bi + cj + dk = a + v,

the exponential is computed as

exp(q) = e^a ( cos ||v|| + (v / ||v||) sin ||v|| )

and

ln(q) = ln ||q|| + (v / ||v||) arccos( a / ||q|| ).[15]

It follows that the polar decomposition of a quaternion may be written

q = ||q|| exp(n θ) = ||q|| ( cos θ + n sin θ ),

where the angle θ and the unit vector n are defined by:

a = ||q|| cos θ

and

v = n ||v|| = n ||q|| sin θ.

Any unit quaternion may be expressed in polar form as exp(n θ) = cos θ + n sin θ. The power of a quaternion raised to an arbitrary (real) exponent α is given by:

q^α = ||q||^α exp(n α θ) = ||q||^α ( cos(α θ) + n sin(α θ) ).
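These formulas translate directly into code. The following NumPy sketch (ours; qexp, qlog and qpow are hypothetical helper names) implements the exponential and logarithm, with a quaternion stored as an array [a, b, c, d]:

    import numpy as np

    def qexp(q):
        a, v = q[0], np.asarray(q[1:], dtype=float)
        t = np.linalg.norm(v)                  # ||v||
        s = np.sinc(t / np.pi)                 # sin(t)/t, well-defined at t = 0
        return np.exp(a) * np.concatenate(([np.cos(t)], s * v))

    def qlog(q):
        a, v = q[0], np.asarray(q[1:], dtype=float)
        n, t = np.linalg.norm(q), np.linalg.norm(v)
        u = v / t if t > 0 else np.zeros(3)    # versor of the vector part
        return np.concatenate(([np.log(n)], np.arccos(a / n) * u))

    def qpow(q, alpha):
        # ||q||^alpha (cos(alpha*theta) + n sin(alpha*theta)) = exp(alpha * log q)
        return qexp(alpha * qlog(q))

    q = np.array([1.0, 2.0, -0.5, 0.3])
    assert np.allclose(qexp(qlog(q)), q)       # log then exp recovers q
    assert np.allclose(qpow(q, 1.0), q)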
Three-dimensional and four-dimensional rotation groups

The term "conjugation", besides the meaning given above, can also mean taking an element a to r a r−1 where r is some non-zero quaternion. All elements that are conjugate to a given element (in this sense of the word conjugate) have the same real part and the same norm of the vector part. (Thus the conjugate in the other sense is one of the conjugates in this sense.)

Thus the multiplicative group of non-zero quaternions acts by conjugation on the copy of R³ consisting of quaternions with real part equal to zero. Conjugation by a unit quaternion (a quaternion of absolute value 1) with real part cos(θ) is a rotation by an angle 2θ, the axis of the rotation being the direction of the imaginary part. The advantages of quaternions are:
1. Nonsingular representation (compared with Euler angles, for example).
2. More compact (and faster) than matrices.
3. Pairs of unit quaternions represent a rotation in 4D space (see SO(4): Algebra of 4D rotations).

The set of all unit quaternions (versors) forms a 3-dimensional sphere S³ and a group (a Lie group) under multiplication. S³ is the double cover of the group SO(3,R) of real orthogonal 3×3 matrices of determinant 1, since two unit quaternions correspond to every rotation under the above correspondence.

The image of a subgroup of S³ is a point group, and conversely, the preimage of a point group is a subgroup of S³. The preimage of a finite point group is called by the same name, with the prefix binary. For instance, the preimage of the icosahedral group is the binary icosahedral group.

The group S³ is isomorphic to SU(2), the group of complex unitary 2×2 matrices of determinant 1.

Let A be the set of quaternions of the form a + bi + cj + dk where a, b, c, and d are either all integers or all rational numbers with odd numerator and denominator 2. The set A is a ring (in fact a domain) and a lattice and is called the ring of Hurwitz quaternions. There are 24 unit quaternions in this ring, and they are the vertices of a 24-cell regular polytope with Schläfli symbol {3,4,3}.
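The conjugation action just described (v ↦ q v q−1, where a unit quaternion with real part cos(θ) rotates by 2θ) can be sketched in a few lines of Python (ours; qmul and rotate are hypothetical helper names):

    import numpy as np

    def qmul(p, q):
        a1, b1, c1, d1 = p
        a2, b2, c2, d2 = q
        return np.array([a1*a2 - b1*b2 - c1*c2 - d1*d2,
                         a1*b2 + b1*a2 + c1*d2 - d1*c2,
                         a1*c2 - b1*d2 + c1*a2 + d1*b2,
                         a1*d2 + b1*c2 - c1*b2 + d1*a2])

    def rotate(v, axis, angle):
        # a unit quaternion with real part cos(angle/2) rotates by `angle`
        axis = np.asarray(axis, dtype=float)
        axis /= np.linalg.norm(axis)
        q = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
        qinv = np.concatenate(([q[0]], -q[1:]))   # conjugate = inverse for unit q
        return qmul(qmul(q, np.concatenate(([0.0], v))), qinv)[1:]

    print(rotate([1.0, 0.0, 0.0], axis=[0, 0, 1], angle=np.pi / 2))  # ~ [0, 1, 0]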

Generalizations

If F is any field with characteristic different from 2, and a and b are elements of F, one may define a four-dimensional unitary associative algebra over F with basis 1, i, j, and ij, where i2 = a, j2 = b and ij = −ji (so (ij)2 = −ab). These algebras are called quaternion algebras and are isomorphic to the algebra of 2×2 matrices over F or form division algebras over F, depending on the choice of a and b.

Quaternions as the even part of Cℓ3,0(R)

The usefulness of quaternions for geometrical computations can be generalised to other dimensions, by identifying the quaternions as the even part Cℓ+3,0(R) of the Clifford algebra Cℓ3,0(R). This is an associative multivector algebra built up from fundamental basis elements σ1, σ2, σ3 using the product rules

σ12 = σ22 = σ32 = 1,   σi σj = −σj σi   (i ≠ j).

If these fundamental basis elements are taken to represent vectors in 3D space, then it turns out that the reflection of a vector r in a plane perpendicular to a unit vector w can be written:

r′ = −w r w.

Two reflections make a rotation by an angle twice the angle between the two reflection planes, so

r′ = σ2 σ1 r σ1 σ2

corresponds to a rotation of 180° in the plane containing σ1 and σ2. This is very similar to the corresponding quaternion formula,

r′ = −k r k.

In fact, the two are identical, if we make the identification

k = σ2 σ1,   i = σ3 σ2,   j = σ1 σ3,

and it is straightforward to confirm that this preserves the Hamilton relations

i2 = j2 = k2 = i j k = −1.

In this picture, quaternions correspond not to vectors but to bivectors, quantities with magnitude and orientations associated with particular 2D planes rather than 1D directions. The relation to complex numbers becomes clearer, too: in 2D, with two vector directions σ1 and σ2, there is only one bivector basis element σ1σ2, so only one imaginary. But in 3D, with three vector directions, there are three bivector basis elements σ1σ2, σ2σ3, σ3σ1, so three imaginaries.

This reasoning extends further. In the Clifford algebra Cℓ4,0(R), there are six bivector basis elements, since with four different basic vector directions, six different pairs and therefore six different linearly independent planes can be defined. Rotations in such spaces using these generalisations of quaternions, called rotors, can be very useful for applications involving homogeneous coordinates. But it is only in 3D that the number of basis bivectors equals the number of basis vectors, and each bivector can be identified as a pseudovector.

Dorst et al. identify the following advantages for placing quaternions in this wider setting:[16]

• Rotors are natural and non-mysterious in geometric algebra and easily understood as the encoding of a double reflection.

• In geometric algebra, a rotor and the objects it acts on live in the same space. This eliminates the need to change representations and to encode new data structures and methods (which is required when augmenting linear algebra with quaternions).

• A rotor is universally applicable to any element of the algebra, not just vectors and other quaternions, but also lines, planes, circles, spheres, rays, and so on.

• In the conformal model of Euclidean geometry, rotors allow the encoding of rotation, translation and scaling in a single element of the algebra, universally acting on any element. In particular, this means that rotors can represent rotations around an arbitrary axis, whereas quaternions are limited to an axis through the origin.

• Rotor-encoded transformations make interpolation particularly straightforward.

For further detail about the geometrical uses of Clifford algebras, see Geometric algebra.

Brauer group

The quaternions are "essentially" the only (non-trivial) central simple algebra (CSA) over the real numbers, in the sense that every CSA over the reals is Brauer equivalent to either the reals or the quaternions. Explicitly, the Brauer group of the reals consists of two classes, represented by the reals and the quaternions, where the Brauer group is the set of all CSAs, up to the equivalence relation of one CSA being a matrix ring over another. By the Artin–Wedderburn theorem (specifically, Wedderburn's part), CSAs are all matrix algebras over a division algebra, and thus the quaternions are the only non-trivial division algebra over the reals.


CSAs – rings over a field, which are simple algebras (have no non-trivial 2-sided ideals, just as with fields) whose center is exactly the field – are a noncommutative analog of extension fields, and are more restrictive than general ring extensions. The fact that the quaternions are the only non-trivial CSA over the reals (up to equivalence) may be compared with the fact that the complex numbers are the only non-trivial field extension of the reals.

Quotes• "I regard it as an inelegance, or imperfection, in quaternions, or rather in the state to which it has been hitherto

unfolded, whenever it becomes or seems to become necessary to have recourse to x, y, z, etc." — William RowanHamilton (ed. Quoted in a letter from Tait to Cayley).

• "Time is said to have only one dimension, and space to have three dimensions. […] The mathematical quaternionpartakes of both these elements; in technical language it may be said to be "time plus space", or "space plus time":and in this sense it has, or at least involves a reference to, four dimensions. And how the One of Time, of Spacethe Three, Might in the Chain of Symbols girdled be." — William Rowan Hamilton (Quoted in R.P. Graves, "Lifeof Sir William Rowan Hamilton").

• "Quaternions came from Hamilton after his really good work had been done; and, though beautifully ingenious,have been an unmixed evil to those who have touched them in any way, including Clerk Maxwell." — LordKelvin, 1892.

• "Neither matrices nor quaternions and ordinary vectors were banished from these ten [additional] chapters. For, inspite of the uncontested power of the modern Tensor Calculus, those older mathematical languages continue, inmy opinion, to offer conspicuous advantages in the restricted field of special relativity. Moreover, in science aswell as in every-day life, the mastery of more than one language is also precious, as it broadens our views, isconducive to criticism with regard to, and guards against hypostasy [weak-foundation] of, the matter expressed bywords or mathematical symbols." — Ludwik Silberstein, preparing the second edition of his Theory of Relativityin 1924.

• "… quaternions appear to exude an air of nineteenth century decay, as a rather unsuccessful species in thestruggle-for-life of mathematical ideas. Mathematicians, admittedly, still keep a warm place in their hearts for theremarkable algebraic properties of quaternions but, alas, such enthusiasm means little to the harder-headedphysical scientist." — Simon L. Altmann, 1986.

• "...the thing about a Quaternion 'is' is that we're obliged to encounter it in more than one guise. As a vectorquotient. As a way of plotting complex numbers along three axes instead of two. As a list of instructions forturning one vector into another..... And considered subjectively, as an act of becoming longer or shorter, while atthe same time turning, among axes whose unit vector is not the familiar and comforting 'one' but the altogetherdisquieting square root of minus one. If you were a vector, mademoiselle, you would begin in the 'real' world,change your length, enter an 'imaginary' reference system, rotate up to three different ways, and return to 'reality'a new person. Or vector..." — Thomas Pynchon, Against the Day, 2006.


Notes

[1] Hamilton (http://books.google.com/?id=TCwPAAAAIAAJ&printsec=frontcover&dq=quaternion+quotient+lines+tridimensional+space+time#PPA60,M1). Hodges and Smith. 1853. p. 60.
[2] Hardy 1881 pg. 32 (http://books.google.com/?id=YNE2AAAAMAAJ&printsec=frontcover&dq=quotient+two+vectors+called+quaternion#PPA32,M1). Ginn, Heath, & co. 1881.
[3] Journal of Theoretics. http://www.journaloftheoretics.com/articles/3-6/qm-pub.pdf
[4] See Hazewinkel et al. (2004), p. 12.
[5] Conway, John Horton; Smith, Derek Alan (2003). On quaternions and octonions: their geometry, arithmetic, and symmetry (http://books.google.com/books?id=E_HCwwxMbfMC&pg=PA9). p. 9. ISBN 1-56881-134-9.
[6] Robert E. Bradley, Charles Edward Sandifer (2007). Leonhard Euler: life, work and legacy (http://books.google.com/books?id=75vJL_Y-PvsC&pg=PA193). p. 193. ISBN 0-444-52728-1. They mention Wilhelm Blaschke's 1959 claim that "the quaternions were first identified by L. Euler in a letter to Goldbach written on May 4, 1748" and comment that "it makes no sense whatsoever to say that Euler 'identified' the quaternions in this letter... this claim is absurd."
[7] Simon L. Altmann (December 1989). "Hamilton, Rodrigues, and the Quaternion Scandal" (http://www.jstor.org/stable/2689481). Mathematics Magazine 62 (5): 306.
[8] Ken Shoemake (1985). "Animating Rotation with Quaternion Curves" (http://www.cs.cmu.edu/~kiranb/animation/p245-shoemake.pdf). Computer Graphics 19 (3): 245–254. doi:10.1145/325165.325242. Presented at SIGGRAPH '85. Tomb Raider (1996) is often cited as the first mass-market computer game to have used quaternions to achieve smooth 3D rotation. See e.g. Nick Bobick, "Rotating Objects Using Quaternions (http://www.gamasutra.com/view/feature/3278/rotating_objects_using_quaternions.php)", Game Developer magazine, July 1998.
[9] Hamilton, Sir William Rowan (1866). Hamilton Elements of Quaternions article 285 (http://books.google.com/?id=fIRAAAAAIAAJ&pg=PA117&dq=quaternion#PPA310,M1). p. 310.
[10] Hardy Elements of quaternions (http://dlxs2.library.cornell.edu/cgi/t/text/pageviewer-idx?c=math;cc=math;q1=rightquaternion;rgn=full text;idno=05140001;didno=05140001;view=image;seq=81). library.cornell.edu. p. 65.
[11] "quaternion group" (http://www.wolframalpha.com/input/?i=quaternion+group). Wolframalpha.com.
[12] Vector Analysis (http://books.google.com/?id=RC8PAAAAIAAJ&printsec=frontcover&dq=right+tensor+dyadic#PPA428,M1). Gibbs–Wilson. 1901. p. 428.
[13] Wolframalpha.com (http://www.wolframalpha.com/input/?i=det+a+b*i,+c+d*i,+-c+d*i,+a-b*i)
[14] Hamilton (1899). Elements of Quaternions (2nd ed.). p. 244. ISBN 1108001718.
[15] Lce.hut.fi (http://www.lce.hut.fi/~ssarkka/pub/quat.pdf)
[16] Quaternions and Geometric Algebra (http://www.geometricalgebra.net/quaternions.html). Accessed 2008-09-12. See also: Leo Dorst, Daniel Fontijne, Stephen Mann (2007), Geometric Algebra For Computer Science (http://www.geometricalgebra.net/index.html), Morgan Kaufmann. ISBN 0-12-369465-5.

External articles and resources

Books and publications
• Hamilton, William Rowan. On quaternions, or on a new system of imaginaries in algebra (http://www.emis.ams.org/classics/Hamilton/OnQuat.pdf). Philosophical Magazine. Vol. 25, n 3. pp. 489–495. 1844.
• Hamilton, William Rowan (1853), "Lectures on Quaternions (http://historical.library.cornell.edu/cgi-bin/cul.math/docviewer?did=05230001&seq=9)". Royal Irish Academy.
• Hamilton (1866) Elements of Quaternions (http://books.google.com/books?id=fIRAAAAAIAAJ), University of Dublin Press. Edited by William Edwin Hamilton, son of the deceased author.
• Hamilton (1899) Elements of Quaternions volume I, (1901) volume II. Edited by Charles Jasper Joly; published by Longmans, Green & Co.
• Tait, Peter Guthrie (1873), "An elementary treatise on quaternions". 2d ed., Cambridge, [Eng.]: The University Press.
• Michiel Hazewinkel, Nadiya Gubareni, Nadezhda Mikhaĭlovna Gubareni, Vladimir V. Kirichenko. Algebras, rings and modules. Volume 1. Springer, 2004. ISBN 1-4020-2690-0.
• Maxwell, James Clerk (1873), "A Treatise on Electricity and Magnetism". Clarendon Press, Oxford.
• Tait, Peter Guthrie (1886), "Quaternion (http://www.ugcs.caltech.edu/~presto/papers/Quaternions-Britannica.ps.bz2)". M.A. Sec. R.S.E. Encyclopaedia Britannica, Ninth Edition, 1886, Vol. XX, pp. 160–164. (bzipped PostScript file)


• Joly, Charles Jasper (1905), "A manual of quaternions". London, Macmillan and co., limited; New York, The Macmillan company. LCCN 05036137.
• Macfarlane, Alexander (1906), "Vector analysis and quaternions", 4th ed. New York, J. Wiley & Sons. LCCN es 16000048.
• 1911 encyclopedia: "Quaternions (http://www.1911encyclopedia.org/Quaternions)".
• Finkelstein, David, Josef M. Jauch, Samuel Schiminovich, and David Speiser (1962), "Foundations of quaternion quantum mechanics". J. Mathematical Phys. 3, pp. 207–220, MathSciNet.
• Du Val, Patrick (1964), "Homographies, quaternions, and rotations". Oxford, Clarendon Press (Oxford mathematical monographs). LCCN 64056979.
• Crowe, Michael J. (1967), A History of Vector Analysis: The Evolution of the Idea of a Vectorial System, University of Notre Dame Press. Surveys the major and minor vector systems of the 19th century (Hamilton, Möbius, Bellavitis, Clifford, Grassmann, Tait, Peirce, Maxwell, Macfarlane, MacAuley, Gibbs, Heaviside).
• Altmann, Simon L. (1986), "Rotations, quaternions, and double groups". Oxford [Oxfordshire]: Clarendon Press; New York: Oxford University Press. LCCN 85013615. ISBN 0-19-855372-2.
• Altmann, Simon L. (1989), "Hamilton, Rodrigues, and the Quaternion Scandal". Mathematics Magazine. Vol. 62, No. 5. pp. 291–308, Dec. 1989.
• Adler, Stephen L. (1995), "Quaternionic quantum mechanics and quantum fields". New York: Oxford University Press. International series of monographs on physics 88. LCCN 94006306. ISBN 0-19-506643-X.
• Trifonov, Vladimir (1995), "A Linear Solution of the Four-Dimensionality Problem", Europhysics Letters, 32 (8) 621–626, DOI: 10.1209/0295-5075/32/8/001 (http://dx.doi.org/10.1209/0295-5075/32/8/001).
• Ward, J. P. (1997), "Quaternions and Cayley Numbers: Algebra and Applications", Kluwer Academic Publishers. ISBN 0-7923-4513-4.
• Kantor, I. L. and Solodnikov, A. S. (1989), "Hypercomplex numbers, an elementary introduction to algebras", Springer-Verlag, New York. ISBN 0-387-96980-2.
• Gürlebeck, Klaus and Sprössig, Wolfgang (1997), "Quaternionic and Clifford calculus for physicists and engineers". Chichester; New York: Wiley (Mathematical methods in practice; v. 1). LCCN 98169958. ISBN 0-471-96200-7.
• Kuipers, Jack (2002), "Quaternions and Rotation Sequences: A Primer With Applications to Orbits, Aerospace, and Virtual Reality" (reprint edition), Princeton University Press. ISBN 0-691-10298-8.
• Conway, John Horton, and Smith, Derek A. (2003), "On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry", A. K. Peters, Ltd. ISBN 1-56881-134-9 (review (http://nugae.wordpress.com/2007/04/25/on-quaternions-and-octonions/)).
• Kravchenko, Vladislav (2003), "Applied Quaternionic Analysis", Heldermann Verlag. ISBN 3-88538-228-8.
• Hanson, Andrew J. (2006), "Visualizing Quaternions", Elsevier: Morgan Kaufmann; San Francisco. ISBN 0-12-088400-3.
• Trifonov, Vladimir (2007), "Natural Geometry of Nonzero Quaternions", International Journal of Theoretical Physics, 46 (2) 251–257, DOI: 10.1007/s10773-006-9234-9 (http://dx.doi.org/10.1007/s10773-006-9234-9).
• Ernst Binz & Sonja Pods (2008) Geometry of Heisenberg Groups, American Mathematical Society, Chapter 1: "The Skew Field of Quaternions" (23 pages). ISBN 978-0-8218-4495-3.
• Vince, John A. (2008), Geometric Algebra for Computer Graphics, Springer. ISBN 978-1-84628-996-5.
• For molecules that can be regarded as classical rigid bodies, molecular dynamics computer simulation employs quaternions. They were first introduced for this purpose by D. J. Evans (1977), "On the Representation of Orientation Space", Mol. Phys., vol 34, p. 317.



Skew-symmetric matrix

In mathematics, and in particular linear algebra, a skew-symmetric (or antisymmetric or antimetric[1]) matrix is a square matrix A whose transpose is also its negative; that is, it satisfies the equation A = −AT. If the entry in the i-th row and j-th column is aij, i.e. A = (aij), then the skew-symmetric condition becomes aij = −aji. For example, the following matrix is skew-symmetric:

[  0    2   −1 ]
[ −2    0   −4 ]
[  1    4    0 ]
Properties

We assume that the underlying field is not of characteristic 2: that is, that 1 + 1 ≠ 0, where 1 denotes the multiplicative identity and 0 the additive identity of the given field. Otherwise, a skew-symmetric matrix is just the same thing as a symmetric matrix.

Sums and scalar multiples of skew-symmetric matrices are again skew-symmetric. Hence, the skew-symmetric matrices form a vector space. Its dimension is n(n−1)/2.

Let Matn denote the space of n × n matrices. A skew-symmetric matrix is determined by n(n − 1)/2 scalars (the number of entries above the main diagonal); a symmetric matrix is determined by n(n + 1)/2 scalars (the number of entries on or above the main diagonal). If Skewn denotes the space of n × n skew-symmetric matrices and Symn denotes the space of n × n symmetric matrices, then since Matn = Skewn + Symn and Skewn ∩ Symn = {0},

Matn = Skewn ⊕ Symn,

where ⊕ denotes the direct sum. Let A ∈ Matn; then

A = ½(A − AT) + ½(A + AT).

Notice that ½(A − AT) ∈ Skewn and ½(A + AT) ∈ Symn. This is true for every square matrix A with entries from any field whose characteristic is different from 2.

As to equivalent conditions, notice that the relation of skew-symmetry, A = −AT, holds for a matrix A if and only if one has xTAy = −yTAx for all vectors x and y. This is also equivalent to xTAx = 0 for all x (one implication being obvious, the other a plain consequence of (x + y)TA(x + y) = 0 for all x and y).

All main diagonal entries of a skew-symmetric matrix must be zero, so the trace is zero. If A = (aij) is skew-symmetric, aij = −aji; hence aii = 0.

3×3 skew-symmetric matrices can be used to represent cross products as matrix multiplications.
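A few NumPy lines (a sketch, ours; hat is a hypothetical helper name) illustrate the decomposition above, the condition xTAx = 0, and the cross-product matrix just mentioned:

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 4))
    K, S = (A - A.T) / 2, (A + A.T) / 2      # skew-symmetric and symmetric parts
    assert np.allclose(K + S, A) and np.allclose(K, -K.T) and np.allclose(S, S.T)

    x = rng.normal(size=4)
    assert np.isclose(x @ K @ x, 0)          # x^T K x = 0 for any x

    def hat(v):
        # 3x3 skew-symmetric matrix with hat(v) @ w == np.cross(v, w)
        return np.array([[0, -v[2], v[1]],
                         [v[2], 0, -v[0]],
                         [-v[1], v[0], 0]])

    v, w = rng.normal(size=3), rng.normal(size=3)
    assert np.allclose(hat(v) @ w, np.cross(v, w))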

Determinant

Let A be an n×n skew-symmetric matrix. The determinant of A satisfies

det(A) = det(AT) = det(−A) = (−1)n det(A). Hence det(A) = 0 when n is odd.

In particular, if n is odd, and since the underlying field is not of characteristic 2, the determinant vanishes. This result is called Jacobi's theorem, after Carl Gustav Jacobi (Eves, 1980).

The even-dimensional case is more interesting. It turns out that the determinant of A for n even can be written as the square of a polynomial in the entries of A (a theorem of Thomas Muir):

det(A) = Pf(A)2.

This polynomial is called the Pfaffian of A and is denoted Pf(A). Thus the determinant of a real skew-symmetric matrix is always non-negative.

The number of distinct terms s(n) in the expansion of the determinant of a skew-symmetric matrix of order n has been considered already by Cayley, Sylvester, and Pfaff. Due to cancellations, this number is quite small compared with the number of terms of a generic matrix of order n, which is n!. The sequence s(n) (sequence A002370 [2] in OEIS) is

1, 0, 2, 0, 6, 0, 120, 0, 5250, 0, 395010, 0, …

and it is encoded in the exponential generating function

The latter yields the asymptotics (for n even)

The number of positive and negative terms are approximately half of the total, although their difference takes larger and larger positive and negative values as n increases (sequence A167029 [3] in OEIS).
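For n = 4 the Pfaffian can be written out explicitly, which makes det(A) = Pf(A)2 and Jacobi's theorem easy to check numerically (a sketch, ours):

    import numpy as np

    rng = np.random.default_rng(2)
    a, b, c, d, e, f = rng.normal(size=6)
    A = np.array([[ 0,  a,  b,  c],
                  [-a,  0,  d,  e],
                  [-b, -d,  0,  f],
                  [-c, -e, -f,  0]])
    pf = a*f - b*e + c*d                        # Pfaffian of a 4x4 skew-symmetric matrix
    assert np.isclose(np.linalg.det(A), pf**2)  # det(A) = Pf(A)^2 >= 0

    B = rng.normal(size=(5, 5)); B = B - B.T    # odd order
    assert np.isclose(np.linalg.det(B), 0)      # Jacobi's theorem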

Spectral theory

The eigenvalues of a skew-symmetric matrix always come in pairs ±λ (except in the odd-dimensional case where there is an additional unpaired 0 eigenvalue). For a real skew-symmetric matrix the nonzero eigenvalues are all pure imaginary and thus are of the form iλ1, −iλ1, iλ2, −iλ2, … where each of the λk are real.

Real skew-symmetric matrices are normal matrices (they commute with their adjoints) and are thus subject to the spectral theorem, which states that any real skew-symmetric matrix can be diagonalized by a unitary matrix. Since the eigenvalues of a real skew-symmetric matrix are complex, it is not possible to diagonalize one by a real matrix. However, it is possible to bring every skew-symmetric matrix to a block diagonal form by an orthogonal transformation. Specifically, every 2n × 2n real skew-symmetric matrix can be written in the form A = Q Σ QT where


Q is orthogonal and Σ is block diagonal with 2×2 blocks of the form

[  0    λk ]
[ −λk    0 ]

for real λk. The nonzero eigenvalues of this matrix are ±iλk. In the odd-dimensional case Σ always has at least one row and column of zeros.

More generally, every complex skew-symmetric matrix can be written in the form A = U Σ UT where U is unitary and Σ has the block-diagonal form given above with complex λk. This is an example of the Youla decomposition of a complex square matrix.[4]
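The spectral statements above are easy to verify numerically (a sketch, ours):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.normal(size=(5, 5)); A = A - A.T
    w = np.linalg.eigvals(A)
    assert np.allclose(w.real, 0)         # eigenvalues are purely imaginary
    wi = np.sort(w.imag)
    assert np.allclose(wi + wi[::-1], 0)  # they come in +/- pairs (plus a 0 for odd n)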

Alternating forms

We begin with a special case of the definition. An alternating form φ on a vector space V over a field K, not of characteristic 2, is defined to be a bilinear form

φ : V × V → K

such that

φ(v,w) = −φ(w,v).

This defines a form with desirable properties for vector spaces over fields of characteristic not equal to 2, but in a vector space over a field of characteristic 2, the definition fails, as every element is its own additive inverse. That is, symmetric and alternating forms are equivalent, which is clearly false in the case above. However, we may extend the definition to vector spaces over fields of characteristic 2 as follows:

In the case where the vector space V is over a field of arbitrary characteristic, including characteristic 2, we may state that for all vectors v in V

φ(v,v) = 0.

This reduces to the above case when the field is not of characteristic 2, as seen below:

0 = φ(v + w, v + w) = φ(v,v) + φ(v,w) + φ(w,v) + φ(w,w) = φ(v,w) + φ(w,v).

Whence,

φ(v,w) = −φ(w,v).

Thus, we have a definition that now holds for vector spaces over fields of all characteristics.

Such a φ will be represented by a skew-symmetric matrix A, φ(v, w) = vTAw, once a basis of V is chosen; and conversely an n×n skew-symmetric matrix A on Kn gives rise to an alternating form sending (x, y) to xTAy.


Infinitesimal rotations

Skew-symmetric matrices over the field of real numbers form the tangent space to the real orthogonal group O(n) at the identity matrix; formally, the special orthogonal Lie algebra. In this sense, then, skew-symmetric matrices can be thought of as infinitesimal rotations.

Another way of saying this is that the space of skew-symmetric matrices forms the Lie algebra o(n) of the Lie group O(n). The Lie bracket on this space is given by the commutator:

[A, B] = AB − BA.

It is easy to check that the commutator of two skew-symmetric matrices is again skew-symmetric:

[A, B]T = BTAT − ATBT = (−B)(−A) − (−A)(−B) = BA − AB = −[A, B].

The matrix exponential of a skew-symmetric matrix A is then an orthogonal matrix R:

R = exp(A).

The image of the exponential map of a Lie algebra always lies in the connected component of the Lie group that contains the identity element. In the case of the Lie group O(n), this connected component is the special orthogonal group SO(n), consisting of all orthogonal matrices with determinant 1. So R = exp(A) will have determinant +1. Moreover, since the exponential map of a connected compact Lie group is always surjective, it turns out that every orthogonal matrix with unit determinant can be written as the exponential of some skew-symmetric matrix. In the particularly important case of dimension n = 2, the exponential representation for an orthogonal matrix reduces to the well-known polar form of a complex number of unit modulus. Indeed, if n = 2, a special orthogonal matrix has the form

[ a   −b ]
[ b    a ]

with a2 + b2 = 1. Therefore, putting a = cos θ and b = sin θ, it can be written

[ cos θ   −sin θ ]
[ sin θ    cos θ ]  =  exp( [ 0 , −θ ; θ , 0 ] ),

which corresponds exactly to the polar form cos θ + i sin θ = eiθ of a complex number of unit modulus.

The exponential representation of an orthogonal matrix of order n can also be obtained starting from the fact that in dimension n any special orthogonal matrix R can be written as R = Q S QT, where Q is orthogonal and S is a block diagonal matrix with blocks of order 2, plus one of order 1 if n is odd; since each single block of order 2 is also an orthogonal matrix, it admits an exponential form. Correspondingly, the matrix S writes as the exponential of a skew-symmetric block matrix Σ of the form above, S = exp(Σ), so that R = Q exp(Σ) QT = exp(Q Σ QT), the exponential of the skew-symmetric matrix Q Σ QT. Conversely, the surjectivity of the exponential map, together with the above-mentioned block-diagonalization for skew-symmetric matrices, implies the block-diagonalization for orthogonal matrices.
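A quick numerical check of the exponential map (a sketch, ours; it assumes SciPy is available for the matrix exponential):

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(4)
    A = rng.normal(size=(3, 3)); A = A - A.T   # random skew-symmetric matrix
    R = expm(A)                                # matrix exponential
    assert np.allclose(R @ R.T, np.eye(3))     # R is orthogonal
    assert np.isclose(np.linalg.det(R), 1.0)   # with determinant +1, i.e. R in SO(3)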

Coordinate-free

More intrinsically (i.e., without using coordinates), skew-symmetric matrices on a vector space V with an inner product may be defined as the bivectors on the space, which are sums of simple bivectors (2-blades) v ∧ w. The correspondence is given by the map v ∧ w ↦ v* ⊗ w − w* ⊗ v, where v* is the covector dual to the vector v; in coordinates these are exactly the elementary skew-symmetric matrices. This characterization is used in interpreting the curl of a vector field (naturally a 2-vector) as an infinitesimal rotation or "curl", hence the name.


Skew-symmetrizable matrix

An n-by-n matrix A is said to be skew-symmetrizable if there exist an invertible diagonal matrix D and a skew-symmetric matrix S such that A = DS. For real n-by-n matrices, sometimes the condition for D to have positive entries is added.[5]

References

[1] Richard A. Reyment, K. G. Jöreskog, Leslie F. Marcus (1996). Applied Factor Analysis in the Natural Sciences. Cambridge University Press. p. 68. ISBN 0521575567.
[2] http://en.wikipedia.org/wiki/Oeis%3Aa002370
[3] http://en.wikipedia.org/wiki/Oeis%3Aa167029
[4] Youla, D. C. (1961). "A normal form for a matrix under the unitary congruence group". Canad. J. Math. 13: 694–704. doi:10.4153/CJM-1961-059-8.
[5] Fomin, Sergey; Zelevinsky, Andrei (2001). "Cluster algebras I: Foundations". arXiv:math/0104151.

Further reading

• Eves, Howard (1980). Elementary Matrix Theory. Dover Publications. ISBN 978-0-486-63946-8.
• Suprunenko, D. A. (2001), "Skew-symmetric matrix" (http://eom.springer.de/S/s085720.htm), in Hazewinkel, Michiel, Encyclopaedia of Mathematics, Springer. ISBN 978-1556080104.
• Aitken, A. C. (1944). "On the number of distinct terms in the expansion of symmetric and skew determinants". Edinburgh Math. Notes.

External links• "Antisymmetric matrix" (http:/ / mathworld. wolfram. com/ AntisymmetricMatrix. html). Wolfram Mathworld.


Xyzzy

Xyzzy is a magic word from the Colossal Cave Adventure computer game.

In computing, the word is sometimes used as a metasyntactic variable or as a video game cheat code, the canonical "magic word". In mathematics, the word is used as a mnemonic for the cross product.[1]

Origin

Modern usage derives primarily from one of the earliest computer games, Colossal Cave Adventure, in which the idea is to explore an underground cave with many rooms, collecting the treasures found there. By typing "xyzzy" at the appropriate time, the player could move instantly between two otherwise distant points. As Colossal Cave Adventure was both the first adventure game and the first interactive fiction, hundreds of later interactive fiction games included responses to the command "xyzzy" in tribute.[2]

The origin of the word has been the subject of debate. Rick Adams pointed out that the mnemonic "XYZZY" has long been taught by math teachers to remember the process for performing cross products (as a mnemonic that lists the order of subscripts to be multiplied first).[1] (For instance, the first component of u × v is uyvz − uzvy; reading the component name followed by the subscripts of the first term gives x, y, z, z, y, i.e. "xyzzy", and the remaining components follow by cycling x, y, z.) Crowther, author of Colossal Cave Adventure, states that he was unaware of the mnemonic, and that he "made it up from whole cloth" when writing the game.[3]

Uses

Xyzzy has actually been implemented as an undocumented no-op command on several operating systems; in Data General's AOS, for example, it would typically respond "Nothing happens", just as the game did if the magic was invoked at the wrong spot or before a player had performed the action that enabled the word. The 32-bit version, AOS/VS, would respond "Twice as much happens".[1] On several computer systems from Sun Microsystems, the command "xyzzy" is used to enter the interactive shell of the u-boot bootloader.[4] Early versions of Zenith Z-DOS (a re-branded variant of MS-DOS 1.25) had the command "xyzzy", which took a parameter of "on" or "off". Xyzzy by itself would print the status of the last "xyzzy on" or "xyzzy off" command.

The popular Minesweeper game under Microsoft Windows has a cheat mode triggered by entering the command xyzzy, then pressing the key sequence shift and then enter, which turns a single pixel in the top-left corner of the entire screen into a small black or white dot depending on whether or not the mouse pointer is over a mine.[5] This feature is present in all versions except for Windows Vista and Windows 7, but under Windows 95, 98 and NT 4.0 the pixel is only visible if the standard Explorer desktop is not running.[6]

The low-traffic Usenet newsgroup alt.xyzzy is used for test messages, to which other readers (if there are any) customarily respond, "Nothing happens" as a note that the test message was successfully received. The Google IMAP service documents a CAPABILITY called XYZZY when the CAPABILITY command is issued. If the command XYZZY is given, the server responds "OK Nothing happens."; in mIRC and Pidgin, entering the command /xyzzy will display the response "Nothing happens".

A "deluxe chatting program" for DIGITAL's VAX/VMS written by David Bolen in 1987 and distributed via BITNET took the name xyzzy. It enabled users on the same system or on linked DECnet nodes to communicate via text in real time. There was a compatible program with the same name for IBM's VM/CMS.[7]

Xyzzy was the inspiration for the name of the interactive fiction competition, the XYZZY Awards.

xYzZY is used as the default boundary marker by the Perl HTTP::Message module for multipart MIME messages,[8] and was used in Apple's At Ease for workgroups as the default administrator password in the 1990s.

In the game Zork, typing xyzzy and pressing enter produces the response: A hollow voice says "fool."


References

[1] Rick Adams. "Everything you ever wanted to know about…the magic word XYZZY" (http://www.rickadams.org/adventure/c_xyzzy.html). The Colossal Cave Adventure page.
[2] David Welbourn. "xyzzy responses" (http://webhome.idirect.com/~dswxyz/sol/xyzzy.html). A web page giving responses to "xyzzy" in many games of interactive fiction.
[3] Dennis G. Jerz. "Somewhere Nearby is Colossal Cave: Examining Will Crowther's Original "Adventure" in Code and in Kentucky" (http://www.digitalhumanities.org/dhq/vol/001/2/000009.html).
[4] "Page 17" (http://dlc.sun.com/pdf/820-4783-10/820-4783-10.pdf) (PDF). Retrieved 2009-08-20.
[5] eeggs.com. "Windows 2000 Easter Eggs - Eeggs.com" (http://www.eeggs.com/items/6818.html). Retrieved 2009-08-20.
[6] "Minesweeper Cheat codes" (http://cheatcodes.com/minesweeper-pc-cheats/).
[7] VAX/VMS XYZZY Reference Card (http://web.inter.nl.net/users/fred/relay/xyzzy.html), created by David Bolen.
[8] Sean M. Burke (2002). "Perl and LWP", p. 82. O'Reilly Media, Inc. ISBN 0596001789.

Quaternions and spatial rotation

Unit quaternions provide a convenient mathematical notation for representing orientations and rotations of objects in three dimensions. Compared to Euler angles they are simpler to compose and avoid the problem of gimbal lock. Compared to rotation matrices they are more numerically stable and may be more efficient. Quaternions have found their way into applications in computer graphics, computer vision, robotics, navigation, molecular dynamics and orbital mechanics of satellites.[1]

When used to represent rotation, unit quaternions are also called versors, or rotation quaternions. When used to represent an orientation (rotation relative to a reference position), they are called orientation quaternions or attitude quaternions.


Quaternion rotation operations

A very formal explanation of the properties used in this section is given by Altmann.[2]

The hypersphere of rotations

Visualizing the space of rotations

Unit quaternions represent the mathematical space of rotations in three dimensions in a very straightforward way. The correspondence between rotations and quaternions can be understood by first visualizing the space of rotations itself.

Two rotations by different angles and different axes in the space of rotations. The length of the vector is related to the magnitude of the rotation.

In order to visualize the space of rotations, it helps to consider a simpler case. Any rotation in three dimensions can be described by a rotation by some angle about some axis. Consider the special case in which the axis of rotation lies in the xy plane. We can then specify the axis of one of these rotations by a point on a circle, and we can use the radius of the circle to specify the angle of rotation. Similarly, a rotation whose axis of rotation lies in the xy plane can be described as a point on a sphere of fixed radius in three dimensions. Beginning at the north pole of a sphere in three dimensional space, we specify the point at the north pole to be the identity rotation (a zero angle rotation). Just as in the case of the identity rotation, no axis of rotation is defined, and the angle of rotation (zero) is irrelevant. A rotation having a very small rotation angle can be specified by a slice through the sphere parallel to the xy plane and very near the north pole. The circle defined by this slice will be very small, corresponding to the small angle of the rotation. As the rotation angles become larger, the slice moves in the negative z direction, and the circles become larger until the equator of the sphere is reached, which will correspond to a rotation angle of 180 degrees. Continuing southward, the radii of the circles now become smaller (corresponding to the absolute value of the angle of the rotation considered as a negative number). Finally, as the south pole is reached, the circles shrink once more to the identity rotation, which is also specified as the point at the south pole.

Notice that a number of characteristics of such rotations and their representations can be seen by this visualization. The space of rotations is continuous, each rotation has a neighborhood of rotations which are nearly the same, and this neighborhood becomes flat as the neighborhood shrinks. Also, each rotation is actually represented by two antipodal points on the sphere, which are at opposite ends of a line through the center of the sphere. This reflects the fact that each rotation can be represented as a rotation about some axis, or, equivalently, as a negative rotation about an axis pointing in the opposite direction (a so-called double cover). The "latitude" of a circle representing a particular rotation angle will be half of the angle represented by that rotation, since as the point is moved from the north to south pole, the latitude ranges from zero to 180 degrees, while the angle of rotation ranges from 0 to 360 degrees. (The "longitude" of a point then represents a particular axis of rotation.) Note however that this set of rotations is not closed under composition. Two successive rotations with axes in the xy plane will not necessarily give a rotation whose axis lies in the xy plane, and thus cannot be represented as a point on the sphere. This will not be the case with a general rotation in 3-space, in which rotations do form a closed set under composition.


The sphere of rotations for the rotations that have a "horizontal" axis (in the xy plane).

This visualization can be extended to a general rotation in 3-dimensional space. The identity rotation is a point, and a small angle of rotation about some axis can be represented as a point on a sphere with a small radius. As the angle of rotation grows, the sphere grows, until the angle of rotation reaches 180 degrees, at which point the sphere begins to shrink, becoming a point as the angle approaches 360 degrees (or zero degrees from the negative direction). This set of expanding and contracting spheres represents a hypersphere in four-dimensional space (a 3-sphere). Just as in the simpler example above, each rotation represented as a point on the hypersphere is matched by its antipodal point on that hypersphere. The "latitude" on the hypersphere will be half of the corresponding angle of rotation, and the neighborhood of any point will become "flatter" (i.e. be represented by a 3-D Euclidean space of points) as the neighborhood shrinks. This behavior is matched by the set of unit quaternions: a general quaternion represents a point in a four-dimensional space, but constraining it to have unit magnitude yields a three-dimensional space equivalent to the surface of a hypersphere. The magnitude of the unit quaternion will be unity, corresponding to a hypersphere of unit radius. The vector part of a unit quaternion represents the radius of the 2-sphere corresponding to the axis of rotation, and its magnitude is the sine of half the angle of rotation. Each rotation is represented by two unit quaternions of opposite sign, and, as in the space of rotations in three dimensions, the quaternion product of two unit quaternions will yield a unit quaternion. Also, the space of unit quaternions is "flat" in any infinitesimal neighborhood of a given unit quaternion.

Parameterizing the space of rotations

We can parameterize the surface of a sphere with two coordinates, such as latitude and longitude. But latitude and longitude are ill-behaved (degenerate) at the north and south poles, though the poles are not intrinsically different from any other points on the sphere. At the poles (latitudes +90° and −90°), the longitude becomes meaningless.

It can be shown that no two-parameter coordinate system can avoid such degeneracy. We can avoid such problems by embedding the sphere in three-dimensional space and parameterizing it with three Cartesian coordinates (here w, x, y), placing the north pole at (w,x,y) = (1,0,0), the south pole at (w,x,y) = (−1,0,0), and the equator at w = 0, x2 + y2 = 1. Points on the sphere satisfy the constraint w2 + x2 + y2 = 1, so we still have just two degrees of freedom though there are three coordinates. A point (w,x,y) on the sphere represents a rotation in the ordinary space around the horizontal axis directed by the vector (x, y, 0) by an angle α = 2 cos−1 w = 2 sin−1 √(x2 + y2).

In the same way the hyperspherical space of 3D rotations can be parameterized by three angles (Euler angles), but any such parameterization is degenerate at some points on the hypersphere, leading to the problem of gimbal lock. We can avoid this by using four Euclidean coordinates w, x, y, z, with w2 + x2 + y2 + z2 = 1. The point (w,x,y,z) represents a rotation around the axis directed by the vector (x, y, z) by an angle α = 2 cos−1 w = 2 sin−1 √(x2 + y2 + z2).
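In code, passing from the four coordinates (w, x, y, z) back to an axis and an angle looks like this (a sketch, ours; quat_to_axis_angle is a hypothetical helper name):

    import numpy as np

    def quat_to_axis_angle(q):
        # q = (w, x, y, z) on the unit 3-sphere
        w, v = q[0], np.asarray(q[1:], dtype=float)
        s = np.linalg.norm(v)                       # sine of the half angle
        angle = 2 * np.arctan2(s, w)                # = 2 arccos(w), numerically safer
        axis = v / s if s > 0 else np.array([1.0, 0.0, 0.0])  # axis arbitrary at identity
        return axis, angle

    axis, angle = quat_to_axis_angle([np.cos(np.pi/8), 0.0, 0.0, np.sin(np.pi/8)])
    print(axis, angle)   # -> [0. 0. 1.] and pi/4: a rotation of 45 degrees about z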


From the rotations to the quaternions

Quaternions briefly

The complex numbers can be defined by introducing an abstract symbol i which satisfies the usual rules of algebra and additionally the rule i2 = −1. This is sufficient to reproduce all of the rules of complex number arithmetic: for example,

(a + bi)(c + di) = (ac − bd) + (bc + ad)i.

In the same way the quaternions can be defined by introducing abstract symbols i, j, k which satisfy the rules i2 = j2 = k2 = ijk = −1 and the usual algebraic rules except the commutative law of multiplication (a familiar example of such a noncommutative multiplication is matrix multiplication). From this all of the rules of quaternion arithmetic follow: for example, one can show that

(a1 + b1i + c1j + d1k)(a2 + b2i + c2j + d2k) = (a1a2 − b1b2 − c1c2 − d1d2) + (a1b2 + b1a2 + c1d2 − d1c2)i + (a1c2 − b1d2 + c1a2 + d1b2)j + (a1d2 + b1c2 − c1b2 + d1a2)k.

The imaginary part bi + cj + dk of a quaternion behaves like a vector v = (b, c, d) in a three-dimensional vector space, and the real part a behaves like a scalar in ℝ. When quaternions are used in geometry, it is more convenient to define them as a scalar plus a vector:

q = a + bi + cj + dk = a + v.

Those who have studied vectors at school might find it strange to add a number to a vector, as they are objects of very different natures, or to multiply two vectors together, as this operation is usually undefined. However, if one remembers that it is a mere notation for the real and imaginary parts of a quaternion, it becomes more legitimate. In other words, the correct reasoning is the addition of two quaternions, one with zero vector/imaginary part, and another one with zero scalar/real part:

a + v = (a + 0) + (0 + v).

We can express quaternion multiplication in the modern language of vector cross and dot products (which were actually inspired by the quaternions in the first place). In place of the rules i² = j² = k² = ijk = −1 we have the quaternion multiplication rule for two vectors:

v w = v × w − v · w,

where:
• v w is the resulting quaternion,
• v × w is the vector cross product (a vector),
• v · w is the vector scalar product (a scalar).

Quaternion multiplication is noncommutative (because of the cross product, which anti-commutes), while scalar–scalar and scalar–vector multiplications commute. From these rules it follows immediately that (see details):

(s + v)(t + w) = (st − v · w) + (sw + tv + v × w).

The (left and right) multiplicative inverse or reciprocal of a nonzero quaternion is given by the conjugate-to-norm ratio (see details):

(s + v)⁻¹ = (s − v) / (s² + |v|²),

as can be verified by direct calculation.
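The two displayed rules translate directly into code. The following is a minimal sketch, assuming NumPy and representing a quaternion as a (scalar, 3-vector) pair; the names qmul and qinv are illustrative, not from any particular library:

    import numpy as np

    def qmul(q1, q2):
        # (s1 + v1)(s2 + v2) = (s1 s2 - v1.v2) + (s1 v2 + s2 v1 + v1 x v2);
        # the vector parts are assumed to be NumPy arrays
        s1, v1 = q1
        s2, v2 = q2
        return (s1 * s2 - np.dot(v1, v2),
                s1 * v2 + s2 * v1 + np.cross(v1, v2))

    def qinv(q):
        # inverse as conjugate over squared norm: (s - v) / (s^2 + |v|^2)
        s, v = q
        n2 = s * s + np.dot(v, v)
        return (s / n2, -v / n2)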


Describing rotations with quaternions

Let (w, x, y, z) be the coordinates of a rotation as previously described. Define the quaternion

q = w + xi + yj + zk = cos(α/2) + u sin(α/2),

where u = (x, y, z)/√(x² + y² + z²) is a unit vector. Let also v be an ordinary vector in 3-dimensional space, considered as a quaternion with a real coordinate equal to zero. Then it can be shown (see next section) that the quaternion product

q v q⁻¹

yields the vector v rotated by an angle α around the axis u. The rotation is clockwise if our line of sight points in the direction pointed by u. This operation is known as conjugation by q. It follows that quaternion multiplication is composition of rotations, for if p and q are quaternions representing rotations, then rotation (conjugation) by pq is

pq v (pq)⁻¹ = pq v q⁻¹p⁻¹ = p(q v q⁻¹)p⁻¹,

which is the same as rotating (conjugating) by q and then by p.

The quaternion inverse of a rotation is the opposite rotation, since q⁻¹(q v q⁻¹)q = v. The square of a quaternion rotation is a rotation by twice the angle around the same axis. More generally qⁿ is a rotation by n times the angle around the same axis as q. This can be extended to arbitrary real n, allowing for smooth interpolation between spatial orientations; see Slerp.
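Continuing the sketch above (and still an illustration, not a library API), conjugation becomes a three-line function; rotating the x axis by 90° about the z axis should yield the y axis:

    def rotate(q, v):
        # v -> q v q^-1, with v embedded as the pure quaternion (0, v)
        s, w = qmul(qmul(q, (0.0, np.asarray(v, dtype=float))), qinv(q))
        return w  # the scalar part comes out (numerically) zero

    alpha = np.pi / 2
    u = np.array([0.0, 0.0, 1.0])                   # unit rotation axis
    q = (np.cos(alpha / 2), np.sin(alpha / 2) * u)  # q = cos(a/2) + u sin(a/2)
    print(rotate(q, [1.0, 0.0, 0.0]))               # ~ [0. 1. 0.]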

Proof of the quaternion rotation identity

Let u be a unit vector (the rotation axis) and let q = cos(α/2) + u sin(α/2). Our goal is to show that

v′ = q v q⁻¹ = (cos(α/2) + u sin(α/2)) v (cos(α/2) − u sin(α/2))

yields the vector v rotated by an angle α around the axis u. Expanding out, and writing v = v∥ + v⊥, we have

v′ = v∥ + v⊥ cos α + (u × v⊥) sin α,

where v⊥ and v∥ are the components of v perpendicular and parallel to u respectively. This is the formula of a rotation by α around the u axis.


Example

The conjugation operation

A rotation of 120° around the first diagonal permutes i, j, and k cyclically.

Consider the rotation f around the axis u = i + j + k, with a rotation angle of 120°, or 2π⁄3 radians.

The length of u is √3, the half angle is π⁄3 (60°) with cosine ½ (cos 60° = 0.5) and sine √3⁄2 (sin 60° ≈ 0.866). We are therefore dealing with a conjugation by the unit quaternion

q = cos(π/3) + sin(π/3)·(i + j + k)/√3 = ½ + (√3/2)·(i + j + k)/√3 = (1 + i + j + k)/2.

If f is the rotation function,

f(ai + bj + ck) = q (ai + bj + ck) q⁻¹.

It can be proved that the inverse of a unit quaternion is obtained simply by changing the sign of its imaginary components. As a consequence,

q⁻¹ = (1 − i − j − k)/2

and

f(ai + bj + ck) = ((1 + i + j + k)/2) (ai + bj + ck) ((1 − i − j − k)/2).

This can be simplified, using the ordinary rules for quaternion arithmetic, to

f(ai + bj + ck) = ci + aj + bk.

As expected, the rotation corresponds to keeping a cube held fixed at one point, and rotating it 120° about the long diagonal through the fixed point (observe how the three axes are permuted cyclically).


Quaternion arithmetic in practice

Let us show how we reached the previous result. Developing the expression of f in two stages, and applying the rules ij = k, jk = i, ki = j together with i² = j² = k² = −1, the first multiplication gives

q (ai + bj + ck) = ((1 + i + j + k)/2)(ai + bj + ck) = ((−a − b − c) + (a − b + c)i + (a + b − c)j + (−a + b + c)k)/2,

and multiplying this on the right by q⁻¹ = (1 − i − j − k)/2 gives us

f(ai + bj + ck) = ci + aj + bk,

which is the expected result. As we can see, such computations are relatively long and tedious if done manually; however, in a computer program, this amounts to calling the quaternion multiplication routine twice.
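Reusing the rotate sketch from earlier (an illustration under the conventions of this article, not a fixed API), the whole worked example is exactly those two multiplications, and a small loop confirms the cyclic permutation of the axes:

    q = (0.5, np.array([0.5, 0.5, 0.5]))   # (1 + i + j + k)/2
    for name, v in (("i", [1, 0, 0]), ("j", [0, 1, 0]), ("k", [0, 0, 1])):
        print(name, "->", np.round(rotate(q, v)))   # i -> j, j -> k, k -> i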

Explaining quaternions' properties with rotations

Non-commutativity

The multiplication of quaternions is non-commutative. Since the multiplication of unit quaternions corresponds to the composition of three-dimensional rotations, this property can be made intuitive by showing that three-dimensional rotations are not commutative in general.

A simple exercise of applying two rotations to an asymmetrical object (e.g., a book) can explain it. First, rotate a book 90 degrees clockwise around the z axis. Next flip it 180 degrees around the x axis and memorize the result. Then restore the original orientation, so that the book title is again readable, and apply those rotations in the opposite order. Compare the outcome to the earlier result. This shows that, in general, the composition of two different rotations around two distinct spatial axes will not commute.


Are quaternions handed?

Note that quaternions, like the rotations or other linear transforms, are not "handed" (as in left-handed vs right-handed). Handedness of a coordinate system comes from the interpretation of the numbers in physical space. No matter what the handedness convention, rotating the X vector 90 degrees around the Z vector will yield the Y vector; the mathematics and numbers are the same.

Quaternions and other representations of rotations

Qualitative description of the advantages of quaternions

The representation of a rotation as a quaternion (4 numbers) is more compact than the representation as an orthogonal matrix (9 numbers). Furthermore, for a given axis and angle, one can easily construct the corresponding quaternion, and conversely, for a given quaternion one can easily read off the axis and the angle. Both of these are much harder with matrices or Euler angles.

In computer games and other applications, one is often interested in "smooth rotations", meaning that the scene should slowly rotate and not in a single step. This can be accomplished by choosing a curve such as the spherical linear interpolation in the quaternions (sketched below), with one endpoint being the identity transformation 1 (or some other initial rotation) and the other being the intended final rotation. This is more problematic with other representations of rotations.

When composing several rotations on a computer, rounding errors necessarily accumulate. A quaternion that is slightly off still represents a rotation after being normalised; a matrix that is slightly off may not be orthogonal any more and is harder to convert back to a proper orthogonal matrix.

Quaternions also avoid a phenomenon called gimbal lock which can result when, for example in pitch/yaw/roll rotational systems, the pitch is rotated 90° up or down, so that yaw and roll then correspond to the same motion, and a degree of freedom of rotation is lost. In a gimbal-based aerospace inertial navigation system, for instance, this could have disastrous results if the aircraft is in a steep dive or ascent.
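A minimal sketch of the spherical linear interpolation mentioned above, assuming unit quaternions stored as NumPy 4-vectors [w, x, y, z]; the name slerp and the 0.9995 cutoff are illustrative choices:

    import numpy as np

    def slerp(q0, q1, t):
        # spherical linear interpolation between unit quaternions q0 and q1
        q0, q1 = np.asarray(q0, float), np.asarray(q1, float)
        dot = np.dot(q0, q1)
        if dot < 0.0:          # q and -q are the same rotation: take the short arc
            q1, dot = -q1, -dot
        if dot > 0.9995:       # nearly parallel: fall back to normalised lerp
            q = q0 + t * (q1 - q0)
            return q / np.linalg.norm(q)
        theta = np.arccos(dot)
        return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)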

Conversion to and from the matrix representation

From a quaternion to an orthogonal matrix

The orthogonal matrix corresponding to a rotation by the unit quaternion z = a + bi + cj + dk (with |z| = 1) is given by

    [ a² + b² − c² − d²    2bc − 2ad            2bd + 2ac          ]
    [ 2bc + 2ad            a² − b² + c² − d²    2cd − 2ab          ]
    [ 2bd − 2ac            2cd + 2ab            a² − b² − c² + d²  ]

From an orthogonal matrix to a quaternion

Finding the quaternion that corresponds to a rotation matrix Q can be numerically unstable if the trace (sum of the diagonal elements) of the rotation matrix is zero or very small. A robust method is to choose the diagonal element with the largest value. Let uvw be an even permutation of xyz (i.e. xyz, yzx or zxy) such that Quu is that largest diagonal element. The value

r = √(1 + Quu − Qvv − Qww)

will be a real number because 1 + Quu − Qvv − Qww = 4qu² ≥ 0. If r is zero the matrix is the identity matrix, and the quaternion must be the identity quaternion (1, 0, 0, 0). Otherwise the quaternion can be calculated as follows: the scalar part is a = (Qwv − Qvw)/(2r), and the vector components are

qu = r/2,   qv = (Quv + Qvu)/(2r),   qw = (Qwu + Quw)/(2r).
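Both conversions are short in code. The sketch below assumes NumPy, quaternions as [w, x, y, z] arrays, and the column-vector convention discussed next; the function names are illustrative:

    import numpy as np

    def quat_to_matrix(q):
        # rotation matrix (column vectors on the right) for a unit quaternion
        w, x, y, z = q
        return np.array([
            [w*w + x*x - y*y - z*z, 2*(x*y - w*z),         2*(x*z + w*y)],
            [2*(x*y + w*z),         w*w - x*x + y*y - z*z, 2*(y*z - w*x)],
            [2*(x*z - w*y),         2*(y*z + w*x),         w*w - x*x - y*y + z*z]])

    def matrix_to_quat(Q):
        # robust extraction: pick the largest diagonal element, as in the text
        u = int(np.argmax(np.diag(Q)))      # u, v, w: an even permutation of x, y, z
        v, w = (u + 1) % 3, (u + 2) % 3
        r = np.sqrt(1.0 + Q[u, u] - Q[v, v] - Q[w, w])
        if r < 1e-12:
            return np.array([1.0, 0.0, 0.0, 0.0])   # identity rotation
        q = np.empty(4)
        q[0]     = (Q[w, v] - Q[v, w]) / (2 * r)    # scalar part
        q[1 + u] = r / 2
        q[1 + v] = (Q[u, v] + Q[v, u]) / (2 * r)
        q[1 + w] = (Q[w, u] + Q[u, w]) / (2 * r)
        return q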


Beware the vector convention: there are two conventions for rotation matrices: one assumes row vectors on the left; the other assumes column vectors on the right; the two conventions generate matrices that are the transpose of each other. The above matrix assumes column vectors on the right. In general, a matrix for vertex transformation is ambiguous unless the vector convention is also mentioned. Historically, the column-on-the-right convention comes from mathematics and classical mechanics, whereas row-vector-on-the-left comes from computer graphics, where typesetting row vectors was easier back in the early days.

(Compare the equivalent general formula for a 3 × 3 rotation matrix in terms of the axis and the angle.)

Fitting quaternions

The above section described how to recover a quaternion q from a 3 × 3 rotation matrix Q. Suppose, however, that we have some matrix Q that is not a pure rotation (due to round-off errors, for example) and we wish to find the quaternion q that most accurately represents Q. In that case we construct the symmetric 4 × 4 matrix

    K = (1/3) ×
    [ Qxx − Qyy − Qzz    Qyx + Qxy          Qzx + Qxz          Qzy − Qyz        ]
    [ Qyx + Qxy          Qyy − Qxx − Qzz    Qzy + Qyz          Qxz − Qzx        ]
    [ Qzx + Qxz          Qzy + Qyz          Qzz − Qxx − Qyy    Qyx − Qxy        ]
    [ Qzy − Qyz          Qxz − Qzx          Qyx − Qxy          Qxx + Qyy + Qzz  ]

and find the eigenvector (x, y, z, w) corresponding to the largest eigenvalue (that value will be 1 if and only if Q is a pure rotation). The quaternion so obtained will correspond to the rotation closest to the original matrix Q.[3]
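A sketch of this fitting step using NumPy's symmetric eigensolver (fit_quat is an illustrative name, not from any library):

    import numpy as np

    def fit_quat(Q):
        # build K from a (possibly noisy) 3x3 matrix Q, then take the
        # eigenvector of the largest eigenvalue
        Qxx, Qxy, Qxz = Q[0]
        Qyx, Qyy, Qyz = Q[1]
        Qzx, Qzy, Qzz = Q[2]
        K = np.array([
            [Qxx - Qyy - Qzz, Qyx + Qxy,       Qzx + Qxz,       Qzy - Qyz],
            [Qyx + Qxy,       Qyy - Qxx - Qzz, Qzy + Qyz,       Qxz - Qzx],
            [Qzx + Qxz,       Qzy + Qyz,       Qzz - Qxx - Qyy, Qyx - Qxy],
            [Qzy - Qyz,       Qxz - Qzx,       Qyx - Qxy,       Qxx + Qyy + Qzz]]) / 3.0
        vals, vecs = np.linalg.eigh(K)          # K is symmetric, so eigh applies
        x, y, z, w = vecs[:, np.argmax(vals)]   # eigenvector of largest eigenvalue
        return np.array([w, x, y, z])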

Performance comparisons with other rotation methods

This section discusses the performance implications of using quaternions versus other methods (axis/angle or rotation matrices) to perform rotations in 3D.

Results

Storage requirements

Method            Storage
Rotation matrix   9
Quaternion        4
Angle/axis        3*

* Note: angle/axis can be stored as 3 elements by multiplying the unit rotation axis by the rotation angle, forming the logarithm of the quaternion, at the cost of additional calculations.


Performance comparison of rotation chaining operations

Method              # multiplies   # add/subtracts   total operations
Rotation matrices   27             18                45
Quaternions         16             12                28

Performance comparison of vector rotating operations

Method            # multiplies   # add/subtracts   # sin/cos   total operations
Rotation matrix   9              6                 0           15
Quaternions       21             18                0           39
Angle/axis        23             16                2           41

Used methods

There are three basic approaches to rotating a vector v:

1. Compute the matrix product of a 3 × 3 rotation matrix R and the original 3 × 1 column matrix representing v. This requires 3 × (3 multiplications + 2 additions) = 9 multiplications and 6 additions, the most efficient method for rotating a vector.

2. Use the quaternion rotation formula derived above, v′ = q v q⁻¹. Computing this result is equivalent to transforming the quaternion to a rotation matrix R using the formula above then multiplying with a vector. Performing some common subexpression elimination yields an algorithm that costs 21 multiplies and 18 adds (one such refactoring is sketched after this list). As a second approach, the quaternion could first be converted to its equivalent angle/axis representation, then the angle/axis representation used to rotate the vector. However, this is both less efficient and less numerically stable when the quaternion nears the no-rotation point.

3. Use the angle-axis formula to convert an angle/axis to a rotation matrix R, then multiply with a vector. Converting the angle/axis to R using common subexpression elimination costs 14 multiplies, 2 function calls (sin, cos), and 10 add/subtracts; from item 1, rotating using R adds an additional 9 multiplications and 6 additions for a total of 23 multiplies, 16 add/subtracts, and 2 function calls (sin, cos).
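One widely used refactoring of method 2 is shown below for a unit quaternion with scalar part w and vector part u. This is a hedged sketch: its operation count is not the 21/18 tally quoted above, but it illustrates the common-subexpression idea.

    import numpy as np

    def rotate_fast(w, u, v):
        # v' = v + 2w (u x v) + 2 u x (u x v), an algebraic rearrangement
        # of q v q^-1 that is valid for unit quaternions
        t = 2.0 * np.cross(u, v)
        return v + w * t + np.cross(u, t)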

Pairs of unit quaternions as rotations in 4D space

A pair of unit quaternions zl and zr can represent any rotation in 4D space. Given a four-dimensional vector v, and pretending that it is a quaternion, we can rotate the vector v like this:

f(v) = zl v zr,

which in matrix form, with zl = a + bi + cj + dk, zr = e + fi + gj + hk, and v written as the column of its coordinates (w, x, y, z), reads

    [ a  −b  −c  −d ] [ e  −f  −g  −h ] [ w ]
    [ b   a  −d   c ] [ f   e   h  −g ] [ x ]
    [ c   d   a  −b ] [ g  −h   e   f ] [ y ]
    [ d  −c   b   a ] [ h   g  −f   e ] [ z ]

It is straightforward to check that for each matrix M above, M Mᵀ = I; that is, that each matrix (and hence both matrices together) represents a rotation. Note that since zl(v zr) = (zl v)zr, the two matrices must commute. Therefore, there are two commuting subgroups of the set of four-dimensional rotations. Arbitrary four-dimensional rotations have 6 degrees of freedom; each matrix represents 3 of those 6 degrees of freedom.

Since an infinitesimal four-dimensional rotation can be represented by a pair of quaternions (as follows), all (non-infinitesimal) four-dimensional rotations can also be represented.
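As a sketch (assuming NumPy, quaternions as [w, x, y, z] arrays; qmul4 and rotate4 are illustrative names), the double-quaternion rotation is just two quaternion products:

    import numpy as np

    def qmul4(p, q):
        # Hamilton product of quaternions stored as [w, x, y, z]
        w1, v1 = p[0], p[1:]
        w2, v2 = q[0], q[1:]
        return np.concatenate(([w1 * w2 - v1 @ v2],
                               w1 * v2 + w2 * v1 + np.cross(v1, v2)))

    def rotate4(zl, zr, v):
        # rotate the 4-vector v, viewed as a quaternion, by v -> zl v zr
        return qmul4(qmul4(zl, v), zr)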


References
[1] Kuipers, Jack B. (1999). Quaternions and Rotation Sequences: a Primer with Applications to Orbits, Aerospace, and Virtual Reality. Princeton University Press.
[2] Altmann, Simon L. (1986). Rotations, Quaternions, and Double Groups. Dover Publications (see especially Ch. 12).
[3] Bar-Itzhack, Itzhack Y. (2000), "New method for extracting the quaternion from a rotation matrix", AIAA Journal of Guidance, Control and Dynamics 23 (6): 1085–1087 (Engineering Note), doi:10.2514/2.4654, ISSN 0731-5090

External links and resources
• Shoemake, Ken. Quaternions (http://www.cs.caltech.edu/courses/cs171/quatut.pdf)
• Simple Quaternion type and operations in more than twenty different languages (http://rosettacode.org/wiki/Simple_Quaternion_type_and_operations) on Rosetta Code
• Hart, Francis, Kauffman. Quaternion demo (http://graphics.stanford.edu/courses/cs348c-95-fall/software/quatdemo/)
• Dam, Koch, Lillholm. Quaternions, Interpolation and Animation (http://www.diku.dk/publikationer/tekniske.rapporter/1998/98-5.ps.gz)
• Byung-Uk Lee. Unit Quaternion Representation of Rotation (http://home.ewha.ac.kr/~bulee/quaternion.pdf)
• Ibanez, Luis. Quaternion Tutorial I (http://www.itk.org/CourseWare/Training/QuaternionsI.pdf)
• Ibanez, Luis. Quaternion Tutorial II (http://www.itk.org/CourseWare/Training/QuaternionsII.pdf)
• Vicci, Leandra. Quaternions and Rotations in 3-Space: The Algebra and its Geometric Interpretation (ftp://ftp.cs.unc.edu/pub/techreports/01-014.pdf)
• Howell, Thomas and Lafon, Jean-Claude. The Complexity of the Quaternion Product, TR75-245, Cornell University, 1975 (http://world.std.com/~sweetser/quaternions/ps/cornellcstr75-245.pdf)
• Berthold K.P. Horn. Some Notes on Unit Quaternions and Rotation (http://people.csail.mit.edu/bkph/articles/Quaternions.pdf)


Seven-dimensional cross product

In mathematics, the seven-dimensional cross product is a bilinear operation on vectors in a seven-dimensional space. It assigns to any two vectors a, b in ℝ⁷ a vector a × b ∈ ℝ⁷.[1] In seven dimensions there also exists a cross product involving six vectors (which is linear but not binary), discussed briefly in the section on generalizations. As in the more familiar three-dimensional cross product, the binary cross product in seven dimensions is alternating and orthogonal to the original vectors; unlike that case, however, it does not satisfy the Jacobi identity. The seven-dimensional cross product has the same relationship to the octonions as the three-dimensional cross product does to the quaternions, and apart from the trivial cases of zero and one dimensions, binary cross products can be shown to exist only in three and seven dimensions.[2]

Example

The postulates underlying construction of the seven-dimensional cross product are presented in the section Definition. As context for that discussion, the historically first example of the cross product is tabulated below using e1 to e7 as basis vectors.[3] [4] This table is one of 480 independent multiplication tables fitting the pattern that each unit vector appears once in each column and once in each row.[5] Thus, each unit vector appears as a product in the table six times, three times with a positive sign and three with a negative sign because of antisymmetry about the diagonal of zero entries. For example, e1 = e2 × e3 = e4 × e5 = e7 × e6 and the negative entries are the reversed cross products.

Alternate indexing schemes

Number      1   2   3   4   5    6    7
Letter      i   j   k   l   il   jl   kl
Alternate   i   j   k   l   m    n    o

Cayley's sample multiplication table

 ×    e1    e2    e3    e4    e5    e6    e7
e1     0    e3   −e2    e5   −e4   −e7    e6
e2   −e3     0    e1    e6    e7   −e4   −e5
e3    e2   −e1     0    e7   −e6    e5   −e4
e4   −e5   −e6   −e7     0    e1    e2    e3
e5    e4   −e7    e6   −e1     0   −e3    e2
e6    e7    e4   −e5   −e2    e3     0   −e1
e7   −e6    e5    e4   −e3   −e2    e1     0

Entries in the interior give the product of the corresponding vectors on the left and the top in that order (the product is anti-commutative). Some entries are highlighted to emphasize the symmetry. The table can be summarized by the relation[4]

ei × ej = εijk ek,

where εijk is a completely antisymmetric tensor with a positive value +1 when ijk = 123, 145, 176, 246, 257, 347, 365. By picking out the factors leading to the unit vector e1, for example, one finds the formula for the e1 component of x × y. Namely,

(x × y)₁ = x₂y₃ − x₃y₂ + x₄y₅ − x₅y₄ + x₇y₆ − x₆y₇.
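The tabulated products can be implemented mechanically from the seven index triples with εijk = +1; the sketch below (assuming NumPy; cross7 is an illustrative name) also numerically checks the orthogonality and magnitude properties stated in the Definition section:

    import numpy as np

    # triples with epsilon_ijk = +1, from the text (1-based indices)
    TRIPLES = [(1, 2, 3), (1, 4, 5), (1, 7, 6), (2, 4, 6),
               (2, 5, 7), (3, 4, 7), (3, 6, 5)]

    def cross7(x, y):
        z = np.zeros(7)
        for t in TRIPLES:
            for i, j, k in (t, t[1:] + t[:1], t[2:] + t[:2]):  # even permutations
                z[k - 1] += x[i - 1] * y[j - 1] - x[j - 1] * y[i - 1]
        return z

    x, y = np.random.randn(7), np.random.randn(7)
    z = cross7(x, y)
    print(np.isclose(z @ x, 0.0), np.isclose(z @ y, 0.0))        # orthogonality
    print(np.isclose(z @ z, (x @ x) * (y @ y) - (x @ y) ** 2))   # magnitude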


The top left 3 × 3 corner of the table is the same as the cross product in three dimensions. It also may be noticed that orthogonality of the cross product to its constituents x and y is a requirement upon the entries in this table. However, because of the many possible multiplication tables, general results for the cross product are best developed using a basis-independent formulation, as introduced next.

Definition

We can define a cross product on a Euclidean space V as a bilinear map from V × V to V mapping vectors x and y in V to another vector x × y also in V, where x × y has the properties[1] [6]

x · (x × y) = y · (x × y) = 0    (orthogonality),

and:

|x × y|² = |x|² |y|² − (x · y)²    (magnitude),

where (x·y) is the Euclidean dot product and |x| is the vector norm. The first property states that the cross product is perpendicular to its arguments, while the second property gives the magnitude of the cross product. An equivalent expression in terms of the angle θ between the vectors[7] is[8]

|x × y| = |x| |y| sin θ,

or the area of the parallelogram in the plane of x and y with the two vectors as sides.[9] As a third alternative the following can be shown to be equivalent to either expression for the magnitude:[10]

|x × y| = |x| |y| if x · y = 0.

Consequences of the defining properties

Given the three basic properties of (i) bilinearity, (ii) orthogonality and (iii) magnitude discussed in the section on definition, a nontrivial cross product exists only in three and seven dimensions.[8] [2] [10] This restriction upon dimensionality can be shown by postulating the properties required for the cross product, then deducing an equation which is only satisfied when the dimension is 0, 1, 3 or 7. In zero dimensions there is only the zero vector, while in one dimension all vectors are parallel, so in both these cases a cross product must be identically zero.

The restriction to 0, 1, 3 and 7 dimensions is related to Hurwitz's theorem, that normed division algebras are only possible in 1, 2, 4 and 8 dimensions. The cross product is derived from the product of the algebra by considering the product restricted to the 0, 1, 3, or 7 imaginary dimensions of the algebra. Again discarding trivial products, the product can only be defined this way in three and seven dimensions.[11]

In contrast with three dimensions, where the cross product is unique (apart from sign), there are many possible binary cross products in seven dimensions. One way to see this is to note that given any pair of vectors x and y ∈ ℝ⁷ and any vector v of magnitude |v| = |x||y| sin θ in the five-dimensional space perpendicular to the plane spanned by x and y, it is possible to find a cross product with a multiplication table (and an associated set of basis vectors) such that x × y = v. That leaves open the question of just how many vector pairs like x and y can be matched to specified directions like v before the limitations of any particular table intervene. Another difference between the three-dimensional cross product and a seven-dimensional cross product is:[8]

"…for the cross product x × y in ℝ⁷ there are also other planes than the linear span of x and y giving the same direction as x × y"
—Pertti Lounesto, Clifford algebras and spinors, p. 97

This statement is exemplified by every multiplication table, because any specific unit vector selected as a product occurs as a mapping from three different pairs of unit vectors, each pair appearing once with a plus sign and once with a minus sign. Each of these different pairs, of course, corresponds to another plane being mapped into the same direction.

Page 197: Vector and Matrices - Some Articles

Seven-dimensional cross product 194

Further properties follow from the definition, including the following identities:

x × y = −y × x    (anticommutativity),
x · (y × z) = y · (z × x) = z · (x × y)    (scalar triple product),
(x × y) × (x × z) = ((x × y) × z) × x + ((y × z) × x) × x + ((z × x) × x) × y    (Malcev identity),[8]
x × (x × y) = −|x|² y + (x · y) x.

Other properties follow only in the three-dimensional case, and are not satisfied by the seven-dimensional cross product, notably,

x × (y × z) = (x · z) y − (x · y) z    (vector triple product),
x × (y × z) + y × (z × x) + z × (x × y) = 0    (Jacobi identity).[8]

Coordinate expressions

To define a particular cross product, an orthonormal basis {ej} may be selected and a multiplication table provided that determines all the products ei × ej. One possible multiplication table is described in the Example section, but it is not unique.[5] Unlike three dimensions, there are many tables because every pair of unit vectors is perpendicular to five other unit vectors, allowing many choices for each cross product.

Once we have established a multiplication table, it is then applied to general vectors x and y by expressing x and y in terms of the basis and expanding x × y through bilinearity.

Lounesto's multiplication table

 ×    e1    e2    e3    e4    e5    e6    e7
e1     0    e4    e7   −e2    e6   −e5   −e3
e2   −e4     0    e5    e1   −e3    e7   −e6
e3   −e7   −e5     0    e6    e2   −e4    e1
e4    e2   −e1   −e6     0    e7    e3   −e5
e5   −e6    e3   −e2   −e7     0    e1    e4
e6    e5   −e7    e4   −e3   −e1     0    e2
e7    e3    e6   −e1    e5   −e4   −e2     0

Using e1 to e7 for the basis vectors, a different multiplication table from the one in the Introduction, leading to a different cross product, is given with anticommutativity by[8]

e1 × e2 = e4,  e2 × e3 = e5,  e3 × e4 = e6,  e4 × e5 = e7,  e5 × e6 = e1,  e6 × e7 = e2,  e7 × e1 = e3.

More compactly this rule can be written as

e_i × e_(i+1) = e_(i+3)

with i = 1...7 modulo 7 and the indices i, i + 1 and i + 3 allowed to permute evenly. Together with anticommutativity this generates the product. This rule directly produces the two diagonals immediately adjacent to the diagonal of zeros in the table. Also, from an identity in the subsection on consequences,

e_i × e_(i+3) = e_i × (e_i × e_(i+1)) = −e_(i+1),

which produces diagonals further out, and so on.

The ej component of cross product x × y is given by selecting all occurrences of ej in the table and collecting the corresponding components of x from the left column and of y from the top row. The result is:

(x × y)₁ = x₂y₄ − x₄y₂ + x₅y₆ − x₆y₅ + x₃y₇ − x₇y₃,

with the remaining components following by adding 1 (mod 7) to every index, since the compact rule above is invariant under cycling the indices.

As the cross product is bilinear the operator x×– can be written as a matrix, which takes the form

    Tx = [  0   −x4  −x7   x2  −x6   x5   x3 ]
         [  x4   0   −x5  −x1   x3  −x7   x6 ]
         [  x7   x5   0   −x6  −x2   x4  −x1 ]
         [ −x2   x1   x6   0   −x7  −x3   x5 ]
         [  x6  −x3   x2   x7   0   −x1  −x4 ]
         [ −x5   x7  −x4   x3   x1   0   −x2 ]
         [ −x3  −x6   x1  −x5   x4   x2   0  ]

The cross product is then given by

x × y = Tx y.


Different multiplication tables

Fano planes for the two multiplication tables used here.

Two different multiplication tables have been used in this article, and there are more.[5] [12] These multiplication tables are characterized by the Fano plane,[13] [14] and these are shown in the figure for the two tables used here: at top, the one described by Sabinin, Sbitneva, and Shestakov, and at bottom that described by Lounesto. The numbers under the Fano diagrams (the set of lines in the diagram) indicate a set of indices for seven independent products in each case, interpreted as ijk → e_i × e_j = e_k. The multiplication table is recovered from the Fano diagram by following either the straight line connecting any three points, or the circle in the center, with a sign as given by the arrows. For example, the first row of multiplications resulting in e1 in the above listing is obtained by following the three paths connected to e1 in the lower Fano diagram: the circular path e2 × e4 = e1, the diagonal path e3 × e7 = e1, and the edge path e6 × e1 = e5 rearranged using one of the above identities as:

e6 × (e6 × e1) = −e1 = e6 × e5,

or

e5 × e6 = e1,

also obtained directly from the diagram with the rule that any two unit vectors on a straight line are connected by multiplication to the third unit vector on that straight line with signs according to the arrows (sign of the permutation that orders the unit vectors).

It can be seen that both multiplication rules follow from the same Fano diagram by simply renaming the unit vectors, and changing the sense of the center unit vector. The question arises: how many multiplication tables are there?[14]

The question of possible multiplication tables arises, for example, when one reads another article on octonions, which uses a different one from the one given by [Cayley, say]. Usually it is remarked that all 480 possible ones are equivalent, that is, given an octonionic algebra with a multiplication table and any other valid multiplication table, one can choose a basis such that the multiplication follows the new table in this basis. One may also take the point of view, that there exist different octonionic algebras, that is, algebras with different multiplication tables. With this interpretation...all these octonionic algebras are isomorphic.
—Jörg Schray, Corinne A Manogue, Octonionic representations of Clifford algebras and triality (1994)


Using geometric algebra

The product can also be calculated using geometric algebra. The product starts with the exterior product, a bivector-valued product of two vectors:

x ∧ y = ½(xy − yx).

This is bilinear, alternate, has the desired magnitude, but is not vector valued. The vector, and so the cross product, comes from the product of this bivector with a trivector. In three dimensions, up to a scale factor there is only one trivector, the pseudoscalar of the space, and a product of the above bivector and one of the two unit trivectors gives the vector result, the dual of the bivector.

A similar calculation is done in seven dimensions, except as trivectors form a 35-dimensional space there are many trivectors that could be used, though not just any trivector will do. The trivector that gives the same product as the above coordinate transform is

v = e124 + e235 + e346 + e457 + e561 + e672 + e713.

This is combined with the exterior product to give the cross product

x × y = −(x ∧ y) ⌋ v,

where ⌋ is the left contraction operator from geometric algebra.[8] [15]

Relation to the octonions

Just as the 3-dimensional cross product can be expressed in terms of the quaternions, the 7-dimensional cross product can be expressed in terms of the octonions. After identifying ℝ⁷ with the imaginary octonions (the orthogonal complement of the real line in O), the cross product is given in terms of octonion multiplication by

x × y = Im(xy) = ½(xy − yx).

Conversely, suppose V is a 7-dimensional Euclidean space with a given cross product. Then one can define a bilinear multiplication on ℝ ⊕ V as follows:

(a, x)(b, y) = (ab − x · y, ay + bx + x × y).

The space ℝ ⊕ V with this multiplication is then isomorphic to the octonions.[16]

The cross product only exists in three and seven dimensions as one can always define a multiplication on a space of one higher dimension as above, and this space can be shown to be a normed division algebra. By Hurwitz's theorem such algebras only exist in one, two, four, and eight dimensions, so the cross product must be in zero, one, three or seven dimensions. The products in zero and one dimensions are trivial, so non-trivial cross products only exist in three and seven dimensions.[17] [18]

The failure of the 7-dimensional cross product to satisfy the Jacobi identity is due to the nonassociativity of the octonions. In fact,

x × (y × z) + y × (z × x) + z × (x × y) = −(3/2) [x, y, z],

where [x, y, z] is the associator.


Rotations

In three dimensions the cross product is invariant under the action of the rotation group SO(3), so the cross product of x and y after they are rotated is the image of x × y under the rotation. But this invariance is not true in seven dimensions; that is, the cross product is not invariant under the group of rotations in seven dimensions, SO(7). Instead it is invariant under the exceptional Lie group G2, a subgroup of SO(7).[16] [8]

Generalizations

Non-trivial binary cross products exist only in three and seven dimensions. But if the restriction that the product is binary is lifted, so that products of more than two vectors are allowed, then more products are possible.[19] [20] As in the two-vector case the product must be vector valued, linear, and anti-commutative in any two of the vectors in the product. The product should satisfy orthogonality, so it is orthogonal to all its members. This means no more than n − 1 vectors can be used in n dimensions. The magnitude of the product should equal the volume of the parallelotope with the vectors as edges, which can be calculated using the Gram determinant. So the conditions are

a_i · (a_1 × ··· × a_k) = 0    (orthogonality),

|a_1 × ··· × a_k|² = det(a_i · a_j)    (Gram determinant).

The Gram determinant is the squared volume of the parallelotope with a_1, ..., a_k as edges. If there are just two vectors x and y it simplifies to the condition for the binary cross product given above, that is

|x × y|² = |x|² |y|² − (x · y)².

With these conditions a non-trivial cross product only exists:
• as a binary product in three and seven dimensions
• as a product of n − 1 vectors in n > 3 dimensions
• as a product of three vectors in eight dimensions

The product of n − 1 vectors in n dimensions is the Hodge dual of the exterior product of n − 1 vectors. One version of the product of three vectors in eight dimensions is given in terms of the left contraction, where v is the same trivector as used in seven dimensions, ⌋ is again the left contraction, and w = −ve₁₂...₇ is a 4-vector.

Notes
[1] WS Massey (1983). "Cross products of vectors in higher dimensional Euclidean spaces" (http://www.jstor.org/stable/2323537). The American Mathematical Monthly (Mathematical Association of America) 90 (10): 697–701. doi:10.2307/2323537.
[2] WS Massey (1983). "cited work" (http://www.jstor.org/stable/2323537). The American Mathematical Monthly 90 (10): 697–701. "If one requires only three basic properties of the cross product ... it turns out that a cross product of vectors exists only in 3-dimensional and 7-dimensional Euclidean space."
[3] This table is due to Arthur Cayley (1845) and John T. Graves (1843). See G Gentili, C Stoppato, DC Struppa and F Vlacci (2009). "Recent developments for regular functions of a hypercomplex variable" (http://books.google.com/?id=H-5v6pPpyb4C&pg=PA168). In Irene Sabadini, M Shapiro, F Sommen. Hypercomplex analysis (Conference on quaternionic and Clifford analysis; proceedings ed.). Birkaüser. p. 168. ISBN 9783764398927.
[4] Lev Vasilʹevitch Sabinin, Larissa Sbitneva, I. P. Shestakov (2006). "§17.2 Octonion algebra and its regular bimodule representation" (http://books.google.com/?id=_PEWt18egGgC&pg=PA235). Non-associative algebra and its applications. CRC Press. p. 235. ISBN 0824726693.
[5] Rafał Abłamowicz, Pertti Lounesto, Josep M. Parra (1996). "§ Four octonionic basis numberings" (http://books.google.com/?id=OpbY_abijtwC&pg=PA202). Clifford algebras with numeric and symbolic computations. Birkhäuser. p. 202. ISBN 0817639071.


[6] Mappings are restricted to be bilinear by (Massey 1993) and Robert B Brown and Alfred Gray (1967). "Vector cross products" (http://www.springerlink.com/content/a42n878560522255/). Commentarii Mathematici Helvetici (Birkhäuser Basel) 42 (1/December): 222–236. doi:10.1007/BF02564418.
[7] The definition of angle in n dimensions ordinarily is defined in terms of the dot product as cos θ = (x · y)/(|x| |y|), where θ is the angle between the vectors. Consequently, this property of the cross product provides its magnitude as |x × y|² = |x|² |y|² (1 − cos² θ). From the Pythagorean trigonometric identity this magnitude equals |x| |y| sin θ. See Francis Begnaud Hildebrand (1992). Methods of applied mathematics (http://books.google.com/?id=17EZkWPz_eQC&pg=PA24) (Reprint of Prentice-Hall 1965 2nd ed.). Courier Dover Publications. p. 24. ISBN 0486670023.
[8] Lounesto, pp. 96–97
[9] Kendall, M. G. (2004). A Course in the Geometry of N Dimensions (http://books.google.com/?id=_dFJ6pSzRLkC&pg=PA19). Courier Dover Publications. p. 19. ISBN 0486439275.
[10] Z.K. Silagadze (2002). Multi-dimensional vector product. arXiv:math.RA/0204357.
[11] Nathan Jacobson (2009). Basic algebra I (http://books.google.com/?id=_K04QgAACAAJ&dq=isbn=0486471896&cd=1) (Reprint of Freeman 1974 2nd ed.). Dover Publications. pp. 417–427. ISBN 0486471896.
[12] Further discussion of the tables and the connection of the Fano plane to these tables is found here: Tony Smith. "Octonion products and lattices" (http://www.valdostamuseum.org/hamsmith/480op.html). Retrieved 2010-07-11.
[13] Rafał Abłamowicz, Bertfried Fauser (2000). Clifford Algebras and Their Applications in Mathematical Physics: Algebra and physics (http://books.google.com/?id=yvCC94xzJG8C&pg=PA26). Springer. p. 26. ISBN 0817641823.
[14] Jörg Schray, Corinne A. Manogue (1996). "Octonionic representations of Clifford algebras and triality" (http://www.springerlink.com/content/w1884mlmj88u5205/). Foundations of Physics (Springer) 26 (1/January): 17–70. doi:10.1007/BF02058887. Available as ArXiv preprint (http://arxiv.org/abs/hep-th/9407179v1); Figure 1 is located here (http://arxiv.org/PS_cache/hep-th/ps/9407/9407179v1.fig1-1.png).
[15] Bertfried Fauser (2004). "§18.4.2 Contractions" (http://books.google.com/?id=b6mbSCv_MHMC&pg=PA292). In Pertti Lounesto, Rafał Abłamowicz. Clifford algebras: applications to mathematics, physics, and engineering. Birkhäuser. pp. 292 ff. ISBN 0817635254.
[16] John C. Baez (2001). "The Octonions" (http://math.ucr.edu/home/baez/octonions/oct.pdf). Bull. Amer. Math. 39: 38.
[17] Elduque, Alberto (2004). Vector cross products (http://www.unizar.es/matematicas/algebra/elduque/Talks/crossproducts.pdf).
[18] Darpö, Erik (2009). "Vector product algebras". Bulletin of the London Mathematical Society 41 (5): 898–902. doi:10.1112/blms/bdp066. See also Real vector product algebras (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.66.4&rep=rep1&type=pdf)
[19] Lounesto, §7.5: Cross products of k vectors in ℝn, p. 98
[20] Jean H. Gallier (2001). "Problem 7.10 (2)" (http://books.google.com/?id=CTHaW9ft1ZMC&pg=PA244#v=onepage&q). Geometric methods and applications: for computer science and engineering. Springer. p. 244. ISBN 0387950443.

References
• Brown, Robert B.; Gray, Alfred (1967). "Vector cross products". Commentarii Mathematici Helvetici 42 (1): 222–236. doi:10.1007/BF02564418.
• Lounesto, Pertti (2001). Clifford algebras and spinors (http://books.google.com/?id=kOsybQWDK4oC). Cambridge, UK: Cambridge University Press. ISBN 0-521-00551-5.
• Silagadze, Z.K. (2002). "Multi-dimensional vector product" (http://iopscience.iop.org/0305-4470/35/23/310). J Phys A: Math Gen 35: 4949. doi:10.1088/0305-4470/35/23/310. Also available as ArXiv reprint arXiv:math.RA/0204357.
• Massey, W.S. (1983). Cross products of vectors in higher dimensional Euclidean spaces. JSTOR 2323537.


Octonion

In mathematics, the octonions are a normed division algebra over the real numbers. There are only four such algebras, the other three being the quaternions, complex numbers and real numbers. The octonions are the largest such algebra, with eight dimensions, double the number of dimensions of the quaternions, of which they are an extension. They are noncommutative and nonassociative, but satisfy a weaker form of associativity, power associativity. The octonion algebra is usually represented by the capital letter O, using boldface O or blackboard bold 𝕆.

Octonions are not as well known as the quaternions and complex numbers, which are much more widely studied and used. Despite this they have some interesting properties and are related to a number of exceptional structures in mathematics, among them the exceptional Lie groups. Additionally, octonions have applications in fields such as string theory, special relativity, and quantum logic.

History

The octonions were discovered in 1843 by John T. Graves, inspired by his friend William Hamilton's discovery of quaternions. Graves called his discovery octaves. They were discovered independently by Arthur Cayley (1845). They are sometimes referred to as Cayley numbers or the Cayley algebra.

Definition

The octonions can be thought of as octets (or 8-tuples) of real numbers. Every octonion is a real linear combination of the unit octonions e0, e1, e2, e3, e4, e5, e6, e7, where e0 is the scalar element. That is, every octonion x can be written in the form

x = x₀e0 + x₁e1 + x₂e2 + x₃e3 + x₄e4 + x₅e5 + x₆e6 + x₇e7

with real coefficients xᵢ.

Addition of octonions is accomplished by adding corresponding coefficients, as with the complex numbers and quaternions. By linearity, multiplication of octonions is completely determined once given a multiplication table for the unit octonions such as that below.[1]

Cayley's octonion multiplication table

 ×    e0    e1    e2    e3    e4    e5    e6    e7
e0    e0    e1    e2    e3    e4    e5    e6    e7
e1    e1    −1    e3   −e2    e5   −e4   −e7    e6
e2    e2   −e3    −1    e1    e6    e7   −e4   −e5
e3    e3    e2   −e1    −1    e7   −e6    e5   −e4
e4    e4   −e5   −e6   −e7    −1    e1    e2    e3
e5    e5    e4   −e7    e6   −e1    −1   −e3    e2
e6    e6    e7    e4   −e5   −e2    e3    −1   −e1
e7    e7   −e6    e5    e4   −e3   −e2    e1    −1

Note the antisymmetry about the diagonal of −1 values. Often the numbering is replaced by a letter format:

Page 204: Vector and Matrices - Some Articles

Octonion 201

Number      1   2   3   4   5    6    7
Letter      i   j   k   l   il   jl   kl
Alternate   i   j   k   l   m    n    o

Some entries are tinted to emphasize the table symmetry. This particular table has a diagonal from lower left to upper right of e7's, and the bottom row and rightmost column are a reverse ordering of the index rows at the top and left of the table.

The table can be summarized by the relation:[2]

ei ej = εijk ek − δij e0,

where εijk is a completely antisymmetric tensor with a positive value +1 when ijk = 123, 145, 176, 246, 257, 347, 365, and:

ei e0 = e0 ei = ei,    e0 e0 = e0,

and e0 is the scalar element.

The basis for the octonions given here is not nearly as universal as the standard basis for the quaternions. This table is one of 480 possible tables.[3] Regarding the 480 possible multiplication tables, the following is said:[4]

The question of possible multiplication tables arises, for example, when one reads another article on octonions, which uses a different one from the one given by [Cayley, say]. Usually it is remarked that all 480 possible ones are equivalent, that is, given an octonionic algebra with a multiplication table and any other valid multiplication table, one can choose a basis such that the multiplication follows the new table in this basis. One may also take the point of view, that there exist different octonionic algebras, that is, algebras with different multiplication tables. With this interpretation...all these octonionic algebras are isomorphic.
—Jörg Schray, Corinne A Manogue, Octonionic representations of Clifford algebras and triality (1994)

Cayley–Dickson construction

A more systematic way of defining the octonions is via the Cayley–Dickson construction. Just as quaternions can be defined as pairs of complex numbers, the octonions can be defined as pairs of quaternions. Addition is defined pairwise. The product of two pairs of quaternions (a, b) and (c, d) is defined by

(a, b)(c, d) = (ac − d* b, da + bc*),

where z* denotes the conjugate of the quaternion z. This definition is equivalent to the one given above when the eight unit octonions are identified with the pairs

(1,0), (i,0), (j,0), (k,0), (0,1), (0,i), (0,j), (0,k).
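The construction is easy to implement on top of a quaternion product. The following sketch assumes NumPy, quaternions stored as [w, x, y, z] arrays, and octonions stored as pairs of such arrays; omul is an illustrative name:

    import numpy as np

    def qmul(a, b):
        # Hamilton product of quaternions stored as [w, x, y, z]
        w1, v1 = a[0], a[1:]
        w2, v2 = b[0], b[1:]
        return np.concatenate(([w1 * w2 - v1 @ v2],
                               w1 * v2 + w2 * v1 + np.cross(v1, v2)))

    def qconj(a):
        # quaternion conjugate: negate the vector part
        return np.concatenate(([a[0]], -a[1:]))

    def omul(x, y):
        # Cayley-Dickson product: (a, b)(c, d) = (ac - d* b, da + b c*)
        a, b = x
        c, d = y
        return (qmul(a, c) - qmul(qconj(d), b),
                qmul(d, a) + qmul(b, qconj(c)))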


Fano plane mnemonic

A simple mnemonic for the products of the unit octonions.

A convenient mnemonic for remembering the products of unit octonions is given by the diagram at the right, which represents the multiplication table of Cayley and Graves.[1] [5] This diagram with seven points and seven lines (the circle through 1, 2, and 3 is considered a line) is called the Fano plane. The lines are oriented. The seven points correspond to the seven standard basis elements of Im(O) (see definition below). Each pair of distinct points lies on a unique line and each line runs through exactly three points.

Let (a, b, c) be an ordered triple of points lying on a given line with the order specified by the direction of the arrow. Then multiplication is given by

ab = c and ba = −c,

together with cyclic permutations. These rules together with
• 1 is the multiplicative identity,
• eᵢ² = −1 for each point eᵢ in the diagram
completely define the multiplicative structure of the octonions. Each of the seven lines generates a subalgebra of O isomorphic to the quaternions H.

Conjugate, norm, and inverse

The conjugate of an octonion

x = x₀e0 + x₁e1 + x₂e2 + x₃e3 + x₄e4 + x₅e5 + x₆e6 + x₇e7

is given by

x* = x₀e0 − x₁e1 − x₂e2 − x₃e3 − x₄e4 − x₅e5 − x₆e6 − x₇e7.

Conjugation is an involution of O and satisfies (xy)* = y* x* (note the change in order).

The real part of x is defined as ½(x + x*) = x₀ and the imaginary part as ½(x − x*). The set of all purely imaginary octonions spans a 7-dimensional subspace of O, denoted Im(O).

Conjugation of octonions satisfies the equation

x* = −(1/6)(x + (e1 x)e1 + (e2 x)e2 + ··· + (e7 x)e7).

The norm of the octonion x is defined as

‖x‖ = √(x* x).

The square root is well-defined here as x* x = x x* is always a nonnegative real number:

‖x‖² = x* x = x₀² + x₁² + ··· + x₇².

This norm agrees with the standard Euclidean norm on ℝ⁸.

The existence of a norm on O implies the existence of inverses for every nonzero element of O. The inverse of x ≠ 0 is given by

x⁻¹ = x* / ‖x‖².

It satisfies x x⁻¹ = x⁻¹ x = 1.


Properties

Octonionic multiplication is neither commutative:

ei ej = −ej ei ≠ ej ei if i, j are distinct and non-zero,

nor associative:

(ei ej) ek = −ei (ej ek) ≠ ei (ej ek) if i, j, k are distinct, non-zero, and ei ej ≠ ±ek.

The octonions do satisfy a weaker form of associativity: they are alternative. This means that the subalgebra generated by any two elements is associative. Actually, one can show that the subalgebra generated by any two elements of O is isomorphic to R, C, or H, all of which are associative. Because of their non-associativity, octonions don't have matrix representations, unlike quaternions.

The octonions do retain one important property shared by R, C, and H: the norm on O satisfies

‖xy‖ = ‖x‖ ‖y‖.

This implies that the octonions form a nonassociative normed division algebra. The higher-dimensional algebras defined by the Cayley–Dickson construction (e.g. the sedenions) all fail to satisfy this property. They all have zero divisors.

Wider number systems exist which have a multiplicative modulus (e.g. 16-dimensional conic sedenions). Their modulus is defined differently from their norm, and they also contain zero divisors.

It turns out that the only normed division algebras over the reals are R, C, H, and O. These four algebras also form the only alternative, finite-dimensional division algebras over the reals (up to isomorphism).

Not being associative, the nonzero elements of O do not form a group. They do, however, form a loop, indeed a Moufang loop.

Automorphisms

An automorphism, A, of the octonions is an invertible linear transformation of O which satisfies

A(xy) = A(x) A(y).

The set of all automorphisms of O forms a group called G2. The group G2 is a simply connected, compact, real Lie group of dimension 14. This group is the smallest of the exceptional Lie groups and is isomorphic to the subgroup of SO(7) that preserves any chosen particular vector in its 8-dimensional real spinor representation.

See also: PSL(2,7), the automorphism group of the Fano plane.

Quotes

"The real numbers are the dependable breadwinner of the family, the complete ordered field we all rely on. The complex numbers are a slightly flashier but still respectable younger brother: not ordered, but algebraically complete. The quaternions, being noncommutative, are the eccentric cousin who is shunned at important family gatherings. But the octonions are the crazy old uncle nobody lets out of the attic: they are nonassociative."
—John Baez


In-line sources and notes
[1] This table is due to Arthur Cayley (1845) and John T. Graves (1843). See G Gentili, C Stoppato, DC Struppa and F Vlacci (2009), "Recent developments for regular functions of a hypercomplex variable" (http://books.google.com/?id=H-5v6pPpyb4C&pg=PA168), in Irene Sabadini, M Shapiro, F Sommen, Hypercomplex analysis (Conference on quaternionic and Clifford analysis; proceedings ed.), Birkaüser, p. 168, ISBN 9783764398927.
[2] Lev Vasilʹevitch Sabinin, Larissa Sbitneva, I. P. Shestakov (2006), "§17.2 Octonion algebra and its regular bimodule representation" (http://books.google.com/?id=_PEWt18egGgC&pg=PA235), Non-associative algebra and its applications, CRC Press, p. 235, ISBN 0824726693.
[3] Rafał Abłamowicz, Pertti Lounesto, Josep M. Parra (1996), "§ Four octonionic basis numberings" (http://books.google.com/?id=OpbY_abijtwC&pg=PA202), Clifford algebras with numeric and symbolic computations, Birkhäuser, p. 202, ISBN 0817639071.
[4] Jörg Schray, Corinne A. Manogue (1996), "Octonionic representations of Clifford algebras and triality" (http://www.springerlink.com/content/w1884mlmj88u5205/), Foundations of Physics (Springer) 26 (1/January): 17–70, doi:10.1007/BF02058887. Available as ArXiv preprint (http://arxiv.org/abs/hep-th/9407179v1); Figure 1 is located here (http://arxiv.org/PS_cache/hep-th/ps/9407/9407179v1.fig1-1.png).
[5] Tevian Dray & Corinne A Manogue (2004), "Chapter 29: Using octonions to describe fundamental particles" (http://books.google.com/?id=b6mbSCv_MHMC&pg=PA452), in Pertti Lounesto, Rafał Abłamowicz, Clifford algebras: applications to mathematics, physics, and engineering, Birkhäuser, p. 452, ISBN 0817635254. Figure 29.1: Representation of multiplication table on projective plane.

References
• Baez, John (2002), "The Octonions" (http://www.ams.org/bull/2002-39-02/S0273-0979-01-00934-X/home.html), Bull. Amer. Math. Soc. 39 (2): 145–205, doi:10.1090/S0273-0979-01-00934-X. Online HTML versions at Baez's site (http://math.ucr.edu/home/baez/octonions/) or see the lanl.arXiv.org copy (http://xxx.lanl.gov/abs/math/0105155v4).
• Cayley, Arthur (1845), "On Jacobi's elliptic functions, in reply to the Rev..; and on quaternions", Philos. Mag. 26: 208–211. Appendix reprinted in The Collected Mathematical Papers, Johnson Reprint Co., New York, 1963, p. 127.
• Conway, John Horton; Smith, Derek A. (2003), On Quaternions and Octonions: Their Geometry, Arithmetic, and Symmetry, A. K. Peters, Ltd., ISBN 1-56881-134-9. (Review (http://nugae.wordpress.com/2007/04/25/on-quaternions-and-octonions/)).

External links
• Octonions and the Fano Plane Mnemonic (video demonstration) (http://www.youtube.com/watch?v=5sLnYi_AbEI)


Multilinear algebra

In mathematics, multilinear algebra extends the methods of linear algebra. Just as linear algebra is built on the concept of a vector and develops the theory of vector spaces, multilinear algebra builds on the concepts of p-vectors and multivectors with Grassmann algebra.

Origin

In a vector space of dimension n, one usually considers only the vectors. For Hermann Grassmann and others this presumption misses the complexity of considering the structures of pairs, triples, and general multivectors. Since there are several combinatorial possibilities, the space of multivectors turns out to have 2ⁿ dimensions. The abstract formulation of the determinant is the most immediate application. Multilinear algebra also has applications in the mechanical study of material response to stress and strain with various moduli of elasticity. This practical reference led to the use of the word tensor to describe the elements of the multilinear space. The extra structure in a multilinear space has led it to play an important role in various studies in higher mathematics. Though Grassmann started the subject in 1844 with his Ausdehnungslehre, and re-published in 1862, his work was slow to find acceptance as ordinary linear algebra provided sufficient challenges to comprehension.

The topic of multilinear algebra is applied in some studies of multivariate calculus and manifolds where the Jacobian matrix comes into play. The infinitesimal differentials of single-variable calculus become differential forms in multivariate calculus, and their manipulation is done with exterior algebra.

After some preliminary work by Elwin Bruno Christoffel, a major advance in multilinear algebra came in the work of Gregorio Ricci-Curbastro and Tullio Levi-Civita (see references). It was the absolute differential calculus form of multilinear algebra that Marcel Grossmann and Michele Besso introduced to Albert Einstein. The publication in 1915 by Einstein of a general relativity explanation for the precession of the perihelion of Mercury established multilinear algebra and tensors as important mathematics.

Use in algebraic topology

Around the middle of the 20th century the study of tensors was reformulated more abstractly. The Bourbaki group's treatise Multilinear Algebra was especially influential; in fact the term multilinear algebra was probably coined there.

One reason at the time was a new area of application, homological algebra. The development of algebraic topology during the 1940s gave additional incentive for the development of a purely algebraic treatment of the tensor product. The computation of the homology groups of the product of two spaces involves the tensor product; but only in the simplest cases, such as a torus, is it directly calculated in that fashion (see Künneth theorem). The topological phenomena were subtle enough to need better foundational concepts; technically speaking, the Tor functors had to be defined.

The material to organise was quite extensive, including also ideas going back to Hermann Grassmann, the ideas from the theory of differential forms that had led to De Rham cohomology, as well as more elementary ideas such as the wedge product that generalises the cross product.

The resulting rather severe write-up of the topic (by Bourbaki) entirely rejected one approach in vector calculus (the quaternion route, that is, in the general case, the relation with Lie groups). They instead applied a novel approach using category theory, with the Lie group approach viewed as a separate matter. Since this leads to a much cleaner treatment, there was probably no going back in purely mathematical terms. (Strictly, the universal property approach was invoked; this is somewhat more general than category theory, and the relationship between the two as alternate ways was also being clarified at the same time.)


Indeed what was done is almost precisely to explain that tensor spaces are the constructions required to reduce multilinear problems to linear problems. This purely algebraic attack conveys no geometric intuition.

Its benefit is that by re-expressing problems in terms of multilinear algebra, there is a clear and well-defined 'best solution': the constraints the solution exerts are exactly those you need in practice. In general there is no need to invoke any ad hoc construction, geometric idea, or recourse to co-ordinate systems. In the category-theoretic jargon, everything is entirely natural.

Conclusion on the abstract approach

In principle the abstract approach can recover everything done via the traditional approach. In practice this may not seem so simple. On the other hand the notion of natural is consistent with the general covariance principle of general relativity. The latter deals with tensor fields (tensors varying from point to point on a manifold), but covariance asserts that the language of tensors is essential to the proper formulation of general relativity.

Some decades later the rather abstract view coming from category theory was tied up with the approach that had been developed in the 1930s by Hermann Weyl (in his book The Classical Groups). In a way this took the theory full circle, connecting once more the content of old and new viewpoints.

Topics in multilinear algebra

The subject matter of multilinear algebra has evolved less than the presentation down the years. Here are further pages centrally relevant to it:
• tensor
• dual space
• bilinear operator
• inner product
• multilinear map
• Cramer's rule
• component-free treatment of tensors
• Kronecker delta
• tensor contraction
• mixed tensor
• Levi-Civita symbol
• tensor algebra, free algebra
• symmetric algebra, symmetric power
• exterior derivative
• Einstein notation
• symmetric tensor
• metric tensor

There is also a glossary of tensor theory.


From the point of view of applications

Some of the ways in which multilinear algebra concepts are applied:
• classical treatment of tensors
• dyadic tensor
• bra-ket notation
• geometric algebra
• Clifford algebra
• pseudoscalar
• pseudovector
• spinor
• outer product
• hypercomplex number
• multilinear subspace learning

References
• Hermann Grassmann (2000) Extension Theory, American Mathematical Society. Translation by Lloyd Kannenberg of the 1862 Ausdehnungslehre.
• Wendell H. Fleming (1965) Functions of Several Variables, Addison-Wesley. Second edition (1977) Springer, ISBN 3540902066. Chapter: Exterior algebra and differential calculus (# 6 in 1st ed., # 7 in 2nd).
• Ricci-Curbastro, Gregorio; Levi-Civita, Tullio (1900), "Méthodes de calcul différentiel absolu et leurs applications", Mathematische Annalen 54 (1): 125–201, doi:10.1007/BF01454201, ISSN 1432-1807


Pseudovector

A loop of wire (black), carrying a current, creates a magnetic field (blue). If the position and current of the wire are reflected across the dotted line, the magnetic field it generates would not be reflected: instead, it would be reflected and reversed. The position of the wire and its current are (polar) vectors, but the magnetic field is a pseudovector.[1]

In physics and mathematics, a pseudovector (or axial vector) is a quantity that transforms like a vector under a proper rotation, but gains an additional sign flip under an improper rotation such as a reflection. Geometrically it is the opposite, of equal magnitude but in the opposite direction, of its mirror image. This is as opposed to a true or polar vector (more formally, a contravariant vector), which on reflection matches its mirror image.

In three dimensions the pseudovector p is associated with the cross product of two polar vectors a and b:[2]

p = a × b.

The vector p calculated this way is a pseudovector. One example is the normal to a plane. A plane can be defined by two non-parallel vectors, a and b,[3] which can be said to span the plane. The vector a × b is a normal to the plane (there are two normals, one on each side; which is used can be determined by the right-hand rule), and is a pseudovector. This has consequences in computer graphics, where it has to be considered when transforming surface normals.

A number of quantities in physics behave as pseudovectors rather than polar vectors, including magnetic field and angular velocity. In mathematics pseudovectors are equivalent to three-dimensional bivectors, from which the transformation rules of pseudovectors can be derived. More generally, in n-dimensional geometric algebra pseudovectors are the elements of the algebra with dimension n − 1, written Λⁿ⁻¹ℝⁿ. The label 'pseudo' can be further generalized to pseudoscalars and pseudotensors, both of which gain an extra sign flip under improper rotations compared to a true scalar or tensor.

Physical examples

Physical examples of pseudovectors include the magnetic field, torque, vorticity, and the angular momentum.

Often, the distinction between vectors and pseudovectors is overlooked, but it becomes important in understanding and exploiting the effect of symmetry on the solution to physical systems. For example, consider the case of an electrical current loop in the z = 0 plane, which has a magnetic field at z = 0 that is oriented in the z direction. This system is symmetric (invariant) under mirror reflections through the plane (an improper rotation), so the magnetic field should be unchanged by the reflection. But reflecting the magnetic field through that plane naively appears to change its sign if it is viewed as a vector field. This contradiction is resolved by realizing that the mirror reflection of the field induces an extra sign flip because of its pseudovector nature, so the mirror flip in the end leaves the magnetic field unchanged as expected.


Each wheel of a car driving away from an observer has an angular momentum pseudovector pointing left. The same is true for the mirror image of the car.

As another example, consider the pseudovector angular momentum L= r × p. Driving in a car, and looking forward, each of the wheels hasan angular momentum vector pointing to the left. If the world isreflected in a mirror which switches the left and right side of the car,the "reflection" of this angular momentum "vector" (viewed as anordinary vector) points to the right, but the actual angular momentumvector of the wheel still points to the left, corresponding to the extraminus sign in the reflection of a pseudovector. This reflects the factthat the wheels are still turning forward. In comparison, the behaviourof a regular vector, such as the position of the car, is quite different.

To the extent that physical laws would be the same if the universe were reflected in a mirror (equivalently, invariant under parity), the sum of a vector and a pseudovector is not meaningful. However, the weak force, which governs beta decay, does depend on the chirality of the universe, and in this case pseudovectors and vectors are added.

Details

The definition of a "vector" in physics (including both polar vectors and pseudovectors) is more specific than the mathematical definition of "vector" (namely, any element of an abstract vector space). Under the physics definition, a "vector" is required to have components that "transform" in a certain way under a proper rotation: in particular, if everything in the universe were rotated, the vector would rotate in exactly the same way. (The coordinate system is fixed in this discussion; in other words this is the perspective of active transformations.) Mathematically, if everything in the universe undergoes a rotation described by a rotation matrix R, so that a displacement vector x is transformed to x′ = Rx, then any "vector" v must be similarly transformed to v′ = Rv. This important requirement is what distinguishes a vector (which might be composed of, for example, the x-, y-, and z-components of velocity) from any other triplet of physical quantities. (For example, the length, width, and height of a rectangular box cannot be considered the three components of a vector, since rotating the box does not appropriately transform these three components.)

(In the language of differential geometry, this requirement is equivalent to defining a vector to be a tensor of contravariant rank one.)

The discussion so far only relates to proper rotations, i.e. rotations about an axis. However, one can also consider improper rotations, i.e. a mirror reflection possibly followed by a proper rotation. (One example of an improper rotation is inversion.) Suppose everything in the universe undergoes an improper rotation described by the rotation matrix R, so that a position vector x is transformed to x′ = Rx. If the vector v is a polar vector, it will be transformed to v′ = Rv. If it is a pseudovector, it will be transformed to v′ = −Rv.

The transformation rules for polar vectors and pseudovectors can be compactly stated as

v′ = Rv (polar vector)
v′ = (det R) Rv (pseudovector)

where the symbols are as described above, and the rotation matrix R can be either proper or improper. The symbol det denotes determinant; this formula works because the determinants of proper and improper rotation matrices are +1 and −1, respectively.
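These two rules are easy to check numerically. The following is a minimal numpy sketch (the vectors and the mirror matrix are arbitrary illustrative choices, not taken from the text): the cross product of two polar vectors follows the pseudovector rule, picking up the factor det R under a reflection, while a naive application of the polar rule fails.

    import numpy as np

    R = np.diag([1.0, 1.0, -1.0])      # reflection through the z = 0 plane (improper, det = -1)

    a = np.array([1.0, 2.0, 3.0])      # two arbitrary polar vectors
    b = np.array([-2.0, 0.5, 1.0])
    p = np.cross(a, b)                 # their cross product: a pseudovector

    # Polar rule: v' = R v.  Pseudovector rule: p' = det(R) R p.
    p_from_transformed = np.cross(R @ a, R @ b)
    print(np.allclose(p_from_transformed, np.linalg.det(R) * (R @ p)))   # True
    print(np.allclose(p_from_transformed, R @ p))                        # False: naive polar rule fails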


Behavior under addition, subtraction, scalar multiplication

Suppose v1 and v2 are known pseudovectors, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to

v3′ = v1′ + v2′ = (det R)Rv1 + (det R)Rv2 = (det R)R(v1 + v2) = (det R)Rv3.

So v3 is also a pseudovector. Similarly one can show that the difference between two pseudovectors is a pseudovector, that the sum or difference of two polar vectors is a polar vector, that multiplying a polar vector by any real number yields another polar vector, and that multiplying a pseudovector by any real number yields another pseudovector.

On the other hand, suppose v1 is known to be a polar vector, v2 is known to be a pseudovector, and v3 is defined to be their sum, v3 = v1 + v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to

v3′ = Rv1 + (det R)Rv2.

Therefore, v3 is neither a polar vector nor a pseudovector. For an improper rotation, v3 does not in general even keep the same magnitude:

|v3| = |v1 + v2|, but |v3′| = |v1 − v2|.

If the magnitude of v3 were to describe a measurable physical quantity, that would mean that the laws of physics would not appear the same if the universe were viewed in a mirror. In fact, this is exactly what happens in the weak interaction: certain radioactive decays treat "left" and "right" differently, a phenomenon which can be traced to the summation of a polar vector with a pseudovector in the underlying theory. (See parity violation.)

Behavior under cross products

Under inversion the two vectors change sign, but their cross product is invariant (black are the two original vectors, grey are the inverted vectors, and red is their mutual cross product).

For a rotation matrix R, either proper or improper, the following mathematical equation is always true:

(Rv1) × (Rv2) = (det R)(R(v1 × v2)),

where v1 and v2 are any three-dimensional vectors. (This equation can be proven either through a geometric argument or through an algebraic calculation, and is well known.)

Suppose v1 and v2 are known polar vectors, and v3 is defined to be their cross product, v3 = v1 × v2. If the universe is transformed by a rotation matrix R, then v3 is transformed to

v3′ = Rv1 × Rv2 = (det R)R(v1 × v2) = (det R)Rv3.

So v3 is a pseudovector. Similarly, one can show the following rules (a numerical check follows the list):
• polar vector × polar vector = pseudovector
• pseudovector × pseudovector = pseudovector
• polar vector × pseudovector = polar vector

• pseudovector × polar vector = polar vector
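A short numpy check of the identity behind these rules (the rotation and reflection matrices are illustrative choices): the identity holds for proper and improper R alike, and the extra det R factor is exactly what makes the product of two polar vectors a pseudovector.

    import numpy as np

    def cross_identity_holds(R, v1, v2):
        # (R v1) x (R v2) == det(R) * R (v1 x v2), for proper or improper R
        return np.allclose(np.cross(R @ v1, R @ v2),
                           np.linalg.det(R) * (R @ np.cross(v1, v2)))

    v1, v2 = np.array([1.0, 0.0, 2.0]), np.array([0.0, 3.0, -1.0])

    theta = 0.4
    proper = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                       [np.sin(theta),  np.cos(theta), 0.0],
                       [0.0,            0.0,           1.0]])   # det = +1
    improper = proper @ np.diag([1.0, 1.0, -1.0])               # det = -1

    print(cross_identity_holds(proper, v1, v2))     # True
    print(cross_identity_holds(improper, v1, v2))   # True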

Examples

From the definition, it is clear that a displacement vector is a polar vector. The velocity vector is a displacement vector (a polar vector) divided by time (a scalar), so is also a polar vector. Likewise, the momentum vector is the velocity vector (a polar vector) times mass (a scalar), so is a polar vector. Angular momentum is the cross product of a displacement (a polar vector) and momentum (a polar vector), and is therefore a pseudovector. Continuing this way, it is straightforward to classify any vector as either a pseudovector or polar vector.

The right-hand rule

Above, pseudovectors have been discussed using active transformations. An alternate approach, more along the lines of passive transformations, is to keep the universe fixed, but switch "right-hand rule" with "left-hand rule" and vice-versa everywhere in physics, in particular in the definition of the cross product. Any polar vector (e.g., a translation vector) would be unchanged, but pseudovectors (e.g., the magnetic field vector at a point) would switch signs. Nevertheless, there would be no physical consequences, apart from in the parity-violating phenomena such as certain radioactive decays.[4]

Geometric algebra

In geometric algebra the basic elements are vectors, and these are used to build a hierarchy of elements using the definitions of products in this algebra. In particular, the algebra builds pseudovectors from vectors.

The basic multiplication in the geometric algebra is the geometric product, denoted by simply juxtaposing two vectors as in ab. This product is expressed as:

ab = a · b + a ∧ b,

where the leading term is the customary vector dot product and the second term is called the wedge product. Using the postulates of the algebra, all combinations of dot and wedge products can be evaluated. A terminology to describe the various combinations is provided. For example, a multivector is a summation of k-fold wedge products of various k-values. A k-fold wedge product also is referred to as a k-blade.

In the present context the pseudovector is one of these combinations. This term is attached to a different multivector depending upon the dimensions of the space (that is, the number of linearly independent vectors in the space). In three dimensions, the most general 2-blade or bivector can be expressed as a single wedge product and is a pseudovector.[5] In four dimensions, however, the pseudovectors are trivectors.[6] In general, it is an (n − 1)-blade, where n is the dimension of the space and algebra.[7] An n-dimensional space has n vectors and also n pseudovectors. Each pseudovector is formed from the outer (wedge) product of all but one of the n vectors. For instance, in four dimensions where the vectors are: e1, e2, e3, e4, the pseudovectors can be written as: e234, e134, e124, e123.
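The counting of pseudovectors can be made concrete with a small sketch (plain Python, not a geometric algebra library): it lists the basis (n − 1)-blades by choosing all but one of the n basis vectors.

    from itertools import combinations

    def pseudovector_basis(n):
        # basis (n-1)-blades of an n-dimensional geometric algebra, as index labels
        return ['e' + ''.join(map(str, idx)) for idx in combinations(range(1, n + 1), n - 1)]

    print(pseudovector_basis(4))        # ['e123', 'e124', 'e134', 'e234']
    print(len(pseudovector_basis(4)))   # 4: n pseudovectors, one per omitted basis vector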

Transformations in three dimensions

The transformation properties of the pseudovector in three dimensions have been compared to those of the vector cross product by Baylis.[8] He says: "The terms axial vector and pseudovector are often treated as synonymous, but it is quite useful to be able to distinguish a bivector ⋯ from its dual ⋯." To paraphrase Baylis: given two polar vectors (that is, true vectors) a and b in three dimensions, the cross product composed from a and b is the vector normal to their plane given by c = a × b. Given a set of right-handed orthonormal basis vectors eℓ, the cross product is expressed in terms of its components as:

c^k = Σℓm ε_kℓm a^ℓ b^m (summing over ℓ and m, with ε the Levi-Civita symbol),

where superscripts label vector components. On the other hand, the plane of the two vectors is represented by the exterior product or wedge product, denoted by a ∧ b. In this context of geometric algebra, this bivector is called a pseudovector, and is the dual of the cross product.[9] The dual of e1 is introduced as e23 ≡ e2e3 = e2 ∧ e3, and so forth. That is, the dual of e1 is the subspace perpendicular to e1, namely the subspace spanned by e2 and e3. With this understanding,[10]

For details see Hodge dual. Comparison shows that the cross product and wedge product are related by:

a ∧ b = i (a × b),

where i = e1 ∧ e2 ∧ e3 is called the unit pseudoscalar.[11] [12] It has the property:[13]

i² = −1.

Using the above relations, it is seen that if the vectors a and b are inverted by changing the signs of their components while leaving the basis vectors fixed, both the pseudovector and the cross product are invariant. On the other hand, if the components are fixed and the basis vectors eℓ are inverted, then the pseudovector is invariant, but the cross product changes sign. This behavior of cross products is consistent with their definition as vector-like elements that change sign under transformation from a right-handed to a left-handed coordinate system, unlike polar vectors.
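The first of these behaviours is straightforward to verify numerically (a minimal sketch with arbitrary vectors): inverting the components of both vectors leaves their cross product unchanged, as the two sign flips cancel.

    import numpy as np

    a = np.array([1.0, -2.0, 0.5])
    b = np.array([3.0, 1.0, -1.0])

    # Invert the components while leaving the basis fixed: a -> -a, b -> -b.
    print(np.allclose(np.cross(-a, -b), np.cross(a, b)))   # True: the cross product is invariant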

Note on usage

As an aside, it may be noted that not all authors in the field of geometric algebra use the term pseudovector, and some authors follow the terminology that does not distinguish between the pseudovector and the cross product.[14] However, because the cross product does not generalize beyond three dimensions,[15] the notion of a pseudovector based upon the cross product also cannot be extended to higher dimensions. The pseudovector as the (n − 1)-blade of an n-dimensional space is not so restricted.

Another important note is that pseudovectors, despite their name, are "vectors" in the common mathematical sense, i.e. elements of a vector space. The idea that "a pseudovector is different from a vector" is only true with a different and more specific definition of the term "vector", as discussed above.

Notes

[1] Stephen A. Fulling, Michael N. Sinyakov, Sergei V. Tischchenko (2000). Linearity and the mathematics of several variables. World Scientific. p. 343. ISBN 9810241968.
[2] Aleksandr Ivanovich Borisenko, Ivan Evgenʹevich Tarapov (1979). Vector and tensor analysis with applications (Reprint of 1968 Prentice-Hall ed.). Courier Dover. p. 125. ISBN 0486638332.
[3] RP Feynman: §52-5 Polar and axial vectors, from Chapter 52: Symmetry and physical laws, in: Feynman Lectures in Physics, Vol. 1.
[4] See Feynman Lectures.
[5] William M Pezzaglia Jr. (1992). "Clifford algebra derivation of the characteristic hypersurfaces of Maxwell's equations". In Julian Ławrynowicz. Deformations of mathematical structures II. Springer. p. 131 ff. ISBN 0792325761.
[6] In four dimensions, such as a Dirac algebra, the pseudovectors are trivectors. Venzo De Sabbata, Bidyut Kumar Datta (2007). Geometric algebra and applications to physics. CRC Press. p. 64. ISBN 1584887729.
[7] William E Baylis (2004). "§4.2.3 Higher-grade multivectors in Cℓn: Duals". Lectures on Clifford (geometric) algebras and applications. Birkhäuser. p. 100. ISBN 0817632573.
[8] William E Baylis (1994). Theoretical methods in the physical sciences: an introduction to problem solving using Maple V. Birkhäuser. p. 234, see footnote. ISBN 081763715X.
[9] R Wareham, J Cameron & J Lasenby (2005). "Application of conformal geometric algebra in computer vision and graphics". Computer algebra and geometric algebra with applications. Springer. p. 330. ISBN 3540262962. In three dimensions, a dual may be right-handed or left-handed; see Leo Dorst, Daniel Fontijne, Stephen Mann (2007). "Figure 3.5: Duality of vectors and bivectors in 3-D". Geometric Algebra for Computer Science: An Object-Oriented Approach to Geometry (2nd ed.). Morgan Kaufmann. p. 82. ISBN 0123749425.
[10] Christian Perwass (2009). "§1.5.2 General vectors". Geometric Algebra with Applications in Engineering. Springer. p. 17. ISBN 354089067X.
[11] David Hestenes (1999). "The vector cross product". New foundations for classical mechanics: Fundamental Theories of Physics (2nd ed.). Springer. p. 60. ISBN 0792353021.
[12] Venzo De Sabbata, Bidyut Kumar Datta (2007). "The pseudoscalar and imaginary unit". Geometric algebra and applications to physics. CRC Press. p. 53 ff. ISBN 1584887729.
[13] Eduardo Bayro Corrochano, Garret Sobczyk (2001). Geometric algebra with applications in science and engineering. Springer. p. 126. ISBN 0817641998.
[14] For example, Bernard Jancewicz (1988). Multivectors and Clifford algebra in electrodynamics. World Scientific. p. 11. ISBN 9971502909.
[15] Stephen A. Fulling, Michael N. Sinyakov, Sergei V. Tischchenko (2000). Linearity and the mathematics of several variables. World Scientific. p. 340. ISBN 9810241968.

General references

• Richard Feynman, Feynman Lectures on Physics, Vol. 1, Chap. 52. See §52-5: Polar and axial vectors, p. 52-6.
• George B. Arfken and Hans J. Weber, Mathematical Methods for Physicists (Harcourt: San Diego, 2001). ISBN 0-12-059815-9.
• John David Jackson, Classical Electrodynamics (Wiley: New York, 1999). ISBN 0-471-30932-X.
• Susan M. Lea, Mathematics for Physicists (Thompson: Belmont, 2004). ISBN 0-534-37997-4.
• Chris Doran and Anthony Lasenby, Geometric Algebra for Physicists (Cambridge University Press: Cambridge, 2007). ISBN 978-0-521-71595-9.
• William E Baylis (2004). "Chapter 4: Applications of Clifford algebras in physics". In Rafał Abłamowicz, Garret Sobczyk. Lectures on Clifford (geometric) algebras and applications. Birkhäuser. p. 100 ff. ISBN 0817632573. The dual of the wedge product a ∧ b is the cross product a × b.


Bivector

In mathematics, a bivector or 2-vector is a quantity in geometric algebra or exterior algebra that generalises the idea of a vector. If a scalar is considered a zero dimensional quantity, and a vector is a one dimensional quantity, then a bivector can be thought of as two dimensional. Bivectors have applications in many areas of mathematics and physics. They are related to complex numbers in two dimensions and to both pseudovectors and quaternions in three dimensions. They can be used to generate rotations in any dimension, and are a useful tool for classifying such rotations. They are also used in physics, tying together a number of otherwise unrelated quantities.

Bivectors are generated by the exterior product on vectors: given two vectors a and b, their exterior product a ∧ b is a bivector. But not all bivectors can be generated this way, and in higher dimensions a sum of exterior products is often needed. More precisely, a bivector that requires only a single exterior product is simple; in two and three dimensions all bivectors are simple, but in higher dimensions this is not generally the case.[1] The exterior product is antisymmetric, so b ∧ a negates the bivector, producing a rotation with the opposite sense, and a ∧ a is the zero bivector.

Parallel plane segments with the same orientation and area corresponding to the same bivector a ∧ b.[2]

Geometrically, a simple bivector can be interpreted as an oriented plane segment, much as vectors can be thought of as directed line segments.[3] Specifically for the bivector a ∧ b, its magnitude is the area of the parallelogram with edges a and b, its attitude that of any plane specified by a and b, and its orientation the sense of the rotation that would align a with b. It does not have a definite location or position.[3] [4]

History

The bivector was first defined in 1844 by German mathematician Hermann Grassmann in exterior algebra, as the result of the exterior product. Around the same time, in 1843, in Ireland William Rowan Hamilton discovered quaternions. It was not until English mathematician William Kingdon Clifford in 1888 added the geometric product to Grassmann's algebra, incorporating the ideas of both Hamilton and Grassmann, and founded Clifford algebra, that the bivector as it is known today was fully understood.

Around this time Josiah Willard Gibbs and Oliver Heaviside developed vector calculus, which included separate cross product and dot products, derived from quaternion multiplication.[5] [6] [7] The success of vector calculus, and of the book Vector Analysis by Gibbs and Wilson, meant the insights of Hamilton and Clifford were overlooked for a long time, as much of 20th century mathematics and physics was formulated in vector terms. Gibbs instead described bivectors as vectors, and used "bivector" to describe an unrelated quantity, a use that has sometimes been copied.[8] [9] [10]

Today the bivector is largely studied as a topic in geometric algebra, a more restricted Clifford algebra over real or complex vector spaces with nondegenerate quadratic form. Its resurgence was led by David Hestenes who, along with others, discovered a range of new applications in physics for geometric algebra.[11]


Formal definition

For this article the bivector will be considered only in real geometric algebras. This in practice is not much of a restriction, as all useful applications are drawn from such algebras. Also, unless otherwise stated, all examples have a Euclidean metric and so a quadratic form with signature 1.

Geometric algebra and the geometric product

The bivector arises from the definition of the geometric product over a vector space. For vectors a, b and c, the geometric product on vectors is defined as follows:

Associativity:
(ab)c = a(bc)

Left and right distributivity:
a(b + c) = ab + ac
(b + c)a = ba + ca

Contraction:
a² = Q(a) = ϵa|a|²

where Q is the quadratic form, |a| is the magnitude of a and ϵa is the metric signature of the vector. For a space with Euclidean metric ϵa is 1 so can be omitted, and the contraction condition becomes:

a² = |a|²

The interior product

From associativity, a(ab) = a²b, a scalar times b. So ab cannot be a scalar. But

½(ab + ba)

is a sum of scalars and so a scalar. From the law of cosines on the triangle formed by the vectors its value is |a||b| cos θ, where θ is the angle between the vectors. It is therefore identical to the interior product between two vectors, and is written the same way,

a · b = ½(ab + ba) = |a||b| cos θ.

It is symmetric, scalar valued, and can be used to determine the angle between two vectors: in particular, if a and b are orthogonal the product is zero.

The exterior product

In the same way another quantity can be written down:

a ∧ b = ½(ab − ba).

This is called the exterior product, a ∧ b. It is antisymmetric in a and b, that is

a ∧ b = −b ∧ a.

By addition:

ab = a · b + a ∧ b.

That is, the geometric product is the sum of the symmetric interior product and the antisymmetric exterior product.

To calculate a ∧ b consider its square,

(a ∧ b)² = ¼(ab − ba)².

Expanding using the geometric product and simplifying gives

(a ∧ b)² = (a · b)² − a²b² = |a|²|b|²cos²θ − |a|²|b|²,

so using the Pythagorean trigonometric identity:

(a ∧ b)² = −|a|²|b|²sin²θ.

With a negative square it cannot be a scalar or vector quantity, so it is a new sort of object, a bivector. It has magnitude |a||b|sin θ, where θ is the angle between the vectors, and so is zero for parallel vectors.

To distinguish them from vectors, bivectors are written here with bold capitals, for example:

A = a ∧ b,

although other conventions are used; in particular, as vectors and bivectors are both elements of the geometric algebra, some authors do not distinguish them typographically.

Properties

The space Λ2ℝn

The algebra generated by the geometric product is the geometric algebra over the vector space. For a Euclidean vector space it is written 𝒢n or Cℓn(ℝ), where n is the dimension of the vector space ℝn. Cℓn is both a vector space and an algebra, generated by all the products between vectors in ℝn, so it contains all vectors and bivectors. More precisely, as a vector space it contains the vectors and bivectors as subspaces. The space of all bivectors is written Λ2ℝn.[12] Unlike ℝn it is not a Euclidean subspace; nor is it a subalgebra.

The even subalgebra

The subalgebra generated by the bivectors is the even subalgebra of the geometric algebra, written Cℓ+n. This algebra results from considering all products of scalars and bivectors generated by the geometric product. It has dimension 2^(n−1), and contains Λ2ℝn as a linear subspace with dimension n(n − 1)/2 (a triangular number). In two and three dimensions the even subalgebra contains only scalars and bivectors, and each is of particular interest. In two dimensions the even subalgebra is isomorphic to the complex numbers, ℂ, while in three it is isomorphic to the quaternions, ℍ. More generally the even subalgebra can be used to generate rotations in any dimension, and can be generated by bivectors in the algebra.

Magnitude

As noted in the previous section the magnitude of a simple bivector, that is one that is the exterior product of two vectors a and b, is |a||b| sin θ, where θ is the angle between the vectors. It is written |B|, where B is the bivector.

For general bivectors the magnitude can be calculated by taking the norm of the bivector considered as a vector in the space Λ2ℝn. If the magnitude is zero then all the bivector's components are zero, and the bivector is the zero bivector, which as an element of the geometric algebra equals the scalar zero.


Unit bivectors

A unit bivector is one with unit magnitude. It can be derived from any non-zero bivector by dividing the bivector by its magnitude, that is

B̂ = B / |B|.

Of particular interest are the unit bivectors formed from the products of the standard basis. If ei and ej are distinct basis vectors then the product ei ∧ ej is a bivector. As the vectors are orthogonal this is just eiej, written eij, with unit magnitude as the vectors are unit vectors. The set of all such bivectors forms a basis for Λ2ℝn. For instance in four dimensions the basis for Λ2ℝ4 is (e1e2, e1e3, e1e4, e2e3, e2e4, e3e4) or (e12, e13, e14, e23, e24, e34).[13]
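Enumerating these basis bivectors is mechanical; a small illustrative snippet lists them for n = 4 and confirms the dimension count n(n − 1)/2:

    from itertools import combinations

    def bivector_basis(n):
        # unit bivectors e_i e_j (i < j) spanning the space of bivectors in n dimensions
        return ['e%d%d' % ij for ij in combinations(range(1, n + 1), 2)]

    print(bivector_basis(4))        # ['e12', 'e13', 'e14', 'e23', 'e24', 'e34']
    print(len(bivector_basis(4)))   # 6 == 4*3/2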

Simple bivectors

The exterior product of two vectors is a bivector, but not all bivectors are exterior products of two vectors. For example in four dimensions the bivector

B = e12 + e34

cannot be written as the exterior product of two vectors. A bivector that can be written as the exterior product of two vectors is simple. In two and three dimensions all bivectors are simple, but not in four or more dimensions; in four dimensions every bivector is the sum of at most two exterior products. A bivector has a real square if and only if it is simple, and only simple bivectors can be represented geometrically by an oriented plane area.[1]
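One computable criterion (developed in the product subsection below) is that a bivector is simple exactly when its self-wedge B ∧ B vanishes. A toy sketch, holding bivector coefficients in a dict keyed by ordered index pairs (an illustration, not a full geometric algebra implementation):

    def perm_sign(seq):
        # sign of the permutation sorting a sequence of distinct integers
        sign, s = 1, list(seq)
        for i in range(len(s)):
            for j in range(i + 1, len(s)):
                if s[i] > s[j]:
                    sign = -sign
        return sign

    def wedge(B1, B2):
        # exterior product of two bivectors given as {(i, j): coeff} with i < j
        out = {}
        for p, cp in B1.items():
            for q, cq in B2.items():
                idx = p + q
                if len(set(idx)) < 4:          # repeated index: the wedge term vanishes
                    continue
                key = tuple(sorted(idx))
                out[key] = out.get(key, 0) + perm_sign(idx) * cp * cq
        return {k: v for k, v in out.items() if v != 0}

    def is_simple(B):
        return not wedge(B, B)                 # simple iff B ^ B = 0

    B = {(1, 2): 1, (3, 4): 1}                 # e12 + e34
    print(wedge(B, B))                         # {(1, 2, 3, 4): 2}: 2 e1234, so not simple
    print(is_simple({(1, 2): 1}))              # True: e12 alone is simple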

Product of two bivectors

The geometric product of two bivectors, A and B, is

AB = A · B + A × B + A ∧ B.

The quantity A · B is the scalar valued interior product, while A ∧ B is the grade 4 exterior product that arises in four or more dimensions. The quantity A × B is the bivector valued commutator product, given by

A × B = ½(AB − BA).[14]

The space of bivectors Λ2ℝn is a Lie algebra over ℝ, with the commutator product as the Lie bracket. The full geometric product of bivectors generates the even subalgebra.

Of particular interest is the product of a bivector with itself. As the commutator product is antisymmetric the product simplifies to

AA = A · A + A ∧ A.

If the bivector is simple the last term is zero and the product is the scalar valued A · A, which can be used as a check for simplicity. In particular the exterior product of bivectors only exists in four or more dimensions, so all bivectors in two and three dimensions are simple.[1]


Two dimensions

When working with coordinates in geometric algebra it is usual to write the basis vectors as (e1, e2, ...), a convention that will be used here.

A vector in real two dimensional space ℝ2 can be written a = a1e1 + a2e2, where a1 and a2 are real numbers and e1 and e2 are orthonormal basis vectors. The geometric product of two such vectors is

ab = a1b1 + a2b2 + (a1b2 − a2b1)e1e2.

This can be split into the symmetric, scalar valued, interior product and an antisymmetric, bivector valued, exterior product:

a · b = a1b1 + a2b2
a ∧ b = (a1b2 − a2b1)e12

All bivectors in two dimensions are of this form, that is multiples of the bivector e1e2, written e12 to emphasise it is a bivector rather than a vector. The magnitude of e12 is 1, with

e12² = −1,

so it is called the unit bivector. The term unit bivector can be used in other dimensions but it is only uniquely defined in two dimensions, where all bivectors are multiples of e12. As the highest grade element of the algebra e12 is also the pseudoscalar, which is given the symbol i.

Complex numbers

With the properties of negative square and unit magnitude the unit bivector can be identified with the imaginary unit from complex numbers. The bivectors and scalars together form the even subalgebra of the geometric algebra, which is isomorphic to the complex numbers ℂ. The even subalgebra has basis (1, e12); the whole algebra has basis (1, e1, e2, e12).

The complex numbers are usually identified with the coordinate axes and two dimensional vectors, which would mean associating them with the vector elements of the geometric algebra. There is no contradiction in this, as to get from a general vector to a complex number an axis needs to be identified as the real axis, e1 say. This multiplies by all vectors to generate the elements of the even subalgebra.

All the properties of complex numbers can be derived from bivectors, but two are of particular interest. First, as with complex numbers, products of bivectors and so the even subalgebra are commutative. This is only true in two dimensions, so properties of the bivector in two dimensions that depend on commutativity do not usually generalise to higher dimensions.

Second, a general bivector can be written

θe12,

where θ is a real number. Putting this into the Taylor series for the exponential map and using the property e12² = −1 results in a bivector version of Euler's formula,

exp(θe12) = cos θ + e12 sin θ,

which when multiplied by any vector rotates it through an angle θ about the origin. The product of a vector with a bivector in two dimensions is anticommutative, so the following products all generate the same rotation:

v′ = v exp(θe12) = exp(−θe12) v = exp(−θe12/2) v exp(θe12/2).


Of these the last product is the one that generalises into higher dimensions. The quantity needed is called a rotor and is given the symbol R, so in two dimensions a rotor that rotates through angle θ can be written

R = exp(−θe12/2) = cos(θ/2) − e12 sin(θ/2),

and the rotation it generates is[15]

v′ = RvR⁻¹.
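A minimal executable sketch of this rotation (the component order (1, e1, e2, e12) and the hand-written multiplication table are illustrative choices, not a library API):

    import math

    def gp(A, B):
        # geometric product in Cl(2,0); multivectors stored as (scalar, e1, e2, e12)
        a0, a1, a2, a3 = A
        b0, b1, b2, b3 = B
        return (a0*b0 + a1*b1 + a2*b2 - a3*b3,
                a0*b1 + a1*b0 - a2*b3 + a3*b2,
                a0*b2 + a2*b0 + a1*b3 - a3*b1,
                a0*b3 + a3*b0 + a1*b2 - a2*b1)

    theta = math.pi / 2
    # rotor R = exp(-theta*e12/2) = cos(theta/2) - sin(theta/2) e12
    R     = (math.cos(theta/2), 0.0, 0.0, -math.sin(theta/2))
    R_inv = (math.cos(theta/2), 0.0, 0.0,  math.sin(theta/2))

    v = (0.0, 1.0, 0.0, 0.0)          # the vector e1
    print(gp(gp(R, v), R_inv))        # ~ (0, 0, 1, 0): e1 rotated to e2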

Three dimensions

In three dimensions the geometric product of two vectors is

ab = a1b1 + a2b2 + a3b3 + (a2b3 − a3b2)e2e3 + (a3b1 − a1b3)e3e1 + (a1b2 − a2b1)e1e2.

This can be split into the symmetric, scalar valued, interior product and the antisymmetric, bivector valued, exterior product:

a · b = a1b1 + a2b2 + a3b3
a ∧ b = (a2b3 − a3b2)e23 + (a3b1 − a1b3)e31 + (a1b2 − a2b1)e12

In three dimensions all bivectors are simple and so the result of an exterior product. The unit bivectors e23, e31 and e12 form a basis for the space of bivectors Λ2ℝ3, which is itself a three dimensional linear space. So if a general bivector is

A = A23e23 + A31e31 + A12e12,

bivectors can be added like vectors,

A + B = (A23 + B23)e23 + (A31 + B31)e31 + (A12 + B12)e12,

while when multiplied they produce the following

AB = −(A23B23 + A31B31 + A12B12) + (A12B31 − A31B12)e23 + (A23B12 − A12B23)e31 + (A31B23 − A23B31)e12,

which can be split into symmetric scalar and antisymmetric bivector parts as follows

A · B = −(A23B23 + A31B31 + A12B12)
A × B = (A12B31 − A31B12)e23 + (A23B12 − A12B23)e31 + (A31B23 − A23B31)e12

The exterior product of two bivectors in three dimensions is zero.

A bivector B can be written as the product of its magnitude and a unit bivector, so writing β for |B| and using the Taylor series for the exponential map it can be shown that

exp B = exp(βB̂) = cos β + B̂ sin β.

This is another version of Euler's formula, but with a general bivector in three dimensions. Unlike in two dimensions bivectors are not commutative, so properties that depend on commutativity do not apply in three dimensions. For example, in general exp(A + B) ≠ exp(A)exp(B) in three (or more) dimensions.

The full geometric algebra in three dimensions, Cℓ3(ℝ), has basis (1, e1, e2, e3, e23, e31, e12, e123). The element e123 is a trivector and the pseudoscalar for the geometry. Bivectors in three dimensions are sometimes identified with pseudovectors,[16] to which they are related, as discussed below.


Quaternions

Bivectors are not closed under the geometric product, but the even subalgebra is. In three dimensions it consists of all scalar and bivector elements of the geometric algebra, so a general element can be written, for example, a + A, where a is the scalar part and A is the bivector part. It is written Cℓ+3 and has basis (1, e23, e31, e12). The product of two general elements of the even subalgebra is

(a + A)(b + B) = ab + aB + bA + A · B + A × B.

The even subalgebra, that is the algebra consisting of scalars and bivectors, is isomorphic to the quaternions, ℍ. This can be seen by comparing the basis to the quaternion basis, or from the above product, which is identical to the quaternion product except for a change of sign which relates to the negative products in the bivector interior product A · B. Other quaternion properties can be similarly related to or derived from geometric algebra.

This suggests that the usual split of a quaternion into scalar and vector parts would be better represented as a split into scalar and bivector parts; if this is done there is no special quaternion product, there is just the normal geometric product on the elements. It also relates quaternions in three dimensions to complex numbers in two, as each is isomorphic to the even subalgebra for the dimension, a relationship that generalises to higher dimensions.

Rotation vector

The rotation vector, from the axis angle representation of rotations, is a compact way of representing rotations in three dimensions. In its most compact form it consists of a vector, the product of a unit vector that is the axis of rotation and the angle of rotation, so the magnitude of the vector is the rotation angle.

In geometric algebra this vector is classified as a bivector. This can be seen in its relation to quaternions. If the axis is ω and the angle of rotation is θ then the rotation vector is ωθ. The quaternion associated with the rotation is

q = cos(θ/2) + ω sin(θ/2),

but this is just the exponential of half of the bivector Ωθ, that is

q = exp(Ωθ/2),

where Ω is the unit bivector dual to the axis ω. So rotation vectors are bivectors, just as quaternions are elements of the geometric algebra, and they are related by the exponential map in that algebra.

Rotors

The bivector Ωθ generates a rotation through the exponential map. The even elements generated rotate a general vector in three dimensions in the same way as quaternions:

v′ = exp(−Ωθ/2) v exp(Ωθ/2).

As in two dimensions, the quantity exp(−Ωθ/2) is called a rotor and written R. The quantity exp(Ωθ/2) is then R⁻¹, and they generate rotations as follows:

v′ = RvR⁻¹.

This is identical to two dimensions, except here rotors are four-dimensional objects isomorphic to the quaternions. This can be generalised to all dimensions, with rotors, elements of the even subalgebra with unit magnitude, being generated by the exponential map from bivectors. They form a double cover over the rotation group, so the rotors R and −R represent the same rotation.
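Because the even subalgebra is isomorphic to ℍ, the rotor sandwich can be sketched with plain quaternion arithmetic. The identification of the bivector basis with i, j, k involves sign conventions that differ between authors, so the mapping below is one consistent choice rather than the canonical one:

    import numpy as np

    def qmul(p, q):
        # Hamilton product; quaternions stored as (w, x, y, z)
        w1, x1, y1, z1 = p
        w2, x2, y2, z2 = q
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 + y1*w2 + z1*x2 - x1*z2,
            w1*z2 + z1*w2 + x1*y2 - y1*x2])

    def rotate(v, axis, theta):
        # rotate v about a unit axis by theta via the rotor sandwich q v q*
        axis = np.asarray(axis) / np.linalg.norm(axis)
        q  = np.concatenate(([np.cos(theta/2)], np.sin(theta/2) * axis))
        qc = q * np.array([1, -1, -1, -1])     # conjugate = inverse for a unit quaternion
        return qmul(qmul(q, np.concatenate(([0.0], v))), qc)[1:]

    print(rotate(np.array([1.0, 0.0, 0.0]), [0, 0, 1], np.pi/2))   # ~ [0, 1, 0]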


Matrices

Bivectors are isomorphic to skew-symmetric matrices; the general bivector B23e23 + B31e31 + B12e12 maps to the matrix

MB = (   0   −B12   B31 )
     (  B12    0   −B23 )
     ( −B31   B23    0  )

This multiplied by vectors on both sides gives the same vector as the product of a vector and bivector; an example is the angular velocity tensor.

Skew-symmetric matrices generate orthogonal matrices with determinant 1 through the exponential map. In particular, the exponent of a bivector associated with a rotation is a rotation matrix, that is the rotation matrix MR given by the above skew-symmetric matrix is

MR = exp(MB).

The rotation described by MR is the same as that described by the rotor R given by

R = exp(B/2),

and the matrix MR can be also calculated directly from rotor R:

Bivectors are related to the eigenvalues of a rotation matrix. Given a rotation matrix M the eigenvalues can be calculated by solving the characteristic equation for that matrix, 0 = det(M − λI). By the fundamental theorem of algebra this has three roots, but only one real root, as there is only one real eigenvector, the axis of rotation. The other roots must be a complex conjugate pair. They have unit magnitude, so purely imaginary logarithms, equal to the magnitude of the bivector associated with the rotation, which is also the angle of rotation. The eigenvectors associated with the complex eigenvalues are in the plane of the bivector, so the exterior product of two non-parallel eigenvectors results in the bivector, or at least a multiple of it.
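A numpy/scipy sketch of this correspondence; the layout of the skew-symmetric matrix follows the usual angular-velocity convention, which is an assumption consistent with the matrix shown above:

    import numpy as np
    from scipy.linalg import expm

    def skew(B23, B31, B12):
        # skew-symmetric matrix associated with the bivector B23 e23 + B31 e31 + B12 e12
        return np.array([[0.0,  -B12,  B31],
                         [B12,   0.0, -B23],
                         [-B31,  B23,  0.0]])

    M = skew(0.0, 0.0, np.pi/2)        # the bivector (pi/2) e12: a rotation in the xy-plane
    R = expm(M)                        # exponential map gives the rotation matrix

    print(np.round(R @ np.array([1.0, 0.0, 0.0]), 6))                       # ~ [0, 1, 0]
    print(np.allclose(R.T @ R, np.eye(3)), np.round(np.linalg.det(R), 6))   # orthogonal, det 1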

Axial vectors

The rotation vector is an example of an axial vector. Axial vectors or pseudovectors are vectors that undergo a sign change compared to normal or polar vectors under inversion, that is when reflected or otherwise inverted. Examples include quantities like torque, angular momentum and vector magnetic fields. Such quantities can be described as bivectors in geometric algebra; that is, quantities that might use axial vectors in vector algebra are better represented by bivectors in geometric algebra.[17] More precisely, the Hodge dual gives the isomorphism between axial vectors and bivectors, so each axial vector is associated with a bivector and vice-versa; that is

A = ∗a,  a = ∗A,

where ∗ indicates the Hodge dual. Alternately, using the unit pseudoscalar in Cℓ3(ℝ), i = e1e2e3, gives

A = ia,  a = −iA.

This is easier to use as the product is just the geometric product. But it is antisymmetric because (as in two dimensions) the unit pseudoscalar i squares to −1, so a negative is needed in one of the products.

This relationship extends to operations like the vector valued cross product and bivector valued exterior product, as when written as determinants they are calculated in the same way:

a × b = |e1 e2 e3; a1 a2 a3; b1 b2 b3|,  a ∧ b = |e23 e31 e12; a1 a2 a3; b1 b2 b3|,


so are related by the Hodge dual:

a ∧ b = ∗(a × b),  a × b = ∗(a ∧ b).

Bivectors have a number of advantages over axial vectors. They better disambiguate axial and polar vectors, that is the quantities represented by them, so it is clearer which operations are allowed and what their results are. For example the inner product of a polar vector and an axial vector resulting from the cross product in the triple product should result in a pseudoscalar, a result which is more obvious if the calculation is framed as the exterior product of a vector and bivector. They generalise to other dimensions; in particular bivectors can be used to describe quantities like torque and angular momentum in two as well as three dimensions. Also, they closely match geometric intuition in a number of ways, as seen in the next section.[18]

Geometric interpretation

Parallel plane segments with the same orientation and area corresponding to the same bivector a ∧ b.[2]

As suggested by their name and that of the algebra, one of the attractions of bivectors is that they have a natural geometric interpretation. This can be described in any dimension, but is best done in three, where parallels can be drawn with more familiar objects, before being applied to higher dimensions. In two dimensions the geometric interpretation is trivial, as the space is two dimensional so has only one plane, and all bivectors are associated with it, differing only by a scale factor.

All bivectors can be interpreted as planes, or more precisely as directed plane segments. In three dimensions there are three properties of a bivector that can be interpreted geometrically:

• The arrangement of the plane in space, precisely the attitude of the plane (or alternately the rotation, geometric orientation or gradient of the plane), is associated with the ratio of the bivector components. In particular the three basis bivectors, e23, e31 and e12, or scalar multiples of them, are associated with the yz-plane, xz-plane and xy-plane respectively.

• The magnitude of the bivector is associated with the area of the plane segment. The area does not have a particular shape so any shape can be used. It can even be represented in other ways, such as by an angular measure. But if the vectors are interpreted as lengths the bivector is usually interpreted as an area with the same units, as follows.

• Like the direction of a vector, a plane associated with a bivector has a direction, a circulation or a sense of rotation in the plane, which takes two values seen as clockwise and counterclockwise when viewed from a viewpoint not in the plane. This is associated with a change of sign in the bivector, that is if the direction is reversed the bivector is negated. Alternately if two bivectors have the same attitude and magnitude but opposite directions then one is the negative of the other.


The cross product a × b is orthogonal to the bivector a ∧ b.

In three dimensions all bivectors can be generated by the exterior product of two vectors. If the bivector B = a ∧ b then the magnitude of B is

|B| = |a||b| sin θ,

where θ is the angle between the vectors. This is the area of the parallelogram with edges a and b, as shown in the diagram. One interpretation is that the area is swept out by b as it moves along a. The exterior product is antisymmetric, so reversing the order of a and b to make a move along b results in a bivector with the opposite direction, that is the negative of the first. The plane of bivector a ∧ b contains both a and b so they are both parallel to the plane.

Relationship between force F, torque τ, linear momentum p, and angular momentum L.

This relates the cross product to the exterior product. It can also be used to represent physical quantities, like torque and angular momentum. In vector algebra they are usually represented by vectors, perpendicular to the plane of the force, linear momentum or displacement that they are calculated from. But if a bivector is used instead, the plane is the plane of the bivector, so it is a more natural way to represent the quantities and the way they act. It also, unlike the vector representation, generalises into other dimensions.

The product of two bivectors has a geometric interpretation. For non-zero bivectors A and B the product can be split into symmetric and antisymmetric parts as follows:

A · B = ½(AB + BA),  A × B = ½(AB − BA).

Like vectors these have magnitudes |A · B| = |A||B| cos θ and |A × B| = |A||B| sin θ, where θ is the angle between the planes. In three dimensions it is the same as the angle between the normal vectors dual to the planes, and it generalises to some extent in higher dimensions.


Two bivectors, two of the non-parallel sides of a prism, being added to give a third bivector.[12]

Bivectors can be added together as areas. Given two non-zero bivectors B and C in three dimensions it is always possible to find a vector that is contained in both, a say, so the bivectors can be written as exterior products involving a:

B = a ∧ b,  C = a ∧ c.

This can be interpreted geometrically as seen in the diagram: the two areas sum to give a third, with the three areas forming faces of a prism with a, b, c and b + c as edges. This corresponds to the two ways of calculating the area using the distributivity of the exterior product:

a ∧ (b + c) = a ∧ b + a ∧ c = B + C.

This only works in three dimensions, as it is the only dimension where a vector parallel to both bivectors must exist. In higher dimensions bivectors generally are not associated with a single plane, or if they are (simple bivectors) two bivectors may have no vector in common, and so sum to a non-simple bivector.

Four dimensions

In four dimensions the basis elements for the space Λ2ℝ4 of bivectors are (e12, e13, e14, e23, e24, e34), so a general bivector is of the form

B = B12e12 + B13e13 + B14e14 + B23e23 + B24e24 + B34e34.

Orthogonality

In four dimensions bivectors are orthogonal to bivectors. That is, the dual of a bivector is a bivector, and the space Λ2ℝ4 is dual to itself in Cℓ4(ℝ). Normal vectors are not unique; instead every plane is orthogonal to all the vectors in its dual space. This can be used to partition the bivectors into two 'halves', for example into two sets of three unit bivectors each. There are only four distinct ways to do this, and whenever it is done each bivector is in only one of the two halves, for example (e12, e13, e14) and (e23, e24, e34).


Simple bivectors in 4D

In four dimensions bivectors are generated by the exterior product of vectors in ℝ4, but with one important difference from ℝ3 and ℝ2. In four dimensions not all bivectors are simple. There are bivectors such as e12 + e34 that cannot be generated by the exterior product of two vectors. This also means they do not have a real, that is scalar, square. In this case

(e12 + e34)² = e12e12 + e12e34 + e34e12 + e34e34 = −2 + 2e1234.

The element e1234 is the pseudoscalar in Cℓ4, distinct from the scalar, so the square is non-scalar.

All bivectors in four dimensions can be generated using at most two exterior products and four vectors. The above bivector can be written as

e12 + e34 = e1 ∧ e2 + e3 ∧ e4.

Alternately every bivector can be written as the sum of two simple bivectors. It is useful to choose two orthogonal bivectors for this, and this is always possible to do. Moreover, for a general bivector the choice of simple bivectors is unique, that is, there is only one way to decompose into orthogonal bivectors; this is true also for simple bivectors, except that one of the orthogonal parts is zero. The exception is when the two orthogonal bivectors have equal magnitudes (as in the above example): in this case the decomposition is not unique.[1]

Rotations in ℝ4

As in three dimensions, bivectors in four dimensions generate rotations through the exponential map, and all rotations can be generated this way. As in three dimensions, if B is a bivector then the rotor R is exp(B/2) and rotations are generated in the same way:

v′ = RvR⁻¹.

A 3D projection of a tesseract performing an isoclinic rotation.

The rotations generated are more complex though. They can be categorised as follows:

simple rotations are those that fix a plane in 4D, and rotate by an angle "about" this plane.

double rotations have only one fixed point, the origin, and rotate through two angles about two orthogonal planes. In general the angles are different and the planes are uniquely specified.

isoclinic rotations are double rotations where the angles of rotation are equal. In this case the planes about which the rotation is taking place are not unique.

These are generated by bivectors in a straightforward way. Simple rotations are generated by simple bivectors, with the fixed plane the dual or orthogonal to the plane of the bivector. The rotation can be said to take place about that plane, in the plane of the bivector. All other bivectors generate double rotations, with the two angles of the rotation equalling the magnitudes of the two simple bivectors the non-simple bivector is composed of. Isoclinic rotations arise when these magnitudes are equal, in which case the decomposition into two simple bivectors is not unique.[19]

Bivectors in general do not commute, but one exception is orthogonal bivectors and exponents of them. So if the bivector B = B1 + B2, where B1 and B2 are orthogonal simple bivectors, is used to generate a rotation it decomposes into two simple rotations that commute as follows:

exp(B/2) = exp((B1 + B2)/2) = exp(B1/2) exp(B2/2) = exp(B2/2) exp(B1/2).

It is always possible to do this as all bivectors can be expressed as sums of orthogonal bivectors.
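In the skew-symmetric matrix representation discussed earlier this commuting decomposition is easy to verify (the two plane generators below are illustrative choices):

    import numpy as np
    from scipy.linalg import expm

    def plane_rotation_generator(i, j, theta, n=4):
        # skew generator of a rotation by theta in the (e_i, e_j) coordinate plane
        M = np.zeros((n, n))
        M[i, j], M[j, i] = -theta, theta
        return M

    B1 = plane_rotation_generator(0, 1, 0.7)   # rotation in the e1-e2 plane
    B2 = plane_rotation_generator(2, 3, 0.3)   # rotation in the orthogonal e3-e4 plane

    # The generators commute, so the double rotation factors into two simple rotations.
    print(np.allclose(B1 @ B2, B2 @ B1))                       # True
    print(np.allclose(expm(B1 + B2), expm(B1) @ expm(B2)))     # True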

Spacetime rotations

Spacetime is a mathematical model for our universe used in special relativity. It consists of three space dimensions and one time dimension combined into a single four dimensional space. It is naturally described using geometric algebra and bivectors, with the Euclidean metric replaced by a Minkowski metric. That is, the algebra is identical to that of Euclidean space, except the signature is changed, so

e1² = e2² = e3² = 1,  e4² = −1.

(Note the order and indices above are not universal – here e4 is the time-like dimension). The geometric algebra is Cℓ3,1(ℝ), and the subspace of bivectors is Λ2ℝ3,1. The bivectors are of two types. The bivectors e23, e31 and e12 have negative squares and correspond to the bivectors of the three dimensional subspace corresponding to Euclidean space, ℝ3. These bivectors generate normal rotations in ℝ3.

The bivectors e14, e24 and e34 have positive squares and as planes span a space dimension and the time dimension. These also generate rotations through the exponential map, but instead of trigonometric functions hyperbolic functions are needed, which generates a rotor as follows:

exp(Ωθ/2) = cosh(θ/2) + Ω sinh(θ/2),

for a unit bivector Ω of this type.

These are Lorentz transformations, expressed in a particularly compact way, using the same algebra as in ℝ3 and ℝ4. In general all spacetime rotations are generated from bivectors through the exponential map, that is, a general rotor generated by bivector A is of the form

R = exp(A/2).

The set of all rotations in spacetime form the Lorentz group, and from them most of the consequences of special relativity can be deduced. More generally this shows how transformations in Euclidean space and spacetime can all be described using the same algebra.
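A small numpy sketch of such a hyperbolic rotor in matrix form (the rapidity 0.5 and the choice of the e1 direction are arbitrary): the exponential of a generator mixing a space axis with the time axis has cosh/sinh entries and preserves the Minkowski metric.

    import numpy as np
    from scipy.linalg import expm

    eta = np.diag([1.0, 1.0, 1.0, -1.0])    # Minkowski metric, e4 time-like as in the text

    phi = 0.5                                # rapidity of a boost along e1
    K = np.zeros((4, 4))
    K[0, 3] = K[3, 0] = phi                 # generator mixing e1 with the time axis e4

    L = expm(K)                              # the finite Lorentz boost
    print(np.allclose(L.T @ eta @ L, eta))   # True: the metric is preserved
    print(np.round(L[0, 0], 6), np.round(np.cosh(phi), 6))   # matching cosh entries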

Maxwell's equations

(Note: in this section traditional 3-vectors are indicated by lines over the symbols, and spacetime vectors and bivectors by bold symbols, with the vectors J and A exceptionally in uppercase.)

Maxwell's equations are used in physics to describe the relationship between electric and magnetic fields. Normally given as four differential equations, they have a particularly compact form when the fields are expressed as a spacetime bivector from Λ2ℝ3,1. If the electric and magnetic fields in ℝ3 are E and B then the electromagnetic bivector is

where e4 is again the basis vector for the time-like dimension and c is the speed of light. The quantity Be123 is the bivector dual to B in three dimensions, as discussed above, while Ee4 as a product of orthogonal vectors is also bivector valued. As a whole it is the electromagnetic tensor expressed more compactly as a bivector, and is used as follows. First it is related to the 4-current J, a vector quantity given by

where j is current density and ρ is charge density. They are related by a differential operator ∂, which is


The operator ∇ is a differential operator in geometric algebra, acting on the space dimensions and given by ∇M = ∇·M + ∇∧M. When applied to vectors ∇·M is the divergence and ∇∧M is the curl, but with a bivector rather than vector result, that is dual in three dimensions to the curl. For a general quantity M they act as grade lowering and raising differential operators. In particular, if M is a scalar then this operator is just the gradient, and it can be thought of as a geometric algebraic del operator.

Together these can be used to give a particularly compact form for Maxwell's equations in a vacuum:

This, when decomposed according to geometric algebra, using geometric products which have both grade raising and grade lowering effects, is equivalent to Maxwell's four equations. This is the form in a vacuum, but the general form is only a little more complex. It is also related to the electromagnetic four-potential, a vector A given by

where A is the vector magnetic potential and V is the electric potential. It is related to the electromagnetic bivector as follows

using the same differential operator ∂.[20]

Higher dimensions

As has been suggested in earlier sections, much of geometric algebra generalises well into higher dimensions. The geometric algebra for the real space ℝn is Cℓn(ℝ), and the subspace of bivectors is Λ2ℝn.

The number of simple bivectors needed to form a general bivector rises with the dimension; for n odd it is (n − 1)/2, for n even it is n/2. So for four and five dimensions only two simple bivectors are needed, but three are required for six and seven dimensions. For example in six dimensions with standard basis (e1, e2, e3, e4, e5, e6) the bivector

e12 + e34 + e56

is the sum of three simple bivectors but no less. As in four dimensions it is always possible to find orthogonal simple bivectors for this sum.

Rotations in higher dimensions

As in three and four dimensions rotors are generated by the exponential map, so

R = exp(B/2)

is the rotor generated by bivector B. Simple rotations, that take place in a plane of rotation around a fixed blade of dimension (n − 2), are generated by simple bivectors, while other bivectors generate more complex rotations which can be described in terms of the simple bivectors they are sums of, each related to a plane of rotation. All bivectors can be expressed as the sum of orthogonal and commutative simple bivectors, so rotations can always be decomposed into a set of commutative rotations about the planes associated with these bivectors. The group of the rotors in n dimensions is the spin group, Spin(n).

One notable feature, related to the number of simple bivectors and so rotation planes, is that in odd dimensions every rotation has a fixed axis; it is misleading to call it an axis of rotation, as in higher dimensions rotations are taking place in multiple planes orthogonal to it. This is related to bivectors, as bivectors in odd dimensions decompose into the same number of bivectors as the even dimension below, so have the same number of planes, but one extra dimension. As each plane generates rotations in two dimensions, in odd dimensions there must be one dimension, that is an axis, that is not being rotated.[21]
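The fixed axis in odd dimensions can be observed numerically, again representing a (random, generic) bivector by a skew-symmetric matrix:

    import numpy as np
    from scipy.linalg import expm

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 5))
    B = A - A.T                      # a random bivector as a 5x5 skew-symmetric matrix

    R = expm(B)                      # the rotation it generates
    eigvals = np.linalg.eigvals(R)
    # In odd dimensions exactly one eigenvalue is (numerically) 1: the fixed axis.
    print(np.sum(np.isclose(eigvals, 1.0)))   # 1
    print(np.round(np.abs(eigvals), 6))       # all eigenvalues lie on the unit circle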

Bivectors are also related to the rotation matrix in n dimensions. As in three dimensions the characteristic equation of the matrix can be solved to find the eigenvalues. In odd dimensions this has one real root, with eigenvector the fixed axis, and in even dimensions it has no real roots, so either all or all but one of the roots are complex conjugate pairs. Each pair is associated with a simple component of the bivector associated with the rotation. In particular the log of each pair is ± the magnitude, while the eigenvectors generated from the roots are parallel to the plane of the associated simple bivector and so can be used to generate the bivector. In general the eigenvalues and bivectors are unique, and the set of eigenvalues gives the full decomposition into simple bivectors; if roots are repeated then the decomposition of the bivector into simple bivectors is not unique.

Projective geometry

Geometric algebra can be applied to projective geometry in a straightforward way. The geometric algebra used is Cℓn(ℝ), n ≥ 3, the algebra of the real vector space ℝn. This is used to describe objects in the real projective space ℝℙn−1. The non-zero vectors in Cℓn(ℝ) or ℝn are associated with points in the projective space, so vectors that differ only by a scale factor, so their exterior product is zero, map to the same point. Non-zero simple bivectors in Λ2ℝn represent lines in ℝℙn−1, with bivectors differing only by a (positive or negative) scale factor representing the same line.

A description of the projective geometry can be constructed in the geometric algebra using basic operations. For example given two distinct points in ℝℙn−1 represented by vectors a and b the line between them is given by a ∧ b (or b ∧ a). Two lines intersect in a point if A ∧ B = 0 for their bivectors A and B. This point is given by the vector

p = A ∨ B.

The operation "∨" is the meet, which can be defined as above in terms of the join, J = A ∧ B, for non-zero A ∧ B. Using these operations projective geometry can be formulated in terms of geometric algebra. For example given a third (non-zero) bivector C, the point p lies on the line given by C if and only if

p ∧ C = 0.

So the condition for the lines given by A, B and C to be collinear is

(A ∨ B) ∧ C = 0,

which in Cℓ3(ℝ) and ℝℙ2 simplifies to

⟨ABC⟩ = 0,

where the angle brackets denote the scalar part of the geometric product. In the same way all projective space operations can be written in terms of geometric algebra, with bivectors representing general lines in projective space, so the whole geometry can be developed using geometric algebra.[14]
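Since in Cℓ3(ℝ) the bivector a ∧ b is dual to the cross product a × b, these ℝℙ2 operations can be sketched with homogeneous coordinates and numpy cross products (the specific points are arbitrary illustrative choices):

    import numpy as np

    # Points of the projective plane RP^2 as non-zero homogeneous 3-vectors.
    a = np.array([1.0, 0.0, 1.0])
    b = np.array([0.0, 1.0, 1.0])
    c = np.array([0.5, 0.5, 1.0])     # lies midway between a and b
    d = np.array([1.0, 1.0, 1.0])     # not on the line through a and b

    # Dually to the bivector a ^ b, the join of two points is a cross product.
    line_ab = np.cross(a, b)
    print(np.isclose(line_ab @ c, 0.0))   # True: c is on the line through a and b
    print(np.isclose(line_ab @ d, 0.0))   # False: d is not

    # The meet of two lines is likewise a cross product of the line duals.
    line_ad = np.cross(a, d)
    p = np.cross(line_ab, line_ad)
    print(p / p[2])                       # ~ [1, 0, 1]: the lines meet at a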

Tensors and matrices

As noted above, a bivector can be written as a skew-symmetric matrix, which through the exponential map generates a rotation matrix that describes the same rotation as the rotor, also generated by the exponential map but applied to the vector. But it is also used with other bivectors such as the angular velocity tensor and the electromagnetic tensor, respectively a 3x3 and 4x4 skew-symmetric matrix or tensor.

Real bivectors in Λ2ℝn are isomorphic to n×n skew-symmetric matrices, or alternately to antisymmetric tensors of order 2 on ℝn. While bivectors are isomorphic to vectors (via the dual) in three dimensions, they can be represented by skew-symmetric matrices in any dimension. This is useful for relating bivectors to problems described by matrices, so they can be re-cast in terms of bivectors, given a geometric interpretation, then often solved more easily or related geometrically to other bivector problems.[22]

More generally every real geometric algebra is isomorphic to a matrix algebra. These contain bivectors as a subspace, though often in a way which is not especially useful. These matrices are mainly of interest as a way of classifying Clifford algebras.[23]


Notes

[1] Lounesto (2001) p. 87
[2] Leo Dorst, Daniel Fontijne, Stephen Mann (2009). Geometric Algebra for Computer Science: An Object-Oriented Approach to Geometry (2nd ed.). Morgan Kaufmann. p. 32. ISBN 0123749425. "The algebraic bivector is not specific on shape; geometrically it is an amount of oriented area in a specific plane, that's all."
[3] David Hestenes (1999). New foundations for classical mechanics: Fundamental Theories of Physics (2nd ed.). Springer. p. 21. ISBN 0792353021.
[4] Lounesto (2001) p. 33
[5] Karen Hunger Parshall, David E. Rowe (1997). The Emergence of the American Mathematical Research Community, 1876-1900. American Mathematical Society. p. 31 ff. ISBN 0821809075.
[6] Rida T. Farouki (2007). "Chapter 5: Quaternions". Pythagorean-hodograph curves: algebra and geometry inseparable. Springer. p. 60 ff. ISBN 3540733973.
[7] A discussion of quaternions from these years is Alexander McAulay (1911). "Quaternions". The encyclopædia britannica: a dictionary of arts, sciences, literature and general information. Vol. 22 (11th ed.). Cambridge University Press. p. 718 et seq.
[8] Josiah Willard Gibbs, Edwin Bidwell Wilson (1901). Vector analysis: a text-book for the use of students of mathematics and physics. Yale University Press. p. 481 ff.
[9] Philippe Boulanger, Michael A. Hayes (1993). Bivectors and waves in mechanics and optics. Springer. ISBN 0412464608.
[10] PH Boulanger & M Hayes (1991). "Bivectors and inhomogeneous plane waves in anisotropic elastic bodies". In Julian J. Wu, Thomas Chi-tsai Ting, David M. Barnett. Modern theory of anisotropic elasticity and applications. Society for Industrial and Applied Mathematics (SIAM). p. 280 et seq. ISBN 0898712890.
[11] David Hestenes. op. cit. p. 61. ISBN 0792353021.
[12] Lounesto (2001) p. 35
[13] Lounesto (2001) p. 86
[14] Hestenes, David; Ziegler, Renatus (1991). "Projective Geometry with Clifford Algebra". Acta Applicandae Mathematicae 23: 25–63.
[15] Lounesto (2001) p. 29
[16] William E Baylis (1994). Theoretical methods in the physical sciences: an introduction to problem solving using Maple V. Birkhäuser. p. 234, see footnote. ISBN 081763715X. "The terms axial vector and pseudovector are often treated as synonymous, but it is quite useful to be able to distinguish a bivector (...the pseudovector) from its dual (...the axial vector)."
[17] Chris Doran, Anthony Lasenby (2003). Geometric algebra for physicists. Cambridge University Press. p. 56. ISBN 0521480221.
[18] Lounesto (2001) pp. 37–39
[19] Lounesto (2001) pp. 89–90
[20] Lounesto (2001) pp. 109–110
[21] Lounesto (2001) p. 222
[22] Lounesto (2001) p. 193
[23] Lounesto (2001) p. 217

General references
• Leo Dorst, Daniel Fontijne, Stephen Mann (2009). "§ 2.3.3 Visualizing bivectors" (http://books.google.com/books?id=-1-zRTeCXwgC&pg=PA31). Geometric Algebra for Computer Science: An Object-Oriented Approach to Geometry (2nd ed.). Morgan Kaufmann. p. 31 ff. ISBN 0123749425.
• Whitney, Hassler (1957). Geometric Integration Theory. Princeton: Princeton University Press. ISBN 0486445836.
• Lounesto, Pertti (2001). Clifford algebras and spinors (http://books.google.com/books?id=kOsybQWDK4oC). Cambridge: Cambridge University Press. ISBN 978-0-521-00551-7.
• Chris Doran and Anthony Lasenby (2003). "§ 1.6 The outer product" (http://books.google.com/books?id=nZ6MsVIdt88C&pg=PA11). Geometric Algebra for Physicists. Cambridge: Cambridge University Press. p. 11 et seq. ISBN 978-0-521-71595-9.

