Tensor Algebra (oafak.com/wp-content/uploads/2013/01/2-tensor-algebra-jan-2013.pdf)
TRANSCRIPT
A second-order tensor $\mathbf{T}$ is a linear mapping from a vector space to itself. Given $\mathbf{u} \in V$, the mapping
$$\mathbf{T}: V \to V$$
states that $\exists\,\mathbf{v} \in V$ such that $\mathbf{T}\mathbf{u} = \mathbf{v}$.
Every other definition of a second-order tensor can be derived from this simple definition. The tensor character of an object can be established by observing its action on a vector.
Second Order Tensor
[email protected] 12/30/2012 Department of Systems Engineering, University of Lagos 2
The mapping is linear. This means that if we have two runs of the process, first with input $\mathbf{u}$ and later with input $\mathbf{v}$, the outcomes $\mathbf{T}(\mathbf{u})$ and $\mathbf{T}(\mathbf{v})$, added, are the same as if we had added the inputs first and supplied the sum of the vectors as input. More compactly,
$$\mathbf{T}(\mathbf{u} + \mathbf{v}) = \mathbf{T}(\mathbf{u}) + \mathbf{T}(\mathbf{v})$$
Linearity
Linearity further means that, for any scalar $\alpha$ and tensor $\mathbf{T}$, $\mathbf{T}(\alpha\mathbf{u}) = \alpha\mathbf{T}(\mathbf{u})$.
The two properties can be combined so that, given $\alpha, \beta \in \mathbb{R}$ and $\mathbf{u}, \mathbf{v} \in V$,
$$\mathbf{T}(\alpha\mathbf{u} + \beta\mathbf{v}) = \alpha\mathbf{T}(\mathbf{u}) + \beta\mathbf{T}(\mathbf{v})$$
Since we can think of a tensor as a process that takes an input and produces an output, two tensors are equal only if they produce the same outputs when supplied with the same input. The sum of two tensors is the tensor that will give an output which will be the sum of the outputs of the two tensors when each is given that input.
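The linearity just described is easy to check numerically. A minimal sketch (not part of the slides), representing a tensor as a 3×3 matrix of components in an orthonormal basis; all names are illustrative:

```python
# Minimal numerical sketch: a second-order tensor as a 3x3 matrix acting
# on vectors by matrix multiplication.
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))   # an arbitrary tensor
u = rng.standard_normal(3)
v = rng.standard_normal(3)
alpha, beta = 2.0, -0.5

# T(alpha u + beta v) = alpha T(u) + beta T(v)
lhs = T @ (alpha * u + beta * v)
rhs = alpha * (T @ u) + beta * (T @ v)
print(np.allclose(lhs, rhs))      # True: the mapping is linear
```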
Linearity
In general, for $\alpha, \beta \in \mathbb{R}$, $\mathbf{u} \in V$ and $\mathbf{S}, \mathbf{T} \in \mathbb{T}$,
$$\alpha\mathbf{S}\mathbf{u} + \beta\mathbf{T}\mathbf{u} = (\alpha\mathbf{S} + \beta\mathbf{T})\mathbf{u}$$
With the definitions above, the set of tensors constitutes a vector space under its rules of addition and multiplication by a scalar. It will become obvious later that it also constitutes a Euclidean vector space with its own rule of inner product.
Vector Space
Notation.
It is customary to write the tensor mapping without the parentheses. Hence, we write
$$\mathbf{T}\mathbf{u} \equiv \mathbf{T}(\mathbf{u})$$
for the mapping by the tensor $\mathbf{T}$ on the vector argument, and dispense with the parentheses except when needed.
Special Tensors
The annihilator $\mathbf{O}$ is defined as the tensor that maps every vector to the zero vector $\mathbf{o}$:
$$\mathbf{O}\mathbf{u} = \mathbf{o}, \quad \forall\,\mathbf{u} \in V$$
Zero Tensor or Annihilator
The identity tensor $\mathbf{I}$ is the tensor that leaves every vector unaltered: $\forall\,\mathbf{v} \in V$,
$$\mathbf{I}\mathbf{v} = \mathbf{v}$$
Furthermore, $\forall\,\alpha \in \mathbb{R}$, the tensor $\alpha\mathbf{I}$ is called a spherical tensor.
The identity tensor induces the concept of an inverse of a tensor, given the fact that if $\mathbf{T} \in \mathbb{T}$ and $\mathbf{u} \in V$, the mapping $\mathbf{v} \equiv \mathbf{T}\mathbf{u}$ produces a vector.
The Identity
Consider a linear mapping $\mathbf{Y}$ that, operating on $\mathbf{v}$, recovers our original argument $\mathbf{u}$, if such a mapping can be found:
$$\mathbf{Y}\mathbf{v} = \mathbf{u}$$
As a linear mapping operating on a vector, $\mathbf{Y}$ is clearly a tensor. It is called the inverse of $\mathbf{T}$ because
$$\mathbf{Y}\mathbf{v} = \mathbf{Y}\mathbf{T}\mathbf{u} = \mathbf{u}$$
so that the composition $\mathbf{Y}\mathbf{T} = \mathbf{I}$, the identity mapping. For this reason, we write
$$\mathbf{Y} = \mathbf{T}^{-1}$$
The Inverse
It is easy to show that if $\mathbf{Y}\mathbf{T} = \mathbf{I}$, then $\mathbf{T}\mathbf{Y} = \mathbf{Y}\mathbf{T} = \mathbf{I}$.
HW: Show this.
The set of invertible tensors is closed under composition. It is also closed under inversion. It forms a group with the identity tensor as the group's neutral element.
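In matrix form the two-sided inverse property is easy to confirm. A short sketch (not part of the slides), with an arbitrarily chosen invertible tensor:

```python
# Sketch: the inverse Y of a tensor T satisfies Y T = T Y = I.
import numpy as np

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])   # an invertible tensor (det = 5)
Y = np.linalg.inv(T)

I = np.eye(3)
# the left inverse is also the right inverse
print(np.allclose(Y @ T, I), np.allclose(T @ Y, I))
```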
Inverse
Given $\mathbf{u}, \mathbf{v} \in V$, the tensor $\mathbf{A}^{\mathsf{T}}$ satisfying
$$\mathbf{u} \cdot (\mathbf{A}^{\mathsf{T}}\mathbf{v}) = \mathbf{v} \cdot (\mathbf{A}\mathbf{u})$$
is called the transpose of $\mathbf{A}$.
A tensor indistinguishable from its transpose is said to be symmetric.
Transposition of Tensors
There are certain mappings from the space of tensors to the real numbers. Such mappings are called invariants of the tensor. Three of these, called the principal invariants, play key roles in the application of tensors to continuum mechanics. We shall define them shortly.
The definitions given here are free of any association with a coordinate system. It is good practice to derive any other definitions from these fundamental ones:
Invariants
If we write
$$[\mathbf{a}, \mathbf{b}, \mathbf{c}] \equiv \mathbf{a} \cdot (\mathbf{b} \times \mathbf{c})$$
where $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ are arbitrary vectors, then for any second-order tensor $\mathbf{T}$ and linearly independent $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$, the linear mapping $I_1: \mathbb{T} \to \mathbb{R}$,
$$I_1(\mathbf{T}) \equiv \operatorname{tr}\mathbf{T} = \frac{[\mathbf{T}\mathbf{a}, \mathbf{b}, \mathbf{c}] + [\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{c}] + [\mathbf{a}, \mathbf{b}, \mathbf{T}\mathbf{c}]}{[\mathbf{a}, \mathbf{b}, \mathbf{c}]}$$
is independent of the choice of the basis vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$. It is called the first principal invariant of $\mathbf{T}$, or the trace of $\mathbf{T}$: $\operatorname{tr}\mathbf{T} \equiv I_1(\mathbf{T})$.
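The basis independence of this definition can be checked numerically. A sketch (not from the slides), using a random tensor and random, almost surely independent vectors:

```python
# Sketch: the trace computed from the coordinate-free triple-product
# definition agrees with the sum of diagonal components.
import numpy as np

def triple(x, y, z):
    """Scalar triple product [x, y, z] = x . (y x z)."""
    return np.dot(x, np.cross(y, z))

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))
a, b, c = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

tr = (triple(T @ a, b, c) + triple(a, T @ b, c) + triple(a, b, T @ c)) / triple(a, b, c)
print(np.isclose(tr, np.trace(T)))   # independent of the choice of a, b, c
```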
The Trace
The trace is a linear mapping. It is easily shown that for $\alpha, \beta \in \mathbb{R}$ and $\mathbf{S}, \mathbf{T} \in \mathbb{T}$,
$$\operatorname{tr}(\alpha\mathbf{S} + \beta\mathbf{T}) = \alpha\operatorname{tr}\mathbf{S} + \beta\operatorname{tr}\mathbf{T}$$
HW: Show this by appealing to the linearity of the vector space.
While the trace of a tensor is linear, the other two principal invariants are nonlinear. We now proceed to define them.
The Trace
The second principal invariant $I_2(\mathbf{S})$ is related to the trace. In fact, you may come across books that define it so. However, the most common definition is
$$I_2(\mathbf{S}) = \frac{1}{2}\left[I_1^2(\mathbf{S}) - I_1(\mathbf{S}^2)\right]$$
Independently of the trace, we can also define the second principal invariant as follows.
Square of the trace
The second principal invariant $I_2(\mathbf{T})$, using the same notation as above, is
$$I_2(\mathbf{T}) = \frac{[\mathbf{T}\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{c}] + [\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{T}\mathbf{c}] + [\mathbf{T}\mathbf{a}, \mathbf{b}, \mathbf{T}\mathbf{c}]}{[\mathbf{a}, \mathbf{b}, \mathbf{c}]} = \frac{1}{2}\left[\operatorname{tr}^2\mathbf{T} - \operatorname{tr}(\mathbf{T}^2)\right]$$
that is, half the square of the trace minus the trace of the square of $\mathbf{T}$.
This quantity remains unchanged for any arbitrary selection of the basis vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$.
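Both forms of the second invariant can be compared directly. A sketch (not from the slides):

```python
# Sketch: the second principal invariant computed two ways --
# from the triple-product definition and from the trace formula.
import numpy as np

def triple(x, y, z):
    return np.dot(x, np.cross(y, z))

rng = np.random.default_rng(2)
T = rng.standard_normal((3, 3))
a, b, c = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

I2_triple = (triple(T @ a, T @ b, c) + triple(a, T @ b, T @ c)
             + triple(T @ a, b, T @ c)) / triple(a, b, c)
I2_trace = 0.5 * (np.trace(T)**2 - np.trace(T @ T))
print(np.isclose(I2_triple, I2_trace))
```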
Second Principal Invariant
The third mapping from tensors to the real numbers is the determinant of the tensor. While you may be familiar with that operation and can easily extract a determinant from a matrix, it is important to understand a definition for a tensor that is independent of the component expression. Such a definition remains relevant even when we have not expressed the tensor in terms of its components in a particular coordinate system.
The Determinant
As before, for any second-order tensor $\mathbf{T}$ and any linearly independent vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$, the determinant of the tensor $\mathbf{T}$ is
$$\det\mathbf{T} = \frac{[\mathbf{T}\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{T}\mathbf{c}]}{[\mathbf{a}, \mathbf{b}, \mathbf{c}]}$$
(In the special case when the basis vectors are orthonormal, the denominator is unity.)
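This coordinate-free definition can be compared against the familiar matrix determinant. A sketch (not from the slides):

```python
# Sketch: the determinant from the triple-product definition agrees
# with the component (matrix) determinant.
import numpy as np

def triple(x, y, z):
    return np.dot(x, np.cross(y, z))

rng = np.random.default_rng(3)
T = rng.standard_normal((3, 3))
a, b, c = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

det_triple = triple(T @ a, T @ b, T @ c) / triple(a, b, c)
print(np.isclose(det_triple, np.linalg.det(T)))
```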
The Determinant
It is good to note that other principal invariants can be defined. The ones defined here are the ones you are most likely to find in other texts.
An invariant is a scalar derived from a tensor that remains unchanged in any coordinate system. Mathematically, it is a mapping from the tensor space to the real numbers, or simply a scalar-valued function of the tensor.
Other Principal Invariants
The trace provides a simple way to define the inner product of two second-order tensors. Given $\mathbf{S}, \mathbf{T} \in \mathbb{T}$, the trace
$$\operatorname{tr}(\mathbf{S}^{\mathsf{T}}\mathbf{T}) = \operatorname{tr}(\mathbf{S}\mathbf{T}^{\mathsf{T}})$$
is a scalar, independent of the coordinate system chosen to represent the tensors. It is defined as the inner or scalar product of the tensors $\mathbf{S}$ and $\mathbf{T}$. That is,
$$\mathbf{S}:\mathbf{T} \equiv \operatorname{tr}(\mathbf{S}^{\mathsf{T}}\mathbf{T}) = \operatorname{tr}(\mathbf{S}\mathbf{T}^{\mathsf{T}})$$
Inner Product of Tensors
The inner product automatically induces the concept of the norm of a tensor (note: this is not the determinant!). The square root of the scalar product of a tensor with itself is the norm, magnitude or length of the tensor:
$$\|\mathbf{T}\| = \sqrt{\operatorname{tr}(\mathbf{T}^{\mathsf{T}}\mathbf{T})} = \sqrt{\mathbf{T}:\mathbf{T}}$$
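In component form this is the familiar Frobenius norm. A sketch (not from the slides):

```python
# Sketch: the norm induced by the inner product S:T = tr(S^T T)
# is the Frobenius norm of the component matrix.
import numpy as np

rng = np.random.default_rng(4)
T = rng.standard_normal((3, 3))

norm_trace = np.sqrt(np.trace(T.T @ T))
print(np.isclose(norm_trace, np.linalg.norm(T)))  # matrix default is Frobenius
```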
Attributes of a Euclidean Space
Furthermore, the distance between two tensors, as well as the angle they contain, are defined. The scalar distance $d(\mathbf{S}, \mathbf{T})$ between tensors $\mathbf{S}$ and $\mathbf{T}$ is
$$d(\mathbf{S}, \mathbf{T}) = \|\mathbf{S} - \mathbf{T}\| = \|\mathbf{T} - \mathbf{S}\|$$
and the angle $\theta(\mathbf{S}, \mathbf{T})$,
$$\theta = \cos^{-1}\frac{\mathbf{S}:\mathbf{T}}{\|\mathbf{S}\|\,\|\mathbf{T}\|}$$
Distance and angles
A product mapping from two vector spaces to $\mathbb{T}$ is defined as the tensor product. It has the following property:
$$\otimes: V \times V \to \mathbb{T}, \qquad (\mathbf{a} \otimes \mathbf{b})\mathbf{c} = (\mathbf{b} \cdot \mathbf{c})\,\mathbf{a}$$
It is an ordered pair of vectors. It acts on any other vector by creating a new vector in the direction of its first factor, as shown above. This product of two vectors is called a tensor product or a simple dyad.
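The defining action of a dyad is one line of numpy. A sketch (not from the slides):

```python
# Sketch: the dyad a (x) b, represented by the outer product, acts on c
# by returning (b . c) a.
import numpy as np

rng = np.random.default_rng(5)
a, b, c = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

D = np.outer(a, b)                       # component matrix of a (x) b
print(np.allclose(D @ c, np.dot(b, c) * a))
```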
The Tensor Product
It is very easily shown that the transpose of a dyad is simply a reversal of its order. (HW: Show this.)
The tensor product is linear in its two factors.
Based on the obvious fact that for any tensor $\mathbf{T}$ and $\mathbf{a}, \mathbf{b}, \mathbf{c} \in V$,
$$\mathbf{T}(\mathbf{a} \otimes \mathbf{b})\mathbf{c} = \mathbf{T}\left[(\mathbf{b} \cdot \mathbf{c})\,\mathbf{a}\right] = (\mathbf{T}\mathbf{a})(\mathbf{b} \cdot \mathbf{c}) = \left[(\mathbf{T}\mathbf{a}) \otimes \mathbf{b}\right]\mathbf{c}$$
it is clear that $\mathbf{T}(\mathbf{a} \otimes \mathbf{b}) = (\mathbf{T}\mathbf{a}) \otimes \mathbf{b}$.
Show this neatly by operating either side on a vector.
Furthermore, the contraction $(\mathbf{a} \otimes \mathbf{b})\mathbf{T} = \mathbf{a} \otimes (\mathbf{T}^{\mathsf{T}}\mathbf{b})$,
a fact that can be established by operating each side on the same vector.
Dyad Properties
Recall that for $\mathbf{u}, \mathbf{v} \in V$, the tensor $\mathbf{A}^{\mathsf{T}}$ satisfying
$$\mathbf{u} \cdot (\mathbf{A}^{\mathsf{T}}\mathbf{v}) = \mathbf{v} \cdot (\mathbf{A}\mathbf{u})$$
is called the transpose of $\mathbf{A}$. Now let $\mathbf{A} = \mathbf{a} \otimes \mathbf{b}$, a dyad. Then
$$\mathbf{v} \cdot (\mathbf{A}\mathbf{u}) = \mathbf{v} \cdot \left[(\mathbf{a} \otimes \mathbf{b})\mathbf{u}\right] = \mathbf{v} \cdot \left[(\mathbf{b} \cdot \mathbf{u})\,\mathbf{a}\right] = (\mathbf{v} \cdot \mathbf{a})(\mathbf{b} \cdot \mathbf{u}) = \mathbf{u} \cdot \left[(\mathbf{v} \cdot \mathbf{a})\,\mathbf{b}\right] = \mathbf{u} \cdot \left[(\mathbf{b} \otimes \mathbf{a})\mathbf{v}\right]$$
so that $(\mathbf{a} \otimes \mathbf{b})^{\mathsf{T}} = \mathbf{b} \otimes \mathbf{a}$,
showing that the transpose of a dyad is simply a reversal of its factors.
Transpose of a Dyad
If $\mathbf{n}$ is the unit normal to a given plane, show that the tensor $\mathbf{P} \equiv \mathbf{I} - \mathbf{n} \otimes \mathbf{n}$ is such that $\mathbf{P}\mathbf{u}$ is the projection of the vector $\mathbf{u}$ onto the plane in question.
Consider the fact that
$$\mathbf{P}\mathbf{u} = \mathbf{I}\mathbf{u} - (\mathbf{n} \cdot \mathbf{u})\,\mathbf{n} = \mathbf{u} - (\mathbf{n} \cdot \mathbf{u})\,\mathbf{n}$$
The above vector equation shows that $\mathbf{P}\mathbf{u}$ is what remains after we have subtracted the projection $(\mathbf{n} \cdot \mathbf{u})\,\mathbf{n}$ onto the normal. Obviously, this is the projection onto the plane itself. $\mathbf{P}$, as we shall see later, is called a tensor projector.
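The projector is easy to exercise numerically. A sketch (not from the slides), with an arbitrarily chosen normal:

```python
# Sketch: P = I - n (x) n projects a vector onto the plane normal to n.
import numpy as np

n = np.array([1.0, 2.0, 2.0])
n /= np.linalg.norm(n)              # unit normal
P = np.eye(3) - np.outer(n, n)

u = np.array([1.0, -2.0, 0.5])
Pu = P @ u
print(np.isclose(np.dot(Pu, n), 0.0))          # Pu lies in the plane
print(np.allclose(Pu + np.dot(n, u) * n, u))   # u = Pu + (n.u) n
```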
Consider a contravariant vector component $a^k$, and let us take a product of this with the Kronecker delta:
$$\delta^i_j\,a^k$$
which gives us a third-order object. Let us now perform a contraction (by setting the superscript index of $a^k$ equal to the subscript of $\delta^i_j$) to arrive at
$$a^i = \delta^i_j\,a^j$$
Observe that the only free index remaining is the superscript $i$; the other index has been contracted out (it is consequently a summation index) in the implied summation. Let us now expand the RHS above; we find,
Substitution Operation
$$a^i = \delta^i_j\,a^j = \delta^i_1\,a^1 + \delta^i_2\,a^2 + \delta^i_3\,a^3$$
Note the following cases: if $i = 1$, we have $a^1 = a^1$; if $i = 2$, we have $a^2 = a^2$; if $i = 3$, we have $a^3 = a^3$. This leads us to conclude that the contraction $\delta^i_j\,a^j = a^i$, indicating that the Kronecker delta, in a contraction, merely substitutes its own other index for the index on the vector component it was contracted with. The fact that the Kronecker delta does this in general has earned it the alias of "substitution operator".
Substitution
Operate on the vector $\mathbf{u}$ and let $\mathbf{T}\mathbf{u} = \mathbf{v}$. On the LHS, $(\mathbf{a} \otimes \mathbf{b})\mathbf{T}\mathbf{u} = (\mathbf{a} \otimes \mathbf{b})\mathbf{v} = (\mathbf{b} \cdot \mathbf{v})\,\mathbf{a}$.
On the RHS, we have
$$\left[\mathbf{a} \otimes (\mathbf{T}^{\mathsf{T}}\mathbf{b})\right]\mathbf{u} = \left[(\mathbf{T}^{\mathsf{T}}\mathbf{b}) \cdot \mathbf{u}\right]\mathbf{a} = \left[\mathbf{u} \cdot (\mathbf{T}^{\mathsf{T}}\mathbf{b})\right]\mathbf{a}$$
since the contents of both sides of the dot are vectors and the dot product of vectors is commutative. Clearly,
$$\mathbf{u} \cdot (\mathbf{T}^{\mathsf{T}}\mathbf{b}) = \mathbf{b} \cdot (\mathbf{T}\mathbf{u})$$
follows from the definition of transposition. Hence,
$$\left[\mathbf{a} \otimes (\mathbf{T}^{\mathsf{T}}\mathbf{b})\right]\mathbf{u} = (\mathbf{b} \cdot \mathbf{v})\,\mathbf{a} = (\mathbf{a} \otimes \mathbf{b})\mathbf{T}\mathbf{u}$$
Composition with Tensors
For $\mathbf{a}, \mathbf{b}, \mathbf{c}, \mathbf{d} \in V$, we can show that the dyad composition
$$(\mathbf{a} \otimes \mathbf{b})(\mathbf{c} \otimes \mathbf{d}) = (\mathbf{b} \cdot \mathbf{c})\,\mathbf{a} \otimes \mathbf{d}$$
Again, the proof is to show that both sides produce the same result when they act on the same vector. Let $\mathbf{x} \in V$; then the LHS on $\mathbf{x}$ yields
$$(\mathbf{a} \otimes \mathbf{b})(\mathbf{c} \otimes \mathbf{d})\mathbf{x} = (\mathbf{a} \otimes \mathbf{b})(\mathbf{d} \cdot \mathbf{x})\,\mathbf{c} = (\mathbf{b} \cdot \mathbf{c})(\mathbf{d} \cdot \mathbf{x})\,\mathbf{a}$$
which is obviously the result from the RHS also.
This therefore makes it straightforward to contract dyads by breaking and joining as seen above.
Dyad on Dyad Composition
Show that the trace of the tensor product $\mathbf{u} \otimes \mathbf{v}$ is $\mathbf{u} \cdot \mathbf{v}$.
Given any three independent vectors $\mathbf{a}$, $\mathbf{b}$ and $\mathbf{c}$ (there is no loss of generality in letting the three independent vectors be the curvilinear basis vectors $\mathbf{g}_1$, $\mathbf{g}_2$ and $\mathbf{g}_3$), and using the above definition of the trace, we can write that,
Trace of a Dyad
$$\operatorname{tr}(\mathbf{u} \otimes \mathbf{v}) = \frac{[(\mathbf{u} \otimes \mathbf{v})\mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3] + [\mathbf{g}_1, (\mathbf{u} \otimes \mathbf{v})\mathbf{g}_2, \mathbf{g}_3] + [\mathbf{g}_1, \mathbf{g}_2, (\mathbf{u} \otimes \mathbf{v})\mathbf{g}_3]}{[\mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3]}$$
$$= \frac{1}{\epsilon_{123}}\left\{[v_1\mathbf{u}, \mathbf{g}_2, \mathbf{g}_3] + [\mathbf{g}_1, v_2\mathbf{u}, \mathbf{g}_3] + [\mathbf{g}_1, \mathbf{g}_2, v_3\mathbf{u}]\right\}$$
$$= \frac{1}{\epsilon_{123}}\left\{v_1\,\mathbf{u} \cdot \epsilon_{23i}\,\mathbf{g}^i + \epsilon_{31i}\,\mathbf{g}^i \cdot v_2\mathbf{u} + \epsilon_{12i}\,\mathbf{g}^i \cdot v_3\mathbf{u}\right\}$$
$$= \frac{1}{\epsilon_{123}}\left\{v_1\,\mathbf{u} \cdot \epsilon_{231}\,\mathbf{g}^1 + \epsilon_{312}\,\mathbf{g}^2 \cdot v_2\mathbf{u} + \epsilon_{123}\,\mathbf{g}^3 \cdot v_3\mathbf{u}\right\}$$
$$= v_i\,u^i = \mathbf{u} \cdot \mathbf{v}$$
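The result is quick to confirm in components. A sketch (not from the slides):

```python
# Sketch: tr(u (x) v) = u . v.
import numpy as np

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])
print(np.isclose(np.trace(np.outer(u, v)), np.dot(u, v)))
```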
Trace of a Dyad
It is easy to show that for a tensor product $\mathbf{D} = \mathbf{a} \otimes \mathbf{b}$, with $\mathbf{a}, \mathbf{b} \in V$,
$$I_2(\mathbf{D}) = I_3(\mathbf{D}) = 0$$
HW: Show that this is so.
We proved earlier that $I_1(\mathbf{D}) = \mathbf{a} \cdot \mathbf{b}$.
Furthermore, if $\mathbf{T} \in \mathbb{T}$, then
$$\operatorname{tr}\left[\mathbf{T}(\mathbf{a} \otimes \mathbf{b})\right] = \operatorname{tr}\left[(\mathbf{T}\mathbf{a}) \otimes \mathbf{b}\right] = \mathbf{T}\mathbf{a} \cdot \mathbf{b}$$
Other Invariants of a Dyad
Given $\mathbf{T} \in \mathbb{T}$, for any basis vectors $\mathbf{g}_i \in V$, $i = 1, 2, 3$,
$$\mathbf{T}_i \equiv \mathbf{T}\mathbf{g}_i \in V, \quad i = 1, 2, 3$$
by the law of tensor mapping. We proceed to find the components of $\mathbf{T}_i$ on this same basis. Its covariant components, just as for any other vector, are the scalars
$$T_{\alpha i} = \mathbf{g}_\alpha \cdot \mathbf{T}_i$$
Specifically, these components are $\mathbf{g}_\alpha \cdot \mathbf{T}_1$, $\mathbf{g}_\alpha \cdot \mathbf{T}_2$ and $\mathbf{g}_\alpha \cdot \mathbf{T}_3$.
Tensor Bases & Component Representation
We can dispense with the parentheses and write $T_{\alpha i} \equiv \mathbf{T}_i \cdot \mathbf{g}_\alpha$, so that the vector
$$\mathbf{T}\mathbf{g}_i = \mathbf{T}_i = T_{\alpha i}\,\mathbf{g}^\alpha$$
The components $T_{ji}$ can be found by taking the dot product of the above equation with $\mathbf{g}_j$:
$$\mathbf{g}_j \cdot \mathbf{T}\mathbf{g}_i = T_{\alpha i}\,(\mathbf{g}_j \cdot \mathbf{g}^\alpha) = T_{ji}$$
$$T_{ji} = \mathbf{g}_j \cdot \mathbf{T}\mathbf{g}_i = \operatorname{tr}\left[\mathbf{T}(\mathbf{g}_i \otimes \mathbf{g}_j)\right] = \mathbf{T}:(\mathbf{g}_j \otimes \mathbf{g}_i)$$
Tensor Components
The component $T_{ji}$ is simply the result of the inner product of the tensor $\mathbf{T}$ with the tensor product $\mathbf{g}_j \otimes \mathbf{g}_i$.
These are the components of $\mathbf{T}$ on the dual of this particular product basis.
This is a general result and applies to all product bases.
It is straightforward to prove the results in the following table:
Tensor Components
Components of $\mathbf{T}$ | Derivation | Full Representation
$T_{ij}$ | $\mathbf{T}:(\mathbf{g}_i \otimes \mathbf{g}_j)$ | $\mathbf{T} = T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j$
$T^{ij}$ | $\mathbf{T}:(\mathbf{g}^i \otimes \mathbf{g}^j)$ | $\mathbf{T} = T^{ij}\,\mathbf{g}_i \otimes \mathbf{g}_j$
$T_i^{\;j}$ | $\mathbf{T}:(\mathbf{g}_i \otimes \mathbf{g}^j)$ | $\mathbf{T} = T_i^{\;j}\,\mathbf{g}^i \otimes \mathbf{g}_j$
$T^i_{\;j}$ | $\mathbf{T}:(\mathbf{g}^i \otimes \mathbf{g}_j)$ | $\mathbf{T} = T^i_{\;j}\,\mathbf{g}_i \otimes \mathbf{g}^j$
Tensor Components
Components of $\mathbf{I}$ | Derivation | Full Representation
$I_{ij} = g_{ij}$ | $\mathbf{I}:(\mathbf{g}_i \otimes \mathbf{g}_j)$ | $\mathbf{I} = g_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j$
$I^{ij} = g^{ij}$ | $\mathbf{I}:(\mathbf{g}^i \otimes \mathbf{g}^j)$ | $\mathbf{I} = g^{ij}\,\mathbf{g}_i \otimes \mathbf{g}_j$
$I_i^{\;j} = \delta_i^{\;j}$ | $\mathbf{I}:(\mathbf{g}_i \otimes \mathbf{g}^j)$ | $\mathbf{I} = \delta_i^{\;j}\,\mathbf{g}^i \otimes \mathbf{g}_j = \mathbf{g}^i \otimes \mathbf{g}_i$
$I^i_{\;j} = \delta^i_{\;j}$ | $\mathbf{I}:(\mathbf{g}^i \otimes \mathbf{g}_j)$ | $\mathbf{I} = \delta^i_{\;j}\,\mathbf{g}_i \otimes \mathbf{g}^j = \mathbf{g}_i \otimes \mathbf{g}^i$
Identity Tensor Components
It is easily verified from the definition of the identity tensor and the inner product that the above hold. (HW: Verify this.)
This shows that the Kronecker deltas are the components of the identity tensor in certain (not all) product bases.
The above table shows the interesting relationship between the metric components and the Kronecker deltas.
Obviously, they are components of the same tensor under different basis vectors.
Kronecker and Metric Tensors
It is easy to show that the above tables of component representations are valid. For any $\mathbf{v} \in V$ and $\mathbf{T} \in \mathbb{T}$,
$$\left(\mathbf{T} - T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j\right)\mathbf{v} = \mathbf{T}\mathbf{v} - T_{ij}\,(\mathbf{g}^i \otimes \mathbf{g}^j)\mathbf{v}$$
Expanding the vector in contravariant components, $\mathbf{v} = v^\alpha\mathbf{g}_\alpha$, we have
$$\mathbf{T}\mathbf{v} - T_{ij}\,(\mathbf{g}^i \otimes \mathbf{g}^j)\mathbf{v} = \mathbf{T}v^\alpha\mathbf{g}_\alpha - T_{ij}\,v^\alpha\,\mathbf{g}^i\,(\mathbf{g}^j \cdot \mathbf{g}_\alpha) = \mathbf{T}v^\alpha\mathbf{g}_\alpha - T_{ij}\,v^\alpha\,\mathbf{g}^i\,\delta^j_\alpha$$
$$= v^\alpha\,\mathbf{T}_\alpha - T_{i\alpha}\,v^\alpha\,\mathbf{g}^i = v^\alpha\left(\mathbf{T}_\alpha - T_{i\alpha}\,\mathbf{g}^i\right) = \mathbf{o}$$
$$\therefore\ \mathbf{T} = T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j$$
Component Representation
The transpose of $\mathbf{T} = T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j$ is $\mathbf{T}^{\mathsf{T}} = T_{ij}\,\mathbf{g}^j \otimes \mathbf{g}^i$.
If $\mathbf{T}$ is symmetric, then
$$T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j = T_{ij}\,\mathbf{g}^j \otimes \mathbf{g}^i = T_{ji}\,\mathbf{g}^i \otimes \mathbf{g}^j$$
Clearly, in this case, $T_{ij} = T_{ji}$.
It is straightforward to establish the same for contravariant components. This result cannot be established for mixed tensor components:
Symmetry
For mixed tensor components,
$$\mathbf{T} = T_i^{\;j}\,\mathbf{g}^i \otimes \mathbf{g}_j$$
The transpose,
$$\mathbf{T}^{\mathsf{T}} = T_i^{\;j}\,\mathbf{g}_j \otimes \mathbf{g}^i = T_j^{\;i}\,\mathbf{g}_i \otimes \mathbf{g}^j$$
while symmetry implies that
$$\mathbf{T} = T_i^{\;j}\,\mathbf{g}^i \otimes \mathbf{g}_j = \mathbf{T}^{\mathsf{T}} = T_j^{\;i}\,\mathbf{g}_i \otimes \mathbf{g}^j$$
We are not able to exploit the dummy indices to bring the two sides to a common product basis. Hence the symmetry is not expressible in terms of these components.
Symmetry
A tensor is antisymmetric if its transpose is its negative. In product bases that are either fully covariant or fully contravariant, antisymmetry, like symmetry, can be expressed in terms of the components:
The transpose of $\mathbf{T} = T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j$ is $\mathbf{T}^{\mathsf{T}} = T_{ij}\,\mathbf{g}^j \otimes \mathbf{g}^i$.
If $\mathbf{T}$ is antisymmetric, then
$$T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j = -T_{ij}\,\mathbf{g}^j \otimes \mathbf{g}^i = -T_{ji}\,\mathbf{g}^i \otimes \mathbf{g}^j$$
Clearly, in this case, $T_{ij} = -T_{ji}$.
It is straightforward to establish the same for contravariant components. Antisymmetric tensors are also said to be skew-symmetric.
Antisymmetry
For any tensor $\mathbf{S}$, define the symmetric and skew parts
$$\operatorname{sym}\mathbf{S} \equiv \tfrac{1}{2}\left(\mathbf{S} + \mathbf{S}^{\mathsf{T}}\right), \qquad \operatorname{skw}\mathbf{S} \equiv \tfrac{1}{2}\left(\mathbf{S} - \mathbf{S}^{\mathsf{T}}\right)$$
It is easy to show the following:
$$\mathbf{S} = \operatorname{sym}\mathbf{S} + \operatorname{skw}\mathbf{S}$$
$$\operatorname{skw}(\operatorname{sym}\mathbf{S}) = \operatorname{sym}(\operatorname{skw}\mathbf{S}) = \mathbf{O}$$
We can also write that
$$\operatorname{sym}\mathbf{S} = \tfrac{1}{2}\left(S_{ij} + S_{ji}\right)\mathbf{g}^i \otimes \mathbf{g}^j$$
and
$$\operatorname{skw}\mathbf{S} = \tfrac{1}{2}\left(S_{ij} - S_{ji}\right)\mathbf{g}^i \otimes \mathbf{g}^j$$
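The decomposition is a two-liner in components. A sketch (not from the slides):

```python
# Sketch: symmetric/skew decomposition of an arbitrary tensor.
import numpy as np

rng = np.random.default_rng(6)
S = rng.standard_normal((3, 3))

sym = 0.5 * (S + S.T)
skw = 0.5 * (S - S.T)
print(np.allclose(S, sym + skw))     # S = sym S + skw S
print(np.allclose(sym, sym.T))       # sym S is symmetric
print(np.allclose(skw, -skw.T))      # skw S is antisymmetric
```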
Symmetric & Skew Parts of Tensors
Composition of tensors in component form follows the rule of the composition of dyads.
$$\mathbf{T} = T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j, \qquad \mathbf{S} = S^{ij}\,\mathbf{g}_i \otimes \mathbf{g}_j$$
$$\mathbf{T}\mathbf{S} = \left(T_{ij}\,\mathbf{g}^i \otimes \mathbf{g}^j\right)\left(S^{\alpha\beta}\,\mathbf{g}_\alpha \otimes \mathbf{g}_\beta\right) = T_{ij}\,S^{\alpha\beta}\left(\mathbf{g}^i \otimes \mathbf{g}^j\right)\left(\mathbf{g}_\alpha \otimes \mathbf{g}_\beta\right)$$
$$= T_{ij}\,S^{\alpha\beta}\,\delta^j_\alpha\;\mathbf{g}^i \otimes \mathbf{g}_\beta = T_{ij}\,S^{j\beta}\;\mathbf{g}^i \otimes \mathbf{g}_\beta = (\mathbf{T}\mathbf{S})_i^{\;\beta}\;\mathbf{g}^i \otimes \mathbf{g}_\beta$$
Composition
Addition of two tensors of the same order is the addition of their components, provided they are referred to the same product basis.
Addition
Components | $\mathbf{T} + \mathbf{S}$
$T_{ij} + S_{ij}$ | $\left(T_{ij} + S_{ij}\right)\mathbf{g}^i \otimes \mathbf{g}^j$
$T^{ij} + S^{ij}$ | $\left(T^{ij} + S^{ij}\right)\mathbf{g}_i \otimes \mathbf{g}_j$
$T_i^{\;j} + S_i^{\;j}$ | $\left(T_i^{\;j} + S_i^{\;j}\right)\mathbf{g}^i \otimes \mathbf{g}_j$
$T^i_{\;j} + S^i_{\;j}$ | $\left(T^i_{\;j} + S^i_{\;j}\right)\mathbf{g}_i \otimes \mathbf{g}^j$
Component Addition
Invoking the definition of the three principal invariants, we now find expressions for these in terms of the components of tensors in various product bases.
First note that for $\mathbf{T}$ with mixed components $T^{i}_{\ j}$, the triple product
$$[\mathbf{T}\mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3] = \left[T^{i}_{\ 1}\,\mathbf{g}_i, \mathbf{g}_2, \mathbf{g}_3\right] = T^{i}_{\ 1}\,\mathbf{g}_i \cdot (\mathbf{g}_2 \times \mathbf{g}_3) = T^{i}_{\ 1}\,\mathbf{g}_i \cdot \epsilon_{23k}\,\mathbf{g}^k = T^{1}_{\ 1}\,\epsilon_{231}$$
Recall that $\mathbf{g}_i \times \mathbf{g}_j = \epsilon_{ijk}\,\mathbf{g}^k$.
Component Representation of Invariants
The trace of the tensor $\mathbf{T} = T^{ij}\,\mathbf{g}_i \otimes \mathbf{g}_j$:
$$\operatorname{tr}\mathbf{T} = \frac{[\mathbf{T}\mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3] + [\mathbf{g}_1, \mathbf{T}\mathbf{g}_2, \mathbf{g}_3] + [\mathbf{g}_1, \mathbf{g}_2, \mathbf{T}\mathbf{g}_3]}{[\mathbf{g}_1, \mathbf{g}_2, \mathbf{g}_3]}$$
$$= \frac{T^{1}_{\ 1}\,\epsilon_{231} + T^{2}_{\ 2}\,\epsilon_{312} + T^{3}_{\ 3}\,\epsilon_{123}}{\epsilon_{123}} = T^{1}_{\ 1} + T^{2}_{\ 2} + T^{3}_{\ 3} = T^{ij}\,g_{ij} = T^{i}_{\ i}$$
The Trace
$$[\mathbf{T}\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{c}] = \epsilon_{ijk}\,T^{i}_{\ \alpha}a^{\alpha}\,T^{j}_{\ \beta}b^{\beta}\,c^{k}$$
$$[\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{T}\mathbf{c}] = \epsilon_{ijk}\,a^{i}\,T^{j}_{\ \beta}b^{\beta}\,T^{k}_{\ \gamma}c^{\gamma}$$
$$[\mathbf{T}\mathbf{a}, \mathbf{b}, \mathbf{T}\mathbf{c}] = \epsilon_{ijk}\,T^{i}_{\ \alpha}a^{\alpha}\,b^{j}\,T^{k}_{\ \gamma}c^{\gamma}$$
Changing the roles of the dummy indices, we can write
$$[\mathbf{T}\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{c}] + [\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{T}\mathbf{c}] + [\mathbf{T}\mathbf{a}, \mathbf{b}, \mathbf{T}\mathbf{c}]$$
$$= \left(\epsilon_{\alpha\beta k}\,T^{\alpha}_{\ i}T^{\beta}_{\ j} + \epsilon_{i\beta\gamma}\,T^{\beta}_{\ j}T^{\gamma}_{\ k} + \epsilon_{\alpha j\gamma}\,T^{\alpha}_{\ i}T^{\gamma}_{\ k}\right)a^{i}b^{j}c^{k}$$
$$= \frac{1}{2}\left(T^{\alpha}_{\ \alpha}T^{\beta}_{\ \beta} - T^{\beta}_{\ \alpha}T^{\alpha}_{\ \beta}\right)\epsilon_{ijk}\,a^{i}b^{j}c^{k}$$
Second Invariant
The last equality can be verified in the following way. Contracting the coefficient
$$\epsilon_{\alpha\beta k}\,T^{\alpha}_{\ i}T^{\beta}_{\ j} + \epsilon_{i\beta\gamma}\,T^{\beta}_{\ j}T^{\gamma}_{\ k} + \epsilon_{\alpha j\gamma}\,T^{\alpha}_{\ i}T^{\gamma}_{\ k}$$
with $\epsilon^{ijk}$:
$$\epsilon^{ijk}\left(\epsilon_{\alpha\beta k}\,T^{\alpha}_{\ i}T^{\beta}_{\ j} + \epsilon_{i\beta\gamma}\,T^{\beta}_{\ j}T^{\gamma}_{\ k} + \epsilon_{\alpha j\gamma}\,T^{\alpha}_{\ i}T^{\gamma}_{\ k}\right)$$
$$= \left(\delta^{i}_{\alpha}\delta^{j}_{\beta} - \delta^{i}_{\beta}\delta^{j}_{\alpha}\right)T^{\alpha}_{\ i}T^{\beta}_{\ j} + \left(\delta^{j}_{\beta}\delta^{k}_{\gamma} - \delta^{j}_{\gamma}\delta^{k}_{\beta}\right)T^{\beta}_{\ j}T^{\gamma}_{\ k} + \left(\delta^{i}_{\alpha}\delta^{k}_{\gamma} - \delta^{i}_{\gamma}\delta^{k}_{\alpha}\right)T^{\alpha}_{\ i}T^{\gamma}_{\ k}$$
$$= \left(T^{\alpha}_{\ \alpha}T^{\beta}_{\ \beta} - T^{\beta}_{\ \alpha}T^{\alpha}_{\ \beta}\right) + \left(T^{\alpha}_{\ \alpha}T^{\beta}_{\ \beta} - T^{\beta}_{\ \alpha}T^{\alpha}_{\ \beta}\right) + \left(T^{\alpha}_{\ \alpha}T^{\beta}_{\ \beta} - T^{\beta}_{\ \alpha}T^{\alpha}_{\ \beta}\right)$$
$$= 3\left(T^{\alpha}_{\ \alpha}T^{\beta}_{\ \beta} - T^{\beta}_{\ \alpha}T^{\alpha}_{\ \beta}\right)$$
Second Invariant
Similarly, contracting $\epsilon^{ijk}$ with $\epsilon_{ijk}$, we have $\epsilon^{ijk}\,\epsilon_{ijk} = 6$. Hence
$$\frac{\left(\epsilon_{\alpha\beta k}\,T^{\alpha}_{\ i}T^{\beta}_{\ j} + \epsilon_{i\beta\gamma}\,T^{\beta}_{\ j}T^{\gamma}_{\ k} + \epsilon_{\alpha j\gamma}\,T^{\alpha}_{\ i}T^{\gamma}_{\ k}\right)\epsilon^{ijk}}{\epsilon^{ijk}\,\epsilon_{ijk}} = \frac{3\left(T^{\alpha}_{\ \alpha}T^{\beta}_{\ \beta} - T^{\beta}_{\ \alpha}T^{\alpha}_{\ \beta}\right)}{6} = \frac{1}{2}\left(T^{\alpha}_{\ \alpha}T^{\beta}_{\ \beta} - T^{\beta}_{\ \alpha}T^{\alpha}_{\ \beta}\right)$$
which is half the difference between the square of the trace and the trace of the square of the tensor $\mathbf{T}$.
The invariant,
$$[\mathbf{T}\mathbf{a}, \mathbf{T}\mathbf{b}, \mathbf{T}\mathbf{c}] = \epsilon_{ijk}\,T^{i}_{\ \alpha}a^{\alpha}\,T^{j}_{\ \beta}b^{\beta}\,T^{k}_{\ \gamma}c^{\gamma} = \det\mathbf{T}\;\epsilon_{\alpha\beta\gamma}\,a^{\alpha}b^{\beta}c^{\gamma} = \det\mathbf{T}\;[\mathbf{a}, \mathbf{b}, \mathbf{c}]$$
from which, taking $\alpha, \beta, \gamma = 1, 2, 3$ and using $\epsilon_{ijk} = \sqrt{g}\,e_{ijk}$,
$$\det\mathbf{T} = e_{ijk}\,T^{i}_{\ 1}T^{j}_{\ 2}T^{k}_{\ 3}$$
Determinant
Given a vector $\mathbf{u} = u^k\,\mathbf{g}_k$, the tensor
$$\mathbf{u}\times \equiv \epsilon_{i\alpha j}\,u^{\alpha}\;\mathbf{g}^i \otimes \mathbf{g}^j$$
is called a vector cross. The following relation is easily established between the vector cross and its associated vector:
$$\forall\,\mathbf{v} \in V, \quad (\mathbf{u}\times)\,\mathbf{v} = \mathbf{u} \times \mathbf{v}$$
The vector cross is traceless and antisymmetric. (HW: Show this.)
Traceless tensors are also called deviatoric tensors, or deviators.
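In an orthonormal basis the vector cross is the familiar skew matrix. A sketch (not from the slides):

```python
# Sketch: the vector cross (u x) as the skew matrix W with W v = u x v.
import numpy as np

def vector_cross(u):
    """Skew matrix of u in an orthonormal basis: W v = u x v."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

u = np.array([1.0, 2.0, 3.0])
v = np.array([-2.0, 0.5, 1.0])
W = vector_cross(u)
print(np.allclose(W @ v, np.cross(u, v)))   # acts as u x
print(np.isclose(np.trace(W), 0.0))         # traceless
print(np.allclose(W, -W.T))                 # antisymmetric
```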
The Vector Cross
For any antisymmetric tensor $\boldsymbol{\Omega}$, the vector $\boldsymbol{\omega} \in V$ such that $\boldsymbol{\Omega} = \boldsymbol{\omega}\times$, which can always be found, is called the axial vector of the skew tensor.
It can be proved that
$$\boldsymbol{\omega} = -\frac{1}{2}\,\epsilon^{ijk}\,\Omega_{jk}\,\mathbf{g}_i = -\frac{1}{2}\,\epsilon_{ijk}\,\Omega^{jk}\,\mathbf{g}^i$$
(HW: Prove it by contracting both sides of $\Omega_{ij} = \epsilon_{i\alpha j}\,\omega^{\alpha}$ with $\epsilon^{i\beta j}$, while noting that $\epsilon^{i\beta j}\,\epsilon_{i\alpha j} = -2\,\delta^{\beta}_{\alpha}$.)
Axial Vector
Gurtin 2.8.5: Show that for any two vectors $\mathbf{u}$ and $\mathbf{v}$, the inner product $(\mathbf{u}\times):(\mathbf{v}\times) = 2\,\mathbf{u} \cdot \mathbf{v}$. Hence show that $\|\mathbf{u}\times\| = \sqrt{2}\,\|\mathbf{u}\|$.
Write $\mathbf{u}\times = \epsilon_{ikj}\,u^k\,\mathbf{g}^i \otimes \mathbf{g}^j$ and $\mathbf{v}\times = \epsilon^{lmn}\,v_m\,\mathbf{g}_l \otimes \mathbf{g}_n$. Hence,
$$(\mathbf{u}\times):(\mathbf{v}\times) = \epsilon_{ikj}\,\epsilon^{lmn}\,u^k v_m\,\left(\mathbf{g}^i \otimes \mathbf{g}^j\right):\left(\mathbf{g}_l \otimes \mathbf{g}_n\right)$$
$$= \epsilon_{ikj}\,\epsilon^{lmn}\,u^k v_m\,(\mathbf{g}^i \cdot \mathbf{g}_l)(\mathbf{g}^j \cdot \mathbf{g}_n) = \epsilon_{ikj}\,\epsilon^{lmn}\,u^k v_m\,\delta^i_l\,\delta^j_n = \epsilon_{ikj}\,\epsilon^{imj}\,u^k v_m$$
$$= 2\,\delta^m_k\,u^k v_m = 2\,u^k v_k = 2\,\mathbf{u} \cdot \mathbf{v}$$
The rest of the result follows by setting $\mathbf{u} = \mathbf{v}$.
HW: Redo this proof with the roles of the covariant and contravariant alternating tensor components $\epsilon_{ijk}$ and $\epsilon^{ijk}$ interchanged.
Examples
For vectors $\mathbf{u}$, $\mathbf{v}$ and $\mathbf{w}$, show that $(\mathbf{u}\times)(\mathbf{v}\times)(\mathbf{w}\times) = \mathbf{v} \otimes (\mathbf{u} \times \mathbf{w}) - (\mathbf{u} \cdot \mathbf{v})\,\mathbf{w}\times$.
Working with suffixes in an orthonormal basis, $(\mathbf{u}\times)_{ij} = \epsilon_{ikj}\,u_k$, and similarly for $\mathbf{v}\times$ and $\mathbf{w}\times$. Clearly,
$$\left[(\mathbf{u}\times)(\mathbf{v}\times)(\mathbf{w}\times)\right]_{in} = \epsilon_{iaj}\,u_a\;\epsilon_{jbm}\,v_b\;\epsilon_{mcn}\,w_c$$
Using $\epsilon_{iaj}\,\epsilon_{jbm} = \epsilon_{jia}\,\epsilon_{jbm} = \delta_{ib}\,\delta_{am} - \delta_{im}\,\delta_{ab}$,
$$= \left(\delta_{ib}\,\delta_{am} - \delta_{im}\,\delta_{ab}\right)u_a v_b\;\epsilon_{mcn}\,w_c = v_i\,\epsilon_{acn}\,u_a w_c - (\mathbf{u} \cdot \mathbf{v})\,\epsilon_{icn}\,w_c$$
$$= v_i\,(\mathbf{u} \times \mathbf{w})_n - (\mathbf{u} \cdot \mathbf{v})\,(\mathbf{w}\times)_{in}$$
That is,
$$(\mathbf{u}\times)(\mathbf{v}\times)(\mathbf{w}\times) = \mathbf{v} \otimes (\mathbf{u} \times \mathbf{w}) - (\mathbf{u} \cdot \mathbf{v})\,\mathbf{w}\times$$
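The identity is easy to verify numerically. A sketch (not from the slides), using skew matrices in an orthonormal basis:

```python
# Sketch: (u x)(v x)(w x) = v (x) (u x w) - (u . v) (w x).
import numpy as np

def vector_cross(a):
    return np.array([[0.0, -a[2], a[1]],
                     [a[2], 0.0, -a[0]],
                     [-a[1], a[0], 0.0]])

rng = np.random.default_rng(7)
u, v, w = rng.standard_normal(3), rng.standard_normal(3), rng.standard_normal(3)

lhs = vector_cross(u) @ vector_cross(v) @ vector_cross(w)
rhs = np.outer(v, np.cross(u, w)) - np.dot(u, v) * vector_cross(w)
print(np.allclose(lhs, rhs))
```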
$$g_{ij} \equiv \mathbf{g}_i \cdot \mathbf{g}_j \quad\text{and}\quad g^{ij} \equiv \mathbf{g}^i \cdot \mathbf{g}^j$$
These two quantities turn out to be fundamentally important to any space that either of these two sets of basis vectors can span. They are called the covariant and contravariant metric tensors. They are the quantities that metrize the space, in the sense that any measurement of lengths, angles, areas, etc. depends on them.
Index Raising & Lowering
Now we start with the fact that the contravariant and covariant components of a vector $\mathbf{a}$ are $a^i = \mathbf{a} \cdot \mathbf{g}^i$ and $a_i = \mathbf{a} \cdot \mathbf{g}_i$ respectively. We can express the vector $\mathbf{a}$ with respect to the reciprocal basis as
$$\mathbf{a} = a_j\,\mathbf{g}^j$$
Consequently,
$$a^i = \mathbf{a} \cdot \mathbf{g}^i = a_j\,\mathbf{g}^j \cdot \mathbf{g}^i = g^{ij}\,a_j$$
The effect of contracting $g^{ij}$ with $a_j$ is to raise and substitute its index.
Index Raising & Lowering
With similar arguments, it is easily demonstrated that
$$a_i = g_{ij}\,a^j$$
so that $g_{ij}$, in a contraction, lowers and substitutes the index. This rule is a general one: these two components are able to raise or lower indices in tensors of higher orders as well. They are called index-raising and index-lowering operators.
Index Raising & Lowering
Tensor components such as $a^i$ and $a_i$, related through the index-raising and index-lowering metric tensors as on the previous slide, are called associated vectors. For higher-order quantities, they are associated tensors.
Note that associated tensors, so called, are merely components of the same tensor in different bases.
Associated Tensors
Given any tensor $\mathbf{A}$, the cofactor $\mathbf{A}^{\mathrm{c}}$ of $\mathbf{A}$ is the tensor
$$\mathbf{A}^{\mathrm{c}} = c_{i}^{\ j}\;\mathbf{g}^i \otimes \mathbf{g}_j, \qquad c_{i}^{\ j} = \frac{1}{2!}\,\epsilon^{jpq}\,\epsilon_{i\beta\gamma}\,A^{\beta}_{\ p}A^{\gamma}_{\ q}$$
Just as for a matrix,
$$\mathbf{A}^{-\mathsf{T}} = \frac{\mathbf{A}^{\mathrm{c}}}{\det\mathbf{A}}$$
We now show that the tensor $\mathbf{A}^{\mathrm{c}}$ satisfies
$$\mathbf{A}^{\mathrm{c}}(\mathbf{u} \times \mathbf{v}) = \mathbf{A}\mathbf{u} \times \mathbf{A}\mathbf{v}$$
for any two linearly independent vectors $\mathbf{u}$ and $\mathbf{v}$.
Cofactor Tensor
The above vector equation is, in covariant components,
$$c_{i}^{\ j}\,(\mathbf{u} \times \mathbf{v})_j = (\mathbf{A}\mathbf{u} \times \mathbf{A}\mathbf{v})_i$$
$$c_{i}^{\ j}\,\epsilon_{jpq}\,u^p v^q = \epsilon_{i\beta\gamma}\,A^{\beta}_{\ p}u^p\,A^{\gamma}_{\ q}v^q$$
The coefficients of the arbitrary products $u^p v^q$ give
$$c_{i}^{\ j}\,\epsilon_{jpq} = \epsilon_{i\beta\gamma}\,A^{\beta}_{\ p}A^{\gamma}_{\ q}$$
Contracting both sides with $\epsilon^{kpq}$, and noting that $\epsilon_{jpq}\,\epsilon^{kpq} = 2!\,\delta^{k}_{j}$, we have
$$2!\,c_{i}^{\ k} = \epsilon^{kpq}\,\epsilon_{i\beta\gamma}\,A^{\beta}_{\ p}A^{\gamma}_{\ q} \;\Rightarrow\; c_{i}^{\ k} = \frac{1}{2!}\,\epsilon^{kpq}\,\epsilon_{i\beta\gamma}\,A^{\beta}_{\ p}A^{\gamma}_{\ q}$$
Cofactor
The above result shows that the cofactor transforms the area vector of the parallelogram defined by $\mathbf{u}$ and $\mathbf{v}$, namely $\mathbf{u} \times \mathbf{v}$, into the area vector of the parallelogram defined by $\mathbf{A}\mathbf{u}$ and $\mathbf{A}\mathbf{v}$, namely $\mathbf{A}\mathbf{u} \times \mathbf{A}\mathbf{v}$.
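The area-vector property is easy to confirm for an invertible tensor, where the cofactor can be formed as det(A)·A⁻ᵀ. A sketch (not from the slides):

```python
# Sketch: the cofactor A^c = det(A) A^{-T} maps the area vector u x v
# to the area vector (A u) x (A v).
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [0.0, 1.0, 3.0],
              [1.0, 0.0, 1.0]])               # an invertible tensor
Ac = np.linalg.det(A) * np.linalg.inv(A).T    # cofactor tensor

u = np.array([1.0, 0.0, 2.0])
v = np.array([0.0, 1.0, -1.0])
print(np.allclose(Ac @ np.cross(u, v), np.cross(A @ u, A @ v)))
```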
Cofactor Transformation
Show that the determinant of a product is the product of the determinants.
$$\mathbf{C} = \mathbf{A}\mathbf{B} \;\Rightarrow\; C^{i}_{\ j} = A^{i}_{\ k}B^{k}_{\ j}$$
so that the determinant of $\mathbf{C}$ in component form is
$$e_{ijk}\,C^{i}_{\ 1}C^{j}_{\ 2}C^{k}_{\ 3} = e_{ijk}\,A^{i}_{\ p}B^{p}_{\ 1}\,A^{j}_{\ q}B^{q}_{\ 2}\,A^{k}_{\ r}B^{r}_{\ 3}$$
$$= \left(e_{ijk}\,A^{i}_{\ p}A^{j}_{\ q}A^{k}_{\ r}\right)B^{p}_{\ 1}B^{q}_{\ 2}B^{r}_{\ 3} = \det\mathbf{A}\;e_{pqr}\,B^{p}_{\ 1}B^{q}_{\ 2}B^{r}_{\ 3} = \det\mathbf{A}\,\det\mathbf{B}$$
If $\mathbf{A}$ is the inverse of $\mathbf{B}$, then $\mathbf{C}$ becomes the identity tensor. Hence the above also proves that the determinant of an inverse is the inverse of the determinant.
Determinants
$$\det(\alpha\mathbf{C}) = e_{ijk}\,(\alpha C^{i}_{\ 1})(\alpha C^{j}_{\ 2})(\alpha C^{k}_{\ 3}) = \alpha^3\det\mathbf{C}$$
For any invertible tensor $\mathbf{S}$, we show that $\det\mathbf{S}^{\mathrm{c}} = (\det\mathbf{S})^2$.
The inverse of the tensor $\mathbf{S}$,
$$\mathbf{S}^{-1} = (\det\mathbf{S})^{-1}\,\mathbf{S}^{\mathrm{c}\mathsf{T}}$$
Let the scalar $\alpha = \det\mathbf{S}$. We can see clearly that $\mathbf{S}^{\mathrm{c}} = \alpha\,\mathbf{S}^{-\mathsf{T}}$.
Taking the determinant of this equation, we have
$$\det\mathbf{S}^{\mathrm{c}} = \alpha^3\det\mathbf{S}^{-\mathsf{T}} = \alpha^3\det\mathbf{S}^{-1}$$
as the transpose operation has no effect on the value of a determinant. Noting that the determinant of an inverse is the inverse of the determinant, we have
$$\det\mathbf{S}^{\mathrm{c}} = \alpha^3\det\mathbf{S}^{-1} = \frac{\alpha^3}{\alpha} = (\det\mathbf{S})^2$$
Show that $(\alpha\mathbf{S})^{\mathrm{c}} = \alpha^2\,\mathbf{S}^{\mathrm{c}}$.
Ans.
$$(\alpha\mathbf{S})^{\mathrm{c}} = \det(\alpha\mathbf{S})\,(\alpha\mathbf{S})^{-\mathsf{T}} = \alpha^3\det\mathbf{S}\;\alpha^{-1}\mathbf{S}^{-\mathsf{T}} = \alpha^2\det\mathbf{S}\;\mathbf{S}^{-\mathsf{T}} = \alpha^2\,\mathbf{S}^{\mathrm{c}}$$
Show that $(\mathbf{S}^{-1})^{\mathrm{c}} = (\det\mathbf{S})^{-1}\,\mathbf{S}^{\mathsf{T}}$.
Ans.
$$(\mathbf{S}^{-1})^{\mathrm{c}} = \det(\mathbf{S}^{-1})\,(\mathbf{S}^{-1})^{-\mathsf{T}} = (\det\mathbf{S})^{-1}\,\mathbf{S}^{\mathsf{T}}$$
Cofactor
(HW: Show that the second principal invariant of an invertible tensor is the trace of its cofactor.)
Cofactor
(d) Show that $(\mathbf{S}^{\mathrm{c}})^{-1} = (\det\mathbf{S})^{-1}\,\mathbf{S}^{\mathsf{T}}$.
Ans. $\mathbf{S}^{\mathrm{c}} = \det\mathbf{S}\;\mathbf{S}^{-\mathsf{T}}$.
Consequently,
$$(\mathbf{S}^{\mathrm{c}})^{-1} = (\det\mathbf{S})^{-1}\,(\mathbf{S}^{-\mathsf{T}})^{-1} = (\det\mathbf{S})^{-1}\,\mathbf{S}^{\mathsf{T}}$$
(e) Show that $(\mathbf{S}^{\mathrm{c}})^{\mathrm{c}} = \det\mathbf{S}\;\mathbf{S}$.
Ans. $\mathbf{S}^{\mathrm{c}} = \det\mathbf{S}\;\mathbf{S}^{-\mathsf{T}}$,
so that
$$(\mathbf{S}^{\mathrm{c}})^{\mathrm{c}} = \det(\mathbf{S}^{\mathrm{c}})\,(\mathbf{S}^{\mathrm{c}})^{-\mathsf{T}} = (\det\mathbf{S})^2\left[(\mathbf{S}^{\mathrm{c}})^{-1}\right]^{\mathsf{T}} = (\det\mathbf{S})^2\left[(\det\mathbf{S})^{-1}\,\mathbf{S}^{\mathsf{T}}\right]^{\mathsf{T}} = \det\mathbf{S}\;\mathbf{S}$$
as required.
3. Show that for any invertible tensor $\mathbf{S}$ and any vector $\mathbf{u}$,
$$(\mathbf{S}\mathbf{u})\times = \mathbf{S}^{\mathrm{c}}\,(\mathbf{u}\times)\,\mathbf{S}^{-1}$$
where $\mathbf{S}^{\mathrm{c}}$ and $\mathbf{S}^{-1}$ are the cofactor and inverse of $\mathbf{S}$ respectively.
By definition,
$$\mathbf{S}^{\mathrm{c}} = \det\mathbf{S}\;\mathbf{S}^{-\mathsf{T}}$$
We are to prove that
$$(\mathbf{S}\mathbf{u})\times = \mathbf{S}^{\mathrm{c}}(\mathbf{u}\times)\mathbf{S}^{-1} = \det\mathbf{S}\;\mathbf{S}^{-\mathsf{T}}(\mathbf{u}\times)\mathbf{S}^{-1}$$
or, equivalently, that
$$\mathbf{S}^{\mathsf{T}}\left[(\mathbf{S}\mathbf{u})\times\right] = (\mathbf{u}\times)\,\det\mathbf{S}\;\mathbf{S}^{-1} = (\mathbf{u}\times)\,\mathbf{S}^{\mathrm{c}\mathsf{T}}$$
On the RHS, the $ij$ component of $\mathbf{u}\times$ is
$$(\mathbf{u}\times)_{ij} = \epsilon_{i\alpha j}\,u^{\alpha}$$
which is exactly the same as writing $\mathbf{u}\times = \epsilon_{i\alpha j}\,u^{\alpha}\;\mathbf{g}^i \otimes \mathbf{g}^j$ in the invariant form.
Similarly, $\mathbf{S}^{\mathrm{c}} = c_{i}^{\ k}\,\mathbf{g}^i \otimes \mathbf{g}_k$ with
$$c_{i}^{\ k} = \frac{1}{2}\,\epsilon^{kpq}\,\epsilon_{i\beta\gamma}\,S^{\beta}_{\ p}S^{\gamma}_{\ q}$$
so that its transpose is $\mathbf{S}^{\mathrm{c}\mathsf{T}} = c_{i}^{\ k}\,\mathbf{g}_k \otimes \mathbf{g}^i$. We may therefore write
$$(\mathbf{u}\times)\,\mathbf{S}^{\mathrm{c}\mathsf{T}} = \left(\epsilon_{m\alpha n}\,u^{\alpha}\,\mathbf{g}^m \otimes \mathbf{g}^n\right)\left(c_{i}^{\ k}\,\mathbf{g}_k \otimes \mathbf{g}^i\right) = \epsilon_{m\alpha k}\,u^{\alpha}\,c_{i}^{\ k}\;\mathbf{g}^m \otimes \mathbf{g}^i$$
$$= \frac{1}{2}\,\epsilon_{m\alpha k}\,\epsilon^{kpq}\,\epsilon_{i\beta\gamma}\,u^{\alpha}\,S^{\beta}_{\ p}S^{\gamma}_{\ q}\;\mathbf{g}^m \otimes \mathbf{g}^i$$
Using $\epsilon_{m\alpha k}\,\epsilon^{kpq} = \delta^{p}_{m}\delta^{q}_{\alpha} - \delta^{q}_{m}\delta^{p}_{\alpha}$,
$$= \frac{1}{2}\,\epsilon_{i\beta\gamma}\left(S^{\beta}_{\ m}S^{\gamma}_{\ \alpha} - S^{\beta}_{\ \alpha}S^{\gamma}_{\ m}\right)u^{\alpha}\;\mathbf{g}^m \otimes \mathbf{g}^i = \epsilon_{i\beta\gamma}\,S^{\beta}_{\ m}\,S^{\gamma}_{\ \alpha}u^{\alpha}\;\mathbf{g}^m \otimes \mathbf{g}^i$$
(the two terms in the bracket are equal after renaming $\beta \leftrightarrow \gamma$ and using the antisymmetry of $\epsilon_{i\beta\gamma}$).
We now turn to the LHS. With $\mathbf{S} = S^{\alpha}_{\ i}\,\mathbf{g}_\alpha \otimes \mathbf{g}^i$, so that $\mathbf{S}^{\mathsf{T}} = S^{\alpha}_{\ i}\,\mathbf{g}^i \otimes \mathbf{g}_\alpha$,
$$(\mathbf{S}\mathbf{u})\times = \epsilon_{i\alpha j}\,(\mathbf{S}\mathbf{u})^{\alpha}\;\mathbf{g}^i \otimes \mathbf{g}^j = \epsilon_{i\alpha j}\,S^{\alpha}_{\ \gamma}u^{\gamma}\;\mathbf{g}^i \otimes \mathbf{g}^j$$
so that
$$\mathbf{S}^{\mathsf{T}}\left[(\mathbf{S}\mathbf{u})\times\right] = \left(S^{\beta}_{\ m}\,\mathbf{g}^m \otimes \mathbf{g}_\beta\right)\left(\epsilon_{i\alpha j}\,S^{\alpha}_{\ \gamma}u^{\gamma}\;\mathbf{g}^i \otimes \mathbf{g}^j\right) = \epsilon_{\beta\alpha j}\,S^{\beta}_{\ m}\,S^{\alpha}_{\ \gamma}u^{\gamma}\;\mathbf{g}^m \otimes \mathbf{g}^j$$
Since $\epsilon_{\beta\alpha j} = \epsilon_{j\beta\alpha}$, this is exactly the expression obtained for $(\mathbf{u}\times)\,\mathbf{S}^{\mathrm{c}\mathsf{T}}$ above, as required.
Show that $(\mathbf{S}^{\mathrm{c}}\mathbf{u})\times = \mathbf{S}\,(\mathbf{u}\times)\,\mathbf{S}^{\mathsf{T}}$.
Working with suffixes in an orthonormal basis, the LHS in component form is
$$\left[(\mathbf{S}^{\mathrm{c}}\mathbf{u})\times\right]_{ij} = \epsilon_{imj}\,(\mathbf{S}^{\mathrm{c}}\mathbf{u})_m$$
where
$$(\mathbf{S}^{\mathrm{c}})_{m\beta} = \frac{1}{2}\,\epsilon_{mrs}\,\epsilon_{\beta pq}\,S_{rp}S_{sq} \quad\Rightarrow\quad (\mathbf{S}^{\mathrm{c}}\mathbf{u})_m = \frac{1}{2}\,\epsilon_{mrs}\,\epsilon_{\beta pq}\,S_{rp}S_{sq}\,u_{\beta}$$
Consequently,
$$\left[(\mathbf{S}^{\mathrm{c}}\mathbf{u})\times\right]_{ij} = \frac{1}{2}\,\epsilon_{imj}\,\epsilon_{mrs}\,\epsilon_{\beta pq}\,S_{rp}S_{sq}\,u_{\beta}$$
Using $\epsilon_{imj}\,\epsilon_{mrs} = -\epsilon_{mij}\,\epsilon_{mrs} = -\left(\delta_{ir}\delta_{js} - \delta_{is}\delta_{jr}\right)$,
$$= -\frac{1}{2}\left(S_{ip}S_{jq} - S_{jp}S_{iq}\right)\epsilon_{\beta pq}\,u_{\beta} = -\epsilon_{\beta pq}\,S_{ip}S_{jq}\,u_{\beta} = \epsilon_{p\beta q}\,S_{ip}S_{jq}\,u_{\beta}$$
On the RHS,
$$\left[\mathbf{S}\,(\mathbf{u}\times)\,\mathbf{S}^{\mathsf{T}}\right]_{ij} = S_{ip}\,(\mathbf{u}\times)_{pq}\,S_{jq} = \epsilon_{p\beta q}\,S_{ip}S_{jq}\,u_{\beta}$$
which on a closer look is exactly the same as the LHS, so that
$$(\mathbf{S}^{\mathrm{c}}\mathbf{u})\times = \mathbf{S}\,(\mathbf{u}\times)\,\mathbf{S}^{\mathsf{T}}$$
as required.
4. Let $\boldsymbol{\Omega}$ be skew with axial vector $\boldsymbol{\omega}$. Given vectors $\mathbf{u}$ and $\mathbf{v}$, show that $\boldsymbol{\Omega}\mathbf{u} \times \boldsymbol{\Omega}\mathbf{v} = (\boldsymbol{\omega} \otimes \boldsymbol{\omega})(\mathbf{u} \times \mathbf{v})$, and hence conclude that $\boldsymbol{\Omega}^{\mathrm{c}} = \boldsymbol{\omega} \otimes \boldsymbol{\omega}$.
$$\boldsymbol{\Omega}\mathbf{u} \times \boldsymbol{\Omega}\mathbf{v} = (\boldsymbol{\omega} \times \mathbf{u}) \times (\boldsymbol{\omega} \times \mathbf{v})$$
$$= \left[(\boldsymbol{\omega} \times \mathbf{u}) \cdot \mathbf{v}\right]\boldsymbol{\omega} - \left[(\boldsymbol{\omega} \times \mathbf{u}) \cdot \boldsymbol{\omega}\right]\mathbf{v} = \left[\boldsymbol{\omega} \cdot (\mathbf{u} \times \mathbf{v})\right]\boldsymbol{\omega} = (\boldsymbol{\omega} \otimes \boldsymbol{\omega})(\mathbf{u} \times \mathbf{v})$$
But by definition, the cofactor must satisfy $\boldsymbol{\Omega}\mathbf{u} \times \boldsymbol{\Omega}\mathbf{v} = \boldsymbol{\Omega}^{\mathrm{c}}(\mathbf{u} \times \mathbf{v})$,
which compared with the previous equation yields the desired result that
$$\boldsymbol{\Omega}^{\mathrm{c}} = \boldsymbol{\omega} \otimes \boldsymbol{\omega}$$
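Since a skew tensor is singular, its cofactor cannot be formed from det·inverse; the inverse-free formula proved in the next example, S^c = (S² − I₁S + I₂I)ᵀ, can be used instead. A numerical sketch (not from the slides):

```python
# Sketch: for a skew tensor with axial vector w, the cofactor is w (x) w.
# The cofactor is built from the inverse-free formula
# S^c = (S^2 - tr(S) S + I2 I)^T, which works even for singular S.
import numpy as np

w = np.array([1.0, -2.0, 0.5])
W = np.array([[0.0, -w[2], w[1]],
              [w[2], 0.0, -w[0]],
              [-w[1], w[0], 0.0]])     # W v = w x v

I2 = 0.5 * (np.trace(W)**2 - np.trace(W @ W))
Wc = (W @ W - np.trace(W) * W + I2 * np.eye(3)).T
print(np.allclose(Wc, np.outer(w, w)))
```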
5. Show that the cofactor of a tensor can be written as
𝑺^c = (𝑺² − I₁𝑺 + I₂𝑰)^T
even if 𝑺 is not invertible. I₁, I₂ are the first two invariants of 𝑺.
Ans.
The above equation can be written more explicitly as,
𝑺^c = [𝑺² − tr(𝑺)𝑺 + ½(tr²𝑺 − tr(𝑺²))𝑰]^T
In invariant component form, this is easily seen to be,
𝑺^c = [S_il S_lj − S_αα S_ij + ½(S_αα S_ββ − S_βα S_αβ)δ_ij] 𝒆_j⊗𝒆_i
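This closed form can be checked numerically (a sketch added here, not part of the slides), comparing it against the cofactor of an invertible tensor computed independently as det(𝑺)𝑺^−T:

```python
import numpy as np

rng = np.random.default_rng(3)
S = rng.standard_normal((3, 3))
I = np.eye(3)

I1 = np.trace(S)
I2 = 0.5 * (np.trace(S) ** 2 - np.trace(S @ S))

Sc_formula = (S @ S - I1 * S + I2 * I).T            # (S^2 - I1 S + I2 I)^T
Sc_direct = np.linalg.det(S) * np.linalg.inv(S).T   # det(S) S^{-T}
```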
But we know that the cofactor can be obtained directly from the equation,
𝑺^c = ½ ε_mij ε_nβγ S_iβ S_jγ 𝒆_m⊗𝒆_n
Expanding the product of the alternating symbols as a determinant of Kronecker deltas,
ε_mij ε_nβγ =
| δ_mn δ_mβ δ_mγ |
| δ_in  δ_iβ  δ_iγ |
| δ_jn  δ_jβ  δ_jγ |
so that
𝑺^c = ½ [δ_mn(δ_iβ δ_jγ − δ_iγ δ_jβ) − δ_mβ(δ_in δ_jγ − δ_iγ δ_jn) + δ_mγ(δ_in δ_jβ − δ_iβ δ_jn)] S_iβ S_jγ 𝒆_m⊗𝒆_n
= ½ [δ_mn(S_αα S_ββ − S_αβ S_βα) − 2S_nm S_αα + 2S_nα S_αm] 𝒆_m⊗𝒆_n
which is the component form of (𝑺² − I₁𝑺 + I₂𝑰)^T obtained above.
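The index expression above can be evaluated directly with an explicit alternating symbol and `np.einsum` (a numerical sketch, not part of the slides):

```python
import numpy as np

# Build the Levi-Civita symbol as a 3x3x3 array
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

rng = np.random.default_rng(4)
S = rng.standard_normal((3, 3))

# S^c_{mn} = 1/2 eps_{mij} eps_{nbg} S_{ib} S_{jg}
Sc = 0.5 * np.einsum('mij,nbg,ib,jg->mn', eps, eps, S, S)
Sc_ref = np.linalg.det(S) * np.linalg.inv(S).T
```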
Using the above, show that the cofactor of the vector cross (𝝎 ×) is 𝝎⊗𝝎.
(𝝎 ×)² = (ε_iαj w_α 𝒆_i⊗𝒆_j)(ε_kβl w_β 𝒆_k⊗𝒆_l)
= ε_iαj ε_jβl w_α w_β 𝒆_i⊗𝒆_l
= (δ_iβ δ_αl − δ_il δ_αβ) w_α w_β 𝒆_i⊗𝒆_l = (w_i w_l − δ_il w_α w_α) 𝒆_i⊗𝒆_l
= 𝝎⊗𝝎 − (𝝎 ⋅ 𝝎)𝑰
Hence,
tr(𝝎 ×)² = 𝝎 ⋅ 𝝎 − 3(𝝎 ⋅ 𝝎) = −2(𝝎 ⋅ 𝝎)
tr(𝝎 ×) = 0
But from the previous result,
(𝝎 ×)^c = [(𝝎 ×)² − (𝝎 ×)tr(𝝎 ×) + ½(tr²(𝝎 ×) − tr(𝝎 ×)²)𝑰]^T
= [𝝎⊗𝝎 − (𝝎 ⋅ 𝝎)𝑰 − 𝑶 + ½(0 + 2(𝝎 ⋅ 𝝎))𝑰]^T
= [𝝎⊗𝝎]^T = 𝝎⊗𝝎
Show that (𝝎⊗𝝎)^c = 𝑶.
In component form,
𝝎⊗𝝎 = w_i w_j 𝒆_i⊗𝒆_j
So that
(𝝎⊗𝝎)² = (w_i w_j 𝒆_i⊗𝒆_j)(w_k w_l 𝒆_k⊗𝒆_l) = w_i w_j w_k w_l δ_jk 𝒆_i⊗𝒆_l
= (𝝎 ⋅ 𝝎) 𝝎⊗𝝎
Clearly,
tr(𝝎⊗𝝎) = 𝝎 ⋅ 𝝎, tr²(𝝎⊗𝝎) = (𝝎 ⋅ 𝝎)², and tr[(𝝎⊗𝝎)²] = (𝝎 ⋅ 𝝎)²
so that
(𝝎⊗𝝎)^c = [(𝝎⊗𝝎)² − (𝝎⊗𝝎)tr(𝝎⊗𝝎) + ½(tr²(𝝎⊗𝝎) − tr[(𝝎⊗𝝎)²])𝑰]^T
= [(𝝎 ⋅ 𝝎)(𝝎⊗𝝎) − (𝝎 ⋅ 𝝎)(𝝎⊗𝝎) + ½((𝝎 ⋅ 𝝎)² − (𝝎 ⋅ 𝝎)²)𝑰]^T
= 𝑶
Given a Euclidean vector space E, a tensor 𝑸 is said to be orthogonal if, ∀ 𝒖, 𝒗 ∈ E,
𝑸𝒖 ⋅ 𝑸𝒗 = 𝒖 ⋅ 𝒗
Specifically, we can allow 𝒗 = 𝒖, so that
𝑸𝒖 ⋅ 𝑸𝒖 = 𝒖 ⋅ 𝒖, or |𝑸𝒖| = |𝒖|
in which case the mapping leaves the magnitude unaltered.
Orthogonal Tensors
By the definition of the transpose, we have that, for all 𝒖, 𝒗,
𝒖 ⋅ 𝒗 = 𝑸𝒖 ⋅ 𝑸𝒗 = 𝒖 ⋅ 𝑸^T𝑸𝒗
Clearly, 𝑸^T𝑸 = 𝑰.
A condition necessary and sufficient for a tensor 𝑸 to be orthogonal is that 𝑸 be invertible with its inverse equal to its transpose.
Orthogonal Tensors
Upon noting that the determinant of a product is the product of the determinants and that transposition does not alter a determinant, it is easy to conclude that,
det(𝑸^T𝑸) = det(𝑸^T) det(𝑸) = (det 𝑸)² = det 𝑰 = 1
which clearly shows that det 𝑸 = ±1.
When the determinant of an orthogonal tensor is strictly positive (+1), it is called "proper orthogonal".
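A concrete numerical sketch of both cases (an addition, not part of the original slides): a plane rotation is orthogonal with determinant +1, while a reflection is orthogonal with determinant −1:

```python
import numpy as np

th = 0.7
Q = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])   # rotation about e3
R = np.diag([1.0, 1.0, -1.0])                    # reflection in the e1-e2 plane

detQ = np.linalg.det(Q)
detR = np.linalg.det(R)
```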
Orthogonal Tensors
A rotation is a proper orthogonal tensor while a reflection is not.
Rotation & Reflection
Let 𝑸 be a rotation. For any pair of vectors 𝒖, 𝒗, show that 𝑸(𝒖 × 𝒗) = (𝑸𝒖) × (𝑸𝒗).
This question is the same as showing that the cofactor of 𝑸 is 𝑸 itself, that is, that a rotation is its own cofactor. We can write that
𝑻(𝒖 × 𝒗) = (𝑸𝒖) × (𝑸𝒗)
where 𝑻 = cof 𝑸 = det(𝑸) 𝑸^−T.
Since 𝑸 is a rotation, det 𝑸 = 1, and 𝑸^−T = (𝑸^−1)^T = (𝑸^T)^T = 𝑸.
This implies that 𝑻 = 𝑸 and consequently, 𝑸(𝒖 × 𝒗) = (𝑸𝒖) × (𝑸𝒗).
Rotation
For a proper orthogonal tensor 𝑸, show that the eigenvalue equation always yields an eigenvalue of +1. This means that there is always a solution to the equation,
𝑸𝒖 = 𝒖
For any invertible tensor, 𝑺^c = det(𝑺) 𝑺^−T.
For a proper orthogonal tensor 𝑸, det 𝑸 = 1. It therefore follows that,
𝑸^c = det(𝑸) 𝑸^−T = 𝑸^−T = 𝑸
It is easily shown that tr 𝑸^c = I₂(𝑸). (HW: Show this; Romano 26.)
The characteristic equation for 𝑸 is det(λ𝑰 − 𝑸) = λ³ − λ²I₁ + λI₂ − I₃ = 0
or, since I₂ = tr 𝑸^c = tr 𝑸 = I₁ and I₃ = det 𝑸 = 1,
λ³ − λ²I₁ + λI₁ − 1 = 0
which is obviously satisfied by λ = 1.
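Numerically (a sketch, not part of the slides), the spectrum of a rotation indeed contains +1, and the corresponding eigenvector is the rotation axis:

```python
import numpy as np

th = 1.2
Q = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0,         0.0,        1.0]])   # rotation about e3

lam = np.linalg.eigvals(Q)                       # e^{+i th}, e^{-i th}, 1
has_unit_eigenvalue = np.any(np.isclose(lam, 1.0))
```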
If for an arbitrary unit vector 𝒏, the tensor 𝑸(θ) = cos θ 𝑰 + (1 − cos θ)𝒏⊗𝒏 + sin θ (𝒏 ×), where (𝒏 ×) is the skew tensor whose ij component is ε_ikj n_k,
show that 𝑸(θ)(𝑰 − 𝒏⊗𝒏) = cos θ (𝑰 − 𝒏⊗𝒏) + sin θ (𝒏 ×).
𝑸(θ)(𝒏⊗𝒏) = cos θ 𝒏⊗𝒏 + (1 − cos θ)(𝒏 ⋅ 𝒏)𝒏⊗𝒏 + sin θ [(𝒏 ×)𝒏]⊗𝒏
The last term vanishes immediately on account of the fact that (𝒏 ×)𝒏 = 𝒏 × 𝒏 = 𝒐. We therefore have,
𝑸(θ)(𝒏⊗𝒏) = cos θ 𝒏⊗𝒏 + (1 − cos θ)𝒏⊗𝒏 = 𝒏⊗𝒏
which means that 𝑸(θ) leaves 𝒏⊗𝒏 unchanged, so that
𝑸(θ)(𝑰 − 𝒏⊗𝒏) = cos θ 𝑰 + (1 − cos θ)𝒏⊗𝒏 + sin θ (𝒏 ×) − 𝒏⊗𝒏
= cos θ (𝑰 − 𝒏⊗𝒏) + sin θ (𝒏 ×)
as required.
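The decomposition can be sketched numerically (an addition, not part of the original): build 𝑸(θ) from the Rodrigues form and compare both sides:

```python
import numpy as np

th = 0.9
n = np.array([1.0, 2.0, 2.0]) / 3.0    # a unit vector
N = np.array([[0.0, -n[2], n[1]],
              [n[2], 0.0, -n[0]],
              [-n[1], n[0], 0.0]])      # the skew tensor (n x)
I = np.eye(3)
nn = np.outer(n, n)

Q = np.cos(th) * I + (1.0 - np.cos(th)) * nn + np.sin(th) * N

lhs = Q @ (I - nn)
rhs = np.cos(th) * (I - nn) + np.sin(th) * N
```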
If for an arbitrary unit vector 𝒏, the tensor 𝑸(θ) = cos θ 𝑰 + (1 − cos θ)𝒏⊗𝒏 + sin θ (𝒏 ×), where (𝒏 ×) is the skew tensor whose ij component is ε_ikj n_k,
show for an arbitrary vector 𝒖 that 𝒗 = 𝑸(θ)𝒖 has the same magnitude as 𝒖.
Given an arbitrary vector 𝒖, compute the vector 𝒗 = 𝑸(θ)𝒖. Clearly,
𝒗 = cos θ 𝒖 + (1 − cos θ)(𝒖 ⋅ 𝒏)𝒏 + sin θ (𝒏 × 𝒖)
The square of the magnitude of 𝒗 is
𝒗 ⋅ 𝒗 = |𝒗|² = cos²θ (𝒖 ⋅ 𝒖) + (1 − cos θ)²(𝒖 ⋅ 𝒏)² + sin²θ |𝒏 × 𝒖|² + 2 cos θ (1 − cos θ)(𝒖 ⋅ 𝒏)²
(the remaining cross terms vanish since 𝒏 × 𝒖 is perpendicular to both 𝒖 and 𝒏)
= cos²θ (𝒖 ⋅ 𝒖) + (1 − cos θ)(𝒖 ⋅ 𝒏)²[(1 − cos θ) + 2 cos θ] + sin²θ |𝒏 × 𝒖|²
= cos²θ (𝒖 ⋅ 𝒖) + (1 − cos θ)(1 + cos θ)(𝒖 ⋅ 𝒏)² + sin²θ |𝒏 × 𝒖|²
= cos²θ (𝒖 ⋅ 𝒖) + sin²θ [(𝒖 ⋅ 𝒏)² + |𝒏 × 𝒖|²]
= cos²θ (𝒖 ⋅ 𝒖) + sin²θ (𝒖 ⋅ 𝒖) = 𝒖 ⋅ 𝒖.
The operation of the tensor 𝑸(θ) on 𝒖 therefore does not change its magnitude. Furthermore, it is easy to show that the projection 𝒖 ⋅ 𝒏 of an arbitrary vector on 𝒏 and that of its image, 𝒗 ⋅ 𝒏, are the same. The axis of rotation is therefore in the direction of 𝒏.
If for an arbitrary unit vector 𝒏, the tensor 𝑸(θ) = cos θ 𝑰 + (1 − cos θ)𝒏⊗𝒏 + sin θ (𝒏 ×), where (𝒏 ×) is the skew tensor whose ij component is ε_ikj n_k, show for arbitrary 0 ≤ α, β ≤ 2π that 𝑸(α + β) = 𝑸(α)𝑸(β).
It is convenient to write 𝑸(α) and 𝑸(β) in terms of their components. The ij component of 𝑸(α) is
[𝑸(α)]_ij = cos α δ_ij + (1 − cos α) n_i n_j − sin α ε_ijk n_k
Consequently, we can write,
[𝑸(α)𝑸(β)]_ij = [𝑸(α)]_ik [𝑸(β)]_kj
= [cos α δ_ik + (1 − cos α) n_i n_k − sin α ε_ikl n_l][cos β δ_kj + (1 − cos β) n_k n_j − sin β ε_kjm n_m]
Expanding, and using n_k n_k = 1, ε_kjm n_k n_m = 0, ε_ikl n_k n_l = 0 and ε_ikl ε_kjm n_l n_m = n_i n_j − δ_ij, the surviving terms collect as
= (cos α cos β − sin α sin β) δ_ij + [1 − (cos α cos β − sin α sin β)] n_i n_j − (cos α sin β + sin α cos β) ε_ijk n_k
= cos(α + β) δ_ij + [1 − cos(α + β)] n_i n_j − sin(α + β) ε_ijk n_k = [𝑸(α + β)]_ij
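The composition rule can be sketched numerically (an addition, not part of the original slides):

```python
import numpy as np

def rot(theta, n):
    # Rodrigues form: Q(theta) = cos t I + (1 - cos t) n⊗n + sin t (n x)
    N = np.array([[0.0, -n[2], n[1]],
                  [n[2], 0.0, -n[0]],
                  [-n[1], n[0], 0.0]])
    return (np.cos(theta) * np.eye(3)
            + (1.0 - np.cos(theta)) * np.outer(n, n)
            + np.sin(theta) * N)

n = np.array([2.0, 1.0, 2.0]) / 3.0   # a unit axis
alpha, beta = 0.8, 1.7

composed = rot(alpha, n) @ rot(beta, n)
direct = rot(alpha + beta, n)
```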
Use the results of 52 and 55 above to show that the tensor 𝑸(θ) = cos θ 𝑰 + (1 − cos θ)𝒏⊗𝒏 + sin θ (𝒏 ×) is periodic with a period of 2π.
From 55 we can write that 𝑸(α + 2π) = 𝑸(α)𝑸(2π). But from 52, 𝑸(0) = 𝑸(2π) = 𝑰. We therefore have that,
𝑸(α + 2π) = 𝑸(α)𝑸(2π) = 𝑸(α)
which completes the proof. The above results show that 𝑸(α) is a rotation about the unit vector 𝒏 through the angle α.
Define Lin⁺ as the set of all tensors with a positive determinant. Show that Lin⁺ is invariant under G, the proper orthogonal group of all rotations, in the sense that for any tensor 𝑺 ∈ Lin⁺ and any 𝑸 ∈ G, 𝑸𝑺𝑸^T ∈ Lin⁺. (G285)
Since we are given that 𝑺 ∈ Lin⁺, the determinant of 𝑺 is positive. Consider det(𝑸𝑺𝑸^T). The determinant of a product of tensors is the product of their determinants (proved above), so
det(𝑸𝑺𝑸^T) = det 𝑸 × det 𝑺 × det 𝑸^T
Since 𝑸 is a rotation, det 𝑸 = det 𝑸^T = 1. Consequently,
det(𝑸𝑺𝑸^T) = 1 × det 𝑺 × 1 = det 𝑺 > 0
Hence the determinant of 𝑸𝑺𝑸^T is also positive and therefore 𝑸𝑺𝑸^T ∈ Lin⁺.
Define Sym as the set of all symmetric tensors. Show that Sym is invariant under G, the proper orthogonal group of all rotations, in the sense that for any tensor 𝑨 ∈ Sym and every 𝑸 ∈ G, 𝑸𝑨𝑸^T ∈ Sym. (G285)
Since we are given that 𝑨 ∈ Sym, we inspect the tensor 𝑸𝑨𝑸^T. Its transpose is,
(𝑸𝑨𝑸^T)^T = (𝑸^T)^T 𝑨^T 𝑸^T = 𝑸𝑨𝑸^T
so that 𝑸𝑨𝑸^T is symmetric and therefore 𝑸𝑨𝑸^T ∈ Sym; the set is invariant under the transformation.
Central to the usefulness of tensors in Continuum Mechanics is the Eigenvalue Problem and its consequences.
โข These issues lead to the mathematical representation of such physical properties as Principal stresses, Principal strains, Principal stretches, Principal planes, Natural frequencies, Normal modes, Characteristic values, resonance, equivalent stresses, theories of yielding, failure analyses, Von Mises stresses, etc.
• As we can see, these seemingly unrelated issues all center on the eigenvalue problem of tensors. Symmetry groups, and many other constructs that simplify analyses, cannot be understood without a thorough understanding of the eigenvalue problem.
โข At this stage of our study of Tensor Algebra, we shall go through a simplified study of the eigenvalue problem. This study will reward any diligent effort. The converse is also true. A superficial understanding of the Eigenvalue problem will cost you dearly.
Recall that a tensor 𝑻 is a linear transformation: for 𝒖 ∈ V, the mapping
𝑻: V → V
states that ∃ 𝒗 ∈ V such that 𝑻𝒖 ≡ 𝑻(𝒖) = 𝒗.
Generally, 𝒖 and its image 𝒗 are independent vectors for an arbitrary tensor 𝑻. The eigenvalue problem considers the special case when there is a linear dependence between 𝒖 and 𝒗.
The Eigenvalue Problem
Here the image is 𝒗 = λ𝒖, where λ ∈ R:
𝑻𝒖 = λ𝒖
A vector 𝒖 that satisfies the above equation, if one can be found, is called an eigenvector, while the scalar λ is its corresponding eigenvalue.
The eigenvalue problem examines the existence of the eigenvalue and the corresponding eigenvector as well as their consequences.
Eigenvalue Problem
In order to obtain such solutions, it is useful to write out this equation in component form:
T_ij u_j 𝒆_i = λ u_i 𝒆_i
so that,
(T_ij − λδ_ij) u_j 𝒆_i = 𝒐
the zero vector. Each component must vanish identically, so that we can write
(T_ij − λδ_ij) u_j = 0
This is a homogeneous system of linear equations; it can have a nontrivial solution 𝒖 ≠ 𝒐 only if the determinant
|T_ij − λδ_ij|
vanishes identically. Which, when written out in full, yields,
| T₁₁ − λ   T₁₂      T₁₃    |
| T₂₁      T₂₂ − λ   T₂₃    | = 0
| T₃₁      T₃₂      T₃₃ − λ |
Expanding, we have,
λ³ − (T₁₁ + T₂₂ + T₃₃)λ²
+ (T₁₁T₂₂ + T₂₂T₃₃ + T₃₃T₁₁ − T₁₂T₂₁ − T₂₃T₃₂ − T₃₁T₁₃)λ
− (T₁₁T₂₂T₃₃ + T₁₂T₂₃T₃₁ + T₁₃T₂₁T₃₂ − T₃₁T₂₂T₁₃ − T₃₂T₂₃T₁₁ − T₃₃T₂₁T₁₂) = 0
or
λ³ − I₁λ² + I₂λ − I₃ = 0
where I₁ = tr 𝑻, I₂ = ½(tr²𝑻 − tr 𝑻²) and I₃ = det 𝑻.
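The claim that these coefficients are the principal invariants can be sketched numerically (an addition, not part of the slides); NumPy's `np.poly`, given a square matrix, returns the coefficients of its characteristic polynomial det(λ𝑰 − 𝑻):

```python
import numpy as np

rng = np.random.default_rng(5)
T = rng.standard_normal((3, 3))

I1 = np.trace(T)
I2 = 0.5 * (np.trace(T) ** 2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# Coefficients of det(lam I - T) = lam^3 - I1 lam^2 + I2 lam - I3
coeffs = np.poly(T)
```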
This is the characteristic equation for the tensor 𝑻. From here we are able, in the best cases, to find the three eigenvalues. Each of these can be used to obtain the corresponding eigenvector.
The above coefficients are the same invariants we have seen earlier!
Principal Invariants Again
A tensor 𝑻 is positive definite if for all 𝒖 ≠ 𝒐 in V, 𝒖 ⋅ 𝑻𝒖 > 0.
It is easy to show that the eigenvalues of a symmetric, positive definite tensor are all greater than zero. (HW: Show this, and its converse: a symmetric tensor whose eigenvalues are all greater than zero is positive definite. Hint: use the spectral decomposition.)
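A numerical sketch of the claim (not part of the slides): construct a symmetric positive definite tensor and confirm that its eigenvalues, and the quadratic form, are positive:

```python
import numpy as np

rng = np.random.default_rng(6)
B = rng.standard_normal((3, 3))
T = B @ B.T + np.eye(3)              # symmetric and positive definite by construction

eigenvalues = np.linalg.eigvalsh(T)  # eigvalsh: eigenvalues of a symmetric matrix
u = rng.standard_normal(3)
quad = u @ T @ u                     # u . T u
```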
Positive Definite Tensors
We now state without proof (see Dill for proof) the important Cayley-Hamilton theorem: every tensor satisfies its own characteristic equation. That is, the characteristic equation not only applies to the eigenvalues but must be satisfied by the tensor 𝑻 itself. This means,
𝑻³ − I₁𝑻² + I₂𝑻 − I₃𝑰 = 𝑶
is also valid.
This fact is used in continuum mechanics to obtain the spectral decomposition of important material and spatial tensors.
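A quick numerical sketch of the theorem (an addition, not part of the original slides):

```python
import numpy as np

rng = np.random.default_rng(7)
T = rng.standard_normal((3, 3))

I1 = np.trace(T)
I2 = 0.5 * (np.trace(T) ** 2 - np.trace(T @ T))
I3 = np.linalg.det(T)

# T^3 - I1 T^2 + I2 T - I3 I should vanish identically
residual = T @ T @ T - I1 * (T @ T) + I2 * T - I3 * np.eye(3)
```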
Cayley-Hamilton Theorem
It is easy to show that when the tensor is symmetric, its three eigenvalues are all real. When they are distinct, corresponding eigenvectors are orthogonal. It is therefore possible to create a basis for the tensor with an orthonormal system based on the normalized eigenvectors. This leads to what is called a spectral decomposition of a symmetric tensor in terms of a coordinate system formed by its eigenvectors:
𝑻 = Σ_{i=1}^{3} λ_i 𝒏_i⊗𝒏_i
where 𝒏_i is the normalized eigenvector corresponding to the eigenvalue λ_i.
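The decomposition can be sketched with `np.linalg.eigh`, which returns the eigenvalues and orthonormal eigenvectors of a symmetric matrix (an addition, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(8)
B = rng.standard_normal((3, 3))
T = B + B.T                      # a symmetric tensor

lam, V = np.linalg.eigh(T)       # columns of V are the normalized eigenvectors

# Rebuild T as the sum of lambda_i n_i ⊗ n_i
T_spectral = sum(lam[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
```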
Spectral Decomposition
The above spectral decomposition is a special case where the eigenbasis forms an Orthonormal Basis. Clearly, all symmetric tensors are diagonalizable.
Multiplicity of roots, when it occurs, robs this representation of its uniqueness because two or more coefficients of the eigenbasis are then the same.
The uniqueness is recoverable by the ingenious device of eigenprojection.
Multiplicity of Roots
Case 1: All roots equal.
Since the three normalized eigenvectors form an orthonormal basis, the sum of their dyads 𝒏_i⊗𝒏_i is the identity tensor 𝑰. The unique spectral representation therefore becomes
𝑻 = Σ_{i=1}^{3} λ_i 𝒏_i⊗𝒏_i = λ Σ_{i=1}^{3} 𝒏_i⊗𝒏_i = λ𝑰
since λ₁ = λ₂ = λ₃ = λ in this case.
Eigenprojectors
Case 2: Two roots equal: λ₁ distinct while λ₂ = λ₃.
In this case, 𝑻 = λ₁ 𝒏₁⊗𝒏₁ + λ₂(𝑰 − 𝒏₁⊗𝒏₁)
since λ₂ = λ₃ in this case.
The eigenspace of the tensor is made up of the projectors: 𝑷₁ = 𝒏₁⊗𝒏₁
and 𝑷₂ = 𝑰 − 𝒏₁⊗𝒏₁
Eigenprojectors
The eigenprojectors in all cases are based on the normalized eigenvectors of the tensor. They constitute the eigenspace even in the case of repeated roots. They can easily be shown to be:
1. Idempotent: 𝑷_i 𝑷_i = 𝑷_i (no sum)
2. Orthogonal: 𝑷_i 𝑷_j = 𝑶, i ≠ j (the annihilator)
3. Complete: Σ_{i=1}^{N} 𝑷_i = 𝑰 (the identity)
Eigenprojectors
For symmetric tensors (with real eigenvalues and, consequently, a defined spectral form in all cases), the tensor equivalent of real functions can easily be defined.
Transcendental as well as other functions of tensors are defined by maps
𝒇: Sym → Sym
taking a symmetric tensor to a symmetric tensor. The latter is given in spectral form by
𝒇(𝑻) ≡ Σ_{i=1}^{3} f(λ_i) 𝒏_i⊗𝒏_i
Tensor Functions
where f(λ_i) is the relevant real function of the i-th eigenvalue of the tensor 𝑻.
Whenever the tensor is symmetric, for any map f: R → R, there exists 𝒇: Sym → Sym as defined above. The tensor function is defined uniquely through its spectral representation.
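For instance (a sketch added here, not part of the original slides), the tensor square root of a symmetric positive definite tensor follows from its spectral form with f(λ) = √λ:

```python
import numpy as np

rng = np.random.default_rng(9)
B = rng.standard_normal((3, 3))
C = B @ B.T + np.eye(3)          # symmetric positive definite

lam, V = np.linalg.eigh(C)       # spectral data of C

# f(C) with f = sqrt: U = sum sqrt(lambda_i) n_i ⊗ n_i
U = sum(np.sqrt(lam[i]) * np.outer(V[:, i], V[:, i]) for i in range(3))
```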
Tensor functions
Show that the principal invariants of a tensor 𝑺 satisfy
I_i(𝑸𝑺𝑸^T) = I_i(𝑺), i = 1, 2, 3:
rotations and orthogonal transformations do not change the invariants.
I₁(𝑸𝑺𝑸^T) = tr(𝑸𝑺𝑸^T) = tr(𝑸^T𝑸𝑺) = tr(𝑺) = I₁(𝑺)
I₂(𝑸𝑺𝑸^T) = ½[tr²(𝑸𝑺𝑸^T) − tr(𝑸𝑺𝑸^T𝑸𝑺𝑸^T)]
= ½[I₁²(𝑺) − tr(𝑸𝑺²𝑸^T)]
= ½[I₁²(𝑺) − tr(𝑸^T𝑸𝑺²)]
= ½[I₁²(𝑺) − tr(𝑺²)] = I₂(𝑺)
I₃(𝑸𝑺𝑸^T) = det(𝑸𝑺𝑸^T) = det(𝑸^T𝑸𝑺) = det(𝑺) = I₃(𝑺)
Hence I_i(𝑸𝑺𝑸^T) = I_i(𝑺), i = 1, 2, 3.
Show that, for any tensor 𝑺, tr(𝑺²) = I₁²(𝑺) − 2I₂(𝑺) and
tr(𝑺³) = I₁³(𝑺) − 3I₁(𝑺)I₂(𝑺) + 3I₃(𝑺)
I₂(𝑺) = ½[tr²(𝑺) − tr(𝑺²)] = ½[I₁²(𝑺) − tr(𝑺²)]
So that, tr(𝑺²) = I₁²(𝑺) − 2I₂(𝑺).
By the Cayley-Hamilton theorem, 𝑺³ − I₁𝑺² + I₂𝑺 − I₃𝑰 = 𝑶
Taking the trace of the above equation, we can write that,
tr(𝑺³ − I₁𝑺² + I₂𝑺 − I₃𝑰) = tr(𝑺³) − I₁tr(𝑺²) + I₂tr(𝑺) − 3I₃ = 0
so that,
tr(𝑺³) = I₁(𝑺)tr(𝑺²) − I₂(𝑺)tr(𝑺) + 3I₃(𝑺)
= I₁(𝑺)[I₁²(𝑺) − 2I₂(𝑺)] − I₁(𝑺)I₂(𝑺) + 3I₃(𝑺)
= I₁³(𝑺) − 3I₁(𝑺)I₂(𝑺) + 3I₃(𝑺)
as required.
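Both trace identities can be sketched numerically (an addition, not part of the slides):

```python
import numpy as np

rng = np.random.default_rng(10)
S = rng.standard_normal((3, 3))

I1 = np.trace(S)
I2 = 0.5 * (np.trace(S) ** 2 - np.trace(S @ S))
I3 = np.linalg.det(S)

tr_S2 = np.trace(S @ S)        # should equal I1^2 - 2 I2
tr_S3 = np.trace(S @ S @ S)    # should equal I1^3 - 3 I1 I2 + 3 I3
```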
Suppose that 𝑼 and 𝑪 are symmetric, positive-definite tensors with 𝑼² = 𝑪. Write the invariants of 𝑪 in terms of those of 𝑼.
I₁(𝑪) = tr(𝑼²) = I₁²(𝑼) − 2I₂(𝑼)
By the Cayley-Hamilton theorem, 𝑼³ − I₁𝑼² + I₂𝑼 − I₃𝑰 = 𝑶
which contracted with 𝑼 gives, 𝑼⁴ − I₁𝑼³ + I₂𝑼² − I₃𝑼 = 𝑶
so that, 𝑼⁴ = I₁𝑼³ − I₂𝑼² + I₃𝑼
and
tr(𝑼⁴) = I₁tr(𝑼³) − I₂tr(𝑼²) + I₃tr(𝑼)
= I₁(𝑼)[I₁³(𝑼) − 3I₁(𝑼)I₂(𝑼) + 3I₃(𝑼)] − I₂(𝑼)[I₁²(𝑼) − 2I₂(𝑼)] + I₁(𝑼)I₃(𝑼)
= I₁⁴(𝑼) − 4I₁²(𝑼)I₂(𝑼) + 4I₁(𝑼)I₃(𝑼) + 2I₂²(𝑼)
But,
I₂(𝑪) = ½[I₁²(𝑪) − tr(𝑪²)] = ½[I₁²(𝑼²) − tr(𝑼⁴)]
= ½[(I₁²(𝑼) − 2I₂(𝑼))² − tr(𝑼⁴)]
= ½[I₁⁴(𝑼) − 4I₁²(𝑼)I₂(𝑼) + 4I₂²(𝑼) − (I₁⁴(𝑼) − 4I₁²(𝑼)I₂(𝑼) + 4I₁(𝑼)I₃(𝑼) + 2I₂²(𝑼))]
The terms I₁⁴(𝑼) and 4I₁²(𝑼)I₂(𝑼) cancel out, so that
I₂(𝑪) = I₂²(𝑼) − 2I₁(𝑼)I₃(𝑼)
as required. Finally,
I₃(𝑪) = det(𝑪) = det(𝑼²) = (det 𝑼)² = I₃²(𝑼)
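These three relations can be sketched numerically (an addition, not part of the original slides):

```python
import numpy as np

def invariants(A):
    # Principal invariants I1, I2, I3 of a tensor A
    return (np.trace(A),
            0.5 * (np.trace(A) ** 2 - np.trace(A @ A)),
            np.linalg.det(A))

rng = np.random.default_rng(11)
B = rng.standard_normal((3, 3))
U = B @ B.T + np.eye(3)          # symmetric positive definite
C = U @ U

I1U, I2U, I3U = invariants(U)
I1C, I2C, I3C = invariants(C)
```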