
TENSORS IN CARTESIAN COORDINATES (Lectures 1−4)

Prepared by Dr. U.S. Dixit, IIT Guwahati for ME501 students, July 2009

You are already familiar with the concept of scalars and vectors in physics. For example, mass is a scalar quantity and velocity is a vector quantity. A scalar quantity has only magnitude and no direction, whereas a vector quantity has both magnitude and direction. Mathematically, we can say that a scalar quantity is fully described by just one component, i.e., its magnitude. On the other hand, a vector quantity is described by two components in two-dimensional space and three components in three-dimensional space. Remember that in 2-D polar coordinates, the two components are r and θ, r indicating the magnitude of the vector and θ the direction. The coordinates r and θ can also be expressed in terms of the Cartesian coordinates x and y. Hence, in Cartesian coordinates, a vector can be specified by its x and y components. However, unlike polar coordinates, x and y do not explicitly give the direction and magnitude. You can of course calculate the magnitude and direction by the following formulae:

$$r = \sqrt{x^2 + y^2}, \quad (1)$$

$$\theta = \tan^{-1}(y/x). \quad (2)$$

In a similar manner, in three-dimensional space a vector needs three components. These components fully specify the direction and the magnitude. This becomes obvious if we consider a spherical coordinate system having coordinates r, φ and θ. Here, the first coordinate specifies the magnitude and the other two the direction. We can transform these coordinates to x, y and z in the Cartesian system and thus specify the vector, although these components will not directly give you the magnitude and direction.

It is easy to understand the concept of a vector with the example of a position vector. A position vector of a point is actually the displacement needed for reaching that point from a reference point (say the origin). Thus, for the point (x, y, z), the position vector with respect to the origin is written as $x\hat{i} + y\hat{j} + z\hat{k}$, where $\hat{i}$, $\hat{j}$ and $\hat{k}$ are the unit vectors in the three coordinate directions. All vectors should behave like a position vector. In fact, the other vectors are obtained by scalar operations on position vectors. For example, the velocity can be obtained by differentiating the position vector with respect to time, and the acceleration by differentiating it twice. The force is mass multiplied by acceleration, and thus a vector. In a more formal way, we state that addition and subtraction of two vectors result in a vector. The multiplication (and division) of a vector by a scalar also results in a vector. As differentiation of a vector involves the subtraction of two vectors divided by a time interval, it also results in a vector. We may say that all vectors are basically derived from a position vector and preserve the characteristics of a position vector.

In two-dimensional space, a position vector is specified by two coordinates (with respect to a reference point, say the origin) and in three-dimensional space by three coordinates. Similarly, in a four-dimensional space, it is specified by four coordinates, although it is difficult to visualize a four-dimensional space. In all dimensions, a scalar is specified by only one component. Thus, we can say that in an n-dimensional space, a scalar is specified by $n^0$ components and a vector by $n^1$ components. However, this is not sufficient. A scalar should be invariant to the reference frame. If a coordinate system


is rotated, the scalar does not change. On the other hand, the components of a vector transform with a particular transformation rule under the rotation of the coordinate system. Let us not worry about the exact rule at this moment. However, it is worth realizing that the transformation rule for all vectors is the same as that for a position vector. Thus, for a quantity to be called a vector, the following conditions should be met: (1) it should have n components in an n-dimensional space; (2) its components should transform in a particular fashion under the rotation of the coordinate system. Some authors present the second condition as "vectors should follow the parallelogram law of addition". Thus, finite rotations, although having three components in a three-dimensional space, are not vectors, as they do not follow the parallelogram law of addition. Finite rotations will also not transform like vectors under the rotation of the coordinate system.

A scalar is also called a tensor of rank 0. Similarly, a vector is also called a tensor of rank 1. Is there a tensor of rank 2? Yes, tensors of rank 2 are commonly used in the physical world; for example, the stress at a point is a tensor of rank 2, or a second-order tensor. Physically, stress is the force per unit area. Like force, it has a direction and a magnitude, but both depend on the plane under consideration. This means that the direction and magnitude of the stress will be different on different planes passing through the same point. In two-dimensional space, the stress has 4 components, and in three-dimensional space it has 9 components. Besides, the stress components should follow a particular transformation rule under the rotation of the coordinate system. We shall study the properties of scalars, vectors and tensors in more detail, but before that, index notation and related matters will be described. Index notation is very helpful in representing lengthy expressions in a concise form. If one is not afraid of using unabridged notation, there is no need to study index notation. However, you will soon see that without index notation certain mathematical expressions look horrible, and therefore the study of index notation is a must.

Index Notation

Suppose a vector has 3 components: a1, a2, a3. We can represent the components of the vector in the form $a_i$, where i = 1, 2, 3. It is understood that in a three-dimensional space a vector has 3 components and in two-dimensional space it has 2 components. Hence, it is enough to write the components of the vector as $a_i$. Similarly, $\sigma_{ij}$ can represent the components of a tensor. For a three-dimensional space, i and j both vary from 1 to 3. Hence, $\sigma_{ij}$ represents 9 components depending on the values of the indices i and j. The set of 9 components can be represented as a 3-by-3 matrix in unabridged notation and by $[\sigma_{ij}]$ in abridged notation. Thus,

$$[\sigma_{ij}] = \begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix}. \quad (3)$$

In two-dimensional space,

$$[\sigma_{ij}] = \begin{bmatrix} \sigma_{11} & \sigma_{12} \\ \sigma_{21} & \sigma_{22} \end{bmatrix}. \quad (4)$$


In a two-dimensional space all the indices vary from 1 to 2, and in a three-dimensional space they vary from 1 to 3. This is called the range convention. The expression $a_i + b_i = 0$ in three-dimensional space means the following three equations:

$$a_1 + b_1 = 0, \quad (5a)$$

$$a_2 + b_2 = 0, \quad (5b)$$

$$a_3 + b_3 = 0. \quad (5c)$$

What does the expression $a_i b_i = 0$ mean? Does it mean the following equations?

$$a_1 b_1 = 0; \quad a_2 b_2 = 0; \quad a_3 b_3 = 0. \quad (6)$$

No. It does not mean that. It means the following single expression:

$$a_1 b_1 + a_2 b_2 + a_3 b_3 = 0. \quad (7)$$

The rule is that if an index is repeated in a term, it means the summation of the terms obtained by assigning to the index the values over its range. This is called Einstein's summation convention. Thus, $a_{ik} x_k = b_i$ means the following expression:

$$a_{i1} x_1 + a_{i2} x_2 + a_{i3} x_3 = b_i. \quad (8)$$

How is it obtained? As k is a repeated index in the term $a_{ik} x_k$, we obtain three terms, $a_{i1} x_1$, $a_{i2} x_2$ and $a_{i3} x_3$, by varying k over its range, i.e., from 1 to 3. All these terms are added. In the process, the index k disappears, while the index i remains. Observe that the expression $a_{ij} x_j = b_i$ would also provide Eq. (8). Thus, the repeated index may be replaced by any other index. For this reason, it is called a dummy index, while a non-repeated index is called a free index. Equation (8) further implies the following three equations:

$$a_{11} x_1 + a_{12} x_2 + a_{13} x_3 = b_1; \quad a_{21} x_1 + a_{22} x_2 + a_{23} x_3 = b_2; \quad a_{31} x_1 + a_{32} x_2 + a_{33} x_3 = b_3. \quad (9)$$

Note that in an expression, each term should have the same free indices. Thus, the following are valid expressions:

$$\frac{\partial \sigma_{ij}}{\partial x_j} + b_i = 0, \quad (10)$$

$$\sigma_{ij} = C_{ijkl}\,\varepsilon_{kl}, \quad (11)$$

$$p = \sigma_{ij} n_i n_j. \quad (12)$$

In Eq. (10), j is the dummy index and i is the free index. Note that the free index i is present in each term. In Eq. (11), i and j are free indices, which are present in each term, whilst k and l are dummy indices. Note that

$$C_{ijkl}\varepsilon_{kl} = C_{ij11}\varepsilon_{11} + C_{ij12}\varepsilon_{12} + C_{ij13}\varepsilon_{13} + C_{ij21}\varepsilon_{21} + C_{ij22}\varepsilon_{22} + C_{ij23}\varepsilon_{23} + C_{ij31}\varepsilon_{31} + C_{ij32}\varepsilon_{32} + C_{ij33}\varepsilon_{33}. \quad (13)$$

Thus, Eq. (11) can be written as

$$\sigma_{ij} = C_{ij11}\varepsilon_{11} + C_{ij12}\varepsilon_{12} + C_{ij13}\varepsilon_{13} + C_{ij21}\varepsilon_{21} + C_{ij22}\varepsilon_{22} + C_{ij23}\varepsilon_{23} + C_{ij31}\varepsilon_{31} + C_{ij32}\varepsilon_{32} + C_{ij33}\varepsilon_{33}. \quad (14)$$

The expression in Eq. (14) represents 9 equations that can be obtained by varying i and j from 1 to 3. In Eq. (12), i and j are dummy indices. There are no free indices in this expression.
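The contractions above are exactly what numerical Einstein-summation routines compute. As a quick check (not part of the original notes), the following NumPy sketch, with made-up values for $a_{ij}$, $x_j$, $C_{ijkl}$ and $\varepsilon_{kl}$, verifies that Eq. (8) is just a matrix-vector product and spot-checks one component of Eq. (11) against the nine-term expansion of Eq. (13):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=(3, 3))     # a_ij, arbitrary illustrative values
x = rng.normal(size=3)          # x_j

# Eq. (8): the repeated (dummy) index j is summed over its range, leaving free index i.
b = np.einsum('ij,j->i', a, x)
assert np.allclose(b, a @ x)    # the contraction is just a matrix-vector product

# Eq. (11): sigma_ij = C_ijkl eps_kl, a double contraction over the dummy indices k, l.
C = rng.normal(size=(3, 3, 3, 3))
eps = rng.normal(size=(3, 3))
sigma = np.einsum('ijkl,kl->ij', C, eps)

# Spot-check one component against the nine-term expansion of Eq. (13).
assert np.isclose(sigma[0, 1],
                  sum(C[0, 1, k, l] * eps[k, l] for k in range(3) for l in range(3)))
```

Note that the `einsum` subscript string mirrors the index notation directly: repeated letters are summed, the remaining letters are the free indices of the result.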


The following expressions are invalid:

$$a_i b_j = c_i, \quad (15)$$

$$\sigma_{ij} + \varepsilon_{kl} = 0, \quad (16)$$

$$a_i b_i c_i = 0. \quad (17)$$

In Eq. (15), the first term contains two free indices, i and j, whilst the second term contains only one free index, i. Thus, all the terms do not have the same free indices. Hence, the expression is invalid. In Eq. (16), the first term has the free indices i and j, whilst the second term has the indices k and l. Hence, it is not a valid expression. A valid expression must have the same free indices in each term; for example, the following two expressions are valid:

$$\sigma_{ij} + \varepsilon_{ij} = 0; \quad \sigma_{kl} + \varepsilon_{kl} = 0. \quad (18)$$

In Eq. (17), the index i occurs three times in the first term. Hence, it is an invalid expression. In a term, an index can occur only one or two times.

We also introduce the comma (,) notation here. A comma in the subscript indicates differentiation with respect to the corresponding coordinate. Thus,

$$a_{i,j} = \frac{\partial a_i}{\partial x_j}. \quad (19)$$

If φ is a scalar function of the coordinates, then

$$\phi_{,i} = \frac{\partial \phi}{\partial x_i}. \quad (20)$$

In three-dimensional space, the index i can take the values 1, 2 and 3. Thus,

$$\{\phi_{,i}\} = \begin{Bmatrix} \partial\phi/\partial x_1 \\ \partial\phi/\partial x_2 \\ \partial\phi/\partial x_3 \end{Bmatrix} = \text{gradient of } \phi. \quad (21)$$

Note that $\phi_{,i}$ indicates one component of the gradient vector. Suppose $v_i$ denotes a component of a vector. The component $v_{i,j}$ denotes its differentiation with respect to a coordinate. Thus,

$$v_{i,j} = \frac{\partial v_i}{\partial x_j}. \quad (22)$$

As i and j vary from 1 to 3, $v_{i,j}$ can take on 9 values.

Example 1: Express $\sigma_{ij,j} + b_i = 0$ in unabridged form.

Solution: As per the comma notation,

$$\sigma_{ij,j} = \frac{\partial \sigma_{ij}}{\partial x_j}. \quad (23)$$

In the above expression j occurs twice; hence it is a dummy index. By the summation convention,

$$\sigma_{ij,j} = \frac{\partial \sigma_{i1}}{\partial x_1} + \frac{\partial \sigma_{i2}}{\partial x_2} + \frac{\partial \sigma_{i3}}{\partial x_3}. \quad (24)$$

Hence, the expression $\sigma_{ij,j} + b_i = 0$ means

$$\frac{\partial \sigma_{i1}}{\partial x_1} + \frac{\partial \sigma_{i2}}{\partial x_2} + \frac{\partial \sigma_{i3}}{\partial x_3} + b_i = 0. \quad (25)$$

In the above expression, i is a free index that can take on the values 1, 2 and 3. Hence, the expression represents the following three equations:

$$\frac{\partial \sigma_{11}}{\partial x_1} + \frac{\partial \sigma_{12}}{\partial x_2} + \frac{\partial \sigma_{13}}{\partial x_3} + b_1 = 0, \quad (26a)$$

$$\frac{\partial \sigma_{21}}{\partial x_1} + \frac{\partial \sigma_{22}}{\partial x_2} + \frac{\partial \sigma_{23}}{\partial x_3} + b_2 = 0, \quad (26b)$$

$$\frac{\partial \sigma_{31}}{\partial x_1} + \frac{\partial \sigma_{32}}{\partial x_2} + \frac{\partial \sigma_{33}}{\partial x_3} + b_3 = 0. \quad (26c)$$
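The equivalence between the contracted form of Eq. (25) and the three unabridged equations can be checked numerically. In the sketch below (not part of the original notes), `grad[i, j]` is just an array of arbitrary stand-in numbers for the gradients $\partial\sigma_{ij}/\partial x_j$ at a point, not a real stress field, and $b_i$ is chosen so that equilibrium holds:

```python
import numpy as np

# grad[i, j] stands in for the partial derivative of sigma_ij with respect to x_j,
# filled with arbitrary illustrative numbers.
rng = np.random.default_rng(0)
grad = rng.normal(size=(3, 3))
b = -grad.sum(axis=1)          # choose b_i so that sigma_ij,j + b_i = 0 holds

# Eq. (25): the contracted form, one equation per free index i.
residual = grad.sum(axis=1) + b
assert np.allclose(residual, 0.0)

# Eqs (26a)-(26c): the same three equations written out term by term.
for i in range(3):
    assert np.isclose(grad[i, 0] + grad[i, 1] + grad[i, 2] + b[i], 0.0)
```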

Example 2: Prove that $u_{i,j}$ can be decomposed into two components such that

$$u_{i,j} = \varepsilon_{ij} + w_{ij}, \quad (27)$$

where

$$\varepsilon_{ij} = \frac{1}{2}\left(u_{i,j} + u_{j,i}\right) \quad \text{and} \quad w_{ij} = \frac{1}{2}\left(u_{i,j} - u_{j,i}\right). \quad (28)$$

Further, prove that

$$\varepsilon_{ij} w_{ij} = 0. \quad (29)$$

Solution: Starting from the right-hand side,

$$\varepsilon_{ij} + w_{ij} = \frac{1}{2}\left(u_{i,j} + u_{j,i}\right) + \frac{1}{2}\left(u_{i,j} - u_{j,i}\right) = u_{i,j}. \quad (30)$$

Hence, Eq. (27) is proved. Now,

$$\varepsilon_{ji} = \frac{1}{2}\left(u_{j,i} + u_{i,j}\right) = \varepsilon_{ij} \quad (31)$$

and

$$w_{ji} = \frac{1}{2}\left(u_{j,i} - u_{i,j}\right) = -w_{ij}. \quad (32)$$

In the expression $\varepsilon_{ij} w_{ij}$, both i and j are dummy indices; hence they can be replaced by any other indices. Here, we replace i by j and j by i and use Eqs. (31) and (32). Hence,

$$\varepsilon_{ij} w_{ij} = \varepsilon_{ji} w_{ji} = -\varepsilon_{ij} w_{ij}. \quad (33)$$

Thus,

$$2\varepsilon_{ij} w_{ij} = 0 \quad \text{or} \quad \varepsilon_{ij} w_{ij} = 0. \quad (34)$$

Example 3: Consider the following system of equations:


$$\begin{bmatrix} \sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\ \sigma_{yx} & \sigma_{yy} & \sigma_{yz} \\ \sigma_{zx} & \sigma_{zy} & \sigma_{zz} \end{bmatrix} \begin{Bmatrix} n_x \\ n_y \\ n_z \end{Bmatrix} = \begin{Bmatrix} t_x \\ t_y \\ t_z \end{Bmatrix}. \quad (35)$$

Express it using index notation.

Solution: First, instead of x, y, z, we use 1, 2, 3. Thus, Eq. (35) is written as

$$\begin{bmatrix} \sigma_{11} & \sigma_{12} & \sigma_{13} \\ \sigma_{21} & \sigma_{22} & \sigma_{23} \\ \sigma_{31} & \sigma_{32} & \sigma_{33} \end{bmatrix} \begin{Bmatrix} n_1 \\ n_2 \\ n_3 \end{Bmatrix} = \begin{Bmatrix} t_1 \\ t_2 \\ t_3 \end{Bmatrix}. \quad (36)$$

The above equation represents the following three equations:

$$t_1 = \sigma_{11} n_1 + \sigma_{12} n_2 + \sigma_{13} n_3, \quad (37a)$$

$$t_2 = \sigma_{21} n_1 + \sigma_{22} n_2 + \sigma_{23} n_3, \quad (37b)$$

$$t_3 = \sigma_{31} n_1 + \sigma_{32} n_2 + \sigma_{33} n_3. \quad (37c)$$

Using the free index i, these equations can be represented by

$$t_i = \sigma_{i1} n_1 + \sigma_{i2} n_2 + \sigma_{i3} n_3. \quad (38)$$

Using the dummy index j, the above equation can be written as

$$t_i = \sigma_{ij} n_j. \quad (39)$$
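Both worked examples above lend themselves to a short numerical check (a sketch added here, not part of the original notes; all numbers are illustrative). The first part verifies the decomposition and orthogonality of Example 2; the second verifies that the indexed traction relation of Example 3 is just a matrix-vector product:

```python
import numpy as np

rng = np.random.default_rng(1)
u_grad = rng.normal(size=(3, 3))   # u_grad[i, j] stands in for u_{i,j}

# Example 2: symmetric/antisymmetric decomposition, Eq. (28).
eps = 0.5 * (u_grad + u_grad.T)    # epsilon_ij, symmetric part
w = 0.5 * (u_grad - u_grad.T)      # w_ij, antisymmetric part
assert np.allclose(eps + w, u_grad)                    # Eq. (27)
assert np.isclose(np.einsum('ij,ij->', eps, w), 0.0)   # Eq. (29)

# Example 3: t_i = sigma_ij n_j, Eq. (39), is a matrix-vector product.
sigma = rng.normal(size=(3, 3))
n = np.array([0.0, 0.6, 0.8])      # an illustrative unit normal
t = np.einsum('ij,j->i', sigma, n)
assert np.allclose(t, sigma @ n)
```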

Kronecker-Delta and Levi-Civita Symbols

Now, we introduce two important symbols. The first is a symbol introduced by the German mathematician Leopold Kronecker. This is called the Kronecker δ symbol. It has two indices attached to δ (as subscripts). The value of $\delta_{ij}$ is 1 if both indices take the same value and 0 if they take different values. Thus,

$$\delta_{11} = \delta_{22} = \delta_{33} = 1, \quad \delta_{12} = \delta_{13} = \delta_{21} = \delta_{23} = \delta_{31} = \delta_{32} = 0. \quad (40)$$

Note that $\delta_{ij}$ represents the elements of an identity matrix, i.e.,

$$[\delta_{ij}] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}. \quad (41)$$

Note that $\delta_{11}$ is 1, but $\delta_{ii}$ is not equal to 1. As i is a repeated (dummy) index in $\delta_{ii}$,

$$\delta_{ii} = \delta_{11} + \delta_{22} + \delta_{33} = 1 + 1 + 1 = 3. \quad (42)$$

The Kronecker delta has the following substitution properties:

$$\text{(i)} \ a_i \delta_{ij} = a_j, \quad \text{(ii)} \ a_{ij}\delta_{jk} = a_{ik}, \quad \text{(iii)} \ \delta_{ij}\delta_{jk} = \delta_{ik}. \quad (43)$$

These can be proved easily from the definition of the Kronecker delta. Consider the first expression. It is clear that in the first term i is the dummy index and j is the free index. Thus,

$$a_i \delta_{ij} = a_1 \delta_{1j} + a_2 \delta_{2j} + a_3 \delta_{3j}. \quad (44)$$

In the above expression, the right-hand side is $a_1$ for j = 1, $a_2$ for j = 2 and $a_3$ for j = 3. Hence, it can be written as $a_j$. Thus, $a_i \delta_{ij} = a_j$. In the same way, the other relations can be proved.

Another important symbol is the Levi-Civita ε symbol, named after the Italian mathematician Tullio Levi-Civita. This symbol is also referred to as the permutation


symbol, the alternating symbol or the alternator. It has three indices attached to it and in general form is written as $\varepsilon_{ijk}$. The permutation symbol is non-zero only if i, j and k all take different values. For non-zero values of $\varepsilon_{ijk}$, the index i can take 3 values; with each value of i, j can take 2 values; and with the values of i and j fixed, k can take only one value (different from i and j). Thus, in six cases $\varepsilon_{ijk}$ can be non-zero. The convention is to take the value of $\varepsilon_{ijk}$ equal to 1 if the indices take their values in cyclic order and −1 if the indices take their values in acyclic order. Thus,

$$\varepsilon_{123} = \varepsilon_{231} = \varepsilon_{312} = 1 \quad \text{and} \quad \varepsilon_{132} = \varepsilon_{213} = \varepsilon_{321} = -1. \quad (45)$$

The remaining 21 possible values of $\varepsilon_{ijk}$ are zero. The permutation symbol is very helpful in vector algebra. Consider the three unit vectors $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$ along the x-, y- and z-directions, respectively. The cross products of the unit vectors may be represented as

$$\mathbf{e}_i \times \mathbf{e}_j = \varepsilon_{ijk}\,\mathbf{e}_k. \quad (46)$$

The above expression shows that if i and j take the same value, the cross product is zero, i.e., the cross product of a unit vector with itself is zero. If i is 1 and j is 2, then

$$\mathbf{e}_1 \times \mathbf{e}_2 = \varepsilon_{12k}\,\mathbf{e}_k = \varepsilon_{121}\mathbf{e}_1 + \varepsilon_{122}\mathbf{e}_2 + \varepsilon_{123}\mathbf{e}_3 = 0 + 0 + \mathbf{e}_3 = \mathbf{e}_3. \quad (47)$$

Similarly, one can find the other cross products. It is interesting to note that the values of the nine different cross products are given by just one expression, Eq. (46). Because the permutation symbol depends on the order of the indices, the following relation holds good:

$$\varepsilon_{ijk} = \varepsilon_{jki} = \varepsilon_{kij} = -\varepsilon_{ikj} = -\varepsilon_{jik} = -\varepsilon_{kji}. \quad (48)$$

The following relation is called the ε-δ identity or the permutation identity:

$$\varepsilon_{ijk}\varepsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}. \quad (49)$$

It can be proved in the following way. The left-hand side will be non-zero only if i, j, p, q are all different from k; also, i should be different from j, and p should be different from q. This implies two possibilities: (i) i is equal to p and j is equal to q, and (ii) i is equal to q and j is equal to p. As k is the dummy index,

$$\varepsilon_{ijk}\varepsilon_{pqk} = \varepsilon_{ij1}\varepsilon_{pq1} + \varepsilon_{ij2}\varepsilon_{pq2} + \varepsilon_{ij3}\varepsilon_{pq3}. \quad (50)$$

In the above expression, only one term on the right-hand side will be non-zero. If possibility (i) occurs, the value of the expression will be 1. If possibility (ii) occurs, the value will be −1. This is represented by the right-hand side of Eq. (49). Verify that if i = p and j = q, $\delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}$ becomes equal to 1, and if i = q and j = p, it becomes equal to −1.

Example 4: Prove that

$$\varepsilon_{ijk}\varepsilon_{pqr} = \delta_{ip}\left(\delta_{jq}\delta_{kr} - \delta_{jr}\delta_{kq}\right) + \delta_{iq}\left(\delta_{jr}\delta_{kp} - \delta_{jp}\delta_{kr}\right) + \delta_{ir}\left(\delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}\right). \quad (51)$$

Solution: First we shall prove that

$$\varepsilon_{ijk}\varepsilon_{pqr}\det(a_{ij}) = \begin{vmatrix} a_{ip} & a_{iq} & a_{ir} \\ a_{jp} & a_{jq} & a_{jr} \\ a_{kp} & a_{kq} & a_{kr} \end{vmatrix}, \quad (52)$$


where $\det(a_{ij})$ denotes the determinant of $[a_{ij}]$. We shall prove it by showing that the two sides are equal. It is easy to see that if at least two of i, j, k or two of p, q, r are equal, then both sides of Eq. (52) are zero. If i, j, k are different from one another and p, q, r are also different from one another, then the following two cases may arise. (1) Both i, j, k and p, q, r are cyclic, or both are acyclic. In that case, both sides of Eq. (52) are equal to $\det(a_{ij})$. (2) Between i, j, k and p, q, r, one group is cyclic and the other acyclic. In that case, both sides of Eq. (52) are equal to $-\det(a_{ij})$. Hence, proved. Now, let $a_{ij} = \delta_{ij}$. In that case, from Eq. (52):

$$\varepsilon_{ijk}\varepsilon_{pqr}\det(\delta_{ij}) = \begin{vmatrix} \delta_{ip} & \delta_{iq} & \delta_{ir} \\ \delta_{jp} & \delta_{jq} & \delta_{jr} \\ \delta_{kp} & \delta_{kq} & \delta_{kr} \end{vmatrix}. \quad (53)$$

Using the fact that $\det(\delta_{ij}) = 1$ and expanding the right-hand side of Eq. (53), we get Eq. (51).

Example 5: Use Eq. (51) to prove Eq. (49).

Solution: Replacing r by k in Eq. (51):

$$\begin{aligned} \varepsilon_{ijk}\varepsilon_{pqk} &= \delta_{ip}\left(\delta_{jq}\delta_{kk} - \delta_{jk}\delta_{kq}\right) + \delta_{iq}\left(\delta_{jk}\delta_{kp} - \delta_{jp}\delta_{kk}\right) + \delta_{ik}\left(\delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}\right) \\ &= \delta_{ip}\left(3\delta_{jq} - \delta_{jk}\delta_{kq}\right) + \delta_{iq}\left(\delta_{jk}\delta_{kp} - 3\delta_{jp}\right) + \delta_{ik}\left(\delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}\right). \end{aligned} \quad (54)$$

Using the substitution property of the Kronecker delta, the above expression is written as

$$\begin{aligned} \varepsilon_{ijk}\varepsilon_{pqk} &= \delta_{ip}\left(3\delta_{jq} - \delta_{jq}\right) + \delta_{iq}\left(\delta_{jp} - 3\delta_{jp}\right) + \left(\delta_{jp}\delta_{iq} - \delta_{jq}\delta_{ip}\right) \\ &= \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}. \end{aligned} \quad (55)$$

Hence, proved.

Example 6: Show that the determinant of a 3-by-3 matrix $[a_{ij}]$ can be expressed as

$$\det(a_{ij}) = \varepsilon_{pqr}\, a_{1p} a_{2q} a_{3r}. \quad (56)$$

Solution: The determinant of a 3-by-3 matrix is given by

$$\det(a_{ij}) = \begin{vmatrix} a_{11} & a_{12} & a_{13} \\ a_{21} & a_{22} & a_{23} \\ a_{31} & a_{32} & a_{33} \end{vmatrix}. \quad (57)$$

In expanded form,

$$\det(a_{ij}) = a_{11}\left(a_{22}a_{33} - a_{23}a_{32}\right) + a_{12}\left(a_{23}a_{31} - a_{21}a_{33}\right) + a_{13}\left(a_{21}a_{32} - a_{22}a_{31}\right), \quad (58)$$

or

$$\det(a_{ij}) = a_{11}a_{22}a_{33} + a_{12}a_{23}a_{31} + a_{13}a_{21}a_{32} - a_{11}a_{23}a_{32} - a_{12}a_{21}a_{33} - a_{13}a_{22}a_{31}. \quad (59)$$

Observing the above expression, it is seen that a typical term is $a_{1p}a_{2q}a_{3r}$. Further, when p, q, r are cyclic, the term is positive; otherwise it is negative. The expression contains a summation of six terms. Therefore, it can be written as $\varepsilon_{pqr} a_{1p} a_{2q} a_{3r}$, which is a summation of the six terms, p, q, r being the dummy indices.
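The symbol definitions and identities of this section are easy to verify by brute force. The sketch below (added here for illustration, not part of the original notes) builds $\delta_{ij}$ and $\varepsilon_{ijk}$ from their definitions, checks the substitution property of Eq. (43) and the trace of Eq. (42), verifies the ε-δ identity of Eq. (49) over all index values, and checks the determinant formula of Eq. (56) on an arbitrary matrix:

```python
import numpy as np

# delta_ij and the permutation symbol eps_ijk from their definitions
# (0-based indices in code correspond to 1-based indices in the text).
delta = np.eye(3)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # cyclic orders, Eq. (45)
    eps[i, k, j] = -1.0   # acyclic orders, Eq. (45)

# Substitution property (i) of Eq. (43) and the trace of Eq. (42).
a_vec = np.array([2.0, -1.0, 5.0])   # illustrative components a_i
assert np.allclose(np.einsum('i,ij->j', a_vec, delta), a_vec)   # a_i delta_ij = a_j
assert np.isclose(np.einsum('ii->', delta), 3.0)                # delta_ii = 3

# The eps-delta identity, Eq. (49): eps_ijk eps_pqk = d_ip d_jq - d_iq d_jp.
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = (np.einsum('ip,jq->ijpq', delta, delta)
       - np.einsum('iq,jp->ijpq', delta, delta))
assert np.allclose(lhs, rhs)

# Eq. (56): det(a) = eps_pqr a_1p a_2q a_3r, on an arbitrary 3x3 matrix.
a = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])
det_eps = np.einsum('pqr,p,q,r->', eps, a[0], a[1], a[2])
assert np.isclose(det_eps, np.linalg.det(a))
```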


Transformation Rule for Vector Components under the Rotation of the Cartesian Coordinate System

Consider two coordinate systems, x, y, z and x′, y′, z′, which are rotated with respect to each other. The unit vectors along x, y and z are $\mathbf{e}_1$, $\mathbf{e}_2$ and $\mathbf{e}_3$, and the unit vectors along x′, y′ and z′ are $\mathbf{e}'_1$, $\mathbf{e}'_2$ and $\mathbf{e}'_3$. In the x, y, z system, a vector v can be represented as

$$\mathbf{v} = v_1\mathbf{e}_1 + v_2\mathbf{e}_2 + v_3\mathbf{e}_3 = v_p\mathbf{e}_p. \quad (60)$$

The same vector is represented in the x′, y′, z′ system as

$$\mathbf{v} = v'_1\mathbf{e}'_1 + v'_2\mathbf{e}'_2 + v'_3\mathbf{e}'_3 = v'_p\mathbf{e}'_p. \quad (61)$$

Note that the vector remains the same in both systems. However, its components change. We want to determine the components in one system as a function of the components in the other system. For this purpose we first write

$$v_p\mathbf{e}_p = v'_q\mathbf{e}'_q. \quad (62)$$

Note that as p is the dummy index in Eq. (61), it can be replaced by q. Taking the dot product of both sides with the unit vector $\mathbf{e}_r$, we get

$$v_p\,\mathbf{e}_p \cdot \mathbf{e}_r = v'_q\,\mathbf{e}'_q \cdot \mathbf{e}_r. \quad (63)$$

Now,

$$\mathbf{e}_p \cdot \mathbf{e}_r = \delta_{pr}, \quad (64)$$

which means that the dot product of identical unit vectors is 1 and of non-identical unit vectors is 0. The dot product $\mathbf{e}'_q \cdot \mathbf{e}_r$ is equal to the cosine of the angle between the qth axis in the x′, y′, z′ system and the rth axis in the x, y, z system, and we denote it by $\alpha_{qr}$. Hence, Eq. (63) can be written as

$$v_p\delta_{pr} = v'_q\alpha_{qr}. \quad (65)$$

Using the substitution property of the Kronecker delta and the fact that scalar quantities commute, Eq. (65) can be written as

$$v_r = \alpha_{qr} v'_q. \quad (66)$$

In expanded form,

$$v_r = \alpha_{1r} v'_1 + \alpha_{2r} v'_2 + \alpha_{3r} v'_3. \quad (67)$$

The reader may be familiar with the above expression, which states that the component of a vector along the r direction is equal to the sum of the projections of its three orthogonal components along the r direction. The set of direction cosines $[\alpha_{qr}]$, a total of 9 components, is as follows in expanded form:

$$[\alpha_{qr}] = \begin{bmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} \end{bmatrix}. \quad (68)$$

You may note that the rows of the above matrix correspond to the axes in the x′, y′, z′ system and the columns correspond to the axes in the x, y, z system. The entries in the matrix are the cosines of the angles between the two corresponding axes. From three-dimensional coordinate geometry, recall that


$$\begin{aligned} &\alpha_{11}^2 + \alpha_{12}^2 + \alpha_{13}^2 = 1; \quad \alpha_{21}^2 + \alpha_{22}^2 + \alpha_{23}^2 = 1; \quad \alpha_{31}^2 + \alpha_{32}^2 + \alpha_{33}^2 = 1; \\ &\alpha_{11}\alpha_{21} + \alpha_{12}\alpha_{22} + \alpha_{13}\alpha_{23} = 0; \quad \alpha_{31}\alpha_{21} + \alpha_{32}\alpha_{22} + \alpha_{33}\alpha_{23} = 0; \quad \alpha_{31}\alpha_{11} + \alpha_{32}\alpha_{12} + \alpha_{33}\alpha_{13} = 0; \\ &\alpha_{11}^2 + \alpha_{21}^2 + \alpha_{31}^2 = 1; \quad \alpha_{12}^2 + \alpha_{22}^2 + \alpha_{32}^2 = 1; \quad \alpha_{13}^2 + \alpha_{23}^2 + \alpha_{33}^2 = 1; \\ &\alpha_{11}\alpha_{12} + \alpha_{21}\alpha_{22} + \alpha_{31}\alpha_{32} = 0; \quad \alpha_{12}\alpha_{13} + \alpha_{22}\alpha_{23} + \alpha_{32}\alpha_{33} = 0; \quad \alpha_{11}\alpha_{13} + \alpha_{21}\alpha_{23} + \alpha_{31}\alpha_{33} = 0. \end{aligned} \quad (69)$$

Thus, the matrix given by Eq. (68) is an orthogonal matrix, i.e.,

$$[\alpha_{qr}][\alpha_{qr}]^{\mathrm{T}} = [\alpha_{qr}]^{\mathrm{T}}[\alpha_{qr}] = I. \quad (70)$$

Thus,

$$[\alpha_{qr}]^{-1} = [\alpha_{qr}]^{\mathrm{T}}. \quad (71)$$

Now, in matrix form, Eq. (66) is written as

$$\{v_r\} = [\alpha_{qr}]^{\mathrm{T}}\{v'_q\}. \quad (72)$$

Therefore, in view of Eq. (70),

$$\{v'_q\} = [\alpha_{qr}]\{v_r\}. \quad (72)$$

The above equation is written in index notation as

$$v'_q = \alpha_{qr} v_r. \quad (73)$$

In the above equation, r is the dummy index and therefore

$$v'_q = \alpha_{q1} v_1 + \alpha_{q2} v_2 + \alpha_{q3} v_3. \quad (74)$$

Thus, the component along the direction q in the primed (x′, y′, z′) system is equal to the sum of the projections of the components of the unprimed system along the q direction. The reader may already be familiar with this statement. Equations (66) and (73) form part of the definition of a Cartesian vector, which is a tensor of order 1. A Cartesian tensor of order 1 is a quantity that consists of n components in an n-dimensional space and follows the transformation rules given in Eq. (66) and Eq. (73) under the rotation of the axis system.

Example 7: The components of a vector in the x, y, z system are given by {1 2 3}ᵀ. The x′, y′, z′ system is obtained by rotating the original system about the z-axis through an angle of 30° in the counterclockwise direction. Find the components of the vector in the x′, y′, z′ system.

Solution: Table 1 contains the angles between the axes of one system and the axes of the other system. With this, the matrix of direction cosines $[\alpha_{ij}]$ is written as

$$[\alpha_{ij}] = \begin{bmatrix} \cos 30° & \cos 60° & \cos 90° \\ \cos 120° & \cos 30° & \cos 90° \\ \cos 90° & \cos 90° & \cos 0° \end{bmatrix} = \begin{bmatrix} \dfrac{\sqrt{3}}{2} & \dfrac{1}{2} & 0 \\ -\dfrac{1}{2} & \dfrac{\sqrt{3}}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix}. \quad (75)$$


Table 1: Angles between the axes of one system and the axes of the other system

        x       y       z
x′      30°     60°     90°
y′      120°    30°     90°
z′      90°     90°     0°

Here, deliberately, the indices i, j have been written in place of q, r to establish that one can name the free indices in any way, as long as the naming is consistent throughout the expression. Now, Eq. (72) is $\{v'_i\} = [\alpha_{ij}]\{v_j\}$. Therefore,

$$\begin{Bmatrix} v'_1 \\ v'_2 \\ v'_3 \end{Bmatrix} = \begin{bmatrix} \dfrac{\sqrt{3}}{2} & \dfrac{1}{2} & 0 \\ -\dfrac{1}{2} & \dfrac{\sqrt{3}}{2} & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{Bmatrix} 1 \\ 2 \\ 3 \end{Bmatrix} = \begin{Bmatrix} \dfrac{\sqrt{3}}{2} + 1 \\ -\dfrac{1}{2} + \sqrt{3} \\ 3 \end{Bmatrix}. \quad (76)$$

Thus, we have obtained the components of the vector in the rotated system.
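The computation of Example 7 can be reproduced in a few lines (a sketch added here, not part of the original notes), which also confirms that the orthogonal rotation preserves the vector's magnitude:

```python
import numpy as np

theta = np.deg2rad(30.0)
# Direction cosines of Eq. (75): rows are the primed axes, columns the unprimed.
alpha = np.array([[ np.cos(theta), np.sin(theta), 0.0],
                  [-np.sin(theta), np.cos(theta), 0.0],
                  [ 0.0,           0.0,           1.0]])
v = np.array([1.0, 2.0, 3.0])

v_prime = alpha @ v   # Eq. (73): v'_q = alpha_qr v_r
assert np.allclose(v_prime,
                   [np.sqrt(3) / 2 + 1.0, -0.5 + np.sqrt(3), 3.0])  # Eq. (76)
# Since alpha is orthogonal, the rotation preserves length.
assert np.isclose(np.linalg.norm(v_prime), np.linalg.norm(v))
```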

Transformation Rule for Tensor Components under the Rotation of Cartesian Coordinate System

We have discussed the transformation rule for a vector, which is a tensor of rank 1. Now, we shall discuss the transformation rules for a general tensor of order n. However, first consider the tensor product of two vectors. Let u and v be two vectors. They can be multiplied in such a way as to produce a scalar. The corresponding multiplication is called a scalar or dot product, which is an inner product. In index notation,

$$\mathbf{u} \cdot \mathbf{v} = u_i v_i. \quad (77)$$

It can easily be shown that this dot product remains invariant under the rotation of the coordinate system. The outline of the proof is as follows:

$$u'_i v'_i = \left(\alpha_{ip} u_p\right)\left(\alpha_{iq} v_q\right) = \delta_{pq} u_p v_q = u_p v_p = u_i v_i. \quad (78)$$

Two vectors can also be multiplied in a way that yields a vector. A well-known example of this is the cross product of two vectors. In index notation, the cross product u × v is represented as $\varepsilon_{ijk} u_j v_k$. Two vectors can also be multiplied in a manner that yields a tensor of order 2. This is called a tensor product or outer product, denoted by u ⊗ v. In index notation, it is written as $u_i v_j$. It is clear that in three-dimensional space, it has 9 distinct components. Now,

$$u'_i v'_j = \left(\alpha_{ip} u_p\right)\left(\alpha_{jq} v_q\right) = \alpha_{ip}\alpha_{jq} u_p v_q \quad (79)$$

and

$$u_i v_j = \left(\alpha_{pi} u'_p\right)\left(\alpha_{qj} v'_q\right) = \alpha_{pi}\alpha_{qj} u'_p v'_q. \quad (80)$$

Any (physical) quantity that has $n^2$ components in an n-dimensional space and follows the transformation rules given by Eq. (79) and Eq. (80) is called a tensor of order two. Thus, if $a_{ij}$ and $a'_{ij}$ are the components of a tensor in the unprimed and primed Cartesian coordinate systems, then


$$a'_{ij} = \alpha_{ip}\alpha_{jq}\, a_{pq}; \quad a_{ij} = \alpha_{pi}\alpha_{qj}\, a'_{pq}. \quad (81)$$

The transformation rule given in Eq. (81) can be generalized. For example, for a tensor of third order, the rule is

$$a'_{ijk} = \alpha_{ip}\alpha_{jq}\alpha_{kr}\, a_{pqr}; \quad a_{ijk} = \alpha_{pi}\alpha_{qj}\alpha_{rk}\, a'_{pqr}. \quad (82)$$

Similarly, the transformation rules for higher-order tensors can be written. Now, we wish to write the transformation rules in the form of matrix multiplications. Given matrices $[a_{ij}]$ and $[b_{ij}]$, the matrix product in index notation is defined as

$$c_{ij} = a_{ip} b_{pj}. \quad (83)$$

The transpose of a matrix is obtained by interchanging the rows and columns of the matrix. The ij-th element of a matrix is the same as the ji-th element of its transpose. With this knowledge it is easy to see that the transformation rule can be written as

$$\mathbf{A}' = \boldsymbol{\alpha}\mathbf{A}\boldsymbol{\alpha}^{\mathrm{T}}; \quad \mathbf{A} = \boldsymbol{\alpha}^{\mathrm{T}}\mathbf{A}'\boldsymbol{\alpha}. \quad (84)$$

We have decided to use boldface letters for tensors.

Example 8: Stress at a point is a tensor of rank 2. Therefore, it is possible to use the transformation rule given by Eq. (81) or Eq. (84) for finding the stress components along any axis system. Consider the case of plane stress with stress components $\sigma_{xx}$, $\sigma_{yy}$ and $\tau_{xy}$. Find the stress components in the rotated Cartesian system, where the axis x′ makes an angle θ with the axis x.

Solution: The matrix of direction cosines is given by

$$\boldsymbol{\alpha} = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \quad (85)$$

and the symmetric stress tensor in the x-y system is given as

$$\boldsymbol{\sigma} = \begin{bmatrix} \sigma_{xx} & \tau_{xy} \\ \tau_{xy} & \sigma_{yy} \end{bmatrix}. \quad (86)$$

Using Eq. (84), the stress components in the new system are given by

$$\boldsymbol{\sigma}' = \begin{bmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} \sigma_{xx} & \tau_{xy} \\ \tau_{xy} & \sigma_{yy} \end{bmatrix} \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}, \quad (87)$$

which provides

$$\boldsymbol{\sigma}' = \begin{bmatrix} \sigma_{xx}\cos^2\theta + \sigma_{yy}\sin^2\theta + 2\tau_{xy}\sin\theta\cos\theta & -\left(\sigma_{xx} - \sigma_{yy}\right)\sin\theta\cos\theta + \tau_{xy}\left(\cos^2\theta - \sin^2\theta\right) \\ -\left(\sigma_{xx} - \sigma_{yy}\right)\sin\theta\cos\theta + \tau_{xy}\left(\cos^2\theta - \sin^2\theta\right) & \sigma_{xx}\sin^2\theta + \sigma_{yy}\cos^2\theta - 2\tau_{xy}\sin\theta\cos\theta \end{bmatrix}. \quad (88)$$

(88) Thus,

Page 13: Tensors in Cartesian Coordinates

13

2 2 1 cos 2 1 cos 2cos sin 2 sin cos sin 22 2

cos 2 sin 22 2

xx xx yy xy x xx yy xy

xx yy xx yyxy

θ θσ σ θ σ θ τ θ θ σ σ τ θ

σ σ σ σθ τ θ

+ −′ = + + = + +

+ −= + +

2 2 1 cos 2 1 cos 2sin cos 2 sin cos sin 22 2

cos 2 sin 22 2

yy xx yy xy x xx yy xy

xx yy xx yyxy

θ θσ σ θ σ θ τ θ θ σ σ τ θ

σ σ σ σθ τ θ

− +′ = + − = + −

+ −= − −

(89) and

( )2 2sin cos cos sin cos sin

sin 2 cos 22

xy xx yy xy

xx yyxy

τ σ θ θ σ θ θ τ θ θ

σ σθ τ θ

′ =− + + −

−= − +

. (90)
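The transformation of Example 8 can be spot-checked numerically. Below is a minimal sketch (not from the original text) that applies the matrix form of Eq. (84) directly and compares it with the closed-form expressions of Eqs. (89)-(90); the stress values and rotation angle are arbitrary assumed numbers.

```python
import numpy as np

# Assumed sample plane-stress state (MPa) and rotation of the x' axis
sxx, syy, txy = 50.0, 20.0, 30.0
theta = np.deg2rad(35.0)

c, s = np.cos(theta), np.sin(theta)
alpha = np.array([[c, s], [-s, c]])          # direction cosines, Eq. (85)
sigma = np.array([[sxx, txy], [txy, syy]])   # stress tensor, Eq. (86)

sigma_p = alpha @ sigma @ alpha.T            # transformation rule, Eq. (84)

# Closed-form components, Eqs. (89)-(90)
sxx_p = (sxx + syy)/2 + (sxx - syy)/2*np.cos(2*theta) + txy*np.sin(2*theta)
syy_p = (sxx + syy)/2 - (sxx - syy)/2*np.cos(2*theta) - txy*np.sin(2*theta)
txy_p = -(sxx - syy)/2*np.sin(2*theta) + txy*np.cos(2*theta)

assert np.allclose(sigma_p, [[sxx_p, txy_p], [txy_p, syy_p]])
```

Note also that σ′xx + σ′yy equals σxx + σyy: the trace is unchanged by the rotation, anticipating the invariants discussed later.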

Contraction and Quotient Laws

If a force is represented as Fi in index notation and the displacement is represented by di, then the dot product Fi di, which is a scalar, represents the work done. The tensor product of the force and displacement is denoted by Fi dj, which is a tensor of rank 2. Once we change j to i, the rank of the tensor reduces to zero. This is called the contraction operation. Thus, Fi di is the contraction of Fi dj. Similarly, consider the tensor product of a tensor (of rank 2) σij and a vector nk, which gives σij nk. It can be shown that σij nk is a tensor of rank 3. If we replace k by j, the product σij nj is the contraction of σij nk and is of rank 1. Thus, σij nj is a vector that can be denoted by ti. The relation ti = σij nj can be written in matrix form as

\mathbf{t} = \boldsymbol{\sigma}\mathbf{n} \quad \text{or} \quad \{t\} = [\sigma]\{n\}, (91)

whichever notation you like. Note that Eq. (91) is the commonly known product of a matrix with a vector (a column matrix). We observe that pre-multiplication of a vector by the tensor provides another vector. The components of the vector t are linear functions of the components of the vector n. Thus, a tensor (of rank 2) can be called a linear map that assigns to each vector another vector. This is an alternative definition of a tensor. Now consider the reverse problem. If it is known that Fi are the components of a vector and Fi di is a scalar, then one can conclude that di are the components of a vector. This is one type of quotient law. Quotient laws basically infer the nature of a quantity from the outcome of its contracted product with a known quantity. Another quotient law can be stated as follows: if nj are the components of an arbitrary vector and σij nj are the components of a vector, then σij are the components of a tensor (of rank 2). In a similar way, a number of quotient rules can be written.

Example 9: The components of a symmetric tensor a (symmetry implies aij = aji) are given by


[a] = \begin{bmatrix} 1 & 2 & 3 \\ 2 & 5 & 10 \\ 3 & 10 & 4 \end{bmatrix}. (92)

The components of a skew-symmetric tensor b (skew-symmetry implies bij = −bji) are given by

[b] = \begin{bmatrix} 0 & 8 & 11 \\ -8 & 0 & 10 \\ -11 & -10 & 0 \end{bmatrix}. (93)

Find the scalar product aij bij.

Solution: The scalar product aij bij means multiplying each entry of a with the corresponding entry of b and adding all 9 products. Thus,

a_{ij}b_{ij} = 1\times 0 + 2\times 8 + 3\times 11 + 2\times(-8) + 5\times 0 + 10\times 10 + 3\times(-11) + 10\times(-10) + 4\times 0 = 0. (94)
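The computation of Eq. (94) can be reproduced in a few lines; a sketch using numpy, with the matrices of Eqs. (92)-(93):

```python
import numpy as np

# Symmetric tensor of Eq. (92) and skew-symmetric tensor of Eq. (93)
a = np.array([[1, 2, 3], [2, 5, 10], [3, 10, 4]])
b = np.array([[0, 8, 11], [-8, 0, 10], [-11, -10, 0]])

# Scalar product a_ij b_ij: multiply corresponding entries, sum all 9 products
s = np.sum(a * b)
print(s)  # -> 0
```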

We can easily prove that the scalar product of a symmetric tensor with a skew-symmetric tensor is always zero.

Example 10: From the quotient law, show that the mass moment of inertia is a tensor of rank 2.

Solution: The angular momentum L is defined as

\mathbf{L} = \mathbf{I}\boldsymbol{\omega}, (95)

where L and the angular velocity ω are both vectors. In index notation, Eq. (95) is written as

L_i = I_{ij}\omega_j. (96)

The direct relation is easy to prove: if I is a tensor, the right-hand side of Eq. (96) represents the components of a vector. The inverse relation follows from the quotient law: if L and ω are vectors, then I is a tensor (of rank 2).

Some Important Definitions and Properties of Tensors

The components of a tensor (of rank 2) can be represented in the form of a matrix. Thus, many properties and definitions are common between matrices and tensors. However, the components of a matrix need not follow the transformation rule, and a matrix need not represent any physical quantity. If a matrix represents a physical quantity and its components follow the transformation rule, then it is a tensor. Now, we briefly discuss some properties of tensors. In many cases, details are left for the reader to work out as exercise problems. Whenever we refer to a tensor without specifying its rank, it will be understood as a tensor of rank 2.

(1) Zero tensor: If all the components of a tensor are zero, the tensor is called the zero tensor 0. The product of this tensor with any vector v gives the zero vector. Thus,

\mathbf{0}\mathbf{v} = \mathbf{0}. (97)

Note that 0 on the left-hand side of the above equation should be understood as a tensor having 9 components in three-dimensional space, whereas 0 on the right-hand side is a vector having 3 components in three-dimensional space.

(2) Identity tensor: The tensor I whose components are δij is referred to as the identity tensor. For any vector v,

\mathbf{I}\mathbf{v} = \mathbf{v}. (98)


(3) Product of two tensors: The ordinary product C of two tensors (of rank 2) A and B is a tensor of rank 2 and is written as

\mathbf{C} = \mathbf{A}\mathbf{B}. (99)

In index notation,

C_{ik} = A_{ij}B_{jk}. (100)

Thus, it is a contracted product. One can also have the scalar product defined by

c = A_{ij}B_{ij}, (101)

or the non-contracted tensor product defined by

C_{ijkl} = A_{ij}B_{kl}. (102)

Note that A and B commute in the scalar product.

(4) Transpose of a tensor: The transpose of a tensor is obtained by interchanging the indices of its components. Thus,

A^T_{ij} = A_{ji}. (103)

It can be shown that for all vectors u and v,

\mathbf{A}\mathbf{u}\cdot\mathbf{v} = \mathbf{u}\cdot\mathbf{A}^T\mathbf{v}. (104)

Also, for tensors A and B,

(\mathbf{A}+\mathbf{B})^T = \mathbf{A}^T + \mathbf{B}^T; \quad (\mathbf{A}\mathbf{B})^T = \mathbf{B}^T\mathbf{A}^T. (105)

(5) Additive decomposition of a tensor into a symmetric and a skew part: A tensor A can be decomposed into a symmetric part E and a skew-symmetric part W, such that

\mathbf{A} = \mathbf{E} + \mathbf{W}, (106)

where

\mathbf{E} = \frac{1}{2}\left(\mathbf{A} + \mathbf{A}^T\right); \quad \mathbf{W} = \frac{1}{2}\left(\mathbf{A} - \mathbf{A}^T\right). (107)
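The decomposition of Eqs. (106)-(107) is easy to verify numerically; a sketch with an arbitrary assumed sample tensor:

```python
import numpy as np

# Arbitrary sample tensor (assumed, for illustration only)
A = np.array([[1.0, 4.0, 6.0],
              [2.0, 5.0, 8.0],
              [3.0, 7.0, 9.0]])

E = 0.5 * (A + A.T)   # symmetric part, Eq. (107)
W = 0.5 * (A - A.T)   # skew-symmetric part, Eq. (107)

assert np.allclose(E, E.T)     # E is symmetric
assert np.allclose(W, -W.T)    # W is skew-symmetric
assert np.allclose(E + W, A)   # A = E + W, Eq. (106)
```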

(6) Inverse of a tensor: Given a tensor A, if there exists a tensor B such that

\mathbf{A}\mathbf{B} = \mathbf{I}, (108)

then B is called the inverse of A. If B is equal to A^T, the tensor A is called an orthogonal tensor. An orthogonal tensor whose determinant is 1 is called a proper orthogonal tensor. If the determinant of A is 0, then the matrix corresponding to A is called a singular matrix and the tensor is non-invertible. The reader is already familiar with the procedure to determine the inverse of a matrix. In index notation, the i-jth component of the tensor B is given by

B_{ij} = \frac{1}{\det(\mathbf{A})}\left[\frac{1}{2}\varepsilon_{jpq}\varepsilon_{irs}A_{pr}A_{qs}\right]. (109)
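The index formula of Eq. (109) can be checked against a library inverse; a sketch, with an assumed invertible sample tensor:

```python
import numpy as np

# Build the Levi-Civita (alternating) symbol eps_ijk
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0   # even permutations
    eps[i, k, j] = -1.0  # odd permutations

# Assumed invertible sample tensor
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# B_ij = (1 / (2 det A)) eps_jpq eps_irs A_pr A_qs, Eq. (109)
B = np.einsum('jpq,irs,pr,qs->ij', eps, eps, A, A) / (2.0 * np.linalg.det(A))

assert np.allclose(B, np.linalg.inv(A))
```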

(7) Invariants of a tensor: In three-dimensional space, a tensor A has three principal invariants (quantities that remain unchanged during coordinate transformation), as described below:

(i) First invariant I_A: In index notation, it is written as A_{ii}, which is called the trace of A and is denoted as tr(A).

(ii) Second invariant II_A: It is given by

II_A = \frac{1}{2}\left[(\mathrm{tr}\,\mathbf{A})^2 - \mathrm{tr}(\mathbf{A}^2)\right] = \begin{vmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{vmatrix} + \begin{vmatrix} A_{11} & A_{13} \\ A_{31} & A_{33} \end{vmatrix} + \begin{vmatrix} A_{22} & A_{23} \\ A_{32} & A_{33} \end{vmatrix}. (110)

(iii) Third invariant III_A: The determinant of A is the third invariant.

(8) Positive definite tensor: A tensor A is positive definite if \mathbf{v}\cdot\mathbf{A}\mathbf{v} > 0 for all non-zero vectors v. It is called positive semi-definite if \mathbf{v}\cdot\mathbf{A}\mathbf{v} \geq 0.
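The invariance of the three principal invariants under an orthogonal change of axes can be spot-checked numerically; a sketch, reusing the symmetric sample tensor of Example 9 and an assumed rotation angle:

```python
import numpy as np

def invariants(A):
    """First, second and third principal invariants of a 3x3 tensor."""
    I1 = np.trace(A)
    I2 = 0.5 * (np.trace(A)**2 - np.trace(A @ A))   # Eq. (110)
    I3 = np.linalg.det(A)
    return I1, I2, I3

# Symmetric sample tensor from Example 9
A = np.array([[1.0, 2.0, 3.0],
              [2.0, 5.0, 10.0],
              [3.0, 10.0, 4.0]])

t = 0.4  # assumed rotation angle about the z-axis
Q = np.array([[np.cos(t), np.sin(t), 0.0],
              [-np.sin(t), np.cos(t), 0.0],
              [0.0, 0.0, 1.0]])

# A' = Q A Q^T has the same invariants as A
assert np.allclose(invariants(A), invariants(Q @ A @ Q.T))
```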


(9) Negative definite tensor: A tensor A is negative definite if \mathbf{v}\cdot\mathbf{A}\mathbf{v} < 0 for all non-zero vectors v. It is called negative semi-definite if \mathbf{v}\cdot\mathbf{A}\mathbf{v} \leq 0.

Eigenvalues of a Tensor

It is known that a tensor A carries out a linear transformation of a vector x by the relation

\mathbf{A}\mathbf{x} = \mathbf{b}. (111)

It is possible that for some non-zero vector x, the vector b is parallel to x, i.e., b = λx. Thus, for some scalar λ and some vector x, the following relation holds good:

\mathbf{A}\mathbf{x} = \lambda\mathbf{x}. (112)

The above equation is called an eigenvalue problem; λ is called the eigenvalue and x the eigenvector. Eq. (112) may also be written as

(\mathbf{A} - \lambda\mathbf{I})\mathbf{x} = \mathbf{0}. (113)

Equation (113) implies that

\det(\mathbf{A} - \lambda\mathbf{I}) = |\mathbf{A} - \lambda\mathbf{I}| = 0. (114)

The roots of the above equation provide the eigenvalues. For a particular eigenvalue, Eq. (112) may be used for finding the eigenvector. Note that eigenvectors are not unique; one can obtain a normalized eigenvector whose magnitude is 1. We shall prove that the eigenvalues of a Hermitian matrix are real and that eigenvectors corresponding to distinct eigenvalues of a real symmetric matrix are orthogonal to one another. First, let us define a Hermitian matrix. Given a matrix A, the complex conjugate matrix A* is formed by taking the complex conjugate of each element. The adjoint of A is formed by transposing A*. The matrix is called Hermitian (or self-adjoint) if A = (A*)^T. Let λi and λj be two eigenvalues of a Hermitian matrix, with corresponding eigenvectors xi and xj. Then

\mathbf{A}\mathbf{x}_i = \lambda_i\mathbf{x}_i, (115a)
\mathbf{A}\mathbf{x}_j = \lambda_j\mathbf{x}_j. (115b)

Pre-multiplying both sides of Eq. (115a) by (\mathbf{x}_j^*)^T and of Eq. (115b) by (\mathbf{x}_i^*)^T, we get

(\mathbf{x}_j^*)^T\mathbf{A}\mathbf{x}_i = \lambda_i(\mathbf{x}_j^*)^T\mathbf{x}_i, (116a)
(\mathbf{x}_i^*)^T\mathbf{A}\mathbf{x}_j = \lambda_j(\mathbf{x}_i^*)^T\mathbf{x}_j. (116b)

Taking the adjoint of Eq. (116b),

(\mathbf{x}_j^*)^T(\mathbf{A}^*)^T\mathbf{x}_i = \lambda_j^*(\mathbf{x}_j^*)^T\mathbf{x}_i. (117)

As A is Hermitian, the above equation can be written as

(\mathbf{x}_j^*)^T\mathbf{A}\mathbf{x}_i = \lambda_j^*(\mathbf{x}_j^*)^T\mathbf{x}_i. (118)

Comparing it with Eq. (116a), we get

(\lambda_j^* - \lambda_i)(\mathbf{x}_j^*)^T\mathbf{x}_i = 0. (119)

In the above equation, replacing j by i,

(\lambda_i^* - \lambda_i)(\mathbf{x}_i^*)^T\mathbf{x}_i = 0. (120)

As (\mathbf{x}_i^*)^T\mathbf{x}_i is a positive number, \lambda_i = \lambda_i^*. This implies that λi is real. This result holds good for a real symmetric matrix, as it is a special case of a Hermitian matrix.
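The result just proved can be illustrated numerically; a sketch with an assumed Hermitian sample matrix:

```python
import numpy as np

# Assumed Hermitian sample matrix: equal to its conjugate transpose
A = np.array([[2.0, 1.0 - 2.0j, 0.0],
              [1.0 + 2.0j, 3.0, 1.0j],
              [0.0, -1.0j, 1.0]])

assert np.allclose(A, A.conj().T)     # check A = (A*)^T

vals = np.linalg.eigvals(A)
assert np.allclose(vals.imag, 0.0)    # all eigenvalues are real
```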


Now, for a real symmetric matrix, if λi and λj are two distinct eigenvalues, then Eq. (115) holds good. Pre-multiplying both sides of Eq. (115a) by \mathbf{x}_j^T and of Eq. (115b) by \mathbf{x}_i^T, we get

\mathbf{x}_j^T\mathbf{A}\mathbf{x}_i = \lambda_i\mathbf{x}_j^T\mathbf{x}_i, (121a)
\mathbf{x}_i^T\mathbf{A}\mathbf{x}_j = \lambda_j\mathbf{x}_i^T\mathbf{x}_j. (121b)

Taking the transpose of Eq. (121b) and using the fact that A is symmetric, we get

\mathbf{x}_j^T\mathbf{A}\mathbf{x}_i = \lambda_j\mathbf{x}_j^T\mathbf{x}_i. (122)

Subtracting Eq. (122) from Eq. (121a), we get

(\lambda_i - \lambda_j)\,\mathbf{x}_j^T\mathbf{x}_i = 0. (123)

As the two eigenvalues are distinct, the above equation implies that

\mathbf{x}_j^T\mathbf{x}_i = 0. (124)

Thus, the eigenvectors corresponding to distinct eigenvalues of a real symmetric matrix are orthogonal.

Example 11: A sheet is subjected to a plane stress condition. The symmetric stress components at a point are as follows:

\sigma_x = 10\ \text{MPa}, \quad \sigma_y = 2\ \text{MPa}, \quad \tau_{xy} = 3\ \text{MPa}.

Find the principal stresses and principal directions.

Solution: In matrix form, the stress components are represented as

[\sigma_{ij}] = \begin{bmatrix} 10 & 3 \\ 3 & 2 \end{bmatrix}. (125)

The eigenvalues of the above matrix are the principal stresses and the eigenvectors the principal directions. Let λ be an eigenvalue; then

\begin{vmatrix} 10-\lambda & 3 \\ 3 & 2-\lambda \end{vmatrix} = 0 \quad \text{or} \quad \lambda^2 - 12\lambda + 11 = 0, (126)

which gives λ = 11 MPa, 1 MPa. Thus the maximum principal stress is 11 MPa and the minimum principal stress is 1 MPa. Let us find the eigenvector corresponding to the eigenvalue of 11 MPa. By Eq. (113),

\begin{bmatrix} 10-11 & 3 \\ 3 & 2-11 \end{bmatrix}\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}, (127)

which represents the following two equations:

-x_1 + 3x_2 = 0; \quad 3x_1 - 9x_2 = 0. (128)

The second equation is a scaled (by a factor of −3) version of the first equation. Thus, effectively there is one equation, which gives x_1 = 3x_2. Hence, we get multiple solutions. Taking x2 = α (an arbitrary constant), x1 is 3α. The normalized eigenvector components are


n_1 = \frac{3\alpha}{\sqrt{9\alpha^2 + \alpha^2}} = \frac{3}{\sqrt{10}}; \quad n_2 = \frac{\alpha}{\sqrt{9\alpha^2 + \alpha^2}} = \frac{1}{\sqrt{10}}. (129)

These are the direction cosines of the first eigenvector (with n3 = 0), corresponding to the first principal direction. For the principal stress of 1 MPa, Eq. (113) gives

\begin{bmatrix} 10-1 & 3 \\ 3 & 2-1 \end{bmatrix}\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix} = \begin{Bmatrix} 0 \\ 0 \end{Bmatrix}, (130)

which represents the following two equations:

9x_1 + 3x_2 = 0; \quad 3x_1 + x_2 = 0. (131)

The first equation is a scaled (by a factor of 3) version of the second equation. Thus, effectively there is one equation, which gives x_2 = -3x_1. Hence, we get multiple solutions. Taking x1 = α (an arbitrary constant), x2 is −3α. The normalized eigenvector components are

n_1 = \frac{\alpha}{\sqrt{\alpha^2 + 9\alpha^2}} = \frac{1}{\sqrt{10}}; \quad n_2 = \frac{-3\alpha}{\sqrt{\alpha^2 + 9\alpha^2}} = \frac{-3}{\sqrt{10}}. (132)

It can be easily verified that both the eigenvectors are orthogonal to each other, i.e., their dot product is zero.
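Example 11 can be reproduced with a numerical eigensolver; a sketch using numpy (eigh is chosen because the stress matrix is symmetric):

```python
import numpy as np

# Plane-stress matrix of Eq. (125)
sigma = np.array([[10.0, 3.0], [3.0, 2.0]])
vals, vecs = np.linalg.eigh(sigma)   # symmetric matrix -> real eigenvalues

# Principal stresses are 1 MPa and 11 MPa
assert np.allclose(sorted(vals), [1.0, 11.0])

# Principal direction for lambda = 11 MPa is (3, 1)/sqrt(10), up to sign
n1 = vecs[:, np.argmax(vals)]
assert np.allclose(np.abs(n1), np.array([3.0, 1.0]) / np.sqrt(10.0))

# Eigenvectors of a symmetric matrix are orthogonal
assert abs(np.dot(vecs[:, 0], vecs[:, 1])) < 1e-12
```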

Polar Decomposition of a Tensor

Every invertible tensor A can be decomposed into an orthogonal tensor Q and a positive definite symmetric tensor U such that

\mathbf{A} = \mathbf{Q}\mathbf{U}. (133)

Similarly, it can be decomposed into an orthogonal tensor Q and a positive definite symmetric tensor V such that

\mathbf{A} = \mathbf{V}\mathbf{Q}. (134)

Starting from Eq. (133),

\mathbf{A}^T\mathbf{A} = (\mathbf{Q}\mathbf{U})^T\mathbf{Q}\mathbf{U} = \mathbf{U}^T\mathbf{Q}^T\mathbf{Q}\mathbf{U} = \mathbf{U}^T\mathbf{I}\mathbf{U} = \mathbf{U}^T\mathbf{U} = \mathbf{U}\mathbf{U} = \mathbf{U}^2. (135)

Similarly, starting from Eq. (134), it can be shown that

\mathbf{A}\mathbf{A}^T = \mathbf{V}^2. (136)

It can be easily shown that

\mathbf{V} = \mathbf{Q}\mathbf{U}\mathbf{Q}^T. (137)
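A minimal sketch of the construction implied by Eqs. (133)-(137): U is taken as the symmetric positive definite square root of AᵀA (computed here via an eigendecomposition) and Q = AU⁻¹. The sample matrix is an assumed invertible example.

```python
import numpy as np

def polar(A):
    """Right polar decomposition A = Q U per Eqs. (133) and (135)."""
    w, V = np.linalg.eigh(A.T @ A)        # A^T A = V diag(w) V^T, w > 0
    U = V @ np.diag(np.sqrt(w)) @ V.T     # U = (A^T A)^(1/2)
    Q = A @ np.linalg.inv(U)
    return Q, U

A = np.array([[2.0, 1.0], [0.5, 3.0]])    # assumed invertible sample
Q, U = polar(A)

assert np.allclose(Q @ U, A)              # A = QU, Eq. (133)
assert np.allclose(Q @ Q.T, np.eye(2))    # Q is orthogonal
assert np.allclose(U, U.T)                # U is symmetric

# Left decomposition A = VQ with V = Q U Q^T, Eqs. (134) and (137)
V = Q @ U @ Q.T
assert np.allclose(V @ Q, A)
```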

Tensor Calculus

Let g be a function whose values are scalars, vectors or tensors and whose domain is an open interval D of real numbers. The derivative \dot{\mathbf{g}}(t) of g at t, if it exists, is defined by

\dot{\mathbf{g}}(t) = \frac{\mathrm{d}\mathbf{g}}{\mathrm{d}t} = \lim_{\alpha \to 0}\frac{1}{\alpha}\left[\mathbf{g}(t+\alpha) - \mathbf{g}(t)\right]. (138)

This implies that if the components of a tensor are represented in a matrix form, the derivative of the tensor can also be represented in a matrix form, in which each entry will be the derivative of individual entries. Here, it is assumed that axes do not change with time. Example 12: Components of a tensor are given as

[a_{ij}] = \begin{bmatrix} t^2 & 2t & \sin t \\ t & 5 & \mathrm{e}^t \\ 3t & 7 & \ln(t) \end{bmatrix}. (139)

Find the components of its derivative with respect to time.

Solution: By differentiating the individual components, we get

[\dot{a}_{ij}] = \begin{bmatrix} 2t & 2 & \cos t \\ 1 & 0 & \mathrm{e}^t \\ 3 & 0 & 1/t \end{bmatrix}. (140)
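Example 12 can be verified symbolically; a sketch using sympy, with the components of Eq. (139) entered explicitly:

```python
import sympy as sp

t = sp.symbols('t', positive=True)

# Component matrix of Eq. (139)
a = sp.Matrix([[t**2, 2*t, sp.sin(t)],
               [t,    5,   sp.exp(t)],
               [3*t,  7,   sp.log(t)]])

# Differentiate each entry with respect to t
adot = a.diff(t)

# Expected derivative matrix, Eq. (140)
expected = sp.Matrix([[2*t, 2, sp.cos(t)],
                      [1,   0, sp.exp(t)],
                      [3,   0, 1/t]])
assert adot == expected
```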

We can very easily prove the following relation for the partial derivative of a tensor component with respect to another component:

\frac{\partial a_{ij}}{\partial a_{pq}} = \delta_{ip}\delta_{jq}. (141)

It states that if a component is partially differentiated with respect to itself, we get 1; if it is partially differentiated with respect to another component, we get 0.

We shall adopt the comma notation, which means

(\ldots)_{,i} = \frac{\partial(\ldots)}{\partial x_i}. (142)

Now, note that

x_{i,j} = \frac{\partial x_i}{\partial x_j} = \delta_{ij}. (143)

In the sequel, we shall describe some special derivatives.

(1) Gradient: If f is a scalar field, i.e., f is a function of the coordinates, then it can be shown that a_i = f_{,i} are the components of a vector. (For this, one has to show that the transformation rule is followed.) This vector is called the gradient of f and is denoted by grad f or ∇f. The operator ∇ may be written as \partial/\partial x_i. We know that for infinitesimal changes in the values of the coordinates, the infinitesimal change in the function is given by

\mathrm{d}f = \frac{\partial f}{\partial x}\,\mathrm{d}x + \frac{\partial f}{\partial y}\,\mathrm{d}y + \frac{\partial f}{\partial z}\,\mathrm{d}z. (144)

In index notation, it is written as

\mathrm{d}f = f_{,i}\,\mathrm{d}x_i \quad \text{or} \quad \mathrm{d}f = (\nabla f)_i\,\mathrm{d}x_i. (155)

In vector notation, it is written as

\mathrm{d}f = \nabla f \bullet \mathrm{d}\mathbf{x}. (156)

Now, suppose there is a surface given by f(x, y, z)=0 and we move by an infinitesimal distance on this surface, then dx will be tangential to the surface and df will be zero. It then follows from Eq. (156) that ∇f will be perpendicular to dx. Thus, ∇f will be directed along normal to the surface. If there is a unit vector a, then ∇f•a will indicate the change in the function for a unit length movement along a, which is called the directional derivative of f along a. It is denoted by ∂f/∂a. Note that

\frac{\partial f}{\partial \mathbf{a}} = \nabla f \bullet \mathbf{a} = |\nabla f|\,|\mathbf{a}|\cos\phi = |\nabla f|\cos\phi, (157)

where φ is the angle between a and the normal (along ∇f) to the surface. It then follows that the directional derivative along ∇f is the maximum. Thus, ∇f is the direction of maximum increase in the function value. The concept of gradient can be extended to vectors and tensors as well. If vi are the components of a vector field, then vi,j represents the components of the gradient of the vector field. It can be shown that it is a tensor of rank 2. Similarly, if aij are the components of a tensor field, then aij,k represents the components of the gradient of the tensor and is a tensor of rank 3.

(2) Divergence: Let vi be the components of a vector field; then the gradient of the vector field is represented in index notation as vi,j. Contracting it once, we get vi,i, which is called the divergence of the vector field. As the contraction operation reduces the rank of a tensor field by 2, vi,i is of rank 0 and thus a scalar field. Similarly, if aij are the components of a tensor field, then aij,k represents the components of the gradient of the tensor field. Applying the contraction operation, we get aij,j, which represents the divergence of the tensor field. The divergence is denoted by 'div'. If a is a tensor field of rank 2, div a or ∇•a is a vector field.

(3) Curl: The curl of a vector field is a vector field. If vi represents the components of a vector field, then the curl of the vector field v is denoted by curl v or ∇×v and its ith component is given by

(\nabla \times \mathbf{v})_i = \varepsilon_{imn}v_{n,m}. (158)

It is obvious that the curl of a constant vector field is a zero vector. Let us find the value of curl grad f. The nth component of grad f is given by f_{,n}. Therefore, from Eq. (158), the ith component of curl grad f is given by

(\mathrm{curl}\,\nabla f)_i = \varepsilon_{imn}f_{,nm}. (159)

In the above expression, we interchange the dummy indices to get

(\mathrm{curl}\,\nabla f)_i = \varepsilon_{inm}f_{,mn}. (160)

Note that

\varepsilon_{inm} = -\varepsilon_{imn} \quad \text{and} \quad f_{,mn} = f_{,nm}. (161)

Therefore, Eq. (160) can be written as

(\mathrm{curl}\,\nabla f)_i = -\varepsilon_{imn}f_{,nm}. (162)

Comparing Eqs. (159) and (162), we see that

(\mathrm{curl}\,\nabla f)_i = -(\mathrm{curl}\,\nabla f)_i \quad \text{or} \quad (\mathrm{curl}\,\nabla f)_i = 0. (163)

Hence, the curl of the gradient of a scalar field is zero.

(4) Laplacian: The Laplacian of a scalar field f is f_{,ii}, of a vector field with components v_i is v_{i,jj}, and of a tensor field with components a_{ij} is a_{ij,kk}. The Laplacian operator is denoted by ∇². Note that for the x-y-z coordinate system,

\nabla^2 f \equiv \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} + \frac{\partial^2 f}{\partial z^2} \equiv \mathrm{div}(\mathrm{grad}\,f). (164)
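Both results can be checked symbolically; a sketch using sympy's vector module, with an arbitrary assumed scalar field: curl(grad f) = 0 as in Eq. (163), and div(grad f) equals the Laplacian of Eq. (164).

```python
import sympy as sp
from sympy.vector import CoordSys3D, gradient, divergence, curl, Vector

N = CoordSys3D('N')
f = N.x**2 * N.y + N.x * sp.sin(N.z)   # assumed sample scalar field

# Curl of a gradient is the zero vector, Eq. (163)
assert curl(gradient(f)) == Vector.zero

# div(grad f) equals the Laplacian f_,ii, Eq. (164)
lap = sp.diff(f, N.x, 2) + sp.diff(f, N.y, 2) + sp.diff(f, N.z, 2)
assert sp.simplify(divergence(gradient(f)) - lap) == 0
```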


Divergence theorem

(i) Let V be the volume of a three-dimensional region bounded by a closed regular surface S. Then, for a scalar field f defined in V and on S,

\int_V \mathrm{grad}\,f\,\mathrm{d}V = \int_S f\,\mathbf{n}\,\mathrm{d}S, (165)

where n is the unit outward normal to S.

(ii) Let V be the volume of a three-dimensional region bounded by a closed regular surface S. Then, for a vector field v defined in V and on S,

\int_V \mathrm{div}\,\mathbf{v}\,\mathrm{d}V = \int_S \mathbf{v}\bullet\mathbf{n}\,\mathrm{d}S, (166)

where n is the unit outward normal to S. Practice Problem 28 provides the theorem concerning a second-order tensor.

Stokes' theorem

Let C be a simple closed curve in three-dimensional space and S be an open regular surface bounded by C. Then, for a vector field v defined on S as well as on C,

\int_C \mathbf{v}\bullet\mathbf{t}\,\mathrm{d}s = \int_S (\mathrm{curl}\,\mathbf{v})\bullet\mathbf{n}\,\mathrm{d}S, (167)

where t is the unit vector tangent to C, which is assumed to be positively oriented relative to the unit normal n to S. Practice Problem 29 extends this theorem to tensors.

Practice Problems:

Q.1: For an incompressible fluid, the continuity equation in the Eulerian form is

\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y} + \frac{\partial w}{\partial z} = 0,

where u, v and w are the components of the velocity field along the x, y and z directions, respectively. Express this equation in index notation.

Q.2: The three-dimensional stress equilibrium equations are given by \sigma_{ij,j} + b_i = 0, where σ is the stress tensor and b is the body force per unit volume. Write down the stress equilibrium equations in unabridged notation.

Q.3: Evaluate the following expressions: (i) \delta_{ii} (ii) \varepsilon_{ijk}\varepsilon_{ijk} (iii) \varepsilon_{ijk}\varepsilon_{kji} (iv) \delta_{ij}\delta_{ik}\delta_{jk}.

Q.4: Prove the following ε-δ identity: \varepsilon_{ijk}\varepsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}.

Q.5: Prove that \varepsilon_{ijk}\varepsilon_{pqr} = \delta_{ip}(\delta_{jq}\delta_{kr} - \delta_{jr}\delta_{kq}) + \delta_{iq}(\delta_{jr}\delta_{kp} - \delta_{jp}\delta_{kr}) + \delta_{ir}(\delta_{jp}\delta_{kq} - \delta_{jq}\delta_{kp}).

Q.6: A second-order tensor is a linear transformation that maps vectors to vectors. For example, if [σ] is the matrix containing the stress components, {n} is the vector of direction cosines of a plane and {t} is the traction vector on the plane, then by Cauchy's relation

\{t\} = [\sigma]\{n\}.

Here, the stress tensor maps the direction cosine vector into the traction vector. The transformation law for vectors is given by

x'_k = \alpha_{ki}x_i.

Using the aforementioned definition of a tensor, find out the transformation law for tensors.

Q.7: If a and b are vectors with components a_i and b_i, respectively, then a_i b_j are the components of a second-order tensor, called the tensor product a⊗b. Prove that (i) if a_{ij} are the components of a second-order tensor A and b_i are the components of a vector b, then a_{ij}b_k are the components of a third-order tensor (known as the tensor product of A and b in that order, denoted A⊗b); (ii) if a_{ij} and b_{ij} are the components of two second-order tensors A and B, then a_{ij}b_{kl} are the components of a fourth-order tensor (known as the tensor product of A and B in that order, denoted A⊗B).

Q.8: If a_{ij} and b_{ij} are the components of two second-order tensors A and B and c_i are the components of a vector c, then prove that (i) a_{ij}c_j are the components of a vector (known as the vector product of A and c in that order, denoted Ac); (ii) a_{ik}b_{kj} are the components of a second-order tensor, called the product of A and B in that order, denoted AB; (iii) a_{ij}b_{ij} is a scalar, called the scalar product of A and B and denoted by A·B.

Q.9: Prove the following quotient rules:

(i) Let a_i be an ordered triplet related to the x_i system. For an arbitrary vector with components b_i, if a_i b_i is a scalar, then a_i are the components of a vector. (ii) Let a_{ij} be a 3×3 matrix related to the x_i system. For an arbitrary vector with components b_i, if a_{ij}b_j are the components of a vector, then a_{ij} are the components of a tensor.

Q.10: By using the transformation law for vectors, show that the vector product a×b is also a vector.

Q.11: Prove that if A is a second-order tensor, then it is a linear operator on vectors and its components are given by a_{ij} = \mathbf{e}_i \cdot \mathbf{A}\mathbf{e}_j. Also prove that, conversely, if A is a linear operator on vectors and a_{ij} are defined by the above equation, then a_{ij} are the components of a second-order tensor.

Q.12: Show that every tensor with components a_{ij} can be represented in the form \mathbf{A} = a_{ij}\,\mathbf{e}_i \otimes \mathbf{e}_j.

Q.13: Prove that (x_2, -x_1) are the components of a first-order Cartesian tensor in two dimensions, whereas (x_2, x_1) and (x_1^2, x_2^2) are not.

Q.14: Let a_{ij} and b_{ij} be the components of two 3×3 matrices, and let their scalar product be defined as a_{ij}b_{ij}. Prove that the scalar product of a symmetric and a skew-symmetric matrix is zero.

Q.15: Prove that the following are three invariants of a tensor A with components a_{ij}:

(i) I_A = \mathrm{tr}\,\mathbf{A} = a_{ii} (tr is a short form of trace);

(ii) II_A = \frac{1}{2}\left[(\mathrm{tr}\,\mathbf{A})^2 - \mathrm{tr}(\mathbf{A}^2)\right] = \frac{1}{2}\left[a_{ii}a_{kk} - a_{ik}a_{ki}\right];

(iii) III_A = \frac{1}{6}\left[(\mathrm{tr}\,\mathbf{A})^3 + 2\,\mathrm{tr}(\mathbf{A}^3) - 3(\mathrm{tr}\,\mathbf{A})(\mathrm{tr}\,\mathbf{A}^2)\right] = \frac{1}{6}\left[a_{ii}a_{jj}a_{kk} + 2a_{ik}a_{km}a_{mi} - 3a_{ik}a_{ki}a_{jj}\right].

Further, prove that the third invariant can also be written as

III_A = \det[a_{ij}] = \varepsilon_{ijk}a_{i1}a_{j2}a_{k3} = \varepsilon_{ijk}a_{1i}a_{2j}a_{3k}.

Q.16: Let a_{ij} be the components of a tensor A and let

a^*_{ij} = \frac{1}{2}\varepsilon_{ipq}\varepsilon_{jrs}a_{pr}a_{qs}.

Show that a^*_{ij} are the components of a tensor. If this tensor is denoted by A*, prove the following:

(i) \mathbf{A}(\mathbf{A}^*)^T = (\mathbf{A}^*)^T\mathbf{A} = (\det\mathbf{A})\,\mathbf{I};

(ii) if A is invertible, then \mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})}(\mathbf{A}^*)^T;

(iii) for all vectors a and b, \mathbf{A}^*(\mathbf{a}\times\mathbf{b}) = \mathbf{A}\mathbf{a}\times\mathbf{A}\mathbf{b};

(iv) for all vectors a, b and c, \left((\mathbf{A}^*)^T\mathbf{a}\right)\cdot(\mathbf{b}\times\mathbf{c}) = \mathbf{a}\cdot(\mathbf{A}\mathbf{b}\times\mathbf{A}\mathbf{c}).

Here A* is called the adjugate or cofactor of A and (A*)^T is called the adjoint of A.

Q.17: Obtain the expressions for the three invariants of the deviatoric tensor of A.

Q.18: Prove that a number λ is an eigenvalue of a tensor A if and only if it is a real root of the cubic equation

\lambda^3 - I_A\lambda^2 + II_A\lambda - III_A = 0.

Q.19: Prove that if A is a symmetric tensor, then all three roots of the characteristic equation of A are real, and therefore A has exactly three (not necessarily distinct) eigenvalues.

Q.20: Prove that eigenvectors (principal directions) corresponding to two distinct eigenvalues (principal values) of a symmetric tensor are orthogonal.

Q.21: Prove that a symmetric tensor has at least three mutually perpendicular principal directions.

Q.22: Show that, given a symmetric tensor A, there exists at least one coordinate system with respect to which the matrix of A is diagonal.

Q.23: Let A be a symmetric tensor with λ_k as eigenvalues and v_k as the corresponding eigenvectors. Show that A can be represented as

\mathbf{A} = \sum_{k=1}^{3}\lambda_k\,(\mathbf{v}_k \otimes \mathbf{v}_k).

This is known as the spectral representation of A.

Q.24: Prove that every invertible tensor A can be represented in the form

\mathbf{A} = \mathbf{Q}\mathbf{U} = \mathbf{V}\mathbf{Q},

where Q is an orthogonal tensor and U and V are positive definite symmetric tensors such that U² = AᵀA and V² = AAᵀ. Furthermore, the representations are unique.

Q.25: If Q(t) is an orthogonal tensor, show that \left(\frac{\mathrm{d}\mathbf{Q}}{\mathrm{d}t}\right)\mathbf{Q}^T is a skew tensor.

Q.26: Prove that x_{i,j} = \delta_{ij} and x_{i,i} = 3.


Q.27: Express the divergence and curl of a vector field and of a tensor field in index notation.

Q.28: Prove the following divergence theorem for a tensor. Let V be the volume of a three-dimensional region bounded by a closed regular surface S. Then, for a tensor field A defined in V and on S,

\int_V \mathrm{div}\,\mathbf{A}\,\mathrm{d}V = \int_S \mathbf{A}\mathbf{n}\,\mathrm{d}S,

where n is the unit outward normal to S.

Q.29: Prove the following Stokes' theorem for a tensor. Let C be a simple closed curve in three-dimensional space and S be an open regular surface bounded by C. Then, for a tensor field A defined on S as well as on C,

\int_C \mathbf{A}\mathbf{t}\,\mathrm{d}s = \int_S (\mathrm{curl}\,\mathbf{A})^T\mathbf{n}\,\mathrm{d}S,

where t is the unit vector tangent to C, which is assumed to be positively oriented relative to the unit normal n to S.