Tensor Analysis

1 Points, vectors and tensors

Let E be an n-dimensional Euclidean space and let U be the space of n-dimensional vectors associated with E. Points of E and vectors of U satisfy the basic rules of vector algebra.

1.1 Inner product, norm and orthogonality

Let

u.v ≡ ⟨u, v⟩ (1)

denote the inner product of u and v. The norm of a vector u is defined as

‖u‖ = √(u.u) (2)

and u is said to be a unit vector if

‖u‖ = 1. (3)

A vector u is said to be orthogonal (or perpendicular) to a vector v if and only if

u.v = 0. (4)

1.1.1 Orthogonal bases and cartesian coordinate frames

A set {ei} ≡ {e1, e2, ..., en} of n mutually orthogonal vectors satisfying

ei.ej = δij (5)

where

δij = 1 if i = j, 0 if i ≠ j (6)

is the Kronecker delta, defines an orthonormal basis for U. Any vector u ∈ U can be represented as

u = u1 e1 + u2 e2 + ... + un en = ui ei (7)

where

ui = ei.u, i = 1...n (8)

are the cartesian components of u relative to the basis ei. Any vector of U is uniquely defined by its components relative to a given basis. This allows us to represent any vector u as a single column matrix, denoted {u}, of components:

{u} = [u1, u2, ..., un]^T. (9)
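As a quick numerical illustration (a sketch in NumPy; the rotated basis chosen here is an arbitrary example, not from the text), the components ui = ei.u of eq. (8) reconstruct u exactly via eq. (7):

```python
import numpy as np

# An orthonormal basis of R^3 (the standard basis rotated about e3; arbitrary choice).
theta = 0.3
e1 = np.array([np.cos(theta), np.sin(theta), 0.0])
e2 = np.array([-np.sin(theta), np.cos(theta), 0.0])
e3 = np.array([0.0, 0.0, 1.0])
basis = [e1, e2, e3]

# Check e_i . e_j = delta_ij, eq. (5).
E = np.column_stack(basis)
assert np.allclose(E.T @ E, np.eye(3))

# Components u_i = e_i . u, eq. (8), and reconstruction u = u_i e_i, eq. (7).
u = np.array([1.0, -2.0, 0.5])
comps = np.array([ei @ u for ei in basis])
u_rec = sum(c * ei for c, ei in zip(comps, basis))
assert np.allclose(u_rec, u)
```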


An orthonormal basis ei, together with an origin point, xo ∈ E, defines a cartesian coordinate frame. Thus, analogously to the representation of vectors, any point x of E can be represented by an array:

{x} = [x1, x2, ..., xn]^T, (10)

of cartesian coordinates of x. The cartesian coordinates xi of x are the cartesian components of the vector

u = x− xo (11)

with

ui = (x − xo).ei. (12)

1.2 Linear operators on vectors. Second order tensors

Second order tensors are linear transformations from U into U, i.e., a second order tensor T : U → U maps each vector u ∈ U into a vector v ∈ U:

v = Tu. (13)

The operations of sum and scalar multiplication of tensors are defined by:

(S + T)u = Su + Tu (14)

(αS)u = α(Su)

where α ∈ R. In addition, the zero tensor, 0, and the identity tensor, I, are, respectively, the tensors that satisfy

(0)u = 0 (15)

(I)u = u

∀u ∈ U. The product of two tensors S and T is the tensor ST defined by:

STu = S (Tu) . (16)

In general,

ST ≠ TS. (17)

If ST = TS, then S and T are said to commute.
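A small numerical check (a sketch; the two matrices are arbitrary illustrations) of the composition rule (16) and of the fact that tensors generally do not commute:

```python
import numpy as np

# Two second order tensors on R^2, represented by their matrices.
S = np.array([[0.0, 1.0],
              [0.0, 0.0]])
T = np.array([[1.0, 0.0],
              [0.0, 2.0]])

u = np.array([3.0, 4.0])

# Composition: (ST)u = S(Tu), eq. (16).
assert np.allclose((S @ T) @ u, S @ (T @ u))

# In general ST != TS, eq. (17): here ST and TS differ.
assert not np.allclose(S @ T, T @ S)
```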


1.2.1 The Transpose, Symmetric and Skew tensors

The transpose, T^T, of a tensor T is the unique tensor that satisfies:

Tu.v = u.T^T v, ∀u, v ∈ U. (18)

If T = T^T then T is said to be symmetric. If T = −T^T then T is said to be skew.

Any tensor T can be decomposed as the sum:

T = Tsym + Tskew (19)

of its symmetric part

Tsym = (1/2)(T + T^T) (20)

and its skew part

Tskew = (1/2)(T − T^T). (21)

Basic properties The following basic properties involving the transpose and the skew and symmetric parts of a tensor hold:

(i) (S + T)^T = S^T + T^T;

(ii) (ST)^T = T^T S^T;

(iii) (T^T)^T = T;

(iv) If T is symmetric, then

Tskew = 0 and Tsym = T ; (22)

(v) If T is skew, then

Tskew = T and Tsym = 0. (23)
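The decomposition (19)-(21) is easy to verify numerically (a sketch; the matrix below is an arbitrary example):

```python
import numpy as np

T = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 10.0]])

T_sym = 0.5 * (T + T.T)   # symmetric part, eq. (20)
T_skew = 0.5 * (T - T.T)  # skew part, eq. (21)

assert np.allclose(T_sym + T_skew, T)   # eq. (19)
assert np.allclose(T_sym, T_sym.T)      # T_sym is symmetric
assert np.allclose(T_skew, -T_skew.T)   # T_skew is skew
```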

1.3 Cross product

In the vector space V (= R³) of translations of the Euclidean point space E, we may define the cross product of the vectors u and v

u× v = εijkuivjek (24)

where ek is the k-th cartesian basis vector and εijk is the permutation symbol.

The cross product has the following properties:

• u × v = −v × u

• (αu + βv) × w = α(u × w) + β(v × w)


• u.(u × v) = 0

• (u × v).(u × v) = (u.u)(v.v) − (u.v)²,

for every u, v and w ∈ V and α, β ∈ R. From the definition in (24) we can see that

ei × ej = εijkek (25)

and

εijk = (ei × ej).ek. (26)

Now, since

εijkεmnp = [(ei × ej).ek] [(em × en).ep] (27)

         = det | ei.em  ei.en  ei.ep |
               | ej.em  ej.en  ej.ep |
               | ek.em  ek.en  ek.ep |

         = det | δim  δin  δip |
               | δjm  δjn  δjp |
               | δkm  δkn  δkp | ,

setting successively: k = p; then k = p and j = n; and finally k = p, j = n and i = m, we derive

εijkεmnk = δimδjn − δinδjm (28)

εijkεmjk = 2δim

εijkεijk = 2δii = 6

Let A denote the matrix represented in a cartesian coordinate system as

[A] = | u1  v1  w1 |
      | u2  v2  w2 |
      | u3  v3  w3 | .

Then the volume of the parallelepiped formed by the edges u, v and w is given by

det [A] = (u× v) . w = εijkuivjwk (29)

Two additional properties are given by:

• u× (v × w) = (u.w)v − (u.v) w

• (u× v)× w = (w.u)v − (w.v)u
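The cross product definition (24), the volume formula (29) and the ε-δ identity (28) can all be checked numerically (a sketch; the vectors are arbitrary illustrations):

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol on R^3: sign of the permutation (i, j, k).
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

u = np.array([1.0, 2.0, 3.0])
v = np.array([-1.0, 0.5, 2.0])
w = np.array([0.0, 1.0, 1.0])

# (u x v)_k = eps_ijk u_i v_j, eq. (24).
cross = np.einsum('ijk,i,j->k', eps, u, v)
assert np.allclose(cross, np.cross(u, v))

# Triple product equals det[u | v | w], eq. (29).
A = np.column_stack([u, v, w])
assert np.isclose(np.linalg.det(A), np.dot(np.cross(u, v), w))

# eps_ijk eps_mnk = delta_im delta_jn - delta_in delta_jm, eq. (28).
d = np.eye(3)
lhs = np.einsum('ijk,mnk->ijmn', eps, eps)
rhs = np.einsum('im,jn->ijmn', d, d) - np.einsum('in,jm->ijmn', d, d)
assert np.allclose(lhs, rhs)
```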


1.4 Tensors and their components

We call a second order tensor a linear transformation T : V → V that assigns to a given vector v a vector u, i.e.,

u = Tv (30)

Since T is a linear transformation,

T(αv1 + βv2) = αTv1 + βTv2

for all v1, v2 ∈ V and α, β ∈ R.

1.5 Operations with tensors

The set of tensors forms a linear space L(V,V).

1.5.1 Trace function

The trace of a tensor T is defined as the sum of the diagonal elements of the matrix [T] that represents T with respect to a cartesian basis ei, i = 1, ...n, i.e.

tr [T ] = Tii (31)

The trace is a linear transformation tr : L → R, since

tr[αT + βR] = α tr[T] + β tr[R], ∀T, R ∈ L and α, β ∈ R.

Properties:

• tr [A] = 0, ∀A ∈ Skew

• tr [I] = 3

• tr[A^T] = tr[A], ∀A ∈ L

• tr[T1T2T3] = tr[T3T1T2] = tr[T2T3T1] (cyclic permutation)

1.5.2 Inner product

The inner product of two tensors T , R ∈ L may be defined as:

T.R = tr[TR^T]. (32)

Let [T] and [R] be the matrix representations of the linear transformations T : V → V and R : V → V with respect to a cartesian basis ei, i = 1, ...n. Then, the inner product of two tensors may be expressed as

T.R = TijRij

At this point, we can notice that the trace function may also be defined as

tr [A] = A.I (33)

where I is the identity transformation.
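The trace and inner-product identities (31)-(33) and the cyclic property are cheap to verify numerically (a sketch; the matrices are arbitrary illustrations):

```python
import numpy as np

T = np.array([[1.0, 2.0], [3.0, 4.0]])
R = np.array([[0.0, -1.0], [1.0, 0.5]])

# Inner product T.R = tr[T R^T] = T_ij R_ij, eq. (32).
assert np.isclose(np.trace(T @ R.T), np.sum(T * R))

# tr[A] = A.I, eq. (33).
assert np.isclose(np.trace(T), np.sum(T * np.eye(2)))

# Cyclic permutation of the trace.
T1, T2, T3 = T, R, T + R
assert np.isclose(np.trace(T1 @ T2 @ T3), np.trace(T3 @ T1 @ T2))
```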


1.6 Tensor Product

The tensor product of the vectors a and b, represented by a ⊗ b, is the tensor (linear transformation) defined by

(a ⊗ b)v = (b.v)a, ∀v ∈ V. (34)

From the above definition, we may obtain the following properties:

• (αa + βb) ⊗ c = α(a ⊗ c) + β(b ⊗ c)

• a ⊗ (αb + βc) = α(a ⊗ b) + β(a ⊗ c)

for all a, b and c ∈ V and α, β ∈ R. The following formulas are also valid:

tr(a ⊗ b) = a.b

(a ⊗ b)^T = b ⊗ a

(a ⊗ b)(c ⊗ d) = (b.c)(a ⊗ d)

T(a ⊗ b) = (Ta) ⊗ b

(a ⊗ b)T = a ⊗ (T^T b)

Σi (ei ⊗ ei) = I

(35)

for any a, b, c and d ∈ V and T ∈ L.

1.6.1 Trace, inner product and Euclidean norm

For any u,v ∈ U , the trace of the tensor (u⊗ v) is the linear map defined as

tr (u⊗ v) = u.v. (36)

For a generic tensor, T = Tij (ei ⊗ ej), it then follows that

tr (T ) = Tijtr (ei ⊗ ej) = Tijδij = Tii, (37)

that is, the trace of T is the sum of the diagonal terms of the matrix representation [T].

The inner product, S.T, between two second order tensors S and T is defined as

S.T ≡ S : T = SijTij . (38)

The Euclidean norm of a tensor T is defined as:

‖T‖ = √(T.T) = √(T11² + T12² + ··· + Tnn²). (39)
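The defining property (34) of the tensor product, the trace formula (36) and the Euclidean (Frobenius) norm (39) in a short numerical sketch (the vectors are arbitrary illustrations):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([0.5, -1.0, 2.0])
v = np.array([2.0, 0.0, 1.0])

ab = np.outer(a, b)  # matrix of a (x) b

# (a (x) b)v = (b.v)a, eq. (34).
assert np.allclose(ab @ v, (b @ v) * a)

# tr(a (x) b) = a.b, eq. (36).
assert np.isclose(np.trace(ab), a @ b)

# Euclidean norm ||T|| = sqrt(T.T) = sqrt(sum T_ij^2), eq. (39).
assert np.isclose(np.sqrt(np.sum(ab * ab)), np.linalg.norm(ab, 'fro'))
```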


1.6.2 Basic properties

The following basic properties involving the inner product of tensors hold for any tensors R, S, T and vectors s, t, u and v:

(i) I.T = tr (T ) ;

(ii) R.(ST) = (S^T R).T = (RT^T).S;

(iii) u.Sv = S. (u⊗ v) ;

(iv) (s ⊗ t).(u ⊗ v) = (s.u)(t.v);

(v) Tij = T. (ei ⊗ ej) ;

(vi) (u⊗ v)ij = (u⊗ v) . (ei ⊗ ej) = uivj ;

(vii) If S is symmetric, then S.T = S.TT = S.Tsym;

(viii) If S is skew, then S.T = −S.TT = S.Tskew;

(ix) If S is symmetric and T is skew, then S.T = 0.

1.7 Basis in L

Let T ∈ L and ei, i = 1, ...n be a cartesian basis of V. Then,

T = Tij (ei ⊗ ej) (40)

where

(ei ⊗ ej), i, j = 1, ...n (41)

is a basis for L and Tij are the components of T with respect to this basis.

1.7.1 Cartesian components and matrix representation

Any second order tensor T can be represented as:

T = T11(e1 ⊗ e1) + T12(e1 ⊗ e2) + ... + Tnn(en ⊗ en) (42)

  = Tij(ei ⊗ ej)

where

Tij = ei.Tej (43)

are the cartesian components of T. Any tensor is uniquely defined by its cartesian components. Thus, by arranging the components Tij in a matrix, we may have the following matrix representation for T:

[T] = | T11  T12  . . .  T1n |
      | T21  T22  . . .  T2n |
      |  ⋮    ⋮    ⋱      ⋮  |
      | Tn1  Tn2  · · ·  Tnn | . (44)


For instance, the cartesian components of the identity tensor read:

Iij = δij , (45)

so that its matrix representation is given by:

[I] = | 1  0  . . .  0 |
      | 0  1  . . .  0 |
      | ⋮  ⋮   ⋱     ⋮ |
      | 0  0  · · ·  1 | . (46)

The cartesian components of the vector v = Tu are given by:

vi = [Tlk (el ⊗ ek)ujej ] .ei = Tijuj.

Thus, the array {v} of cartesian components of v is obtained from the matrix-vector product:

| v1 |   | T11  T12  . . .  T1n | | u1 |
| v2 | = | T21  T22  . . .  T2n | | u2 |
| ⋮  |   |  ⋮    ⋮    ⋱      ⋮  | | ⋮  |
| vn |   | Tn1  Tn2  · · ·  Tnn | | un | . (47)

It can be easily proved that the cartesian components T^T_ij of the transpose T^T of a tensor T are given by:

T^T_ij = Tji. (48)

Thus, T^T has the following cartesian matrix representation:

[T^T] = | T11  T21  . . .  Tn1 |
        | T12  T22  . . .  Tn2 |
        |  ⋮    ⋮    ⋱      ⋮  |
        | T1n  T2n  · · ·  Tnn | . (49)
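The component formula (43) and the matrix-vector product (47) in a short numerical sketch (the random tensor here is an arbitrary illustration, generated with a fixed seed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
T = rng.standard_normal((n, n))
u = rng.standard_normal(n)

# Cartesian components T_ij = e_i . T e_j, eq. (43), with e_i the standard basis.
E = np.eye(n)
comps = np.array([[E[i] @ T @ E[j] for j in range(n)] for i in range(n)])
assert np.allclose(comps, T)

# v = Tu in components: v_i = T_ij u_j, eq. (47).
assert np.allclose(T @ u, np.einsum('ij,j->i', T, u))

# Transpose components: (T^T)_ij = T_ji, eq. (48).
assert np.allclose(T.T, T.transpose())
```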

The skew part of the tensor product a ⊗ b is a tensor denoted the external product, a ∧ b, i.e., by definition

a ∧ b = (a ⊗ b)skew (50)

      = (1/2)[(a ⊗ b) − (a ⊗ b)^T]

      = (1/2)[(a ⊗ b) − (b ⊗ a)].

Also, a fourth order tensor D : L → L is defined as:

σ = Dε, ∀σ, ε ∈ L

where

(a ⊗ b ⊗ c ⊗ d)(e ⊗ f) ≡ (c.e)(d.f)(a ⊗ b). (51)


1.7.2 Determinant of a tensor

The determinant function is a scalar function of a tensor argument, defined in a cartesian coordinate system by its components as

εijkεpqr det[T] = det | Tip  Tiq  Tir |   = det | Tpi  Tqi  Tri |
                     | Tjp  Tjq  Tjr |         | Tpj  Tqj  Trj |
                     | Tkp  Tkq  Tkr |         | Tpk  Tqk  Trk | (52)

i.e.

εijk det[T] = εpqr Tip Tjq Tkr (53)

            = εpqr Tpi Tqj Trk

            = det | Ti1  Ti2  Ti3 |   = det | T1i  T2i  T3i |
                  | Tj1  Tj2  Tj3 |         | T1j  T2j  T3j |
                  | Tk1  Tk2  Tk3 |         | T1k  T2k  T3k |

and

det[T] = (1/6) εijk εpqr Tip Tjq Tkr (54)

       = (1/6) εijk εpqr Tpi Tqj Trk

       = det | T11  T12  T13 |   = det | T11  T21  T31 |
             | T21  T22  T23 |         | T12  T22  T32 |
             | T31  T32  T33 |         | T13  T23  T33 |

With these relations, we may derive

• det[I] = 1;

• det[T^T] = det[T];

• det(αT ) = α3 det [T ] , ∀α ∈ R, dim(V) = 3;

• det(u⊗ v) = 0;

• det(RT ) = det (R) det (T ).


1.8 Cofactor of a tensor

Let C_T = cof[T] be the cofactor of the tensor T, whose components C_Tip are the cofactors of the components Tip of tensor T. Expanding det[T] in terms of cofactors, we derive

det[T] = (1/6) εijk εpqr Tip Tjq Tkr

       = (1/3) T.C_T = (1/3) Tip C_Tip

then, with respect to a cartesian basis, we have

C_Tip = (1/2) εijk εpqr Tjq Tkr (55)

The tensor (C_T)^T = (cof[T])^T = cof[T^T] is denoted the adjoint tensor, represented by adj[T]:

adj[T] = (cof[T])^T = cof[T^T] (56)

This tensor has the following properties:

T (adj [T ]) = (adj [T ])T = I det [T ] (57)

In fact,

C_Tip Tmp = (1/2) εijk εpqr Tmp Tjq Tkr

          = (1/2) εijk εmjk det[T]

          = δim det[T]

Notice that

det(T adj[T]) = det(I det[T]) = (det[T])³ = det[T] det[adj[T]].

Thus, if T is non-singular, i.e., det[T] ≠ 0, then

det [adj [T ]] = (det [T ])2 . (58)

Multiplying (57) by (det[T])⁻¹ we derive

(det[T])⁻¹ T adj[T] = I

which implies

T⁻¹ = (1/det[T]) adj[T] (59)
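The cofactor formula (55) and the adjugate inverse (57)-(59) in a numerical sketch (the matrix is an arbitrary non-singular illustration):

```python
import numpy as np
from itertools import permutations

# Levi-Civita symbol on R^3.
eps = np.zeros((3, 3, 3))
for i, j, k in permutations(range(3)):
    eps[i, j, k] = np.sign(np.linalg.det(np.eye(3)[[i, j, k]]))

T = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 2.0]])

# Cofactor components C_ip = (1/2) eps_ijk eps_pqr T_jq T_kr, eq. (55).
C = 0.5 * np.einsum('ijk,pqr,jq,kr->ip', eps, eps, T, T)

adjT = C.T  # adj[T] = (cof[T])^T, eq. (56)
detT = np.linalg.det(T)

# T adj[T] = I det[T], eq. (57), and T^{-1} = adj[T]/det[T], eq. (59).
assert np.allclose(T @ adjT, detT * np.eye(3))
assert np.allclose(np.linalg.inv(T), adjT / detT)
```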


1.8.1 Inverse tensor and determinant

A tensor T is said to be invertible if its inverse, denoted T⁻¹, satisfying

T⁻¹T = TT⁻¹ = I

exists. The determinant of a tensor T, denoted det(T), is the determinant of the matrix [T]. A tensor T is invertible if and only if

det(T) ≠ 0.

A tensor T is said to be positive definite if

Tu.u > 0, ∀u ≠ 0.

Any positive definite tensor is invertible.

Basic relations involving the determinant and inverse tensor Relation (i) below holds for any tensors S and T:

(i) det (ST ) = det (S) det (T );

(ii) det(T⁻¹) = (det(T))⁻¹;

(iii) (ST)⁻¹ = T⁻¹S⁻¹;

(iv) (T⁻¹)^T = (T^T)⁻¹.

1.8.2 Geometric interpretation of det [T ]

We have seen that the volume V(P) of a parallelepiped formed by the edges u, v and w is given by:

V (P ) = |(u× v) .w| = |εijkuivjwk| (60)

A tensor T transforms the parallelepiped into another parallelepiped with volume ϑ(P), i.e.,

ϑ(P) = |(Tu × Tv).Tw| (61)

     = |εpqr Tpi Tqj Trk ui vj wk|
     = |εijk ui vj wk det[T]|
     = |(u × v).w| |det[T]|

thus

ϑ(P) = |det[T]| V(P)

consequently

ϑ(P)/V(P) = |det[T]| = |(Tu × Tv).Tw| / |(u × v).w| (62)


1.9 Analysis of Tensorial Functions

1.9.1 Derivative of scalar functions

Scalar functions f : D → R may have vector (D ⊂ V) or tensor (D ⊂ L) arguments. Consider the function f : D ⊂ V → R. We say that a scalar function f is differentiable at x ∈ D (open set), along the direction u, when the following limit exists

Df(x; u) = lim_{h→0} [f(x + hu) − f(x)]/h = (d/dh) f(x + hu)|_{h=0}. (63)

If f is differentiable, then

Df (x; u) = ∇f (x) .u. (64)

Consider the function f : D ⊂ L → R. We say that a scalar function f is differentiable at T ∈ D (open set), along the direction C, when the following limit exists

Df(T; C) = lim_{h→0} [f(T + hC) − f(T)]/h = (d/dh) f(T + hC)|_{h=0} (65)

If f is differentiable, then

Df(T; C) = ∇T f(T).C = tr[∇T f(T) C^T] (66)

1.9.2 Example

Consider the case

f(T) = tr[T^k] (67)

Then, Df(T; C) = (d/dh) f(T + hC)|_{h=0}. Now, f(T + hC) = tr[(T + hC)^k]. But, from the binomial formula we have

(T + hC)^k = T^k + h k T^{k−1}C + (1/2) h² k(k − 1) T^{k−2}C² + ... + h^k C^k.

Hence, from the linearity of the trace function, we may write

tr[(T + hC)^k] = tr[T^k] + h k tr[T^{k−1}C] + o(h²)

consequently

Df(T; C) = k tr[T^{k−1}C]. (68)
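Result (68) can be checked against a finite-difference approximation of the directional derivative (a sketch; T, C and k below are arbitrary illustrations with a fixed random seed):

```python
import numpy as np

rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))
k = 4

f = lambda X: np.trace(np.linalg.matrix_power(X, k))  # f(T) = tr[T^k], eq. (67)

# Central finite difference vs. Df(T;C) = k tr[T^{k-1} C], eq. (68).
h = 1e-6
fd = (f(T + h * C) - f(T - h * C)) / (2 * h)
exact = k * np.trace(np.linalg.matrix_power(T, k - 1) @ C)
assert np.isclose(fd, exact, rtol=1e-4)
```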

Consider now a tensor T, with V = R³. The characteristic equation associated with the tensor T is given by

det[T − λI] = p(λ) = 0


i.e.

p(λ) = λ³ − I_T λ² + II_T λ − III_T = 0 (69)

The invariants of T , denoted by IT , IIT and IIIT , are given by

I_T = tr[T] (70)

II_T = (1/2)[I_T² − tr[T²]]

III_T = det[T] = (1/6)[(tr[T])³ − 3 tr[T²] tr[T] + 2 tr[T³]]

However, the Cayley-Hamilton theorem states that any tensor T satisfies its characteristic equation, i.e.,

T³ − I_T T² + II_T T − III_T I = 0 (71)

From the above results, we may derive:

∂I_T/∂T = I (72)

∂II_T/∂T = (I tr[T] − T)^T

∂III_T/∂T = (T² − I_T T + II_T I)^T

Notice that,

∂I_T/∂Tij = ∂Tkk/∂Tij = δik δkj = δij

hence

∂I_T/∂T = I.

Moreover, since tr[T] = I.T = I_T,

∂II_T/∂Tij = (1/2)[2(tr[T]) ∂I_T/∂Tij − ∂(Trk Tkr)/∂Tij]

           = I_T δij − (1/2)[(∂Trk/∂Tij) Tkr + (∂Tkr/∂Tij) Trk]

           = I_T δij − (1/2)(δir δjk Tkr + δik δjr Trk)

           = I_T δij − (1/2)(Tji + Tji)

           = I_T δij − Tji

thus

∂II_T/∂T = I_T I − T^T

From the Cayley-Hamilton theorem,

T³ − I_T T² + II_T T − III_T I = 0


Taking the trace of the above expression, one derives

tr[T³] − I_T tr[T²] + tr[T] II_T − 3 III_T = 0

Since I_T = tr[T],

tr[T³] − I_T tr[T²] + I_T II_T − 3 III_T = 0

Tir Trs Tsi − I_T Tis Tsi + I_T II_T − 3 III_T = 0

consequently

(∂/∂Tab)[Tir Trs Tsi − I_T Tis Tsi + I_T II_T − 3 III_T] = 0

i.e.

δia δrb Trs Tsi + Tir δra δsb Tsi + Tir Trs δsa δib − δab Tis Tsi − I_T δia δsb Tsi
− I_T Tis δsa δib + δab II_T + I_T (I_T δab − Tba) − 3 ∂III_T/∂Tab = 0

thus

Tbs Tsa + Tia Tbi + Tbr Tra − δab Tis Tsi − I_T Tba
− I_T Tba + δab II_T + I_T (I_T δab − Tba) − 3 ∂III_T/∂Tab = 0

So, in compact notation we have

3(T²)^T − tr[T²] I − 2I_T T^T + II_T I + I_T (I_T I − T^T) = 3 ∂III_T/∂T.

But II_T = (1/2)[(tr[T])² − tr[T²]], then

3 ∂III_T/∂T = 3(T²)^T + [I_T² − tr[T²]] I + II_T I − 3I_T T^T

            = 3(T²)^T + 3 II_T I − 3I_T T^T

i.e.

∂III_T/∂T = (T²)^T + II_T I − I_T T^T

which can be written as

∂III_T/∂T = (T² − I_T T + II_T I)^T.

Now, T³ − I_T T² + II_T T − III_T I = 0, so

T² − I_T T + II_T I − III_T T⁻¹ = 0


i.e.

T² − I_T T + II_T I = III_T T⁻¹

which allows us to write

∂III_T/∂T = III_T T^{-T}. (73)

Consider now the relation (d/dt) det[T(t)]. Then, we have

(d/dt) det[T(t)] = (d/dt) III_{T(t)} (74)

                 = (∂III_T/∂Trs) (d/dt) Trs(t)

                 = (∂III_T/∂T).Ṫ(t)

Substituting (73) into (74) we derive

(d/dt) det[T(t)] = det[T(t)] T^{-T}.Ṫ

                 = det[T(t)] Ṫ.T^{-T}

                 = det[T(t)] tr[ṪT⁻¹]

consequently

(d/dt) det[T(t)] = det[T] tr[ṪT⁻¹]. (75)
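Formula (75) (Jacobi's formula) can be checked against a finite-difference derivative (a sketch; the matrices below are arbitrary illustrations, kept near I so the determinant stays positive):

```python
import numpy as np

rng = np.random.default_rng(2)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # T(0), invertible
B = rng.standard_normal((3, 3))                    # constant rate Tdot

T = lambda t: A + t * B
detT = lambda t: np.linalg.det(T(t))

# d/dt det[T(t)] = det[T] tr[Tdot T^{-1}], eq. (75), via central differences.
h = 1e-6
fd = (detT(h) - detT(-h)) / (2 * h)
exact = detT(0.0) * np.trace(B @ np.linalg.inv(T(0.0)))
assert np.isclose(fd, exact, rtol=1e-4)
```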

1.9.3 Derivatives of vector valued functions

Vector valued functions f : D → V may have vector (D ⊂ V) or tensor (D ⊂ L) arguments. Consider the vector valued function with a vector argument: f : D ⊂ V → V. We say that f is differentiable at x, along the direction u, when the following limit exists

Df(x; u) = lim_{h→0} [f(x + hu) − f(x)]/h = (d/dh) f(x + hu)|_{h=0} (76)

If f is differentiable, then

Df (x; u) = [∇f (x)] u (77)


Definition Consider the vector valued function with a vector argument: f : D ⊂ V → V. Then, the divergence of f(x), represented by div[f(x)], is defined by

div[f(x)] := tr[∇f(x)] (78)

In a cartesian coordinate system,

div[f(x)] = ∂fi/∂xi = fi,i (79)

The rotational (curl) of f(x), represented by rot[f(x)], is defined by

rot[f(x)] × v = 2 [∇f(x)]skew v = 2 w∇f × v (80)

where w∇f is the axial vector associated with the skew part of ∇f(x), with

[∇f(x)]skew = (1/2)[∇f(x) − ∇f(x)^T]

In a cartesian coordinate system, we have

rot[f(x)]i = εijk ∂fk/∂xj (x) (81)

Notice that, if a tensor A is skew, with V = R³, then A = −A^T implies

A = |  0   −a3   a2 |
    |  a3    0  −a1 |
    | −a2   a1    0 | (82)

where wA = (a1, a2, a3) is the associated axial vector.

1.9.4 Derivatives of tensorial valued functions

Tensor valued functions f : D → L may have vector (D ⊂ V) or tensor (D ⊂ L) arguments. Consider the tensor valued function with a tensor argument: f : D ⊂ L → L. We say that f is differentiable at T, along the direction C, when the following limit exists

Df(T; C) = lim_{h→0} [f(T + hC) − f(T)]/h = (d/dh) f(T + hC)|_{h=0} (83)

If f is differentiable, then

Df (T ;C) = [∇f (T )]C (84)

In terms of a cartesian coordinate system,

Df (T ;C)ij = [∇f (T )]ijklCkl


Divergence of a tensor field The divergence of a tensor field F(x) ∈ L, at x, is the unique vector defined by

div[F(x)].v = div[F(x)^T v], ∀v ∈ V (85)

In terms of a cartesian coordinate system,

div[F(x)]i = ∂Fij/∂xj (x) (86)

Rotational of a tensor field The rotational of a tensor field, at x, is a tensor defined by

rot[F(x)]v = rot[F(x)^T v], ∀v ∈ V (87)

In terms of a cartesian coordinate system,

rot[F(x)]ij = εimk ∂Fjk/∂xm (x) (88)

Laplacian of a scalar field Is the scalar valued function, φ : x ∈ V → R, defined by

∆φ = div[∇φ(x)] (89)

In terms of a cartesian coordinate system,

∆φ = ∂²φ/∂xi² (x) = φ,ii (x) (90)

Laplacian of a vector valued field Is the vector valued function, f : x ∈ V → V, defined by

∆f = div[∇f(x)] (91)

In terms of a cartesian coordinate system,

∆fi = ∂²fi/∂xj² (x) = fi,jj (x) (92)

Laplacian of a tensor valued field Is the tensor valued function, T : x ∈ V → L, defined by

[∆T(x)]v = ∆([T(x)]v), ∀v ∈ V (93)

In terms of a cartesian coordinate system,

[∆T(x)]ij = ∂²Tij/∂xk² (x) = Tij,kk (x) (94)


Properties From the above relations, we may derive

1. div (u⊗ v) = grad [u]v + u div [v]

2. div (φ u) = grad [φ] .u+ φ div [u]

3. grad [φ u] = u⊗ grad [φ] + φ grad [u]

4. div[T^T u] = u.div[T] + T.grad[u]

5. div (φ T ) = φ div [T ] + [T ] grad [φ]

6. rot [φ u] = grad [φ]× u+ φ rot [u]

7. rot [u× v] = grad [u]v − grad [v] u+ u div [v]− v div [u]

8. div (u× v) = v.rot [u]− u.rot [v]

9. rot [v]× u = grad [v] u− grad [v]T u

10. rot [grad [φ]] = 0

11. div [rot [u]] = 0

12. rot [rot [u]] = grad [div [u]]−∆u

1.9.5 Derivative of tensorial and vectorial fields parametrized by a scalar variable (t-time)

• (d/dt)[a(t) ⊗ b(t)] = [(d/dt)a(t)] ⊗ b(t) + a(t) ⊗ [(d/dt)b(t)]

• (d/dt)T⁻¹ = −T⁻¹ Ṫ T⁻¹

In fact, since

T T⁻¹ = I

we obtain, by differentiation, that

Ṫ T⁻¹ + T (d/dt)T⁻¹ = 0

thus

(d/dt)T⁻¹ = −T⁻¹ Ṫ T⁻¹ (95)
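Relation (95) in a finite-difference sketch (the matrices are arbitrary illustrations, kept near I so T(t) stays invertible):

```python
import numpy as np

rng = np.random.default_rng(3)
A = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # T(0), invertible
B = rng.standard_normal((3, 3))                    # constant rate Tdot

Tinv = lambda t: np.linalg.inv(A + t * B)

# d/dt T^{-1} = -T^{-1} Tdot T^{-1}, eq. (95), via central differences.
h = 1e-6
fd = (Tinv(h) - Tinv(-h)) / (2 * h)
exact = -Tinv(0.0) @ B @ Tinv(0.0)
assert np.allclose(fd, exact, rtol=1e-4, atol=1e-6)
```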


2 Integrals of Tensor fields

Here, we consider tensorial fields ℑ : D → Y and regular regions R contained in D ⊂ V, i.e., regions contained in D with a regular contour ∂R. A region is called regular if it is bounded, orientable and has a continuous normal vector field, pointing outward from R.

The general expression for the divergence theorem is given by

∫_∂R ℑijk... nα dA = ∫_R ∂ℑijk.../∂xα dV (96)

where ℑijk... denote tensorial components of class C¹ in R. From the general expression, we may derive the following particular expressions.

where ℑijk... denote tensorial components of class C1 in R.From the general expression, we may derive the following particular expres-

sions

2.0.6 Scalar valued fields

Let φ(x) : V → R, then

∫_∂R φ n dA = ∫_R grad[φ] dV (97)

In a cartesian coordinate system,

∫_∂R φ nα dA = ∫_R ∂φ/∂xα dV (98)

2.0.7 Vector valued fields

Let u(x) : V → V, then

∫_∂R u.n dA = ∫_R div[u] dV (99)

In a cartesian coordinate system,

∫_∂R ui ni dA = ∫_R ∂ui/∂xi dV (100)

2.0.8 Tensor valued fields

Let T(x) : V → L, then

∫_∂R Tn dA = ∫_R div[T] dV (101)

In a cartesian coordinate system,

∫_∂R Tij nj dA = ∫_R ∂Tij/∂xj dV (102)


Let ℑij = εijk uk. Then

∫_∂R εijk uk nj dA = ∫_R εijk ∂uk/∂xj dV (103)

which may be written in compact form as

∫_∂R n × u dA = ∫_R rot[u] dV. (104)

Also,

∫_∂R ui nj dA = ∫_R ∂ui/∂xj dV (105)

which may be written in compact form as

∫_∂R u ⊗ n dA = ∫_R grad[u] dV. (106)

Moreover,

∫_∂R u ⊗ Tn dA = ∫_R [grad[u] T^T + u ⊗ div[T]] dV. (107)

Now, considering u = ψ∇φ, where both ψ and φ are scalar valued fields, we derive

∫_∂R ψ∇φ.n dA = ∫_R div[ψ∇φ] dV

              = ∫_R (∇ψ.∇φ + ψ div[∇φ]) dV

              = ∫_R (∇ψ.∇φ + ψ∆φ) dV

interchanging ψ and φ and subtracting, we derive

∫_∂R (ψ∇φ − φ∇ψ).n dA = ∫_R (ψ∆φ − φ∆ψ) dV. (108)

2.1 Stokes Theorem

Consider now a regular surface S with a closed contour C. The general expression of Stokes' theorem is

∮_C ℑijk... dxi = ∫_S εpqr (∂ℑrjk.../∂xq) np dA (109)

where dxi are the components of the tangent vector to C at x.


3 Homework #2

i) Consider that ϕ ∈ R, v and u ∈ V and S ∈ L. Show that:

1. ∇ (ϕv) = ϕ∇v + v ⊗∇ϕ;

2. div (ϕv) = ϕdiv (v) + v.∇ϕ;

3. div(S^T v) = S.∇v + v.div(S);

4. div(ϕS) = ϕ div(S) + S∇ϕ;

5. rot (ϕu) = ∇ϕ× u+ ϕrot (u);

6. div (u× v) = v.rot (u)− u.rot (v);

ii) Let v and w ∈ V and S ∈ L. Show that:

1. ∫_∂Ω v.Sn dΓ = ∫_Ω [v.div(S) + S.∇v] dΩ

2. ∫_∂Ω v(w.n) dΓ = ∫_Ω [v div(w) + [∇v]w] dΩ

3. ∫_∂Ω Sn ⊗ v dΓ = ∫_Ω [div(S) ⊗ v + [S][∇v]^T] dΩ

iii) show that

1. εijk εmjk = 2δim

2. εijk εijk = 6

3. rot(u)i = εijk ∂uk/∂xj

4. [rot(T)]ij = εimk ∂Tjk/∂xm

5. [∆f(x)]i = ∂²fi/∂xj² = fi,jj

4 Tensor Operations

4.1 Orthogonal tensors

A tensor Q is said to be orthogonal if

QT = Q−1. (110)

This definition implies that the determinant of any orthogonal tensor equals either +1 or −1. An orthogonal tensor Q with

det(Q) = 1 (111)


is called a proper orthogonal tensor (or rotation). The product Q1Q2 of any two orthogonal tensors Q1 and Q2 is an orthogonal tensor. For all vectors u and v, an orthogonal tensor Q satisfies:

Qu.Qv = u.v (112)

Rotations and changes of basis Let ei, i = 1...n and e*i, i = 1...n be two orthonormal bases of U. Such bases are related by:

e*j = Rej, for j = 1...n, (113)

where R is a rotation (proper orthogonal tensor). Let T and u be, respectively, a tensor and a vector with matrix representations [T] and {u} with respect to the basis ei, i = 1...n. The matrix representations [T*] and {u*} of T and u relative to the basis e*i, i = 1...n are given by the following products of matrices:

[T*] = [R]^T [T] [R] and {u*} = [R]^T {u}. (114)

Equivalently, in component form, we have:

T ∗ij = RkiTklRlj and u∗i = Rjiuj . (115)

The matrix [R] is given by:

[R] = | e1.e*1  e1.e*2  . . .  e1.e*n |
      | e2.e*1  e2.e*2  . . .  e2.e*n |
      |   ⋮       ⋮       ⋱      ⋮    |
      | en.e*1  en.e*2  · · ·  en.e*n | , (116)

or, in component form,

Rij = ei.e*j. (117)

4.1.1 Example: A rotation matrix in two dimensions

In a two dimensional space, the rotation tensor has a simple cartesian representation. Let the tensor R be a transformation that rotates all vectors of the two-dimensional space by an (anti-clockwise positive) angle θ. The matrix representation of R is given as:

[R] = | cos(θ)  −sin(θ) |
      | sin(θ)   cos(θ) | . (118)
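The 2D rotation (118), the orthogonality properties (110)-(112) and the change-of-basis rule (114) in a short numerical sketch (the angle, vectors and tensor are arbitrary illustrations):

```python
import numpy as np

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # eq. (118)

# R is proper orthogonal: R^T = R^{-1} and det R = 1, eqs. (110)-(111).
assert np.allclose(R.T @ R, np.eye(2))
assert np.isclose(np.linalg.det(R), 1.0)

# Orthogonal tensors preserve inner products: Qu.Qv = u.v, eq. (112).
u = np.array([1.0, 2.0])
v = np.array([-0.5, 3.0])
assert np.isclose((R @ u) @ (R @ v), u @ v)

# Change of basis, eq. (114): the transformed components [T*] = R^T T R act on
# transformed vector components {u*} = R^T u exactly as T acts on u.
T = np.array([[2.0, 1.0], [0.0, 3.0]])
T_star = R.T @ T @ R
assert np.allclose(T_star @ (R.T @ u), R.T @ (T @ u))
```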

4.2 Spectral decomposition

Given a tensor T, a non-zero vector u is said to be an eigenvector of T associated with a given eigenvalue λ if

Tu = λ u. (119)

The space of all vectors u satisfying the above relation is called the eigenspace (or characteristic space) of T corresponding to λ. The following properties hold:


(i) The eigenvalues of a positive definite tensor are strictly positive

Proof: Let λi be an eigenvalue of A, a positive definite tensor. Then, ∃ vi, ‖vi‖ = 1, so that

Avi = λi vi (120)

therefore

λi = λi ⟨vi, vi⟩ = λi ‖vi‖² (121)

   = Avi.vi > 0.

(ii) The characteristic spaces of a symmetric tensor are mutually orthogonal.

Proof: let µ and λ be distinct eigenvalues of a symmetric tensor S; then there exist v and w, with ‖v‖ = 1 and ‖w‖ = 1, so that

Sv = µv and Sw = λw (122)

then

µv.w = Sv.w = v.Sw and λw.v = Sw.v (123)

subtracting both equations yields

(µ − λ)v.w = 0. (124)

Since (µ − λ) ≠ 0, we must have

v.w = 0. (125)

4.2.1 Spectral theorem

Let S be a symmetric tensor. Then S admits the representation

S = Σ_{i=1}^{n} ωi (vi ⊗ vi), (126)

where (vi, i = 1...n) is an orthonormal basis for U consisting exclusively of eigenvectors of S and ωi are the corresponding eigenvalues. The above representation is called the spectral decomposition of S. Relative to the basis (vi, i = 1...n), S has the following diagonal representation

[S] = | ω1   0   . . .   0 |
      |  0   ω2  . . .   0 |
      |  ⋮    ⋮    ⋱     ⋮ |
      |  0    0  · · ·  ωn | . (127)
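The spectral decomposition (126)-(127) in a numerical sketch (the symmetric matrix is an arbitrary illustration):

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])  # symmetric

w, V = np.linalg.eigh(S)  # eigenvalues w_i and orthonormal eigenvectors (columns of V)

# Spectral decomposition S = sum_i w_i (v_i (x) v_i), eq. (126).
S_rec = sum(w[i] * np.outer(V[:, i], V[:, i]) for i in range(3))
assert np.allclose(S_rec, S)

# Relative to the eigenvector basis, S is diagonal, eq. (127).
assert np.allclose(V.T @ S @ V, np.diag(w))
```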


4.2.2 Eigenprojections

Alternatively, with p ≤ n defined as the number of distinct eigenvalues of S, we may write

S = Σ_{i=1}^{p} ωi Ei, (128)

where the symmetric tensors Ei are called the eigenprojections of S. Each eigenprojection Ei is the orthogonal projection operator on the characteristic space of S associated with ωi. The eigenprojections have the property

I = Σ_{i=1}^{p} Ei, (129)

and, if p = n (no multiple eigenvalues), then

Ei = (vi ⊗ vi) , for i = 1...n. (130)

Also, the eigenprojections satisfy

Ei = Π_{j=1, j≠i}^{p} [1/(ωi − ωj)] (S − ωj I)   if p > 1

Ei = I                                           if p = 1 (131)

In the particular case in which n = 3, we have:

(i) In the case ω1 ≠ ω2 ≠ ω3 ≠ ω1, we have

Ei = (vi ⊗ vi), for i = 1..3, (132)

where vi is the eigenvector associated with ωi. Therefore

S = Σ_{i=1}^{3} ωi Ei. (133)

(ii) In the case ω1 ≠ ω2 = ω3, we have

E1 = (v1 ⊗ v1) and E2 = I − E1 = I − (v1 ⊗ v1) (134)

where v1 is the eigenvector associated with ω1. Therefore

S = ω1 (v1 ⊗ v1) + ω2 [I − (v1 ⊗ v1)] (135)

(iii) In the case ω1 = ω2 = ω3, we have

E1 = I. (136)

Therefore

S = ω1 I. (137)


4.2.3 Characteristic equation and Principal invariants

Every eigenvalue ωi satisfies the characteristic equation

p (ωi) = det (S − ωiI) = 0. (138)

In the three-dimensional space, for any α ∈ R, det(S − αI) admits the following representation

where IS, IIS and IIIS are the principal invariants of S, defined by

I_S = tr(S) = Sii; (140)

II_S = (1/2)[tr(S)² − tr(S²)] = (1/2)(Sii Sjj − Sij Sji);

III_S = det(S) = (1/6) εijk εpqr Sip Sjq Skr.

In this case, the characteristic equation reads

−ωi³ + ωi² I_S − ωi II_S + III_S = 0 (141)

and the eigenvalues ωi are the solutions of this cubic equation. If S is symmetric, then its principal invariants can be expressed in terms of its eigenvalues as

IS = ω1 + ω2 + ω3; (142)

IIS = ω1ω2 + ω2ω3 + ω1ω3;

IIIS = ω1ω2ω3.

4.2.4 Polar decomposition

Let F be an invertible tensor with det F > 0. Then there exist symmetric positive definite tensors U and V, and a rotation R, such that

F = RU = V R. (143)

The decompositions RU and VR are unique and are called, respectively, the right and left polar decompositions of F. The symmetric tensors U and V are given by

U = √(F^T F) and V = √(F F^T), (144)

where √· denotes the tensor square root. The square root of a symmetric positive definite tensor A is the unique symmetric positive definite tensor B that satisfies

B2 ≡ BB = A. (145)

Let

A = Σi λi^a (vi^a ⊗ vi^a) (146)


with λi^a and vi^a denoting, respectively, the eigenvalues and the basis of eigenvectors of A. The spectral decomposition of its square root, B, reads

B = Σi √(λi^a) (vi^a ⊗ vi^a). (147)
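The right polar decomposition (143)-(144), with U computed by the spectral square root (147), in a numerical sketch (F below is an arbitrary illustration with positive determinant):

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.1, 1.1]])
assert np.linalg.det(F) > 0

# U = sqrt(F^T F) via the spectral square root, eq. (147).
w, V = np.linalg.eigh(F.T @ F)   # eigenpairs of the symmetric tensor F^T F
U = V @ np.diag(np.sqrt(w)) @ V.T
R = F @ np.linalg.inv(U)         # then R = F U^{-1}

assert np.allclose(R @ U, F)               # right polar decomposition, eq. (143)
assert np.allclose(R.T @ R, np.eye(3))     # R is orthogonal
assert np.isclose(np.linalg.det(R), 1.0)   # and proper (a rotation)
assert np.allclose(U, U.T)                 # U is symmetric
```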

4.3 Special tensors

The deviator of a symmetric tensor T, denoted Tdev, is defined as

Tdev := T − (1/3)(I.T) I (148)

     = T − (1/3) Tvol I

with

Tvol := tr(T) = I.T (149)

and it follows that

tr(Tdev) = I.Tdev = 0. (150)

The spherical part of T, denoted Tsph, is defined as

Tsph := T − Tdev = (1/3) Tvol I = (1/3)[I ⊗ I]T (151)

Assume that T is a rank-one update of I. Its inverse can be computed explicitly according to the Sherman-Morrison formula: if

T = I + α(u ⊗ v) (152)

then

T⁻¹ = I − [α/(1 + α⟨u, v⟩)] (u ⊗ v) (153)

where u and v are arbitrary vectors and α is an arbitrary scalar such that

α ≠ −1/⟨u, v⟩,

so that T is non-singular.

Proof: Let

T⁻¹ = I + β(u ⊗ v). (154)

Then, in order to compute β, we impose that

TT⁻¹ = T⁻¹T = I (155)

A straightforward generalization of the formula in (153) is the following: if

T = U + α(u ⊗ v) (156)

then

T⁻¹ = U⁻¹ − [α/(1 + α⟨U⁻¹u, v⟩)] (U⁻¹u ⊗ U^{-T}v)

i.e.

T⁻¹ = U⁻¹ − [α/(1 + α⟨U⁻¹u, v⟩)] U⁻¹(u ⊗ v)U⁻¹ (157)

where it is assumed that U is a non-singular tensor.

Proof: Express T = UT̃ with T̃ = I + αU⁻¹u ⊗ v, such that T⁻¹ = T̃⁻¹U⁻¹, and use (153).
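Both the Sherman-Morrison formula (153) and its generalization (157) in a numerical sketch (the vectors, scalar and matrix U are arbitrary illustrations satisfying the non-singularity conditions):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([1.0, 0.0, 1.0])
alpha = 0.7
assert not np.isclose(1 + alpha * (u @ v), 0.0)  # alpha != -1/<u,v>

# T = I + alpha (u (x) v), eq. (152); inverse by Sherman-Morrison, eq. (153).
T = np.eye(3) + alpha * np.outer(u, v)
T_inv = np.eye(3) - alpha / (1 + alpha * (u @ v)) * np.outer(u, v)
assert np.allclose(T @ T_inv, np.eye(3))

# Generalized form, eq. (157): T = U + alpha (u (x) v) with U non-singular.
U = np.array([[1.0, 0.1, 0.0],
              [0.0, 1.2, 0.1],
              [0.2, 0.0, 1.0]])
Ui = np.linalg.inv(U)
T2 = U + alpha * np.outer(u, v)
T2_inv = Ui - alpha / (1 + alpha * (Ui @ u) @ v) * (Ui @ np.outer(u, v) @ Ui)
assert np.allclose(T2 @ T2_inv, np.eye(3))
```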

5 Higher order tensors

So far we have seen operations involving scalars, which can be considered zero order tensors; vectors, which can be considered first order tensors; and second order tensors, which are associated with linear operators (or transformations) on vectors. Linear operators of higher order, or higher order tensors, are frequently employed in continuum mechanics. In this section we introduce some basic definitions and operations involving higher order tensors.

A third order tensor may be represented as

A = Aijk (ei ⊗ ej ⊗ ek) , (158)

with the definition

(a ⊗ b ⊗ c)d = (c.d)(a ⊗ b). (159)

5.1 Fourth order tensor

Fourth order tensors are particularly relevant in continuum mechanics. A general fourth order tensor D is represented as

D = Dijks (ei ⊗ ej ⊗ ek ⊗ es) . (160)

Fourth order tensors map second order tensors into second order tensors. They also map vectors into third order tensors, and third order tensors into vectors.

As a direct extension, we define

(a ⊗ b ⊗ c ⊗ d)e = (e.d)(a ⊗ b ⊗ c), (161)

and the double contractions

(a ⊗ b ⊗ c ⊗ d)(e ⊗ f) ≡ (a ⊗ b ⊗ c ⊗ d) : (e ⊗ f) (162)

= (c.e)(d.f)(a ⊗ b),

and

(a ⊗ b ⊗ c ⊗ d)(e ⊗ f ⊗ g ⊗ h) ≡ (a ⊗ b ⊗ c ⊗ d) : (e ⊗ f ⊗ g ⊗ h) (163)

= (c.e)(d.f)(a ⊗ b ⊗ g ⊗ h).

With the above definitions, the following relations are valid:


(i) Dijkl = [D(ek ⊗ el)].(ei ⊗ ej) ≡ (ei ⊗ ej) : D : (ek ⊗ el)

(ii) Du = Dijkl ul (ei ⊗ ej ⊗ ek)

(iii) DS = Dijks Sks (ei ⊗ ej) ≡ D : S

Consider the generalized Hooke's law:

σ = Dε. (164)

In components, we have

σ = σij (ei ⊗ ej) (165)

  = Dijks (ei ⊗ ej ⊗ ek ⊗ es) εnl (en ⊗ el)

  = Dijks εnl ⟨ek, en⟩⟨es, el⟩ (ei ⊗ ej)

  = Dijks εnl δkn δsl (ei ⊗ ej)

  = Dijks εks (ei ⊗ ej)

(iv) D^T S = Dijks Sij (ek ⊗ es) ≡ S : D;

Notice that

⟨DU, S⟩ = ⟨D^T S, U⟩, ∀U and S ∈ Lin(V,V). (166)

In components

(Dijks Uks) Sij = (Dijks Sij) Uks (167)

(v) DT = Dijmn Tmnkl (ei ⊗ ej ⊗ ek ⊗ el) ≡ D : T. This represents the composition

(DT)U = D(TU) (168)

where D and T are linear transformations mapping Lin(V,V) → Lin(V,V).

5.1.1 Symmetry

We shall call symmetric any fourth order tensor that satisfies

⟨DS, U⟩ = ⟨S, DU⟩, ∀U and S ∈ Lin (V,V) . (169)

In dyadics we have S : D : U = (D : S) : U
for any second order tensors S and U. This definition is analogous to that of symmetric second order tensors. The cartesian components of symmetric fourth order tensors satisfy

Dijkl = Dklij . (170)


It should be noted that other symmetries are possible in fourth order tensors. If symmetry occurs in the last two indices, i.e., if

Dijkl = Dijlk (171)

the tensor has the properties:

DS = DST (172)

in dyadics D : S = D : ST and S : D = (S : D)T

for any S. If it is symmetric in the first two indices, i.e.,

Dijkl = Djikl (173)

then
DS = (DS)T (174)

in dyadics D : S = (D : S)T and S : D = ST : D.

5.1.2 Change of basis transformation

Again, let us consider the orthonormal basis e∗i, i = 1, ..., n defined as

e∗j = Rej (175)

with R a rotation. The components D∗ijkl of a tensor D relative to the basis defined by e∗i, i = 1, ..., n are given by

D∗ijkl = RmiRnjRpkRqlDmnpq (176)

where Dmnpq are the components of D relative to ei. In fact

D = D∗ijkl (e∗i ⊗ e∗j ⊗ e∗k ⊗ e∗l) (177)
= Dmnpq (em ⊗ en ⊗ ep ⊗ eq) .

Now

e∗j = Rej (178)
= (Rej .em) em = Rmj em

therefore

D = D∗ijkl (e∗i ⊗ e∗j ⊗ e∗k ⊗ e∗l) (179)
= D∗ijkl RmiRnjRpkRql (em ⊗ en ⊗ ep ⊗ eq)
= Dmnpq (em ⊗ en ⊗ ep ⊗ eq)


hence
D∗ijkl RmiRnjRpkRql = Dmnpq. (180)

Also

em = RTe∗m (181)
= (RTe∗m .e∗i) e∗i
= RTim e∗i
= Rmi e∗i

therefore

D = Dmnpq (em ⊗ en ⊗ ep ⊗ eq) (182)
= Dmnpq RmiRnjRpkRql (e∗i ⊗ e∗j ⊗ e∗k ⊗ e∗l)
= D∗ijkl (e∗i ⊗ e∗j ⊗ e∗k ⊗ e∗l)

hence
D∗ijkl = RmiRnjRpkRql Dmnpq. (183)
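The transformation rule (183) and its inverse (180) can be checked numerically. A sketch assuming NumPy (the QR-based construction of the rotation and all names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
# A proper rotation built from a QR factorisation (illustrative construction)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R = Q if np.linalg.det(Q) > 0 else -Q

D = rng.standard_normal((3, 3, 3, 3))     # arbitrary fourth order components

# D*_ijkl = Rmi Rnj Rpk Rql Dmnpq, cf. (183)
Dstar = np.einsum('mi,nj,pk,ql,mnpq->ijkl', R, R, R, R, D)

# Transforming back recovers the original components, cf. (180)
Dback = np.einsum('mi,nj,pk,ql,ijkl->mnpq', R, R, R, R, Dstar)
assert np.allclose(Dback, D)
```

Note that each index of D picks up one factor of R, which is exactly the four-fold repetition in the `einsum` subscript string.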

5.1.3 Isotropic Tensors

A tensor is said to be isotropic if its components are invariant under any change of basis. The only second order isotropic tensors are the so-called spherical tensors, i.e., the tensors defined as

αI (184)

with scalar α.

Any isotropic fourth order tensor U can be constructed as a linear combination of three basic isotropic tensors, I, IT and (I ⊗ I):

U = αI+ βIT + γ (I ⊗ I) (185)

where α, β and γ are scalars.

The tensor I is called the fourth order identity. Its components are:

Iijkl = δikδjl. (186)

For any second order tensor S, we have

IS = S, ∀S ∈ Lin (V,V) (187)

in dyadics I : S = S : I = S.

Moreover, for any fourth order tensor T

IT = TI = T, ∀T (188)


in dyadics I : T = T : I = T.

The tensor IT is the transposition tensor. It maps any second order tensor onto its transpose, i.e.,

ITS = ST , ∀S ∈ Lin (V,V) (189)

in dyadics IT : S = S : IT = ST

for any S. The components of IT are

(IT )ijkl = δilδjk. (190)

Finally, the tensor (I ⊗ I) has components

(I ⊗ I)ijkl = δijδkl. (191)

When applied to any tensor T it gives

(I ⊗ I) .T ≡ (I ⊗ I) : T = tr (T ) I. (192)
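The three basic isotropic tensors are easy to assemble from Kronecker deltas and to test against (187), (189) and (192). A sketch assuming NumPy, with illustrative names:

```python
import numpy as np

d = np.eye(3)
II  = np.einsum('ik,jl->ijkl', d, d)   # fourth order identity, cf. (186)
IT  = np.einsum('il,jk->ijkl', d, d)   # transposition tensor, cf. (190)
IxI = np.einsum('ij,kl->ijkl', d, d)   # (I ⊗ I), cf. (191)

S = np.random.default_rng(3).standard_normal((3, 3))

assert np.allclose(np.einsum('ijkl,kl->ij', II, S), S)                 # I : S = S
assert np.allclose(np.einsum('ijkl,kl->ij', IT, S), S.T)               # IT : S = ST
assert np.allclose(np.einsum('ijkl,kl->ij', IxI, S), np.trace(S) * d)  # (I⊗I):S = tr(S) I
```

The check on IT only passes with the components δilδjk; with δijδkl one would instead reproduce (I ⊗ I).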

Another important isotropic tensor that frequently appears in continuum mechanics is the tensor defined as

ISym = (1/2) (I + IT ) . (193)

This tensor maps any second order tensor into its symmetric part, i.e.,

ISymT = Tsym, ∀T ∈ Lin (V,V) (194)

in dyadics ISym : T = T : ISym = Tsym, ∀T ∈ Lin (V,V) .

This tensor is denoted as the symmetric projection or symmetric identity. Its components are given by:

(ISym)ijkl = (1/2) (δikδjl + δilδjk) . (195)

By analogy, we can define ISkew as

ISkew = (1/2) (I − IT ) . (196)

This tensor maps any second order tensor into its skew-symmetric part, i.e.,

ISkewT = TSkew, ∀T ∈ Lin (V,V) . (197)


This tensor is denoted as the skew-symmetric projection or skew-symmetric identity. Its components are given by:

(ISkew)ijkl = (1/2) (δikδjl − δilδjk) . (198)

Generic tensors of order m are defined as

ℑ = ℑi1i2···im (ei1 ⊗ ei2 ⊗ · · · ⊗ eim)

where, extending the previous definitions of the tensor product, we have

(ei1 ⊗ ei2 ⊗ · · · ⊗ eim)u = (u.eim) ei1 ⊗ ei2 ⊗ · · · ⊗ eim−1

for all u ∈ U . The definitions of the contraction operations are completely analogous to those seen above for fourth order tensors.

5.2 Elementary algebra of 4th order tensors

5.2.1 Component representation

The simplest form of a 4th order tensor A is a quad, which is defined as the tensor product of two 2nd order tensors T and U , i.e.

A = T ⊗ U = [Tij (ei ⊗ ej)]⊗ [Ukl (ek ⊗ el)] = TijUkl (ei ⊗ ej ⊗ ek ⊗ el) (199)

The products (ei ⊗ ej ⊗ ek ⊗ el), which are denoted the base quadrads, form the basis of the product space R3 ×R3 ×R3 ×R3. The expression in (199) is, clearly, only a special case of the general representation of a 4th order tensor

A = Aijkl (ei ⊗ ej ⊗ ek ⊗ el) (200)

Any 4th order tensor defines a linear mapping from R3 ×R3 to R3 ×R3, since

AS = Aijkl (ei ⊗ ej ⊗ ek ⊗ el) Srs (er ⊗ es) (201)
= AijklSrs (ei ⊗ ej ⊗ ek ⊗ el) (er ⊗ es)
= AijklSrs (ek .er)(el .es) (ei ⊗ ej)
= AijklSrs δkrδls (ei ⊗ ej)
= AijklSkl (ei ⊗ ej)
= Uij (ei ⊗ ej)
= U

where we introduced the tensor U with components Uij = AijklSkl.

Useful notations are the “overline open product” ⊗̄ and the “underline open product” ⊗̲, which are defined via the component representations

T ⊗̄ U := TikUjl (ei ⊗ ej ⊗ ek ⊗ el) and T ⊗̲ U := TilUjk (ei ⊗ ej ⊗ ek ⊗ el) (202)


Useful rules, that involve the open (tensor) product symbols, for 2nd order tensors U , V and W are:

[U ⊗ V ]W = ⟨V,W⟩ U
[U ⊗̄ V ]W = UWV T
[U ⊗̲ V ]W = UWTV T (203)

in dyadics we also have

W : [U ⊗ V ] = ⟨U,W⟩ V
W : [U ⊗̄ V ] = UTWV
W : [U ⊗̲ V ] = (UTWV )T = V TWTU . (204)
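The three open products and the rules (203) are straightforward to verify in components. A sketch assuming NumPy (overline and underline products written out by their index patterns):

```python
import numpy as np

rng = np.random.default_rng(5)
U, V, W = rng.standard_normal((3, 3, 3))

plain = np.einsum('ij,kl->ijkl', U, V)   # U ⊗ V
over  = np.einsum('ik,jl->ijkl', U, V)   # overline open product, cf. (202)
under = np.einsum('il,jk->ijkl', U, V)   # underline open product, cf. (202)

# Rules (203), applied to W as double contractions
assert np.allclose(np.einsum('ijkl,kl->ij', plain, W), np.sum(V * W) * U)
assert np.allclose(np.einsum('ijkl,kl->ij', over, W), U @ W @ V.T)
assert np.allclose(np.einsum('ijkl,kl->ij', under, W), U @ W.T @ V.T)
```

Writing out, e.g., (U ⊗̄ V : W)ij = Uik Vjl Wkl = (U W V T)ij confirms the second rule term by term.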

5.2.2 Symmetry and skew-symmetry

The major transpose of a 4th order tensor A is defined as

AT = Aklij (ei ⊗ ej ⊗ ek ⊗ el) (205)

i.e. the transpose is associated with a “major shift” of indices. The major-symmetric part of A, denoted Asym, and the major-skew-symmetric part of A, denoted Askew, are defined as follows:

Asym = (1/2) (A+AT ) and Askew = (1/2) (A−AT ) (206)

A possesses major symmetry if Asym = A (and Askew = 0), i.e. when A = AT . In component form,

Aijkl = Aklij . (207)

A possesses major skew-symmetry when Askew = A (and Asym = 0), i.e. when A = −AT . In component form,

Aijkl = −Aklij , (208)

which (in particular) implies that

Aijkl = 0 for (i, j) = (k, l). (209)

Moreover, A possesses 1st and 2nd minor symmetry if

Aijkl = Ajikl and Aijkl = Aijlk (210a)

respectively. Likewise, A possesses 1st and 2nd minor skew-symmetry if

Aijkl = −Ajikl and Aijkl = −Aijlk (211)

respectively.
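The major transpose and the decomposition (206) can be checked on arbitrary components; the sketch below (assuming NumPy, with illustrative names) also confirms the consequence (209) for the major-skew part:

```python
import numpy as np

A = np.random.default_rng(6).standard_normal((3, 3, 3, 3))
AT = A.transpose(2, 3, 0, 1)     # major transpose: index pairs swapped, cf. (205)

Asym = 0.5 * (A + AT)            # major-symmetric part, cf. (206)
Askw = 0.5 * (A - AT)            # major-skew-symmetric part, cf. (206)

assert np.allclose(Asym, Asym.transpose(2, 3, 0, 1))    # major symmetry, cf. (207)
assert np.allclose(Askw, -Askw.transpose(2, 3, 0, 1))   # major skew-symmetry, cf. (208)

# Entries with (i,j) = (k,l) of a major-skew tensor vanish, cf. (209)
assert np.allclose(np.einsum('ijij->ij', Askw), 0)
```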
