A Brief Introduction to Tensors and Their Properties

 

 

 

1. BASIC PROPERTIES OF TENSORS

 

 

1.1 Examples of Tensors

 

The gradient of a vector field is a good example of a second-order tensor.  Visualize a vector field: at every point $x$ in space, the field has a vector value $u(x)$.  Let $G = \nabla u$ represent the gradient of $u$.  By definition, $G$ enables you to calculate the change in $u$ when you move from a point $x$ in space to a nearby point at $x + dx$:

$$ du = G \cdot dx $$

G is a second order tensor.  From this example, we see that when you multiply a vector by a tensor, the result is another vector. 

 

This is a general property of all second order tensors.  A tensor is a linear mapping of a vector onto another vector.  Two examples, together with the vectors they operate on, are:

 

 The stress tensor

$$ t = \sigma \cdot n $$

where $n$ is a unit vector normal to a surface, $\sigma$ is the stress tensor, and $t$ is the traction vector acting on the surface.

 

 The deformation gradient tensor

$$ dw = F \cdot dx $$

where $dx$ is an infinitesimal line element in an undeformed solid, $F$ is the deformation gradient tensor, and $dw$ is the vector representing the deformed line element.

 

 

 

1.2 Matrix representation of a tensor

 

To evaluate and manipulate tensors, we express them as components in a basis, just as for vectors.  We can use the displacement gradient to illustrate how this is done.  Let $u(x)$ be a vector field, and let $G = \nabla u$ represent the gradient of $u$.  Recall the definition of $G$:

$$ du = G \cdot dx $$

Now, let $\{e_1, e_2, e_3\}$ be a Cartesian basis, and express both $du$ and $dx$ as components.  Then, calculate the components of $du$ in terms of $dx$ using the usual rules of calculus

$$ du_1 = \frac{\partial u_1}{\partial x_1} dx_1 + \frac{\partial u_1}{\partial x_2} dx_2 + \frac{\partial u_1}{\partial x_3} dx_3, \quad \text{and similarly for } du_2, du_3 $$

We could represent this as a matrix product

$$ \begin{pmatrix} du_1 \\ du_2 \\ du_3 \end{pmatrix} = \begin{pmatrix} \partial u_1/\partial x_1 & \partial u_1/\partial x_2 & \partial u_1/\partial x_3 \\ \partial u_2/\partial x_1 & \partial u_2/\partial x_2 & \partial u_2/\partial x_3 \\ \partial u_3/\partial x_1 & \partial u_3/\partial x_2 & \partial u_3/\partial x_3 \end{pmatrix} \begin{pmatrix} dx_1 \\ dx_2 \\ dx_3 \end{pmatrix} $$

Alternatively, using index notation

$$ du_i = G_{ij}\, dx_j, \qquad G_{ij} = \frac{\partial u_i}{\partial x_j} $$

From this example we see that $G$ can be represented as a $3 \times 3$ matrix.  The elements of the matrix are known as the components of $G$ in the basis $\{e_1, e_2, e_3\}$.  All second order tensors can be represented in this form.  For example, a general second order tensor $S$ could be written as

$$ [S] = \begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{pmatrix} $$

You have probably already seen the matrix representation of stress and strain components in introductory courses.

 

Since $S$ can be represented as a matrix, all operations that can be performed on a $3 \times 3$ matrix can also be performed on $S$.  Examples include sums and products, the transpose, inverse, and determinant.  One can also compute eigenvalues and eigenvectors for tensors, and thus define the log of a tensor, the square root of a tensor, etc.  These tensor operations are summarized below.

 

Note that the numbers $S_{11}$, $S_{12}$, … depend on the basis $\{e_1, e_2, e_3\}$, just as the components of a vector depend on the basis used to represent the vector.  However, just as the magnitude and direction of a vector are independent of the basis, so the properties of a tensor are independent of the basis.  That is to say, if $S$ is a tensor and $u$ is a vector, then the vector

$$ v = S \cdot u $$

has the same magnitude and direction, irrespective of the basis used to represent $u$, $v$, and $S$.

 

 

1.3 The difference between a matrix and a tensor

 

If a tensor is a matrix, why is a matrix not the same thing as a tensor?  Well, although you can multiply the three components of a vector $u$ by any $3 \times 3$ matrix,

$$ \begin{pmatrix} b_1 \\ b_2 \\ b_3 \end{pmatrix} = \begin{pmatrix} A_{11} & A_{12} & A_{13} \\ A_{21} & A_{22} & A_{23} \\ A_{31} & A_{32} & A_{33} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} $$

the resulting three numbers $(b_1, b_2, b_3)$ may or may not represent the components of a vector.  If they are the components of a vector, then the matrix represents the components of a tensor $A$; if not, then the matrix is just an ordinary old matrix.

 

 To check whether $(b_1, b_2, b_3)$ are the components of a vector, you need to check how they change due to a change of basis.  That is to say, choose a new basis, calculate the new components of $u$ in this basis, and calculate the new matrix in this basis (the new elements of the matrix will depend on how the matrix was defined; the elements may or may not change.  If they don't change, then the matrix cannot be the components of a tensor).  Then, evaluate the matrix product to find a new left hand side, say $(\bar{b}_1, \bar{b}_2, \bar{b}_3)$.  If $(\bar{b}_1, \bar{b}_2, \bar{b}_3)$ are related to $(b_1, b_2, b_3)$ by the same transformation that was used to calculate the new components of $u$, then $(b_1, b_2, b_3)$ are the components of a vector, and, therefore, the matrix represents the components of a tensor.
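As a concrete illustration, here is a minimal numpy sketch of this check (the matrix, vector, and rotation values are hypothetical).  A matrix whose components are transformed as $[Q][A][Q]^T$ under a change of basis passes the test: the matrix-vector product then transforms exactly like the components of a vector.

```python
import numpy as np

# Components of a vector u and a candidate tensor A in the basis {e1, e2, e3}
u = np.array([1.0, 2.0, 3.0])
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

# Change of basis: a rotation of 30 degrees about e3
c, s = np.cos(np.pi / 6), np.sin(np.pi / 6)
Q = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

b = A @ u                 # left-hand side in the old basis
u_new = Q @ u             # components of u in the new basis
A_new = Q @ A @ Q.T       # components of A in the new basis
b_new = A_new @ u_new     # left-hand side in the new basis

# b transforms like a vector, so this A represents the components of a tensor
print(np.allclose(b_new, Q @ b))   # True
```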

 

1.4 Formal definition

 

Tensors are rather more general objects than the preceding discussion suggests.   There are various ways to define a tensor formally.  One way is the following:

 

·         A tensor is a linear, vector-valued function defined on the set of all vectors

 

More specifically, let $S(u)$ denote a tensor $S$ operating on a vector $u$.  Linearity then requires that, for all vectors $u, v$ and scalars $\alpha$,

·         $S(u + v) = S(u) + S(v)$

·         $S(\alpha u) = \alpha S(u)$

 

Alternatively, one can define tensors as sets of numbers that transform in a particular way under a change of coordinate system.  In this case we suppose that $n$-dimensional space can be parameterized by a set of $n$ real numbers $x^i$.  We could change coordinate system by introducing a second set of real numbers $\bar{x}^i$ which are invertible functions of the $x^i$.  Tensors can then be defined as sets of real numbers that transform in a particular way under this change in coordinate system.  For example

 

·         A tensor of zeroth rank is a scalar that is independent of the coordinate system.

·         A covariant tensor of rank 1 is a vector that transforms as $\bar{u}_i = u_j\, \partial x^j/\partial \bar{x}^i$

·         A contravariant tensor of rank 1 is a vector that transforms as $\bar{u}^i = u^j\, \partial \bar{x}^i/\partial x^j$

·         A covariant tensor of rank 2 transforms as $\bar{S}_{ij} = S_{kl}\, \partial x^k/\partial \bar{x}^i\, \partial x^l/\partial \bar{x}^j$

·         A contravariant tensor of rank 2 transforms as $\bar{S}^{ij} = S^{kl}\, \partial \bar{x}^i/\partial x^k\, \partial \bar{x}^j/\partial x^l$

·         A mixed tensor of rank 2 transforms as $\bar{S}^i_j = S^k_l\, \partial \bar{x}^i/\partial x^k\, \partial x^l/\partial \bar{x}^j$

 

Higher rank tensors can be defined in similar ways.  In solid and fluid mechanics we nearly always use Cartesian tensors (i.e. we work with the components of tensors in a Cartesian coordinate system), and this level of generality is not needed (and is rather mysterious).  We might occasionally use a curvilinear coordinate system, in which we do express tensors in terms of covariant or contravariant components; this gives some sense of what these quantities mean.  But since solid and fluid mechanics live in Euclidean space, we don't see some of the subtleties that arise, e.g., in the theory of general relativity.

 

 

1.5 Creating a tensor using a dyadic product of two vectors.

 

Let $a$ and $b$ be two vectors.  The dyadic product of $a$ and $b$ is a second order tensor $S$ denoted by

$$ S = a \otimes b $$

with the property

$$ (a \otimes b) \cdot u = a\, (b \cdot u) $$

for all vectors $u$.  (Clearly, this maps $u$ onto a vector parallel to $a$ with magnitude $|b \cdot u|$.)

 

The components of $a \otimes b$ in a basis $\{e_1, e_2, e_3\}$ are

$$ (a \otimes b)_{ij} = a_i\, b_j $$

 

Note that not all tensors can be constructed using a dyadic product of only two vectors (this is because $(a \otimes b) \cdot u$ always has to be parallel to $a$, and therefore the representation cannot map a vector onto an arbitrary vector).  However, if $a$, $b$, and $c$ are three independent vectors (i.e. no two of them are parallel), then all tensors can be constructed as a sum of scalar multiples of the nine possible dyadic products of these vectors.
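As a quick numerical illustration (a minimal numpy sketch; the vector values are hypothetical), the dyadic product can be formed with np.outer, and its defining property checked directly:

```python
import numpy as np

a = np.array([1.0, 0.0, 2.0])
b = np.array([0.0, 3.0, 1.0])
u = np.array([1.0, 1.0, 1.0])

S = np.outer(a, b)        # components S_ij = a_i * b_j of the dyadic product
v = S @ u                 # (a x b) u

# Equals a (b . u): a vector parallel to a, scaled by the projection b . u
print(np.allclose(v, a * np.dot(b, u)))   # True
```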

 

 

2. OPERATIONS ON SECOND ORDER TENSORS

 

 Tensor components

 

Let $\{e_1, e_2, e_3\}$ be a Cartesian basis, and let $S$ be a second order tensor.  The components of $S$ in $\{e_1, e_2, e_3\}$ may be represented as a matrix

$$ [S] = \begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{pmatrix} $$

where

$$ S_{ij} = e_i \cdot (S \cdot e_j) $$

 

The representation of a tensor in terms of its components can also be expressed in dyadic form as

$$ S = S_{ij}\, e_i \otimes e_j $$

This representation is particularly convenient when using polar coordinates, or when using a general non-orthogonal coordinate system.

 

 Addition

Let S and T be two tensors.  Then

$$ U = S + T $$

is also a tensor.

 

Denote the Cartesian components of $U$, $S$ and $T$ by matrices as defined above.  The components of $U$ are then related to the components of $S$ and $T$ by

$$ [U] = [S] + [T] $$

In index notation we would write

$$ U_{ij} = S_{ij} + T_{ij} $$

 

 Product of a tensor and a vector

 

Let $u$ be a vector and $S$ a second order tensor.  Then

$$ v = S \cdot u $$

is a vector.

 

Let $u_i$ and $v_i$ denote the components of vectors $u$ and $v$ in a Cartesian basis $\{e_1, e_2, e_3\}$, and denote the Cartesian components of $S$ as described above.  Then

$$ \begin{pmatrix} v_1 \\ v_2 \\ v_3 \end{pmatrix} = \begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{pmatrix} \begin{pmatrix} u_1 \\ u_2 \\ u_3 \end{pmatrix} $$

Alternatively, using index notation

$$ v_i = S_{ij}\, u_j $$

 

 

The product

$$ v = u \cdot S $$

is also a vector.  In component form

$$ \begin{pmatrix} v_1 & v_2 & v_3 \end{pmatrix} = \begin{pmatrix} u_1 & u_2 & u_3 \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{pmatrix} $$

or

$$ v_i = u_j\, S_{ji} $$

Observe that

$$ u \cdot S \neq S \cdot u $$

(unless S is symmetric).
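The distinction between $S \cdot u$ and $u \cdot S$ is easy to verify numerically.  A minimal numpy sketch (with hypothetical component values):

```python
import numpy as np

S = np.array([[1.0, 2.0, 0.0],
              [0.0, 3.0, 1.0],
              [4.0, 0.0, 2.0]])
u = np.array([1.0, 2.0, 3.0])

print(S @ u)   # v_i = S_ij u_j, i.e. S . u
print(u @ S)   # v_i = u_j S_ji, i.e. u . S -- different in general

# For a symmetric tensor the two products coincide
Ssym = 0.5 * (S + S.T)
print(np.allclose(Ssym @ u, u @ Ssym))   # True
```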

 

 

 Product of two tensors

 

Let $T$ and $S$ be two second order tensors.  Then $U = T \cdot S$ is also a tensor.

 

Denote the components of $U$, $S$ and $T$ by $3 \times 3$ matrices.  Then,

$$ [U] = [T][S] $$

Alternatively, using index notation

$$ U_{ij} = T_{ik}\, S_{kj} $$

 

Note that tensor products, like matrix products, are not commutative; i.e. $T \cdot S \neq S \cdot T$ in general.

 

 

 Transpose

 

Let $S$ be a tensor.  The transpose of $S$ is denoted by $S^T$ and is defined so that

$$ u \cdot S = S^T \cdot u \quad \text{for all vectors } u $$

 

Denote the components of $S$ by a $3 \times 3$ matrix.  The components of $S^T$ are then

$$ (S^T)_{ij} = S_{ji} $$

i.e. the rows and columns of the matrix are switched.


Note that, if A and B are two tensors, then

$$ (A \cdot B)^T = B^T \cdot A^T $$

 


 Trace

 

Let $S$ be a tensor, and denote the components of $S$ by a $3 \times 3$ matrix.  The trace of $S$ is denoted by $\mathrm{tr}(S)$ or $\mathrm{trace}(S)$, and can be computed by summing the diagonal terms of the matrix of components

$$ \mathrm{tr}(S) = S_{11} + S_{22} + S_{33} $$

More formally, let $\{e_1, e_2, e_3\}$ be any Cartesian basis.  Then

$$ \mathrm{tr}(S) = e_i \cdot (S \cdot e_i) $$

The trace of a tensor is an example of an invariant of the tensor: you get the same value for $\mathrm{trace}(S)$ whatever basis you use to define the matrix of components of $S$.

 

In index notation, the trace is written $\mathrm{tr}(S) = S_{kk}$.

 

 Contraction.

 

Inner Product: Let $S$ and $T$ be two second order tensors.  The inner product of $S$ and $T$ is a scalar, denoted by $S : T$.  Represent $S$ and $T$ by their components in a basis.  Then

$$ S : T = S_{11}T_{11} + S_{12}T_{12} + S_{13}T_{13} + S_{21}T_{21} + \cdots + S_{33}T_{33} $$

In index notation

$$ S : T = S_{ij}\, T_{ij} $$

 

Observe that $S : T = \mathrm{tr}(S \cdot T^T)$, and also that $I : S = \mathrm{tr}(S)$, where $I$ is the identity tensor.

 Outer product: Let $S$ and $T$ be two second order tensors.  The outer product of $S$ and $T$ is a scalar, denoted by $S \cdot\cdot\; T$.  Represent $S$ and $T$ by their components in a basis.  Then

$$ S \cdot\cdot\; T = S_{11}T_{11} + S_{12}T_{21} + S_{13}T_{31} + S_{21}T_{12} + \cdots + S_{33}T_{33} $$

In index notation

$$ S \cdot\cdot\; T = S_{ij}\, T_{ji} $$

 

 

Observe that $S \cdot\cdot\; T = \mathrm{tr}(S \cdot T)$.
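These contraction identities can be checked directly.  A short numpy sketch (random component values, assuming nothing beyond the definitions above):

```python
import numpy as np

rng = np.random.default_rng(0)
S, T = rng.random((3, 3)), rng.random((3, 3))
I = np.eye(3)

inner = np.einsum('ij,ij->', S, T)   # S : T   = S_ij T_ij
outer = np.einsum('ij,ji->', S, T)   # S .. T  = S_ij T_ji

print(np.isclose(inner, np.trace(S @ T.T)))                  # S : T  = tr(S T^T)
print(np.isclose(outer, np.trace(S @ T)))                    # S .. T = tr(S T)
print(np.isclose(np.einsum('ij,ij->', I, S), np.trace(S)))   # I : S  = tr(S)
```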

 

 Determinant

 

The determinant of a tensor is defined as the determinant of the matrix of its components in a basis.  For a second order tensor

$$ \det(S) = \begin{vmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{vmatrix} = S_{11}(S_{22}S_{33} - S_{23}S_{32}) - S_{12}(S_{21}S_{33} - S_{23}S_{31}) + S_{13}(S_{21}S_{32} - S_{22}S_{31}) $$

In index notation this would read

$$ \det(S) = \frac{1}{6}\, \epsilon_{ijk}\, \epsilon_{pqr}\, S_{ip}\, S_{jq}\, S_{kr} $$

 

Note that if S and T are two tensors, then

$$ \det(S \cdot T) = \det(S)\, \det(T) $$

 

 Inverse

 

Let $S$ be a second order tensor.  The inverse of $S$ exists if and only if $\det(S) \neq 0$, and is defined by

$$ S \cdot S^{-1} = S^{-1} \cdot S = I $$

where $S^{-1}$ denotes the inverse of $S$ and $I$ is the identity tensor.

 

The inverse of a tensor may be computed by calculating the inverse of the matrix of its components.  Formally, the inverse of a second order tensor can be written in a simple form using index notation as

$$ (S^{-1})_{ij} = \frac{1}{2\det(S)}\, \epsilon_{jpq}\, \epsilon_{irs}\, S_{pr}\, S_{qs} $$

In practice it is usually faster to compute the inverse using methods such as Gaussian elimination.
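For illustration, here is a minimal numpy sketch (the components of $S$ are hypothetical) that evaluates the index-notation formulas for the determinant and inverse with np.einsum and compares them against the library routines:

```python
import numpy as np

# Levi-Civita symbol eps_ijk
eps = np.zeros((3, 3, 3))
eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0

S = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [1.0, 0.0, 4.0]])

# det(S) = (1/6) eps_ijk eps_pqr S_ip S_jq S_kr
detS = np.einsum('ijk,pqr,ip,jq,kr->', eps, eps, S, S, S) / 6.0
# (S^-1)_ij = (1/(2 det S)) eps_jpq eps_irs S_pr S_qs
Sinv = np.einsum('jpq,irs,pr,qs->ij', eps, eps, S, S) / (2.0 * detS)

print(np.isclose(detS, np.linalg.det(S)))    # True
print(np.allclose(Sinv, np.linalg.inv(S)))   # True
```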

 

 

 Change of Basis.

 

Let $S$ be a tensor, and let $\{e_1, e_2, e_3\}$ be a Cartesian basis.  Suppose that the components of $S$ in the basis $\{e_1, e_2, e_3\}$ are known to be

$$ [S^{(e)}] = \begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{pmatrix} $$

 

Now, suppose that we wish to compute the components of $S$ in a second Cartesian basis, $\{m_1, m_2, m_3\}$.  Denote these components by

$$ [S^{(m)}] = \begin{pmatrix} S^{(m)}_{11} & S^{(m)}_{12} & S^{(m)}_{13} \\ S^{(m)}_{21} & S^{(m)}_{22} & S^{(m)}_{23} \\ S^{(m)}_{31} & S^{(m)}_{32} & S^{(m)}_{33} \end{pmatrix} $$

To do so, first compute the components of the transformation matrix $[Q]$

$$ Q_{ij} = m_i \cdot e_j $$

(this is the same matrix you would use to transform vector components from $\{e_1, e_2, e_3\}$ to $\{m_1, m_2, m_3\}$).  Then,

$$ [S^{(m)}] = [Q]\, [S^{(e)}]\, [Q]^T $$

or, written out in full

$$ \begin{pmatrix} S^{(m)}_{11} & S^{(m)}_{12} & S^{(m)}_{13} \\ S^{(m)}_{21} & S^{(m)}_{22} & S^{(m)}_{23} \\ S^{(m)}_{31} & S^{(m)}_{32} & S^{(m)}_{33} \end{pmatrix} = \begin{pmatrix} Q_{11} & Q_{12} & Q_{13} \\ Q_{21} & Q_{22} & Q_{23} \\ Q_{31} & Q_{32} & Q_{33} \end{pmatrix} \begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{21} & S_{22} & S_{23} \\ S_{31} & S_{32} & S_{33} \end{pmatrix} \begin{pmatrix} Q_{11} & Q_{21} & Q_{31} \\ Q_{12} & Q_{22} & Q_{32} \\ Q_{13} & Q_{23} & Q_{33} \end{pmatrix} $$

 

To prove this result, let $u$ and $v$ be vectors satisfying

$$ v = S \cdot u $$

Denote the components of $u$ and $v$ in the two bases by $\{u^{(e)}\}, \{v^{(e)}\}$ and $\{u^{(m)}\}, \{v^{(m)}\}$, respectively.  Recall that the vector components are related by

$$ \{u^{(m)}\} = [Q]\, \{u^{(e)}\}, \qquad \{v^{(m)}\} = [Q]\, \{v^{(e)}\} $$

Now, we could express the tensor-vector product in either basis

$$ \{v^{(e)}\} = [S^{(e)}]\, \{u^{(e)}\}, \qquad \{v^{(m)}\} = [S^{(m)}]\, \{u^{(m)}\} \qquad (1) $$

Substituting for $\{u^{(m)}\}$ and $\{v^{(m)}\}$ from above into the second of these two relations, we see that

$$ [Q]\, \{v^{(e)}\} = [S^{(m)}]\, [Q]\, \{u^{(e)}\} $$

Recall that

$$ [Q]^T [Q] = [Q][Q]^T = [I] $$

so multiplying both sides by $[Q]^T$ shows that

$$ \{v^{(e)}\} = [Q]^T\, [S^{(m)}]\, [Q]\, \{u^{(e)}\} $$

so, comparing with the first of equations (1),

$$ [S^{(e)}] = [Q]^T\, [S^{(m)}]\, [Q] \quad \Longleftrightarrow \quad [S^{(m)}] = [Q]\, [S^{(e)}]\, [Q]^T $$

as stated.

 

In index notation, we would write

$$ S^{(m)}_{ij} = Q_{ip}\, S^{(e)}_{pq}\, Q_{jq} $$

Another, perhaps cleaner, way to derive this result is to expand the two tensors as the appropriate dyadic products of the basis vectors

$$ S = S^{(e)}_{ij}\, e_i \otimes e_j = S^{(m)}_{ij}\, m_i \otimes m_j $$

$$ S^{(m)}_{kl} = m_k \cdot (S \cdot m_l) = S^{(e)}_{ij}\, (m_k \cdot e_i)(e_j \cdot m_l) = Q_{ki}\, S^{(e)}_{ij}\, Q_{lj} $$
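A short numpy sketch of the basis-change rule (the components and rotation angle are hypothetical); it also confirms that the matrix form $[Q][S^{(e)}][Q]^T$ agrees with the index-notation expression:

```python
import numpy as np

# Components of S in the basis {e1, e2, e3}
S_e = np.array([[1.0, 2.0, 0.0],
                [2.0, 3.0, 1.0],
                [0.0, 1.0, 4.0]])

# Second basis {m1, m2, m3}: row i holds the components of m_i in
# {e1, e2, e3}, so this matrix is exactly Q_ij = m_i . e_j
c, s = np.cos(0.4), np.sin(0.4)
Q = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])

S_m = Q @ S_e @ Q.T   # [S^(m)] = [Q][S^(e)][Q]^T

# Same transformation in index notation: S^(m)_ij = Q_ip S^(e)_pq Q_jq
print(np.allclose(S_m, np.einsum('ip,pq,jq->ij', Q, S_e, Q)))   # True
```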

 

 Invariants

 

Invariants of a tensor are scalar functions of the tensor components which remain constant under a basis change.  That is to say, the invariant has the same value when computed in two arbitrary bases $\{e_1, e_2, e_3\}$ and $\{m_1, m_2, m_3\}$.  A symmetric second order tensor always has three independent invariants.

 

Examples of invariants are

1.      The three eigenvalues

2.      The determinant

3.      The trace

4.      The inner and outer products

 

These are not all independent: for example, any of 2-4 can be calculated in terms of 1.

 

 

 

 

In practice, the most commonly used invariants are:

$$ I_1 = \mathrm{tr}(S) = S_{kk} $$

$$ I_2 = \frac{1}{2}\left[ (\mathrm{tr}\, S)^2 - \mathrm{tr}(S \cdot S) \right] = \frac{1}{2}\left( S_{ii}\, S_{jj} - S_{ij}\, S_{ji} \right) $$

$$ I_3 = \det(S) $$
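As a numerical check (hypothetical components), the three invariants are unchanged when the components are transformed to a rotated basis:

```python
import numpy as np

S = np.array([[1.0, 2.0, 0.0],
              [2.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

def invariants(S):
    I1 = np.trace(S)
    I2 = 0.5 * (np.trace(S) ** 2 - np.trace(S @ S))
    I3 = np.linalg.det(S)
    return I1, I2, I3

# Change basis with a rotation Q and confirm the invariants are unchanged
c, s = np.cos(0.7), np.sin(0.7)
Q = np.array([[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]])
print(np.allclose(invariants(S), invariants(Q @ S @ Q.T)))   # True
```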

 Eigenvalues and Eigenvectors (Principal values and directions)

 

Let $S$ be a second order tensor.  The scalars $\lambda$ and unit vectors $m$ which satisfy

$$ S \cdot m = \lambda\, m $$

are known as the eigenvalues and eigenvectors of $S$, or the principal values and principal directions of $S$.  Note that $\lambda$ may be complex.  For a second order tensor in three dimensions, there are generally three values of $\lambda$ and three unique unit vectors $m$ which satisfy this equation.  Occasionally, there may be only two or one distinct value of $\lambda$.  If this is the case, there are infinitely many possible vectors $m$ that satisfy the equation.  The eigenvalues of a tensor, and the components of the eigenvectors, may be computed by finding the eigenvalues and eigenvectors of the matrix of components.

 

The eigenvalues of a symmetric tensor are always real, and its eigenvectors are mutually perpendicular (these two results are important and are proved below).  The eigenvalues of a skew tensor are always pure imaginary or zero.

 

The eigenvalues of a second order tensor are computed using the condition $\det(S - \lambda I) = 0$.  This yields a cubic equation, which can be expressed as

$$ \lambda^3 - I_1\, \lambda^2 + I_2\, \lambda - I_3 = 0 $$

There are various ways to solve the resulting cubic equation explicitly; a solution for symmetric $S$ is given below, but the results for a general tensor are too messy to be given here.  The eigenvectors are then computed from the condition $(S - \lambda I) \cdot m = 0$.
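In practice the eigenvalues and eigenvectors are usually found numerically.  A minimal numpy sketch (hypothetical components) that also confirms the eigenvalues are roots of the characteristic cubic:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])

lam, m = np.linalg.eig(S)   # eigenvalues, and eigenvectors as columns of m

# Each pair satisfies S m = lambda m
for k in range(3):
    print(np.allclose(S @ m[:, k], lam[k] * m[:, k]))   # True

# The eigenvalues are the roots of lambda^3 - I1 lambda^2 + I2 lambda - I3 = 0
I1 = np.trace(S)
I2 = 0.5 * (I1 ** 2 - np.trace(S @ S))
I3 = np.linalg.det(S)
roots = np.roots([1.0, -I1, I2, -I3])
print(np.allclose(np.sort(roots.real), np.sort(lam.real)))   # True
```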

 

 The Cayley-Hamilton Theorem

 

Let $S$ be a second order tensor and let $I_1, I_2, I_3$ be the three invariants.  Then

$$ S^3 - I_1\, S^2 + I_2\, S - I_3\, I = 0 $$

(i.e. a tensor satisfies its characteristic equation).  There is an obscure trick to show this.  Consider the tensor $S - \lambda I$ (where $\lambda$ is an arbitrary scalar), and let $T$ be the adjoint of $S - \lambda I$ (the adjoint is just the inverse multiplied by the determinant), which satisfies

$$ T \cdot (S - \lambda I) = \det(S - \lambda I)\, I = -\left( \lambda^3 - I_1\lambda^2 + I_2\lambda - I_3 \right) I $$

Assume that $T = T_0 + \lambda T_1 + \lambda^2 T_2$, where $T_0, T_1, T_2$ are tensors independent of $\lambda$.  Substituting into the preceding equation and equating coefficients of each power of $\lambda$ shows that

$$ T_0 \cdot S = I_3\, I, \qquad T_1 \cdot S - T_0 = -I_2\, I, \qquad T_2 \cdot S - T_1 = I_1\, I, \qquad T_2 = I $$

Solving these in reverse order gives $T_2 = I$, $T_1 = S - I_1 I$, and $T_0 = S^2 - I_1 S + I_2 I$.  Use these to substitute for $T_0$ into the first relation:

$$ (S^2 - I_1 S + I_2 I) \cdot S = I_3\, I \quad \Longrightarrow \quad S^3 - I_1 S^2 + I_2 S - I_3 I = 0 $$
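The theorem is easy to verify numerically for a particular tensor.  A minimal numpy sketch (random components, not necessarily symmetric):

```python
import numpy as np

rng = np.random.default_rng(1)
S = rng.random((3, 3))   # a general second order tensor
I = np.eye(3)

I1 = np.trace(S)
I2 = 0.5 * (I1 ** 2 - np.trace(S @ S))
I3 = np.linalg.det(S)

# Cayley-Hamilton: S^3 - I1 S^2 + I2 S - I3 I = 0
CH = S @ S @ S - I1 * (S @ S) + I2 * S - I3 * I
print(np.allclose(CH, np.zeros((3, 3))))   # True
```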

 

 

 

3. SPECIAL TENSORS

 

 Identity tensor.  The identity tensor $I$ is the tensor such that, for any tensor $S$ or vector $v$,

$$ S \cdot I = I \cdot S = S, \qquad I \cdot v = v $$

In any basis, the identity tensor has components

$$ [I] = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad I_{ij} = \delta_{ij} $$

 

 Symmetric Tensor.  A symmetric tensor $S$ has the property

$$ S = S^T $$

The components of a symmetric tensor have the form

$$ [S] = \begin{pmatrix} S_{11} & S_{12} & S_{13} \\ S_{12} & S_{22} & S_{23} \\ S_{13} & S_{23} & S_{33} \end{pmatrix} $$

so that there are only six independent components of the tensor, instead of nine.   Symmetric tensors have some nice properties:

 

·         The eigenvectors of a symmetric tensor with distinct eigenvalues are orthogonal.   To see this, let $m_1, m_2$ be two eigenvectors, with corresponding distinct eigenvalues $\lambda_1, \lambda_2$.  Then $m_2 \cdot (S \cdot m_1) = \lambda_1\, m_1 \cdot m_2$ and $m_1 \cdot (S \cdot m_2) = \lambda_2\, m_1 \cdot m_2$; since $S$ is symmetric these two products are equal, so $(\lambda_1 - \lambda_2)\, m_1 \cdot m_2 = 0$, and hence $m_1 \cdot m_2 = 0$.

 

·         The eigenvalues of a symmetric tensor are real.  To see this, suppose that $\lambda, m$ are a complex eigenvalue/eigenvector pair, and let $\bar{\lambda}, \bar{m}$ denote their complex conjugates.   Then, by definition, $S \cdot m = \lambda\, m$, and hence $\bar{m} \cdot (S \cdot m) = \lambda\, \bar{m} \cdot m$ and $m \cdot (S \cdot \bar{m}) = \bar{\lambda}\, m \cdot \bar{m}$.  But note that for a symmetric tensor $\bar{m} \cdot (S \cdot m) = m \cdot (S \cdot \bar{m})$.  Thus $(\lambda - \bar{\lambda})\, \bar{m} \cdot m = 0$, so $\lambda = \bar{\lambda}$ and $\lambda$ is real.

 

The eigenvalues of a symmetric tensor can be computed explicitly from the trigonometric solution of the characteristic cubic:

$$ \lambda_k = \frac{I_1}{3} + \frac{2}{3}\sqrt{I_1^2 - 3I_2}\; \cos\!\left[ \frac{1}{3}\arccos\!\left( \frac{2I_1^3 - 9 I_1 I_2 + 27 I_3}{2\,(I_1^2 - 3I_2)^{3/2}} \right) - \frac{2\pi k}{3} \right], \qquad k = 0, 1, 2 $$

The eigenvectors can then be found by back-substitution into $(S - \lambda I) \cdot m = 0$.  To do this, note that the matrix equation can be written as

$$ \begin{pmatrix} S_{11} - \lambda & S_{12} & S_{13} \\ S_{12} & S_{22} - \lambda & S_{23} \\ S_{13} & S_{23} & S_{33} - \lambda \end{pmatrix} \begin{pmatrix} m_1 \\ m_2 \\ m_3 \end{pmatrix} = \begin{pmatrix} 0 \\ 0 \\ 0 \end{pmatrix} $$

Since the determinant of the matrix is zero, we can discard any row in the equation system and take any column over to the right hand side.   For example, if the tensor has at least one eigenvector with $m_3 \neq 0$, then the values of $m_1, m_2$ for this eigenvector can be found by discarding the third row, and writing

$$ \begin{pmatrix} S_{11} - \lambda & S_{12} \\ S_{12} & S_{22} - \lambda \end{pmatrix} \begin{pmatrix} m_1 \\ m_2 \end{pmatrix} = -\begin{pmatrix} S_{13} \\ S_{23} \end{pmatrix} m_3 $$

 
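A short numpy sketch of the explicit trigonometric solution (hypothetical symmetric components; the formula as written assumes distinct eigenvalues, so that $I_1^2 - 3I_2 > 0$), compared against a library eigensolver:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # symmetric, so all eigenvalues are real

I1 = np.trace(S)
I2 = 0.5 * (I1 ** 2 - np.trace(S @ S))
I3 = np.linalg.det(S)

# Trigonometric solution of the characteristic cubic (distinct eigenvalues)
p = I1 ** 2 - 3.0 * I2
arg = (2.0 * I1 ** 3 - 9.0 * I1 * I2 + 27.0 * I3) / (2.0 * p ** 1.5)
arg = np.clip(arg, -1.0, 1.0)   # guard against round-off
k = np.arange(3)
lam = I1 / 3.0 + (2.0 / 3.0) * np.sqrt(p) * np.cos(np.arccos(arg) / 3.0
                                                   - 2.0 * np.pi * k / 3.0)

print(np.allclose(np.sort(lam), np.linalg.eigvalsh(S)))   # True
```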

·         Spectral decomposition of a symmetric tensor.  Let $S$ be a symmetric second order tensor, and let $\lambda_i, e_i$ be the three eigenvalues and (mutually orthogonal) eigenvectors of $S$.  Then $S$ can be expressed as

$$ S = \sum_{i=1}^{3} \lambda_i\, e_i \otimes e_i $$

To see this, note that $S$ can always be expanded as a sum of 9 dyadic products of an orthogonal basis, $S = S_{ij}\, e_i \otimes e_j$ with $S_{ij} = e_i \cdot (S \cdot e_j)$.  But since the $e_i$ are eigenvectors, it follows that $S_{ij} = e_i \cdot (\lambda_j\, e_j) = \lambda_j\, \delta_{ij}$ (no sum on $j$).  A numerical sketch of this decomposition is given below.

 
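The following minimal numpy sketch (hypothetical symmetric components) reassembles $S$ from its eigenvalues and eigenvectors, as promised above:

```python
import numpy as np

S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 4.0]])   # symmetric

lam, e = np.linalg.eigh(S)        # eigenvalues, orthonormal eigenvectors (columns)

# Reassemble S as the sum of lam_i * (e_i outer e_i)
S_rebuilt = sum(lam[i] * np.outer(e[:, i], e[:, i]) for i in range(3))
print(np.allclose(S, S_rebuilt))  # True
```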

 Skew Tensor.  A skew tensor $S$ has the property

$$ S = -S^T $$

The components of a skew tensor have the form

$$ [S] = \begin{pmatrix} 0 & S_{12} & S_{13} \\ -S_{12} & 0 & S_{23} \\ -S_{13} & -S_{23} & 0 \end{pmatrix} $$

 

Every second-order skew tensor has a dual vector $w$ that satisfies

$$ S \cdot u = w \times u $$

for all vectors $u$.   You can see this by writing out the tensor and cross products explicitly in components and matching terms.  In index notation, we can also write

$$ w_i = -\frac{1}{2}\, \epsilon_{ijk}\, S_{jk}, \qquad S_{ij} = -\epsilon_{ijk}\, w_k $$

 
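A short numpy sketch (hypothetical dual vector) verifying the correspondence between a skew tensor and its dual vector:

```python
import numpy as np

w = np.array([0.3, -0.5, 0.8])   # dual vector
u = np.array([1.0, 2.0, 3.0])

# Skew tensor with dual vector w: S_ij = -eps_ijk w_k
S = np.array([[0.0, -w[2], w[1]],
              [w[2], 0.0, -w[0]],
              [-w[1], w[0], 0.0]])

print(np.allclose(S, -S.T))                  # S is skew
print(np.allclose(S @ u, np.cross(w, u)))    # S u = w x u

# Recover the dual vector: w_i = -(1/2) eps_ijk S_jk
w_back = -0.5 * np.array([S[1, 2] - S[2, 1],
                          S[2, 0] - S[0, 2],
                          S[0, 1] - S[1, 0]])
print(np.allclose(w_back, w))                # True
```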

 Orthogonal Tensors.  An orthogonal tensor $R$ has the property

$$ R \cdot R^T = R^T \cdot R = I $$

An orthogonal tensor must have $\det(R) = \pm 1$; a tensor with $\det(R) = +1$ is known as a proper orthogonal tensor.  Orthogonal tensors also have some interesting and useful properties:

·         Orthogonal tensors map a vector onto another vector with the same length.  To see this, let $u$ be an arbitrary vector.  Then, note that $|R \cdot u|^2 = (R \cdot u) \cdot (R \cdot u) = u \cdot (R^T \cdot R) \cdot u = u \cdot u = |u|^2$.

·         The eigenvalues of an orthogonal tensor are $\lambda = \pm 1$ or $e^{\pm i\theta}$ for some value of $\theta$.  To see this, let $m$ be an eigenvector, with corresponding eigenvalue $\lambda$.  By definition, $R \cdot m = \lambda\, m$.  Hence, $|\lambda|\, |m| = |R \cdot m| = |m|$, so $|\lambda| = 1$.  Similarly, the complex conjugate $\bar{\lambda}$ is also an eigenvalue, so complex eigenvalues occur in pairs $e^{\pm i\theta}$.  Since the characteristic equation is cubic, there must be at most three eigenvalues, and at least one eigenvalue must be real (and hence equal to $\pm 1$).

 

Proper orthogonal tensors can be visualized physically as rotations.  A rotation can also be represented in several other forms besides a proper orthogonal tensor.   For example

·         The Rodrigues representation quantifies a rotation as an angle of rotation $\theta$ (in radians) about some axis $n$ (specified by a unit vector).  Given $R$, there are various ways to compute $n$ and $\theta$.  For example, one way would be to find the eigenvalues and the real eigenvector.  The real eigenvector (suitably normalized) must correspond to $n$; the complex eigenvalues give $e^{\pm i\theta}$.  A faster method is to note that

$$ \cos\theta = \frac{\mathrm{tr}(R) - 1}{2}, \qquad W = \frac{R - R^T}{2\sin\theta} $$

where $W$ is the skew tensor whose dual vector is $n$, so $n$ can be read off the components of $W$.

·         Alternatively, given $n$ and $\theta$, $R$ can be computed from

$$ R = I + \sin\theta\, W + (1 - \cos\theta)\, W \cdot W $$

where $W$ is the skew tensor that has $n$ as its dual vector, i.e. $W \cdot u = n \times u$.  In index notation, this formula is

$$ R_{ij} = \cos\theta\, \delta_{ij} + (1 - \cos\theta)\, n_i\, n_j - \sin\theta\, \epsilon_{ijk}\, n_k $$

A numerical sketch of this construction follows below.
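As promised above, a minimal numpy sketch of the Rodrigues formula (hypothetical axis and angle), verifying that the result is a proper orthogonal tensor that leaves the axis fixed:

```python
import numpy as np

theta = 0.6                              # rotation angle in radians
n = np.array([1.0, 2.0, 2.0]) / 3.0      # unit axis

# Skew tensor with dual vector n
W = np.array([[0.0, -n[2], n[1]],
              [n[2], 0.0, -n[0]],
              [-n[1], n[0], 0.0]])

R = np.eye(3) + np.sin(theta) * W + (1.0 - np.cos(theta)) * (W @ W)

print(np.allclose(R @ R.T, np.eye(3)))                        # R is orthogonal
print(np.isclose(np.linalg.det(R), 1.0))                      # proper orthogonal
print(np.isclose((np.trace(R) - 1.0) / 2.0, np.cos(theta)))   # recover theta
print(np.allclose(R @ n, n))                                  # axis is unchanged
```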

 

 

 

Another useful result is the Polar Decomposition Theorem, which states that any invertible second order tensor $F$ can be expressed as a product of an orthogonal tensor with a symmetric tensor:

$$ F = R \cdot U = V \cdot R $$

where $U, V$ are symmetric and $R$ is orthogonal.

Moreover, the tensors $R, U, V$ are unique.  To see this, note that

·         $F^T \cdot F$ is symmetric and has positive eigenvalues (to see that it's symmetric, simply take the transpose, and to see that the eigenvalues are positive, note that $dx \cdot (F^T \cdot F) \cdot dx = |F \cdot dx|^2 > 0$ for all vectors $dx \neq 0$).

·         Let $\lambda_i^2$ and $m_i$ be the three eigenvalues and eigenvectors of $F^T \cdot F$.  Since the eigenvectors are orthogonal, we can write $F^T \cdot F = \sum_i \lambda_i^2\, m_i \otimes m_i$.

·         We can then set $U = \sum_i \lambda_i\, m_i \otimes m_i$ and define $R = F \cdot U^{-1}$.  $U$ is clearly symmetric, and also $U \cdot U = F^T \cdot F$.  To see that $R$ is orthogonal, note that $R^T \cdot R = U^{-T} \cdot F^T \cdot F \cdot U^{-1} = U^{-1} \cdot (U \cdot U) \cdot U^{-1} = I$.

·         Given that $U$ and $R$ exist, we can write $F = R \cdot U = (R \cdot U \cdot R^T) \cdot R$, so if we define $V = R \cdot U \cdot R^T$ then $F = V \cdot R$.   It is easy to show that $V$ is symmetric.

·         To see that the decomposition is unique, suppose that $F = \hat{R} \cdot \hat{U}$ for some other tensors $\hat{R}, \hat{U}$.  Then $\hat{U} \cdot \hat{U} = \hat{U}^T \cdot \hat{R}^T \cdot \hat{R} \cdot \hat{U} = F^T \cdot F$.  But $F^T \cdot F$ has a unique symmetric, positive definite square root, so $\hat{U} = U$.  The uniqueness of $R = F \cdot U^{-1}$ follows immediately.  A numerical sketch of this construction follows below.
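As promised, a minimal numpy sketch of the polar decomposition, constructing $U$ by the spectral recipe used in the proof (the components of $F$ are hypothetical):

```python
import numpy as np

F = np.array([[1.2, 0.3, 0.0],
              [0.1, 0.9, 0.2],
              [0.0, 0.4, 1.1]])   # an invertible second order tensor

# Square root of F^T F via its spectral decomposition, as in the proof above
lam2, m = np.linalg.eigh(F.T @ F)   # eigenvalues lam_i^2 > 0, eigenvectors m_i
U = (m * np.sqrt(lam2)) @ m.T       # U = sum_i lam_i m_i (x) m_i
R = F @ np.linalg.inv(U)            # R = F U^-1
V = R @ U @ R.T                     # left stretch, so that F = V R

print(np.allclose(R @ R.T, np.eye(3)))   # R is orthogonal
print(np.allclose(U, U.T))               # U is symmetric
print(np.allclose(F, R @ U))             # F = R U
print(np.allclose(F, V @ R))             # F = V R
```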