MATHEMATICAL PHYSICS
UNIT – 7
Tensor Algebra
PRESENTED BY: DR. RAJESH MATHPAL
ACADEMIC CONSULTANT
SCHOOL OF SCIENCES
U.O.U. TEENPANI, HALDWANI
UTTARAKHAND
MOB:9758417736,7983713112
STRUCTURE OF UNIT
• 7.1. INTRODUCTION
• 7.2. n-DIMENSIONAL SPACE
• 7.3. CO-ORDINATE TRANSFORMATIONS
• 7.4. INDICIAL AND SUMMATION CONVENTIONS
• 7.5. DUMMY AND REAL INDICES
• 7.6. KRONECKER DELTA SYMBOL
• 7.7. SCALARS, CONTRAVARIANT VECTORS AND COVARIANT VECTORS
• 7.8. TENSORS OF HIGHER RANKS
• 7.9. SYMMETRIC AND ANTISYMMETRIC TENSORS
• 7.10. ALGEBRAIC OPERATIONS ON TENSORS
7.1. INTRODUCTION
• Tensors are mathematical objects that generalize scalars, vectors and matrices to higher dimensions. If you are familiar with basic linear algebra, you should have no trouble understanding what tensors are. In short, a one-dimensional tensor can be represented as a vector.
• The term rank of a tensor extends the notion of the rank of a matrix in linear algebra, although the term is also often used to mean the order (or degree) of a tensor. The rank of a matrix is the minimum number of column vectors needed to span the range of the matrix.
• A tensor is a vector or matrix of n dimensions that represents all types of data. All values in a tensor hold an identical data type with a known (or partially known) shape. The shape of the data is the dimensionality of the matrix or array. A tensor can originate from the input data or from the result of a computation.
• Tensors are simply mathematical objects that can be used to describe physical properties, just like scalars and vectors. In fact, tensors are merely a generalisation of scalars and vectors: a scalar is a zero-rank tensor, and a vector is a first-rank tensor.
• Pressure itself is looked upon as a scalar quantity; the related tensor quantity often talked about is the stress tensor: pressure is the negative one-third of the sum of the diagonal components of the stress tensor (the Einstein summation convention, in which repeated indices imply a sum, is used here).
• Tensors are to multilinear functions as linear maps are to single-variable functions. If you want to apply techniques of linear algebra to problems that depend linearly on more than one variable (usually problems that are more than one-dimensional), the objects you are studying are tensors.
• A tensor field has a tensor corresponding to each point of space. An example is the stress on a material, such as a construction beam in a bridge. Other examples of tensors include the strain tensor, the conductivity tensor, and the inertia tensor.
• A tensor is a generalization of vectors and matrices and is easily understood as a multidimensional array. In machine learning, the training and operation of deep learning models can be described in terms of tensors.
• The tensor is a more generalized form of scalar and vector; that is, scalars and vectors are special cases of tensors. If a tensor has only magnitude and no direction (i.e., a rank-0 tensor), it is called a scalar. If a tensor has magnitude and one direction (i.e., a rank-1 tensor), it is called a vector.
• Tensors are a type of data structure used in linear algebra, and, like vectors and matrices, tensors support arithmetic operations. Tensors are a generalization of matrices and are represented using n-dimensional arrays.
• A tensor is a container which can house data in N dimensions. Often, and erroneously, used interchangeably with the matrix (which is specifically a 2-dimensional tensor), tensors are generalizations of matrices to N-dimensional space. Mathematically speaking, however, tensors are more than simply a data container.
• Stress is a tensor because it describes things happening in two directions simultaneously. Pressure is part of the stress tensor: the diagonal elements form the pressure. For example, $\sigma_{xx}$ measures how much x-force pushes in the x-direction. Think of your hand pressing against a wall, i.e., applying pressure.
7.2. n-DIMENSIONAL SPACE
• In three-dimensional space a point is determined by a set of three numbers called the co-ordinates of that point in a particular system. For example, (x, y, z) are the co-ordinates of a point in the rectangular Cartesian co-ordinate system. By analogy, if a point is represented by an ordered set of n real variables $(x_1, x_2, x_3, \ldots, x_i, \ldots, x_n)$, or more conveniently $(x^1, x^2, x^3, \ldots, x^i, \ldots, x^n)$ [here the suffixes 1, 2, 3, …, i, …, n denote variables and not powers of the variables involved], then all the points corresponding to all values of the co-ordinates (i.e., the variables) are said to form an n-dimensional space, denoted by $V_n$.
• A curve in n-dimensional space ($V_n$) is defined as the collection of points which satisfy the n equations
$$x^i = x^i(u), \quad (i = 1, 2, 3, \ldots, n)$$
where u is a parameter and the $x^i(u)$ are n functions of u which satisfy certain continuity conditions.
• A sub-space $V_m$ (m < n) of $V_n$ is defined as the collection of points which satisfy the n equations
$$x^i = x^i(u^1, u^2, \ldots, u^m), \quad (i = 1, 2, \ldots, n)$$
where $u^1, u^2, \ldots, u^m$ are m parameters and the $x^i(u^1, u^2, \ldots, u^m)$ are n functions of $u^1, u^2, \ldots, u^m$ which satisfy certain continuity conditions.
7.3. CO-ORDINATE TRANSFORMATIONS
• Tensor analysis is intimately connected with the subject of co-ordinate transformations.
• Consider two sets of variables $(x^1, x^2, x^3, \ldots, x^n)$ and $(\bar{x}^1, \bar{x}^2, \bar{x}^3, \ldots, \bar{x}^n)$ which determine the co-ordinates of a point in an n-dimensional space in two different frames of reference. Let the two sets of variables be related to each other by the transformation equations
$$\bar{x}^1 = \phi^1(x^1, x^2, x^3, \ldots, x^n)$$
$$\bar{x}^2 = \phi^2(x^1, x^2, x^3, \ldots, x^n)$$
$$\vdots$$
$$\bar{x}^n = \phi^n(x^1, x^2, x^3, \ldots, x^n)$$
or briefly
$$\bar{x}^\mu = \phi^\mu(x^1, x^2, x^3, \ldots, x^i, \ldots, x^n), \quad (i = 1, 2, 3, \ldots, n) \quad \ldots(7.1)$$
where the functions $\phi^\mu$ are single-valued, continuously differentiable functions of the co-ordinates. It is essential that the n functions $\phi^\mu$ be independent.
Equations (7.1) can be solved for the co-ordinates $x^i$ as functions of $\bar{x}^\mu$ to yield
$$x^i = \psi^i(\bar{x}^1, \bar{x}^2, \bar{x}^3, \ldots, \bar{x}^\mu, \ldots, \bar{x}^n) \quad \ldots(7.2)$$
Equations (7.1) and (7.2) are said to define co-ordinate transformations.
From equations (7.1) the differentials $d\bar{x}^\mu$ transform as
$$d\bar{x}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^1}\,dx^1 + \frac{\partial \bar{x}^\mu}{\partial x^2}\,dx^2 + \cdots + \frac{\partial \bar{x}^\mu}{\partial x^n}\,dx^n = \sum_{i=1}^{n} \frac{\partial \bar{x}^\mu}{\partial x^i}\,dx^i, \quad (\mu = 1, 2, 3, \ldots, n) \quad \ldots(7.3)$$
7.4. INDICIAL AND SUMMATION CONVENTIONS
Let us now introduce the following two conventions:
(1) Indicial convention. Any index, used either as a subscript or a superscript, will take all values from 1 to n unless the contrary is specified. Thus equations (7.1) can be briefly written as
$$\bar{x}^\mu = \phi^\mu(x^i) \quad \ldots(7.4)$$
The convention reminds us that there are n equations with $\mu = 1, 2, \ldots, n$, and that the $\phi^\mu$ are functions of the n co-ordinates with $i = 1, 2, \ldots, n$.
(2) Einstein's summation convention. If any index is repeated in a term, then a summation with respect to that index over the range 1, 2, 3, …, n is implied. This convention is called Einstein's summation convention. According to this convention, instead of the expression $\sum_{i=1}^{n} a_i x^i$ we merely write $a_i x^i$.
Using the above two conventions, eqn. (7.3) is written as
$$d\bar{x}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^i}\,dx^i \quad \ldots(7.5a)$$
Thus the summation convention means dropping the sigma sign for an index appearing twice in a given term. In other words, the summation convention implies summing a term, over the defined range, on any index that appears twice in that term.
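As an illustration (added here; not part of the original slides), the summation convention is exactly what NumPy's einsum implements: a repeated index in the subscript string is summed over.

```python
import numpy as np

# Summation convention: a_i x^i means the sum over the repeated index i.
a = np.array([1.0, 2.0, 3.0])
x = np.array([4.0, 5.0, 6.0])

explicit = sum(a[i] * x[i] for i in range(3))  # sigma written out
implied = np.einsum('i,i->', a, x)             # repeated index i is summed
print(explicit, float(implied))                # both give 32.0
```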
7.5. DUMMY AND REAL INDICES
• Any index which is repeated in a given term, so that the summation convention applies, is called a dummy index, and it may be replaced freely by any other index not already used in that term. For example, i is a dummy index in $a^\mu_i x^i$. Also, i is a dummy index in eqn. (7.5a), so that equation (7.5a) may equally be written as
$$d\bar{x}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^k}\,dx^k = \frac{\partial \bar{x}^\mu}{\partial x^\lambda}\,dx^\lambda \quad \ldots(7.5b)$$
Also, two or more dummy indices can be interchanged. In order to avoid confusion, the same index must not be used more than twice in any single term. For example, $(a_i x^i)^2$ will not be written as $a_i x^i a_i x^i$ but rather as $a_i a_j x^i x^j$.
• Any index which is not repeated in a given term is called a real index. For example, $\mu$ is a real index in $a^\mu_i x^i$. A real index cannot be replaced by another real index, e.g.
$$a^\mu_i x^i \neq a^\nu_i x^i$$
7.6. KRONECKER DELTA SYMBOL
• The Kronecker delta symbol is defined as
$$\delta^j_k = \begin{cases} 1 & \text{if } j = k \\ 0 & \text{if } j \neq k \end{cases} \quad \ldots(7.6)$$
• Some properties of the Kronecker delta:
(i) If $x^1, x^2, x^3, \ldots, x^n$ are independent variables, then
$$\frac{\partial x^j}{\partial x^k} = \delta^j_k \quad \ldots(7.7)$$
(ii) An obvious property of the Kronecker delta symbol is
$$\delta^j_k A_j = A_k \quad \ldots(7.8)$$
since, by the summation convention, the summation on the left-hand side of this equation is with respect to j, and by the definition of the Kronecker delta the only surviving term is that for which j = k.
(iii) If we are dealing with n dimensions, then
$$\delta^j_j = \delta^k_k = n \quad \ldots(7.9)$$
By the summation convention,
$$\delta^j_j = \delta^1_1 + \delta^2_2 + \delta^3_3 + \cdots + \delta^n_n = 1 + 1 + 1 + \cdots + 1 = n$$
(iv)
$$\delta^i_j \delta^j_k = \delta^i_k \quad \ldots(7.10)$$
By the summation convention,
$$\delta^i_j \delta^j_k = \delta^i_1 \delta^1_k + \delta^i_2 \delta^2_k + \delta^i_3 \delta^3_k + \cdots + \delta^i_i \delta^i_k + \cdots + \delta^i_n \delta^n_k = 0 + 0 + 0 + \cdots + 1\cdot\delta^i_k + \cdots + 0 = \delta^i_k$$
(v)
$$\frac{\partial x^j}{\partial \bar{x}^i}\frac{\partial \bar{x}^i}{\partial x^k} = \frac{\partial x^j}{\partial x^k} = \delta^j_k \quad \ldots(7.11)$$
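A minimal NumPy sketch (an illustration added here, not from the slides): in components the Kronecker delta is just the identity matrix, and properties (ii)-(iv) can be checked directly.

```python
import numpy as np

n = 4
delta = np.eye(n)                         # delta[j, k] = 1 if j == k else 0

A = np.array([10.0, 20.0, 30.0, 40.0])
print(np.einsum('jk,j->k', delta, A))     # property (ii): delta^j_k A_j = A_k
print(np.einsum('jj->', delta))           # property (iii): delta^j_j = n -> 4.0
print(np.allclose(delta @ delta, delta))  # property (iv): delta^i_j delta^j_k = delta^i_k
```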
Generalised Kronecker Delta. The generalised Kronecker delta is symbolized as $\delta^{j_1 j_2 \ldots j_m}_{k_1 k_2 \ldots k_m}$ and defined as follows:
• The subscripts and superscripts can have any value from 1 to n.
• If at least two superscripts or at least two subscripts have the same value, or if the subscripts are not the same set of numbers as the superscripts, then the generalised Kronecker delta is zero. For example
$$\delta^{ikk}_{jkl} = \delta^{ijk}_{lmm} = \delta^{ijk}_{klm} = 0$$
• If all the subscripts are separately different and the subscripts are the same set of numbers as the superscripts, then the generalised Kronecker delta has the value +1 or −1 according to whether an even or an odd number of permutations is required to arrange the superscripts in the same order as the subscripts.
• For example
$$\delta^{123}_{123} = \delta^{123}_{231} = \delta^{1452}_{4125} = +1$$
and
$$\delta^{123}_{213} = \delta^{123}_{132} = \delta^{1452}_{4152} = -1$$
• It should be noted that
$$\delta^{i_1 i_2 i_3 \ldots i_n}_{1\,2\,3\,\ldots\,n}\;\delta^{1\,2\,3\,\ldots\,n}_{j_1 j_2 j_3 \ldots j_n} = \delta^{i_1 i_2 i_3 \ldots i_n}_{j_1 j_2 j_3 \ldots j_n}$$
7.7. SCALARS, CONTRAVARIANT VECTORS AND COVARIANT VECTORS
(a) Scalars. Consider a function $\phi$ in a co-ordinate system of variables $x^i$, and let this function have the value $\bar{\phi}$ in another system of variables $\bar{x}^\mu$. Then if
$$\bar{\phi} = \phi$$
the function $\phi$ is said to be a scalar, or an invariant, or a tensor of order zero.
The quantity
$$\delta^i_i = \delta^1_1 + \delta^2_2 + \delta^3_3 + \cdots + \delta^n_n = n$$
is a scalar or an invariant.
(b) Contravariant Vectors. Consider a set of n quantities $A^1, A^2, A^3, \ldots, A^n$ in a system of variables $x^i$, and let these quantities have values $\bar{A}^1, \bar{A}^2, \bar{A}^3, \ldots, \bar{A}^n$ in another co-ordinate system of variables $\bar{x}^\mu$. If these quantities obey the transformation relation
$$\bar{A}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^i}A^i \quad \ldots(7.12)$$
then the quantities $A^i$ are said to be the components of a contravariant vector or a contravariant tensor of first rank.
Any n functions can be chosen as the components of a contravariant vector in a system of variables $x^i$, and equations (7.12) then determine the n components in the new system of variables $\bar{x}^\mu$.
Multiplying equation (7.12) by $\frac{\partial x^j}{\partial \bar{x}^\mu}$ and taking the sum over the index $\mu$ from 1 to n, we get
$$\frac{\partial x^j}{\partial \bar{x}^\mu}\bar{A}^\mu = \frac{\partial x^j}{\partial \bar{x}^\mu}\frac{\partial \bar{x}^\mu}{\partial x^i}A^i = \frac{\partial x^j}{\partial x^i}A^i = A^j$$
or
$$A^j = \frac{\partial x^j}{\partial \bar{x}^\mu}\bar{A}^\mu \quad \ldots(7.13)$$
Equations (7.13) represent the solution of equations (7.12).
The transformation between the differentials $dx^i$ and $d\bar{x}^\mu$ in the systems of variables $x^i$ and $\bar{x}^\mu$ respectively, from eqn. (7.5a), is given by
$$d\bar{x}^\mu = \frac{\partial \bar{x}^\mu}{\partial x^i}\,dx^i \quad \ldots(7.14)$$
As equations (7.12) and (7.14) are similar transformation equations, we can say that the differentials $dx^i$ form the components of a contravariant vector, whose components in any other system are the differentials $d\bar{x}^\mu$ of that system. We also conclude that the components of a contravariant vector are actually the components of a contravariant tensor of rank one.
Let us now consider a further change of variables from $\bar{x}^\mu$ to $x'^p$; then the new components $A'^p$ must be given by
$$A'^p = \frac{\partial x'^p}{\partial \bar{x}^\mu}\bar{A}^\mu = \frac{\partial x'^p}{\partial \bar{x}^\mu}\frac{\partial \bar{x}^\mu}{\partial x^i}A^i \quad \text{(using 7.12)}$$
$$= \frac{\partial x'^p}{\partial x^i}A^i \quad \ldots(7.15)$$
This equation has the same form as eqn. (7.12). This indicates that the transformations of contravariant vectors form a group.
Note. A single superscript is always used to indicate a contravariant vector unless the contrary is explicitly stated.
Covariant vectors. Consider a set of n quantities $A_1, A_2, A_3, \ldots, A_n$ in a system of variables $x^i$, and let these quantities have values $\bar{A}_1, \bar{A}_2, \bar{A}_3, \ldots, \bar{A}_n$ in another system of variables $\bar{x}^\mu$. If these quantities obey the transformation equations
$$\bar{A}_\mu = \frac{\partial x^i}{\partial \bar{x}^\mu}A_i \quad \ldots(7.16)$$
then the quantities $A_i$ are said to be the components of a covariant vector or a covariant tensor of rank one.
Any n functions can be chosen as the components of a covariant vector in a system of variables $x^i$, and equations (7.16) determine the n components in the new system of variables $\bar{x}^\mu$. Multiplying equation (7.16) by $\frac{\partial \bar{x}^\mu}{\partial x^j}$ and taking the sum over the index $\mu$ from 1 to n, we get
$$\frac{\partial \bar{x}^\mu}{\partial x^j}\bar{A}_\mu = \frac{\partial \bar{x}^\mu}{\partial x^j}\frac{\partial x^i}{\partial \bar{x}^\mu}A_i = \frac{\partial x^i}{\partial x^j}A_i = A_j$$
thus
$$A_j = \frac{\partial \bar{x}^\mu}{\partial x^j}\bar{A}_\mu \quad \ldots(7.17)$$
Equations (7.17) represent the solution of equations (7.16).
Let us now consider a further change of variables from $\bar{x}^\mu$ to $x'^p$. Then the new components $A'_p$ must be given by
$$A'_p = \frac{\partial \bar{x}^\mu}{\partial x'^p}\bar{A}_\mu = \frac{\partial \bar{x}^\mu}{\partial x'^p}\frac{\partial x^i}{\partial \bar{x}^\mu}A_i = \frac{\partial x^i}{\partial x'^p}A_i \quad \ldots(7.18)$$
This equation has the same form as eqn. (7.16). This indicates that the transformations of covariant vectors form a group.
As
$$\frac{\partial \psi}{\partial \bar{x}^\mu} = \frac{\partial \psi}{\partial x^i}\frac{\partial x^i}{\partial \bar{x}^\mu} = \frac{\partial x^i}{\partial \bar{x}^\mu}\frac{\partial \psi}{\partial x^i}$$
it follows from (7.16) that the $\frac{\partial \psi}{\partial x^i}$ form the components of a covariant vector, whose components in any other system are the corresponding partial derivatives $\frac{\partial \psi}{\partial \bar{x}^\mu}$. This covariant vector is called grad $\psi$.
Note. A single subscript is always used to indicate a covariant vector unless the contrary is explicitly stated; the exception occurs in the notation of the co-ordinates themselves.
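The two transformation laws (7.12) and (7.16) can be checked numerically. The following sketch (an added illustration; the polar-to-Cartesian map is an assumed example, not from the slides) builds the Jacobian $\partial\bar{x}^\mu/\partial x^i$ for $(r, \theta) \to (x, y)$ and verifies that the contraction $A^i B_i$ of a contravariant with a covariant vector is an invariant.

```python
import numpy as np

# Change of variables: (x^1, x^2) = (r, t) -> (xbar^1, xbar^2) = (r cos t, r sin t).
r, t = 2.0, 0.3
J = np.array([[np.cos(t), -r * np.sin(t)],       # J[mu, i] = d xbar^mu / d x^i
              [np.sin(t),  r * np.cos(t)]])
Jinv = np.linalg.inv(J)                          # Jinv[i, mu] = d x^i / d xbar^mu

A = np.array([0.5, 0.1])                         # contravariant components A^i
A_bar = np.einsum('mi,i->m', J, A)               # eqn (7.12)

B = np.array([1.0, 2.0])                         # covariant components B_i
B_bar = np.einsum('im,i->m', Jinv, B)            # eqn (7.16)

# A^i B_i is a scalar: the same number in both coordinate systems.
print(np.dot(A, B), np.dot(A_bar, B_bar))        # 0.7 and 0.7
```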
7.8. TENSORS OF HIGHER RANKS
The laws of transformation of vectors are:
Contravariant: $\bar{A}^\mu = \dfrac{\partial \bar{x}^\mu}{\partial x^i}A^i$ …(7.12)
Covariant: $\bar{A}_\mu = \dfrac{\partial x^i}{\partial \bar{x}^\mu}A_i$ …(7.16)
(a) Contravariant tensors of second rank. Let us consider $n^2$ quantities $A^{ij}$ (here i and j take values from 1 to n independently) in a system of variables $x^i$, and let these quantities have values $\bar{A}^{\mu\nu}$ in another system of variables $\bar{x}^\mu$. If these quantities obey the transformation equations
$$\bar{A}^{\mu\nu} = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}A^{ij} \quad \ldots(7.19)$$
then the quantities $A^{ij}$ are said to be the components of a contravariant tensor of second rank.
The transformation law represented by (7.19) is the generalisation of the transformation law (7.12). Any set of $n^2$ quantities can be chosen as the components of a contravariant tensor of second rank in a system of variables $x^i$, and equations (7.19) then determine the $n^2$ components in any other system of variables $\bar{x}^\mu$.
(b) Covariant tensor of second rank. If $n^2$ quantities $A_{ij}$ in a system of variables $x^i$ are related to another $n^2$ quantities $\bar{A}_{\mu\nu}$ in another system of variables $\bar{x}^\mu$ by the transformation equations
$$\bar{A}_{\mu\nu} = \frac{\partial x^i}{\partial \bar{x}^\mu}\frac{\partial x^j}{\partial \bar{x}^\nu}A_{ij} \quad \ldots(7.20)$$
then the quantities $A_{ij}$ are said to be the components of a covariant tensor of second rank.
The transformation law (7.20) is a generalisation of (7.16). Any set of $n^2$ quantities can be chosen as the components of a covariant tensor of second rank in a system of variables $x^i$, and equations (7.20) then determine the $n^2$ components in any other system of variables $\bar{x}^\mu$.
(c) Mixed tensor of second rank. If $n^2$ quantities $A^i_j$ in a system of variables $x^i$ are related to another $n^2$ quantities $\bar{A}^\mu_\nu$ in another system of variables $\bar{x}^\mu$ by the transformation equations
$$\bar{A}^\mu_\nu = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial x^j}{\partial \bar{x}^\nu}A^i_j \quad \ldots(7.21)$$
then the quantities $A^i_j$ are said to be the components of a mixed tensor of second rank.
An important example of a mixed tensor of second rank is the Kronecker delta $\delta^i_j$.
(d) Tensors of higher ranks; rank of a tensor. Tensors of higher ranks are defined by similar laws. The rank of a tensor simply indicates the number of indices attached to each of its components. For example, $A^{ijk}_l$ are the components of a mixed tensor of rank 4, contravariant of rank 3 and covariant of rank 1, if they transform according to the equation
$$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{ijk}_l \quad \ldots(7.22)$$
The number of dimensions raised to the power of the rank gives the number of components of the tensor: a tensor of rank r in n-dimensional space has $n^r$ components. Thus the rank of a tensor gives the number of modes of change of a physical quantity when passing from one system to another which is in rotation relative to the first. Obviously a quantity that remains unchanged when the axes are rotated is a tensor of rank zero. Tensors of rank zero are scalars or invariants, and similarly tensors of rank one are vectors.
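As an added sketch of the rank-2 law (7.19) (assuming, for simplicity, a constant linear transformation, here a rotation of the axes): transforming the outer product $T^{ij} = A^i B^j$ with two Jacobian factors agrees with transforming each vector first.

```python
import numpy as np

th = 0.7
J = np.array([[np.cos(th), -np.sin(th)],          # J[mu, i] = d xbar^mu / d x^i
              [np.sin(th),  np.cos(th)]])

A = np.array([1.0, 2.0])
B = np.array([3.0, -1.0])
T = np.outer(A, B)                                # T^{ij} = A^i B^j, rank 2

T_bar_law = np.einsum('mi,nj,ij->mn', J, J, T)    # eqn (7.19): one J per index
T_bar_vec = np.outer(J @ A, J @ B)                # transform the vectors first
print(np.allclose(T_bar_law, T_bar_vec))          # True
```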
7.9. SYMMETRIC AND ANTISYMMETRIC TENSORS
(a) Symmetric tensors. If two contravariant or covariant indices can be interchanged without altering the tensor, then the tensor is said to be symmetric with respect to these two indices. For example, if
$$A^{ij} = A^{ji} \quad \text{or} \quad A_{ij} = A_{ji} \quad \ldots(7.23)$$
then the contravariant tensor of second rank $A^{ij}$, or the covariant tensor $A_{ij}$, is said to be symmetric.
For a tensor of higher rank $A^{ijk}_l$, if
$$A^{ijk}_l = A^{jik}_l$$
then the tensor $A^{ijk}_l$ is said to be symmetric with respect to the indices i and j.
The symmetry property of a tensor is independent of the co-ordinate system used. So if a tensor is symmetric with respect to two indices in any co-ordinate system, it remains symmetric with respect to these two indices in any other co-ordinate system.
This can be seen as follows. If the tensor $A^{ijk}_l$ is symmetric with respect to the first two indices i and j, we have
$$A^{ijk}_l = A^{jik}_l \quad \ldots(7.24)$$
We have
$$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{ijk}_l = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{jik}_l \quad \text{(using 7.24)}$$
Now interchanging the dummy indices i and j, we get
$$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^j}\frac{\partial \bar{x}^\nu}{\partial x^i}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{ijk}_l = \bar{A}^{\nu\mu\sigma}_\rho$$
i.e., the given tensor is again symmetric with respect to the first two indices in the new co-ordinate system. This result can also be proved for covariant indices. Thus the symmetry property of a tensor is independent of the co-ordinate system.
Let $A^{ijk}_l$ be symmetric with respect to two indices, one contravariant, i, and the other covariant, l; then we have
$$A^{ijk}_l = A^{ljk}_i \quad \ldots(7.25)$$
We have
$$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{ijk}_l = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{ljk}_i \quad \text{(using 7.25)}$$
Now interchanging the dummy indices i and l, we have
$$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial x^i}{\partial \bar{x}^\rho}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial \bar{x}^\mu}{\partial x^l}A^{ijk}_l \quad \ldots(7.26)$$
According to the tensor transformation law,
$$\bar{A}^{\rho\nu\sigma}_\mu = \frac{\partial \bar{x}^\rho}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\mu}A^{ijk}_l \quad \ldots(7.27)$$
Comparing (7.26) and (7.27), we see that
$$\bar{A}^{\mu\nu\sigma}_\rho \neq \bar{A}^{\rho\nu\sigma}_\mu$$
i.e., this symmetry is not preserved after a change of co-ordinate system. (The Kronecker delta, which is a mixed tensor, is nevertheless symmetric with respect to its indices.)
(b) Antisymmetric or skew-symmetric tensors. A tensor, each component of which alters in sign but not in magnitude when two contravariant or covariant indices are interchanged, is said to be skew-symmetric or antisymmetric with respect to these two indices.
For example, if
$$A^{ij} = -A^{ji} \quad \text{or} \quad A_{ij} = -A_{ji} \quad \ldots(7.28)$$
then the contravariant tensor $A^{ij}$ or the covariant tensor $A_{ij}$ of second rank is antisymmetric; or, for a tensor of higher rank $A^{ijk}_l$, if
$$A^{ijk}_l = -A^{ikj}_l$$
then the tensor $A^{ijk}_l$ is antisymmetric with respect to the indices j and k.
The skew-symmetry property of a tensor is also independent of the choice of co-ordinate system. So if a tensor is skew-symmetric with respect to two indices in any co-ordinate system, it remains skew-symmetric with respect to these two indices in any other co-ordinate system.
If the tensor $A^{ijk}_l$ is antisymmetric with respect to the first two indices i and j, we have
$$A^{ijk}_l = -A^{jik}_l \quad \ldots(7.29)$$
We have
$$\bar{A}^{\mu\nu\sigma}_\rho = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{ijk}_l = -\frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{jik}_l \quad \text{[using (7.29)]}$$
Now interchanging the dummy indices i and j, we get
$$\bar{A}^{\mu\nu\sigma}_\rho = -\frac{\partial \bar{x}^\mu}{\partial x^j}\frac{\partial \bar{x}^\nu}{\partial x^i}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{ijk}_l = -\bar{A}^{\nu\mu\sigma}_\rho$$
i.e., the given tensor is again antisymmetric with respect to the first two indices in the new co-ordinate system. Thus the antisymmetry property is retained under co-ordinate transformations.
The antisymmetry property, like the symmetry property, cannot be defined with respect to two indices of which one is contravariant and the other covariant.
If all the indices of a contravariant or covariant tensor can be interchanged so that its components change sign at each interchange of a pair of indices, the tensor is said to be antisymmetric, i.e.,
$$A^{ijk} = -A^{jik} = +A^{jki}$$
Thus we may state that a contravariant or covariant tensor is antisymmetric if its components change sign under an odd permutation of its indices and do not change sign under an even permutation of its indices.
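A short numerical sketch of definitions (7.23) and (7.28) (added here; the symmetric/antisymmetric split below is a standard consequence of the definitions rather than a statement from the slides): any second-rank tensor is the sum of a symmetric and an antisymmetric part.

```python
import numpy as np

T = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

S = 0.5 * (T + T.T)    # symmetric part:      S_{ij} =  S_{ji}, eqn (7.23)
A = 0.5 * (T - T.T)    # antisymmetric part:  A_{ij} = -A_{ji}, eqn (7.28)

print(np.allclose(S, S.T), np.allclose(A, -A.T), np.allclose(S + A, T))
# All True: T splits uniquely into symmetric plus antisymmetric parts.
```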
7.10. ALGEBRAIC OPERATIONS ON TENSORS
(i) Addition and subtraction: The addition and subtraction of tensors is defined only for tensors of the same rank and same type. Same type means the same number of contravariant and covariant indices. The addition or subtraction of two tensors, like that of vectors, involves the individual elements: to add or subtract two tensors the corresponding elements are added or subtracted.
The sum or difference of two tensors of the same rank and same type is also a tensor of the same rank and same type.
For example, if there are two tensors $A^{ij}_k$ and $B^{ij}_k$ of the same rank and same type, then the laws of addition and subtraction are given by
$$A^{ij}_k + B^{ij}_k = C^{ij}_k \quad \text{(Addition)} \quad \ldots(7.35)$$
$$A^{ij}_k - B^{ij}_k = D^{ij}_k \quad \text{(Subtraction)} \quad \ldots(7.36)$$
where $C^{ij}_k$ and $D^{ij}_k$ are tensors of the same rank and same type as the given tensors.
The transformation laws for the given tensors are
$$\bar{A}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^k}{\partial \bar{x}^\sigma}A^{ij}_k \quad \ldots(7.37)$$
and
$$\bar{B}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^k}{\partial \bar{x}^\sigma}B^{ij}_k \quad \ldots(7.38)$$
Adding (7.37) and (7.38), we get
$$\bar{C}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^k}{\partial \bar{x}^\sigma}C^{ij}_k \quad \ldots(7.39)$$
which is a transformation law for the sum and is similar to the transformation laws for $A^{ij}_k$ and $B^{ij}_k$ given by (7.37) and (7.38). Hence the sum $C^{ij}_k = A^{ij}_k + B^{ij}_k$ is itself a tensor of the same rank and same type as the given tensors.
Subtracting eqn. (7.38) from (7.37), we get
$$\bar{A}^{\mu\nu}_\sigma - \bar{B}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^k}{\partial \bar{x}^\sigma}\left(A^{ij}_k - B^{ij}_k\right)$$
or
$$\bar{D}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^k}{\partial \bar{x}^\sigma}D^{ij}_k \quad \ldots(7.40)$$
which is a transformation law for the difference and is again similar to the transformation laws for $A^{ij}_k$ and $B^{ij}_k$. Hence the difference $D^{ij}_k = A^{ij}_k - B^{ij}_k$ is itself a tensor of the same rank and same type as the given tensors.
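In array terms (an added sketch; NumPy arrays carry no covariant/contravariant distinction, so index placement is bookkeeping only), addition and subtraction are componentwise and preserve the index structure:

```python
import numpy as np

A = np.random.rand(3, 3, 3)   # components A^{ij}_k
B = np.random.rand(3, 3, 3)   # components B^{ij}_k, same rank and type

C = A + B                     # C^{ij}_k, eqn (7.35)
D = A - B                     # D^{ij}_k, eqn (7.36)
print(C.shape == A.shape == D.shape)   # True: same rank and same type
```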
(ii) Equality of tensors: Two tensors of the same rank and same type are said to be equal if their components are equal one to one, i.e., if
$$A^{ij}_k = B^{ij}_k$$
for all values of the indices.
If two tensors are equal in one co-ordinate system, they will be equal in any other co-ordinate system. Thus if a particular equation is expressed in tensorial form, it will be invariant under co-ordinate transformations.
(iii) Outer product: The outer product of two tensors is a tensor whose rank is the sum of the ranks of the given tensors.
Thus if r and r′ are the ranks of two tensors, their outer product will be a tensor of rank (r + r′).
For example, if $A^{ij}_k$ and $B^l_m$ are two tensors of ranks 3 and 2 respectively, then
$$A^{ij}_k B^l_m = C^{ijl}_{km} \text{ (say)} \quad \ldots(7.41)$$
is a tensor of rank 5 (= 3 + 2).
For proof of this statement we write the transformation equations of the given tensors as
$$\bar{A}^{\mu\nu}_\sigma = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^k}{\partial \bar{x}^\sigma}A^{ij}_k \quad \ldots(7.42)$$
$$\bar{B}^\rho_\lambda = \frac{\partial \bar{x}^\rho}{\partial x^l}\frac{\partial x^m}{\partial \bar{x}^\lambda}B^l_m \quad \ldots(7.43)$$
Multiplying (7.42) and (7.43), we get
$$\bar{A}^{\mu\nu}_\sigma \bar{B}^\rho_\lambda = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^k}{\partial \bar{x}^\sigma}\frac{\partial \bar{x}^\rho}{\partial x^l}\frac{\partial x^m}{\partial \bar{x}^\lambda}A^{ij}_k B^l_m$$
or
$$\bar{C}^{\mu\nu\rho}_{\sigma\lambda} = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\rho}{\partial x^l}\frac{\partial x^k}{\partial \bar{x}^\sigma}\frac{\partial x^m}{\partial \bar{x}^\lambda}C^{ijl}_{km} \quad \ldots(7.44)$$
which is a transformation law for a tensor of rank 5. Hence the outer product of the two tensors $A^{ij}_k$ and $B^l_m$ is a tensor $C^{ijl}_{km}$ of rank (3 + 2 =) 5.
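A small sketch of the outer product (added illustration): ranks add because the component arrays simply multiply index by index.

```python
import numpy as np

A = np.array([1.0, 2.0])            # rank-1 components A^i
B = np.array([[0.0, 1.0],
              [2.0, 3.0]])          # rank-2 components B^l_m

C = np.einsum('i,lm->ilm', A, B)    # outer product C^{il}_m
print(C.shape)                      # (2, 2, 2): rank 1 + rank 2 = rank 3
```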
(iv) Contraction of tensors: The algebraic operation by which the rank of a mixed tensor is lowered by 2 is known as contraction. In the process of contraction one contravariant index and one covariant index of a mixed tensor are set equal, and the repeated index is summed over; the result is a tensor of rank lower by two than that of the original tensor.
For example, consider a mixed tensor $A^{ijk}_{lm}$ of rank 5 with contravariant indices i, j, k and covariant indices l, m.
The transformation law of the given tensor is
$$\bar{A}^{\mu\nu\sigma}_{\rho\lambda} = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}\frac{\partial x^m}{\partial \bar{x}^\lambda}A^{ijk}_{lm} \quad \ldots(7.45)$$
To apply the process of contraction, we put $\lambda = \sigma$ and obtain
$$\bar{A}^{\mu\nu\sigma}_{\rho\sigma} = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^l}{\partial \bar{x}^\rho}\frac{\partial x^m}{\partial \bar{x}^\sigma}A^{ijk}_{lm} = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^l}{\partial \bar{x}^\rho}\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^m}{\partial \bar{x}^\sigma}A^{ijk}_{lm} = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^l}{\partial \bar{x}^\rho}\,\delta^m_k\,A^{ijk}_{lm}$$
since the substitution operator gives
$$\frac{\partial \bar{x}^\sigma}{\partial x^k}\frac{\partial x^m}{\partial \bar{x}^\sigma} = \delta^m_k$$
i.e.,
$$\bar{A}^{\mu\nu\sigma}_{\rho\sigma} = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial \bar{x}^\nu}{\partial x^j}\frac{\partial x^l}{\partial \bar{x}^\rho}A^{ijk}_{lk} \quad \ldots(7.46)$$
which is a transformation law for a mixed tensor of rank 3. Hence $A^{ijk}_{lk}$ is a mixed tensor of rank 3 and may be denoted by $A^{ij}_l$. In this example we can further apply the contraction process and obtain the contravariant vector $A^{ij}_j$, i.e., $A^i$. Thus the process of contraction enables us to obtain a tensor of rank (r − 2) from a mixed tensor of rank r.
As another example, consider the contraction of the mixed tensor $A^i_j$ of rank 2, whose transformation law is
$$\bar{A}^\mu_\nu = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial x^j}{\partial \bar{x}^\nu}A^i_j$$
To apply the contraction process we put $\nu = \mu$ and obtain
$$\bar{A}^\mu_\mu = \frac{\partial \bar{x}^\mu}{\partial x^i}\frac{\partial x^j}{\partial \bar{x}^\mu}A^i_j = \delta^j_i A^i_j = A^i_i$$
i.e., the contraction of a mixed tensor of rank 2 is a scalar (an invariant).
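In NumPy (added sketch), contraction is a sum over one upper and one lower index; for a rank-2 mixed tensor it reduces to the trace.

```python
import numpy as np

A = np.arange(9.0).reshape(3, 3)     # components A^i_j
print(np.einsum('ii->', A))          # contraction A^i_i (the trace) = 12.0

T = np.arange(27.0).reshape(3, 3, 3) # components A^{ij}_k, rank 3
print(np.einsum('ijj->i', T))        # contracting j with k leaves a rank-1 A^i
```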
(v) Inner product: The outer product of two tensors followed by a contraction results in a new tensor called an inner product of the two tensors, and the process is called the inner multiplication of the two tensors.
Example (a). Consider two tensors $A^{ij}_k$ and $B^l_m$. The outer product of these two tensors is
$$A^{ij}_k B^l_m = C^{ijl}_{km} \text{ (say)}$$
Applying the contraction process by setting m = i, we obtain
$$A^{ij}_k B^l_i = C^{ijl}_{ki} = D^{jl}_k \text{ (a new tensor)}$$
The new tensor $D^{jl}_k$ is the inner product of the two tensors $A^{ij}_k$ and $B^l_m$.
(b) As another example, consider two tensors of rank 1, $A^i$ and $B_j$. The outer product of $A^i$ and $B_j$ is
$$A^i B_j = C^i_j$$
Applying the contraction process by setting i = j, we get
$$A^i B_i = C^i_i \text{ (a scalar, or a tensor of rank zero)}$$
Thus the inner product of two tensors of rank one is a tensor of rank zero (i.e., an invariant).
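A sketch of example (b) in NumPy (added illustration): the inner product is the outer product followed by a contraction, and for two rank-1 tensors it collapses to the familiar dot product.

```python
import numpy as np

A = np.array([1.0, 2.0, 3.0])      # A^i
B = np.array([4.0, 5.0, 6.0])      # B_j

C = np.einsum('i,j->ij', A, B)     # outer product C^i_j (rank 2)
print(np.einsum('ii->', C))        # contract i = j: scalar A^i B_i = 32.0
print(np.einsum('i,i->', A, B))    # the same inner product in one step
```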
(vi) Quotient law: In tensor analysis it is often necessary to ascertain whether a given entity is a tensor or not. The direct method requires us to find out whether the given entity obeys the tensor transformation law or not. In practice this is troublesome, and a simpler test is provided by a law known as the quotient law, which states:
An entity whose inner product with an arbitrary tensor (contravariant or covariant) is a tensor is itself a tensor.
(vii) Extension of rank: The rank of a tensor can be extended by differentiating each of its components with respect to the variables $x^i$.
As an example, consider the simple case in which the original tensor is of rank zero, i.e., a scalar $S(x^i)$ whose derivatives relative to the variables $x^i$ are $\frac{\partial S}{\partial x^i}$. In another system of variables $\bar{x}^\mu$ the scalar is $\bar{S}(\bar{x}^\mu)$, such that
$$\frac{\partial \bar{S}}{\partial \bar{x}^\mu} = \frac{\partial S}{\partial x^i}\frac{\partial x^i}{\partial \bar{x}^\mu} = \frac{\partial x^i}{\partial \bar{x}^\mu}\frac{\partial S}{\partial x^i} \quad \ldots(7.47)$$
This shows that the $\frac{\partial S}{\partial x^i}$ transform like the components of a tensor of rank one. Thus the differentiation of a tensor of rank zero gives a tensor of rank one. In general we may say that differentiation of a tensor with respect to the variables $x^i$ yields a new tensor of rank one greater than that of the original tensor.
The rank of a tensor can also be extended when a tensor depends upon another tensor and the differentiation with respect to that tensor is performed. As an example, consider a tensor S of rank zero (i.e., a scalar) depending upon another tensor $A^{ij}$; then
$$\frac{\partial S}{\partial A^{ij}} = B_{ij} = \text{a tensor of rank 2} \quad \ldots(7.48)$$
Thus the rank of the tensor of rank zero has been extended by 2.
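A numerical sketch of (7.47) (added here, with an assumed scalar field $S(x, y) = x^2 + y$): differentiating a rank-0 field componentwise produces a rank-1 field, the gradient.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
y = np.linspace(0.0, 1.0, 101)
X, Y = np.meshgrid(x, y, indexing='ij')
S = X**2 + Y                          # rank-0 (scalar) field

dS_dx, dS_dy = np.gradient(S, x, y)   # rank-1 field: components dS/dx^i
print(dS_dx[50, 50], dS_dy[50, 50])   # approx 2*0.5 = 1.0 and 1.0 at (0.5, 0.5)
```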
THANKS