1
Introduction to probability and random walk
2
● Requirement for a probabilistic approach to physics
● Examples of random walk in physical systems
● Random walk in one dimension
● Physical interpretation and definition of probability
● Probability for large N
3
Methods of statistical mechanics and random walk

Single-particle mechanics rests on differential equations such as Newton's equations of motion and the Schrödinger wave equation:

$$\vec{F} = m\vec{a}$$

$$i\hbar \frac{\partial \psi(x,t)}{\partial t} = -\frac{\hbar^2}{2m}\frac{\partial^2 \psi(x,t)}{\partial x^2} + V(x)\psi(x,t)$$

These address the problem of mechanics at the single-particle level: they predict the future position and momentum, or the wave function, from initial conditions. They are successful in predicting the properties of systems with mild coupling, where the interactions between constituents are limited.

Statistical mechanics applies statistical methods to arrive at useful properties of many-body systems. Typical examples of many-body systems are:
Atoms, molecules, macro-molecules
Fundamental particles: photons, phonons, etc.
4
● Requirement for a probabilistic approach to physics
● Examples of random walk in physical systems
● Random walk in one dimension
● Physical interpretation and definition of probability
● Probability for large N
5
Now we look at the collective properties of a collection of $N$ particles. Example: a gas in a chamber, whose microstate is given by the positions and momenta of particles $1 \ldots N$,

$$\{\vec{r}_1, \vec{r}_2, \ldots, \vec{r}_N ;\ \vec{p}_1, \vec{p}_2, \ldots, \vec{p}_N\}$$

A list of useful macroscopic properties: temperature $T$, pressure $P$, heat capacity $C_v$, compressibility $\chi_v$.

In equilibrium it is possible to obtain the thermodynamic quantities from these fundamental properties. Typically the thermodynamic variables are obtained directly from experiments.
6
Consider such a system of $N$ particles, $\{\vec{r}_1, \ldots, \vec{r}_N ;\ \vec{p}_1, \ldots, \vec{p}_N\}$, evolving in a chamber of volume $V$.

Let the total energy be the sum of the energies of the individual particles,

$$E = \sum_{i=1}^{N} e_i$$

Each $e_i$ increases or decreases in time and fluctuates around the mean $\langle e_i \rangle = E/N$.
7
For the same system in volume $V$, sample the energy of a particle at discrete times $\{t_1, t_2, \ldots, t_N\}$. This yields a sequence of energies $\{e_1, e_2, \ldots, e_N\}$ that fluctuates around $\langle e_i \rangle = E/N$: a discrete sampling of a random process.
8
● Requirement for a probabilistic approach to physics
● Examples of random walk in physical systems
● Random walk in one dimension
● Physical interpretation and definition of probability
● Probability for large N
9
Methods of statistical mechanics and random walk: random walk and probability

All the variables $\{\vec{r}_1, \ldots, \vec{r}_N ;\ \vec{p}_1, \ldots, \vec{p}_N\}$ of the constituent particles contained in a chamber of volume $V$ execute random variations.

A random walk is completely unpredictable. An example of a random walk is the sequence of outcomes of a coin tossed $N$ times, e.g. HTTTHHHTT; with the encoding $H = 1$, $T = 0$, the running record of tosses traces out a walk.
10
Whether the second toss gives a head is highly unpredictable. But when tossing is repeated, the nature of the walk changes: as $N \to \infty$, $P(H) = P(T) = \tfrac{1}{2}$.

Diffusion is the probability model for random walks repeated many times. The probability profile of such a walk, with many walkers, is seen in ink spreading in water.

[Figure: ink spreading in a bottle of water by diffusion; the black lines are paths of random walkers at times $t_1 < t_2 < t_3 < t_4$. The circles are lines of equal probability, where the concentration is constant; each gives the probability of finding at least one particle.]
11
The simplest case of a random walk is in one dimension. Let $H$ be the right move ($H = \rightarrow$) and $T$ the left move ($T = \leftarrow$). The sequence HTTTHHHTT then generates the single walk

$$\rightarrow\ \leftarrow\ \leftarrow\ \leftarrow\ \rightarrow\ \rightarrow\ \rightarrow\ \leftarrow\ \leftarrow$$

from the walker's starting point to its final destination.

A single walk is not repeated again in another set of coin tosses. As with diffusion in a liquid, a large number of experiments must be repeated and averaged to find the probability of an event. The probability of an event is reproducible, while individual random walks are not.
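A minimal sketch (not part of the original notes) of the walk just described: each coin flip moves the walker right or left, and two runs of the same experiment give different walks. The name `p_right` is an assumed parameter.

```python
# Sketch: generating 1D random walks from coin flips.
# H = step right (+1), T = step left (-1); p_right is an assumed parameter.
import random

def random_walk(n_steps, p_right=0.5):
    """Return the walker's position after each of n_steps coin-flip steps."""
    position = 0
    path = [position]
    for _ in range(n_steps):
        step = 1 if random.random() < p_right else -1  # H -> +1, T -> -1
        position += step
        path.append(position)
    return path

# Two walks from the same rules are (almost surely) different:
print(random_walk(9))
print(random_walk(9))
```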
12
Probability and random walk in one dimension

The random walk in one dimension is one of the simplest problems that can introduce the methods of probability theory applicable to a large class of problems.

Let us start with the most ideal case: the walker moves either right or left in a perfectly random fashion, with probabilities $p$ and $q$ respectively. Consider a single particle executing a random walk along a line of sites $\ldots, -2, -1, 0, 1, 2, \ldots$, starting from the origin.

Let each step of the walker be of length $l$ and let the total number of steps be $N$; in such a travel the total displacement is $x = ml$, expressed in terms of the (fundamental) length $l$.

The integer $m$ satisfies the condition $-N \leq m \leq N$, since the maximum length a walker could have traveled in the positive or in the negative direction is $N$ steps.
13
Let $n_1$ be the total number of right steps and $n_2$ the total number of left steps. Then the total number of steps is

$$n_1 + n_2 = N$$

and the net displacement of the walker is

$$m = n_1 - n_2$$

so the left and right moves are related by $n_2 = N - n_1$.

Let all steps be statistically independent (the decision on the next move does not depend on the previous move). Let a move to the right be taken with probability $p$; the probability of moving to the left is then $q = 1 - p$, with $q + p = 1$ in a typical random walk.

The probability for $n_1$ right steps and $n_2$ left steps in a particular order is $p^{n_1} q^{n_2}$. The number of distinct ways in which this can be achieved is

$$\frac{N!}{n_1!\, n_2!}$$
14
When the particle is at the origin (0 steps): $(p+q)^0 = 1$.

Consider a one-step random walk; the distribution of probability is

$$P(\rightarrow) + P(\leftarrow) = p + q$$

Consider a two-step random walk; the distribution of probability is

$$(P(\rightarrow) + P(\leftarrow))^2 = (p+q)^2 = P^2(\rightarrow) + \underbrace{P(\rightarrow)P(\leftarrow) + P(\leftarrow)P(\rightarrow)}_{\text{multiplicity}} + P^2(\leftarrow) = 1$$

a redistribution of the same total probability among the possible outcomes.

Probability is calculated after repeating the experiment many times. In a typical unbiased coin-toss experiment one has to repeat many times to get the ideal probability ½ for the head and the tail:

$$P = \frac{\text{number of times tail/head obtained}}{\text{total number of trials}}$$
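A minimal sketch (an assumption-free fair coin modeled with `random.random`) showing this frequency definition at work: the empirical frequency of heads approaches ½ as the number of trials grows.

```python
# Sketch: empirical frequency of heads converging to 1/2 with more trials.
import random

for n_trials in (10, 1000, 100000):
    heads = sum(random.random() < 0.5 for _ in range(n_trials))
    print(f"N = {n_trials:6d}: P(H) ~ {heads / n_trials:.4f}")
```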
15
For a larger number of steps the probability redistributes further; for $N = 3$,

$$(P(\rightarrow) + P(\leftarrow))^3 = (p+q)^3 = p^3 + \underbrace{3 p^2 q + 3 p q^2}_{\text{multiplicity}} + q^3 = 1$$

and for general $N$ we get the binomial series

$$(p+q)^N = \sum_{n=0}^{N} \frac{N!}{n!\,(N-n)!}\, p^n q^{N-n}$$

The total number of ways $N$ steps can be arranged among themselves is $N!$. Rearranging the right steps among themselves, or the left steps among themselves, produces the same walk (the same multiplicity); these rearrangements must therefore be removed from $N!$ by dividing by $n_1!$ and $n_2!$.
16
The total probability that a random walk of $N$ steps contains $n_1$ right steps is given by

$$W_N(n_1) = \frac{N!}{n_1!\, n_2!}\, p^{n_1} q^{n_2}$$

This distribution $W_N(n_1)$ can easily be identified as the expansion term in the binomial series

$$(p+q)^N = \sum_{n=0}^{N} \frac{N!}{n!\,(N-n)!}\, p^n q^{N-n}$$

The probability to obtain a total displacement $m$ is the same as the probability of making $n_1$ displacements towards the right. It is more meaningful to express the probability in terms of the total displacement, by the change of variables (using $n_1 + n_2 = N$, $m = n_1 - n_2$, $n_2 = N - n_1$):

$$n_1 = \tfrac{1}{2}(N+m), \qquad n_2 = \tfrac{1}{2}(N-m)$$

$$P_N(m) = \frac{N!}{[(N+m)/2]!\,[(N-m)/2]!}\; p^{(N+m)/2}\, q^{(N-m)/2}$$
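A minimal sketch (not from the notes) of these two distributions, using `math.comb` for the multiplicity $N!/(n_1!\,n_2!)$; the function names `W` and `P` mirror the symbols above.

```python
# Sketch: the binomial walk distribution W_N(n1) and its displacement form P_N(m).
from math import comb

def W(N, n1, p=0.5):
    """Probability of n1 right steps in an N-step walk."""
    q = 1.0 - p
    return comb(N, n1) * p**n1 * q**(N - n1)

def P(N, m, p=0.5):
    """Probability of net displacement m = n1 - n2 (N + m must be even)."""
    return W(N, (N + m) // 2, p)

N = 10
print(sum(W(N, n1) for n1 in range(N + 1)))   # normalization: 1.0
print(P(N, 0), P(N, 2), P(N, 10))             # peak at m = 0 for p = q = 1/2
```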
17
For the unbiased random walk, $p = q = \tfrac{1}{2}$, so

$$W_N(n_1) = P_N(m) = \frac{N!}{[(N+m)/2]!\,[(N-m)/2]!} \left(\frac{1}{2}\right)^{N}$$

What does this mean, and what do the various quantities represent? Between the extremes $n_1 = 0$ and $n_1 = N$ the multiplicity increases towards the middle, and with it the probability.
18
[Figure: displacement versus steps.]
19
● Requirement for a probabilistic approach to physics
● Examples of random walk in physical systems
● Random walk in one dimension
● Physical interpretation and definition of probability
● Probability for large N
20
Random variables: definition and a few examples

A random variable is an outcome of an experiment that produces random values. Given an outcome $u$, we can introduce a random variable $X$ which is a function of $u$. A particular example is the identity function, given by $X(u) = u$.

Examples of a sample space:
1) The outcome of the throw of a die: the random variable can have six possibilities.
2) The result of flipping a coin: the random variable has two possibilities.
3) The outcome of a random walk of a walker for a total of $N$ steps: here the random variable can have $2^N$ possibilities.

An example of such a random walk is the change in velocity of a particular molecule, after a time $t$, in a system of particles at equilibrium.
21
Let $u = \{u_1, u_2, \ldots, u_M\}$ be the possible outcomes of a random variable, with respective probabilities $\{P(u_1), P(u_2), \ldots, P(u_M)\}$.

For the die experiment, $P(u_1) = P(u_2) = \ldots = P(u_6) = \tfrac{1}{6}$.

The average value of the faces of the die, from experiment (with $n_i$ the number of times face $i$ is obtained) and from the assumed possibilities:

$$\langle u \rangle = \frac{n_1 \times 1 + n_2 \times 2 + n_3 \times 3 + n_4 \times 4 + n_5 \times 5 + n_6 \times 6}{n_1 + n_2 + n_3 + n_4 + n_5 + n_6}
= \frac{\tfrac{1}{6} \times 1 + \tfrac{1}{6} \times 2 + \tfrac{1}{6} \times 3 + \tfrac{1}{6} \times 4 + \tfrac{1}{6} \times 5 + \tfrac{1}{6} \times 6}{\tfrac{1}{6} + \tfrac{1}{6} + \tfrac{1}{6} + \tfrac{1}{6} + \tfrac{1}{6} + \tfrac{1}{6}} = 3.5$$

the second form using the probabilities estimated from the frequencies, $P(u_i) = n_i / \sum_i n_i$.

In compact form, in general,

$$\langle u \rangle = \frac{\sum_{i=1}^{M} P(u_i)\, u_i}{\sum_{i=1}^{M} P(u_i)}$$

When the probability is normalized, $\sum_i P(u_i) = 1$, this becomes $\langle u \rangle = \sum_{i=1}^{M} P(u_i)\, u_i$. Therefore, for any function of the random variable,

$$\langle f(u) \rangle = \frac{\sum_i P(u_i)\, f(u_i)}{\sum_i P(u_i)}$$
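A minimal sketch (not from the notes) of the die average computed both ways: from simulated experimental frequencies $n_i$ and from the assumed $P(u_i) = 1/6$.

```python
# Sketch: <u> for a die, from frequencies and from assumed probabilities.
import random
from collections import Counter

rolls = [random.randint(1, 6) for _ in range(100000)]
counts = Counter(rolls)

empirical = sum(face * n for face, n in counts.items()) / len(rolls)
assumed = sum(face * (1 / 6) for face in range(1, 7))
print(empirical, assumed)  # both close to 3.5
```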
22
Other notable properties of the averages of random variables: the sum of the averages of two functions equals the average of their sum,

$$\langle f(u) + g(u) \rangle = \langle f(u) \rangle + \langle g(u) \rangle$$

and averaging a function multiplied by a constant is equivalent to multiplying the average of the function by the same constant,

$$\langle c\, f(u) \rangle = c\, \langle f(u) \rangle$$

In a probability distribution the mean values are very useful; the simplest of them is the mean

$$\langle u \rangle = \sum_{i=1}^{M} P(u_i)\, u_i$$

This is the central value around which all values are distributed. The deviation from the central value is $\Delta u = u - \langle u \rangle$, with $\langle \Delta u \rangle = 0$.

Another useful average is the second moment of $u$ about its mean, known as the dispersion,

$$\langle (\Delta u)^2 \rangle = \sum_{i=1}^{M} P(u_i)\,(u_i - \langle u \rangle)^2 \geq 0$$

This gives the spread of the distribution around the mean. It may be expressed as

$$\langle (u - \langle u \rangle)^2 \rangle = \langle u^2 \rangle - \langle u \rangle^2$$

and, as this quantity is always positive, $\langle u^2 \rangle \geq \langle u \rangle^2$. This may be generalized to higher moments $\langle u^n \rangle$.
23
[Figure: typical variation of a random variable $f(u) = u$ over $N$ trials, showing the mean of the distribution $\langle f(u) \rangle$.]

The root-mean-square deviation is the square root of the dispersion, $\sqrt{\langle (u - \langle u \rangle)^2 \rangle} = \sqrt{\langle u^2 \rangle - \langle u \rangle^2}$.
24
[Figure: distributions of $f(u) = u$ over $N$ trials that differ in their moments.]
Blue: a distribution with all values at the mean.
Black: a distribution with the same mean as blue but a different dispersion.
Red: a distribution with the same mean as blue and black but a different third moment; it is asymmetrical with respect to the mean.
Green: a distribution with the same mean as blue, black and red, differing in all higher moments.
25
Mean values of the random walk problem

The probability distribution is given by

$$W_N(n_1) = \frac{N!}{n_1!\, n_2!}\, p^{n_1} q^{n_2} = \frac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$$

since $n_2 = N - n_1$. The normalization condition $\sum_i P(u_i) = 1$ can be verified from the binomial series expansion:

$$\sum_{n_1=0}^{N} \frac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1} = (p+q)^N = 1^N = 1, \qquad \text{where } 1 = p + q$$

The mean number of right steps is obtained from the relation $\langle f(u) \rangle = \sum_i f(u_i)\, P(u_i)$:

$$\langle n_1 \rangle = \sum_{n_1=0}^{N} n_1\, W(n_1) = \sum_{n_1=0}^{N} n_1\, \frac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$$

Evaluation of this sum is non-trivial, but possible using the relation

$$n_1\, p^{n_1} = p\, \frac{\partial}{\partial p}\, p^{n_1}$$
26
Substituting this relation we get

$$\langle n_1 \rangle = \sum_{n_1=0}^{N} n_1\, \frac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}
= \sum_{n_1=0}^{N} \frac{N!}{n_1!\,(N-n_1)!}\; p\, \frac{\partial (p^{n_1})}{\partial p}\; q^{N-n_1}$$

By interchanging the order of summation and differentiation,

$$= p\, \frac{\partial}{\partial p} \left[ \sum_{n_1=0}^{N} \frac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1} \right]$$

Using the binomial expansion,

$$= p\, \frac{\partial}{\partial p}\,(p+q)^N = N p\,(p+q)^{N-1}$$

and using the equation $1 = p + q$,

$$\langle n_1 \rangle = N p$$

This is physically meaningful: the average number of right steps equals the probability of moving to the right times the total number of steps.
27
Similarly, the average number of left steps is $\langle n_2 \rangle = N q$. The sum of the average left and right moves is

$$\langle n_1 \rangle + \langle n_2 \rangle = N$$

and the net displacement of the particle is

$$\langle m \rangle = \langle n_1 \rangle - \langle n_2 \rangle = N (p - q)$$

If the probabilities of left and right moves are equal, the net displacement is zero.

Dispersion of the random walk

The dispersion of the random walk is given by

$$\langle (\Delta n_1)^2 \rangle = \langle (n_1 - \langle n_1 \rangle)^2 \rangle = \langle n_1^2 \rangle - \langle n_1 \rangle^2$$

We have $\langle n_1 \rangle = N p$; now we have to evaluate $\langle n_1^2 \rangle$. In terms of the sums,

$$\langle n_1^2 \rangle = \sum_{n_1=0}^{N} n_1^2\, W(n_1) = \sum_{n_1=0}^{N} n_1^2\, \frac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$$

Now using the relation

$$n_1^2\, p^{n_1} = n_1\, p\, \frac{\partial}{\partial p}\, p^{n_1} = \left( p\, \frac{\partial}{\partial p} \right)^2 p^{n_1}$$

we get

$$\langle n_1^2 \rangle = \sum_{n_1=0}^{N} \frac{N!}{n_1!\,(N-n_1)!} \left( p\, \frac{\partial}{\partial p} \right)^2 p^{n_1} q^{N-n_1}$$
28
$$\langle n_1^2 \rangle = \sum_{n_1=0}^{N} \frac{N!}{n_1!\,(N-n_1)!} \left( p\, \frac{\partial}{\partial p} \right)^2 p^{n_1} q^{N-n_1}
= \left( p\, \frac{\partial}{\partial p} \right)^2 \sum_{n_1=0}^{N} \frac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$$

$$= \left( p\, \frac{\partial}{\partial p} \right)^2 (p+q)^N
= \left( p\, \frac{\partial}{\partial p} \right) \left[ p N (p+q)^{N-1} \right]
= p \left[ N (p+q)^{N-1} + p N (N-1)(p+q)^{N-2} \right]$$

Using $q + p = 1$,

$$= p\,[N + p N (N-1)] = N p\,[1 + p N - p] = N p\,[q + p N] = N p q + (N p)^2 = N p q + \langle n_1 \rangle^2$$

since $\langle n_1 \rangle = N p$. Therefore the dispersion is given by

$$\langle (\Delta n_1)^2 \rangle = \langle (n_1 - \langle n_1 \rangle)^2 \rangle = \langle n_1^2 \rangle - \langle n_1 \rangle^2 = N p q$$
29
The dispersion of the displacement may also be calculated. Since $m = n_1 - n_2 = 2 n_1 - N$, the difference between the instantaneous and the average net displacement is

$$\Delta m = m - \langle m \rangle = (2 n_1 - N) - (2 \langle n_1 \rangle - N) = 2 (n_1 - \langle n_1 \rangle) = 2\, \Delta n_1$$

so $(\Delta m)^2 = 4 (\Delta n_1)^2$. Taking averages on both sides,

$$\langle (\Delta m)^2 \rangle = 4\, \langle (\Delta n_1)^2 \rangle = 4 N p q$$

For an unbiased random walk, $p = q = \tfrac{1}{2}$, this gives $\langle (\Delta m)^2 \rangle = N$.

The root-mean-square deviation is given by

$$\Delta^* n_1 = \langle (\Delta n_1)^2 \rangle^{1/2}$$

It is a linear measure of the width of the distribution. The relative width of the distribution is

$$\frac{\Delta^* n_1}{\langle n_1 \rangle} = \frac{\sqrt{N p q}}{N p} = \frac{\sqrt{q}}{\sqrt{N p}} = \frac{1}{\sqrt{N}} \quad \text{when } q = p = \tfrac{1}{2}$$

so the relative width of the distribution reduces as $N$ grows.
reduces
30
●
Requirement for a probabilistic approach to physics
●
Examples of random walk in physical systems
●
Random walk in one dimension
●
Physical interpretation and definition of probability
●
Probability distribution for large N
31
Probability distribution for large N

For large $N$ the binomial probability distribution has a pronounced, dominant maximum around $n_1 = \tilde{n}_1 \approx \langle n_1 \rangle$, and the value of the function decreases away from the maximum. It is useful to find an expression for $W(n_1)$ valid for large $N$.

Near the region where the maximum occurs, where $n_1$ is also large, the change in the distribution between adjacent values is small:

$$|W(n_1 + 1) - W(n_1)| \ll W(n_1)$$

We may therefore consider $W(n_1)$ as a continuous function of $n_1$; it is now permissible to take derivatives,

$$\frac{dW(n_1)}{d n_1} = 0 \qquad \text{and also} \qquad \frac{d \ln W(n_1)}{d n_1} = 0$$

the derivatives being evaluated very near the maximum, with the coordinate written as $n_1 = \tilde{n}_1 + \eta$.

When the change $\eta$ is sufficiently small it is possible to expand the function in a Taylor series around the point $\tilde{n}_1$. We choose to expand, instead of the original function, its logarithm. To see why, consider an approximate expression, valid for $y \ll 1$, for the function $f = (1+y)^{-N}$. A direct Taylor series expansion gives

$$f = 1 - N y + \tfrac{1}{2} N (N+1)\, y^2 - \ldots$$
32
$$f = 1 - N y + \tfrac{1}{2} N (N+1)\, y^2 - \ldots$$

For large $N$ one can have $N y > 1$ even when $y \ll 1$; the series then does not converge to a value and is difficult to truncate.

Now take the logarithm of this function and try an expansion:

$$\ln f = -N \ln(1+y) = -N \left( y - \tfrac{1}{2} y^2 + \ldots \right), \qquad f = e^{-N \left( y - \frac{1}{2} y^2 + \ldots \right)}$$

Here the powers of $N$ do not grow, and moreover we get an expression valid near $y \leq 1$.

Now expand the probability $\ln W(n_1)$ in a Taylor series:

$$\ln W(n_1) = \ln W(\tilde{n}_1) + B_1 \eta + \tfrac{1}{2} B_2 \eta^2 + \tfrac{1}{6} B_3 \eta^3 + \ldots, \qquad B_k = \left. \frac{d^k \ln W(n_1)}{d n_1^k} \right|_{\tilde{n}_1}$$

Since the derivatives are evaluated at the maximum, the function has the following properties: $B_1 = 0$ and $B_2 < 0$, which can be stated explicitly as $B_2 = -|B_2|$.
33
Writing $\tilde{W} = W(\tilde{n}_1)$ for the value at the peak, the probability distribution near the peak may be written as

$$W(n_1) = \tilde{W}\, e^{-\frac{1}{2}|B_2|\eta^2 + \frac{1}{6} B_3 \eta^3 + \ldots}$$

For sufficiently small $\eta$ the higher-order terms may be neglected:

$$W(n_1) = \tilde{W}\, e^{-\frac{1}{2}|B_2|\eta^2}$$

To look explicitly at the derivatives, start from the binomial probability distribution

$$W_N(n_1) = \frac{N!}{n_1!\,(N-n_1)!}\, p^{n_1} q^{N-n_1}$$

The logarithm of this expression is

$$\ln W_N(n_1) = \ln N! - \ln n_1! - \ln (N-n_1)! + n_1 \ln p + (N-n_1) \ln q$$

For differentiating we need an approximate expression for the derivative of a factorial:

$$\frac{d \ln n!}{d n} \simeq \frac{\ln (n+1)! - \ln n!}{1} = \ln \frac{(n+1)!}{n!} = \ln (n+1) \simeq \ln n \quad \text{for } n \gg 1$$

This formula may also be obtained from Stirling's approximation.
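A minimal sketch (not from the notes) checking $d(\ln n!)/dn \simeq \ln n$ numerically, using `math.lgamma`, since $\ln n! = \mathrm{lgamma}(n+1)$.

```python
# Sketch: finite-difference check that d(ln n!)/dn ~ ln n for n >> 1.
from math import lgamma, log

for n in (10, 100, 10000):
    finite_diff = lgamma(n + 2) - lgamma(n + 1)  # ln(n+1)! - ln n!
    print(n, finite_diff, log(n))                # the two agree for n >> 1
```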
34
Using this equation on $\ln W_N(n_1) = \ln N! - \ln n_1! - \ln(N-n_1)! + n_1 \ln p + (N-n_1) \ln q$,

$$\frac{d \ln W_N(n_1)}{d n_1} = -\ln n_1 + \ln (N - n_1) + \ln p - \ln q$$

The first derivative is zero at the maximum $n_1 = \tilde{n}_1$:

$$\ln \left[ \frac{(N - \tilde{n}_1)}{\tilde{n}_1} \frac{p}{q} \right] = 0
\quad\Longrightarrow\quad \frac{(N - \tilde{n}_1)}{\tilde{n}_1} \frac{p}{q} = 1
\quad\Longrightarrow\quad (N - \tilde{n}_1)\, p = \tilde{n}_1\, q$$

and, using $1 = p + q$,

$$\tilde{n}_1 = N p$$

On further differentiation,

$$\frac{d^2 \ln W_N(n_1)}{d n_1^2} = -\frac{1}{n_1} - \frac{1}{N - n_1}$$

Evaluating this expression near $n_1 = \tilde{n}_1 = N p$,

$$B_2 = -\frac{1}{N p} - \frac{1}{N - N p} = -\frac{1}{N p q}$$

which is negative, as required by $B_2 = -|B_2|$. On further differentiation it is possible to show that the higher-order terms can indeed be neglected.
35
It is good to look at the higher-order terms. From

$$\frac{d^2 \ln W_N(n_1)}{d n_1^2} = -\frac{1}{n_1} - \frac{1}{N - n_1}$$

a further differentiation gives

$$\frac{d^3 \ln W_N(n_1)}{d n_1^3} = \frac{1}{n_1^2} - \frac{1}{(N - n_1)^2}$$

Evaluated at $n_1 = \tilde{n}_1 = N p$ (so that $N - n_1 = N q$),

$$B_3 = \frac{1}{N^2 p^2} - \frac{1}{N^2 q^2} = \frac{q^2 - p^2}{N^2 p^2 q^2}$$

whose magnitude is small compared with $|B_2| = \frac{1}{N p q}$. At large $N$ the higher-order terms are safely ignored.
36
The value of the constant $\tilde{W}$ in the probability distribution can be evaluated by treating the variable as quasi-continuous. The summation of the probabilities may then be replaced by an integral:

$$\sum_{n_1=0}^{N} W(n_1) \simeq \int W(n_1)\, d n_1 = \int_{-\infty}^{\infty} W(\tilde{n}_1 + \eta)\, d\eta = 1$$

(for large $N$ the probability distribution has negligible contribution away from the maximum, so the limits may be extended to $\pm\infty$). With $W(n_1) = \tilde{W}\, e^{-\frac{1}{2}|B_2|\eta^2}$,

$$\tilde{W} \int_{-\infty}^{\infty} e^{-\frac{1}{2}|B_2|\eta^2}\, d\eta = 1
\quad\Longrightarrow\quad \tilde{W}\, \sqrt{\frac{2\pi}{|B_2|}} = 1$$

using the standard Gaussian integral $\int_{-\infty}^{\infty} du\; e^{-a u^2 + b u} = \sqrt{\pi/a}\; e^{b^2/4a}$. Therefore the probability distribution can finally be written as

$$W(n_1) = \sqrt{\frac{|B_2|}{2\pi}}\, \exp\left( -\tfrac{1}{2}|B_2|\,\eta^2 \right) = \sqrt{\frac{|B_2|}{2\pi}}\, \exp\left( -\tfrac{1}{2}|B_2|\,(n_1 - \tilde{n}_1)^2 \right)$$

Using the expression $B_2 = -\frac{1}{N p q}$ and $n_1 = \tilde{n}_1 + \eta$,

$$W(n_1) = \sqrt{\frac{1}{2\pi N p q}}\, \exp\left( -\frac{(n_1 - N p)^2}{2 N p q} \right)$$

Alternatively, since $\sqrt{\langle (\Delta n_1)^2 \rangle} = \sqrt{N p q}$, we may write this expression as

$$W(n_1) = \sqrt{\frac{1}{2\pi \langle (\Delta n_1)^2 \rangle}}\, \exp\left( -\frac{(n_1 - \tilde{n}_1)^2}{2 N p q} \right)$$
37
The Gaussian probability distribution

The expression for the probability distribution

$$W(n_1) = \sqrt{\frac{1}{2\pi N p q}}\, \exp\left( -\frac{(n_1 - N p)^2}{2 N p q} \right)$$

may be rearranged to get the probability distribution for the net displacement $m$. The number of right steps needed to obtain a net displacement of $m$ is $n_1 = \tfrac{1}{2}(N + m)$, so

$$n_1 - N p = \tfrac{1}{2}(N + m - 2 N p) = \tfrac{1}{2}\left[ m - N (p - q) \right]$$

$$P(m) = W\!\left( \frac{N + m}{2} \right) = \sqrt{\frac{1}{2\pi N p q}}\, \exp\left( -\frac{\left( m - N (p - q) \right)^2}{8 N p q} \right)$$

From the formula $m = 2 n_1 - N$, the variation in $m$ is in units of $\Delta m = \pm 2$. If we take the variation in $m$ as infinitesimally small we can consider the distribution as continuous; the fact that $m$ varies in even numbers is irrelevant when the total number of steps $N \to \infty$. We transform to the continuous variable using the relation $x = m l$, where $l$ is the length of each step.
38
● Requirement for a probabilistic approach to physics
● Examples of random walk in physical systems
● Random walk in one dimension
● Physical interpretation and definition of probability
● Probability distribution for large N
● Gaussian probability distribution
● Generalization of the results to many variables and unequal random walk
39
[Figure: the discrete distribution $P(m)$ over $m$, with $dx$ spanning $dx/2l$ allowed values of $m$, and its continuous envelope

$$\rho(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right),$$

alongside $P(m) = \sqrt{\dfrac{1}{2\pi N p q}}\, \exp\left( -\dfrac{\left( m - N(p-q) \right)^2}{8 N p q} \right)$.]
40
As $N \to \infty$ the probability distribution spreads over much larger values of $m$, so the variation of the probability between adjacent values of $m$ is negligible:

$$|P(m+2) - P(m)| \ll P(m)$$

For such distributions the variation in $m$ may be replaced with the continuous variable $x$, and a change in the variable corresponds to an interval from $x$ to $x + dx$. The transformation can be achieved by the following steps, using the assumption that $P(m)$ is the same over a small region of $x$ until the next allowed value of $m$ (at spacing 2) occurs. The interval $dx$ contains $dx/2l$ allowed values of $m$, so

$$\rho(x)\, dx = P(m)\, \frac{dx}{2l}$$

where $\rho(x)$ is the probability density and is independent of the magnitude of $dx$. Therefore the final probability density may be obtained from the relation

$$\rho(x) = \frac{P(m)}{2l} = \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\left( -\frac{(x - \mu)^2}{2\sigma^2} \right)$$

where, starting from $P(m) = \sqrt{\frac{1}{2\pi N p q}} \exp\left( -\frac{(m - N(p-q))^2}{8 N p q} \right)$ and $x = m l$,

$$\mu = (p - q)\, N l, \qquad \sigma = 2\sqrt{N p q}\; l$$

The final expression is the standard Gaussian probability distribution.
41
We generalize the assumptions used to derive the Gaussian distribution to many natural processes and say that all random walks involving a large number of steps result in a Gaussian distribution

$$\rho(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right)$$

Since the Gaussian distribution represents a probability, its integral over the whole range must yield one:

$$\int_{-\infty}^{\infty} \rho(x)\, dx = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right) dx = 1$$

using the standard integral of a Gaussian, $\int_{-\infty}^{+\infty} \exp(-a x^2)\, dx = \sqrt{\pi/a}$ (more generally, $\int_{-\infty}^{\infty} du\; e^{-a u^2 + b u} = \sqrt{\pi/a}\; e^{b^2/4a}$).

The constants of the Gaussian distribution can be identified from the properties of the distribution. The mean of the distribution is given by

$$\langle x \rangle = \int_{-\infty}^{\infty} x\, \rho(x)\, dx = \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{+\infty} x \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right) dx$$

Let $y = x - \mu$:

$$\langle x \rangle = \frac{1}{\sqrt{2\pi}\,\sigma} \left[ \int_{-\infty}^{+\infty} y \exp\left( -\frac{y^2}{2\sigma^2} \right) dy + \mu \int_{-\infty}^{+\infty} \exp\left( -\frac{y^2}{2\sigma^2} \right) dy \right]$$
42
Now, in

$$\langle x \rangle = \frac{1}{\sqrt{2\pi}\,\sigma} \left[ \int_{-\infty}^{+\infty} y \exp\left( -\frac{y^2}{2\sigma^2} \right) dy + \mu \int_{-\infty}^{+\infty} \exp\left( -\frac{y^2}{2\sigma^2} \right) dy \right]$$

the first integral yields $0$ (the integrand is odd) and the normalized second term yields $1$, so

$$\langle x \rangle = \mu$$

Note that this shows that, since the Gaussian distribution is symmetric around its peak, the average value is the value at the peak.

Now the dispersion of the distribution is given by

$$\langle (x - \mu)^2 \rangle = \int_{-\infty}^{\infty} (x - \mu)^2\, \rho(x)\, dx$$
43
$$\langle (x - \mu)^2 \rangle = \int_{-\infty}^{\infty} (x - \mu)^2\, \rho(x)\, dx
= \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{\infty} (x - \mu)^2 \exp\left( -\frac{(x-\mu)^2}{2\sigma^2} \right) dx$$

Let $y = x - \mu$ (so that $dy = dx$):

$$= \frac{1}{\sqrt{2\pi}\,\sigma} \int_{-\infty}^{+\infty} y^2 \exp\left( -\frac{y^2}{2\sigma^2} \right) dy$$

Writing $a = 1/2\sigma^2$ and using $\int_{-\infty}^{+\infty} \exp(-a y^2)\, dy = \sqrt{\pi/a}$, the integral yields

$$= \frac{-1}{\sqrt{2\pi}\,\sigma} \left( \frac{\partial}{\partial a} \right) \int_{-\infty}^{+\infty} \exp(-a y^2)\, dy
= \frac{-1}{\sqrt{2\pi}\,\sigma} \left( \frac{\partial}{\partial a} \right) \left( \pi^{1/2} a^{-1/2} \right)
= \frac{1}{\sqrt{2\pi}\,\sigma}\, \frac{\pi^{1/2}}{2}\, \left( \frac{1}{2\sigma^2} \right)^{-3/2} = \sigma^2$$

Now we have the results for the mean and the root-mean-square deviation,

$$\mu = \langle x \rangle = N (p - q)\, l, \qquad \sigma = 2\sqrt{N p q}\; l, \quad \text{i.e. } \langle (\Delta x)^2 \rangle = 4 N p q\, l^2$$

which are the same as the results obtained from the binomial distribution.
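A minimal sketch (not from the notes) checking these three integrals numerically on a fine grid; the values of `mu` and `sigma` are arbitrary choices.

```python
# Sketch: numerical check that the Gaussian density integrates to 1
# with mean mu and dispersion sigma^2.
import numpy as np

mu, sigma = 2.0, 1.5
x = np.linspace(mu - 10 * sigma, mu + 10 * sigma, 200001)
dx = x[1] - x[0]
rho = np.exp(-(x - mu) ** 2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

print((rho * dx).sum())                    # ~ 1.0      (normalization)
print((x * rho * dx).sum())                # ~ mu       (mean)
print(((x - mu) ** 2 * rho * dx).sum())    # ~ sigma^2  (dispersion)
```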
44
● Requirement for a probabilistic approach to physics
● Examples of random walk in physical systems
● Random walk in one dimension
● Physical interpretation and definition of probability
● Probability distribution for large N
● Gaussian probability distribution
● Generalization of the results to many variables and unequal random walk
45
General discussion on random walks

The random walk we have discussed so far is in one dimension; it needs to be generalized to many dimensions, and from discrete to continuous steps, to apply to real-world problems. The method of analysis we have used is called combinatorial analysis (an approach based on counting the number of ways different things can be arranged). Now we may generalize these methods to steps of variable length, with the use of multiple variables.

Consider two sets of random variables,

$$u = \{u_1, u_2, \ldots, u_M\}, \qquad v = \{v_1, v_2, \ldots, v_N\}$$

and let $P(u_i, v_j)$ be the joint probability of finding $u_i$ and $v_j$. The normalization condition for the probability distribution is

$$\sum_{i=1}^{M} \sum_{j=1}^{N} P(u_i, v_j) = 1$$
46
Example: the simultaneous throw of a die and toss of a coin. The sample space has twelve points,

$$(H,1), (H,2), (H,3), (H,4), (H,5), (H,6)$$
$$(T,1), (T,2), (T,3), (T,4), (T,5), (T,6)$$

The joint probability of getting a head plus a die value of 2 is $P(u_i, v_j) = \tfrac{1}{2} \times \tfrac{1}{6} = \tfrac{1}{12}$. Note that throwing the die and flipping the coin are independent events.

The marginal

$$P_u(u_i) = \sum_{j=1}^{N} P(u_i, v_j)$$

gives the probability that $u$ assumes the value $u_i$ irrespective of the value of $v$. Here each coin outcome pairs with six die values, $P_u(u_i) = 6/12 = 1/2$, and each die value pairs with two coin outcomes, $P_v(v_j) = 2/12 = 1/6$.
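A minimal sketch (not from the notes) of this coin-and-die joint distribution, its marginals, and the normalization sum.

```python
# Sketch: joint distribution of (coin, die), normalization, and marginals.
from itertools import product

coins = {"H": 0.5, "T": 0.5}
die = {face: 1 / 6 for face in range(1, 7)}

joint = {(c, f): pc * pf for (c, pc), (f, pf) in product(coins.items(), die.items())}

print(sum(joint.values()))                                # 1.0 (normalization)
print(sum(p for (c, f), p in joint.items() if c == "H"))  # 0.5 (marginal of coin)
print(sum(p for (c, f), p in joint.items() if f == 2))    # 1/6 (marginal of die)
```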
47
The coin-and-die example is a special case where both variables are statistically independent of each other:

$$P(u_i, v_j) = P_u(u_i)\, P_v(v_j)$$

Similar to the probability distribution of a single random variable,

$$\langle F(u,v) \rangle = \sum_{i=1}^{M} \sum_{j=1}^{N} P(u_i, v_j)\, F(u_i, v_j)$$

When the function depends on only one of the variables, the other probability can be summed out:

$$\langle f(u) \rangle = \sum_{i=1}^{M} \sum_{j=1}^{N} P(u_i, v_j)\, f(u_i) = \sum_{i=1}^{M} P_u(u_i)\, f(u_i)$$

The linearity

$$\langle F(u,v) + G(u,v) \rangle = \langle F(u,v) \rangle + \langle G(u,v) \rangle$$

can easily be proved by direct expansion of the sum. If the variables are statistically independent, then the average of a product also factorizes:

$$\langle f(u)\, g(v) \rangle = \sum_{i=1}^{M} \sum_{j=1}^{N} P(u_i, v_j)\, f(u_i)\, g(v_j) = \sum_{i=1}^{M} \sum_{j=1}^{N} P_u(u_i)\, f(u_i)\, P_v(v_j)\, g(v_j)$$
48
$$\langle f(u)\, g(v) \rangle = \sum_{i=1}^{M} P_u(u_i)\, f(u_i) \sum_{j=1}^{N} P_v(v_j)\, g(v_j)$$

$$\langle f(u)\, g(v) \rangle = \langle f(u) \rangle\, \langle g(v) \rangle$$

The average of the product is equal to the product of the averages. These results can be generalized to more than two variables.

Another generalization is to continuous probability distributions. Consider a variable in the continuous range $a_1 < u < a_2$. It is now possible to ask for the probability of finding the variable between $u$ and $u + du$. This probability is proportional to $du$ and is written as

$$\rho(u)\, du$$

where $\rho(u)$ is the probability density, independent of $du$.
49
The continuous variable can be related to the discrete description: subdivide the interval from $u$ to $u + du$ into cells of size $\delta u \ll du$, small enough that within a cell $\delta u$ the probability does not vary at all. The properties of discrete random variables then carry over to continuous variables:

$$\sum_{i=1}^{M} P(u_i) = 1 \quad\longrightarrow\quad \int_{a_1}^{a_2} \rho(u)\, du = 1$$

The average properties can be computed from this:

$$\langle f(u) \rangle = \sum_{i=1}^{M} f(u_i)\, P(u_i) \quad\longrightarrow\quad \langle f(u) \rangle = \int_{a_1}^{a_2} f(u)\, \rho(u)\, du$$
50
A continuous probability distribution can be related to a discrete one just as the Gaussian distribution arose from the 1d random walk, $\rho(x)\,dx = P(m)\,\frac{dx}{2l}$. For two variables,

$$\rho(u, v)\, du\, dv = P(u, v)\, \frac{du}{\delta u}\, \frac{dv}{\delta v}$$

where $\frac{du}{\delta u}\frac{dv}{\delta v}$ is the number of infinitesimal cells of magnitude $\delta u\, \delta v$ contained between $u$ and $u + du$ and between $v$ and $v + dv$, and $P(u,v)$ is the probability of one such cell.

The normalization condition is then given by

$$\int_{a_1}^{a_2} \int_{b_1}^{b_2} \rho(u, v)\, du\, dv = 1$$

with the averaging property

$$\langle F(u,v) \rangle = \int_{a_1}^{a_2} \int_{b_1}^{b_2} F(u,v)\, \rho(u,v)\, du\, dv$$
51
Mean values of the general random walk

In most random walks of real microscopic particles, the steps taken in fixed time intervals have different lengths. This means we have to extend the results for random walks of fixed step length to variable step length in order to apply them to natural systems: instead of fixed-size left-right steps we now have a continuous distribution of step sizes.

Let $s_i$ be the displacement in the $i$th step, and let $w(s_i)\, ds_i$ be the probability that this displacement lies in the interval between $s_i$ and $s_i + ds_i$, where $w(s_i)$ is the probability distribution of the displacement. The steps are statistically independent: $s_i$ is independent of $s_{i+1}$.

For example, a gas molecule in a 1d channel displaces according to its velocity, which is a random number depending on the temperature; the velocity distribution is the typical Maxwell-Boltzmann distribution, which we will derive later in this course.
52
Let the probability distribution $w(s_i)$ be the same for all steps. Similar to walks with fixed-size steps, the total displacement in $N$ steps is

$$x = \sum_{i=1}^{N} s_i$$

Using $\langle f(u) + g(u) \rangle = \langle f(u) \rangle + \langle g(u) \rangle$, the mean displacement is

$$\langle x \rangle = N \langle s \rangle, \qquad \text{where } \langle s \rangle = \int ds\; s\, w(s)$$

is the mean displacement per step. For the dispersion $\langle (\Delta x)^2 \rangle = \langle (x - \langle x \rangle)^2 \rangle$, note that

$$x - \langle x \rangle = \sum_{i} \left( s_i - \langle s_i \rangle \right), \qquad \text{i.e.} \qquad \Delta x = \sum_{i} \Delta s_i$$
53
$$\langle (\Delta x)^2 \rangle = \left\langle \sum_{i} \Delta s_i \sum_{j} \Delta s_j \right\rangle
= \left\langle \sum_{i} (\Delta s_i)^2 + \sum_{i} \sum_{j \neq i} \Delta s_i\, \Delta s_j \right\rangle$$

The deviation $\Delta s_i$ from the mean is symmetric about zero, and since one step is independent of another (statistical independence), the product $\Delta s_i\, \Delta s_j$ is also symmetric with respect to the zero line, so the cross terms average away:

$$\left\langle \sum_{i} \sum_{j \neq i} \Delta s_i\, \Delta s_j \right\rangle = 0$$

Therefore

$$\langle (\Delta x)^2 \rangle = \sum_{i} \langle (\Delta s_i)^2 \rangle = N \langle (\Delta s)^2 \rangle$$
54
Here

$$\langle (\Delta s)^2 \rangle = \int w(s)\, ds\, (\Delta s)^2$$

is the dispersion of the displacement per step, and $\langle (\Delta x)^2 \rangle = N \langle (\Delta s)^2 \rangle$ is the square of the width of the distribution; the root-mean-square deviation $\Delta^* x = \sqrt{\langle (\Delta x)^2 \rangle}$ is the width of the distribution.

When $\langle s \rangle \neq 0$, the mean $\langle x \rangle = N \langle s \rangle$ increases with $N$. In addition, the spread of the distribution around the mean also increases with $N$:

$$\langle (\Delta x)^2 \rangle^{1/2} = N^{1/2}\, \langle (\Delta s)^2 \rangle^{1/2}$$

The relative variation with respect to the mean is

$$\frac{\langle (\Delta x)^2 \rangle^{1/2}}{\langle x \rangle} = \frac{N^{1/2}\, \langle (\Delta s)^2 \rangle^{1/2}}{N\, \langle s \rangle}, \qquad \text{i.e.} \qquad \frac{\Delta^* x}{\langle x \rangle} = \frac{\Delta^* s}{\sqrt{N}\, \langle s \rangle}$$

so the relative width of the distribution reduces as $N$ grows.
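A minimal sketch (not from the notes) of a variable-step walk; the choice of $w(s)$ as uniform on $[0, 1]$ is an assumption made for illustration, with $\langle s \rangle = 1/2$ and $\langle (\Delta s)^2 \rangle = 1/12$.

```python
# Sketch: variable-step random walk, checking <x> = N<s> and <(dx)^2> = N<(ds)^2>.
import random

def walk(N):
    return sum(random.uniform(0.0, 1.0) for _ in range(N))  # assumed w(s): uniform

N, trials = 100, 20000
xs = [walk(N) for _ in range(trials)]
mean_x = sum(xs) / trials
disp_x = sum((x - mean_x) ** 2 for x in xs) / trials

print(mean_x, N * 0.5)        # ~ 50   (N<s>)
print(disp_x, N / 12)         # ~ 8.33 (N<(ds)^2>)
print(disp_x**0.5 / mean_x)   # relative width, small for large N
```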
55
The calculation of the probability distribution

The complete information about the random walk with differing step lengths is contained in the probability distribution of the total displacement $x = \sum_{i=1}^{N} s_i$.

The probability of finding the displacement between $x$ and $x + dx$ after $N$ steps, for a particular sequence of steps (the first between $s_1$ and $s_1 + ds_1$ with probability $w(s_1)\,ds_1$, the second between $s_2$ and $s_2 + ds_2$ with probability $w(s_2)\,ds_2$, and so on up to the $N$th between $s_N$ and $s_N + ds_N$ with probability $w(s_N)\,ds_N$), is the product of the probabilities of each step:

$$P(x)\, dx = \int \ldots \int w(s_1)\, w(s_2)\, w(s_3) \ldots w(s_N)\; ds_1\, ds_2\, ds_3 \ldots ds_N$$

subject to the constraint that

$$x < \sum_{i=1}^{N} s_i < x + dx$$

Because of this constraint, direct evaluation of the integral is difficult.
56
The constraint $x < \sum_{i=1}^{N} s_i < x + dx$ confines the combined displacement to a narrow window. One way to address this issue is to express the constraint in a mathematical form: for a random walk of many steps we may let $dx \to 0$, because at a large number of steps the distribution $P(x)$ becomes sharp around $x = N \langle s \rangle$ (the relative width of the distribution reduces).

The constraint can therefore be expressed as a delta function $\delta\!\left( x - \sum_{i=1}^{N} s_i \right)$, with

$$\int \delta\!\left( x - \sum_{i=1}^{N} s_i \right) dx = 1$$
57
Substituting this into the integral,

$$P(x)\, dx = \int_{-\infty}^{\infty} \ldots \int_{-\infty}^{\infty} w(s_1)\, w(s_2)\, w(s_3) \ldots w(s_N)\; \delta\!\left( x - \sum_{i=1}^{N} s_i \right) dx\; ds_1\, ds_2\, ds_3 \ldots ds_N$$

(in the small range of integration, $\delta\!\left( x - \sum_{i=1}^{N} s_i \right) dx = 1$). Therefore, removing the common factor $dx$,

$$P(x) = \int_{-\infty}^{\infty} \ldots \int_{-\infty}^{\infty} w(s_1)\, w(s_2)\, w(s_3) \ldots w(s_N)\; \delta\!\left( x - \sum_{i=1}^{N} s_i \right) ds_1\, ds_2\, ds_3 \ldots ds_N$$

Substituting the integral representation of the delta function,

$$\delta\!\left( x - \sum_{i=1}^{N} s_i \right) = \frac{1}{2\pi} \int dk\; e^{i k \left( \sum_{i=1}^{N} s_i - x \right)}$$

gives

$$P(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} \ldots \int_{-\infty}^{\infty} w(s_1)\, w(s_2)\, w(s_3) \ldots w(s_N) \int_{-\infty}^{\infty} dk\; e^{i k \left( \sum_{i=1}^{N} s_i - x \right)}\; ds_1\, ds_2\, ds_3 \ldots ds_N$$
58
Grouping each integral with its own variable,

$$P(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\; e^{-i k x} \int_{-\infty}^{\infty} ds_1\, w(s_1)\, e^{i k s_1} \int_{-\infty}^{\infty} ds_2\, w(s_2)\, e^{i k s_2} \ldots \int_{-\infty}^{\infty} ds_N\, w(s_N)\, e^{i k s_N}$$

Except for the first, all the integrals are identical; let

$$Q(k) = \int ds\; w(s)\, e^{i k s}$$

Then we arrive at the simple form

$$P(x) = \frac{1}{2\pi} \int dk\; e^{-i k x}\, Q^N(k)$$

For a very large number of steps this is further simplified.
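A minimal sketch (not from the notes) evaluating this formula numerically: $Q(k)$ is computed on a grid for an assumed step distribution $w(s)$ uniform on $[0, 1]$, raised to the power $N$, and transformed back; the result already matches the Gaussian limit derived on the following slides. The grid sizes are arbitrary choices.

```python
# Sketch: P(x) = (1/2pi) ∫ dk e^{-ikx} Q^N(k) by direct numerical quadrature,
# for w(s) uniform on [0, 1] (an assumption), compared with the Gaussian limit.
import numpy as np

N = 50
M = 400
s = (np.arange(M) + 0.5) / M              # midpoint grid on [0, 1]
ds = 1.0 / M
w = np.ones_like(s)                       # w(s) = 1 on [0, 1]

k = np.linspace(-60.0, 60.0, 4001)
dk = k[1] - k[0]
Q = (np.exp(1j * np.outer(k, s)) * w).sum(axis=1) * ds   # Q(k) = ∫ ds w(s) e^{iks}

mu, var = N * 0.5, N / 12.0               # N<s> and N<(ds)^2> for this w(s)
for x in (mu, mu + 2, mu + 4):
    P = (np.exp(-1j * k * x) * Q**N).sum() * dk / (2 * np.pi)
    gauss = np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
    print(x, P.real, gauss)               # the two columns nearly coincide
```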
59
In

$$P(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\; e^{-i k x}\, Q^N(k), \qquad Q(k) = \int_{-\infty}^{\infty} ds\; w(s)\, e^{i k s}$$

only small $k$ matters: when $k$ is small the integrand of $Q(k)$ contributes coherently, while when $k$ is large the oscillations of $e^{iks}$ across $w(s)$ make neighboring contributions cancel each other, so they are negligible.

To evaluate the integral, $e^{iks}$ is expanded in a Taylor series:

$$Q(k) = \int_{-\infty}^{\infty} ds\; w(s) \left[ 1 + i k s - \tfrac{1}{2} (k s)^2 + \ldots \right]$$

With the substituted values of the integrals

$$\langle s \rangle = \int_{-\infty}^{\infty} ds\; w(s)\, s, \qquad \langle s^2 \rangle = \int_{-\infty}^{\infty} ds\; w(s)\, s^2$$

this becomes

$$Q(k) = 1 + i k \langle s \rangle - \tfrac{1}{2} k^2 \langle s^2 \rangle + \ldots$$
60
$$Q(k) = 1 + i k \langle s \rangle - \tfrac{1}{2} k^2 \langle s^2 \rangle + \ldots, \qquad \langle s^n \rangle = \int ds\; w(s)\, s^n$$

These moments are finite since $|w(s)| \to 0$ as $|s| \to \infty$. For further simplification take the logarithm,

$$\ln Q^N(k) = N \ln \left[ 1 + i k \langle s \rangle - \tfrac{1}{2} k^2 \langle s^2 \rangle + \ldots \right]$$

Using the Taylor series expansion $\ln(1+y) = y - \tfrac{1}{2} y^2 + \ldots$ for $y \ll 1$, and ignoring terms beyond the square,

$$\ln Q^N(k) = N \left[ i \langle s \rangle k - \tfrac{1}{2} k^2 \langle s^2 \rangle - \tfrac{1}{2} \left( i \langle s \rangle k \right)^2 + \ldots \right]
= N \left[ i \langle s \rangle k - \tfrac{1}{2} k^2 \left( \langle s^2 \rangle - \langle s \rangle^2 \right) + \ldots \right]$$

$$= N \left[ i \langle s \rangle k - \tfrac{1}{2} k^2 \langle (\Delta s)^2 \rangle + \ldots \right], \qquad \langle (\Delta s)^2 \rangle = \langle s^2 \rangle - \langle s \rangle^2$$
61
From

$$\ln Q^N(k) = N \left[ i \langle s \rangle k - \tfrac{1}{2} k^2 \langle (\Delta s)^2 \rangle + \ldots \right]
\quad\Longrightarrow\quad
Q^N(k) = e^{\, i N \langle s \rangle k \, - \, \frac{1}{2} N k^2 \langle (\Delta s)^2 \rangle \, + \ldots}$$

Substituting into the probability distribution $P(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\; e^{-i k x}\, Q^N(k)$,

$$P(x) = \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\; e^{-i k x}\, e^{\, i N \langle s \rangle k - \frac{1}{2} N k^2 \langle (\Delta s)^2 \rangle}
= \frac{1}{2\pi} \int_{-\infty}^{\infty} dk\; e^{\, i \left( N \langle s \rangle - x \right) k \, - \, \frac{1}{2} N k^2 \langle (\Delta s)^2 \rangle}$$

This is a Gaussian integral in the standard form $\int_{-\infty}^{\infty} du\; e^{-a u^2 + b u} = \sqrt{\pi/a}\; e^{b^2/4a}$, which yields

$$P(x) = \frac{1}{\sqrt{2\pi \sigma^2}}\; e^{-\frac{(x - \mu)^2}{2\sigma^2}}, \qquad \mu = N \langle s \rangle, \qquad \sigma^2 = N \langle (\Delta s)^2 \rangle$$
62
We have arrived at a Gaussian distribution for much more general steps of the random walk. The conditions are: all the steps must be statistically independent, and the distribution of the steps must fall off, $|w(s)| \to 0$ as $|s| \to \infty$ (so that its moments are finite). When these conditions are satisfied, all natural distributions arising from random walks appear as Gaussian distributions as $N \to \infty$.

This result is known as the central limit theorem, one of the most important results of probability theory.
63
References
Fundamentals of Statistical and Thermal Physics by F. Reif, Chapter 1
Statistical Physics of Particles by Mehran Kardar, Chapter 2

More Related Content

PDF
1. Random walk.pdf
PDF
ch9.pdf
PPT
2D CFD Code Based on MATLAB- As Good As FLUENT!
PDF
Queueing theory
PDF
Multivriada ppt ms
PPTX
orthogonal.pptx
PPT
Orthogonal basis and gram schmidth process
PDF
Proceedings A Method For Finding Complete Observables In Classical Mechanics
1. Random walk.pdf
ch9.pdf
2D CFD Code Based on MATLAB- As Good As FLUENT!
Queueing theory
Multivriada ppt ms
orthogonal.pptx
Orthogonal basis and gram schmidth process
Proceedings A Method For Finding Complete Observables In Classical Mechanics

Similar to random walk lectures by p pj class notes (20)

PDF
Implementation of parallel randomized algorithm for skew-symmetric matrix game
PPTX
2 Review of Statistics. 2 Review of Statistics.
PDF
A Mathematical Model for the Hormonal Responses During Neurally Mediated Sync...
PDF
A Mathematical Model for the Hormonal Responses During Neurally Mediated Sync...
PDF
Parallel Numerical Methods for Ordinary Differential Equations: a Survey
PDF
The Generalized Difference Operator of the 퐧 퐭퐡 Kind
PDF
PTSP PPT.pdf
PPT
NUMERICAL METHODS
PPT
HUST-talk-1.pptof uncertainty quantification. Volume 6. Springer, 2017. SFK08...
PDF
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
PDF
Duel of cosmological screening lengths
PDF
Quantum chaos of generic systems - Marko Robnik
PDF
Notes.on.popularity.versus.similarity.model
PDF
E041046051
PPT
1. Logic and Proofs.ppt
PDF
EXACT SOLUTIONS OF SCHRÖDINGER EQUATION WITH SOLVABLE POTENTIALS FOR NON PT/P...
PPTX
Quantum%20Physics.pptx
PDF
DissertationSlides169
PDF
Lecture_Random Process_Part-1_July-Dec 2023.pdf
PDF
2 cooper
Implementation of parallel randomized algorithm for skew-symmetric matrix game
2 Review of Statistics. 2 Review of Statistics.
A Mathematical Model for the Hormonal Responses During Neurally Mediated Sync...
A Mathematical Model for the Hormonal Responses During Neurally Mediated Sync...
Parallel Numerical Methods for Ordinary Differential Equations: a Survey
The Generalized Difference Operator of the 퐧 퐭퐡 Kind
PTSP PPT.pdf
NUMERICAL METHODS
HUST-talk-1.pptof uncertainty quantification. Volume 6. Springer, 2017. SFK08...
2018 MUMS Fall Course - Statistical Representation of Model Input (EDITED) - ...
Duel of cosmological screening lengths
Quantum chaos of generic systems - Marko Robnik
Notes.on.popularity.versus.similarity.model
E041046051
1. Logic and Proofs.ppt
EXACT SOLUTIONS OF SCHRÖDINGER EQUATION WITH SOLVABLE POTENTIALS FOR NON PT/P...
Quantum%20Physics.pptx
DissertationSlides169
Lecture_Random Process_Part-1_July-Dec 2023.pdf
2 cooper
Ad

Recently uploaded (20)

PDF
Journal of Dental Science - UDMY (2022).pdf
DOCX
Cambridge-Practice-Tests-for-IELTS-12.docx
PDF
FOISHS ANNUAL IMPLEMENTATION PLAN 2025.pdf
PDF
Race Reva University – Shaping Future Leaders in Artificial Intelligence
PPTX
Education and Perspectives of Education.pptx
PDF
Vision Prelims GS PYQ Analysis 2011-2022 www.upscpdf.com.pdf
PDF
English Textual Question & Ans (12th Class).pdf
PDF
Environmental Education MCQ BD2EE - Share Source.pdf
PPTX
Introduction to pro and eukaryotes and differences.pptx
PDF
BP 505 T. PHARMACEUTICAL JURISPRUDENCE (UNIT 1).pdf
PPTX
DRUGS USED FOR HORMONAL DISORDER, SUPPLIMENTATION, CONTRACEPTION, & MEDICAL T...
PPTX
A powerpoint presentation on the Revised K-10 Science Shaping Paper
PDF
LIFE & LIVING TRILOGY - PART - (2) THE PURPOSE OF LIFE.pdf
PDF
Journal of Dental Science - UDMY (2021).pdf
PPTX
ELIAS-SEZIURE AND EPilepsy semmioan session.pptx
PPTX
Unit 4 Computer Architecture Multicore Processor.pptx
PDF
LEARNERS WITH ADDITIONAL NEEDS ProfEd Topic
PDF
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
PDF
LIFE & LIVING TRILOGY - PART (3) REALITY & MYSTERY.pdf
PDF
IP : I ; Unit I : Preformulation Studies
Journal of Dental Science - UDMY (2022).pdf
Cambridge-Practice-Tests-for-IELTS-12.docx
FOISHS ANNUAL IMPLEMENTATION PLAN 2025.pdf
Race Reva University – Shaping Future Leaders in Artificial Intelligence
Education and Perspectives of Education.pptx
Vision Prelims GS PYQ Analysis 2011-2022 www.upscpdf.com.pdf
English Textual Question & Ans (12th Class).pdf
Environmental Education MCQ BD2EE - Share Source.pdf
Introduction to pro and eukaryotes and differences.pptx
BP 505 T. PHARMACEUTICAL JURISPRUDENCE (UNIT 1).pdf
DRUGS USED FOR HORMONAL DISORDER, SUPPLIMENTATION, CONTRACEPTION, & MEDICAL T...
A powerpoint presentation on the Revised K-10 Science Shaping Paper
LIFE & LIVING TRILOGY - PART - (2) THE PURPOSE OF LIFE.pdf
Journal of Dental Science - UDMY (2021).pdf
ELIAS-SEZIURE AND EPilepsy semmioan session.pptx
Unit 4 Computer Architecture Multicore Processor.pptx
LEARNERS WITH ADDITIONAL NEEDS ProfEd Topic
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
LIFE & LIVING TRILOGY - PART (3) REALITY & MYSTERY.pdf
IP : I ; Unit I : Preformulation Studies
Ad

random walk lectures by p pj class notes

  • 2. 2 ● Requirement for a probabilistic approach to physics ● Examples of random walk in physical systems ● Random walk in one dimension ● Physical interpretation and definition of probability ● Probability for large N
  • 3. 3 Single-particle mechanics that is from – Differential equations such as Newtons equations of motion Schrodinger wave equation Address problem of mechanics in the single-particle level They predict future position and momentum and wave function from initial conditions Successful in predicting properties of system where there is mild coupling – interaction between them are limited Methods of statistical mechanics and random walk ⃗ F=m⃗ a iℏ ∂ ψ(x ,t) ∂t =− ℏ 2m ∂2 ψ(x ,t) ∂ x2 +V (x)ψ(x ,t) Statistical mechanics apply to statistical methods to arrive at useful properties of many-body systems. Typical examples of many-body systems are - Atoms, molecules, macro-molecules Fundamental particles photons, phonons, etc
  • 4. 4 ● Requirement for a probabilistic approach to physics ● Examples of random walk in physical systems ● Random walk in one dimension ● Physical interpretation and definition of probability ● Probability for large N
  • 5. 5 Now look at collective properties of a collection of particles Example – Gas in a chamber position and momentum List of useful properties are Temperature Pressure Heat capacity Compressibility 1 N {r1, r2,.... ,rN ; p1, p2,.... , pN } T P χv Cv In equilibrium it is possible to obtain thermodynamics quantities from fundamental properties. Typically the thermodynamic variables are obtained directly from experiments
  • 6. 6 1 N {r1, r2,.... ,rN ; p1, p2,.... , pN } Consider a scenario of system evolve in a chamber of volume V V Let total energy sum energy of each particle E=∑ i=0 N ei Each increase or decrease and fluctuate around ei ⟨ei⟩=E/N
  • 7. 7 1 N {r1, r2,.... ,rN ; p1, p2,.... , pN } V ⟨ei⟩=E/N Number of particles or time -at discrete intervals N t {t1, t2,.... ,tN } at time {e1, e2,.... ,eN } {e1, e2,.... ,eN } then energy discrete sampling
  • 8. 8 ● Requirement for a probabilistic approach to physics ● Examples of random walk in physical systems ● Random walk in one dimension ● Physical interpretation and definition of probability ● Probability for large N
  • 9. 9 All variables of constituent particles contained in the volume in a chamber execute random variation of variable such as V {r1, r2,.... ,rN ; p1, p2,.... , pN } Random walk is completely unpredictable Example of random walk is outcome of toss of coin in tosses Random walk and probability Methods of statistical mechanics and random walk N HTTTHHHTT 1 2 3 2 4 5 6 7 8 9 1 H=1,T=0
  • 10. 10 Second toss have head is highly unpredictable In coin toss experiment the nature of the walk changes - When tossing is repeated N →∞, P(H)=P(T )= 1 2 Diffusion is the probability model for random walks repeated many times Probability profile of this walk many walkers in ink spreading in water is given below Ink spreading in a bottle of water by diffusion- black lines are path of random walkers t1<t2<t3<t4 Line of equal probability where concentration is constant – different circles give a probability to find at least a particle
  • 11. 11 A simple case of random walk is in one-dimension HTTTHHHTT Let is the right move H=→ Let is the left move move T=← ← HTTTHHHTT → ← ← →→→ ←← walker final destination A single walk generated from coin toss experiment A single walk of is not repeated again in another set of coin toss experiment Similar to diffusion in a liquid – either large number of experiments are repeated and averaged to find probability of an event Probability of an event is reproducible, while individual random walks are not
  • 12. 12 Probability and random walk in one dimension Random walk in one dimension is one of the simplest problem that can introduce methods of probability theory that is applicable to large class of problems Let us start with most ideal case walker moves either right or left in a perfectly random fashion, that is with probability p and q. Consider a single particle executes random walk along a line starting from the origin Let each step of the walker be of length , let the total number of steps be , in such a travel the total distance traveled is This is in terms of (fundamental) length The integer satisfies the condition since maximum length a walker could have traveled in positive direction or in negative direction is 0 1 2 -1 -2 l N x=ml l −N≤m≤N m N
  • 13. 13 Let total number of right steps And let the total number of left steps Then total number of steps The net displacement of the traveler As total left and right moves are related to Let all steps of the moves are statistically independent (decision on a next move does not depend on previous move) Let move to the right is taken with a probability then probability of moving to the left is given by The probabilities for right steps and left steps is given by The number of distinct possibilities in which this probability is achieved is 0 1 2 -1 -2 l x=ml −N≤m≤N n1+ n2=N m=n1−n2 N n1 n2 n2=N−n1 p q=1−p n1 n2 p n1 q n2 N ! n1 !n2 ! In a typical random walk q+ p=1
  • 14. 14 Consider one step random walk – then distribution of probability is P(→)+P(←)=p+q (P(→)+P(←))2 =( p+q)2 =1 Consider two step random walk – then distribution of probability is =P2 (→)+P(→)P(←)+P(←)P(→) ⏟+P2 (←) multiplicity redistribution of same probability ( p+q)0 =1 When particle is at the origin 0 step Probability is calculated after repeating the experiments many times In a typical unbiased coin toss experiment – one has to repeat in many times to get ideal probability ½ for the head and the tail No of time tail/head obtained Total number of trials
  • 15. 15 For large number of steps the probability re-distribute – for 3 =( p3 +3 p2 q+3 p q2 +q3 ) multiplicity (P(→)+P(←))3 =( p+q)3 =1 for N - we get the binomial series ( p+q) N =∑ n=0 N N ! n!(N−n)! p n q N−n ← → ← ← →→→ ←← walker final destination Total number of ways steps can be Arranged among them selves Total number of way for left or right step – with re arrangement make same multiplicity – therefore must be removed from by division N !
  • 16. 16 Total probability that in a random walk of steps there are right steps is given by This distribution can be easily be identified as expansion term in the binomial series, this is given by The probability to obtain total displacement is same as that of the probability of making displacement towards right It is more meaningful to express the probability in terms of the total displacement, by change of variables N n1 W N (n1)= N ! n1 !n2 ! p n1 q n2 WN (n1) (p+q)N =∑ n=0 N N ! n!(N−n)! pn qN−n m n1 n1= 1 2 (N +m) n2= 1 2 (N −m) PN (m)= N ! [(N+m)/2]![(N−m)/2]! p[(N +m)/2] q[(N−m)/2] n1+n2=N m=n1−n2 n2=N−n1
  • 17. 17 WN (n1)=PN (m)= N ! [(N+ m)/2]![(N−m)/2]! (1 2 ) N For unbiased random walk What does this mean and what various quantities represent p=q= 1 2 n1=0 n1=N WN (n1)=PN (m) multiplicity increases in the middle where probability increase
  • 19. 19 ● Requirement for a probabilistic approach to physics ● Examples of random walk in physical systems ● Random walk in one dimension ● Physical interpretation and definition of probability ● Probability for large N
  • 20. 20 Random variables definition and few examples Random variable is an outcome of a experiment that produces random values Then we can introduce a random variable which is a function of . A particular example is identity function, , which is given by Examples of a sample space: 1) outcome of the throw of a dice The random variables can have six possibilities 2) results of flipping of a coin The random variable have two possibilities 3) Outcome of a random walk of walker for a total steps of Here random variables can have possibilities Example of such random walk is the change in velocity of a particular molecule in a system of particles that are at equilibrium after a time of u F u u X u=u X u N 2N t
  • 21. 21 Let are the out come of a random variable Respective probabilities For dice experiment Average value of faces of dice In compact form in general When probability is normalized Therefore any function of the random variable u={u1, u2 ......um} {P(u1), P(u2)...... P(um)} {P(u1), P(u2)...... P(um)}= 1 6 ⟨u⟩= n1×1+n2×2+n3×3+n4×4+n5×5+n6×6 n1+n2+n3+n4+n5+n6 =3.5 ⟨u⟩= ∑ i=0 N P(ui)ui ∑ i=0 N P(ui) ∑ P(ui)=1 ⟨f (u)⟩= ∑ P(ui)f (ui) ∑ P(ui) {P(u1), P(u2)...... P(um)}= 1 6 ⟨u⟩= 1/6×1+1/6×2+1/6×3+1/6×4+1/6×5+1/6×6 1/6+1/6+1/6+1/6+1/6+1/6 =3.5 ⟨u⟩= ∑ i=0 N ni ui ∑ i=0 N ni ⟨u⟩=∑ i=0 N P(ui)ui P(ui)= ni ∑ i=0 N ni From experiment From assumed possibilities
  • 22. 22 Other notable properties of the averages of random variables are – sum of averages of two functions are equal to average of their sum When a function is multiplied with a constant and averaged is equivalent to the average of the function multiplied by the same constant In a probability distribution the mean values are very useful, simplest of them is mean This is the central value around which all values are distributed. The deviation from the central value is Another useful average is second moment of about its mean, known as dispersion This give the spread of the distribution around the mean – it may be expressed as as this quantity is always positive ; the may be generalized to get 〈f (u)+ g(u)〉=〈f (u)〉+ 〈g(u)〉 〈c f (u)〉=c〈f (u)〉 〈u〉  u=u−〈u〉 〈 u〉=0 u ⟨(Δu)2 ⟩=∑ i=1 M P(ui)(ui−⟨u⟩)2 ≥0 〈(u−〈u〉)2 〉=〈u2 〉−〈u〉2 〈u2 〉≥〈u〉2 〈 un 〉 ⟨u⟩=∑ i=0 N P(ui)ui
  • 23. 23 Mean of the distribution 〈f u〉 N f (u)=u ⟨(u−⟨u⟩)2 ⟩=⟨u2 ⟩−⟨u⟩2 Dispersion is the root of Typical variation in random variables whose outcome in N trials
  • 24. 24 Blue is a distribution with all values are at mean Black is a distribution with mean same as blue with different dispersion Red is a distribution with mean same as blue, and black but third moment is different – asymmetrical with respect to mean Green is a distribution with mean same as blue, black and red differ in all higher moments f u=u N Distributions that differs in moments
  • 25. 25 Mean values of random walk problem The probability distribution is given by as The normalization condition can be verified as follows, in the binomial series expansion where Mean number of steps to right is obtained by the relation Evaluation of this sum is non-trivial, but possible from the relation substituting this relation we get W N (n1)= N ! n1 !n2 ! p n1 q n2 W N n1= N ! n1!N−n1! p n1 q N−n1 n2=N−n1 ∑ i=1 M Pui=1  pqN =∑ n=0 N N ! n1!N−n1! p n1 q N−n1 =1N =1 1=p+q ⟨f (u)⟩=∑ i=1 M f (ui)P(ui) 〈n1〉=∑ i=1 M n1W n1=∑ n=0 N n1 N ! n1 !N −n1! p n1 q N−n1 n1 p1 n =p ∂ ∂ p p n1 ⟨f (u)⟩= ∑ P(ui) f (ui) ∑ P(ui) p=q
  • 26. 26 substituting this relation we get By inter changing the order of summation and differentiation Using binomial expansion Using equation This is physically meaningful since it shows average number of right steps is equal to probability to move toward right with total number of steps ⟨n1⟩=∑ i=1 M n1 W (n1)=∑ n=0 N n1 N ! n1 !(N−n1)! p n1 q N−n1 =∑ n=0 N N ! n1 !(N −n1)! p ∂( pn1 ) ∂ p q N−n1 =p ∂ ∂ p [∑ n=0 N N ! n1 !(N−n1)! pn1 qN−n1 ] =p ∂ ∂ p [∑ n=0 N N ! n1 !(N−n1)! pn1 qN−n1 ] =p ∂ ∂ p (p+q) N =N p(p+q)N−1 ⟨n1⟩=N p 1=p+q
  • 27. 27 Similarly the average left steps is given by Sum of the average move on left and right Net displacement of the particles If both probabilities of left and right moves are equal then net displacement is zero Dispersion of the random walk The dispersion of the random walk is given by we have Now we have to evaluate In terms of the sums Now using the relation ⟨n2⟩=N q ⟨n1⟩+⟨n2⟩=N ⟨m⟩=N (p−q) ⟨(Δn1)2 ⟩ ⟨(Δn1)2 ⟩=⟨(n1−⟨n1⟩)2 ⟩=⟨n1 2 ⟩−⟨n1⟩2 ⟨n1⟩=N p 〈n1 2 〉 ⟨n1 2 ⟩=∑ i=1 M n1 2 W (n1)=∑ n=0 N n1 2 N ! n1 !(N −n1)! p n1 q N−n1 n1 2 p1 n =n1 p ∂ ∂ p pn1 = (p ∂ ∂ p) 2 pn1 =∑ n=0 N N ! n1 !(N −n1)! (p ∂ ∂ p ) 2 p n1 q N−n1
  • 28. 28 ⟨n1 2 ⟩=∑ n=0 N N ! n1!(N−n1)! (p ∂ ∂ p ) 2 p n1 q N−n1 =(p ∂ ∂ p ) 2 ∑ n=0 N N ! n1 !(N−n1)! p n1 q N−n1 = (p ∂ ∂ p ) 2 (p+q)N = (p ∂ ∂ p )[ pN (p+q)N−1 ] =p[N (p+q)N−1 + pN (N−1)( p+q)N−2 ] q+ p=1 =p[N + pN (N−1)] =Np[1+ pN−p] =Np[q+ pN ] =Npq+(Np)2 =Npq+⟨n1⟩2 ⟨n1⟩=N p ⟨(Δn1)2 ⟩=⟨(n1−⟨n1⟩)2 ⟩=⟨n1 2 ⟩−⟨n1⟩2 ⟨(Δn1)2 ⟩=⟨n1 2 ⟩−⟨n1⟩2 =Npq Therefore the dispersion is given by
  • 29. 29 m=n1−n2=2n1−N  m=m−〈m〉=2n1−N −2〈n1〉−N=2n1−〈n1〉=2 n1  m2 =4 n12 〈 m2 〉=4〈 n12 〉=4 N pq p=q= 1 2 〈 m2 〉=N The dispersion of the displacement may also be calculated Difference between average net displacement and instantaneous net displacement Taking averages both sides For an unbiased random walk The root mean square deviation is given by Δ∗ n1=⟨(Δn1)2 ⟩1/2 It is a linear measure of width of the distribution The relative width of this distribution is Δ∗ n1 ⟨n1⟩ = √Npq Np = √q √Np = 1 √N when q=p=1/2 N Width of the distribution reduces
  • 30. 30 ● Requirement for a probabilistic approach to physics ● Examples of random walk in physical systems ● Random walk in one dimension ● Physical interpretation and definition of probability ● Probability distribution for large N
  • 31. 31 Probability distribution for large N For large N the the binomial probability distribution has a pronounced or dominant maximum around . The value of this function decreases from the maximum. It is useful to find an expression for for large Near the region where the maximum occurs where is also large, the change in the distribution is characterized by we consider the as continuous function of , now it is permissible to take derivative of this function, also The derivatives are evaluated very near to the maximum, represented by the coordinates When the change is sufficiently small it is possible to expand the function in the Taylor series around the point . If we choose to expand, instead of the original function, the logarithm of this function. For example an approximate expression valid near for the function A direct Taylor series expansion gives |W (n1+1)−W (n1)|≪W (n1) W n1 W n1 n1=  n1 n1 W (n1) N n1 dW (n1) d n1 =0 d lnW (n1) d n1 =0 n1=~ n1+η  〈n1〉 y≪1 f =1 y−N f =1−Ny 1 2 N N1 y 2. ..... N
  • 32. 32 f =1−Ny+ 1 2 N (N +1) y 2...... Ny>1 For large N Therefore the series does not converge to a value therefore difficult to truncate Now we take logarithm of this function and try an expansion Here the power of N does not grow, and moreover we get an expression This expression is valid near Now expanding the probability in the Taylor series where Since derivative evaluated near the maximum the function has the following properties This can be explicitly stated as ln f =−N ln(1+ y) ln f =−N ( y− 1 2 y 2 +....) f =e −N( y− 1 2 y 2 +....) y≤1 lnW (n1) lnW (n1)=lnW (~ n1)+B1 η+ 1 2 B2 η 2 + 1 6 B3 η 3 +... Bk= dk lnW (~ n1) d n1 k B1=0 B2>0 B2=−|B2|
  • 33. 33 Now the function near the peak be written as The probability distribution near the peak may be written as For sufficiently small higher order terms in may be neglected In order to have explicit look at the expression of the derivative we may write starting from the expression the binomial probability distribution Logarithm of this expression is For differentiating we need approximate expression for differential of a factorial This formula may also be obtained from the Stirling's approximation ~ W = ~ W (~ n1) W (n1)= ~ W e − 1 2 |B2|η 2 + 1 6 B3 η 3 η  W (n1)= ~ W e − 1 2 |B2|η 2 W N (n1)= N ! n1 !(N−n1)! p n1 q N−n1 ln W N (n1)=ln N !−lnn1 !−ln(N−n1)!+n1 ln p+N−n1 ln q d ln n! d n ≃ ln(n+1)!−lnn! 1 =ln (n+1)! n! =ln(n+1) d ln n! d n ≃lnn n≫1
  • 34. 34 Using this equation The first derivative is zero near maximum On further differentiation Evaluating this expression near Is negative as required by On further differentiation it is possible to show that higher order terms are can indeed be neglected d ln W N (n1) d ln n1 =−ln n1+ln(N−n1)+ln p−ln q ln [(N −~ n1) ~ n1 p q ]=0 (N−~ n1) p=~ n1 q ~ n1=N p 1=p+q d2 lnW N (n1) d lnn1 2 =− 1 n1 − 1 (N−n1) n1=~ n1 B2=− 1 N p − 1 (N−N p) =− 1 N pq [(N−~ n1) ~ n1 p q ]=1 lnW N (n1)=ln N !−lnn1 !−ln(N−n1)!+n1 ln p+N−n1 ln q n1=~ n1 d ln n! d n ≃lnn
  • 35. 35 d2 lnW N (n1) d lnn1 2 =− 1 n1 − 1 (N−n1) It is good to look at higher order terms = 1 N 2 p 2 − 1 (N −Np) 2 d3 lnW N (n1) d ln n1 3 = 1 n1 2 − 1 (N−n1) 2 = 1 N 2 p 2 − 1 N 2 q 2 = q 2 − p 2 N 2 p 2 q 2 ≪− 1 N p q = d lnW N (n1) d lnn1 At large N – higher order terms are safely ignored
  • 36. 36 The value of the constant in the probability distribution can be evaluated assuming the variable is quasi-continuous variable. The summation of the probabilities may be replaced by an integral For large the probability distribution has negligible contribution away from the maximum Therefore the probability distribution can be written finally as Using the expression alternatively we may write this expression as ∑ n1=0 N W (n1)≃∫W (n1)dn1=∫ −∞ ∞ W (~ n1+η)d η=1 ~ W ∫ −∞ ∞ e − 1 2 |B2|η 2 d η=1 W (n1)= ~ W e − 1 2 |B2|η 2 ~ W √2π |B2| =1 W (n1)= √|B2| 2π exp(− 1 2 |B2|η 2 )= √|B2| 2π exp(− 1 2 |B2|(n1−~ n1) 2 ) N B2=− 1 N pq W (n1)= √ 1 2π N pq exp(−(n1−N p)2 2 N p q ) n1=~ n1+η = √ 1 2π⟨(Δ n1)2 ⟩ exp(−(n1−~ n1)2 2N pq ) √⟨(Δn1)2 ⟩=√N pq ∫ −∞ ∞ due−au 2 +bu = √ π a e b 2 4 a
  • 37. 37 The Gaussian probability distribution The expression for probability distribution may be rearranged to get probability distribution for net displacement Number of right steps to obtain a net displacement of From the formula the variation in is in the units of If we take the variation in as infinitesimally small we can consider the distribution as continuous. It also varies in even numbers, this fact is irrelevant when total number of steps Transforming in terms of continuous variable using the relation , is the length in each step W (n1)= √ 1 2π N p q exp(−(n1−N p)2 2N pq ) m m n1 Pm P(m)=W (N +m 2 )= √ 1 2π N pq exp(−(m−N ( p−q)) 2 8 N p q ) n1= 1 2 (N +m) n1−N p= 1 2 (N +m−2N p)= 1 2 [m−N (p−q)] m=2n1−N m Δm=±2 m x=ml l N ∞
  • 38. 38 ● Requirement for a probabilistic approach to physics ● Examples of random walk in physical systems ● Random walk in one dimension ● Physical interpretation and definition of probability ● Probability distribution for large N ● Gaussian probability distribution ● Generalization of the results to many variables and unequal random walk
  • 40. 40 As the probability distribution becomes much larger, therefore variation for the probability distribution for adjacent values of negligible, this means For such distributions the variation in may be replaced with and change in the variation may be written as The transformation to continuous variable can be achieved by the following steps, using the assumption that is same for a small region till next value of probability corresponding to occurs. Is the probability density and is independent of the magnitude of Therefore the final probability density may be obtained from the relation As Where The final expression is the standard Gaussian probability distribution N ∞ Pm m |P(m+2)−P(m)|≪P(m) m x x+dx Pm m2 ρ(x)dx=P(m) dx 2l ρ(x) d x ρ(x)= P(m) 2l = 1 √2 πσ exp (−(x−μ)2 2σ2 ) = p−qN l =2 N pq l P(m)= √ 1 2π N p q exp(−(m−N (p−q)) 2 8 N pq ) x=ml
  • 41. 41 We generalize the assumptions used to derive expression for Gaussian distribution to many natural process and say that for all random walks that involve large number of steps results in a Gaussian distribution Since Gaussian distribution is a representation of the probability the integral on all range of the distribution must yield one Using standard integral of a Gaussian Note: the standard result The constant of the Gaussian distribution can be identified from properties of the distribution The mean of the distribution is given by Let x= 1 2 exp −x−2 22  ∫ −∞ ∞ xdx= 1 2  ∫ −∞ ∞ exp −x−2 22 dx=1 ∫ −∞ +∞ exp(−ax 2 )dx=√π/a 〈x〉=∫ −∞ ∞ x xdx ∫ −∞ +∞ xρ(x)dx= 1 √2 πσ ∫ −∞ +∞ xexp (−(x−μ)2 2σ2 )dx=1 y=x− = 1 √2πσ ∫ −∞ +∞ y exp (− y2 2σ 2 )dy+μ ∫ −∞ +∞ exp (− y2 2σ 2 )dy ∫ −∞ ∞ due−au 2 +bu = √ π a e b 2 4a
  • 42. 42 Now the integral yields Note that this shows th since the Gaussian distribution is symmetric around peak of the distribution the average value is the value at the peak. Now the dispersion of the distribution is given by = 1 √2πσ [∫ −∞ +∞ yexp (− y2 2σ 2 )dy+μ ∫ −∞ +∞ exp (− y2 2σ 2 )dy ] 0 1 〈x〉= 〈x〉= 〈x− 2 〉=∫ −∞ ∞ x− 2 xdx
  • 43. 43 〈x− 2 〉=∫ −∞ ∞ x− 2 xdx Now the integral yields Let Now we have the results for mean and root mean square deviation as which is same from the results obtained from binomial distribution = 1 2 ∫ −∞ ∞ x− 2 exp −x−2 22 dx = 1 √2πσ ∫ −∞ +∞ y 2 exp (− y2 2σ 2 )dy y=x−μ = 1 2 [ 2 22  3 2 ]=2 〈 x2 〉=4 N p q l2 〈x〉=N p−ql μ=( p−q)N l σ=2√N pql = −1 √2πσ ( ∂ ∂ a )∫ −∞ +∞ exp(−a y 2 )dy dy=dx = −1 √2πσ ( ∂ ∂ a )(π 1/2 /a 1/2 ) = −1 √2πσ π 1/2 2 (−1)a −3/2 = 1 √2πσ π 1/2 2 (1/2σ 2 ) −3/2 = 1 √2πσ π 1/2 2 (1/2σ 2 ) −3/2 ∫ −∞ +∞ exp(−ax 2 )dx=√π/a
  • 44. 44 ● Requirement for a probabilistic approach to physics ● Examples of random walk in physical systems ● Random walk in one dimension ● Physical interpretation and definition of probability ● Probability distribution for large N ● Gaussian probability distribution ● Generalization of the results to many variables and unequal random walk
  • 45. 45 General discussion on the random walks The random-walk we discussed so far is in one-dimension, which need to be generalized to many dimension – discrete to continuous steps to apply to real world problems The method of analysis we have used is called combinatorial analysis (approach based on how number of way different things are arranged) Now we may generalize these methods for steps of variable length with use of multiple variables u={u1, u2 ......um} v={v1, v2 ......vm} Two set of random variables P(ui ,v j) Probability of finding and vj ui ∑ i=1 M ∑ j=1 N P(ui ,v j)=1 The normalization condition for the probability distribution
46
Example: a simultaneous throw of a die and toss of a coin. The sample space has two possibilities for the coin and six for the die:
$$(H,1),(H,2),(H,3),(H,4),(H,5),(H,6)$$
$$(T,1),(T,2),(T,3),(T,4),(T,5),(T,6)$$
The joint probability of getting a head together with a value of 2 is
$$P(H,2) = \tfrac{1}{2}\times\tfrac{1}{6} = \tfrac{1}{12},$$
and indeed $P(u_i,v_j) = 1/12$ for every outcome. Note that throwing the die and flipping the coin are independent events.
The marginal probability
$$P_u(u_i) = \sum_{j=1}^{N} P(u_i, v_j)$$
gives the probability that $u$ assumes the value $u_i$ irrespective of the value of $v$:
$$P_u(H) = 6/12 = 1/2, \qquad P_v(2) = 2/12 = 1/6.$$
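A short enumeration of this sample space is sketched below; the dictionary-based layout is an illustrative choice.

```python
# Minimal sketch: enumerate the coin-and-die sample space, then recover
# the normalization and the two marginals by summing out a variable.
from itertools import product

coins, faces = ("H", "T"), (1, 2, 3, 4, 5, 6)
P = {(c, f): (1/2) * (1/6) for c, f in product(coins, faces)}

print(sum(P.values()))                      # normalization -> 1.0
print(sum(P[("H", f)] for f in faces))      # P_u(H) -> 1/2
print(sum(P[(c, 2)] for c in coins))        # P_v(2) -> 1/6
```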
47
The coin-and-die example is a special case in which both variables are statistically independent of each other:
$$P(u_i, v_j) = P_u(u_i)\,P_v(v_j).$$
Averages are defined as for the probability distribution of a single random variable:
$$\langle F(u,v)\rangle = \sum_{i=1}^{M}\sum_{j=1}^{N} P(u_i,v_j)\,F(u_i,v_j).$$
When the function depends on only one of the variables, the other probability can be summed out:
$$\langle f(u)\rangle = \sum_{i=1}^{M}\sum_{j=1}^{N} P(u_i,v_j)\,f(u_i) = \sum_{i=1}^{M} P_u(u_i)\,f(u_i).$$
Averages are linear,
$$\langle F(u,v)+G(u,v)\rangle = \langle F(u,v)\rangle + \langle G(u,v)\rangle,$$
which is easily proved by direct expansion of the sum. If the variables are statistically independent, the average of a product can also be factorized:
$$\langle f(u)g(v)\rangle = \sum_{i=1}^{M}\sum_{j=1}^{N} P(u_i,v_j)\,f(u_i)\,g(v_j) = \sum_{i=1}^{M}\sum_{j=1}^{N} P_u(u_i)f(u_i)\,P_v(v_j)g(v_j).$$
48
$$\langle f(u)g(v)\rangle = \sum_{i=1}^{M} P_u(u_i)f(u_i)\sum_{j=1}^{N} P_v(v_j)g(v_j) = \langle f(u)\rangle\,\langle g(v)\rangle.$$
The average of the product is equal to the product of the averages. These results can be generalized to more than two variables.
Another generalization is to continuous probability distributions. Consider a variable $u$ in the continuous range $a_1 < u < a_2$. The probability of finding the variable between $u$ and $u+du$ is proportional to $du$,
$$P \propto du, \qquad \text{written as}\quad \rho(u)\,du,$$
where $\rho(u)$ is the probability density, independent of $du$.
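A Monte Carlo sketch of the factorization above is given below; the distributions of $u$ and $v$ and the functions $f$ and $g$ are arbitrary illustrative choices.

```python
# Minimal sketch: for independent u and v, <f(u)g(v)> = <f(u)><g(v)>
# up to sampling error. Distributions and functions are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
u = rng.uniform(0.0, 1.0, 1_000_000)    # samples of u
v = rng.normal(2.0, 0.5, 1_000_000)     # samples of v, drawn independently
f, g = np.sin(u), v**2

print(np.mean(f * g))                   # <f(u) g(v)>
print(np.mean(f) * np.mean(g))          # <f(u)> <g(v)>
```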
49
The continuous variable can be converted to a discrete one by subdividing the range into intervals $\delta u \ll du$; within an interval $\delta u$ the probability does not vary appreciably. The properties of a discrete random variable then carry over to the continuous case:
$$\sum_{i=1}^{M} P(u_i) = 1 \quad\longrightarrow\quad \int_{a_1}^{a_2}\rho(u)\,du = 1.$$
Average properties can be computed in the same way:
$$\langle f(u)\rangle = \sum_{i=1}^{M} f(u_i)\,P(u_i) \quad\longrightarrow\quad \langle f(u)\rangle = \int_{a_1}^{a_2} f(u)\,\rho(u)\,du.$$
50
A continuous probability distribution in two variables can be related to the discrete probability in the same way as the Gaussian distribution of the 1d random walk, $\rho(x)\,dx = P(m)\,dx/2l$:
$$\rho(u,v)\,du\,dv = P(u,v)\,\frac{du}{\delta u}\,\frac{dv}{\delta v},$$
where $(du/\delta u)(dv/\delta v)$ is the number of infinitesimal cells of magnitude $\delta u\,\delta v$ contained between $u$ and $u+du$ and between $v$ and $v+dv$. The normalization condition is then given by
$$\int_{a_1}^{a_2}\int_{b_1}^{b_2}\rho(u,v)\,du\,dv = 1,$$
with the averaging property
$$\langle F(u,v)\rangle = \int_{a_1}^{a_2}\int_{b_1}^{b_2} F(u,v)\,\rho(u,v)\,du\,dv.$$
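A numeric sketch of the two-variable normalization and averaging follows; the separable density $\rho(u,v) = (2u)(3v^2)$ on the unit square is an arbitrary choice.

```python
# Minimal sketch: check the 2d normalization and an average numerically
# for an assumed density rho(u,v) = (2u)(3v^2) on the unit square.
import numpy as np

du = dv = 1e-3
u = np.arange(0.0, 1.0, du) + du/2      # midpoints of the u cells
v = np.arange(0.0, 1.0, dv) + dv/2      # midpoints of the v cells
U, V = np.meshgrid(u, v)
rho = (2*U) * (3*V**2)

print((rho * du * dv).sum())            # normalization -> 1.0
print((U * rho * du * dv).sum())        # <u> -> 2/3
```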
51
Mean values of a general random walk
Most random walks of real microscopic particles have steps of different lengths in each fixed time interval. This means the fixed-step-length results must be extended to variable step lengths before they can be applied to natural systems.
Let $s_i$ be the displacement in the $i$-th step. For example, a gas molecule in a 1d channel displaces according to its velocity, which is a random number depending on the temperature; the velocity distribution is the Maxwell-Boltzmann distribution, which we will derive later in this course.
In general, $w(s_i)\,ds_i$ is the probability that the displacement of step $i$ lies in the interval between $s_i$ and $s_i+ds_i$: the fixed-size left/right steps are replaced by a continuous distribution of step sizes. The steps are independent: $w(s_i)$ is independent of $s_{i+1}$.
[Figure: left-right fixed-size steps compared with a continuous distribution of step sizes.]
52
Let the probability distribution $w(s_i)$ be the same for all steps. As for walks with fixed-size steps, the total displacement in $N$ steps is
$$x = \sum_{i=1}^{N} s_i.$$
Using the linearity of averages, $\langle f(u)+g(u)\rangle = \langle f(u)\rangle + \langle g(u)\rangle$,
$$\langle x\rangle = N\langle s\rangle, \qquad \text{where}\quad \langle s\rangle = \int ds\; s\,w(s)$$
is the mean displacement per step. The dispersion is
$$\langle(\Delta x)^2\rangle = \langle(x - \langle x\rangle)^2\rangle, \qquad x - \langle x\rangle = \sum_i\left(s_i - \langle s_i\rangle\right), \quad\text{i.e.}\quad \Delta x = \sum_i \Delta s_i.$$
53
$$\langle(\Delta x)^2\rangle = \Big\langle\sum_i \Delta s_i \sum_j \Delta s_j\Big\rangle = \Big\langle\sum_i (\Delta s_i)^2 + \sum_i\sum_{j\neq i}\Delta s_i\,\Delta s_j\Big\rangle.$$
The deviation $\Delta s_i$ from the mean is symmetric about zero; when one step is independent of another (statistical independence), the product $\Delta s_i\,\Delta s_j$ is also symmetric about zero, so the cross terms average to zero:
$$\Big\langle\sum_i\sum_{j\neq i}\Delta s_i\,\Delta s_j\Big\rangle = 0.$$
Therefore
$$\langle(\Delta x)^2\rangle = \sum_i\langle(\Delta s_i)^2\rangle = N\langle(\Delta s)^2\rangle.$$
54
The dispersion of the displacement per step,
$$\langle(\Delta s)^2\rangle = \int w(s)\,ds\,(\Delta s)^2,$$
is the square of the width of the single-step distribution, and the root-mean-square deviation
$$\langle(\Delta x)^2\rangle = N\langle(\Delta s)^2\rangle, \qquad \Delta x^* = \sqrt{\langle(\Delta x)^2\rangle}$$
is the width of the distribution of $x$.
When $\langle s\rangle \neq 0$, the mean $\langle x\rangle = N\langle s\rangle$ increases with $N$. In addition, the spread around the mean also increases with $N$: $\langle(\Delta x)^2\rangle^{1/2} = N^{1/2}\langle(\Delta s)^2\rangle^{1/2}$. The relative variation with respect to the mean is
$$\frac{\langle(\Delta x)^2\rangle^{1/2}}{\langle x\rangle} = \frac{N^{1/2}\langle(\Delta s)^2\rangle^{1/2}}{N\langle s\rangle}, \qquad \frac{\Delta x^*}{\langle x\rangle} = \frac{\Delta s^*}{\sqrt{N}\,\langle s\rangle}:$$
the relative width of the distribution decreases as $1/\sqrt{N}$ (see the simulation sketch below).
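A simulation sketch of this $1/\sqrt{N}$ narrowing follows; the exponential step distribution, with $\langle s\rangle = \langle(\Delta s)^2\rangle = 1$, is an arbitrary choice of a $w(s)$ with nonzero mean.

```python
# Minimal sketch: the relative width of x = sum(s_i) shrinks as 1/sqrt(N).
# Exponential steps (mean 1, variance 1) are an illustrative choice.
import numpy as np

rng = np.random.default_rng(1)
walks = 2_000                                   # number of independent walks
for N in (10, 100, 1000, 10000):
    s = rng.exponential(1.0, size=(walks, N))   # <s> = 1, <(Δs)^2> = 1
    x = s.sum(axis=1)                           # total displacement per walk
    print(f"N={N:6d}  Δx*/<x>={x.std()/x.mean():.4f}  1/sqrt(N)={N**-0.5:.4f}")
```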
55
The calculation of the probability distribution
Complete information about the random walk with differing step lengths is contained in the probability distribution of the total displacement
$$x = \sum_{i=1}^{N} s_i.$$
The probability of finding the displacement between $x$ and $x+dx$ after $N$ steps, for a particular sequence of steps, is the product of the probabilities of each step: between $s_1$ and $s_1+ds_1$ it is $w(s_1)\,ds_1$; between $s_2$ and $s_2+ds_2$ it is $w(s_2)\,ds_2$; and so on, up to $w(s_N)\,ds_N$ between $s_N$ and $s_N+ds_N$. Hence
$$P(x)\,dx = \int\!\cdots\!\int w(s_1)\,w(s_2)\,w(s_3)\cdots w(s_N)\,ds_1\,ds_2\,ds_3\cdots ds_N,$$
subject to the constraint
$$x < \sum_{i=1}^{N} s_i < x+dx.$$
Because of this constraint, direct evaluation of the integral is difficult.
56
The constraint
$$x < \sum_{i=1}^{N} s_i < x+dx$$
restricts the combined displacement to a window of width $dx$. One way to address this is to express the constraint in a mathematical form valid at a large number of steps, where we may take $dx \to 0$: at large $N$ the distribution becomes sharper around $x = N\langle s\rangle$, since its relative width decreases. The constraint can then be expressed as a delta function,
$$\delta\Big(x - \sum_{i=1}^{N} s_i\Big), \qquad \int\delta\Big(x - \sum_{i=1}^{N} s_i\Big)\,dx = 1.$$
57
Substituting $\int\delta\big(x - \sum_i s_i\big)\,dx = 1$ into the integral,
$$P(x)\,dx = \int_{-\infty}^{\infty}\!\cdots\!\int_{-\infty}^{\infty} w(s_1)\,w(s_2)\cdots w(s_N)\,\delta\Big(x-\sum_{i=1}^{N}s_i\Big)\,dx\,ds_1\,ds_2\cdots ds_N,$$
and removing $dx$ from both sides (over the small range of integration),
$$P(x) = \int_{-\infty}^{\infty}\!\cdots\!\int_{-\infty}^{\infty} w(s_1)\,w(s_2)\cdots w(s_N)\,\delta\Big(x-\sum_{i=1}^{N}s_i\Big)\,ds_1\,ds_2\cdots ds_N.$$
Substituting the integral representation of the delta function,
$$\delta\Big(x-\sum_{i=1}^{N}s_i\Big) = \frac{1}{2\pi}\int dk\; e^{\,ik\left(\sum_{i=1}^{N}s_i - x\right)},$$
gives
$$P(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty}\!\cdots\!\int_{-\infty}^{\infty} w(s_1)\,w(s_2)\cdots w(s_N)\int_{-\infty}^{\infty} dk\; e^{\,ik\left(\sum_{i=1}^{N}s_i - x\right)}\,ds_1\,ds_2\cdots ds_N.$$
58
Grouping each integral with its own variable,
$$P(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{-ikx}\int_{-\infty}^{\infty} ds_1\,w(s_1)e^{iks_1}\int_{-\infty}^{\infty} ds_2\,w(s_2)e^{iks_2}\cdots\int_{-\infty}^{\infty} ds_N\,w(s_N)e^{iks_N}.$$
Except for the first, all the integrals are identical; let
$$Q(k) = \int ds\, w(s)\, e^{iks}.$$
We finally arrive at the simple form
$$P(x) = \frac{1}{2\pi}\int dk\, e^{-ikx}\, Q^N(k).$$
For a very large number of steps this is simplified further.
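This inversion can be carried out numerically for a concrete $w(s)$. The sketch below assumes steps uniform on $[0,1]$, for which $Q(k) = \int_0^1 e^{iks}\,ds = e^{ik/2}\,\sin(k/2)/(k/2)$; with $N=2$ the result should be the triangular density peaking at $P(1)=1$.

```python
# Minimal sketch: evaluate P(x) = (1/2π) ∫ dk e^{-ikx} Q(k)^N numerically,
# assuming w(s) uniform on [0,1]. For N = 2 expect the triangular density.
import numpy as np

N, dk = 2, 1e-3
k = np.arange(-400.0, 400.0, dk)              # truncated k range
Q = np.exp(1j*k/2) * np.sinc(k / (2*np.pi))   # np.sinc(x) = sin(pi x)/(pi x)

for x in (0.5, 1.0, 1.5):
    Px = np.real(np.sum(np.exp(-1j*k*x) * Q**N)) * dk / (2*np.pi)
    print(f"x={x:.1f}  P(x)={Px:.3f}")        # expect ~0.5, ~1.0, ~0.5
```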
59
$$P(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\, e^{-ikx}\, Q^N(k), \qquad Q(k) = \int_{-\infty}^{\infty} ds\, w(s)\, e^{iks}.$$
When $k$ is small the integral contributes; when $k$ is large, neighbouring contributions of $e^{iks}$ cancel each other and the integral is negligible. To evaluate the integral, $e^{iks}$ is expanded in a Taylor series:
$$Q(k) = \int_{-\infty}^{\infty} ds\, w(s)\left[1 + iks - \tfrac{1}{2}(ks)^2 + \cdots\right].$$
With the values of the integrals
$$\langle s\rangle = \int_{-\infty}^{\infty} ds\, w(s)\, s, \qquad \langle s^2\rangle = \int_{-\infty}^{\infty} ds\, w(s)\, s^2$$
substituted,
$$Q(k) = 1 + ik\langle s\rangle - \tfrac{1}{2}k^2\langle s^2\rangle + \cdots$$
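The quality of this small-$k$ expansion can be checked for the same assumed uniform $w(s)$ on $[0,1]$, where $\langle s\rangle = 1/2$ and $\langle s^2\rangle = 1/3$.

```python
# Minimal sketch: compare the exact Q(k) for w(s) uniform on [0,1] with
# the expansion 1 + ik<s> - k^2<s^2>/2, using <s> = 1/2, <s^2> = 1/3.
import numpy as np

for k in (0.01, 0.1, 0.5):
    exact = np.exp(1j*k/2) * np.sinc(k / (2*np.pi))
    series = 1 + 1j*k*(1/2) - 0.5 * k**2 * (1/3)
    print(f"k={k}:  exact={exact:.6f}  series={series:.6f}")
```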
60
The moments
$$\langle s^n\rangle = \int ds\, w(s)\, s^n$$
are finite, since $|w(s)| \to 0$ as $|s| \to \infty$. For further simplification take the logarithm,
$$\ln Q^N(k) = N\ln\!\left[1 + ik\langle s\rangle - \tfrac{1}{2}k^2\langle s^2\rangle + \cdots\right],$$
and use the Taylor expansion $\ln(1+y) = y - \tfrac{1}{2}y^2$ for $y \ll 1$, ignoring terms beyond the quadratic:
$$\ln Q^N(k) = N\left[i\langle s\rangle k - \tfrac{1}{2}k^2\langle s^2\rangle - \tfrac{1}{2}\left(i\langle s\rangle k\right)^2 + \cdots\right]$$
$$= N\left[i\langle s\rangle k - \tfrac{1}{2}k^2\left(\langle s^2\rangle - \langle s\rangle^2\right) + \cdots\right] = N\left[i\langle s\rangle k - \tfrac{1}{2}k^2\langle\Delta s^2\rangle + \cdots\right],$$
where $\langle\Delta s^2\rangle = \langle s^2\rangle - \langle s\rangle^2$.
61
Exponentiating,
$$Q^N(k) = e^{\,iN\langle s\rangle k - \frac{1}{2}Nk^2\langle\Delta s^2\rangle + \cdots},$$
and substituting into the probability distribution $P(x) = \frac{1}{2\pi}\int dk\, e^{-ikx} Q^N(k)$:
$$P(x) = \frac{1}{2\pi}\int_{-\infty}^{\infty} dk\; e^{\,i\left(N\langle s\rangle - x\right)k - \frac{1}{2}Nk^2\langle\Delta s^2\rangle}.$$
This is a Gaussian integral of the standard form
$$\int_{-\infty}^{\infty} du\, e^{-au^2+bu} = \sqrt{\frac{\pi}{a}}\; e^{b^2/4a},$$
with $a = \tfrac{1}{2}N\langle\Delta s^2\rangle$ and $b = i\left(N\langle s\rangle - x\right)$, so
$$P(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\, e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad \mu = N\langle s\rangle, \quad \sigma^2 = N\langle\Delta s^2\rangle.$$
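A simulation check of this final result is sketched below; the exponential step distribution (mean 2, variance 4) is again an arbitrary choice of $w(s)$.

```python
# Minimal sketch: the sample mean and variance of x = sum(s_i) approach
# mu = N<s> and sigma^2 = N<Δs^2>. Exponential steps are illustrative.
import numpy as np

rng = np.random.default_rng(2)
N, walks = 500, 20_000
s = rng.exponential(2.0, size=(walks, N))   # <s> = 2, <Δs^2> = 4
x = s.sum(axis=1)

print(x.mean(), N * 2.0)    # sample mean     vs  mu = N<s>
print(x.var(),  N * 4.0)    # sample variance vs  sigma^2 = N<Δs^2>
```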
62
We have arrived at the Gaussian distribution for the more general steps of the random walk. The conditions are:
● all the steps must be statistically independent;
● the step distribution must fall off, $|w(s)| \to 0$ as $|s| \to \infty$, so that the moments are finite.
When these conditions are satisfied, the distribution of a sum of many random steps approaches a Gaussian as $N \to \infty$. This result is known as the central limit theorem, one of the most important results of probability theory.
63
References
F. Reif, Fundamentals of Statistical and Thermal Physics, Chapter 1.
M. Kardar, Statistical Physics of Particles, Chapter 2.