Probability Cheatsheet v2.0
Compiled by William Chen (http://wzchen.com) and Joe Blitzstein, with contributions from Sebastian Chiu, Yuan Jiang, Yuqi Hou, and Jessy Hwang. Material based on Joe Blitzstein's (@stat110) lectures (http://stat110.net) and Blitzstein/Hwang's Introduction to Probability textbook (http://bit.ly/introprobability). Licensed under CC BY-NC-SA 4.0. Please share comments, suggestions, and errors at http://github.com/wzchen/probability_cheatsheet.
Last Updated September 4, 2015
Counting
Multiplication Rule
[Figure: tree diagram for choosing a cake or waffle cone and one of three flavors (S, V, C), giving 2 · 3 = 6 possible outcomes.]
Let’s say we have a compound experiment (an experiment with
multiple components). If the 1st component has n1 possible outcomes,
the 2nd component has n2 possible outcomes, . . . , and the rth
component has nr possible outcomes, then overall there are
n1n2 . . . nr possibilities for the whole experiment.
Sampling Table
7
6
5
8
4
2
9
3
1
The sampling table gives the number of possible samples of size k out
of a population of size n, under various assumptions about how the
sample is collected.
                      Order Matters        Order Doesn't Matter
With Replacement      n^k                  (n + k − 1 choose k)
Without Replacement   n!/(n − k)!          (n choose k)
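For concreteness, a minimal R sketch evaluating the four sampling-table counts; n = 5 and k = 3 are arbitrary choices for illustration.

n <- 5; k <- 3
n^k                               # order matters, with replacement
choose(n + k - 1, k)              # order doesn't matter, with replacement
factorial(n) / factorial(n - k)   # order matters, without replacement
choose(n, k)                      # order doesn't matter, without replacement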
Naive Definition of Probability
If all outcomes are equally likely, the probability of an event A
happening is:
Pnaive(A) = (number of outcomes favorable to A) / (number of outcomes)
Thinking Conditionally
Independence
Independent Events A and B are independent if knowing whether
A occurred gives no information about whether B occurred. More
formally, A and B (which have nonzero probability) are independent if
and only if one of the following equivalent statements holds:
P (A ∩ B) = P (A)P (B)
P (A|B) = P (A)
P (B|A) = P (B)
Conditional Independence A and B are conditionally independent
given C if P (A ∩ B|C) = P (A|C)P (B|C). Conditional independence
does not imply independence, and independence does not imply
conditional independence.
Unions, Intersections, and Complements
De Morgan’s Laws A useful identity that can make calculating
probabilities of unions easier by relating them to intersections, and
vice versa. Analogous results hold with more than two sets.
(A ∪ B)^c = A^c ∩ B^c
(A ∩ B)^c = A^c ∪ B^c
Joint, Marginal, and Conditional
Joint Probability P (A ∩ B) or P (A, B) – Probability of A and B.
Marginal (Unconditional) Probability P (A) – Probability of A.
Conditional Probability P (A|B) = P (A, B)/P (B) – Probability of
A, given that B occurred.
Conditional Probability is Probability P (A|B) is a probability
function for any fixed B. Any theorem that holds for probability also
holds for conditional probability.
Probability of an Intersection or Union
Intersections via Conditioning
P (A, B) = P (A)P (B|A)
P (A, B, C) = P (A)P (B|A)P (C|A, B)
Unions via Inclusion-Exclusion
P (A ∪ B) = P (A) + P (B) − P (A ∩ B)
P (A ∪ B ∪ C) = P (A) + P (B) + P (C)
− P (A ∩ B) − P (A ∩ C) − P (B ∩ C)
+ P (A ∩ B ∩ C).
Simpson’s Paradox
[Figure: Dr. Hibbert vs. Dr. Nick, comparing success rates on heart surgeries and band-aid removals.]
It is possible to have
P(A | B, C) < P(A | B^c, C) and P(A | B, C^c) < P(A | B^c, C^c)
yet also P(A | B) > P(A | B^c).
Law of Total Probability (LOTP)
Let B1, B2, B3, ...Bn be a partition of the sample space (i.e., they are
disjoint and their union is the entire sample space).
P (A) = P (A|B1)P (B1) + P (A|B2)P (B2) + · · · + P (A|Bn)P (Bn)
P (A) = P (A ∩ B1) + P (A ∩ B2) + · · · + P (A ∩ Bn)
For LOTP with extra conditioning, just add in another event C!
P (A|C) = P (A|B1, C)P (B1|C) + · · · + P (A|Bn, C)P (Bn|C)
P (A|C) = P (A ∩ B1|C) + P (A ∩ B2|C) + · · · + P (A ∩ Bn|C)
Special case of LOTP with B and B^c as partition:
P(A) = P(A|B)P(B) + P(A|B^c)P(B^c)
P(A) = P(A ∩ B) + P(A ∩ B^c)
Bayes’ Rule
Bayes’ Rule, and with extra conditioning (just add in C!)
P(A|B) = P(B|A)P(A) / P(B)
P(A|B, C) = P(B|A, C)P(A|C) / P(B|C)
We can also write
P(A|B, C) = P(A, B, C) / P(B, C) = P(B, C|A)P(A) / P(B, C)
Odds Form of Bayes' Rule
P(A|B) / P(A^c|B) = [P(B|A) / P(B|A^c)] · [P(A) / P(A^c)]
The posterior odds of A are the likelihood ratio times the prior odds.
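A minimal R sketch of Bayes' rule and its odds form, with made-up numbers (A = "has condition", B = "tests positive"; the prior and test accuracies below are assumptions for illustration only).

p_A    <- 0.01                                  # prior P(A) (assumed)
p_B_A  <- 0.95                                  # P(B|A), sensitivity (assumed)
p_B_Ac <- 0.02                                  # P(B|A^c), false-positive rate (assumed)
p_B    <- p_B_A * p_A + p_B_Ac * (1 - p_A)      # LOTP with partition {A, A^c}
p_A_B  <- p_B_A * p_A / p_B                     # Bayes' rule
posterior_odds <- (p_B_A / p_B_Ac) * (p_A / (1 - p_A))  # odds form
c(p_A_B, posterior_odds / (1 + posterior_odds))         # both give the same posterior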
Random Variables and their Distributions
PMF, CDF, and Independence
Probability Mass Function (PMF) Gives the probability that a
discrete random variable takes on the value x.
pX (x) = P (X = x)
[Figure: an example PMF, with point masses plotted at x = 0, 1, 2, 3, 4.]
The PMF satisfies pX(x) ≥ 0 and Σx pX(x) = 1.
Cumulative Distribution Function (CDF) Gives the probability
that a random variable is less than or equal to x.
FX (x) = P (X ≤ x)
[Figure: the corresponding CDF, a right-continuous step function rising from 0 to 1.]
The CDF is an increasing, right-continuous function with
FX (x) → 0 as x → −∞ and FX (x) → 1 as x → ∞
Independence Intuitively, two random variables are independent if
knowing the value of one gives no information about the other.
Discrete r.v.s X and Y are independent if for all values of x and y
P (X = x, Y = y) = P (X = x)P (Y = y)
Expected Value and Indicators
Expected Value and Linearity
Expected Value (a.k.a. mean, expectation, or average) is a weighted
average of the possible outcomes of our random variable.
Mathematically, if x1, x2, x3, . . . are all of the distinct possible values
that X can take, the expected value of X is
E(X) = Σi xi P(X = xi)
[Figure: columns of i.i.d. realizations of X, Y, and X + Y, illustrating that (1/n)Σ xi + (1/n)Σ yi = (1/n)Σ (xi + yi), i.e., E(X) + E(Y) = E(X + Y).]
Linearity For any r.v.s X and Y , and constants a, b, c,
E(aX + bY + c) = aE(X) + bE(Y ) + c
Same distribution implies same mean If X and Y have the same
distribution, then E(X) = E(Y ) and, more generally,
E(g(X)) = E(g(Y ))
Conditional Expected Value is defined like expectation, only
conditioned on any event A.
E(X|A) = Σx x P(X = x|A)
Indicator Random Variables
Indicator Random Variable is a random variable that takes on the
value 1 or 0. It is always an indicator of some event: if the event
occurs, the indicator is 1; otherwise it is 0. They are useful for many
problems about counting how many events of some kind occur. Write
IA = 1 if A occurs, 0 if A does not occur.
Note that IA^2 = IA, IAIB = IA∩B, and IA∪B = IA + IB − IAIB.
Distribution IA ∼ Bern(p) where p = P (A).
Fundamental Bridge The expectation of the indicator for event A is
the probability of event A: E(IA) = P (A).
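A quick R check of the fundamental bridge, estimating E(IA) by a sample mean; the event A = "a fair die roll is prime" is an arbitrary example.

set.seed(10)
rolls <- sample(1:6, 1e5, replace = TRUE)
i_A <- rolls %in% c(2, 3, 5)          # indicator of A
c(mean(i_A), 3 / 6)                   # estimated E(I_A) vs P(A) = 1/2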
Variance and Standard Deviation
Var(X) = E((X − E(X))^2) = E(X^2) − (E(X))^2
SD(X) = √Var(X)
Continuous RVs, LOTUS, UoU
Continuous Random Variables (CRVs)
What’s the probability that a CRV is in an interval? Take the
difference in CDF values (or use the PDF as described later).
P (a ≤ X ≤ b) = P (X ≤ b) − P (X ≤ a) = FX (b) − FX (a)
For X ∼ N(µ, σ^2), this becomes
P(a ≤ X ≤ b) = Φ((b − µ)/σ) − Φ((a − µ)/σ)
What is the Probability Density Function (PDF)? The PDF f
is the derivative of the CDF F .
F′(x) = f(x)
A PDF is nonnegative and integrates to 1. By the fundamental
theorem of calculus, to get from PDF back to CDF we can integrate:
F(x) = ∫_{−∞}^{x} f(t) dt
[Figure: a PDF and its CDF, plotted for x from −4 to 4.]
To find the probability that a CRV takes on a value in an interval,
integrate the PDF over that interval.
F(b) − F(a) = ∫_a^b f(x) dx
How do I find the expected value of a CRV? Analogous to the
discrete case, where you sum x times the PMF, for CRVs you integrate
x times the PDF.
E(X) = ∫_{−∞}^{∞} x f(x) dx
LOTUS
Expected value of a function of an r.v. The expected value of X
is defined this way:
E(X) = Σx x P(X = x) (for discrete X)
E(X) = ∫_{−∞}^{∞} x f(x) dx (for continuous X)
The Law of the Unconscious Statistician (LOTUS) states that
you can find the expected value of a function of a random variable,
g(X), in a similar way, by replacing the x in front of the PMF/PDF by
g(x) but still working with the PMF/PDF of X:
E(g(X)) = Σx g(x) P(X = x) (for discrete X)
E(g(X)) = ∫_{−∞}^{∞} g(x) f(x) dx (for continuous X)
What’s a function of a random variable? A function of a random
variable is also a random variable. For example, if X is the number of
bikes you see in an hour, then g(X) = 2X is the number of bike wheels
you see in that hour and h(X) = X
2

=
X(X−1)
2 is the number of
pairs of bikes such that you see both of those bikes in that hour.
What’s the point? You don’t need to know the PMF/PDF of g(X)
to find its expected value. All you need is the PMF/PDF of X.
Universality of Uniform (UoU)
When you plug any CRV into its own CDF, you get a Uniform(0,1)
random variable. When you plug a Uniform(0,1) r.v. into an inverse
CDF, you get an r.v. with that CDF. For example, let’s say that a
random variable X has CDF
F(x) = 1 − e^{−x}, for x > 0
By UoU, if we plug X into this function then we get a uniformly distributed random variable:
F(X) = 1 − e^{−X} ∼ Unif(0, 1)
Similarly, if U ∼ Unif(0, 1) then F^{−1}(U) has CDF F. The key point is that for any continuous random variable X, we can transform it into a Uniform random variable and back by using its CDF.
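A short R sketch of UoU for this Expo(1) example, assuming the CDF F(x) = 1 − e^{−x} above.

set.seed(1)
x <- rexp(1e5, rate = 1)     # draws with CDF F
u <- 1 - exp(-x)             # plug X into its own CDF
c(mean(u), var(u))           # close to 1/2 and 1/12, as for Unif(0,1)
# Other direction: plug Unif(0,1) draws into the inverse CDF F^{-1}(u) = -log(1 - u)
u2 <- runif(1e5)
x2 <- -log(1 - u2)
mean(x2)                     # close to 1, the Expo(1) mean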
Moments and MGFs
Moments
Moments describe the shape of a distribution. Let X have mean µ and standard deviation σ, and let Z = (X − µ)/σ be the standardized version of X. The kth moment of X is µk = E(X^k) and the kth standardized moment of X is mk = E(Z^k). The mean, variance, skewness, and kurtosis are important summaries of the shape of a distribution.
Mean E(X) = µ1
Variance Var(X) = µ2 − µ1^2
Skewness Skew(X) = m3
Kurtosis Kurt(X) = m4 − 3
Moment Generating Functions
MGF For any random variable X, the function
MX(t) = E(e^{tX})
is the moment generating function (MGF) of X, if it exists for all t in some open interval containing 0. The variable t could just as well have been called u or v. It's a bookkeeping device that lets us work with the function MX rather than the sequence of moments.
Why is it called the Moment Generating Function? Because
the kth derivative of the moment generating function, evaluated at 0,
is the kth moment of X.
µk = E(X^k) = MX^{(k)}(0)
This is true by Taylor expansion of e^{tX}, since
MX(t) = E(e^{tX}) = Σ_{k=0}^{∞} E(X^k) t^k / k! = Σ_{k=0}^{∞} µk t^k / k!
MGF of linear functions If we have Y = aX + b, then
MY(t) = E(e^{t(aX+b)}) = e^{bt} E(e^{(at)X}) = e^{bt} MX(at)
Uniqueness If it exists, the MGF uniquely determines the
distribution. This means that for any two random variables X and Y ,
they are distributed the same (their PMFs/PDFs are equal) if and
only if their MGFs are equal.
Summing Independent RVs by Multiplying MGFs. If X and Y
are independent, then
MX+Y(t) = E(e^{t(X+Y)}) = E(e^{tX}) E(e^{tY}) = MX(t) · MY(t)
The MGF of the sum of two random variables is the product of the
MGFs of those two random variables.
Joint PDFs and CDFs
Joint Distributions
The joint CDF of X and Y is
F (x, y) = P (X ≤ x, Y ≤ y)
In the discrete case, X and Y have a joint PMF
pX,Y (x, y) = P (X = x, Y = y).
In the continuous case, they have a joint PDF
fX,Y(x, y) = ∂^2 FX,Y(x, y) / ∂x∂y.
The joint PMF/PDF must be nonnegative and sum/integrate to 1.
Conditional Distributions
Conditioning and Bayes’ rule for discrete r.v.s
P(Y = y|X = x) = P(X = x, Y = y) / P(X = x) = P(X = x|Y = y)P(Y = y) / P(X = x)
Conditioning and Bayes’ rule for continuous r.v.s
fY|X(y|x) = fX,Y(x, y) / fX(x) = fX|Y(x|y) fY(y) / fX(x)
Hybrid Bayes’ rule
fX(x|A) = P(A|X = x) fX(x) / P(A)
Marginal Distributions
To find the distribution of one (or more) random variables from a joint
PMF/PDF, sum/integrate over the unwanted random variables.
Marginal PMF from joint PMF
P(X = x) = Σy P(X = x, Y = y)
Marginal PDF from joint PDF
fX(x) = ∫_{−∞}^{∞} fX,Y(x, y) dy
Independence of Random Variables
Random variables X and Y are independent if and only if any of the
following conditions holds:
• Joint CDF is the product of the marginal CDFs
• Joint PMF/PDF is the product of the marginal PMFs/PDFs
• Conditional distribution of Y given X is the marginal
distribution of Y
Write X ⊥⊥ Y to denote that X and Y are independent.
Multivariate LOTUS
LOTUS in more than one dimension is analogous to the 1D LOTUS.
For discrete random variables:
E(g(X, Y)) = Σx Σy g(x, y) P(X = x, Y = y)
For continuous random variables:
E(g(X, Y)) = ∫_{−∞}^{∞} ∫_{−∞}^{∞} g(x, y) fX,Y(x, y) dx dy
Covariance and Transformations
Covariance and Correlation
Covariance is the analog of variance for two random variables.
Cov(X, Y ) = E ((X − E(X))(Y − E(Y ))) = E(XY ) − E(X)E(Y )
Note that
Cov(X, X) = E(X^2) − (E(X))^2 = Var(X)
Correlation is a standardized version of covariance that is always
between −1 and 1.
Corr(X, Y) = Cov(X, Y) / √(Var(X)Var(Y))
Covariance and Independence If two random variables are
independent, then they are uncorrelated. The converse is not
necessarily true (e.g., consider X ∼ N(0, 1) and Y = X^2).
X ⊥⊥ Y −→ Cov(X, Y) = 0 −→ E(XY) = E(X)E(Y)
Covariance and Variance The variance of a sum can be found by
Var(X + Y ) = Var(X) + Var(Y ) + 2Cov(X, Y )
Var(X1 + X2 + · · · + Xn) = Σ_{i=1}^{n} Var(Xi) + 2 Σ_{i<j} Cov(Xi, Xj)
If X and Y are independent then they have covariance 0, so
X ⊥⊥ Y =⇒ Var(X + Y) = Var(X) + Var(Y)
If X1, X2, . . . , Xn are identically distributed and have the same
covariance relationships (often by symmetry), then
Var(X1 + X2 + · · · + Xn) = nVar(X1) + 2 (n choose 2) Cov(X1, X2)
Covariance Properties For random variables W, X, Y, Z and
constants a, b:
Cov(X, Y ) = Cov(Y, X)
Cov(X + a, Y + b) = Cov(X, Y )
Cov(aX, bY ) = abCov(X, Y )
Cov(W + X, Y + Z) = Cov(W, Y ) + Cov(W, Z) + Cov(X, Y )
+ Cov(X, Z)
Correlation is location-invariant and scale-invariant For any
constants a, b, c, d with a and c nonzero,
Corr(aX + b, cY + d) = Corr(X, Y )
Transformations
One Variable Transformations Let’s say that we have a random
variable X with PDF fX (x), but we are also interested in some
function of X. We call this function Y = g(X). Also let y = g(x). If g
is differentiable and strictly increasing (or strictly decreasing), then
the PDF of Y is
fY(y) = fX(x) |dx/dy| = fX(g^{−1}(y)) |d/dy g^{−1}(y)|
The derivative of the inverse transformation is called the Jacobian.
Two Variable Transformations Similarly, let’s say we know the
joint PDF of U and V but are also interested in the random vector
(X, Y ) defined by (X, Y ) = g(U, V ). Let
∂(u, v)/∂(x, y) = [ ∂u/∂x   ∂u/∂y
                    ∂v/∂x   ∂v/∂y ]
be the Jacobian matrix. If the entries in this matrix exist and are continuous, and the determinant of the matrix is never 0, then
fX,Y(x, y) = fU,V(u, v) |∂(u, v)/∂(x, y)|
The inner bars tell us to take the matrix's determinant, and the outer bars tell us to take the absolute value. For a 2 × 2 matrix,
|det [ a  b
       c  d ]| = |ad − bc|
Convolutions
Convolution Integral If you want to find the PDF of the sum of two
independent CRVs X and Y , you can do the following integral:
fX+Y(t) = ∫_{−∞}^{∞} fX(x) fY(t − x) dx
Example Let X, Y ∼ N (0, 1) be i.i.d. Then for each fixed t,
fX+Y(t) = ∫_{−∞}^{∞} (1/√(2π)) e^{−x^2/2} · (1/√(2π)) e^{−(t−x)^2/2} dx
By completing the square and using the fact that a Normal PDF
integrates to 1, this works out to fX+Y (t) being the N (0, 2) PDF.
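A simulation sketch in R of this convolution example: draws of X + Y should match the N(0, 2) distribution.

set.seed(2)
x <- rnorm(1e5); y <- rnorm(1e5)
s <- x + y
c(mean(s), var(s))                                  # close to 0 and 2
c(mean(s <= 1), pnorm(1, mean = 0, sd = sqrt(2)))   # empirical vs exact CDF at t = 1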
Poisson Process
Definition We have a Poisson process of rate λ arrivals per unit
time if the following conditions hold:
1. The number of arrivals in a time interval of length t is Pois(λt).
2. Numbers of arrivals in disjoint time intervals are independent.
For example, the numbers of arrivals in the time intervals [0, 5],
(5, 12), and [13, 23) are independent with Pois(5λ), Pois(7λ), Pois(10λ)
distributions, respectively.
[Figure: arrival times T1, T2, T3, T4, T5 of a Poisson process marked on a timeline starting at 0.]
Count-Time Duality Consider a Poisson process of emails arriving
in an inbox at rate λ emails per hour. Let Tn be the time of arrival of
the nth email (relative to some starting time 0) and Nt be the number
of emails that arrive in [0, t]. Let’s find the distribution of T1. The
event T1 > t, the event that you have to wait more than t hours to get the first email, is the same as the event Nt = 0, which is the event that there are no emails in the first t hours. So
P(T1 > t) = P(Nt = 0) = e^{−λt} −→ P(T1 ≤ t) = 1 − e^{−λt}
Thus we have T1 ∼ Expo(λ). By the memoryless property and similar
reasoning, the interarrival times between emails are i.i.d. Expo(λ), i.e.,
the differences Tn − Tn−1 are i.i.d. Expo(λ).
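A minimal R sketch of count-time duality; the rate λ = 2 and window t = 1 are arbitrary choices.

set.seed(3)
lambda <- 2; t <- 1
first_arrivals <- rexp(1e5, rate = lambda)        # T1 ~ Expo(lambda)
counts <- rpois(1e5, lambda * t)                  # N_t ~ Pois(lambda * t)
c(mean(first_arrivals > t), mean(counts == 0), exp(-lambda * t))  # all close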
Order Statistics
Definition Let’s say you have n i.i.d. r.v.s X1, X2, . . . , Xn. If you
arrange them from smallest to largest, the ith element in that list is
the ith order statistic, denoted X(i). So X(1) is the smallest in the list
and X(n) is the largest in the list.
Note that the order statistics are dependent, e.g., learning X(4) = 42
gives us the information that X(1), X(2), X(3) are ≤ 42 and
X(5), X(6), . . . , X(n) are ≥ 42.
Distribution Taking n i.i.d. random variables X1, X2, . . . , Xn with
CDF F (x) and PDF f(x), the CDF and PDF of X(i) are:
F_{X(i)}(x) = P(X(i) ≤ x) = Σ_{k=i}^{n} (n choose k) F(x)^k (1 − F(x))^{n−k}
f_{X(i)}(x) = n (n − 1 choose i − 1) F(x)^{i−1} (1 − F(x))^{n−i} f(x)
Uniform Order Statistics The jth order statistic of
i.i.d. U1, . . . , Un ∼ Unif(0, 1) is U(j) ∼ Beta(j, n − j + 1).
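A simulation sketch in R of the Uniform order statistics result, taking j = 2 and n = 5 as an arbitrary example, so U(2) should be Beta(2, 4).

set.seed(4)
u2 <- replicate(1e4, sort(runif(5))[2])
c(mean(u2), 2 / (2 + 4))                 # simulated mean vs Beta(2, 4) mean a/(a + b)
c(mean(u2 <= 0.3), pbeta(0.3, 2, 4))     # empirical vs exact CDF at 0.3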
Conditional Expectation
Conditioning on an Event We can find E(Y |A), the expected value
of Y given that event A occurred. A very important case is when A is
the event X = x. Note that E(Y |A) is a number. For example:
• The expected value of a fair die roll, given that it is prime, is (1/3)·2 + (1/3)·3 + (1/3)·5 = 10/3.
• Let Y be the number of successes in 10 independent Bernoulli
trials with probability p of success. Let A be the event that the
first 3 trials are all successes. Then
E(Y |A) = 3 + 7p
since the number of successes among the last 7 trials is Bin(7, p).
• Let T ∼ Expo(1/10) be how long you have to wait until the
shuttle comes. Given that you have already waited t minutes,
the expected additional waiting time is 10 more minutes, by the
memoryless property. That is, E(T | T > t) = t + 10.
Discrete Y:    E(Y) = Σy y P(Y = y),           E(Y|A) = Σy y P(Y = y|A)
Continuous Y:  E(Y) = ∫_{−∞}^{∞} y fY(y) dy,   E(Y|A) = ∫_{−∞}^{∞} y f(y|A) dy
Conditioning on a Random Variable We can also find E(Y |X),
the expected value of Y given the random variable X. This is a
function of the random variable X. It is not a number except in
certain special cases such as if X ⊥
⊥ Y . To find E(Y |X), find
E(Y |X = x) and then plug in X for x. For example:
• If E(Y|X = x) = x^3 + 5x, then E(Y|X) = X^3 + 5X.
• Let Y be the number of successes in 10 independent Bernoulli
trials with probability p of success and X be the number of
successes among the first 3 trials. Then E(Y |X) = X + 7p.
• Let X ∼ N(0, 1) and Y = X^2. Then E(Y|X = x) = x^2 since if we know X = x then we know Y = x^2. And E(X|Y = y) = 0 since if we know Y = y then we know X = ±√y, with equal probabilities (by symmetry). So E(Y|X) = X^2, E(X|Y) = 0.
Properties of Conditional Expectation
1. E(Y|X) = E(Y) if X ⊥⊥ Y
2. E(h(X)W |X) = h(X)E(W |X) (taking out what’s known)
In particular, E(h(X)|X) = h(X).
3. E(E(Y |X)) = E(Y ) (Adam’s Law, a.k.a. Law of Total
Expectation)
Adam’s Law (a.k.a. Law of Total Expectation) can also be
written in a way that looks analogous to LOTP. For any events
A1, A2, . . . , An that partition the sample space,
E(Y ) = E(Y |A1)P (A1) + · · · + E(Y |An)P (An)
For the special case where the partition is A, A^c, this says
E(Y) = E(Y|A)P(A) + E(Y|A^c)P(A^c)
Eve’s Law (a.k.a. Law of Total Variance)
Var(Y ) = E(Var(Y |X)) + Var(E(Y |X))
MVN, LLN, CLT
Law of Large Numbers (LLN)
Let X1, X2, X3 . . . be i.i.d. with mean µ. The sample mean is
X̄n = (X1 + X2 + X3 + · · · + Xn)/n
The Law of Large Numbers states that as n → ∞, X̄n → µ with
probability 1. For example, in flips of a coin with probability p of
Heads, let Xj be the indicator of the jth flip being Heads. Then LLN
says the proportion of Heads converges to p (with probability 1).
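A minimal R sketch of the LLN for this coin-flip example (p = 1/2): the running proportion of Heads drifts toward 1/2.

set.seed(5)
flips <- rbinom(1e5, size = 1, prob = 0.5)
running_mean <- cumsum(flips) / seq_along(flips)
running_mean[c(10, 100, 1e4, 1e5)]       # approaches 0.5 as n grows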
Central Limit Theorem (CLT)
Approximation using CLT
We use ∼̇ to denote "is approximately distributed". We can use the Central Limit Theorem to approximate the distribution of a random variable Y = X1 + X2 + · · · + Xn that is a sum of n i.i.d. random variables Xi. Let E(Y) = µY and Var(Y) = σY^2. The CLT says
Y ∼̇ N(µY, σY^2)
If the Xi are i.i.d. with mean µX and variance σX^2, then µY = nµX and σY^2 = nσX^2. For the sample mean X̄n, the CLT says
X̄n = (1/n)(X1 + X2 + · · · + Xn) ∼̇ N(µX, σX^2/n)
Asymptotic Distributions using CLT
We use →_D to denote "converges in distribution to" as n → ∞. The CLT says that if we standardize the sum X1 + · · · + Xn then the distribution of the sum converges to N(0, 1) as n → ∞:
(1/(σ√n)) (X1 + · · · + Xn − nµX) →_D N(0, 1)
In other words, the CDF of the left-hand side goes to the standard Normal CDF, Φ. In terms of the sample mean, the CLT says
√n (X̄n − µX)/σX →_D N(0, 1)
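A simulation sketch in R of the CLT, using sums of n = 50 i.i.d. Expo(1) r.v.s (so µX = σX = 1) as an arbitrary example.

set.seed(6)
n <- 50
sums <- replicate(1e4, sum(rexp(n, rate = 1)))
z <- (sums - n * 1) / (1 * sqrt(n))                    # standardize the sum
c(mean(z <= 1.96), pnorm(1.96))                        # empirical CDF vs Phi(1.96)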
Markov Chains
Definition
[Figure: a five-state Markov chain with transition probabilities labeled on the arrows.]
A Markov chain is a random walk in a state space, which we will
assume is finite, say {1, 2, . . . , M}. We let Xt denote which element of
the state space the walk is visiting at time t. The Markov chain is the
sequence of random variables tracking where the walk is at all points
in time, X0, X1, X2, . . . . By definition, a Markov chain must satisfy
the Markov property, which says that if you want to predict where
the chain will be at a future time, if we know the present state then
the entire past history is irrelevant. Given the present, the past and
future are conditionally independent. In symbols,
P (Xn+1 = j|X0 = i0, X1 = i1, . . . , Xn = i) = P (Xn+1 = j|Xn = i)
State Properties
A state is either recurrent or transient.
• If you start at a recurrent state, then you will always return
back to that state at some point in the future. ♪You can
check-out any time you like, but you can never leave. ♪
• Otherwise you are at a transient state. There is some positive
probability that once you leave you will never return. ♪You
don’t have to go home, but you can’t stay here. ♪
A state is either periodic or aperiodic.
• If you start at a periodic state of period k, then the GCD of the possible numbers of steps it would take to return back is k > 1.
• Otherwise you are at an aperiodic state. The GCD of the
possible numbers of steps it would take to return back is 1.
Transition Matrix
Let the state space be {1, 2, . . . , M}. The transition matrix Q is the
M × M matrix where element qij is the probability that the chain goes
from state i to state j in one step:
qij = P (Xn+1 = j|Xn = i)
To find the probability that the chain goes from state i to state j in
exactly m steps, take the (i, j) element of Q^m.
q_ij^(m) = P(Xn+m = j|Xn = i)
If X0 is distributed according to the row vector PMF p⃗, i.e., pj = P(X0 = j), then the PMF of Xn is p⃗Q^n.
Chain Properties
A chain is irreducible if you can get from anywhere to anywhere. If a
chain (on a finite state space) is irreducible, then all of its states are
recurrent. A chain is periodic if any of its states are periodic, and is
aperiodic if none of its states are periodic. In an irreducible chain, all
states have the same period.
A chain is reversible with respect to s⃗ if siqij = sjqji for all i, j. Examples of reversible chains include any chain with qij = qji, with s⃗ = (1/M, 1/M, . . . , 1/M), and random walk on an undirected network.
Stationary Distribution
Let us say that the vector s⃗ = (s1, s2, . . . , sM) is a PMF (written as a row vector). We will call s⃗ the stationary distribution for the chain if s⃗Q = s⃗. As a consequence, if Xt has the stationary distribution,
then all future Xt+1, Xt+2, . . . also have the stationary distribution.
For irreducible, aperiodic chains, the stationary distribution exists, is
unique, and si is the long-run probability of a chain being at state i.
The expected number of steps to return to i starting from i is 1/si.
To find the stationary distribution, you can solve the matrix equation (Q′ − I)s⃗′ = 0, where ′ denotes transpose. The stationary distribution is uniform if the columns of Q sum to 1.
Reversibility Condition Implies Stationarity If you have a PMF s⃗ and a Markov chain with transition matrix Q, then siqij = sjqji for all states i, j implies that s⃗ is stationary.
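A minimal R sketch for finding a stationary distribution numerically; the 2-state transition matrix below is made up for illustration. The stationary PMF is the normalized left eigenvector of Q with eigenvalue 1.

Q <- matrix(c(0.5, 0.5,
              0.2, 0.8), nrow = 2, byrow = TRUE)
e <- eigen(t(Q))                           # left eigenvectors of Q
s <- Re(e$vectors[, 1]); s <- s / sum(s)   # eigenvalue-1 vector, normalized to a PMF
s                                          # (2/7, 5/7) for this Q
s %*% Q                                    # equals s, confirming stationarity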
Random Walk on an Undirected Network
[Figure: an undirected network on nodes 1 through 5.]
If you have a collection of nodes, pairs of which can be connected by
undirected edges, and a Markov chain is run by going from the
current node to a uniformly random node that is connected to it by an
edge, then this is a random walk on an undirected network. The
stationary distribution of this chain is proportional to the degree
sequence (this is the sequence of degrees, where the degree of a node
is how many edges are attached to it). For example, the stationary
distribution of random walk on the network shown above is
proportional to (3, 3, 2, 4, 2), so it's (3/14, 3/14, 2/14, 4/14, 2/14).
Continuous Distributions
Uniform Distribution
Let us say that U is distributed Unif(a, b). We know the following:
Properties of the Uniform For a Uniform distribution, the
probability of a draw from any interval within the support is
proportional to the length of the interval. See Universality of Uniform
and Order Statistics for other properties.
Example William throws darts really badly, so his darts are uniform
over the whole room because they’re equally likely to appear anywhere.
William’s darts have a Uniform distribution on the surface of the
room. The Uniform is the only distribution where the probability of
hitting in any specific region is proportional to the length/area/volume
of that region, and where the density of occurrence in any one specific
spot is constant throughout the whole support.
Normal Distribution
Let us say that X is distributed N(µ, σ^2). We know the following:
Central Limit Theorem The Normal distribution is ubiquitous
because of the Central Limit Theorem, which states that the sample
mean of i.i.d. r.v.s will approach a Normal distribution as the sample
size grows, regardless of the initial distribution.
Location-Scale Transformation Every time we shift a Normal
r.v. (by adding a constant) or rescale a Normal (by multiplying by a
constant), we change it to another Normal r.v. For any Normal
X ∼ N(µ, σ^2), we can transform it to the standard N(0, 1) by the following transformation:
Z = (X − µ)/σ ∼ N(0, 1)
Standard Normal The Standard Normal, Z ∼ N (0, 1), has mean 0
and variance 1. Its CDF is denoted by Φ.
Exponential Distribution
Let us say that X is distributed Expo(λ). We know the following:
Story You’re sitting on an open meadow right before the break of
dawn, wishing that airplanes in the night sky were shooting stars,
because you could really use a wish right now. You know that shooting
stars come on average every 15 minutes, but a shooting star is not
“due” to come just because you’ve waited so long. Your waiting time
is memoryless; the additional time until the next shooting star comes
does not depend on how long you’ve waited already.
Example The waiting time until the next shooting star is distributed
Expo(4) hours. Here λ = 4 is the rate parameter, since shooting
stars arrive at a rate of 1 per 1/4 hour on average. The expected time
until the next shooting star is 1/λ = 1/4 hour.
Expos as a rescaled Expo(1)
Y ∼ Expo(λ) → X = λY ∼ Expo(1)
Memorylessness The Exponential Distribution is the only
continuous memoryless distribution. The memoryless property says
that for X ∼ Expo(λ) and any positive numbers s and t,
P(X > s + t | X > s) = P(X > t)
Equivalently,
X − a | (X > a) ∼ Expo(λ)
For example, a product with an Expo(λ) lifetime is always “as good as
new” (it doesn’t experience wear and tear). Given that the product has
survived a years, the additional time that it will last is still Expo(λ).
Min of Expos If we have independent Xi ∼ Expo(λi), then
min(X1, . . . , Xk) ∼ Expo(λ1 + λ2 + · · · + λk).
Max of Expos If we have i.i.d. Xi ∼ Expo(λ), then
max(X1, . . . , Xk) has the same distribution as Y1 + Y2 + · · · + Yk,
where Yj ∼ Expo(jλ) and the Yj are independent.
Gamma Distribution
[Figure: Gamma(3, 1), Gamma(3, 0.5), Gamma(10, 1), and Gamma(5, 0.5) PDFs.]
Let us say that X is distributed Gamma(a, λ). We know the following:
Story You sit waiting for shooting stars, where the waiting time for a
star is distributed Expo(λ). You want to see n shooting stars before
you go home. The total waiting time for the nth shooting star is
Gamma(n, λ).
Example You are at a bank, and there are 3 people ahead of you.
The serving time for each person is Exponential with mean 2 minutes.
Only one person at a time can be served. The distribution of your
waiting time until it’s your turn to be served is Gamma(3, 1
2 ).
Beta Distribution
[Figure: Beta(0.5, 0.5), Beta(2, 1), Beta(2, 8), and Beta(5, 5) PDFs.]
Conjugate Prior of the Binomial In the Bayesian approach to
statistics, parameters are viewed as random variables, to reflect our
uncertainty. The prior for a parameter is its distribution before
observing data. The posterior is the distribution for the parameter
after observing data. Beta is the conjugate prior of the Binomial
because if you have a Beta-distributed prior on p in a Binomial, then
the posterior distribution on p given the Binomial data is also
Beta-distributed. Consider the following two-level model:
X|p ∼ Bin(n, p)
p ∼ Beta(a, b)
Then after observing X = x, we get the posterior distribution
p|(X = x) ∼ Beta(a + x, b + n − x)
Order statistics of the Uniform See Order Statistics.
Beta-Gamma relationship If X ∼ Gamma(a, λ) and Y ∼ Gamma(b, λ), with X ⊥⊥ Y, then
• X/(X + Y) ∼ Beta(a, b)
• X + Y ⊥⊥ X/(X + Y)
This is known as the bank–post office result.
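A simulation sketch in R of Beta-Binomial conjugacy, with an assumed Beta(2, 3) prior, n = 10 trials, and an observed x = 7 successes (all numbers made up for illustration).

set.seed(7)
a <- 2; b <- 3; n <- 10; x_obs <- 7
p <- rbeta(1e5, a, b)                          # draw p from the prior
x <- rbinom(1e5, size = n, prob = p)           # draw X | p ~ Bin(n, p)
post_draws <- p[x == x_obs]                    # posterior draws of p given X = x_obs
c(mean(post_draws), (a + x_obs) / (a + b + n)) # matches the Beta(a + x, b + n - x) mean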
χ^2 (Chi-Square) Distribution
Let us say that X is distributed χ^2_n. We know the following:
Story A Chi-Square(n) is the sum of the squares of n independent
standard Normal r.v.s.
Properties and Representations
X is distributed as Z1^2 + Z2^2 + · · · + Zn^2 for i.i.d. Zi ∼ N(0, 1)
X ∼ Gamma(n/2, 1/2)
Discrete Distributions
Distributions for four sampling schemes
                         Replace              No Replace
Fixed # trials (n)       Binomial             HGeom
                         (Bern if n = 1)
Draw until rth success   NBin                 NHGeom
                         (Geom if r = 1)
Bernoulli Distribution
The Bernoulli distribution is the simplest case of the Binomial
distribution, where we only have one trial (n = 1). Let us say that X is
distributed Bern(p). We know the following:
Story A trial is performed with probability p of “success”, and X is
the indicator of success: 1 means success, 0 means failure.
Example Let X be the indicator of Heads for a fair coin toss. Then
X ∼ Bern(1/2). Also, 1 − X ∼ Bern(1/2) is the indicator of Tails.
Binomial Distribution
[Figure: Bin(10, 1/2) PMF.]
Let us say that X is distributed Bin(n, p). We know the following:
Story X is the number of “successes” that we will achieve in n
independent trials, where each trial is either a success or a failure, each
with the same probability p of success. We can also write X as a sum
of multiple independent Bern(p) random variables. Let X ∼ Bin(n, p)
and Xj ∼ Bern(p), where all of the Bernoullis are independent. Then
X = X1 + X2 + X3 + · · · + Xn
Example If Jeremy Lin makes 10 free throws and each one independently has a 3/4 chance of getting in, then the number of free throws he makes is distributed Bin(10, 3/4).
Properties Let X ∼ Bin(n, p), Y ∼ Bin(m, p) with X ⊥⊥ Y.
• Redefine success n − X ∼ Bin(n, 1 − p)
• Sum X + Y ∼ Bin(n + m, p)
• Conditional X|(X + Y = r) ∼ HGeom(n, m, r)
• Binomial-Poisson Relationship Bin(n, p) is approximately Pois(np) if n is large and p is small.
• Binomial-Normal Relationship Bin(n, p) is approximately
N (np, np(1 − p)) if n is large and p is not near 0 or 1.
Geometric Distribution
Let us say that X is distributed Geom(p). We know the following:
Story X is the number of “failures” that we will achieve before we
achieve our first success. Our successes have probability p.
Example If each pokeball we throw has probability 1/10 to catch Mew, the number of failed pokeballs will be distributed Geom(1/10).
First Success Distribution
Equivalent to the Geometric distribution, except that it includes the
first success in the count. This is 1 more than the number of failures.
If X ∼ FS(p) then E(X) = 1/p.
Negative Binomial Distribution
Let us say that X is distributed NBin(r, p). We know the following:
Story X is the number of “failures” that we will have before we
achieve our rth success. Our successes have probability p.
Example Thundershock has 60% accuracy and can faint a wild
Raticate in 3 hits. The number of misses before Pikachu faints
Raticate with Thundershock is distributed NBin(3, 0.6).
Hypergeometric Distribution
Let us say that X is distributed HGeom(w, b, n). We know the
following:
Story In a population of w desired objects and b undesired objects,
X is the number of “successes” we will have in a draw of n objects,
without replacement. The draw of n objects is assumed to be a
simple random sample (all sets of n objects are equally likely).
Examples Here are some HGeom examples.
• Let’s say that we have only b Weedles (failure) and w Pikachus
(success) in Viridian Forest. We encounter n Pokemon in the
forest, and X is the number of Pikachus in our encounters.
• The number of Aces in a 5 card hand.
• You have w white balls and b black balls, and you draw n balls.
You will draw X white balls.
• You have w white balls and b black balls, and you draw n balls
without replacement. The number of white balls in your sample
is HGeom(w, b, n); the number of black balls is HGeom(b, w, n).
• Capture-recapture A forest has N elk, you capture n of them,
tag them, and release them. Then you recapture a new sample
of size m. How many tagged elk are now in the new sample?
HGeom(n, N − n, m)
Poisson Distribution
Let us say that X is distributed Pois(λ). We know the following:
Story There are rare events (low probability events) that can occur in many different ways (a high number of possible occurrences) at an average rate of λ occurrences per unit space or time. The number of events that occur
in that unit of space or time is X.
Example A certain busy intersection has an average of 2 accidents
per month. Since an accident is a low probability event that can
happen many different ways, it is reasonable to model the number of
accidents in a month at that intersection as Pois(2). Then the number
of accidents that happen in two months at that intersection is
distributed Pois(4).
Properties Let X ∼ Pois(λ1) and Y ∼ Pois(λ2), with X ⊥⊥ Y.
1. Sum X + Y ∼ Pois(λ1 + λ2)
2. Conditional X|(X + Y = n) ∼ Bin(n, λ1/(λ1 + λ2))
3. Chicken-egg If there are Z ∼ Pois(λ) items and we randomly and independently "accept" each item with probability p, then the number of accepted items Z1 ∼ Pois(λp), the number of rejected items Z2 ∼ Pois(λ(1 − p)), and Z1 ⊥⊥ Z2.
Multivariate Distributions
Multinomial Distribution
Let us say that the vector X⃗ = (X1, X2, X3, . . . , Xk) ∼ Multk(n, p⃗) where p⃗ = (p1, p2, . . . , pk).
Story We have n items, which can fall into any one of the k buckets independently with the probabilities p⃗ = (p1, p2, . . . , pk).
Example Let us assume that every year, 100 students in the Harry Potter Universe are randomly and independently sorted into one of four houses with equal probability. The number of people in each of the houses is distributed Mult4(100, p⃗), where p⃗ = (0.25, 0.25, 0.25, 0.25).
Note that X1 + X2 + · · · + X4 = 100, and they are dependent.
Joint PMF For n = n1 + n2 + · · · + nk,
P(X⃗ = n⃗) = (n! / (n1! n2! . . . nk!)) p1^{n1} p2^{n2} . . . pk^{nk}
Marginal PMF, Lumping, and Conditionals Marginally, Xi ∼ Bin(n, pi) since we can define "success" to mean category i. If you lump together multiple categories in a Multinomial, then it is still Multinomial. For example, Xi + Xj ∼ Bin(n, pi + pj) for i ≠ j since we can define "success" to mean being in category i or j. Similarly, if k = 6 and we lump categories 1-2 and lump categories 3-5, then
(X1 + X2, X3 + X4 + X5, X6) ∼ Mult3(n, (p1 + p2, p3 + p4 + p5, p6))
Conditioning on some Xj also still gives a Multinomial:
X1, . . . , Xk−1 | Xk = nk ∼ Multk−1(n − nk, (p1/(1 − pk), . . . , pk−1/(1 − pk)))
Variances and Covariances We have Xi ∼ Bin(n, pi) marginally, so Var(Xi) = npi(1 − pi). Also, Cov(Xi, Xj) = −npipj for i ≠ j.
Multivariate Uniform Distribution
See the univariate Uniform for stories and examples. For the 2D
Uniform on some region, probability is proportional to area. Every point in the support has equal density, of value 1/(area of region). For the 3D Uniform, probability is proportional to volume.
Multivariate Normal (MVN) Distribution
A vector X⃗ = (X1, X2, . . . , Xk) is Multivariate Normal if every linear combination is Normally distributed, i.e., t1X1 + t2X2 + · · · + tkXk is Normal for any constants t1, t2, . . . , tk. The parameters of the Multivariate Normal are the mean vector µ⃗ = (µ1, µ2, . . . , µk) and the covariance matrix where the (i, j) entry is Cov(Xi, Xj).
Properties The Multivariate Normal has the following properties.
• Any subvector is also MVN.
• If any two elements within an MVN are uncorrelated, then they
are independent.
• The joint PDF of a Bivariate Normal (X, Y) with N(0, 1) marginal distributions and correlation ρ ∈ (−1, 1) is
fX,Y(x, y) = (1/(2πτ)) exp(−(x^2 + y^2 − 2ρxy)/(2τ^2)),
with τ = √(1 − ρ^2).
Distribution Properties
Important CDFs
Standard Normal: Φ
Exponential(λ): F(x) = 1 − e^{−λx}, for x ∈ (0, ∞)
Uniform(0, 1): F(x) = x, for x ∈ (0, 1)
Convolutions of Random Variables
A convolution of n random variables is simply their sum. For the
following results, let X and Y be independent.
1. X ∼ Pois(λ1), Y ∼ Pois(λ2) −→ X + Y ∼ Pois(λ1 + λ2)
2. X ∼ Bin(n1, p), Y ∼ Bin(n2, p) −→ X + Y ∼ Bin(n1 + n2, p).
Bin(n, p) can be thought of as a sum of i.i.d. Bern(p) r.v.s.
3. X ∼ Gamma(a1, λ), Y ∼ Gamma(a2, λ)
−→ X + Y ∼ Gamma(a1 + a2, λ). Gamma(n, λ) with n an
integer can be thought of as a sum of i.i.d. Expo(λ) r.v.s.
4. X ∼ NBin(r1, p), Y ∼ NBin(r2, p)
−→ X + Y ∼ NBin(r1 + r2, p). NBin(r, p) can be thought of as
a sum of i.i.d. Geom(p) r.v.s.
5. X ∼ N(µ1, σ1^2), Y ∼ N(µ2, σ2^2) −→ X + Y ∼ N(µ1 + µ2, σ1^2 + σ2^2)
Special Cases of Distributions
1. Bin(1, p) ∼ Bern(p)
2. Beta(1, 1) ∼ Unif(0, 1)
3. Gamma(1, λ) ∼ Expo(λ)
4. χ^2_n ∼ Gamma(n/2, 1/2)
5. NBin(1, p) ∼ Geom(p)
Inequalities
1. Cauchy-Schwarz |E(XY)| ≤ √(E(X^2)E(Y^2))
2. Markov P(X ≥ a) ≤ E|X|/a for a > 0
3. Chebyshev P(|X − µ| ≥ a) ≤ σ^2/a^2 for E(X) = µ, Var(X) = σ^2
4. Jensen E(g(X)) ≥ g(E(X)) for g convex; reverse if g is concave
Formulas
Geometric Series
1 + r + r^2 + · · · + r^{n−1} = Σ_{k=0}^{n−1} r^k = (1 − r^n)/(1 − r)
1 + r + r^2 + · · · = 1/(1 − r) if |r| < 1
Exponential Function (e^x)
e^x = Σ_{n=0}^{∞} x^n/n! = 1 + x + x^2/2! + x^3/3! + · · · = lim_{n→∞} (1 + x/n)^n
Gamma and Beta Integrals
You can sometimes solve complicated-looking integrals by
pattern-matching to a gamma or beta integral:
∫_0^∞ x^{t−1} e^{−x} dx = Γ(t)
∫_0^1 x^{a−1} (1 − x)^{b−1} dx = Γ(a)Γ(b)/Γ(a + b)
Also, Γ(a + 1) = aΓ(a), and Γ(n) = (n − 1)! if n is a positive integer.
Euler’s Approximation for Harmonic Sums
1 + 1/2 + 1/3 + · · · + 1/n ≈ log n + 0.577 . . .
Stirling’s Approximation for Factorials
n! ≈ √(2πn) (n/e)^n
Miscellaneous Definitions
Medians and Quantiles Let X have CDF F . Then X has median
m if F (m) ≥ 0.5 and P (X ≥ m) ≥ 0.5. For X continuous, m satisfies
F (m) = 1/2. In general, the ath quantile of X is min{x : F (x) ≥ a};
the median is the case a = 1/2.
log Statisticians generally use log to refer to natural log (i.e., base e).
i.i.d. r.v.s Independent, identically distributed random variables.
Example Problems
Contributions from Sebastian Chiu
Calculating Probability
A textbook has n typos, which are randomly scattered amongst its n
pages, independently. You pick a random page. What is the
probability that it has no typos? Answer: There is a (1 − 1/n) probability that any specific typo isn't on your page, and thus a (1 − 1/n)^n probability that there are no typos on your page. For n large, this is approximately e^{−1} = 1/e.
Linearity and Indicators (1)
In a group of n people, what is the expected number of distinct
birthdays (month and day)? What is the expected number of birthday
matches? Answer: Let X be the number of distinct birthdays and Ij
be the indicator for the jth day being represented.
E(Ij) = 1 − P(no one born on day j) = 1 − (364/365)^n
By linearity, E(X) = 365(1 − (364/365)^n). Now let Y be the number of birthday matches and Ji be the indicator that the ith pair of people have the same birthday. The probability that any two specific people share a birthday is 1/365, so E(Y) = (n choose 2)/365.
Linearity and Indicators (2)
This problem is commonly known as the hat-matching problem.
There are n people at a party, each with a hat. At the end of the party,
they each leave with a random hat. What is the expected number of
people who leave with the right hat? Answer: Each hat has a 1/n
chance of going to the right person. By linearity, the average number
of hats that go to their owners is n(1/n) = 1 .
Linearity and First Success
This problem is commonly known as the coupon collector problem.
There are n coupon types. At each draw, you get a uniformly random
coupon type. What is the expected number of coupons needed until
you have a complete set? Answer: Let N be the number of coupons
needed; we want E(N). Let N = N1 + · · · + Nn, where N1 is the
draws to get our first new coupon, N2 is the additional draws needed
to draw our second new coupon and so on. By the story of the First
Success, N2 ∼ FS((n − 1)/n) (after collecting first coupon type, there’s
(n − 1)/n chance you’ll get something new). Similarly,
N3 ∼ FS((n − 2)/n), and Nj ∼ FS((n − j + 1)/n). By linearity,
E(N) = E(N1) + · · · + E(Nn) = n/n + n/(n − 1) + · · · + n/1 = n Σ_{j=1}^{n} 1/j
This is approximately n(log(n) + 0.577) by Euler’s approximation.
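A simulation sketch in R of the coupon collector problem; n = 20 coupon types is an arbitrary choice.

set.seed(8)
n <- 20
collect_all <- function(n) {
  seen <- rep(FALSE, n); draws <- 0
  while (!all(seen)) {                 # keep drawing until every type has appeared
    draws <- draws + 1
    seen[sample.int(n, 1)] <- TRUE
  }
  draws
}
sims <- replicate(2000, collect_all(n))
c(mean(sims), n * sum(1 / (1:n)))      # simulated mean vs exact n * sum(1/j)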
Orderings of i.i.d. random variables
I call 2 UberX’s and 3 Lyfts at the same time. If the time it takes for
the rides to reach me are i.i.d., what is the probability that all the
Lyfts will arrive first? Answer: Since the arrival times of the five cars
are i.i.d., all 5! orderings of the arrivals are equally likely. There are
3!2! orderings that involve the Lyfts arriving first, so the probability
that the Lyfts arrive first is 3!2!/5! = 1/10. Alternatively, there are (5 choose 3) ways to choose 3 of the 5 slots for the Lyfts to occupy, where each of the choices are equally likely. One of these choices has all 3 of the Lyfts arriving first, so the probability is 1/(5 choose 3) = 1/10.
Expectation of Negative Hypergeometric
What is the expected number of cards that you draw before you pick
your first Ace in a shuffled deck (not counting the Ace)? Answer:
Consider a non-Ace. Denote this to be card j. Let Ij be the indicator
that card j will be drawn before the first Ace. Note that Ij = 1 says
that j is before all 4 of the Aces in the deck. The probability that this
occurs is 1/5 by symmetry. Let X be the number of cards drawn
before the first Ace. Then X = I1 + I2 + ... + I48, where each indicator
corresponds to one of the 48 non-Aces. Thus,
E(X) = E(I1) + E(I2) + ... + E(I48) = 48/5 = 9.6 .
Minimum and Maximum of RVs
What is the CDF of the maximum of n independent Unif(0,1) random
variables? Answer: Note that for r.v.s X1, X2, . . . , Xn,
P (min(X1, X2, . . . , Xn) ≥ a) = P (X1 ≥ a, X2 ≥ a, . . . , Xn ≥ a)
Similarly,
P (max(X1, X2, . . . , Xn) ≤ a) = P (X1 ≤ a, X2 ≤ a, . . . , Xn ≤ a)
We will use this principle to find the CDF of U(n), where
U(n) = max(U1, U2, . . . , Un) and Ui ∼ Unif(0, 1) are i.i.d.
P (max(U1, U2, . . . , Un) ≤ a) = P (U1 ≤ a, U2 ≤ a, . . . , Un ≤ a)
= P (U1 ≤ a)P (U2 ≤ a) . . . P (Un ≤ a)
= a^n
for 0 < a < 1 (and the CDF is 0 for a ≤ 0 and 1 for a ≥ 1).
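A quick R check of this CDF by simulation; n = 5 and a = 0.8 are arbitrary choices.

set.seed(9)
n <- 5
maxes <- replicate(1e4, max(runif(n)))
a <- 0.8
c(mean(maxes <= a), a^n)                   # both close to 0.8^5 = 0.32768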
Pattern-matching with e^x Taylor series
For X ∼ Pois(λ), find E(1/(X + 1)). Answer: By LOTUS,
E(1/(X + 1)) = Σ_{k=0}^{∞} (1/(k + 1)) e^{−λ} λ^k/k! = (e^{−λ}/λ) Σ_{k=0}^{∞} λ^{k+1}/(k + 1)! = (e^{−λ}/λ)(e^λ − 1)
Adam’s Law and Eve’s Law
William really likes speedsolving Rubik’s Cubes. But he’s pretty bad
at it, so sometimes he fails. On any given day, William will attempt
N ∼ Geom(s) Rubik’s Cubes. Suppose each time, he has probability p
of solving the cube, independently. Let T be the number of Rubik’s
Cubes he solves during a day. Find the mean and variance of T .
Answer: Note that T|N ∼ Bin(N, p). So by Adam's Law,
E(T) = E(E(T|N)) = E(Np) = p(1 − s)/s
Similarly, by Eve's Law, we have that
Var(T) = E(Var(T|N)) + Var(E(T|N)) = E(Np(1 − p)) + Var(Np)
= p(1 − p)(1 − s)/s + p^2(1 − s)/s^2 = p(1 − s)(p + s(1 − p))/s^2
MGF – Finding Moments
Find E(X^3) for X ∼ Expo(λ) using the MGF of X. Answer: The MGF of an Expo(λ) is M(t) = λ/(λ − t). To get the third moment, we can take the third derivative of the MGF and evaluate at t = 0:
E(X^3) = 6/λ^3
But a much nicer way to use the MGF here is via pattern recognition:
note that M(t) looks like it came from a geometric series:
1/(1 − t/λ) = Σ_{n=0}^{∞} (t/λ)^n = Σ_{n=0}^{∞} (n!/λ^n)(t^n/n!)
The coefficient of t^n/n! here is the nth moment of X, so we have E(X^n) = n!/λ^n for all nonnegative integers n.
Markov chains (1)
Suppose Xn is a two-state Markov chain on states {0, 1} with transition matrix
Q = [ 1 − α    α
      β        1 − β ]
where row i gives the transition probabilities out of state i.
Find the stationary distribution s⃗ = (s0, s1) of Xn by solving s⃗Q = s⃗, and show that the chain is reversible with respect to s⃗. Answer: The equation s⃗Q = s⃗ says that
s0 = s0(1 − α) + s1β and s1 = s0α + s1(1 − β)
By solving this system of linear equations, we have
s⃗ = (β/(α + β), α/(α + β))
To show that the chain is reversible with respect to s⃗, we must show siqij = sjqji for all i, j. This is done if we can show s0q01 = s1q10. And indeed,
s0q01 = αβ/(α + β) = s1q10
Markov chains (2)
William and Sebastian play a modified game of Settlers of Catan,
where every turn they randomly move the robber (which starts on the
center tile) to one of the adjacent hexagons.
[Figure: hexagonal Catan board with the robber on the center tile.]
(a) Is this Markov chain irreducible? Is it aperiodic? Answer:
Yes to both. The Markov chain is irreducible because it can
get from anywhere to anywhere else. The Markov chain is
aperiodic because the robber can return back to a square in
2, 3, 4, 5, . . . moves, and the GCD of those numbers is 1.
(b) What is the stationary distribution of this Markov chain?
Answer: Since this is a random walk on an undirected graph,
the stationary distribution is proportional to the degree
sequence. The degree for the corner pieces is 3, the degree for
the edge pieces is 4, and the degree for the center pieces is 6.
To normalize this degree sequence, we divide by its sum. The
sum of the degrees is 6(3) + 6(4) + 7(6) = 84. Thus the
stationary probability of being on a corner is 3/84 = 1/28, on
an edge is 4/84 = 1/21, and in the center is 6/84 = 1/14.
(c) What fraction of the time will the robber be in the center tile
in this game, in the long run? Answer: By the above, 1/14 .
(d) What is the expected amount of moves it will take for the
robber to return to the center tile? Answer: Since this chain is
irreducible and aperiodic, to get the expected time to return we
can just invert the stationary probability. Thus on average it
will take 14 turns for the robber to return to the center tile.
Problem-Solving Strategies
Contributions from Jessy Hwang, Yuan Jiang, Yuqi Hou
1. Getting started. Start by defining relevant events and
random variables. (“Let A be the event that I pick the fair
coin”; “Let X be the number of successes.”) Clear notation is
important for clear thinking! Then decide what it is that you’re
supposed to be finding, in terms of your notation (“I want to
find P (X = 3|A)”). Think about what type of object your
answer should be (a number? A random variable? A PMF? A
PDF?) and what it should be in terms of.
Try simple and extreme cases. To make an abstract experiment
more concrete, try drawing a picture or making up numbers
that could have happened. Pattern recognition: does the
structure of the problem resemble something we’ve seen before?
2. Calculating probability of an event. Use counting
principles if the naive definition of probability applies. Is the
probability of the complement easier to find? Look for
symmetries. Look for something to condition on, then apply
Bayes’ Rule or the Law of Total Probability.
3. Finding the distribution of a random variable. First make
sure you need the full distribution not just the mean (see next
item). Check the support of the random variable: what values
can it take on? Use this to rule out distributions that don’t fit.
Is there a story for one of the named distributions that fits the
problem at hand? Can you write the random variable as a
function of an r.v. with a known distribution, say Y = g(X)?
4. Calculating expectation. If it has a named distribution,
check out the table of distributions. If it’s a function of an r.v.
with a named distribution, try LOTUS. If it’s a count of
something, try breaking it up into indicator r.v.s. If you can
condition on something natural, consider using Adam’s law.
5. Calculating variance. Consider independence, named
distributions, and LOTUS. If it’s a count of something, break it
up into a sum of indicator r.v.s. If it’s a sum, use properties of
covariance. If you can condition on something natural, consider
using Eve’s Law.
6. Calculating E(X^2). Do you already know E(X) or Var(X)? Recall that Var(X) = E(X^2) − (E(X))^2. Otherwise try LOTUS.
7. Calculating covariance. Use the properties of covariance. If
you’re trying to find the covariance between two components of
a Multinomial distribution, Xi, Xj, then the covariance is −npipj for i ≠ j.
8. Symmetry. If X1, . . . , Xn are i.i.d., consider using symmetry.
9. Calculating probabilities of orderings. Remember that all
n! orderings of i.i.d. continuous random variables X1, . . . , Xn
are equally likely.
10. Determining independence. There are several equivalent
definitions. Think about simple and extreme cases to see if you
can find a counterexample.
11. Do a painful integral. If your integral looks painful, see if you can write it in terms of a known PDF (like Gamma or Beta), and use the fact that PDFs integrate to 1.
12. Before moving on. Check some simple and extreme cases,
check whether the answer seems plausible, check for biohazards.
Biohazards
Contributions from Jessy Hwang
1. Don’t misuse the naive definition of probability. When
answering “What is the probability that in a group of 3 people,
no two have the same birth month?”, it is not correct to treat
the people as indistinguishable balls being placed into 12 boxes,
since that assumes the list of birth months {January, January,
January} is just as likely as the list {January, April, June},
even though the latter is six times more likely.
2. Don’t confuse unconditional, conditional, and joint
probabilities. In applying P(A|B) = P(B|A)P(A)/P(B), it is not correct to say "P(B) = 1 because we know B happened"; P(B) is the prior probability of B. Don't confuse P(A|B) with P(A, B).
3. Don’t assume independence without justification. In the
matching problem, the probability that card 1 is a match and
card 2 is a match is not 1/n^2. Binomial and Hypergeometric
are often confused; the trials are independent in the Binomial
story and dependent in the Hypergeometric story.
4. Don’t forget to do sanity checks. Probabilities must be
between 0 and 1. Variances must be ≥ 0. Supports must make
sense. PMFs must sum to 1. PDFs must integrate to 1.
5. Don’t confuse random variables, numbers, and events.
Let X be an r.v. Then g(X) is an r.v. for any function g. In
particular, X2
, |X|, F (X), and IX3 are r.v.s.
P (X2
 X|X ≥ 0), E(X), Var(X), and g(E(X)) are numbers.
X = 2 and F (X) ≥ −1 are events. It does not make sense to
write
R ∞
−∞
F (X)dx, because F (X) is a random variable. It does
not make sense to write P (X), because X is not an event.
6. Don’t confuse a random variable with its distribution.
To get the PDF of X^2, you can't just square the PDF of X.
The right way is to use transformations. To get the PDF of
X + Y , you can’t just add the PDF of X and the PDF of Y .
The right way is to compute the convolution.
7. Don’t pull non-linear functions out of expectations.
E(g(X)) does not equal g(E(X)) in general. The St.
Petersburg paradox is an extreme example. See also Jensen’s
inequality. The right way to find E(g(X)) is with LOTUS.
Distributions in R
Command What it does
help(distributions) shows documentation on distributions
dbinom(k,n,p) PMF P (X = k) for X ∼ Bin(n, p)
pbinom(x,n,p) CDF P (X ≤ x) for X ∼ Bin(n, p)
qbinom(a,n,p) ath quantile for X ∼ Bin(n, p)
rbinom(r,n,p) vector of r i.i.d. Bin(n, p) r.v.s
dgeom(k,p) PMF P (X = k) for X ∼ Geom(p)
dhyper(k,w,b,n) PMF P (X = k) for X ∼ HGeom(w, b, n)
dnbinom(k,r,p) PMF P (X = k) for X ∼ NBin(r, p)
dpois(k,r) PMF P (X = k) for X ∼ Pois(r)
dbeta(x,a,b) PDF f(x) for X ∼ Beta(a, b)
dchisq(x,n) PDF f(x) for X ∼ χ^2_n
dexp(x,b) PDF f(x) for X ∼ Expo(b)
dgamma(x,a,r) PDF f(x) for X ∼ Gamma(a, r)
dlnorm(x,m,s) PDF f(x) for X ∼ LN(m, s^2)
dnorm(x,m,s) PDF f(x) for X ∼ N(m, s^2)
dt(x,n) PDF f(x) for X ∼ t_n
dunif(x,a,b) PDF f(x) for X ∼ Unif(a, b)
The table above gives R commands for working with various named
distributions. Commands analogous to pbinom, qbinom, and rbinom
work for the other distributions in the table. For example, pnorm,
qnorm, and rnorm can be used to get the CDF, quantiles, and random
generation for the Normal. For the Multinomial, dmultinom can be used
for calculating the joint PMF and rmultinom can be used for generating
random vectors. For the Multivariate Normal, after installing and
loading the mvtnorm package dmvnorm can be used for calculating the
joint PDF and rmvnorm can be used for generating random vectors.
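For example, the d/p/q/r pattern for a named distribution looks like this (a short sketch; Bin(10, 1/2) is an arbitrary choice):

dbinom(3, size = 10, prob = 0.5)    # PMF:  P(X = 3)
pbinom(3, size = 10, prob = 0.5)    # CDF:  P(X <= 3)
qbinom(0.5, size = 10, prob = 0.5)  # quantile: median of Bin(10, 1/2)
rbinom(5, size = 10, prob = 0.5)    # 5 i.i.d. Bin(10, 1/2) draws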
Recommended Resources
• Introduction to Probability Book (http://bit.ly/introprobability)
• Stat 110 Online (http://stat110.net)
• Stat 110 Quora Blog (https://stat110.quora.com/)
• Quora Probability FAQ (http://bit.ly/probabilityfaq)
• R Studio (https://www.rstudio.com)
• LaTeX File (github.com/wzchen/probability_cheatsheet)
Please share this cheatsheet with friends!
http://wzchen.com/probability-cheatsheet
Table of Distributions
Each entry gives the PMF/PDF and support, expected value, variance, and MGF.

Bernoulli Bern(p): P(X = 1) = p, P(X = 0) = q = 1 − p. Mean p. Variance pq. MGF q + pe^t.

Binomial Bin(n, p): P(X = k) = (n choose k) p^k q^{n−k}, k ∈ {0, 1, 2, . . . , n}. Mean np. Variance npq. MGF (q + pe^t)^n.

Geometric Geom(p): P(X = k) = q^k p, k ∈ {0, 1, 2, . . . }. Mean q/p. Variance q/p^2. MGF p/(1 − qe^t), for qe^t < 1.

Negative Binomial NBin(r, p): P(X = n) = (r + n − 1 choose r − 1) p^r q^n, n ∈ {0, 1, 2, . . . }. Mean rq/p. Variance rq/p^2. MGF (p/(1 − qe^t))^r, for qe^t < 1.

Hypergeometric HGeom(w, b, n): P(X = k) = (w choose k)(b choose n − k)/(w + b choose n), k ∈ {0, 1, 2, . . . , n}. Mean µ = nw/(b + w). Variance ((w + b − n)/(w + b − 1)) n (µ/n)(1 − µ/n). MGF messy.

Poisson Pois(λ): P(X = k) = e^{−λ} λ^k/k!, k ∈ {0, 1, 2, . . . }. Mean λ. Variance λ. MGF e^{λ(e^t − 1)}.

Uniform Unif(a, b): f(x) = 1/(b − a), x ∈ (a, b). Mean (a + b)/2. Variance (b − a)^2/12. MGF (e^{tb} − e^{ta})/(t(b − a)).

Normal N(µ, σ^2): f(x) = (1/(σ√(2π))) e^{−(x − µ)^2/(2σ^2)}, x ∈ (−∞, ∞). Mean µ. Variance σ^2. MGF e^{tµ + σ^2 t^2/2}.

Exponential Expo(λ): f(x) = λe^{−λx}, x ∈ (0, ∞). Mean 1/λ. Variance 1/λ^2. MGF λ/(λ − t), for t < λ.

Gamma Gamma(a, λ): f(x) = (1/Γ(a)) (λx)^a e^{−λx} (1/x), x ∈ (0, ∞). Mean a/λ. Variance a/λ^2. MGF (λ/(λ − t))^a, for t < λ.

Beta Beta(a, b): f(x) = (Γ(a + b)/(Γ(a)Γ(b))) x^{a−1}(1 − x)^{b−1}, x ∈ (0, 1). Mean µ = a/(a + b). Variance µ(1 − µ)/(a + b + 1). MGF messy.

Log-Normal LN(µ, σ^2): f(x) = (1/(xσ√(2π))) e^{−(log x − µ)^2/(2σ^2)}, x ∈ (0, ∞). Mean θ = e^{µ + σ^2/2}. Variance θ^2(e^{σ^2} − 1). MGF doesn't exist.

Chi-Square χ^2_n: f(x) = (1/(2^{n/2}Γ(n/2))) x^{n/2−1} e^{−x/2}, x ∈ (0, ∞). Mean n. Variance 2n. MGF (1 − 2t)^{−n/2}, for t < 1/2.

Student-t t_n: f(x) = (Γ((n + 1)/2)/(√(nπ)Γ(n/2))) (1 + x^2/n)^{−(n+1)/2}, x ∈ (−∞, ∞). Mean 0 if n > 1. Variance n/(n − 2) if n > 2. MGF doesn't exist.
Probability Cheatsheet.pdf

  • 1. Probability Cheatsheet v2.0 Compiled by William Chen (http://wzchen.com) and Joe Blitzstein, with contributions from Sebastian Chiu, Yuan Jiang, Yuqi Hou, and Jessy Hwang. Material based on Joe Blitzstein’s (@stat110) lectures (http://stat110.net) and Blitzstein/Hwang’s Introduction to Probability textbook (http://bit.ly/introprobability). Licensed under CC BY-NC-SA 4.0. Please share comments, suggestions, and errors at http://github.com/wzchen/probability_cheatsheet. Last Updated September 4, 2015 Counting Multiplication Rule cake waffle S V C S V C S V C cake waffle cake waffle cake waffle Let’s say we have a compound experiment (an experiment with multiple components). If the 1st component has n1 possible outcomes, the 2nd component has n2 possible outcomes, . . . , and the rth component has nr possible outcomes, then overall there are n1n2 . . . nr possibilities for the whole experiment. Sampling Table 7 6 5 8 4 2 9 3 1 The sampling table gives the number of possible samples of size k out of a population of size n, under various assumptions about how the sample is collected. Order Matters Not Matter With Replacement n k n + k − 1 k Without Replacement n! (n − k)! n k Naive Definition of Probability If all outcomes are equally likely, the probability of an event A happening is: Pnaive(A) = number of outcomes favorable to A number of outcomes Thinking Conditionally Independence Independent Events A and B are independent if knowing whether A occurred gives no information about whether B occurred. More formally, A and B (which have nonzero probability) are independent if and only if one of the following equivalent statements holds: P (A ∩ B) = P (A)P (B) P (A|B) = P (A) P (B|A) = P (B) Conditional Independence A and B are conditionally independent given C if P (A ∩ B|C) = P (A|C)P (B|C). Conditional independence does not imply independence, and independence does not imply conditional independence. Unions, Intersections, and Complements De Morgan’s Laws A useful identity that can make calculating probabilities of unions easier by relating them to intersections, and vice versa. Analogous results hold with more than two sets. (A ∪ B) c = A c ∩ B c (A ∩ B) c = A c ∪ B c Joint, Marginal, and Conditional Joint Probability P (A ∩ B) or P (A, B) – Probability of A and B. Marginal (Unconditional) Probability P (A) – Probability of A. Conditional Probability P (A|B) = P (A, B)/P (B) – Probability of A, given that B occurred. Conditional Probability is Probability P (A|B) is a probability function for any fixed B. Any theorem that holds for probability also holds for conditional probability. Probability of an Intersection or Union Intersections via Conditioning P (A, B) = P (A)P (B|A) P (A, B, C) = P (A)P (B|A)P (C|A, B) Unions via Inclusion-Exclusion P (A ∪ B) = P (A) + P (B) − P (A ∩ B) P (A ∪ B ∪ C) = P (A) + P (B) + P (C) − P (A ∩ B) − P (A ∩ C) − P (B ∩ C) + P (A ∩ B ∩ C). Simpson’s Paradox Dr. Hibbert Dr. Nick heart band-aid It is possible to have P (A | B, C) P (A | B c , C) and P (A | B, C c ) P (A | B c , C c ) yet also P (A | B) P (A | B c ). Law of Total Probability (LOTP) Let B1, B2, B3, ...Bn be a partition of the sample space (i.e., they are disjoint and their union is the entire sample space). P (A) = P (A|B1)P (B1) + P (A|B2)P (B2) + · · · + P (A|Bn)P (Bn) P (A) = P (A ∩ B1) + P (A ∩ B2) + · · · + P (A ∩ Bn) For LOTP with extra conditioning, just add in another event C! 
P (A|C) = P (A|B1, C)P (B1|C) + · · · + P (A|Bn, C)P (Bn|C) P (A|C) = P (A ∩ B1|C) + P (A ∩ B2|C) + · · · + P (A ∩ Bn|C) Special case of LOTP with B and Bc as partition: P (A) = P (A|B)P (B) + P (A|B c )P (B c ) P (A) = P (A ∩ B) + P (A ∩ B c ) Bayes’ Rule Bayes’ Rule, and with extra conditioning (just add in C!) P (A|B) = P (B|A)P (A) P (B) P (A|B, C) = P (B|A, C)P (A|C) P (B|C) We can also write P (A|B, C) = P (A, B, C) P (B, C) = P (B, C|A)P (A) P (B, C) Odds Form of Bayes’ Rule P (A|B) P (Ac|B) = P (B|A) P (B|Ac) P (A) P (Ac) The posterior odds of A are the likelihood ratio times the prior odds. Random Variables and their Distributions PMF, CDF, and Independence Probability Mass Function (PMF) Gives the probability that a discrete random variable takes on the value x. pX (x) = P (X = x) 0 1 2 3 4 0.0 0.2 0.4 0.6 0.8 1.0 x pmf ● ● ● ● ● The PMF satisfies pX (x) ≥ 0 and X x pX (x) = 1
  • 2. Cumulative Distribution Function (CDF) Gives the probability that a random variable is less than or equal to x. FX (x) = P (X ≤ x) 0 1 2 3 4 0.0 0.2 0.4 0.6 0.8 1.0 x cdf ● ● ● ● ● ● ● ● ● ● The CDF is an increasing, right-continuous function with FX (x) → 0 as x → −∞ and FX (x) → 1 as x → ∞ Independence Intuitively, two random variables are independent if knowing the value of one gives no information about the other. Discrete r.v.s X and Y are independent if for all values of x and y P (X = x, Y = y) = P (X = x)P (Y = y) Expected Value and Indicators Expected Value and Linearity Expected Value (a.k.a. mean, expectation, or average) is a weighted average of the possible outcomes of our random variable. Mathematically, if x1, x2, x3, . . . are all of the distinct possible values that X can take, the expected value of X is E(X) = P i xiP (X = xi) X 3 2 6 10 1 1 5 4 ... Y 4 2 8 23 –3 0 9 1 ... X + Y 7 4 14 33 –2 1 14 5 ... ∑ xi ∑ yi + ∑ (xi + yi) = E(X) E(Y) + E(X + Y) = i=1 n i=1 n i=1 n n 1 n 1 n 1 Linearity For any r.v.s X and Y , and constants a, b, c, E(aX + bY + c) = aE(X) + bE(Y ) + c Same distribution implies same mean If X and Y have the same distribution, then E(X) = E(Y ) and, more generally, E(g(X)) = E(g(Y )) Conditional Expected Value is defined like expectation, only conditioned on any event A. E(X|A) = P x xP (X = x|A) Indicator Random Variables Indicator Random Variable is a random variable that takes on the value 1 or 0. It is always an indicator of some event: if the event occurs, the indicator is 1; otherwise it is 0. They are useful for many problems about counting how many events of some kind occur. Write IA = ( 1 if A occurs, 0 if A does not occur. Note that I2 A = IA, IAIB = IA∩B, and IA∪B = IA + IB − IAIB. Distribution IA ∼ Bern(p) where p = P (A). Fundamental Bridge The expectation of the indicator for event A is the probability of event A: E(IA) = P (A). Variance and Standard Deviation Var(X) = E (X − E(X)) 2 = E(X 2 ) − (E(X)) 2 SD(X) = q Var(X) Continuous RVs, LOTUS, UoU Continuous Random Variables (CRVs) What’s the probability that a CRV is in an interval? Take the difference in CDF values (or use the PDF as described later). P (a ≤ X ≤ b) = P (X ≤ b) − P (X ≤ a) = FX (b) − FX (a) For X ∼ N (µ, σ2 ), this becomes P (a ≤ X ≤ b) = Φ b − µ σ − Φ a − µ σ What is the Probability Density Function (PDF)? The PDF f is the derivative of the CDF F . F 0 (x) = f(x) A PDF is nonnegative and integrates to 1. By the fundamental theorem of calculus, to get from PDF back to CDF we can integrate: F (x) = Z x −∞ f(t)dt −4 −2 0 2 4 0.00 0.10 0.20 0.30 x PDF −4 −2 0 2 4 0.0 0.2 0.4 0.6 0.8 1.0 x CDF To find the probability that a CRV takes on a value in an interval, integrate the PDF over that interval. F (b) − F (a) = Z b a f(x)dx How do I find the expected value of a CRV? Analogous to the discrete case, where you sum x times the PMF, for CRVs you integrate x times the PDF. E(X) = Z ∞ −∞ xf(x)dx LOTUS Expected value of a function of an r.v. The expected value of X is defined this way: E(X) = X x xP (X = x) (for discrete X) E(X) = Z ∞ −∞ xf(x)dx (for continuous X) The Law of the Unconscious Statistician (LOTUS) states that you can find the expected value of a function of a random variable, g(X), in a similar way, by replacing the x in front of the PMF/PDF by g(x) but still working with the PMF/PDF of X: E(g(X)) = X x g(x)P (X = x) (for discrete X) E(g(X)) = Z ∞ −∞ g(x)f(x)dx (for continuous X) What’s a function of a random variable? 
A function of a random variable is also a random variable. For example, if X is the number of bikes you see in an hour, then g(X) = 2X is the number of bike wheels you see in that hour and h(X) = X 2 = X(X−1) 2 is the number of pairs of bikes such that you see both of those bikes in that hour. What’s the point? You don’t need to know the PMF/PDF of g(X) to find its expected value. All you need is the PMF/PDF of X. Universality of Uniform (UoU) When you plug any CRV into its own CDF, you get a Uniform(0,1) random variable. When you plug a Uniform(0,1) r.v. into an inverse CDF, you get an r.v. with that CDF. For example, let’s say that a random variable X has CDF F (x) = 1 − e −x , for x 0 By UoU, if we plug X into this function then we get a uniformly distributed random variable. F (X) = 1 − e −X ∼ Unif(0, 1) Similarly, if U ∼ Unif(0, 1) then F −1 (U) has CDF F . The key point is that for any continuous random variable X, we can transform it into a Uniform random variable and back by using its CDF. Moments and MGFs Moments Moments describe the shape of a distribution. Let X have mean µ and standard deviation σ, and Z = (X − µ)/σ be the standardized version of X. The kth moment of X is µk = E(Xk ) and the kth standardized moment of X is mk = E(Zk ). The mean, variance, skewness, and kurtosis are important summaries of the shape of a distribution. Mean E(X) = µ1 Variance Var(X) = µ2 − µ2 1 Skewness Skew(X) = m3 Kurtosis Kurt(X) = m4 − 3
  • 3. Moment Generating Functions MGF For any random variable X, the function MX (t) = E(e tX ) is the moment generating function (MGF) of X, if it exists for all t in some open interval containing 0. The variable t could just as well have been called u or v. It’s a bookkeeping device that lets us work with the function MX rather than the sequence of moments. Why is it called the Moment Generating Function? Because the kth derivative of the moment generating function, evaluated at 0, is the kth moment of X. µk = E(X k ) = M (k) X (0) This is true by Taylor expansion of etX since MX (t) = E(e tX ) = ∞ X k=0 E(Xk )tk k! = ∞ X k=0 µktk k! MGF of linear functions If we have Y = aX + b, then MY (t) = E(e t(aX+b) ) = e bt E(e (at)X ) = e bt MX (at) Uniqueness If it exists, the MGF uniquely determines the distribution. This means that for any two random variables X and Y , they are distributed the same (their PMFs/PDFs are equal) if and only if their MGFs are equal. Summing Independent RVs by Multiplying MGFs. If X and Y are independent, then MX+Y (t) = E(e t(X+Y ) ) = E(e tX )E(e tY ) = MX (t) · MY (t) The MGF of the sum of two random variables is the product of the MGFs of those two random variables. Joint PDFs and CDFs Joint Distributions The joint CDF of X and Y is F (x, y) = P (X ≤ x, Y ≤ y) In the discrete case, X and Y have a joint PMF pX,Y (x, y) = P (X = x, Y = y). In the continuous case, they have a joint PDF fX,Y (x, y) = ∂2 ∂x∂y FX,Y (x, y). The joint PMF/PDF must be nonnegative and sum/integrate to 1. Conditional Distributions Conditioning and Bayes’ rule for discrete r.v.s P (Y = y|X = x) = P (X = x, Y = y) P (X = x) = P (X = x|Y = y)P (Y = y) P (X = x) Conditioning and Bayes’ rule for continuous r.v.s fY |X (y|x) = fX,Y (x, y) fX (x) = fX|Y (x|y)fY (y) fX (x) Hybrid Bayes’ rule fX (x|A) = P (A|X = x)fX (x) P (A) Marginal Distributions To find the distribution of one (or more) random variables from a joint PMF/PDF, sum/integrate over the unwanted random variables. Marginal PMF from joint PMF P (X = x) = X y P (X = x, Y = y) Marginal PDF from joint PDF fX (x) = Z ∞ −∞ fX,Y (x, y)dy Independence of Random Variables Random variables X and Y are independent if and only if any of the following conditions holds: • Joint CDF is the product of the marginal CDFs • Joint PMF/PDF is the product of the marginal PMFs/PDFs • Conditional distribution of Y given X is the marginal distribution of Y Write X ⊥ ⊥ Y to denote that X and Y are independent. Multivariate LOTUS LOTUS in more than one dimension is analogous to the 1D LOTUS. For discrete random variables: E(g(X, Y )) = X x X y g(x, y)P (X = x, Y = y) For continuous random variables: E(g(X, Y )) = Z ∞ −∞ Z ∞ −∞ g(x, y)fX,Y (x, y)dxdy Covariance and Transformations Covariance and Correlation Covariance is the analog of variance for two random variables. Cov(X, Y ) = E ((X − E(X))(Y − E(Y ))) = E(XY ) − E(X)E(Y ) Note that Cov(X, X) = E(X 2 ) − (E(X)) 2 = Var(X) Correlation is a standardized version of covariance that is always between −1 and 1. Corr(X, Y ) = Cov(X, Y ) p Var(X)Var(Y ) Covariance and Independence If two random variables are independent, then they are uncorrelated. The converse is not necessarily true (e.g., consider X ∼ N (0, 1) and Y = X2 ). 
X ⊥ ⊥ Y −→ Cov(X, Y ) = 0 −→ E(XY ) = E(X)E(Y ) Covariance and Variance The variance of a sum can be found by Var(X + Y ) = Var(X) + Var(Y ) + 2Cov(X, Y ) Var(X1 + X2 + · · · + Xn) = n X i=1 Var(Xi) + 2 X ij Cov(Xi, Xj ) If X and Y are independent then they have covariance 0, so X ⊥ ⊥ Y =⇒ Var(X + Y ) = Var(X) + Var(Y ) If X1, X2, . . . , Xn are identically distributed and have the same covariance relationships (often by symmetry), then Var(X1 + X2 + · · · + Xn) = nVar(X1) + 2 n 2 Cov(X1, X2) Covariance Properties For random variables W, X, Y, Z and constants a, b: Cov(X, Y ) = Cov(Y, X) Cov(X + a, Y + b) = Cov(X, Y ) Cov(aX, bY ) = abCov(X, Y ) Cov(W + X, Y + Z) = Cov(W, Y ) + Cov(W, Z) + Cov(X, Y ) + Cov(X, Z) Correlation is location-invariant and scale-invariant For any constants a, b, c, d with a and c nonzero, Corr(aX + b, cY + d) = Corr(X, Y ) Transformations One Variable Transformations Let’s say that we have a random variable X with PDF fX (x), but we are also interested in some function of X. We call this function Y = g(X). Also let y = g(x). If g is differentiable and strictly increasing (or strictly decreasing), then the PDF of Y is fY (y) = fX (x) dx dy = fX (g −1 (y)) d dy g −1 (y) The derivative of the inverse transformation is called the Jacobian. Two Variable Transformations Similarly, let’s say we know the joint PDF of U and V but are also interested in the random vector (X, Y ) defined by (X, Y ) = g(U, V ). Let ∂(u, v) ∂(x, y) = ∂u ∂x ∂u ∂y ∂v ∂x ∂v ∂y ! be the Jacobian matrix. If the entries in this matrix exist and are continuous, and the determinant of the matrix is never 0, then fX,Y (x, y) = fU,V (u, v) ∂(u, v) ∂(x, y) The inner bars tells us to take the matrix’s determinant, and the outer bars tell us to take the absolute value. In a 2 × 2 matrix, a b c d = |ad − bc| Convolutions Convolution Integral If you want to find the PDF of the sum of two independent CRVs X and Y , you can do the following integral: fX+Y (t) = Z ∞ −∞ fX (x)fY (t − x)dx Example Let X, Y ∼ N (0, 1) be i.i.d. Then for each fixed t, fX+Y (t) = Z ∞ −∞ 1 √ 2π e −x2/2 1 √ 2π e −(t−x)2/2 dx By completing the square and using the fact that a Normal PDF integrates to 1, this works out to fX+Y (t) being the N (0, 2) PDF.
  • 4. Poisson Process Definition We have a Poisson process of rate λ arrivals per unit time if the following conditions hold: 1. The number of arrivals in a time interval of length t is Pois(λt). 2. Numbers of arrivals in disjoint time intervals are independent. For example, the numbers of arrivals in the time intervals [0, 5], (5, 12), and [13, 23) are independent with Pois(5λ), Pois(7λ), Pois(10λ) distributions, respectively. 0 T1 T2 T3 T4 T5 + + + + + Count-Time Duality Consider a Poisson process of emails arriving in an inbox at rate λ emails per hour. Let Tn be the time of arrival of the nth email (relative to some starting time 0) and Nt be the number of emails that arrive in [0, t]. Let’s find the distribution of T1. The event T1 t, the event that you have to wait more than t hours to get the first email, is the same as the event Nt = 0, which is the event that there are no emails in the first t hours. So P (T1 t) = P (Nt = 0) = e −λt −→ P (T1 ≤ t) = 1 − e −λt Thus we have T1 ∼ Expo(λ). By the memoryless property and similar reasoning, the interarrival times between emails are i.i.d. Expo(λ), i.e., the differences Tn − Tn−1 are i.i.d. Expo(λ). Order Statistics Definition Let’s say you have n i.i.d. r.v.s X1, X2, . . . , Xn. If you arrange them from smallest to largest, the ith element in that list is the ith order statistic, denoted X(i). So X(1) is the smallest in the list and X(n) is the largest in the list. Note that the order statistics are dependent, e.g., learning X(4) = 42 gives us the information that X(1), X(2), X(3) are ≤ 42 and X(5), X(6), . . . , X(n) are ≥ 42. Distribution Taking n i.i.d. random variables X1, X2, . . . , Xn with CDF F (x) and PDF f(x), the CDF and PDF of X(i) are: FX(i) (x) = P (X(i) ≤ x) = n X k=i n k F (x) k (1 − F (x)) n−k fX(i) (x) = n n − 1 i − 1 F (x) i−1 (1 − F (x)) n−i f(x) Uniform Order Statistics The jth order statistic of i.i.d. U1, . . . , Un ∼ Unif(0, 1) is U(j) ∼ Beta(j, n − j + 1). Conditional Expectation Conditioning on an Event We can find E(Y |A), the expected value of Y given that event A occurred. A very important case is when A is the event X = x. Note that E(Y |A) is a number. For example: • The expected value of a fair die roll, given that it is prime, is 1 3 · 2 + 1 3 · 3 + 1 3 · 5 = 10 3 . • Let Y be the number of successes in 10 independent Bernoulli trials with probability p of success. Let A be the event that the first 3 trials are all successes. Then E(Y |A) = 3 + 7p since the number of successes among the last 7 trials is Bin(7, p). • Let T ∼ Expo(1/10) be how long you have to wait until the shuttle comes. Given that you have already waited t minutes, the expected additional waiting time is 10 more minutes, by the memoryless property. That is, E(T |T t) = t + 10. Discrete Y Continuous Y E(Y ) = P y yP (Y = y) E(Y ) = R ∞ −∞ yfY (y)dy E(Y |A) = P y yP (Y = y|A) E(Y |A) = R ∞ −∞ yf(y|A)dy Conditioning on a Random Variable We can also find E(Y |X), the expected value of Y given the random variable X. This is a function of the random variable X. It is not a number except in certain special cases such as if X ⊥ ⊥ Y . To find E(Y |X), find E(Y |X = x) and then plug in X for x. For example: • If E(Y |X = x) = x3 + 5x, then E(Y |X) = X3 + 5X. • Let Y be the number of successes in 10 independent Bernoulli trials with probability p of success and X be the number of successes among the first 3 trials. Then E(Y |X) = X + 7p. • Let X ∼ N (0, 1) and Y = X2 . Then E(Y |X = x) = x2 since if we know X = x then we know Y = x2 . 
And E(X|Y = y) = 0 since if we know Y = y then we know X = ± √ y, with equal probabilities (by symmetry). So E(Y |X) = X2 , E(X|Y ) = 0. Properties of Conditional Expectation 1. E(Y |X) = E(Y ) if X ⊥ ⊥ Y 2. E(h(X)W |X) = h(X)E(W |X) (taking out what’s known) In particular, E(h(X)|X) = h(X). 3. E(E(Y |X)) = E(Y ) (Adam’s Law, a.k.a. Law of Total Expectation) Adam’s Law (a.k.a. Law of Total Expectation) can also be written in a way that looks analogous to LOTP. For any events A1, A2, . . . , An that partition the sample space, E(Y ) = E(Y |A1)P (A1) + · · · + E(Y |An)P (An) For the special case where the partition is A, Ac , this says E(Y ) = E(Y |A)P (A) + E(Y |A c )P (A c ) Eve’s Law (a.k.a. Law of Total Variance) Var(Y ) = E(Var(Y |X)) + Var(E(Y |X)) MVN, LLN, CLT Law of Large Numbers (LLN) Let X1, X2, X3 . . . be i.i.d. with mean µ. The sample mean is X̄n = X1 + X2 + X3 + · · · + Xn n The Law of Large Numbers states that as n → ∞, X̄n → µ with probability 1. For example, in flips of a coin with probability p of Heads, let Xj be the indicator of the jth flip being Heads. Then LLN says the proportion of Heads converges to p (with probability 1). Central Limit Theorem (CLT) Approximation using CLT We use ˙ ∼ to denote is approximately distributed. We can use the Central Limit Theorem to approximate the distribution of a random variable Y = X1 + X2 + · · · + Xn that is a sum of n i.i.d. random variables Xi. Let E(Y ) = µY and Var(Y ) = σ2 Y . The CLT says Y ˙ ∼ N (µY , σ 2 Y ) If the Xi are i.i.d. with mean µX and variance σ2 X , then µY = nµX and σ2 Y = nσ2 X . For the sample mean X̄n, the CLT says X̄n = 1 n (X1 + X2 + · · · + Xn) ˙ ∼ N (µX , σ 2 X /n) Asymptotic Distributions using CLT We use D − → to denote converges in distribution to as n → ∞. The CLT says that if we standardize the sum X1 + · · · + Xn then the distribution of the sum converges to N (0, 1) as n → ∞: 1 σ √ n (X1 + · · · + Xn − nµX ) D − → N (0, 1) In other words, the CDF of the left-hand side goes to the standard Normal CDF, Φ. In terms of the sample mean, the CLT says √ n(X̄n − µX ) σX D − → N (0, 1) Markov Chains Definition 1 2 1 1/2 3 1/2 1/4 5/12 4 1/3 1/6 7/12 5 1/4 1/8 7/8 A Markov chain is a random walk in a state space, which we will assume is finite, say {1, 2, . . . , M}. We let Xt denote which element of the state space the walk is visiting at time t. The Markov chain is the sequence of random variables tracking where the walk is at all points in time, X0, X1, X2, . . . . By definition, a Markov chain must satisfy the Markov property, which says that if you want to predict where the chain will be at a future time, if we know the present state then the entire past history is irrelevant. Given the present, the past and future are conditionally independent. In symbols, P (Xn+1 = j|X0 = i0, X1 = i1, . . . , Xn = i) = P (Xn+1 = j|Xn = i) State Properties A state is either recurrent or transient. • If you start at a recurrent state, then you will always return back to that state at some point in the future. ♪You can check-out any time you like, but you can never leave. ♪ • Otherwise you are at a transient state. There is some positive probability that once you leave you will never return. ♪You don’t have to go home, but you can’t stay here. ♪ A state is either periodic or aperiodic. • If you start at a periodic state of period k, then the GCD of the possible numbers of steps it would take to return back is k 1. • Otherwise you are at an aperiodic state. 
The GCD of the possible numbers of steps it would take to return back is 1.
  • 5. Transition Matrix Let the state space be {1, 2, . . . , M}. The transition matrix Q is the M × M matrix where element qij is the probability that the chain goes from state i to state j in one step: qij = P (Xn+1 = j|Xn = i) To find the probability that the chain goes from state i to state j in exactly m steps, take the (i, j) element of Qm . q (m) ij = P (Xn+m = j|Xn = i) If X0 is distributed according to the row vector PMF ~ p, i.e., pj = P (X0 = j), then the PMF of Xn is ~ pQn . Chain Properties A chain is irreducible if you can get from anywhere to anywhere. If a chain (on a finite state space) is irreducible, then all of its states are recurrent. A chain is periodic if any of its states are periodic, and is aperiodic if none of its states are periodic. In an irreducible chain, all states have the same period. A chain is reversible with respect to ~ s if siqij = sj qji for all i, j. Examples of reversible chains include any chain with qij = qji, with ~ s = ( 1 M , 1 M , . . . , 1 M ), and random walk on an undirected network. Stationary Distribution Let us say that the vector ~ s = (s1, s2, . . . , sM ) be a PMF (written as a row vector). We will call ~ s the stationary distribution for the chain if ~ sQ = ~ s. As a consequence, if Xt has the stationary distribution, then all future Xt+1, Xt+2, . . . also have the stationary distribution. For irreducible, aperiodic chains, the stationary distribution exists, is unique, and si is the long-run probability of a chain being at state i. The expected number of steps to return to i starting from i is 1/si. To find the stationary distribution, you can solve the matrix equation (Q0 − I)~ s 0 = 0. The stationary distribution is uniform if the columns of Q sum to 1. Reversibility Condition Implies Stationarity If you have a PMF ~ s and a Markov chain with transition matrix Q, then siqij = sj qji for all states i, j implies that ~ s is stationary. Random Walk on an Undirected Network 4 3 1 2 5 If you have a collection of nodes, pairs of which can be connected by undirected edges, and a Markov chain is run by going from the current node to a uniformly random node that is connected to it by an edge, then this is a random walk on an undirected network. The stationary distribution of this chain is proportional to the degree sequence (this is the sequence of degrees, where the degree of a node is how many edges are attached to it). For example, the stationary distribution of random walk on the network shown above is proportional to (3, 3, 2, 4, 2), so it’s ( 3 14 , 3 14 , 3 14 , 4 14 , 2 14 ). Continuous Distributions Uniform Distribution Let us say that U is distributed Unif(a, b). We know the following: Properties of the Uniform For a Uniform distribution, the probability of a draw from any interval within the support is proportional to the length of the interval. See Universality of Uniform and Order Statistics for other properties. Example William throws darts really badly, so his darts are uniform over the whole room because they’re equally likely to appear anywhere. William’s darts have a Uniform distribution on the surface of the room. The Uniform is the only distribution where the probability of hitting in any specific region is proportional to the length/area/volume of that region, and where the density of occurrence in any one specific spot is constant throughout the whole support. Normal Distribution Let us say that X is distributed N (µ, σ2 ). 
We know the following: Central Limit Theorem The Normal distribution is ubiquitous because of the Central Limit Theorem, which states that the sample mean of i.i.d. r.v.s will approach a Normal distribution as the sample size grows, regardless of the initial distribution. Location-Scale Transformation Every time we shift a Normal r.v. (by adding a constant) or rescale a Normal (by multiplying by a constant), we change it to another Normal r.v. For any Normal X ∼ N (µ, σ2 ), we can transform it to the standard N (0, 1) by the following transformation: Z = X − µ σ ∼ N (0, 1) Standard Normal The Standard Normal, Z ∼ N (0, 1), has mean 0 and variance 1. Its CDF is denoted by Φ. Exponential Distribution Let us say that X is distributed Expo(λ). We know the following: Story You’re sitting on an open meadow right before the break of dawn, wishing that airplanes in the night sky were shooting stars, because you could really use a wish right now. You know that shooting stars come on average every 15 minutes, but a shooting star is not “due” to come just because you’ve waited so long. Your waiting time is memoryless; the additional time until the next shooting star comes does not depend on how long you’ve waited already. Example The waiting time until the next shooting star is distributed Expo(4) hours. Here λ = 4 is the rate parameter, since shooting stars arrive at a rate of 1 per 1/4 hour on average. The expected time until the next shooting star is 1/λ = 1/4 hour. Expos as a rescaled Expo(1) Y ∼ Expo(λ) → X = λY ∼ Expo(1) Memorylessness The Exponential Distribution is the only continuous memoryless distribution. The memoryless property says that for X ∼ Expo(λ) and any positive numbers s and t, P (X s + t|X s) = P (X t) Equivalently, X − a|(X a) ∼ Expo(λ) For example, a product with an Expo(λ) lifetime is always “as good as new” (it doesn’t experience wear and tear). Given that the product has survived a years, the additional time that it will last is still Expo(λ). Min of Expos If we have independent Xi ∼ Expo(λi), then min(X1, . . . , Xk) ∼ Expo(λ1 + λ2 + · · · + λk). Max of Expos If we have i.i.d. Xi ∼ Expo(λ), then max(X1, . . . , Xk) has the same distribution as Y1 + Y2 + · · · + Yk, where Yj ∼ Expo(jλ) and the Yj are independent. Gamma Distribution 0 5 10 15 20 Gamma(3, 1) x PDF 0.0 0.1 0.2 0 5 10 15 20 Gamma(3, 0.5) x PDF 0.00 0.05 0.10 0 5 10 15 20 Gamma(10, 1) x PDF 0.00 0.05 0.10 0 5 10 15 20 Gamma(5, 0.5) x PDF 0.00 0.05 0.10 Let us say that X is distributed Gamma(a, λ). We know the following: Story You sit waiting for shooting stars, where the waiting time for a star is distributed Expo(λ). You want to see n shooting stars before you go home. The total waiting time for the nth shooting star is Gamma(n, λ). Example You are at a bank, and there are 3 people ahead of you. The serving time for each person is Exponential with mean 2 minutes. Only one person at a time can be served. The distribution of your waiting time until it’s your turn to be served is Gamma(3, 1 2 ). Beta Distribution 0.0 0.2 0.4 0.6 0.8 1.0 0 1 2 3 4 5 Beta(0.5, 0.5) x PDF 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 Beta(2, 1) x PDF 0.0 0.2 0.4 0.6 0.8 1.0 Beta(2, 8) x PDF 0 1 2 3 0.0 0.2 0.4 0.6 0.8 1.0 0.0 0.5 1.0 1.5 2.0 2.5 Beta(5, 5) x PDF Conjugate Prior of the Binomial In the Bayesian approach to statistics, parameters are viewed as random variables, to reflect our uncertainty. The prior for a parameter is its distribution before observing data. 
The posterior is the distribution for the parameter after observing data. Beta is the conjugate prior of the Binomial because if you have a Beta-distributed prior on p in a Binomial, then the posterior distribution on p given the Binomial data is also Beta-distributed. Consider the following two-level model: X|p ∼ Bin(n, p) p ∼ Beta(a, b) Then after observing X = x, we get the posterior distribution p|(X = x) ∼ Beta(a + x, b + n − x) Order statistics of the Uniform See Order Statistics. Beta-Gamma relationship If X ∼ Gamma(a, λ), Y ∼ Gamma(b, λ), with X ⊥ ⊥ Y then
  • 6. • X X+Y ∼ Beta(a, b) • X + Y ⊥ ⊥ X X+Y This is known as the bank–post office result. χ2 (Chi-Square) Distribution Let us say that X is distributed χ2 n. We know the following: Story A Chi-Square(n) is the sum of the squares of n independent standard Normal r.v.s. Properties and Representations X is distributed as Z 2 1 + Z 2 2 + · · · + Z 2 n for i.i.d. Zi ∼ N (0, 1) X ∼ Gamma(n/2, 1/2) Discrete Distributions Distributions for four sampling schemes Replace No Replace Fixed # trials (n) Binomial HGeom (Bern if n = 1) Draw until r success NBin NHGeom (Geom if r = 1) Bernoulli Distribution The Bernoulli distribution is the simplest case of the Binomial distribution, where we only have one trial (n = 1). Let us say that X is distributed Bern(p). We know the following: Story A trial is performed with probability p of “success”, and X is the indicator of success: 1 means success, 0 means failure. Example Let X be the indicator of Heads for a fair coin toss. Then X ∼ Bern( 1 2 ). Also, 1 − X ∼ Bern( 1 2 ) is the indicator of Tails. Binomial Distribution 0 2 4 6 8 10 0.00 0.05 0.10 0.15 0.20 0.25 0.30 x pmf ● ● ● ● ● ● ● ● ● ● ● 0 2 4 6 8 10 Bin(10,1/2) Let us say that X is distributed Bin(n, p). We know the following: Story X is the number of “successes” that we will achieve in n independent trials, where each trial is either a success or a failure, each with the same probability p of success. We can also write X as a sum of multiple independent Bern(p) random variables. Let X ∼ Bin(n, p) and Xj ∼ Bern(p), where all of the Bernoullis are independent. Then X = X1 + X2 + X3 + · · · + Xn Example If Jeremy Lin makes 10 free throws and each one independently has a 3 4 chance of getting in, then the number of free throws he makes is distributed Bin(10, 3 4 ). Properties Let X ∼ Bin(n, p), Y ∼ Bin(m, p) with X ⊥ ⊥ Y . • Redefine success n − X ∼ Bin(n, 1 − p) • Sum X + Y ∼ Bin(n + m, p) • Conditional X|(X + Y = r) ∼ HGeom(n, m, r) • Binomial-Poisson Relationship Bin(n, p) is approximately Pois(λ) if p is small. • Binomial-Normal Relationship Bin(n, p) is approximately N (np, np(1 − p)) if n is large and p is not near 0 or 1. Geometric Distribution Let us say that X is distributed Geom(p). We know the following: Story X is the number of “failures” that we will achieve before we achieve our first success. Our successes have probability p. Example If each pokeball we throw has probability 1 10 to catch Mew, the number of failed pokeballs will be distributed Geom( 1 10 ). First Success Distribution Equivalent to the Geometric distribution, except that it includes the first success in the count. This is 1 more than the number of failures. If X ∼ FS(p) then E(X) = 1/p. Negative Binomial Distribution Let us say that X is distributed NBin(r, p). We know the following: Story X is the number of “failures” that we will have before we achieve our rth success. Our successes have probability p. Example Thundershock has 60% accuracy and can faint a wild Raticate in 3 hits. The number of misses before Pikachu faints Raticate with Thundershock is distributed NBin(3, 0.6). Hypergeometric Distribution Let us say that X is distributed HGeom(w, b, n). We know the following: Story In a population of w desired objects and b undesired objects, X is the number of “successes” we will have in a draw of n objects, without replacement. The draw of n objects is assumed to be a simple random sample (all sets of n objects are equally likely). Examples Here are some HGeom examples. 
• Let’s say that we have only b Weedles (failure) and w Pikachus (success) in Viridian Forest. We encounter n Pokemon in the forest, and X is the number of Pikachus in our encounters. • The number of Aces in a 5 card hand. • You have w white balls and b black balls, and you draw n balls. You will draw X white balls. • You have w white balls and b black balls, and you draw n balls without replacement. The number of white balls in your sample is HGeom(w, b, n); the number of black balls is HGeom(b, w, n). • Capture-recapture A forest has N elk, you capture n of them, tag them, and release them. Then you recapture a new sample of size m. How many tagged elk are now in the new sample? HGeom(n, N − n, m) Poisson Distribution Let us say that X is distributed Pois(λ). We know the following: Story There are rare events (low probability events) that occur many different ways (high possibilities of occurences) at an average rate of λ occurrences per unit space or time. The number of events that occur in that unit of space or time is X. Example A certain busy intersection has an average of 2 accidents per month. Since an accident is a low probability event that can happen many different ways, it is reasonable to model the number of accidents in a month at that intersection as Pois(2). Then the number of accidents that happen in two months at that intersection is distributed Pois(4). Properties Let X ∼ Pois(λ1) and Y ∼ Pois(λ2), with X ⊥ ⊥ Y . 1. Sum X + Y ∼ Pois(λ1 + λ2) 2. Conditional X|(X + Y = n) ∼ Bin n, λ1 λ1+λ2 3. Chicken-egg If there are Z ∼ Pois(λ) items and we randomly and independently “accept” each item with probability p, then the number of accepted items Z1 ∼ Pois(λp), and the number of rejected items Z2 ∼ Pois(λ(1 − p)), and Z1 ⊥ ⊥ Z2. Multivariate Distributions Multinomial Distribution Let us say that the vector ~ X = (X1, X2, X3, . . . , Xk) ∼ Multk(n, ~ p) where ~ p = (p1, p2, . . . , pk). Story We have n items, which can fall into any one of the k buckets independently with the probabilities ~ p = (p1, p2, . . . , pk). Example Let us assume that every year, 100 students in the Harry Potter Universe are randomly and independently sorted into one of four houses with equal probability. The number of people in each of the houses is distributed Mult4(100, ~ p), where ~ p = (0.25, 0.25, 0.25, 0.25). Note that X1 + X2 + · · · + X4 = 100, and they are dependent. Joint PMF For n = n1 + n2 + · · · + nk, P ( ~ X = ~ n) = n! n1!n2! . . . nk! p n1 1 p n2 2 . . . p nk k Marginal PMF, Lumping, and Conditionals Marginally, Xi ∼ Bin(n, pi) since we can define “success” to mean category i. If you lump together multiple categories in a Multinomial, then it is still Multinomial. For example, Xi + Xj ∼ Bin(n, pi + pj ) for i 6= j since we can define “success” to mean being in category i or j. Similarly, if k = 6 and we lump categories 1-2 and lump categories 3-5, then (X1 + X2, X3 + X4 + X5, X6) ∼ Mult3(n, (p1 + p2, p3 + p4 + p5, p6)) Conditioning on some Xj also still gives a Multinomial: X1, . . . , Xk−1|Xk = nk ∼ Multk−1 n − nk, p1 1 − pk , . . . , pk−1 1 − pk Variances and Covariances We have Xi ∼ Bin(n, pi) marginally, so Var(Xi) = npi(1 − pi). Also, Cov(Xi, Xj ) = −npipj for i 6= j. Multivariate Uniform Distribution See the univariate Uniform for stories and examples. For the 2D Uniform on some region, probability is proportional to area. Every point in the support has equal density, of value 1 area of region . For the 3D Uniform, probability is proportional to volume. 
Multivariate Normal (MVN) Distribution A vector ~ X = (X1, X2, . . . , Xk) is Multivariate Normal if every linear combination is Normally distributed, i.e., t1X1 + t2X2 + · · · + tkXk is Normal for any constants t1, t2, . . . , tk. The parameters of the Multivariate Normal are the mean vector ~ µ = (µ1, µ2, . . . , µk) and the covariance matrix where the (i, j) entry is Cov(Xi, Xj ). Properties The Multivariate Normal has the following properties. • Any subvector is also MVN. • If any two elements within an MVN are uncorrelated, then they are independent. • The joint PDF of a Bivariate Normal (X, Y ) with N (0, 1) marginal distributions and correlation ρ ∈ (−1, 1) is fX,Y (x, y) = 1 2πτ exp − 1 2τ2 (x 2 + y 2 − 2ρxy) , with τ = p 1 − ρ2.
  • 7. Distribution Properties Important CDFs Standard Normal Φ Exponential(λ) F (x) = 1 − e−λx , for x ∈ (0, ∞) Uniform(0,1) F (x) = x, for x ∈ (0, 1) Convolutions of Random Variables A convolution of n random variables is simply their sum. For the following results, let X and Y be independent. 1. X ∼ Pois(λ1), Y ∼ Pois(λ2) −→ X + Y ∼ Pois(λ1 + λ2) 2. X ∼ Bin(n1, p), Y ∼ Bin(n2, p) −→ X + Y ∼ Bin(n1 + n2, p). Bin(n, p) can be thought of as a sum of i.i.d. Bern(p) r.v.s. 3. X ∼ Gamma(a1, λ), Y ∼ Gamma(a2, λ) −→ X + Y ∼ Gamma(a1 + a2, λ). Gamma(n, λ) with n an integer can be thought of as a sum of i.i.d. Expo(λ) r.v.s. 4. X ∼ NBin(r1, p), Y ∼ NBin(r2, p) −→ X + Y ∼ NBin(r1 + r2, p). NBin(r, p) can be thought of as a sum of i.i.d. Geom(p) r.v.s. 5. X ∼ N (µ1, σ2 1 ), Y ∼ N (µ2, σ2 2 ) −→ X + Y ∼ N (µ1 + µ2, σ2 1 + σ2 2 ) Special Cases of Distributions 1. Bin(1, p) ∼ Bern(p) 2. Beta(1, 1) ∼ Unif(0, 1) 3. Gamma(1, λ) ∼ Expo(λ) 4. χ2 n ∼ Gamma n 2 , 1 2 5. NBin(1, p) ∼ Geom(p) Inequalities 1. Cauchy-Schwarz |E(XY )| ≤ p E(X2)E(Y 2) 2. Markov P (X ≥ a) ≤ E|X| a for a 0 3. Chebyshev P (|X − µ| ≥ a) ≤ σ2 a2 for E(X) = µ, Var(X) = σ2 4. Jensen E(g(X)) ≥ g(E(X)) for g convex; reverse if g is concave Formulas Geometric Series 1 + r + r 2 + · · · + r n−1 = n−1 X k=0 r k = 1 − rn 1 − r 1 + r + r 2 + · · · = 1 1 − r if |r| 1 Exponential Function (ex ) e x = ∞ X n=0 xn n! = 1 + x + x2 2! + x3 3! + · · · = lim n→∞ 1 + x n n Gamma and Beta Integrals You can sometimes solve complicated-looking integrals by pattern-matching to a gamma or beta integral: Z ∞ 0 x t−1 e −x dx = Γ(t) Z 1 0 x a−1 (1 − x) b−1 dx = Γ(a)Γ(b) Γ(a + b) Also, Γ(a + 1) = aΓ(a), and Γ(n) = (n − 1)! if n is a positive integer. Euler’s Approximation for Harmonic Sums 1 + 1 2 + 1 3 + · · · + 1 n ≈ log n + 0.577 . . . Stirling’s Approximation for Factorials n! ≈ √ 2πn n e n Miscellaneous Definitions Medians and Quantiles Let X have CDF F . Then X has median m if F (m) ≥ 0.5 and P (X ≥ m) ≥ 0.5. For X continuous, m satisfies F (m) = 1/2. In general, the ath quantile of X is min{x : F (x) ≥ a}; the median is the case a = 1/2. log Statisticians generally use log to refer to natural log (i.e., base e). i.i.d r.v.s Independent, identically-distributed random variables. Example Problems Contributions from Sebastian Chiu Calculating Probability A textbook has n typos, which are randomly scattered amongst its n pages, independently. You pick a random page. What is the probability that it has no typos? Answer: There is a 1 − 1 n probability that any specific typo isn’t on your page, and thus a 1 − 1 n n probability that there are no typos on your page. For n large, this is approximately e−1 = 1/e. Linearity and Indicators (1) In a group of n people, what is the expected number of distinct birthdays (month and day)? What is the expected number of birthday matches? Answer: Let X be the number of distinct birthdays and Ij be the indicator for the jth day being represented. E(Ij ) = 1 − P (no one born on day j) = 1 − (364/365) n By linearity, E(X) = 365 (1 − (364/365) n ) . Now let Y be the number of birthday matches and Ji be the indicator that the ith pair of people have the same birthday. The probability that any two specific people share a birthday is 1/365, so E(Y ) = n 2 /365 . Linearity and Indicators (2) This problem is commonly known as the hat-matching problem. There are n people at a party, each with hat. At the end of the party, they each leave with a random hat. What is the expected number of people who leave with the right hat? 
Answer: Each hat has a 1/n chance of going to the right person. By linearity, the average number of hats that go to their owners is n(1/n) = 1 . Linearity and First Success This problem is commonly known as the coupon collector problem. There are n coupon types. At each draw, you get a uniformly random coupon type. What is the expected number of coupons needed until you have a complete set? Answer: Let N be the number of coupons needed; we want E(N). Let N = N1 + · · · + Nn, where N1 is the draws to get our first new coupon, N2 is the additional draws needed to draw our second new coupon and so on. By the story of the First Success, N2 ∼ FS((n − 1)/n) (after collecting first coupon type, there’s (n − 1)/n chance you’ll get something new). Similarly, N3 ∼ FS((n − 2)/n), and Nj ∼ FS((n − j + 1)/n). By linearity, E(N) = E(N1) + · · · + E(Nn) = n n + n n − 1 + · · · + n 1 = n n X j=1 1 j This is approximately n(log(n) + 0.577) by Euler’s approximation. Orderings of i.i.d. random variables I call 2 UberX’s and 3 Lyfts at the same time. If the time it takes for the rides to reach me are i.i.d., what is the probability that all the Lyfts will arrive first? Answer: Since the arrival times of the five cars are i.i.d., all 5! orderings of the arrivals are equally likely. There are 3!2! orderings that involve the Lyfts arriving first, so the probability that the Lyfts arrive first is 3!2! 5! = 1/10 . Alternatively, there are 5 3 ways to choose 3 of the 5 slots for the Lyfts to occupy, where each of the choices are equally likely. One of these choices has all 3 of the Lyfts arriving first, so the probability is 1/ 5 3 = 1/10 . Expectation of Negative Hypergeometric What is the expected number of cards that you draw before you pick your first Ace in a shuffled deck (not counting the Ace)? Answer: Consider a non-Ace. Denote this to be card j. Let Ij be the indicator that card j will be drawn before the first Ace. Note that Ij = 1 says that j is before all 4 of the Aces in the deck. The probability that this occurs is 1/5 by symmetry. Let X be the number of cards drawn before the first Ace. Then X = I1 + I2 + ... + I48, where each indicator corresponds to one of the 48 non-Aces. Thus, E(X) = E(I1) + E(I2) + ... + E(I48) = 48/5 = 9.6 . Minimum and Maximum of RVs What is the CDF of the maximum of n independent Unif(0,1) random variables? Answer: Note that for r.v.s X1, X2, . . . , Xn, P (min(X1, X2, . . . , Xn) ≥ a) = P (X1 ≥ a, X2 ≥ a, . . . , Xn ≥ a) Similarly, P (max(X1, X2, . . . , Xn) ≤ a) = P (X1 ≤ a, X2 ≤ a, . . . , Xn ≤ a) We will use this principle to find the CDF of U(n), where U(n) = max(U1, U2, . . . , Un) and Ui ∼ Unif(0, 1) are i.i.d. P (max(U1, U2, . . . , Un) ≤ a) = P (U1 ≤ a, U2 ≤ a, . . . , Un ≤ a) = P (U1 ≤ a)P (U2 ≤ a) . . . P (Un ≤ a) = a n for 0 a 1 (and the CDF is 0 for a ≤ 0 and 1 for a ≥ 1). Pattern-matching with ex Taylor series For X ∼ Pois(λ), find E 1 X + 1 . Answer: By LOTUS, E 1 X + 1 = ∞ X k=0 1 k + 1 e−λ λk k! = e−λ λ ∞ X k=0 λk+1 (k + 1)! = e−λ λ (e λ − 1)
  • 8. Adam’s Law and Eve’s Law William really likes speedsolving Rubik’s Cubes. But he’s pretty bad at it, so sometimes he fails. On any given day, William will attempt N ∼ Geom(s) Rubik’s Cubes. Suppose each time, he has probability p of solving the cube, independently. Let T be the number of Rubik’s Cubes he solves during a day. Find the mean and variance of T . Answer: Note that T |N ∼ Bin(N, p). So by Adam’s Law, E(T ) = E(E(T |N)) = E(Np) = p(1 − s) s Similarly, by Eve’s Law, we have that Var(T ) = E(Var(T |N)) + Var(E(T |N)) = E(Np(1 − p)) + Var(Np) = p(1 − p)(1 − s) s + p2 (1 − s) s2 = p(1 − s)(p + s(1 − p)) s2 MGF – Finding Moments Find E(X3 ) for X ∼ Expo(λ) using the MGF of X. Answer: The MGF of an Expo(λ) is M(t) = λ λ−t . To get the third moment, we can take the third derivative of the MGF and evaluate at t = 0: E(X 3 ) = 6 λ3 But a much nicer way to use the MGF here is via pattern recognition: note that M(t) looks like it came from a geometric series: 1 1 − t λ = ∞ X n=0 t λ n = ∞ X n=0 n! λn tn n! The coefficient of tn n! here is the nth moment of X, so we have E(Xn ) = n! λn for all nonnegative integers n. Markov chains (1) Suppose Xn is a two-state Markov chain with transition matrix Q = 0 1 0 1 − α α 1 β 1 − β Find the stationary distribution ~ s = (s0, s1) of Xn by solving ~ sQ = ~ s, and show that the chain is reversible with respect to ~ s. Answer: The equation ~ sQ = ~ s says that s0 = s0(1 − α) + s1β and s1 = s0(α) + s0(1 − β) By solving this system of linear equations, we have ~ s = β α + β , α α + β To show that the chain is reversible with respect to ~ s, we must show siqij = sj qji for all i, j. This is done if we can show s0q01 = s1q10. And indeed, s0q01 = αβ α + β = s1q10 Markov chains (2) William and Sebastian play a modified game of Settlers of Catan, where every turn they randomly move the robber (which starts on the center tile) to one of the adjacent hexagons. Robber (a) Is this Markov chain irreducible? Is it aperiodic? Answer: Yes to both. The Markov chain is irreducible because it can get from anywhere to anywhere else. The Markov chain is aperiodic because the robber can return back to a square in 2, 3, 4, 5, . . . moves, and the GCD of those numbers is 1. (b) What is the stationary distribution of this Markov chain? Answer: Since this is a random walk on an undirected graph, the stationary distribution is proportional to the degree sequence. The degree for the corner pieces is 3, the degree for the edge pieces is 4, and the degree for the center pieces is 6. To normalize this degree sequence, we divide by its sum. The sum of the degrees is 6(3) + 6(4) + 7(6) = 84. Thus the stationary probability of being on a corner is 3/84 = 1/28, on an edge is 4/84 = 1/21, and in the center is 6/84 = 1/14. (c) What fraction of the time will the robber be in the center tile in this game, in the long run? Answer: By the above, 1/14 . (d) What is the expected amount of moves it will take for the robber to return to the center tile? Answer: Since this chain is irreducible and aperiodic, to get the expected time to return we can just invert the stationary probability. Thus on average it will take 14 turns for the robber to return to the center tile. Problem-Solving Strategies Contributions from Jessy Hwang, Yuan Jiang, Yuqi Hou 1. Getting started. Start by defining relevant events and random variables. (“Let A be the event that I pick the fair coin”; “Let X be the number of successes.”) Clear notion is important for clear thinking! 
Then decide what it is that you’re supposed to be finding, in terms of your notation (“I want to find P (X = 3|A)”). Think about what type of object your answer should be (a number? A random variable? A PMF? A PDF?) and what it should be in terms of. Try simple and extreme cases. To make an abstract experiment more concrete, try drawing a picture or making up numbers that could have happened. Pattern recognition: does the structure of the problem resemble something we’ve seen before? 2. Calculating probability of an event. Use counting principles if the naive definition of probability applies. Is the probability of the complement easier to find? Look for symmetries. Look for something to condition on, then apply Bayes’ Rule or the Law of Total Probability. 3. Finding the distribution of a random variable. First make sure you need the full distribution not just the mean (see next item). Check the support of the random variable: what values can it take on? Use this to rule out distributions that don’t fit. Is there a story for one of the named distributions that fits the problem at hand? Can you write the random variable as a function of an r.v. with a known distribution, say Y = g(X)? 4. Calculating expectation. If it has a named distribution, check out the table of distributions. If it’s a function of an r.v. with a named distribution, try LOTUS. If it’s a count of something, try breaking it up into indicator r.v.s. If you can condition on something natural, consider using Adam’s law. 5. Calculating variance. Consider independence, named distributions, and LOTUS. If it’s a count of something, break it up into a sum of indicator r.v.s. If it’s a sum, use properties of covariance. If you can condition on something natural, consider using Eve’s Law. 6. Calculating E(X2 ). Do you already know E(X) or Var(X)? Recall that Var(X) = E(X2 ) − (E(X))2 . Otherwise try LOTUS. 7. Calculating covariance. Use the properties of covariance. If you’re trying to find the covariance between two components of a Multinomial distribution, Xi, Xj , then the covariance is −npipj for i 6= j. 8. Symmetry. If X1, . . . , Xn are i.i.d., consider using symmetry. 9. Calculating probabilities of orderings. Remember that all n! ordering of i.i.d. continuous random variables X1, . . . , Xn are equally likely. 10. Determining independence. There are several equivalent definitions. Think about simple and extreme cases to see if you can find a counterexample. 11. Do a painful integral. If your integral looks painful, see if you can write your integral in terms of a known PDF (like Gamma or Beta), and use the fact that PDFs integrate to 1? 12. Before moving on. Check some simple and extreme cases, check whether the answer seems plausible, check for biohazards. Biohazards Contributions from Jessy Hwang 1. Don’t misuse the naive definition of probability. When answering “What is the probability that in a group of 3 people, no two have the same birth month?”, it is not correct to treat the people as indistinguishable balls being placed into 12 boxes, since that assumes the list of birth months {January, January, January} is just as likely as the list {January, April, June}, even though the latter is six times more likely. 2. Don’t confuse unconditional, conditional, and joint probabilities. In applying P (A|B) = P (B|A)P (A) P (B) , it is not correct to say “P (B) = 1 because we know B happened”; P (B) is the prior probability of B. Don’t confuse P (A|B) with P (A, B). 3. Don’t assume independence without justification. 
In the matching problem, the probability that card 1 is a match and card 2 is a match is not 1/n2 . Binomial and Hypergeometric are often confused; the trials are independent in the Binomial story and dependent in the Hypergeometric story. 4. Don’t forget to do sanity checks. Probabilities must be between 0 and 1. Variances must be ≥ 0. Supports must make sense. PMFs must sum to 1. PDFs must integrate to 1. 5. Don’t confuse random variables, numbers, and events. Let X be an r.v. Then g(X) is an r.v. for any function g. In particular, X2 , |X|, F (X), and IX3 are r.v.s. P (X2 X|X ≥ 0), E(X), Var(X), and g(E(X)) are numbers. X = 2 and F (X) ≥ −1 are events. It does not make sense to write R ∞ −∞ F (X)dx, because F (X) is a random variable. It does not make sense to write P (X), because X is not an event.
Distributions in R

Command               What it does
help(distributions)   shows documentation on distributions
dbinom(k,n,p)         PMF P (X = k) for X ∼ Bin(n, p)
pbinom(x,n,p)         CDF P (X ≤ x) for X ∼ Bin(n, p)
qbinom(a,n,p)         ath quantile for X ∼ Bin(n, p)
rbinom(r,n,p)         vector of r i.i.d. Bin(n, p) r.v.s
dgeom(k,p)            PMF P (X = k) for X ∼ Geom(p)
dhyper(k,w,b,n)       PMF P (X = k) for X ∼ HGeom(w, b, n)
dnbinom(k,r,p)        PMF P (X = k) for X ∼ NBin(r, p)
dpois(k,r)            PMF P (X = k) for X ∼ Pois(r)
dbeta(x,a,b)          PDF f(x) for X ∼ Beta(a, b)
dchisq(x,n)           PDF f(x) for X ∼ χ²_n
dexp(x,b)             PDF f(x) for X ∼ Expo(b)
dgamma(x,a,r)         PDF f(x) for X ∼ Gamma(a, r)
dlnorm(x,m,s)         PDF f(x) for X ∼ LN(m, s²)
dnorm(x,m,s)          PDF f(x) for X ∼ N(m, s²)
dt(x,n)               PDF f(x) for X ∼ t_n
dunif(x,a,b)          PDF f(x) for X ∼ Unif(a, b)

The table above gives R commands for working with various named distributions. Commands analogous to pbinom, qbinom, and rbinom work for the other distributions in the table. For example, pnorm, qnorm, and rnorm can be used to get the CDF, quantiles, and random generation for the Normal. For the Multinomial, dmultinom can be used for calculating the joint PMF and rmultinom can be used for generating random vectors. For the Multivariate Normal, after installing and loading the mvtnorm package, dmvnorm can be used for calculating the joint PDF and rmvnorm can be used for generating random vectors.
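A few illustrative calls following the d/p/q/r pattern in the table above (the specific numeric arguments are arbitrary, chosen only for illustration):

dbinom(3, 10, 0.5)       # P (X = 3) for X ~ Bin(10, 0.5)
pnorm(1.96)              # P (Z <= 1.96) for Z ~ N(0, 1), about 0.975
qnorm(0.975)             # 97.5th percentile of N(0, 1), about 1.96
rexp(5, 2)               # 5 i.i.d. draws from Expo(2)
sum(dpois(0:100, 3))     # PMF of Pois(3) sums to (essentially) 1 -- a quick sanity check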
Recommended Resources

• Introduction to Probability Book (http://bit.ly/introprobability)
• Stat 110 Online (http://stat110.net)
• Stat 110 Quora Blog (https://stat110.quora.com/)
• Quora Probability FAQ (http://bit.ly/probabilityfaq)
• R Studio (https://www.rstudio.com)
• LaTeX File (github.com/wzchen/probability_cheatsheet)

Please share this cheatsheet with friends! http://wzchen.com/probability-cheatsheet

Table of Distributions

Each entry lists the PMF/PDF and support, expected value, variance, and MGF.

Bernoulli Bern(p): P (X = 1) = p, P (X = 0) = q = 1 − p. Expected value p. Variance pq. MGF q + pe^t.

Binomial Bin(n, p): P (X = k) = (n choose k) p^k q^(n−k) for k ∈ {0, 1, 2, . . . , n}. Expected value np. Variance npq. MGF (q + pe^t)^n.

Geometric Geom(p): P (X = k) = q^k p for k ∈ {0, 1, 2, . . . }. Expected value q/p. Variance q/p². MGF p/(1 − qe^t) for qe^t < 1.

Negative Binomial NBin(r, p): P (X = n) = (r+n−1 choose r−1) p^r q^n for n ∈ {0, 1, 2, . . . }. Expected value rq/p. Variance rq/p². MGF (p/(1 − qe^t))^r for qe^t < 1.

Hypergeometric HGeom(w, b, n): P (X = k) = (w choose k)(b choose n−k)/(w+b choose n) for k ∈ {0, 1, 2, . . . , n}. Expected value µ = nw/(w + b). Variance ((w + b − n)/(w + b − 1)) · n · (µ/n)(1 − µ/n). MGF messy.

Poisson Pois(λ): P (X = k) = e^(−λ) λ^k/k! for k ∈ {0, 1, 2, . . . }. Expected value λ. Variance λ. MGF e^(λ(e^t − 1)).

Uniform Unif(a, b): f(x) = 1/(b − a) for x ∈ (a, b). Expected value (a + b)/2. Variance (b − a)²/12. MGF (e^(tb) − e^(ta))/(t(b − a)).

Normal N(µ, σ²): f(x) = (1/(σ√(2π))) e^(−(x − µ)²/(2σ²)) for x ∈ (−∞, ∞). Expected value µ. Variance σ². MGF e^(tµ + σ²t²/2).

Exponential Expo(λ): f(x) = λe^(−λx) for x ∈ (0, ∞). Expected value 1/λ. Variance 1/λ². MGF λ/(λ − t) for t < λ.

Gamma Gamma(a, λ): f(x) = (1/Γ(a)) (λx)^a e^(−λx) (1/x) for x ∈ (0, ∞). Expected value a/λ. Variance a/λ². MGF (λ/(λ − t))^a for t < λ.

Beta Beta(a, b): f(x) = (Γ(a + b)/(Γ(a)Γ(b))) x^(a−1) (1 − x)^(b−1) for x ∈ (0, 1). Expected value µ = a/(a + b). Variance µ(1 − µ)/(a + b + 1). MGF messy.

Log-Normal LN(µ, σ²): f(x) = (1/(xσ√(2π))) e^(−(log x − µ)²/(2σ²)) for x ∈ (0, ∞). Expected value θ = e^(µ + σ²/2). Variance θ²(e^(σ²) − 1). MGF doesn't exist.

Chi-Square χ²_n: f(x) = (1/(2^(n/2) Γ(n/2))) x^(n/2 − 1) e^(−x/2) for x ∈ (0, ∞). Expected value n. Variance 2n. MGF (1 − 2t)^(−n/2) for t < 1/2.

Student-t t_n: f(x) = (Γ((n + 1)/2)/(√(nπ) Γ(n/2))) (1 + x²/n)^(−(n+1)/2) for x ∈ (−∞, ∞). Expected value 0 if n > 1. Variance n/(n − 2) if n > 2. MGF doesn't exist.
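As a quick numerical sanity check of a table entry (not part of the original cheatsheet), the Gamma(a, λ) row can be verified in R; the values a = 3 and λ = 2 are arbitrary.

a <- 3; lambda <- 2
x <- rgamma(10^5, shape = a, rate = lambda)
mean(x)                                                              # close to a/lambda = 1.5
var(x)                                                               # close to a/lambda^2 = 0.75
integrate(function(t) dgamma(t, shape = a, rate = lambda), 0, Inf)   # PDF integrates to 1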