CS229 Lecture notes
Andrew Ng
Mixtures of Gaussians and the EM algorithm
In this set of notes, we discuss the EM (Expectation-Maximization) algorithm
for density estimation.
Suppose that we are given a training set $\{x^{(1)}, \ldots, x^{(n)}\}$ as usual. Since we are in the unsupervised learning setting, these points do not come with any labels.

We wish to model the data by specifying a joint distribution $p(x^{(i)}, z^{(i)}) = p(x^{(i)} \mid z^{(i)})\,p(z^{(i)})$. Here, $z^{(i)} \sim \mathrm{Multinomial}(\phi)$ (where $\phi_j \ge 0$, $\sum_{j=1}^{k} \phi_j = 1$, and the parameter $\phi_j$ gives $p(z^{(i)} = j)$), and $x^{(i)} \mid z^{(i)} = j \sim \mathcal{N}(\mu_j, \Sigma_j)$. We let $k$ denote the number of values that the $z^{(i)}$'s can take on. Thus, our model posits that each $x^{(i)}$ was generated by randomly choosing $z^{(i)}$ from $\{1, \ldots, k\}$, and then $x^{(i)}$ was drawn from one of $k$ Gaussians depending on $z^{(i)}$. This is called the mixture of Gaussians model. Also, note that the $z^{(i)}$'s are latent random variables, meaning that they're hidden/unobserved. This is what will make our estimation problem difficult.
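To make the generative story concrete, here is a minimal NumPy sketch of this sampling process for a small two-dimensional example; the particular values of phi, mu and Sigma below are illustrative placeholders, not anything specified in these notes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters for a k = 2 mixture in two dimensions (placeholders only).
phi = np.array([0.3, 0.7])                       # mixing proportions phi_j, summing to 1
mu = np.array([[0.0, 0.0], [3.0, 3.0]])          # component means mu_j
Sigma = np.array([np.eye(2), 0.5 * np.eye(2)])   # component covariances Sigma_j

def sample_mixture(n):
    """First draw z ~ Multinomial(phi), then draw x | z = j from N(mu_j, Sigma_j)."""
    z = rng.choice(len(phi), size=n, p=phi)
    x = np.array([rng.multivariate_normal(mu[j], Sigma[j]) for j in z])
    return x, z

x, z = sample_mixture(500)   # in the unsupervised setting we observe x but not z
```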
The parameters of our model are thus $\phi$, $\mu$ and $\Sigma$. To estimate them, we can write down the likelihood of our data:

$$\ell(\phi, \mu, \Sigma) = \sum_{i=1}^{n} \log p(x^{(i)}; \phi, \mu, \Sigma) = \sum_{i=1}^{n} \log \sum_{z^{(i)}=1}^{k} p(x^{(i)} \mid z^{(i)}; \mu, \Sigma)\, p(z^{(i)}; \phi).$$
However, if we set to zero the derivatives of this formula with respect to
the parameters and try to solve, we’ll find that it is not possible to find the
maximum likelihood estimates of the parameters in closed form. (Try this
yourself at home.)
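Even though the maximizer has no closed form, the log-likelihood itself is easy to evaluate for any fixed setting of the parameters. A small sketch with NumPy, assuming x, phi, mu and Sigma are arrays shaped as in the previous snippet:

```python
import numpy as np

def gaussian_pdf(x, mean, cov):
    """Density of N(mean, cov) evaluated at each row of x."""
    d = x.shape[1]
    diff = x - mean
    quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    norm = np.sqrt((2.0 * np.pi) ** d * np.linalg.det(cov))
    return np.exp(-0.5 * quad) / norm

def log_likelihood(x, phi, mu, Sigma):
    """ell(phi, mu, Sigma): sum over i of log sum over j of p(x_i | z_i = j) p(z_i = j)."""
    mixture_density = sum(phi[j] * gaussian_pdf(x, mu[j], Sigma[j]) for j in range(len(phi)))
    return np.sum(np.log(mixture_density))
```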
The random variables $z^{(i)}$ indicate which of the $k$ Gaussians each $x^{(i)}$ had come from. Note that if we knew what the $z^{(i)}$'s were, the maximum likelihood problem would have been easy. Specifically, we could then write down the likelihood as
$$\ell(\phi, \mu, \Sigma) = \sum_{i=1}^{n} \log p(x^{(i)} \mid z^{(i)}; \mu, \Sigma) + \log p(z^{(i)}; \phi).$$

Maximizing this with respect to $\phi$, $\mu$ and $\Sigma$ gives the parameters:

$$\phi_j = \frac{1}{n} \sum_{i=1}^{n} 1\{z^{(i)} = j\},$$

$$\mu_j = \frac{\sum_{i=1}^{n} 1\{z^{(i)} = j\}\, x^{(i)}}{\sum_{i=1}^{n} 1\{z^{(i)} = j\}},$$

$$\Sigma_j = \frac{\sum_{i=1}^{n} 1\{z^{(i)} = j\}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^{n} 1\{z^{(i)} = j\}}.$$
Indeed, we see that if the $z^{(i)}$'s were known, then maximum likelihood estimation becomes nearly identical to what we had when estimating the parameters of the Gaussian discriminant analysis model, except that here the $z^{(i)}$'s play the role of the class labels.¹

¹There are other minor differences in the formulas here from what we'd obtained in PS1 with Gaussian discriminant analysis, first because we've generalized the $z^{(i)}$'s to be multinomial rather than Bernoulli, and second because here we are using a different $\Sigma_j$ for each Gaussian.
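As a sanity check of these formulas, here is a short NumPy sketch of the closed-form estimates one could compute if the labels z were observed; fit_known_z is a hypothetical helper name, not something defined in the notes.

```python
import numpy as np

def fit_known_z(x, z, k):
    """Closed-form maximum likelihood estimates when the labels z are observed.
    Assumes every component j in {0, ..., k-1} appears at least once in z."""
    phi = np.array([np.mean(z == j) for j in range(k)])
    mu = np.array([x[z == j].mean(axis=0) for j in range(k)])
    Sigma = []
    for j in range(k):
        diff = x[z == j] - mu[j]                       # deviations within component j
        Sigma.append(diff.T @ diff / (z == j).sum())   # empirical covariance of component j
    return phi, mu, np.array(Sigma)
```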
However, in our density estimation problem, the $z^{(i)}$'s are not known. What can we do?

The EM algorithm is an iterative algorithm that has two main steps. Applied to our problem, in the E-step, it tries to “guess” the values of the $z^{(i)}$'s. In the M-step, it updates the parameters of our model based on our guesses. Since in the M-step we are pretending that the guesses in the first part were correct, the maximization becomes easy. Here's the algorithm:
Repeat until convergence: {

(E-step) For each $i, j$, set
$$w_j^{(i)} := p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma)$$

(M-step) Update the parameters:
$$\phi_j := \frac{1}{n} \sum_{i=1}^{n} w_j^{(i)},$$
$$\mu_j := \frac{\sum_{i=1}^{n} w_j^{(i)}\, x^{(i)}}{\sum_{i=1}^{n} w_j^{(i)}},$$
$$\Sigma_j := \frac{\sum_{i=1}^{n} w_j^{(i)}\, (x^{(i)} - \mu_j)(x^{(i)} - \mu_j)^T}{\sum_{i=1}^{n} w_j^{(i)}}$$

}
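Below is one possible NumPy implementation of this loop, reusing the gaussian_pdf helper from the earlier sketch. The initialization (uniform weights, means at random data points, identity covariances) and the stopping rule (a small change in log-likelihood) are common choices rather than anything these notes prescribe.

```python
import numpy as np

def em_gmm(x, k, max_iters=200, tol=1e-6, seed=0):
    """EM for a mixture of k Gaussians.  Uses the gaussian_pdf helper defined earlier."""
    n, d = x.shape
    rng = np.random.default_rng(seed)

    # Initialization (a common choice): uniform mixing weights,
    # means at random data points, identity covariances.
    phi = np.full(k, 1.0 / k)
    mu = x[rng.choice(n, size=k, replace=False)].copy()
    Sigma = np.array([np.eye(d) for _ in range(k)])

    prev_ll = -np.inf
    for _ in range(max_iters):
        # E-step: w[i, j] = p(z_i = j | x_i; phi, mu, Sigma) via Bayes' rule.
        densities = np.column_stack(
            [phi[j] * gaussian_pdf(x, mu[j], Sigma[j]) for j in range(k)])
        w = densities / densities.sum(axis=1, keepdims=True)

        # M-step: the weighted analogues of the known-z maximum likelihood formulas.
        nj = w.sum(axis=0)                     # "effective" number of points per component
        phi = nj / n
        mu = (w.T @ x) / nj[:, None]
        for j in range(k):
            diff = x - mu[j]
            Sigma[j] = (w[:, j, None] * diff).T @ diff / nj[j]

        # One practical convergence test: stop when the log-likelihood stops improving.
        ll = np.sum(np.log(densities.sum(axis=1)))
        if ll - prev_ll < tol:
            break
        prev_ll = ll

    return phi, mu, Sigma, w
```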
In the E-step, we calculate the posterior probability of the $z^{(i)}$'s given $x^{(i)}$, using the current setting of our parameters. I.e., using Bayes' rule, we obtain:
$$p(z^{(i)} = j \mid x^{(i)}; \phi, \mu, \Sigma) = \frac{p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)\, p(z^{(i)} = j; \phi)}{\sum_{l=1}^{k} p(x^{(i)} \mid z^{(i)} = l; \mu, \Sigma)\, p(z^{(i)} = l; \phi)}.$$

Here, $p(x^{(i)} \mid z^{(i)} = j; \mu, \Sigma)$ is given by evaluating the density of a Gaussian with mean $\mu_j$ and covariance $\Sigma_j$ at $x^{(i)}$; $p(z^{(i)} = j; \phi)$ is given by $\phi_j$, and so on. The values $w_j^{(i)}$ calculated in the E-step represent our “soft” guesses² for the values of $z^{(i)}$.

²The term “soft” refers to our guesses being probabilities and taking values in $[0, 1]$; in contrast, a “hard” guess is one that represents a single best guess (such as taking values in $\{0, 1\}$ or $\{1, \ldots, k\}$).
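In practice, the products of densities in this E-step can underflow when the dimension is large; a standard remedy (not discussed in these notes) is to compute the weights in log space. A NumPy sketch:

```python
import numpy as np

def log_gaussian_pdf(x, mean, cov):
    """Log of the N(mean, cov) density evaluated at each row of x."""
    d = x.shape[1]
    diff = x - mean
    quad = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(cov), diff)
    _, logdet = np.linalg.slogdet(cov)
    return -0.5 * (quad + d * np.log(2.0 * np.pi) + logdet)

def e_step_stable(x, phi, mu, Sigma):
    """Responsibilities w[i, j] computed in log space to avoid underflow."""
    k = len(phi)
    log_w = np.column_stack([np.log(phi[j]) + log_gaussian_pdf(x, mu[j], Sigma[j])
                             for j in range(k)])
    log_w -= log_w.max(axis=1, keepdims=True)   # shift by the row max before exponentiating
    w = np.exp(log_w)
    return w / w.sum(axis=1, keepdims=True)
```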
Also, you should contrast the updates in the M-step with the formulas we had when the $z^{(i)}$'s were known exactly. They are identical, except that instead of the indicator functions “$1\{z^{(i)} = j\}$” indicating from which Gaussian each datapoint had come, we now instead have the $w_j^{(i)}$'s.
The EM algorithm is also reminiscent of the K-means clustering algorithm, except that instead of the “hard” cluster assignments $c^{(i)}$, we instead have the “soft” assignments $w_j^{(i)}$. Similar to K-means, it is also susceptible to local optima, so rerunning it from several different random initializations of the parameters may be a good idea.
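For instance, one might run EM from several random seeds and keep the run with the highest log-likelihood; a sketch assuming the em_gmm and log_likelihood helpers defined in the earlier snippets:

```python
import numpy as np

def em_with_restarts(x, k, num_restarts=10):
    """Run EM from several seeds and keep the parameters with the highest log-likelihood."""
    best_params, best_ll = None, -np.inf
    for seed in range(num_restarts):
        phi, mu, Sigma, _ = em_gmm(x, k, seed=seed)   # helper from the earlier sketch
        ll = log_likelihood(x, phi, mu, Sigma)        # helper from the earlier sketch
        if ll > best_ll:
            best_params, best_ll = (phi, mu, Sigma), ll
    return best_params, best_ll
```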
It's clear that the EM algorithm has a very natural interpretation of repeatedly trying to guess the unknown $z^{(i)}$'s; but how did it come about, and can we make any guarantees about it, such as regarding its convergence? In the next set of notes, we will describe a more general view of EM, one that will allow us to easily apply it to other estimation problems in which there are also latent variables, and which will allow us to give a convergence guarantee.
