Likelihood free computational statistics
Pierre Pudlo
Université Montpellier 2
Institut de Mathématiques et Modélisation de Montpellier (I3M)
Institut de Biologie Computationelle
Labex NUMEV
17/04/2015
Contents
1 Approximate Bayesian computation
2 ABC model choice
3 Bayesian computation with empirical likelihood
Intractable likelihoods
Problem
How to perform a Bayesian analysis when the likelihood f(y|φ) is intractable?
Example 1. Gibbs random fields
f(y|φ) ∝ exp(−H(y, φ))
is known up to a constant
Z(φ) = Σ_y exp(−H(y, φ))
Example 2. Neutral population
genetics
Aim. Infer demographic parameters describing the past of some populations from the traces left in the genomes of individuals sampled from current populations.
Latent process (past history of the sample) ∈ space of high dimension.
If y is the genetic data of the sample,
the likelihood is
f(y|φ) = ∫ f(y, z | φ) dz
Typically, dim(Z) ≫ dim(Y).
No hope of computing the likelihood, even with clever Monte Carlo algorithms?
With Coralie Merle, Raphaël Leblois and François Rousset
A bend via importance sampling
If y is the genetic data of the sample,
the likelihood is
f(y|φ) =
Z
f(y, z | φ) dz
We are trying to compute this integral
with importance sampling.
Actually z = (z1, . . . , zT) is a measure-valued Markov chain, stopped at a given optional time T, with y = zT, hence
f(y|φ) = ∫ 1{y = zT} f(z1, . . . , zT | φ) dz
Importance sampling introduces an auxiliary distribution q(dz | φ):
f(y|φ) = ∫ 1{y = zT} [ f(z | φ) / q(z | φ) ] q(z | φ) dz
where f(z | φ) / q(z | φ) is the weight of z and q(z | φ) the sampling distribution.
The most efficient q is the conditional distribution of the Markov chain given that zT = y, but it is even harder to compute than f(y | φ).
Any other Markovian q is inefficient, as the variance of the weight grows exponentially with T.
Need a clever q: see the seminal paper
of Stephens and Donnelly (2000)
And resampling algorithms. . .
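A minimal numerical sketch of this weight degeneracy: a two-state Markov chain with a uniform Markovian proposal stands in for the coalescent, and the kernels, the fixed stopping time and the sample sizes are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target kernel f(. | phi) and a naive Markovian proposal q on the two states {0, 1}.
P_f = np.array([[0.9, 0.1], [0.2, 0.8]])
P_q = np.array([[0.5, 0.5], [0.5, 0.5]])

def is_estimate(T, y=1, n_paths=5_000):
    """Estimate f(y | phi) = P_f(z_T = y | z_0 = 0) by importance sampling:
    average of 1{z_T = y} * prod_t P_f[z_{t-1}, z_t] / P_q[z_{t-1}, z_t]."""
    prods = np.empty(n_paths)
    for k in range(n_paths):
        z, w = 0, 1.0
        for _ in range(T):
            z_next = rng.choice(2, p=P_q[z])
            w *= P_f[z, z_next] / P_q[z, z_next]
            z = z_next
        prods[k] = w * (z == y)
    exact = np.linalg.matrix_power(P_f, T)[0, y]   # available here, not in pop. genetics
    return prods.mean(), exact

for T in (5, 20, 50):
    print(T, is_estimate(T))   # the weights become wildly variable and the estimate degrades as T grows
```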
Approximate Bayesian computation
Idea
Infer the conditional distribution of φ given yobs from simulations of the joint π(φ) f(y|φ)
ABC algorithm
A) Generate a large set of (φ, y)
from the Bayesian model
π(φ) f(y|φ)
B) Keep the particles (φ, y) such
that d(η(yobs), η(y)) ≤ ε
C) Return the φ’s of the kept
particles
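A minimal sketch of steps A)–C) on a toy model: the Gaussian model, the prior, the summary η(y) = sample mean and the tolerance ε are assumptions for illustration, not the population-genetics setting of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)

def abc_rejection(y_obs, n_sim=100_000, eps=0.05):
    """Plain ABC rejection on a toy model: phi ~ N(0, 2^2), y | phi iid N(phi, 1)
    of size 50, summary eta(y) = sample mean."""
    eta_obs = y_obs.mean()
    phi = rng.normal(0.0, 2.0, size=n_sim)                # A) simulate from the prior
    y = rng.normal(phi[:, None], 1.0, size=(n_sim, 50))   #    and from the model
    eta = y.mean(axis=1)
    keep = np.abs(eta - eta_obs) <= eps                   # B) keep the close particles
    return phi[keep]                                      # C) approximate posterior draws

y_obs = rng.normal(1.0, 1.0, size=50)          # pseudo-observed data, true phi = 1
post = abc_rejection(y_obs)
print(len(post), post.mean(), post.std())      # compare with the exact Gaussian posterior
```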
Curse of dimensionality: y is replaced by some numerical summaries η(y)
Stage A) is computationally heavy! We end up rejecting almost all simulations, except those that fall in the neighborhood of η(yobs)
Sequential ABC algorithms try to avoid drawing φ in areas of low π(φ|y).
An auto-calibrated ABC-SMC
sampler with Mohammed Sedki,
Jean-Michel Marin, Jean-Marie
Cornuet and Christian P. Robert
ABC sequential sampler
How to calibrate ε1 ≥ ε2 ≥ · · · ≥ εT and T to be efficient?
The auto-calibrated ABC-SMC sampler developed with Mohammed Sedki,
Jean-Michel Marin, Jean-Marie Cornuet and Christian P. Robert
ABC target
Three levels of approximation of the posterior π(φ | yobs):
1 the ABC posterior distribution π(φ | η(yobs))
2 approximated with a kernel of bandwidth ε (or with k-nearest neighbours): π(φ | d(η(y), η(yobs)) ≤ ε)
3 a Monte Carlo error: sample size N < ∞
See, e.g., our review with J.-M. Marin,
C. Robert and R. Ryder
If η(y) are not sufficient statistics,
π(φ | yobs) ≠ π(φ | η(yobs))
Information regarding yobs might be lost!
Curse of dimensionality:
cannot have both ε small and N large
when η(y) is of large dimension
Post-processing of Beaumont et al.
(2002) with local linear regression.
But the lack of sufficiency might still be
problematic. See Robert et al. (2011)
for model choice.
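A sketch of that local linear post-processing in the spirit of Beaumont et al. (2002); function and variable names are mine. Epanechnikov weights inside the tolerance, then a weighted least-squares correction of the accepted φ's.

```python
import numpy as np

def regression_adjust(phi, eta, eta_obs, eps):
    """phi: (N,) accepted parameters, eta: (N, d) their summaries, eta_obs: (d,).
    Returns adjusted draws phi - beta' (eta - eta_obs) for particles within eps."""
    diff = eta - eta_obs
    dist = np.linalg.norm(diff, axis=1)
    keep = dist <= eps
    w = 1.0 - (dist[keep] / eps) ** 2                 # Epanechnikov kernel weights
    X = np.column_stack([np.ones(keep.sum()), diff[keep]])
    WX = X * w[:, None]                               # weighted design matrix
    beta = np.linalg.solve(X.T @ WX, X.T @ (w * phi[keep]))
    return phi[keep] - diff[keep] @ beta[1:]          # remove the local linear trend
```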
ABC model choice
ABC model choice
A) Generate a large set of
(m, φ, y) from the Bayesian
model, π(m)πm(φ) fm(y|φ)
B) Keep the particles (m, φ, y)
such that d(η(y), η(yobs)) ≤ ε
C) For each m, return
pm(yobs) = proportion of m
among the kept particles
Likewise, if η(y) is not sufficient for the model choice issue,
π(m | y) ≠ π(m | η(y))
It might be difficult to design
informative η(y).
Toy example.
Model 1. yi iid∼ N(φ, 1)
Model 2. yi iid∼ N(φ, 2)
Same prior on φ (whatever the model)
& uniform prior on model index
η(y) = y1 + · · · + yn is sufficient to
estimate φ in both models
But η(y) carries no information
regarding the variance (hence the
model choice issue)
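A quick simulation of this toy example; the sample size, prior and tolerance are assumptions, and N(φ, 2) is read as variance 2. With η(y) equivalent to the sum of the observations, the ABC posterior probability of each model stays close to the prior 1/2 even when the data come from Model 2.

```python
import numpy as np

rng = np.random.default_rng(2)
n, n_sim, eps = 20, 200_000, 0.05

m = rng.integers(1, 3, size=n_sim)                 # uniform prior on the model index
phi = rng.normal(0.0, 5.0, size=n_sim)             # same (assumed) prior on phi in both models
sd = np.where(m == 1, 1.0, np.sqrt(2.0))
y = rng.normal(phi[:, None], sd[:, None], size=(n_sim, n))
eta = y.mean(axis=1)                               # equivalent to the sum, rescaled by 1/n

y_obs = rng.normal(0.0, np.sqrt(2.0), size=n)      # pseudo-observed data from Model 2
keep = np.abs(eta - y_obs.mean()) <= eps
print((m[keep] == 2).mean())                       # close to 0.5: eta says nothing about m
```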
Other examples in Robert et al. (2011)
In population genetics. Might be
difficult to find summary statistics that
help discriminate between models
(= possible historical scenarios on the
sampled populations)
ABC model choice
ABC model choice
A) Generate a large set of
(m, φ, y) from the Bayesian
model π(m)πm(φ) fm(y|φ)
B) Keep the particles (m, φ, y)
such that d(η(y), η(yobs)) ≤ ε
C) For each m, return
pm(yobs) = proportion of m
among the kept particles
If ε is tuned so that the number of kept particles is k, then pm is a k-nearest neighbor estimate of
E[ 1{M = m} | η(yobs) ]
Approximating the posterior
probabilities of model m is a
regression problem where
the response is 1{M = m},
the covariates are the summary statistics η(y),
the loss is L2
(conditional
expectation)
The preferred method to approximate the posterior probabilities in DIYABC is a local multinomial regression.
Ticklish if dim(η(y)) is large, or if the summary statistics are highly correlated.
Choosing between hidden random fields
Choosing between dependency graphs: 4 or 8 neighbours?
Models. α, β ∼ prior
z | β ∼ Potts on G4 or G8 with interaction β
y | z, α ∼ ∏i P(yi | zi, α)
How to sum up the noisy y?
Without noise (directly observed field), sufficient statistics exist for the model choice issue.
With Julien Stoehr and Lionel Cucala, a method to design new summary statistics, based on a clustering of the observed data on the possible dependency graphs:
number of connected components,
size of the largest connected component,
. . .
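A sketch of such graph-based summaries computed with scipy.ndimage; a simple threshold stands in here for the actual clustering step of the method, and the field and all names are illustrative.

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
y = rng.normal(size=(100, 100))                # hypothetical noisy field on a 100x100 lattice
z_hat = y > 0                                  # crude two-class "clustering" of the data

G4 = ndimage.generate_binary_structure(2, 1)   # 4-neighbour adjacency
G8 = ndimage.generate_binary_structure(2, 2)   # 8-neighbour adjacency

for name, G in (("G4", G4), ("G8", G8)):
    labels, n_comp = ndimage.label(z_hat, structure=G)
    sizes = np.bincount(labels.ravel())[1:]    # component sizes (label 0 = background)
    print(name, "components:", n_comp, "largest:", sizes.max())
```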
Machine learning to analyse machine simulated data
ABC model choice
A) Generate a large set of
(m, φ, y) from π(m)πm(φ) fm(y|φ)
B) Infer (anything?) about m | η(y) with machine learning methods
In this machine learning perspective:
the (iid) simulations of A) form the
training set
yobs becomes a new data point
With J.-M. Marin, J.-M. Cornuet, A.
Estoup, M. Gautier and C. P. Robert
Predicting m is a classification
problem
Computing π(m|η(y)) is a
regression problem
It is well known that classification is much simpler than regression (because of the dimension of the object we infer).
Why compute π(m | η(y)) if we know that
π(m | y) ≠ π(m | η(y))?
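A sketch of step B) with a random forest classifier from scikit-learn; the array names and shapes are placeholders for the reference table simulated in step A).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def rf_model_choice(eta_train, m_train, eta_obs, n_trees=500):
    """eta_train: (n_sim, d) summaries simulated in A), m_train: (n_sim,) model indices,
    eta_obs: (d,) observed summaries. Returns the selected model, the out-of-bag
    misclassification rate (an estimate of the prior error rate) and the importances
    of the summary statistics."""
    rf = RandomForestClassifier(n_estimators=n_trees, oob_score=True, n_jobs=-1)
    rf.fit(eta_train, m_train)
    selected = rf.predict(eta_obs.reshape(1, -1))[0]
    prior_error = 1.0 - rf.oob_score_
    return selected, prior_error, rf.feature_importances_
```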
An example with random forest on human SNP data
Out of Africa
6 scenarios, 6 models
Observed data. 4 populations, 30 individuals per population; 10,000 genotyped SNPs from the 1000 Genomes Project
Random forest trained on 40,000 simulations (112 summary statistics)
predicts the model which supports
a single out-of-Africa colonization
event,
a secondary split between European
and Asian lineages and
a recent admixture for Americans
with African origin
Confidence in the selected model?
Benefits of random forests?
1 Can find the relevant statistics in a
large set of statistics (112) to
discriminate models
2 Lower prior misclassification error
(≈ 6%) than other methods (ABC, i.e.
k-nn ≈ 18%)
3 Supply a similarity measure to
compare η(y) and η(yobs)
Confidence in the selected model?
Compute the average of the
misclassification error over an ABC
approximation of the predictive (∗). Here,
≤ 0.1%
(∗) π(m, φ, y | ηobs) = π(m | ηobs)πm(φ | ηobs)fm(y | φ)
Another approximation of the likelihood
What if both
the likelihood is intractable and
we are unable to simulate a dataset in a reasonable amount of time to resort to ABC?
First answer: use pseudo-likelihoods
such as the pairwise composite likelihood
fPCL(y | φ) = ∏i<j f(yi, yj | φ)
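A minimal sketch of a pairwise composite log-likelihood on an assumed toy model (an equicorrelated Gaussian vector), not the population-genetics likelihood of the talk.

```python
import numpy as np
from itertools import combinations
from scipy.stats import multivariate_normal

def pairwise_composite_loglik(y, mu, rho):
    """Sum of bivariate normal log-densities over all pairs (i < j):
    each pair has mean (mu, mu), unit variances and correlation rho."""
    cov = np.array([[1.0, rho], [rho, 1.0]])
    return sum(
        multivariate_normal.logpdf([y[i], y[j]], mean=[mu, mu], cov=cov)
        for i, j in combinations(range(len(y)), 2)
    )
```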
Maximum composite likelihood estimators φ(y) are suitable point estimators
But the composite likelihood cannot substitute for a true likelihood in a Bayesian framework:
it leads to credible intervals which are too narrow, i.e. over-confidence in φ(y); see e.g. Ribatet et al. (2012)
Our proposal with Kerrie Mengersen and
Christian P. Robert:
use the empirical likelihood of Owen
(2001, 2011)
It relies on iid blocks in the dataset y to
reconstruct a likelihood
& permits likelihood ratio tests
confidence intervals are correct
Original aim of Owen: remove parametric
assumptions
Bayesian computation via empirical likelihood
With empirical likelihood, the parameter φ is defined by
(∗) E[ h(yb, φ) ] = 0
where
yb is one block of y,
E is the expected value according to the true distribution of the block yb,
h is a known function
E.g., if φ is the mean of an iid sample, h(yb, φ) = yb − φ
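A minimal sketch of Owen's empirical likelihood in this mean example, h(yb, φ) = yb − φ, computed through the usual dual root-finding; in the Bayesian computation, exp of this log-ratio would stand in for the likelihood (up to a constant).

```python
import numpy as np
from scipy.optimize import brentq

def log_el_ratio(x, phi):
    """Log empirical likelihood ratio for the mean of an iid sample x:
    max of sum(log(n * w_i)) over weights w_i >= 0 with sum(w_i) = 1 and
    sum(w_i * (x_i - phi)) = 0; equals -sum(log(1 + lam * (x_i - phi)))."""
    h = x - phi
    if h.min() >= 0 or h.max() <= 0:     # phi outside the convex hull of the data
        return -np.inf
    lo = -1.0 / h.max() + 1e-10          # keep 1 + lam * h_i > 0 for every i
    hi = -1.0 / h.min() - 1e-10
    lam = brentq(lambda lam: np.sum(h / (1.0 + lam * h)), lo, hi)
    return -np.sum(np.log1p(lam * h))

x = np.random.default_rng(4).normal(1.0, 1.0, size=50)
for phi in (0.8, 1.0, 1.2):
    print(phi, log_el_ratio(x, phi))     # equals 0 at the sample mean, decreases away from it
```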
In population genetics, what is (∗) with
dates of population splits
population sizes, etc. ?
A block = the genetic data at a given locus
h(yb, φ) is the pairwise composite score function, which we can explicitly compute in many situations:
h(yb, φ) = ∇φ log fPCL(yb | φ)
Benefits.
much faster than ABC (no need to simulate fake datasets)
same accuracy as ABC, or even better: no loss of information through summary statistics
An experiment
Evolutionary scenario (tree figure): three populations POP 0, POP 1 and POP 2 diverging from the MRCA, with split times τ1 and τ2.
Dataset:
50 genes per population,
100 microsat. loci
Assumptions:
Ne identical over all populations
φ = log10(θ, τ1, τ2)
non-informative prior
Comparison of ABC and EL (figure: histogram = EL, curve = ABC, vertical line = “true” parameter).