Approximate Bayesian Computation (ABC) and
             empirical likelihood

                  Christian P. Robert
   Structure and uncertainty, Bristol, Sept. 25, 2012

            Université Paris-Dauphine, IUF, & CREST
        Joint work with Kerrie L. Mengersen and P. Pudlo
Outline




Introduction

ABC

ABC as an inference machine

ABCel
Intractable likelihood



   Case of a well-defined statistical model where the likelihood
   function
                          ℓ(θ|y) = f (y_1 , . . . , y_n |θ)


       is (really!) not available in closed form
       can (easily!) be neither completed nor demarginalised
       cannot be estimated by an unbiased estimator
    ⇒ prohibits direct implementation of a generic MCMC algorithm
   like Metropolis–Hastings
Different perspectives on abc



   What is the (most) fundamental issue?
       a mere computational issue (that will eventually end up being
       solved by more powerful computers, etc., even if too costly in
       the short term)
       an inferential issue (opening opportunities for a new inference
       machine, with a different legitimacy than the classical B approach)
       a Bayesian conundrum (while inferential methods are available,
       how closely are they related to the B approach?)
Econom’ections


   Similar exploration of simulation-based and approximation
   techniques in Econometrics
       Simulated method of moments
       Method of simulated moments
       Simulated pseudo-maximum-likelihood
       Indirect inference
                                        [Gouriéroux & Monfort, 1996]

   even though the motivation is partially defined models rather than
   complex likelihoods
Indirect inference




   Minimise [in θ] a distance between estimators β̂ based on a
   pseudo-model for genuine observations and for observations
   simulated under the true model and the parameter θ.

                              [Gouriéroux, Monfort & Renault, 1993;
                              Smith, 1993; Gallant & Tauchen, 1996]
Indirect inference (PML vs. PSE)


   Example of the pseudo-maximum-likelihood (PML)

                 β̂(y) = arg max_β Σ_t log f (y_t |β, y_{1:(t−1)})

   leading to

                 arg min_θ ||β̂(y^o) − β̂(y_1(θ), . . . , y_S(θ))||²

   when
                     y_s(θ) ∼ f (y|θ) ,       s = 1, . . . , S
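
   For concreteness, a minimal Python sketch of this indirect-inference recipe
   (not from the slides): a Gaussian pseudo-model whose PML estimator β̂ is the
   sample mean and standard deviation, matched between observed and simulated
   data; the data-generating model simulate() and all names are illustrative
   assumptions.

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(0)

    def simulate(theta, n, rng):
        # hypothetical "true" model: a nonlinear transform of Gaussian noise
        return np.exp(theta * rng.standard_normal(n)) - 1.0

    def beta_hat(y):
        # auxiliary (pseudo-model) estimator: Gaussian mean and standard deviation
        return np.array([y.mean(), y.std()])

    # "observed" data, generated here for illustration with theta = 0.5
    y_obs = simulate(0.5, 1000, rng)
    b_obs = beta_hat(y_obs)

    S, n = 20, 1000
    seeds = rng.integers(0, 2**32 - 1, size=S)   # common random numbers across theta values

    def objective(theta):
        # distance between auxiliary estimators on observed and simulated data
        b_sim = np.mean([beta_hat(simulate(theta, n, np.random.default_rng(int(s))))
                         for s in seeds], axis=0)
        return float(np.sum((b_obs - b_sim) ** 2))

    res = minimize_scalar(objective, bounds=(0.01, 2.0), method="bounded")
    print("indirect-inference estimate of theta:", res.x)
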
Indirect inference (PML vs. PSE)


   Example of the pseudo-score-estimator (PSE)

                 β̂(y) = arg min_β Σ_t [ ∂ log f/∂β (y_t |β, y_{1:(t−1)}) ]²

   leading to

                    arg min_θ ||β̂(y^o) − β̂(y_1(θ), . . . , y_S(θ))||²

   when
                        y_s(θ) ∼ f (y|θ) ,       s = 1, . . . , S
Consistent indirect inference



           “...in order to get a unique solution the dimension of
       the auxiliary parameter β must be larger than or equal to
       the dimension of the initial parameter θ. If the problem is
       just identified the different methods become easier...”

   Consistency depending on the criterion and on the asymptotic
   identifiability of θ
                                 [Gouriéroux & Monfort, 1996, p. 66]
Choice of pseudo-model




   Arbitrariness of pseudo-model
   Pick the pseudo-model such that
    1. β̂(θ) is not flat (i.e. sensitive to changes in θ)
    2. β̂(θ) is not dispersed (i.e. robust against changes in y_s(θ))
                                          [Frigessi & Heggland, 2004]
Approximate Bayesian computation



 Introduction

 ABC
   Genesis of ABC
   ABC basics
   Advances and interpretations
   ABC as knn

 ABC as an inference machine

 ABCel
Genetic background of ABC



    skip genetics


   ABC is a recent computational technique that only requires being
   able to sample from the likelihood f (·|θ)
   This technique stemmed from population genetics models, about
   15 years ago, and population geneticists still contribute
   significantly to methodological developments of ABC.
                              [Griffiths et al., 1997; Tavaré et al., 1999]
Demo-genetic inference



   Each model is characterized by a set of parameters θ that cover
   historical (divergence times, admixture times, ...), demographic
   (population sizes, admixture rates, migration rates, ...) and genetic
   (mutation rates, ...) factors
   The goal is to estimate these parameters from a dataset of
   polymorphism (DNA sample) y observed at the present time

   Problem:
   most of the time, we cannot calculate the likelihood of the
   polymorphism data f (y|θ)...
Neutral model at a given microsatellite locus, in a closed
panmictic population at equilibrium


   Kingman's genealogy: when the time axis is normalized,
   T (k) ∼ Exp(k(k − 1)/2)

   Mutations according to the Simple stepwise Mutation Model (SMM)
       • dates of the mutations ∼ Poisson process with intensity θ/2
         over the branches
       • MRCA = 100
       • independent mutations: ±1 with probability 1/2

   Observations: leaves of the tree
                      θ̂ = ?

   [Figure: genealogical tree of a sample of 8 genes]
Much more interesting models. . .

       several independent loci
       Independent gene genealogies and mutations
       different populations
       linked by an evolutionary scenario made of divergences,
       admixtures, migrations between populations, etc.
       larger sample size
       usually between 50 and 100 genes

   [Figure: a typical evolutionary scenario relating POP 0, POP 1 and
   POP 2, with divergence times τ1 and τ2 back to their MRCA]
Intractable likelihood



   Missing (too missing!) data structure:

                      f (y|θ) = ∫_G f (y|G , θ) f (G |θ) dG

   cannot be computed in a manageable way...
   The genealogies are considered as nuisance parameters
       This modelling clearly differs from the phylogenetic perspective
       where the tree is the parameter of interest.
not-so-obvious ancestry...




                     You went to school to learn, girl (. . . )
                     Why 2 plus 2 makes four
                     Now, now, now, I’m gonna teach you (. . . )

                     All you gotta do is repeat after me!
                     A, B, C!
                     It’s easy as 1, 2, 3!
                     Or simple as Do, Re, Mi! (. . . )
A?B?C?




    A stands for approximate
    [wrong likelihood /
    picture]
    B stands for Bayesian
    C stands for computation
    [producing a parameter
    sample]
A?B?C?


    A stands for approximate [wrong likelihood / picture]
    B stands for Bayesian
    C stands for computation [producing a parameter sample]

    [Figure: density estimates of the θ posterior from repeated ABC runs,
    each panel labelled with its effective sample size, e.g. ESS = 108.9,
    81.48, 105.2, ...]
How Bayesian is aBc?




   Could we turn the resolution into a Bayesian answer?
       ideally so (not meaningful: requires an ∞-ly powerful computer)
       asymptotically so (when the sample size goes to ∞: meaningful?)
       approximation error unknown (w/o costly simulation)
       true Bayes for the wrong model (formal and artificial)
       true Bayes for an estimated likelihood (back to econometrics?)
Intractable likelihood



Back to stage zero: what can we do
when a likelihood function f (y|θ) is
well-defined but impossible / too
costly to compute...?
    MCMC cannot be implemented!
    shall we give up Bayesian
    inference altogether?!
    or settle for an almost Bayesian
    inference/picture...?
ABC methodology

  Bayesian setting: target is π(θ)f (x|θ)
  When likelihood f (x|θ) not in closed form, likelihood-free rejection
  technique:
  Foundation
  For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps
  jointly simulating
                       θ' ∼ π(θ) ,   z ∼ f (z|θ') ,
  until the auxiliary variable z is equal to the observed value, z = y,
  then the selected
                                 θ' ∼ π(θ|y)

           [Rubin, 1984; Diggle & Gratton, 1984; Tavaré et al., 1997]
A as A...pproximative



   When y is a continuous random variable, strict equality z = y is
   replaced with a tolerance zone

                               ρ(y, z) ≤ ε

   where ρ is a distance
   Output distributed from

                π(θ) Pθ {ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)

                                               [Pritchard et al., 1999]
ABC algorithm


  In most implementations, further degree of A...pproximation:

  Algorithm 1 Likelihood-free rejection sampler
    for i = 1 to N do
      repeat
         generate θ' from the prior distribution π(·)
         generate z from the likelihood f (·|θ')
      until ρ{η(z), η(y)} ≤ ε
      set θi = θ'
    end for

  where η(y) defines a (not necessarily sufficient) statistic
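
  A minimal runnable sketch of Algorithm 1, assuming a toy normal-mean model so
  that prior, simulator, summary statistic and distance are all explicit; the
  helper names (prior_sample, simulate, eta, rho) are illustrative, not from
  the slides.

    import numpy as np

    rng = np.random.default_rng(1)

    # toy setup: y_i ~ N(theta, 1), theta ~ N(0, 10^2), summary = sample mean
    y_obs = rng.normal(2.0, 1.0, size=50)

    def prior_sample():            # draw theta' ~ pi(.)
        return rng.normal(0.0, 10.0)

    def simulate(theta, n):        # draw z ~ f(.|theta')
        return rng.normal(theta, 1.0, size=n)

    def eta(x):                    # summary statistic (here sufficient)
        return x.mean()

    def rho(a, b):                 # distance between summaries
        return abs(a - b)

    def abc_rejection(N, eps):
        sample = []
        for _ in range(N):
            while True:
                theta = prior_sample()
                z = simulate(theta, len(y_obs))
                if rho(eta(z), eta(y_obs)) <= eps:   # accept when within tolerance
                    sample.append(theta)
                    break
        return np.array(sample)

    post = abc_rejection(N=500, eps=0.1)
    print(post.mean(), post.std())
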
Output


  The likelihood-free algorithm samples from the marginal in z of:

     π_ε(θ, z|y) = π(θ)f (z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ)f (z|θ) dz dθ ,

  where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
  The idea behind ABC is that the summary statistics coupled with a
  small tolerance should provide a good approximation of the
  posterior distribution:

                π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|y) .


                                                              ...does it?!
Output

  The likelihood-free algorithm samples from the marginal in z of:

     π_ε(θ, z|y) = π(θ)f (z|θ) I_{A_{ε,y}}(z) / ∫_{A_{ε,y}×Θ} π(θ)f (z|θ) dz dθ ,

  where A_{ε,y} = {z ∈ D | ρ(η(z), η(y)) < ε}.
  The idea behind ABC is that the summary statistics coupled with a
  small tolerance should provide a good approximation of the
  restricted posterior distribution:

                π_ε(θ|y) = ∫ π_ε(θ, z|y) dz ≈ π(θ|η(y)) .


                                                               Not so good..!
    skip convergence details!
Convergence of ABC


  What happens when ε → 0?
  For B ⊂ Θ, we have

   ∫_B [ ∫_{A_{ε,y}} f (z|θ) dz / ∫_{A_{ε,y}×Θ} π(θ)f (z|θ) dz dθ ] π(θ) dθ
     = ∫_{A_{ε,y}} [ ∫_B f (z|θ)π(θ) dθ / ∫_{A_{ε,y}×Θ} π(θ)f (z|θ) dz dθ ] dz
     = ∫_{A_{ε,y}} [ ∫_B f (z|θ)π(θ) dθ / m(z) ] [ m(z) / ∫_{A_{ε,y}×Θ} π(θ)f (z|θ) dz dθ ] dz
     = ∫_{A_{ε,y}} π(B|z) [ m(z) / ∫_{A_{ε,y}×Θ} π(θ)f (z|θ) dz dθ ] dz

  which indicates convergence for a continuous π(B|z).
Convergence (do not attempt!)


   ...and the above does not apply to insufficient statistics:
   If η(y) is not a sufficient statistic, the best one can hope for is

                         π(θ|η(y)) ,   not π(θ|y)

   If η(y) is an ancillary statistic, the whole information contained in
   y is lost! and the “best” one can “hope” for is

                             π(θ|η(y)) = π(θ)

                                                                Bummer!!!
MA example


  Inference on the parameters of a MA(q) model

                 x_t = ε_t + Σ_{i=1}^q ϑ_i ε_{t−i} ,    ε_t i.i.d. white noise

    bypass MA illustration

  Simple prior: uniform over the inverse [real and complex] roots in

                        Q(u) = 1 − Σ_{i=1}^q ϑ_i u^i

  under the identifiability conditions
MA example




  Inference on the parameters of a MA(q) model

                 x_t = ε_t + Σ_{i=1}^q ϑ_i ε_{t−i} ,    ε_t i.i.d. white noise

    bypass MA illustration

  Simple prior: uniform prior over the identifiability zone in the
  parameter space, i.e. the triangle for MA(2)
MA example (2)

  ABC algorithm thus made of
    1. picking a new value (ϑ1 , ϑ2 ) in the triangle
    2. generating an iid sequence (ε_t)_{−q<t≤T}
    3. producing a simulated series (x'_t)_{1≤t≤T}
  Distance: basic distance between the series

                ρ((x_t)_{1≤t≤T} , (x'_t)_{1≤t≤T}) = Σ_{t=1}^T (x_t − x'_t)²

  or distance between summary statistics like the q = 2
  autocorrelations
                               τ_j = Σ_{t=j+1}^T x_t x_{t−j}
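
  A compact sketch of this ABC scheme for an MA(2) model, assuming a uniform
  draw over the identifiability triangle and the autocorrelation-type summaries
  above; the triangle parameterisation and helper names are illustrative
  assumptions.

    import numpy as np

    rng = np.random.default_rng(2)
    T = 200

    def draw_theta():
        # uniform draw over an MA(2) identifiability triangle (assumed parameterisation)
        while True:
            th1, th2 = rng.uniform(-2, 2), rng.uniform(-1, 1)
            if th1 + th2 > -1 and th1 - th2 < 1:
                return th1, th2

    def simulate_ma2(th1, th2, T):
        eps = rng.standard_normal(T + 2)               # (eps_t) for -q < t <= T
        return eps[2:] + th1 * eps[1:-1] + th2 * eps[:-2]

    def summaries(x):
        # q = 2 autocorrelation-type statistics tau_1, tau_2
        return np.array([np.sum(x[1:] * x[:-1]), np.sum(x[2:] * x[:-2])])

    x_obs = simulate_ma2(0.6, 0.2, T)                  # stand-in for the observed series
    s_obs = summaries(x_obs)

    N, alpha = 20_000, 0.01
    draws = np.array([draw_theta() for _ in range(N)])
    dist = np.array([np.sum((summaries(simulate_ma2(t1, t2, T)) - s_obs) ** 2)
                     for t1, t2 in draws])
    eps_tol = np.quantile(dist, alpha)                 # tolerance as an empirical quantile
    posterior = draws[dist <= eps_tol]
    print("posterior means:", posterior.mean(axis=0))
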
Comparison of distance impact




   Impact of tolerance on ABC sample against either distance
   (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
Comparison of distance impact


   [Figure: histograms of the ABC samples of θ1 and θ2 for the MA(2)
   model under the four tolerance levels]

   Impact of tolerance on ABC sample against either distance
   (ε = 100%, 10%, 1%, 0.1%) for an MA(2) model
Comments




      Role of distance paramount (because ε ≠ 0)
      Scaling of the components of η(y) is also determinant
      ε matters little if “small enough”
      representative of the “curse of dimensionality”
      small is beautiful!
      the data as a whole may be paradoxically weakly informative
      for ABC
ABC (simul’) advances


                         how approximative is ABC?                ABC as knn

   Simulating from the prior is often poor in efficiency
   Either modify the proposal distribution on θ to increase the density
   of x's within the vicinity of y ...
        [Marjoram et al., 2003; Bortot et al., 2007; Sisson et al., 2007]

   ...or view the problem as conditional density estimation
   and develop techniques to allow for a larger ε
                                               [Beaumont et al., 2002]

   ...or even include ε in the inferential framework [ABCµ]
                                                     [Ratmann et al., 2009]
ABC-NP


  Better usage of [prior] simulations by
  adjustment: instead of throwing away
  θ' such that ρ(η(z), η(y)) > ε, replace
  θ's with locally regressed transforms

        θ* = θ' − {η(z) − η(y)}ᵀ β̂
                                             [Csilléry et al., TEE, 2010]

    where β̂ is obtained by [NP] weighted least square regression on
    (η(z) − η(y)) with weights

                            Kδ {ρ(η(z), η(y))}

                                     [Beaumont et al., 2002, Genetics]
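
   A minimal sketch of this local-linear adjustment in the spirit of Beaumont
   et al. (2002), assuming a scalar θ and summaries stored as arrays; the
   Epanechnikov kernel choice and the names theta, summ, s_obs are illustrative.

    import numpy as np

    def abc_regression_adjust(theta, summ, s_obs, delta):
        """Local linear adjustment theta* = theta - {eta(z) - eta(y)}^T beta_hat."""
        theta = np.asarray(theta, float)                       # accepted parameter draws, shape (n,)
        X = np.asarray(summ, float).reshape(len(theta), -1)    # simulated summaries, shape (n, d)
        X = X - np.asarray(s_obs, float)                       # centre at the observed summary
        u = np.linalg.norm(X, axis=1) / delta
        w = np.where(u < 1, 1 - u ** 2, 0.0)                   # Epanechnikov-type kernel K_delta
        keep = w > 0
        Xk, tk, wk = X[keep], theta[keep], w[keep]
        D = np.hstack([np.ones((Xk.shape[0], 1)), Xk])         # design matrix with intercept
        W = np.diag(wk)
        coef = np.linalg.solve(D.T @ W @ D, D.T @ W @ tk)      # weighted least squares
        beta_hat = coef[1:]
        return tk - Xk @ beta_hat, wk                          # adjusted draws and their weights

    # usage, e.g. a kernel-weighted posterior mean:
    # theta_star, w = abc_regression_adjust(theta, summ, s_obs, delta=0.5)
    # post_mean = np.average(theta_star, weights=w)
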
ABC-NP (regression)



   Also found in the subsequent literature, e.g. in Fearnhead & Prangle (2012):
   weight the simulations directly by

                            Kδ {ρ(η(z(θ)), η(y))}

   or
                       (1/S) Σ_{s=1}^S Kδ {ρ(η(z_s(θ)), η(y))}

                                        [consistent estimate of f (η|θ)]
   Curse of dimensionality: poor estimate when d = dim(η) is large...
ABC-NP (density estimation)



   Use of the kernel weights

                             Kδ {ρ(η(z(θ)), η(y))}

    leads to the NP estimate of the posterior expectation

             Σ_i θ_i Kδ {ρ(η(z(θ_i)), η(y))} / Σ_i Kδ {ρ(η(z(θ_i)), η(y))}

                                                           [Blum, JASA, 2010]
ABC-NP (density estimation)



   Use of the kernel weights

                            Kδ {ρ(η(z(θ)), η(y))}

    leads to the NP estimate of the posterior conditional density

        Σ_i K̃_b(θ_i − θ) Kδ {ρ(η(z(θ_i)), η(y))} / Σ_i Kδ {ρ(η(z(θ_i)), η(y))}

                                                      [Blum, JASA, 2010]
ABC-NP (density estimations)



    Other versions incorporating regression adjustments

        Σ_i K̃_b(θ*_i − θ) Kδ {ρ(η(z(θ_i)), η(y))} / Σ_i Kδ {ρ(η(z(θ_i)), η(y))}

    In all cases, error

       E[ĝ(θ|y)] − g(θ|y) = cb² + cδ² + O_P(b² + δ²) + O_P(1/nδ^d)

              var(ĝ(θ|y)) = (c/nbδ^d) (1 + o_P(1))

                                                          [Blum, JASA, 2010]
ABC-NP (density estimations)



    Other versions incorporating regression adjustments

        Σ_i K̃_b(θ*_i − θ) Kδ {ρ(η(z(θ_i)), η(y))} / Σ_i Kδ {ρ(η(z(θ_i)), η(y))}

    In all cases, error

       E[ĝ(θ|y)] − g(θ|y) = cb² + cδ² + O_P(b² + δ²) + O_P(1/nδ^d)

              var(ĝ(θ|y)) = (c/nbδ^d) (1 + o_P(1))

                                                     [standard NP calculations]
ABC-NCH


  Incorporating non-linearities and heteroscedasticities:

                θ* = m̂(η(y)) + [θ − m̂(η(z))] σ̂(η(y)) / σ̂(η(z))

  where
      m̂(η) estimated by non-linear regression (e.g., neural network)
      σ̂(η) estimated by non-linear regression on residuals

                     log{θ_i − m̂(η_i)}² = log σ²(η_i) + ξ_i

                                               [Blum & François, 2009]
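
  A rough sketch of this non-linear, heteroscedastic adjustment, substituting a
  simple polynomial regression for the neural network of Blum & François (2009)
  and assuming scalar summaries; names follow the earlier illustrative sketches.

    import numpy as np

    def abc_nch_adjust(theta, summ, s_obs, degree=2):
        """theta* = m(s_obs) + [theta - m(summ)] * sigma(s_obs) / sigma(summ),
        with m and sigma fitted by (polynomial) nonlinear regression."""
        theta = np.asarray(theta, float)
        s = np.asarray(summ, float)
        # conditional mean m(eta): polynomial regression of theta on the summary
        cm = np.polyfit(s, theta, degree)
        m = np.polyval(cm, s)
        m_obs = np.polyval(cm, s_obs)
        # conditional log-variance: regression of log squared residuals on the summary
        log_r2 = np.log((theta - m) ** 2 + 1e-12)
        cv = np.polyfit(s, log_r2, degree)
        sigma = np.exp(0.5 * np.polyval(cv, s))
        sigma_obs = np.exp(0.5 * np.polyval(cv, s_obs))
        return m_obs + (theta - m) * sigma_obs / sigma

    # usage: theta_adj = abc_nch_adjust(theta, summ, s_obs)
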
ABC as knn


  Practice of ABC: determine the tolerance ε as a quantile of the observed
  distances, say the 10% or 1% quantile,

                        ε = ε_N = q_α(d_1 , . . . , d_N)

      Interpretation of ε as a non-parametric bandwidth is only an
      approximation of the actual practice
                                               [Blum & François, 2010]
      ABC is a k-nearest neighbour (knn) method with k_N = N ε_N
                                    [Loftsgaarden & Quesenberry, 1965]
                                    [Biau et al., 2012, arXiv:1207.6461]
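
  In code, this quantile calibration simply keeps the k_N = αN nearest
  simulations; a small illustrative helper (assumed names, reusing the
  (θ_i, d_i) output of a prior-based ABC run such as the sketches above):

    import numpy as np

    def abc_knn_select(theta, dist, alpha=0.01):
        """Keep the simulations whose distance is below the alpha-quantile,
        i.e. roughly the k_N = alpha * N nearest neighbours of the observation."""
        theta = np.asarray(theta)
        dist = np.asarray(dist)
        eps = np.quantile(dist, alpha)          # tolerance as a quantile of the distances
        keep = dist <= eps
        return theta[keep], eps

    # usage with N prior simulations (theta_i, d_i = rho(eta(z_i), eta(y))):
    # post_sample, eps = abc_knn_select(theta, dist, alpha=0.01)
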
ABC consistency
   Provided

                k_N / log log N −→ ∞   and   k_N /N −→ 0

   as N → ∞, for almost all s_0 (with respect to the distribution of
   S), with probability 1,

                (1/k_N) Σ_{j=1}^{k_N} ϕ(θ_j) −→ E[ϕ(θ_j)|S = s_0]

                                                             [Devroye, 1982]
   Biau et al. (2012) also recall pointwise and integrated mean square
   error consistency results on the corresponding kernel estimate of
   the conditional posterior distribution, under the constraints

          k_N → ∞,     k_N /N → 0,     h_N → 0   and   h_N^p k_N → ∞,
Rates of convergence


   Further assumptions (on target and kernel) allow for precise
   (integrated mean square) convergence rates (as a power of the
   sample size N), derived from classical k-nearest neighbour
   regression, like
        when m = 1, 2, 3,  k_N ≈ N^{(p+4)/(p+8)}  and rate N^{−4/(p+8)}
        when m = 4,  k_N ≈ N^{(p+4)/(p+8)}  and rate N^{−4/(p+8)} log N
        when m > 4,  k_N ≈ N^{(p+4)/(m+p+4)}  and rate N^{−4/(m+p+4)}
                                   [Biau et al., 2012, arXiv:1207.6461]


   Only applies to sufficient summary statistics
ABC inference machine



 Introduction

 ABC

 ABC as an inference machine
   Error inc.
   Exact BC and approximate
   targets
   summary statistic

 ABCel
How much Bayesian aBc is..?




      maybe a convergent method of inference (meaningful? sufficient?
      foreign?)
      approximation error unknown (w/o simulation)
      pragmatic Bayes (there is no other solution!)
      many calibration issues (tolerance, distance, statistics)

                                          ...should Bayesians care?!
                                                   yes they should!!!

                                                                  to ABCel
ABCµ



  Idea: infer about the error ε as well as about the parameter θ:
  Use of a joint density

                f (θ, ε|y) ∝ ξ(ε|y, θ) × π_θ(θ) × π_ε(ε)

  where y is the data, and ξ(ε|y, θ) is the prior predictive density of
  ρ(η(z), η(y)) given θ and y when z ∼ f (z|θ)
  Warning! Replacement of ξ(ε|y, θ) with a non-parametric kernel
  approximation.
              [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
ABCµ details


    Multidimensional distances ρ_k (k = 1, . . . , K) and errors
    ε_k = ρ_k(η_k(z), η_k(y)), with

      ε_k ∼ ξ_k(ε|y, θ) ≈ ξ̂_k(ε|y, θ) = (1/Bh_k) Σ_b K [{ε_k − ρ_k(η_k(z_b), η_k(y))}/h_k]

    then used in replacing ξ(ε|y, θ) with min_k ξ̂_k(ε|y, θ)
    ABCµ involves the acceptance probability

      π(θ', ε') q(θ', θ) q(ε', ε) min_k ξ̂_k(ε'|y, θ') / [ π(θ, ε) q(θ, θ') q(ε, ε') min_k ξ̂_k(ε|y, θ) ]
ABCµ multiple errors




                       [© Ratmann et al., PNAS, 2009]
ABCµ for model choice




                        [© Ratmann et al., PNAS, 2009]
Wilkinson’s exact BC (not exactly!)

   ABC approximation error (i.e. non-zero tolerance) replaced with
   exact simulation from a controlled approximation to the target,
   convolution of true posterior with kernel function

               πε(θ, z|y) = π(θ) f(z|θ) Kε(y − z) / ∫ π(θ) f(z|θ) Kε(y − z) dz dθ ,

   with Kε a kernel parameterised by bandwidth ε.
                                                     [Wilkinson, 2008]

   Theorem
   The ABC algorithm based on the assumption of a randomised
   observation ỹ = y + ξ, ξ ∼ Kε, and an acceptance probability of

                              Kε(y − z)/M

   gives draws from the posterior distribution π(θ|y).
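   For illustration, a schematic rejection sampler implementing the acceptance probability
   Kε(y − z)/M of the theorem; prior_sample, simulate and kernel are hypothetical
   placeholders for the prior simulator, the model simulator and the (bounded) kernel Kε:

   import numpy as np

   def kernel_abc(y, prior_sample, simulate, kernel, M, n_draws, rng=None):
       """Accept/reject ABC with a kernel: theta from the prior, z from the model,
       accepted with probability kernel(y - z) / M, where M bounds the kernel."""
       rng = np.random.default_rng() if rng is None else rng
       draws = []
       while len(draws) < n_draws:
           theta = prior_sample(rng)
           z = simulate(theta, rng)
           if rng.uniform() < kernel(y - z) / M:
               draws.append(theta)
       return np.array(draws)

   Under the randomised-observation reading of the theorem, these accepted θ's are exact
   draws for the corresponding noisy model.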
How exact a BC?




      “Using ε to represent measurement error is
      straightforward, whereas using ε to model the model
      discrepancy is harder to conceptualize and not as
      commonly used”
                                       [Richard Wilkinson, 2008]
How exact a BC?

  Pros
         Pseudo-data from true model and observed data from noisy
         model
         Interesting perspective in that outcome is completely
         controlled
         Link with ABCµ and assuming y is observed with a
         measurement error with density Kε
         Relates to the theory of model approximation
                                               [Kennedy & O’Hagan, 2001]
  Cons
         Requires Kε to be bounded by M
         True approximation error never assessed
         Requires a modification of the standard ABC algorithm
ABC for HMMs



  Specific case of a hidden Markov model

                             Xt+1 ∼ Qθ (Xt , ·)
                             Yt+1 ∼ gθ (·|xt )

  where only y1:n^0 is observed.
                                   [Dean, Singh, Jasra, & Peters, 2011]
  Use of specific constraints, adapted to the Markov structure:

                y1 ∈ B(y1^0 , ε) × · · · × yn ∈ B(yn^0 , ε)
ABC-MLE for HMMs


  ABC-MLE defined by

         θ̂n = arg max_θ Pθ ( Y1 ∈ B(y1^0 , ε), . . . , Yn ∈ B(yn^0 , ε) )

  Exact MLE for the likelihood (same basis as Wilkinson!)

                             pθ (y1^0 , . . . , yn^0 )

  corresponding to the perturbed process

                 (xt , yt + εzt )1≤t≤n ,     zt ∼ U(B(0, 1))
                                    [Dean, Singh, Jasra, & Peters, 2011]
ABC-MLE is biased



      ABC-MLE is asymptotically (in n) biased with target

                      lε(θ) = Eθ∗ [log pθ^ε (Y1 |Y−∞:0 )]

      but ABC-MLE converges to the true value in the sense

                             lεn(θn) → l(θ)

      for all sequences (θn ) converging to θ and εn decreasing to 0
Noisy ABC-MLE



  Idea: Modify instead the data from the start

                      (y1^0 + εζ1 , . . . , yn^0 + εζn )

                                                     [see Fearnhead-Prangle]
  noisy ABC-MLE estimate

     arg max_θ Pθ ( Y1 ∈ B(y1^0 + εζ1 , ε), . . . , Yn ∈ B(yn^0 + εζn , ε) )

                                [Dean, Singh, Jasra, & Peters, 2011]
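  In code, the noisy-ABC preprocessing amounts to a one-off perturbation of the data; a
  sketch for scalar observations (so that the unit ball is just [−1, 1]):

  import numpy as np

  def perturb_data(y_obs, eps, rng=None):
      """Noisy ABC: replace each observation y_k^0 by y_k^0 + eps * zeta_k,
      with zeta_k uniform on the unit ball (here scalar, i.e. uniform on [-1, 1])."""
      rng = np.random.default_rng() if rng is None else rng
      return np.asarray(y_obs) + eps * rng.uniform(-1.0, 1.0, size=len(y_obs))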
Consistent noisy ABC-MLE




      Degrading the data improves the estimation performance:
          Noisy ABC-MLE is asymptotically (in n) consistent
          under further assumptions, the noisy ABC-MLE is
          asymptotically normal
          increase in variance of order ε^{−2}
      likely degradation in precision or computing time due to the
      lack of summary statistic [curse of dimensionality]
SMC for ABC likelihood

   Algorithm 2 SMC ABC for HMMs
     Given θ
     for k = 1, . . . , n do
       generate proposals (xk^1 , yk^1 ), . . . , (xk^N , yk^N ) from the model
       weigh each proposal with ωk^l = I_{B(yk^0 + εζk , ε)} (yk^l )
       renormalise the weights and sample the xk^l ’s accordingly
     end for
     approximate the likelihood by

              ∏_{k=1}^{n} ( ∑_{l=1}^{N} ωk^l / N )




                                    [Jasra, Singh, Martin, & McCoy, 2010]
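   A minimal Python sketch of Algorithm 2, for scalar states and observations; propagate and
   emit stand for the (hypothetical) model transitions Qθ and gθ, vectorised over the N
   particles, and zeta holds the noisy-ABC perturbations of the data:

   import numpy as np

   def smc_abc_loglik(theta, y_obs, zeta, eps, propagate, emit, N=1000, rng=None):
       """SMC approximation of the ABC likelihood of an HMM (sketch of Algorithm 2).
       At each time k, propagate N particles, emit pseudo-observations, weight them
       by the indicator of the ball B(y_k^0 + eps*zeta_k, eps) and resample."""
       rng = np.random.default_rng() if rng is None else rng
       x = np.zeros(N)                      # crude initialisation (assumption of the sketch)
       loglik = 0.0
       for k in range(len(y_obs)):
           x = propagate(x, theta, rng)     # proposals x_k^1, ..., x_k^N
           y = emit(x, theta, rng)          # matching pseudo-observations
           w = (np.abs(y - (y_obs[k] + eps * zeta[k])) <= eps).astype(float)
           if w.sum() == 0.0:
               return -np.inf               # every particle fell outside the ball
           loglik += np.log(w.mean())       # contributes sum_l w_k^l / N
           x = x[rng.choice(N, size=N, p=w / w.sum())]   # resample surviving particles
       return loglik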
Which summary?


  Fundamental difficulty of the choice of the summary statistic when
  there is no non-trivial sufficient statistic
  Starting from a large collection of available summary statistics,
  Joyce and Marjoram (2008) consider their sequential inclusion into
  the ABC target, with a stopping rule based on a likelihood ratio
  test
      Not taking into account the sequential nature of the tests
      Depends on parameterisation
      Order of inclusion matters
      likelihood ratio test?!
Which summary for model choice?



   Depending on the choice of η(·), the Bayes factor based on this
   insufficient statistic,

        B12^η (y) = ∫ π1 (θ1 ) f1^η (η(y)|θ1 ) dθ1 / ∫ π2 (θ2 ) f2^η (η(y)|θ2 ) dθ2 ,

   is consistent or not.
                                    [X, Cornuet, Marin, & Pillai, 2012]
   Consistency only depends on the range of Ei [η(y)] under both
   models.
                                [Marin, Pillai, X, & Rousseau, 2012]
Semi-automatic ABC



  Fearnhead and Prangle (2010) study ABC and the selection of the
  summary statistic in close proximity to Wilkinson’s proposal
      ABC considered as inferential method and calibrated as such
      randomised (or ‘noisy’) version of the summary statistics

                            η̃(y) = η(y) + τε

      derivation of a well-calibrated version of ABC, i.e. an
      algorithm that gives proper predictions for the distribution
      associated with this randomised summary statistic
Summary [of F&P/statistics]


      optimality of the posterior expectation

                                  E[θ|y]

      of the parameter of interest as summary statistics η(y)!
      use of the standard quadratic loss function

                           (θ − θ0 )T A(θ − θ0 ) .

      recent extension to model choice, optimality of Bayes factor

                                  B12 (y)

                                                     [F&P, ISBA 2012 talk]
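      The posterior expectation E[θ|y] is of course unavailable; in Fearnhead and Prangle's
      semi-automatic scheme it is approximated by regressing θ on (functions of) data
      simulated in a pilot run, and the fitted linear predictor is then used as η(y). A rough
      sketch of that construction, with hypothetical helpers prior_sample, simulate and
      features:

      import numpy as np

      def semi_automatic_summary(prior_sample, simulate, features, n_pilot=10_000, rng=None):
          """Pilot-run construction of eta(y) ~ E[theta | y]: simulate (theta_i, y_i)
          pairs, least-squares regress theta_i on features(y_i), and return the fitted
          linear predictor as the summary statistic."""
          rng = np.random.default_rng() if rng is None else rng
          thetas = np.array([prior_sample(rng) for _ in range(n_pilot)])
          feats = np.array([features(simulate(t, rng)) for t in thetas])
          X = np.column_stack([np.ones(len(feats)), feats])    # add an intercept
          beta, *_ = np.linalg.lstsq(X, thetas, rcond=None)    # least-squares fit
          return lambda y: np.concatenate(([1.0], features(y))) @ beta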
Conclusion



      Choice of summary statistics is paramount for ABC
      validation/performance
      At best, ABC approximates π(. | η(y))
      Model selection feasible with ABC [with caution!]
      For estimation, consistency if {θ; µ(θ) = µ0 } = θ0
      For testing consistency if
      {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅
                                                        [Marin et al., 2011]
Empirical likelihood (EL)



 Introduction

 ABC

 ABC as an inference machine

 ABCel
   ABC and EL
   Composite likelihood
   Illustrations
Empirical likelihood (EL)

   Dataset x made of n independent replicates x = (x1 , . . . , xn ) of
   some X ∼ F
   Generalized moment condition model
                          EF h(X , φ) = 0,
   where h is a known function, and φ an unknown parameter

   Corresponding empirical likelihood

                  Lel (φ|x) = max_p ∏_{i=1}^{n} pi

   for all p such that 0 ≤ pi ≤ 1,   ∑i pi = 1,   ∑i pi h(xi , φ) = 0.

                                      [Owen, 1988, Bio’ka; Owen, 2001]
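   Numerically, Lel is obtained through the Lagrangian dual: the optimal weights have the
   form pi = 1/{n(1 + λᵀ h(xi, φ))}, with λ solving ∑i h(xi, φ)/(1 + λᵀ h(xi, φ)) = 0. A rough
   Python sketch (a damped Newton solver, without Owen's safeguards for the convex-hull
   condition):

   import numpy as np

   def log_el(h):
       """Log empirical likelihood ratio log( L_el / n^-n ) = -sum_i log(1 + lam'h_i)
       for the moment condition E[h(X, phi)] = 0, given the n x d array of values
       h_i = h(x_i, phi).  Finds the Lagrange multiplier lam by minimising the
       convex dual R(lam) = -sum_i log(1 + lam'h_i) with a damped Newton method."""
       h = np.asarray(h, dtype=float)
       if h.ndim == 1:
           h = h[:, None]
       n, d = h.shape
       lam = np.zeros(d)
       for _ in range(50):
           t = 1.0 + h @ lam
           grad = -(h / t[:, None]).sum(axis=0)
           hess = (h[:, :, None] * h[:, None, :] / (t ** 2)[:, None, None]).sum(axis=0)
           step = np.linalg.solve(hess + 1e-10 * np.eye(d), -grad)
           while np.any(1.0 + h @ (lam + step) <= 0):   # keep all weights positive
               step *= 0.5
           lam = lam + step
           if np.linalg.norm(grad) < 1e-8:
               break
       return -np.log1p(h @ lam).sum()

   For a scalar mean constraint h(x, φ) = x − φ, log_el(x - phi) returns the log-EL of a
   candidate value φ (up to the constant −n log n).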
Convergence of EL [3.4]

   Theorem 3.4 Let X , Y1 , . . . , Yn be independent rv’s with common
   distribution F0 . For θ ∈ Θ, and the function h(X , θ) ∈ Rs , let
   θ0 ∈ Θ be such that
                              Var(h(Yi , θ0 ))
   is finite and has rank q > 0. If θ0 satisfies

                            E(h(X , θ0 )) = 0,

   then
                  −2 log ( Lel (θ0 |Y1 , . . . , Yn ) / n^{−n} ) → χ²_(q)
   in distribution when n → ∞.
                                                                 [Owen, 2001]
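   As a quick numerical illustration of Theorem 3.4, one can compare −2 log(Lel(θ0|Y)/n^{−n})
   with a χ² quantile, reusing the log_el sketch above for a scalar mean constraint (the
   data-generating values here are arbitrary):

   import numpy as np
   from scipy.stats import chi2

   rng = np.random.default_rng(0)
   y = rng.normal(loc=0.3, scale=1.0, size=50)
   stat = -2.0 * log_el(y - 0.3)            # h(y_i, theta_0) = y_i - 0.3, q = 1
   print(stat, chi2(df=1).ppf(0.95))        # compare with the 95% chi-square quantile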
Convergence of EL [3.4]




   “...The interesting thing about Theorem 3.4 is what is not there. It
   includes no conditions to make θ̂ a good estimate of θ0 , nor even
   conditions to ensure a unique value for θ0 , nor even that any solution θ0
   exists. Theorem 3.4 applies in the just determined, over-determined, and
   under-determined cases. When we can prove that our estimating
   equations uniquely define θ0 , and provide a consistent estimator θ̂ of it,
   then confidence regions and tests follow almost automatically through
   Theorem 3.4.”
                                                                [Owen, 2001]
Raw ABCel sampler


   We act as if EL was an exact likelihood

     for i = 1 → N do
       generate φi from the prior distribution π(·)
       set the weight ωi = Lel (φi |xobs )
     end for
     return (φi , ωi ), i = 1, . . . , N

       The output is a sample of parameters of size N with associated
       weights
                                              [Cornuet et al., 2012]
Raw ABCel sampler

   We act as if EL was an exact likelihood

     for i = 1 → N do
       generate φi from the prior distribution π(·)
       set the weight ωi = Lel (φi |xobs )
     end for
     return (φi , ωi ), i = 1, . . . , N

       Performance of the output evaluated through the effective sample
       size

                ESS = 1 / ∑_{i=1}^{N} ( ωi / ∑_{j=1}^{N} ωj )²

                                                   [Cornuet et al., 2012]
Raw ABCel sampler

   We act as if EL was an exact likelihood

     for i = 1 → N do
       generate φi from the prior distribution π(·)
       set the weight ωi = Lel (φi |xobs )
     end for
     return (φi , ωi ), i = 1, . . . , N

       Other classical sampling algorithms might be adapted to use
       EL.
       We resorted to the adaptive multiple importance sampling
       (AMIS) of Cornuet et al. to speed up computations
                                              [Cornuet et al., 2012]
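   A direct transcription of the raw ABCel sampler, reusing the log_el routine sketched
   earlier; prior_sample and h are hypothetical placeholders for the prior simulator and the
   moment function, and the ESS is the quantity defined above:

   import numpy as np

   def raw_abcel(x_obs, prior_sample, h, N=10_000, rng=None):
       """Raw ABCel: draw phi_i from the prior, weight it by the empirical
       likelihood of the observed data at phi_i, and report the weighted sample
       together with its effective sample size."""
       rng = np.random.default_rng() if rng is None else rng
       phis, logw = [], []
       for _ in range(N):
           phi = prior_sample(rng)
           phis.append(phi)
           logw.append(log_el(np.array([h(x, phi) for x in x_obs])))
       logw = np.array(logw)
       w = np.exp(logw - logw.max())          # normalise on the log scale
       w /= w.sum()
       ess = 1.0 / np.sum(w ** 2)             # ESS of the weighted sample
       return np.array(phis), w, ess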
Moment condition in population genetics?

   EL does not require a fully defined and often complex (hence
   debatable) parametric model

   Main difficulty
   Derive a constraint
                            EF h(X , φ) = 0,
   on the parameters of interest φ when X is made of the genotypes
   of the sample of individuals at a given locus

   E.g., in phylogeography, φ is composed of
       dates of divergence between populations,
       ratio of population sizes,
       mutation rates, etc.
   None of them are moments of the distribution of the allelic states
   of the sample
Moment condition in population genetics?


   EL does not require a fully defined and often complex (hence
   debatable) parametric model

   Main difficulty
   Derive a constraint
                            EF h(X , φ) = 0,
   on the parameters of interest φ when X is made of the genotypes
   of the sample of individuals at a given locus


   c h = pairwise composite scores, whose zero is the pairwise
   maximum likelihood estimator
Pairwise composite likelihood


   The intra-locus pairwise likelihood

                 ℓ2(xk |φ) = ∏_{i<j} ℓ2(xk^i , xk^j |φ)

   with xk^1 , . . . , xk^n : allelic states of the gene sample at the k-th locus

   The pairwise score function

            ∇φ log ℓ2(xk |φ) = ∑_{i<j} ∇φ log ℓ2(xk^i , xk^j |φ)

       Composite likelihoods are often much narrower than the
       original likelihood of the model

   Safe with EL because we only use position of its mode
Pairwise likelihood: a simple case

   Assumptions
       sample ⊂ closed, panmictic population at equilibrium
       marker: microsatellite
       mutation rate: θ/2

   If xk^i and xk^j are two genes of the sample, ℓ2(xk^i , xk^j |θ) depends only on
   δ = xk^i − xk^j

        ℓ2(δ|θ) = ρ(θ)^|δ| / √(1 + 2θ)     with     ρ(θ) = θ / (1 + θ + √(1 + 2θ))

   Pairwise score function

        ∂θ log ℓ2(δ|θ) = −1/(1 + 2θ) + |δ| / (θ √(1 + 2θ))
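   In code, the pairwise likelihood and score of this simple case are one-liners (δ being the
   difference in repeat numbers between the two genes):

   import numpy as np

   def rho(theta):
       return theta / (1.0 + theta + np.sqrt(1.0 + 2.0 * theta))

   def pairwise_lik_single(delta, theta):
       """l2(delta | theta) for two genes from one closed panmictic population."""
       return rho(theta) ** np.abs(delta) / np.sqrt(1.0 + 2.0 * theta)

   def pairwise_score_single(delta, theta):
       """d/dtheta log l2(delta | theta), used as a component of the moment function h."""
       return -1.0 / (1.0 + 2.0 * theta) + np.abs(delta) / (theta * np.sqrt(1.0 + 2.0 * theta))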
Pairwise likelihood: 2 diverging populations

   [Diagram: populations a and b diverging from their MRCA at time τ]

   Assumptions
       τ: divergence date of pop. a and b
       θ/2: mutation rate

   Let xk^i and xk^j be two genes coming resp. from pop. a and b
   Set δ = xk^i − xk^j .

   Then

        ℓ2(δ|θ, τ) = e^{−τθ} / √(1 + 2θ)  ∑_{k=−∞}^{+∞} ρ(θ)^|k| I_{δ−k}(τθ),

   where In(z) is the nth-order modified Bessel function of the first kind
Pairwise likelihood: 2 diverging populations

   [Diagram: populations a and b diverging from their MRCA at time τ]

   Assumptions
       τ: divergence date of pop. a and b
       θ/2: mutation rate

   Let xk^i and xk^j be two genes coming resp. from pop. a and b
   Set δ = xk^i − xk^j .

   A 2-dim score function

        ∂τ log ℓ2(δ|θ, τ) = −θ + (θ/2) [ℓ2(δ − 1|θ, τ) + ℓ2(δ + 1|θ, τ)] / ℓ2(δ|θ, τ)

        ∂θ log ℓ2(δ|θ, τ) = −τ − 1/(1 + 2θ)
                            + (τ/2) [ℓ2(δ − 1|θ, τ) + ℓ2(δ + 1|θ, τ)] / ℓ2(δ|θ, τ)
                            + q(δ|θ, τ) / ℓ2(δ|θ, τ)

   where

        q(δ|θ, τ) := e^{−τθ} ρ′(θ) / (√(1 + 2θ) ρ(θ))  ∑_{k=−∞}^{∞} |k| ρ(θ)^|k| I_{δ−k}(τθ)
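   The infinite sum can be truncated for numerical evaluation; a small sketch using scipy's
   modified Bessel function of the first kind and the rho(θ) helper from the previous sketch
   (the truncation bound K is an arbitrary choice for this illustration):

   import numpy as np
   from scipy.special import iv               # modified Bessel function I_n(z)

   def pairwise_lik_diverging(delta, theta, tau, K=200):
       """l2(delta | theta, tau) for two genes from the diverging populations,
       truncating the sum over k at |k| <= K."""
       k = np.arange(-K, K + 1)
       terms = rho(theta) ** np.abs(k) * iv(np.abs(delta - k), tau * theta)
       return np.exp(-tau * theta) / np.sqrt(1.0 + 2.0 * theta) * terms.sum()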
Example: normal posterior
            ABCel with two constraints

            [Figure: 5 × 3 grid of posterior densities of θ across replicates, with
            effective sample sizes ranging from about 73 to 134]

            Sample sizes are of 21 (column 3), 41 (column 1) and 61 (column 2)
            observations
Example: normal posterior
            ABCel with three constraints

            [Figure: 5 × 3 grid of posterior densities of θ across replicates, with
            effective sample sizes ranging from about 134 to 370]

            Sample sizes are of 21 (column 3), 41 (column 1) and 61 (column 2)
            observations
Example: Superposition of gamma processes


   Example of superposition of N renewal processes with waiting
   times τij (i = 1, . . . , N, j = 1, . . .) ∼ G(α, β), when N is unknown.
   Renewal processes

                        ζi1 = τi1 , ζi2 = ζi1 + τi2 , . . .

   with observations made of first n values of the ζij ’s,

                 z1 = min{ζij }, z2 = min{ζij ; ζij > z1 }, . . . ,

   ending with
                           zn = min{ζij ; ζij > zn−1 } .
                                            [Cox & Kartsonaki, B’ka, 2012]
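   A small simulator for this model, useful for the ABC side of the comparison below; the
   Gamma rate parameterisation of G(α, β) is an assumption of this sketch:

   import numpy as np

   def simulate_superposition(N, alpha, beta, n, rng=None):
       """First n event times z_1 < ... < z_n of a superposition of N renewal
       processes with Gamma(alpha, beta) waiting times (beta taken as a rate)."""
       rng = np.random.default_rng() if rng is None else rng
       zeta = rng.gamma(alpha, 1.0 / beta, size=N)   # next event time of each process
       z = []
       for _ in range(n):
           i = np.argmin(zeta)                        # next event over all processes
           z.append(zeta[i])
           zeta[i] += rng.gamma(alpha, 1.0 / beta)    # schedule that process's next renewal
       return np.array(z)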
Example: Superposition of gamma processes (ABC)

   Interesting testing ground for ABCel since data (zt ) neither iid nor
   Markov
   Recovery of an iid structure by
     1. simulating a pseudo-dataset, (z1 , . . . , zn ), as in regular ABC,
     2. deriving the sequence of indicators (ν1 , . . . , νn ), as
            z1 = ζν1 1 , z2 = ζν2 j2 , . . .
     3. exploiting that those indicators are distributed from the prior
        distribution on the νt ’s, leading to an iid sample of G(α, β) variables

   Comparison of ABC and ABCel posteriors
   [Figure: posterior densities of α, β and N; top row ABCel, bottom row
   regular ABC]
Pop’gen’: A first experiment

   Evolutionary scenario: two populations POP 0 and POP 1 diverging from
   their MRCA at time τ

   Dataset:
       50 genes per population,
       100 microsat. loci
   Assumptions:
       Ne identical over all populations
       φ = (log10 θ, log10 τ)
       uniform prior over (−1., 1.5) × (−1., 1.)

   Comparison of the original ABC with ABCel
   [Figure: posterior densities of log(theta) and log(tau1), ESS=7034;
   histogram = ABCel, curve = original ABC, vertical line = “true” parameter]
ABC vs. ABCel on 100 replicates of the 1st experiment

   Accuracy:
                  log10 θ               log10 τ
             ABC        ABCel      ABC        ABCel
    (1)      0.097      0.094      0.315      0.117
    (2)      0.071      0.059      0.272      0.077
    (3)      0.68        0.81       1.0        0.80

   (1) Root Mean Square Error of the posterior mean
   (2) Median Absolute Deviation of the posterior median
   (3) Coverage of the credibility interval of probability 0.8

   Computation time: on a recent 6-core computer
   (C++/OpenMP)
          ABC ≈ 4 hours
          ABCel ≈ 2 minutes
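
   For concreteness, the three accuracy criteria can be computed over the
   100 replicates roughly as follows. This is only a sketch: it assumes each
   replicate provides a weighted posterior sample together with the “true”
   parameter value, reads criterion (2) as the median over replicates of the
   absolute error of the posterior median, and uses hypothetical helper
   names rather than the code behind the tables.

       import numpy as np

       def weighted_quantile(x, w, q):
           """Quantile of a weighted sample (interpolation on the weighted CDF)."""
           order = np.argsort(x)
           x, w = np.asarray(x, float)[order], np.asarray(w, float)[order]
           cdf = np.cumsum(w) / np.sum(w)
           return np.interp(q, cdf, x)

       def accuracy_criteria(replicates, alpha=0.8):
           """replicates: list of (theta_sample, weights, true_value) triples."""
           sq_err, abs_err, covered = [], [], []
           for theta, w, truth in replicates:
               theta, w = np.asarray(theta, float), np.asarray(w, float)
               post_mean = np.sum(w * theta) / np.sum(w)
               post_median = weighted_quantile(theta, w, 0.5)
               lo = weighted_quantile(theta, w, (1 - alpha) / 2)
               hi = weighted_quantile(theta, w, (1 + alpha) / 2)
               sq_err.append((post_mean - truth) ** 2)
               abs_err.append(abs(post_median - truth))
               covered.append(lo <= truth <= hi)
           return (np.sqrt(np.mean(sq_err)),   # (1) RMSE of the posterior mean
                   np.median(abs_err),         # (2) MAD of the posterior median
                   np.mean(covered))           # (3) coverage at probability alpha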
Pop’gen’: Second experiment

   Evolutionary scenario:
       [Tree: MRCA, divergence times τ2 and τ1 ,
        sampled populations POP 0, POP 1 and POP 2]

   Dataset:
       50 genes per population,
       100 microsat. loci

   Assumptions:
       Ne identical over all populations
       φ = (log10 θ, log10 τ1 , log10 τ2 )
       non-informative uniform prior

   Comparison of the original ABC with ABCel
   [Figure: histogram = ABCel, curve = original ABC,
    vertical line = “true” parameter]
ABC vs. ABCel on 100 replicates of the 2nd experiment
   Accuracy:
                   log10 θ                 log10 τ1                   log10 τ2
              ABC        ABCel        ABC         ABCel          ABC         ABCel
    (1)      0.0059      0.0794       0.472       0.483          29.3         4.76
    (2)      0.048        0.053       0.32         0.28          4.13         3.36
    (3)       0.79        0.76        0.88         0.76          0.89         0.79


   (1) Root Mean Square Error of the posterior mean
   (2) Median Absolute Deviation of the posterior median
   (3) Coverage of the credibility interval of probability 0.8


   Computation time: on a recent 6-core computer
   (C++/OpenMP)
          ABC ≈ 6 hours
          ABCel ≈ 8 minutes
Why?

  On large datasets, ABCel gives more accurate results than ABC

  ABC simplifies the dataset through summary statistics
  Due to the large dimension of x, the original ABC algorithm
  estimates
                          π(θ | η(xobs )) ,
  where η(xobs ) is some (non-linear) projection of the observed
  dataset onto a space of smaller dimension
  → Some information is lost

  ABCel simplifies the model through a generalized moment
  condition model.
  → Here, the moment condition model is based on the pairwise
  composite likelihood
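
  To make the moment-condition view concrete, here is a minimal sketch of
  the raw ABCel weighting step for a toy scalar constraint EF [X − φ] = 0
  (the population-genetics examples use the pairwise composite score as
  constraint instead). It assumes NumPy/SciPy; the names are illustrative
  and the data are simulated, not the authors’ code or data.

      import numpy as np
      from scipy.optimize import brentq

      def log_empirical_likelihood(x, phi):
          """Log EL for the toy moment condition E[X - phi] = 0; returns -inf
          when phi lies outside the convex hull of the data."""
          h = np.asarray(x, dtype=float) - phi
          n = len(h)
          if h.min() >= 0.0 or h.max() <= 0.0:
              return -np.inf
          # p_i = 1 / (n (1 + lam h_i)), with lam solving sum_i h_i / (1 + lam h_i) = 0
          def score(lam):
              return np.sum(h / (1.0 + lam * h))
          eps = 1e-10
          lam = brentq(score, -1.0 / h.max() + eps, -1.0 / h.min() - eps)
          return -np.sum(np.log1p(lam * h)) - n * np.log(n)

      # raw ABCel sampler on a toy normal sample: prior draws, then EL weights
      rng = np.random.default_rng(1)
      xobs = rng.normal(loc=0.3, scale=1.0, size=50)
      phis = rng.uniform(-1.0, 1.0, size=1000)        # draws from a uniform prior
      logw = np.array([log_empirical_likelihood(xobs, p) for p in phis])
      w = np.exp(logw - logw.max())                   # EL weights, up to a constant
      print("weighted posterior mean:", np.sum(w * phis) / np.sum(w))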
To be continued...

More Related Content

PDF
ABC in Venezia
PDF
ABC and empirical likelihood
PDF
ABC in Varanasi
PDF
Is ABC a new empirical Bayes approach?
PDF
Parameter Uncertainty and Learning in Dynamic Financial Decisions
PDF
The Black-Litterman model in the light of Bayesian portfolio analysis
PDF
Lecture on solving1
ABC in Venezia
ABC and empirical likelihood
ABC in Varanasi
Is ABC a new empirical Bayes approach?
Parameter Uncertainty and Learning in Dynamic Financial Decisions
The Black-Litterman model in the light of Bayesian portfolio analysis
Lecture on solving1

What's hot (20)

PDF
Chapter 3 projection
PDF
Chapter 2 pertubation
PDF
Discussion of Faming Liang's talk
PDF
Monash University short course, part II
PDF
A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...
PDF
EM algorithm and its application in probabilistic latent semantic analysis
PDF
Introduction to Bootstrap and elements of Markov Chains
PDF
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
PDF
A copula-based Simulation Method for Clustered Multi-State Survival Data
PDF
MCMSki III (poster)
PDF
MCMC and likelihood-free methods
PDF
Nber slides11 lecture2
PDF
Monte Carlo Statistical Methods
PDF
Monte Carlo Statistical Methods
PDF
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...
PDF
Senior Seminar: Systems of Differential Equations
PDF
Monte Carlo Statistical Methods
PDF
Random Matrix Theory and Machine Learning - Part 1
PDF
Random Matrix Theory and Machine Learning - Part 4
PDF
Doering Savov
 
Chapter 3 projection
Chapter 2 pertubation
Discussion of Faming Liang's talk
Monash University short course, part II
A likelihood-free version of the stochastic approximation EM algorithm (SAEM)...
EM algorithm and its application in probabilistic latent semantic analysis
Introduction to Bootstrap and elements of Markov Chains
Bayesian inference for mixed-effects models driven by SDEs and other stochast...
A copula-based Simulation Method for Clustered Multi-State Survival Data
MCMSki III (poster)
MCMC and likelihood-free methods
Nber slides11 lecture2
Monte Carlo Statistical Methods
Monte Carlo Statistical Methods
Spillover Dynamics for Systemic Risk Measurement Using Spatial Financial Time...
Senior Seminar: Systems of Differential Equations
Monte Carlo Statistical Methods
Random Matrix Theory and Machine Learning - Part 1
Random Matrix Theory and Machine Learning - Part 4
Doering Savov
 
Ad

Viewers also liked (13)

PDF
Statistics symposium talk, Harvard University
PDF
folding Markov chains: the origaMCMC
PDF
ABC short course: model choice chapter
PDF
Approximate Bayesian model choice via random forests
PPTX
Monte carlo
PDF
ABC short course: survey chapter
PDF
ABC short course: introduction chapters
PDF
ABC short course: final chapters
PDF
Introducing Monte Carlo Methods with R
PDF
Simulation (AMSI Public Lecture)
PPT
Monte carlo
PPTX
Monte carlo simulation
PDF
Monte carlo simulation
Statistics symposium talk, Harvard University
folding Markov chains: the origaMCMC
ABC short course: model choice chapter
Approximate Bayesian model choice via random forests
Monte carlo
ABC short course: survey chapter
ABC short course: introduction chapters
ABC short course: final chapters
Introducing Monte Carlo Methods with R
Simulation (AMSI Public Lecture)
Monte carlo
Monte carlo simulation
Monte carlo simulation
Ad

Similar to ABC and empirical likelihood (20)

PDF
ABC & Empirical Lkd
PDF
(Approximate) Bayesian computation as a new empirical Bayes (something)?
PDF
Pittsburgh and Toronto "Halloween US trip" seminars
PDF
slides of ABC talk at i-like workshop, Warwick, May 16
PDF
[A]BCel : a presentation at ABC in Roma
PDF
Mcmc & lkd free II
PDF
NBBC15, Reyjavik, June 08, 2015
PDF
Considerate Approaches to ABC Model Selection
PDF
MUMS: Bayesian, Fiducial, and Frequentist Conference - Multidimensional Monot...
PDF
from model uncertainty to ABC
PDF
WSC 2011, advanced tutorial on simulation in Statistics
PDF
PhD defense talk slides
PDF
An investigation of inference of the generalized extreme value distribution b...
PDF
random forests for ABC model choice and parameter estimation
PDF
PDF
ABC in London, May 5, 2011
PDF
Dealing with intractability: Recent Bayesian Monte Carlo methods for dealing ...
PDF
A bit about мcmc
PDF
MUMS Opening Workshop - An Overview of Reduced-Order Models and Emulators (ED...
ABC & Empirical Lkd
(Approximate) Bayesian computation as a new empirical Bayes (something)?
Pittsburgh and Toronto "Halloween US trip" seminars
slides of ABC talk at i-like workshop, Warwick, May 16
[A]BCel : a presentation at ABC in Roma
Mcmc & lkd free II
NBBC15, Reyjavik, June 08, 2015
Considerate Approaches to ABC Model Selection
MUMS: Bayesian, Fiducial, and Frequentist Conference - Multidimensional Monot...
from model uncertainty to ABC
WSC 2011, advanced tutorial on simulation in Statistics
PhD defense talk slides
An investigation of inference of the generalized extreme value distribution b...
random forests for ABC model choice and parameter estimation
ABC in London, May 5, 2011
Dealing with intractability: Recent Bayesian Monte Carlo methods for dealing ...
A bit about мcmc
MUMS Opening Workshop - An Overview of Reduced-Order Models and Emulators (ED...

More from Christian Robert (20)

PDF
Insufficient Gibbs sampling (A. Luciano, C.P. Robert and R. Ryder)
PDF
The future of conferences towards sustainability and inclusivity
PDF
Adaptive Restore algorithm & importance Monte Carlo
PDF
Asymptotics of ABC, lecture, Collège de France
PDF
Workshop in honour of Don Poskitt and Gael Martin
PDF
discussion of ICML23.pdf
PDF
How many components in a mixture?
PDF
restore.pdf
PDF
Testing for mixtures at BNP 13
PDF
Inferring the number of components: dream or reality?
PDF
CDT 22 slides.pdf
PDF
Testing for mixtures by seeking components
PDF
discussion on Bayesian restricted likelihood
PDF
NCE, GANs & VAEs (and maybe BAC)
PDF
ABC-Gibbs
PDF
Coordinate sampler : A non-reversible Gibbs-like sampler
PDF
eugenics and statistics
PDF
Laplace's Demon: seminar #1
PDF
ABC-Gibbs
PDF
asymptotics of ABC
Insufficient Gibbs sampling (A. Luciano, C.P. Robert and R. Ryder)
The future of conferences towards sustainability and inclusivity
Adaptive Restore algorithm & importance Monte Carlo
Asymptotics of ABC, lecture, Collège de France
Workshop in honour of Don Poskitt and Gael Martin
discussion of ICML23.pdf
How many components in a mixture?
restore.pdf
Testing for mixtures at BNP 13
Inferring the number of components: dream or reality?
CDT 22 slides.pdf
Testing for mixtures by seeking components
discussion on Bayesian restricted likelihood
NCE, GANs & VAEs (and maybe BAC)
ABC-Gibbs
Coordinate sampler : A non-reversible Gibbs-like sampler
eugenics and statistics
Laplace's Demon: seminar #1
ABC-Gibbs
asymptotics of ABC

Recently uploaded (20)

PDF
Weekly quiz Compilation Jan -July 25.pdf
DOCX
Cambridge-Practice-Tests-for-IELTS-12.docx
PDF
Paper A Mock Exam 9_ Attempt review.pdf.
PDF
Uderstanding digital marketing and marketing stratergie for engaging the digi...
PDF
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
PDF
Empowerment Technology for Senior High School Guide
PDF
CISA (Certified Information Systems Auditor) Domain-Wise Summary.pdf
PPTX
CHAPTER IV. MAN AND BIOSPHERE AND ITS TOTALITY.pptx
PDF
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
PPTX
A powerpoint presentation on the Revised K-10 Science Shaping Paper
PDF
BP 704 T. NOVEL DRUG DELIVERY SYSTEMS (UNIT 2).pdf
PDF
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
PDF
LDMMIA Reiki Yoga Finals Review Spring Summer
PDF
Hazard Identification & Risk Assessment .pdf
PDF
Practical Manual AGRO-233 Principles and Practices of Natural Farming
PDF
advance database management system book.pdf
PPTX
TNA_Presentation-1-Final(SAVE)) (1).pptx
PDF
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
PPTX
History, Philosophy and sociology of education (1).pptx
PDF
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
Weekly quiz Compilation Jan -July 25.pdf
Cambridge-Practice-Tests-for-IELTS-12.docx
Paper A Mock Exam 9_ Attempt review.pdf.
Uderstanding digital marketing and marketing stratergie for engaging the digi...
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
Empowerment Technology for Senior High School Guide
CISA (Certified Information Systems Auditor) Domain-Wise Summary.pdf
CHAPTER IV. MAN AND BIOSPHERE AND ITS TOTALITY.pptx
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
A powerpoint presentation on the Revised K-10 Science Shaping Paper
BP 704 T. NOVEL DRUG DELIVERY SYSTEMS (UNIT 2).pdf
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
LDMMIA Reiki Yoga Finals Review Spring Summer
Hazard Identification & Risk Assessment .pdf
Practical Manual AGRO-233 Principles and Practices of Natural Farming
advance database management system book.pdf
TNA_Presentation-1-Final(SAVE)) (1).pptx
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
History, Philosophy and sociology of education (1).pptx
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα

ABC and empirical likelihood

  • 1. Approximate Bayesian Computation (ABC) and empirical likelihood Christian P. Robert Structure and uncertainty, Bristol, Sept. 25, 2012 Universit´ Paris-Dauphine, IuF, & CREST e Joint work with Kerrie L. Mengersen and P. Pudlo
  • 2. Outline Introduction ABC ABC as an inference machine ABCel
  • 3. Intractable likelihood Case of a well-defined statistical model where the likelihood function (θ|y) = f (y1 , . . . , yn |θ) is (really!) not available in closed form can (easily!) be neither completed nor demarginalised cannot be estimated by an unbiased estimator c Prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings
  • 4. Intractable likelihood Case of a well-defined statistical model where the likelihood function (θ|y) = f (y1 , . . . , yn |θ) is (really!) not available in closed form can (easily!) be neither completed nor demarginalised cannot be estimated by an unbiased estimator c Prohibits direct implementation of a generic MCMC algorithm like Metropolis–Hastings
  • 5. Different perspectives on abc What is the (most) fundamental issue? a mere computational issue (that will eventually end up being solved by more powerful computers, &tc, even if too costly in the short term) an inferential issue (opening opportunities for new inference machine, with different legitimity than classical B approach) a Bayesian conundrum (while inferencial methods available, how closely related to the B approach?)
  • 6. Different perspectives on abc What is the (most) fundamental issue? a mere computational issue (that will eventually end up being solved by more powerful computers, &tc, even if too costly in the short term) an inferential issue (opening opportunities for new inference machine, with different legitimity than classical B approach) a Bayesian conundrum (while inferencial methods available, how closely related to the B approach?)
  • 7. Different perspectives on abc What is the (most) fundamental issue? a mere computational issue (that will eventually end up being solved by more powerful computers, &tc, even if too costly in the short term) an inferential issue (opening opportunities for new inference machine, with different legitimity than classical B approach) a Bayesian conundrum (while inferencial methods available, how closely related to the B approach?)
  • 8. Econom’ections Similar exploration of simulation-based and approximation techniques in Econometrics Simulated method of moments Method of simulated moments Simulated pseudo-maximum-likelihood Indirect inference [Gouri´roux & Monfort, 1996] e even though motivation is partly-defined models rather than complex likelihoods
  • 9. Econom’ections Similar exploration of simulation-based and approximation techniques in Econometrics Simulated method of moments Method of simulated moments Simulated pseudo-maximum-likelihood Indirect inference [Gouri´roux & Monfort, 1996] e even though motivation is partly-defined models rather than complex likelihoods
  • 10. Indirect inference ^ Minimise [in θ] a distance between estimators β based on a pseudo-model for genuine observations and for observations simulated under the true model and the parameter θ. [Gouri´roux, Monfort, & Renault, 1993; e Smith, 1993; Gallant & Tauchen, 1996]
  • 11. Indirect inference (PML vs. PSE) Example of the pseudo-maximum-likelihood (PML) ^ β(y) = arg max log f (yt |β, y1:(t−1) ) β t leading to arg min ||β(yo ) − β(y1 (θ), . . . , yS (θ))||2 ^ ^ θ when ys (θ) ∼ f (y|θ) s = 1, . . . , S
  • 12. Indirect inference (PML vs. PSE) Example of the pseudo-score-estimator (PSE) 2 ∂ log f ^ β(y) = arg min (yt |β, y1:(t−1) ) β t ∂β leading to arg min ||β(yo ) − β(y1 (θ), . . . , yS (θ))||2 ^ ^ θ when ys (θ) ∼ f (y|θ) s = 1, . . . , S
  • 13. Consistent indirect inference “...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier...” Consistency depending on the criterion and on the asymptotic identifiability of θ [Gouri´roux & Monfort, 1996, p. 66] e
  • 14. Consistent indirect inference “...in order to get a unique solution the dimension of the auxiliary parameter β must be larger than or equal to the dimension of the initial parameter θ. If the problem is just identified the different methods become easier...” Consistency depending on the criterion and on the asymptotic identifiability of θ [Gouri´roux & Monfort, 1996, p. 66] e
  • 15. Choice of pseudo-model Arbitrariness of pseudo-model Pick model such that ^ 1. β(θ) not flat (i.e. sensitive to changes in θ) ^ 2. β(θ) not dispersed (i.e. robust agains changes in ys (θ)) [Frigessi & Heggland, 2004]
  • 16. Approximate Bayesian computation Introduction ABC Genesis of ABC ABC basics Advances and interpretations ABC as knn ABC as an inference machine ABCel
  • 17. Genetic background of ABC skip genetics ABC is a recent computational technique that only requires being able to sample from the likelihood f (·|θ) This technique stemmed from population genetics models, about 15 years ago, and population geneticists still contribute significantly to methodological developments of ABC. [Griffith & al., 1997; Tavar´ & al., 1999] e
  • 18. Demo-genetic inference Each model is characterized by a set of parameters θ that cover historical (time divergence, admixture time ...), demographics (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f (y|θ)...
  • 19. Demo-genetic inference Each model is characterized by a set of parameters θ that cover historical (time divergence, admixture time ...), demographics (population sizes, admixture rates, migration rates, ...) and genetic (mutation rate, ...) factors The goal is to estimate these parameters from a dataset of polymorphism (DNA sample) y observed at the present time Problem: most of the time, we cannot calculate the likelihood of the polymorphism data f (y|θ)...
  • 20. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2 Sample of 8 genes
  • 21. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy When time axis is normalized, T (k) ∼ Exp(k(k − 1)/2) Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2
  • 22. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy When time axis is normalized, T (k) ∼ Exp(k(k − 1)/2) Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches • MRCA = 100 • independent mutations: ±1 with pr. 1/2
  • 23. Neutral model at a given microsatellite locus, in a closed panmictic population at equilibrium Kingman’s genealogy When time axis is normalized, T (k) ∼ Exp(k(k − 1)/2) Mutations according to the Simple stepwise Mutation Model (SMM) • date of the mutations ∼ Poisson process with intensity θ/2 over the branches Observations: leafs of the tree • MRCA = 100 ^ θ=? • independent mutations: ±1 with pr. 1/2
  • 24. Much more interesting models. . . several independent locus Independent gene genealogies and mutations different populations linked by an evolutionary scenario made of divergences, admixtures, migrations between populations, etc. larger sample size usually between 50 and 100 genes MRCA τ2 τ1 A typical evolutionary scenario: POP 0 POP 1 POP 2
  • 25. Intractable likelihood Missing (too missing!) data structure: f (y|θ) = f (y|G , θ)f (G |θ)dG G cannot be computed in a manageable way... The genealogies are considered as nuisance parameters This modelling clearly differs from the phylogenetic perspective where the tree is the parameter of interest.
  • 26. Intractable likelihood Missing (too missing!) data structure: f (y|θ) = f (y|G , θ)f (G |θ)dG G cannot be computed in a manageable way... The genealogies are considered as nuisance parameters This modelling clearly differs from the phylogenetic perspective where the tree is the parameter of interest.
  • 27. not-so-obvious ancestry... You went to school to learn, girl (. . . ) Why 2 plus 2 makes four Now, now, now, I’m gonna teach you (. . . ) All you gotta do is repeat after me! A, B, C! It’s easy as 1, 2, 3! Or simple as Do, Re, Mi! (. . . )
  • 28. A?B?C? A stands for approximate [wrong likelihood / picture] B stands for Bayesian C stands for computation [producing a parameter sample]
  • 29. A?B?C? A stands for approximate [wrong likelihood / picture] B stands for Bayesian C stands for computation [producing a parameter sample]
  • 30. A?B?C? ESS=108.9 ESS=81.48 ESS=105.2 3.0 2.0 2.0 Density Density Density 1.5 1.0 1.0 0.0 0.0 0.0 A stands for approximate −0.4 0.0 0.2 ESS=133.3 θ 0.4 0.6 −0.8 −0.6 −0.4 ESS=87.75 θ −0.2 0.0 −0.2 0.0 0.2 ESS=72.89 θ 0.4 0.6 0.8 3.0 2.0 4 [wrong likelihood / 2.0 Density Density Density 1.0 1.0 2 0.0 0.0 0 picture] −0.8 −0.4 ESS=116.5 θ 0.0 0.2 0.4 −0.2 0.0 0.2 ESS=103.9 θ 0.4 0.6 0.8 −0.2 0.0 ESS=126.9 θ 0.2 0.4 3.0 2.0 3.0 2.0 Density Density Density 1.0 1.5 1.0 B stands for Bayesian 0.0 0.0 0.0 −0.4 0.0 0.2 0.4 0.6 −0.4 −0.2 0.0 0.2 0.4 −0.8 −0.4 0.0 0.4 ESS=113.3 θ ESS=92.99 θ ESS=121.4 θ 3.0 2.0 C stands for computation 2.0 Density Density Density 1.5 1.0 1.0 0.0 0.0 0.0 [producing a parameter −0.6 −0.2 ESS=133.6 θ 0.2 0.6 −0.2 0.0 0.2 ESS=116.4 θ 0.4 0.6 −0.5 ESS=131.6 θ 0.0 0.5 0.0 1.0 2.0 3.0 2.0 sample] Density Density Density 1.0 1.0 0.0 0.0 −0.6 −0.2 0.2 0.4 0.6 −0.4 −0.2 0.0 0.2 0.4 −0.5 0.0 0.5
  • 31. How Bayesian is aBc? Could we turn the resolution into a Bayesian answer? ideally so (not meaningfull: requires ∞-ly powerful computer asymptotically so (when sample size goes to ∞: meaningfull?) approximation error unknown (w/o costly simulation) true Bayes for wrong model (formal and artificial) true Bayes for estimated likelihood (back to econometrics?)
  • 32. Untractable likelihood Back to stage zero: what can we do when a likelihood function f (y|θ) is well-defined but impossible / too costly to compute...? MCMC cannot be implemented! shall we give up Bayesian inference altogether?! or settle for an almost Bayesian inference/picture...?
  • 33. Untractable likelihood Back to stage zero: what can we do when a likelihood function f (y|θ) is well-defined but impossible / too costly to compute...? MCMC cannot be implemented! shall we give up Bayesian inference altogether?! or settle for an almost Bayesian inference/picture...?
  • 34. ABC methodology Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique: Foundation For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y, then the selected θ ∼ π(θ|y) [Rubin, 1984; Diggle & Gratton, 1984; Tavar´ et al., 1997] e
  • 35. ABC methodology Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique: Foundation For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y, then the selected θ ∼ π(θ|y) [Rubin, 1984; Diggle & Gratton, 1984; Tavar´ et al., 1997] e
  • 36. ABC methodology Bayesian setting: target is π(θ)f (x|θ) When likelihood f (x|θ) not in closed form, likelihood-free rejection technique: Foundation For an observation y ∼ f (y|θ), under the prior π(θ), if one keeps jointly simulating θ ∼ π(θ) , z ∼ f (z|θ ) , until the auxiliary variable z is equal to the observed value, z = y, then the selected θ ∼ π(θ|y) [Rubin, 1984; Diggle & Gratton, 1984; Tavar´ et al., 1997] e
  • 37. A as A...pproximative When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone ρ(y, z) where ρ is a distance Output distributed from def π(θ) Pθ {ρ(y, z) < } ∝ π(θ|ρ(y, z) < ) [Pritchard et al., 1999]
  • 38. A as A...pproximative When y is a continuous random variable, strict equality z = y is replaced with a tolerance zone ρ(y, z) where ρ is a distance Output distributed from def π(θ) Pθ {ρ(y, z) < } ∝ π(θ|ρ(y, z) < ) [Pritchard et al., 1999]
  • 39. ABC algorithm In most implementations, further degree of A...pproximation: Algorithm 1 Likelihood-free rejection sampler for i = 1 to N do repeat generate θ from the prior distribution π(·) generate z from the likelihood f (·|θ ) until ρ{η(z), η(y)} set θi = θ end for where η(y) defines a (not necessarily sufficient) statistic
  • 40. Output The likelihood-free algorithm samples from the marginal in z of: π(θ)f (z|θ)IA ,y (z) π (θ, z|y) = , A ,y ×Θ π(θ)f (z|θ)dzdθ where A ,y = {z ∈ D|ρ(η(z), η(y)) < }. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π (θ|y) = π (θ, z|y)dz ≈ π(θ|y) . ...does it?!
  • 41. Output The likelihood-free algorithm samples from the marginal in z of: π(θ)f (z|θ)IA ,y (z) π (θ, z|y) = , A ,y ×Θ π(θ)f (z|θ)dzdθ where A ,y = {z ∈ D|ρ(η(z), η(y)) < }. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π (θ|y) = π (θ, z|y)dz ≈ π(θ|y) . ...does it?!
  • 42. Output The likelihood-free algorithm samples from the marginal in z of: π(θ)f (z|θ)IA ,y (z) π (θ, z|y) = , A ,y ×Θ π(θ)f (z|θ)dzdθ where A ,y = {z ∈ D|ρ(η(z), η(y)) < }. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution: π (θ|y) = π (θ, z|y)dz ≈ π(θ|y) . ...does it?!
  • 43. Output The likelihood-free algorithm samples from the marginal in z of: π(θ)f (z|θ)IA ,y (z) π (θ, z|y) = , A ,y ×Θ π(θ)f (z|θ)dzdθ where A ,y = {z ∈ D|ρ(η(z), η(y)) < }. The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the restricted posterior distribution: π (θ|y) = π (θ, z|y)dz ≈ π(θ|η(y)) . Not so good..! skip convergence details!
  • 44. Convergence of ABC What happens when → 0? For B ⊂ Θ, we have A f (z|θ)dz f (z|θ)π(θ)dθ ,y B π(θ)dθ = dz B A ,y ×Θ π(θ)f (z|θ)dzdθ A ,y A ,y ×Θ π(θ)f (z|θ)dzdθ B f (z|θ)π(θ)dθ m(z) = dz A ,y m(z) A ,y ×Θ π(θ)f (z|θ)dzdθ m(z) = π(B|z) dz A ,y A ,y ×Θ π(θ)f (z|θ)dzdθ which indicates convergence for a continuous π(B|z).
  • 45. Convergence of ABC What happens when → 0? For B ⊂ Θ, we have A f (z|θ)dz f (z|θ)π(θ)dθ ,y B π(θ)dθ = dz B A ,y ×Θ π(θ)f (z|θ)dzdθ A ,y A ,y ×Θ π(θ)f (z|θ)dzdθ B f (z|θ)π(θ)dθ m(z) = dz A ,y m(z) A ,y ×Θ π(θ)f (z|θ)dzdθ m(z) = π(B|z) dz A ,y A ,y ×Θ π(θ)f (z|θ)dzdθ which indicates convergence for a continuous π(B|z).
  • 46. Convergence (do not attempt!) ...and the above does not apply to insufficient statistics: If η(y) is not a sufficient statistics, the best one can hope for is π(θ|η(y)) , not π(θ|y) If η(y) is an ancillary statistic, the whole information contained in y is lost!, the “best” one can “hope” for is π(θ|η(y)) = π(θ) Bummer!!!
  • 47. Convergence (do not attempt!) ...and the above does not apply to insufficient statistics: If η(y) is not a sufficient statistics, the best one can hope for is π(θ|η(y)) , not π(θ|y) If η(y) is an ancillary statistic, the whole information contained in y is lost!, the “best” one can “hope” for is π(θ|η(y)) = π(θ) Bummer!!!
  • 48. Convergence (do not attempt!) ...and the above does not apply to insufficient statistics: If η(y) is not a sufficient statistics, the best one can hope for is π(θ|η(y)) , not π(θ|y) If η(y) is an ancillary statistic, the whole information contained in y is lost!, the “best” one can “hope” for is π(θ|η(y)) = π(θ) Bummer!!!
  • 49. Convergence (do not attempt!) ...and the above does not apply to insufficient statistics: If η(y) is not a sufficient statistics, the best one can hope for is π(θ|η(y)) , not π(θ|y) If η(y) is an ancillary statistic, the whole information contained in y is lost!, the “best” one can “hope” for is π(θ|η(y)) = π(θ) Bummer!!!
  • 50. MA example Inference on the parameters of a MA(q) model q xt = t + ϑi t−i t−i i.i.d.w.n. i=1 bypass MA illustration Simple prior: uniform over the inverse [real and complex] roots in q Q(u) = 1 − ϑi u i i=1 under the identifiability conditions
  • 51. MA example Inference on the parameters of a MA(q) model q xt = t + ϑi t−i t−i i.i.d.w.n. i=1 bypass MA illustration Simple prior: uniform prior over the identifiability zone in the parameter space, i.e. triangle for MA(2)
  • 52. MA example (2) ABC algorithm thus made of 1. picking a new value (ϑ1 , ϑ2 ) in the triangle 2. generating an iid sequence ( t )−q<t T 3. producing a simulated series (xt )1 t T Distance: basic distance between the series T ρ((xt )1 t T , (xt )1 t T) = (xt − xt )2 t=1 or distance between summary statistics like the q = 2 autocorrelations T τj = xt xt−j t=j+1
  • 53. MA example (2) ABC algorithm thus made of 1. picking a new value (ϑ1 , ϑ2 ) in the triangle 2. generating an iid sequence ( t )−q<t T 3. producing a simulated series (xt )1 t T Distance: basic distance between the series T ρ((xt )1 t T , (xt )1 t T) = (xt − xt )2 t=1 or distance between summary statistics like the q = 2 autocorrelations T τj = xt xt−j t=j+1
  • 54. Comparison of distance impact Impact of tolerance on ABC sample against either distance ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 55. Comparison of distance impact 4 1.5 3 1.0 2 0.5 1 0.0 0 0.0 0.2 0.4 0.6 0.8 −2.0 −1.0 0.0 0.5 1.0 1.5 θ1 θ2 Impact of tolerance on ABC sample against either distance ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 56. Comparison of distance impact 4 1.5 3 1.0 2 0.5 1 0.0 0 0.0 0.2 0.4 0.6 0.8 −2.0 −1.0 0.0 0.5 1.0 1.5 θ1 θ2 Impact of tolerance on ABC sample against either distance ( = 100%, 10%, 1%, 0.1%) for an MA(2) model
  • 57. Comments Role of distance paramount (because = 0) Scaling of components of η(y) is also determinant matters little if “small enough” representative of “curse of dimensionality” small is beautiful! the data as a whole may be paradoxically weakly informative for ABC
  • 58. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y ... [Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger [Beaumont et al., 2002] .....or even by including in the inferential framework [ABCµ ] [Ratmann et al., 2009]
  • 59. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y ... [Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger [Beaumont et al., 2002] .....or even by including in the inferential framework [ABCµ ] [Ratmann et al., 2009]
  • 60. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y ... [Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger [Beaumont et al., 2002] .....or even by including in the inferential framework [ABCµ ] [Ratmann et al., 2009]
  • 61. ABC (simul’) advances how approximative is ABC? ABC as knn Simulating from the prior is often poor in efficiency Either modify the proposal distribution on θ to increase the density of x’s within the vicinity of y ... [Marjoram et al, 2003; Bortot et al., 2007, Sisson et al., 2007] ...or by viewing the problem as a conditional density estimation and by developing techniques to allow for larger [Beaumont et al., 2002] .....or even by including in the inferential framework [ABCµ ] [Ratmann et al., 2009]
  • 62. ABC-NP Better usage of [prior] simulations by adjustement: instead of throwing away θ such that ρ(η(z), η(y)) > , replace θ’s with locally regressed transforms θ∗ = θ − {η(z) − η(y)}T β ^ [Csill´ry et al., TEE, 2010] e ^ where β is obtained by [NP] weighted least square regression on (η(z) − η(y)) with weights Kδ {ρ(η(z), η(y))} [Beaumont et al., 2002, Genetics]
  • 63. ABC-NP (regression) Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012) : weight directly simulation by Kδ {ρ(η(z(θ)), η(y))} or S 1 Kδ {ρ(η(zs (θ)), η(y))} S s=1 [consistent estimate of f (η|θ)] Curse of dimensionality: poor estimate when d = dim(η) is large...
  • 64. ABC-NP (regression) Also found in the subsequent literature, e.g. in Fearnhead-Prangle (2012) : weight directly simulation by Kδ {ρ(η(z(θ)), η(y))} or S 1 Kδ {ρ(η(zs (θ)), η(y))} S s=1 [consistent estimate of f (η|θ)] Curse of dimensionality: poor estimate when d = dim(η) is large...
  • 65. ABC-NP (density estimation) Use of the kernel weights Kδ {ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior expectation i θi Kδ {ρ(η(z(θi )), η(y))} i Kδ {ρ(η(z(θi )), η(y))} [Blum, JASA, 2010]
  • 66. ABC-NP (density estimation) Use of the kernel weights Kδ {ρ(η(z(θ)), η(y))} leads to the NP estimate of the posterior conditional density i ˜ Kb (θi − θ)Kδ {ρ(η(z(θi )), η(y))} i Kδ {ρ(η(z(θi )), η(y))} [Blum, JASA, 2010]
  • 67. ABC-NP (density estimations) Other versions incorporating regression adjustments i ˜ Kb (θ∗ − θ)Kδ {ρ(η(z(θi )), η(y))} i i Kδ {ρ(η(z(θi )), η(y))} In all cases, error E[^ (θ|y)] − g (θ|y) = cb 2 + cδ2 + OP (b 2 + δ2 ) + OP (1/nδd ) g c var(^ (θ|y)) = g (1 + oP (1)) nbδd
  • 68. ABC-NP (density estimations) Other versions incorporating regression adjustments i ˜ Kb (θ∗ − θ)Kδ {ρ(η(z(θi )), η(y))} i i Kδ {ρ(η(z(θi )), η(y))} In all cases, error E[^ (θ|y)] − g (θ|y) = cb 2 + cδ2 + OP (b 2 + δ2 ) + OP (1/nδd ) g c var(^ (θ|y)) = g (1 + oP (1)) nbδd [Blum, JASA, 2010]
  • 69. ABC-NP (density estimations) Other versions incorporating regression adjustments i ˜ Kb (θ∗ − θ)Kδ {ρ(η(z(θi )), η(y))} i i Kδ {ρ(η(z(θi )), η(y))} In all cases, error E[^ (θ|y)] − g (θ|y) = cb 2 + cδ2 + OP (b 2 + δ2 ) + OP (1/nδd ) g c var(^ (θ|y)) = g (1 + oP (1)) nbδd [standard NP calculations]
  • 70. ABC-NCH Incorporating non-linearities and heterocedasticities: σ(η(y)) ^ θ∗ = m(η(y)) + [θ − m(η(z))] ^ ^ σ(η(z)) ^ where m(η) estimated by non-linear regression (e.g., neural network) ^ σ(η) estimated by non-linear regression on residuals ^ log{θi − m(ηi )}2 = log σ2 (ηi ) + ξi ^ [Blum & Fran¸ois, 2009] c
  • 71. ABC-NCH Incorporating non-linearities and heterocedasticities: σ(η(y)) ^ θ∗ = m(η(y)) + [θ − m(η(z))] ^ ^ σ(η(z)) ^ where m(η) estimated by non-linear regression (e.g., neural network) ^ σ(η) estimated by non-linear regression on residuals ^ log{θi − m(ηi )}2 = log σ2 (ηi ) + ξi ^ [Blum & Fran¸ois, 2009] c
  • 72. ABC as knn Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα (d1 , . . . , dN ) Interpretation of ε as non- parametric bandwidth only approximation of the actual practice [Blum & Fran¸ois, 2010] c ABC is a k-nearest neighbour (knn) method with kN = N N [Loftsgaarden & Quesenberry, 1965] [Biau et al., 2012, arxiv:1207.6461]
  • 73. ABC as knn Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα (d1 , . . . , dN ) Interpretation of ε as non- parametric bandwidth only approximation of the actual practice [Blum & Fran¸ois, 2010] c ABC is a k-nearest neighbour (knn) method with kN = N N [Loftsgaarden & Quesenberry, 1965] [Biau et al., 2012, arxiv:1207.6461]
  • 74. ABC as knn Practice of ABC: determine tolerance as a quantile on observed distances, say 10% or 1% quantile, = N = qα (d1 , . . . , dN ) Interpretation of ε as non- parametric bandwidth only approximation of the actual practice [Blum & Fran¸ois, 2010] c ABC is a k-nearest neighbour (knn) method with kN = N N [Loftsgaarden & Quesenberry, 1965] [Biau et al., 2012, arxiv:1207.6461]
  • 75. ABC consistency Provided kN / log log N −→ ∞ and kN /N −→ 0 as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1, kN 1 ϕ(θj ) −→ E[ϕ(θj )|S = s0 ] kN j=1 [Devroye, 1982] Biau et al. (2012) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints p kN → ∞, kN /N → 0, hN → 0 and hN kN → ∞,
  • 76. ABC consistency Provided kN / log log N −→ ∞ and kN /N −→ 0 as N → ∞, for almost all s0 (with respect to the distribution of S), with probability 1, kN 1 ϕ(θj ) −→ E[ϕ(θj )|S = s0 ] kN j=1 [Devroye, 1982] Biau et al. (2012) also recall pointwise and integrated mean square error consistency results on the corresponding kernel estimate of the conditional posterior distribution, under constraints p kN → ∞, kN /N → 0, hN → 0 and hN kN → ∞,
  • 77. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like 4 when m = 1, 2, 3, kN ≈ N (p+4)/(p+8) and rate N − p+8 4 when m = 4, kN ≈ N (p+4)/(p+8) and rate N − p+8 log N 4 when m > 4, kN ≈ N (p+4)/(m+p+4) and rate N − m+p+4 [Biau et al., 2012, arxiv:1207.6461] Only applies to sufficient summary statistics
  • 78. Rates of convergence Further assumptions (on target and kernel) allow for precise (integrated mean square) convergence rates (as a power of the sample size N), derived from classical k-nearest neighbour regression, like 4 when m = 1, 2, 3, kN ≈ N (p+4)/(p+8) and rate N − p+8 4 when m = 4, kN ≈ N (p+4)/(p+8) and rate N − p+8 log N 4 when m > 4, kN ≈ N (p+4)/(m+p+4) and rate N − m+p+4 [Biau et al., 2012, arxiv:1207.6461] Only applies to sufficient summary statistics
  • 79. ABC inference machine Introduction ABC ABC as an inference machine Error inc. Exact BC and approximate targets summary statistic ABCel
  • 80. How much Bayesian aBc is..? maybe a convergent method of inference (meaningful? sufficient? foreign?) approximation error unknown (w/o simulation) pragmatic Bayes (there is no other solution!) many calibration issues (tolerance, distance, statistics)
  • 81. How much Bayesian aBc is..? maybe a convergent method of inference (meaningful? sufficient? foreign?) approximation error unknown (w/o simulation) pragmatic Bayes (there is no other solution!) many calibration issues (tolerance, distance, statistics) ...should Bayesians care?!
  • 82. How much Bayesian aBc is..? maybe a convergent method of inference (meaningful? sufficient? foreign?) approximation error unknown (w/o simulation) pragmatic Bayes (there is no other solution!) many calibration issues (tolerance, distance, statistics) yes they should!!!
  • 83. How much Bayesian aBc is..? maybe a convergent method of inference (meaningful? sufficient? foreign?) approximation error unknown (w/o simulation) pragmatic Bayes (there is no other solution!) many calibration issues (tolerance, distance, statistics) to ABCel
  • 84. ABCµ Idea Infer about the error as well as about the parameter: Use of a joint density f (θ, |y) ∝ ξ( |y, θ) × πθ (θ) × π ( ) where y is the data, and ξ( |y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f (z|θ) Warning! Replacement of ξ( |y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
  • 85. ABCµ Idea Infer about the error as well as about the parameter: Use of a joint density f (θ, |y) ∝ ξ( |y, θ) × πθ (θ) × π ( ) where y is the data, and ξ( |y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f (z|θ) Warning! Replacement of ξ( |y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
  • 86. ABCµ Idea Infer about the error as well as about the parameter: Use of a joint density f (θ, |y) ∝ ξ( |y, θ) × πθ (θ) × π ( ) where y is the data, and ξ( |y, θ) is the prior predictive density of ρ(η(z), η(y)) given θ and y when z ∼ f (z|θ) Warning! Replacement of ξ( |y, θ) with a non-parametric kernel approximation. [Ratmann, Andrieu, Wiuf and Richardson, 2009, PNAS]
  • 87. ABCµ details Multidimensional distances ρk (k = 1, . . . , K ) and errors k = ρk (ηk (z), ηk (y)), with 1 k ∼ ξk ( |y, θ) ≈ ξk ( |y, θ) = ^ K [{ k −ρk (ηk (zb ), ηk (y))}/hk ] Bhk b then used in replacing ξ( |y, θ) with mink ξk ( |y, θ) ^ ABCµ involves acceptance probability π(θ , ) q(θ , θ)q( , ) mink ξk ( |y, θ ) ^ π(θ, ) q(θ, θ )q( , ) mink ξk ( |y, θ) ^
  • 88. ABCµ details Multidimensional distances ρk (k = 1, . . . , K ) and errors k = ρk (ηk (z), ηk (y)), with 1 k ∼ ξk ( |y, θ) ≈ ξk ( |y, θ) = ^ K [{ k −ρk (ηk (zb ), ηk (y))}/hk ] Bhk b then used in replacing ξ( |y, θ) with mink ξk ( |y, θ) ^ ABCµ involves acceptance probability π(θ , ) q(θ , θ)q( , ) mink ξk ( |y, θ ) ^ π(θ, ) q(θ, θ )q( , ) mink ξk ( |y, θ) ^
  • 89. ABCµ multiple errors [ c Ratmann et al., PNAS, 2009]
  • 90. ABCµ for model choice [ c Ratmann et al., PNAS, 2009]
  • 91. Wilkinson’s exact BC (not exactly!) ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function π(θ)f (z|θ)K (y − z) π (θ, z|y) = , π(θ)f (z|θ)K (y − z)dzdθ with K kernel parameterised by bandwidth . [Wilkinson, 2008] Theorem The ABC algorithm based on the assumption of a randomised observation y = y + ξ, ξ ∼ K , and an acceptance probability of ˜ K (y − z)/M gives draws from the posterior distribution π(θ|y).
  • 92. Wilkinson’s exact BC (not exactly!) ABC approximation error (i.e. non-zero tolerance) replaced with exact simulation from a controlled approximation to the target, convolution of true posterior with kernel function π(θ)f (z|θ)K (y − z) π (θ, z|y) = , π(θ)f (z|θ)K (y − z)dzdθ with K kernel parameterised by bandwidth . [Wilkinson, 2008] Theorem The ABC algorithm based on the assumption of a randomised observation y = y + ξ, ξ ∼ K , and an acceptance probability of ˜ K (y − z)/M gives draws from the posterior distribution π(θ|y).
  • 93. How exact a BC? “Using to represent measurement error is straightforward, whereas using to model the model discrepancy is harder to conceptualize and not as commonly used” [Richard Wilkinson, 2008]
  • 94. How exact a BC? Pros Pseudo-data from true model and observed data from noisy model Interesting perspective in that outcome is completely controlled Link with ABCµ and assuming y is observed with a measurement error with density K Relates to the theory of model approximation [Kennedy & O’Hagan, 2001] Cons Requires K to be bounded by M True approximation error never assessed Requires a modification of the standard ABC algorithm
  • 95. ABC for HMMs Specific case of a hidden Markov model Xt+1 ∼ Qθ (Xt , ·) Yt+1 ∼ gθ (·|xt ) where only y0 is observed. 1:n [Dean, Singh, Jasra, & Peters, 2011] Use of specific constraints, adapted to the Markov structure: y1 ∈ B(y1 , ) × · · · × yn ∈ B(yn , ) 0 0
  • 96. ABC for HMMs Specific case of a hidden Markov model Xt+1 ∼ Qθ (Xt , ·) Yt+1 ∼ gθ (·|xt ) where only y0 is observed. 1:n [Dean, Singh, Jasra, & Peters, 2011] Use of specific constraints, adapted to the Markov structure: y1 ∈ B(y1 , ) × · · · × yn ∈ B(yn , ) 0 0
  • 97. ABC-MLE for HMMs ABC-MLE defined by θn = arg max Pθ Y1 ∈ B(y1 , ), . . . , Yn ∈ B(yn , ) ^ 0 0 θ Exact MLE for the likelihood same basis as Wilkinson! 0 pθ (y1 , . . . , yn ) corresponding to the perturbed process (xt , yt + zt )1 t n zt ∼ U(B(0, 1) [Dean, Singh, Jasra, & Peters, 2011]
  • 98. ABC-MLE for HMMs ABC-MLE defined by θn = arg max Pθ Y1 ∈ B(y1 , ), . . . , Yn ∈ B(yn , ) ^ 0 0 θ Exact MLE for the likelihood same basis as Wilkinson! 0 pθ (y1 , . . . , yn ) corresponding to the perturbed process (xt , yt + zt )1 t n zt ∼ U(B(0, 1) [Dean, Singh, Jasra, & Peters, 2011]
  • 99. ABC-MLE is biased ABC-MLE is asymptotically (in n) biased with target l (θ) = Eθ∗ [log pθ (Y1 |Y−∞:0 )] but ABC-MLE converges to true value in the sense l n (θn ) → l (θ) for all sequences (θn ) converging to θ and n
  • 100. ABC-MLE is biased ABC-MLE is asymptotically (in n) biased with target l (θ) = Eθ∗ [log pθ (Y1 |Y−∞:0 )] but ABC-MLE converges to true value in the sense l n (θn ) → l (θ) for all sequences (θn ) converging to θ and n
  • 101. Noisy ABC-MLE Idea: Modify instead the data from the start 0 (y1 + ζ1 , . . . , yn + ζn ) [ see Fearnhead-Prangle ] noisy ABC-MLE estimate arg max Pθ Y1 ∈ B(y1 + ζ1 , ), . . . , Yn ∈ B(yn + ζn , ) 0 0 θ [Dean, Singh, Jasra, & Peters, 2011]
  • 102. Consistent noisy ABC-MLE Degrading the data improves the estimation performances: Noisy ABC-MLE is asymptotically (in n) consistent under further assumptions, the noisy ABC-MLE is asymptotically normal increase in variance of order −2 likely degradation in precision or computing time due to the lack of summary statistic [curse of dimensionality]
  • 103. SMC for ABC likelihood Algorithm 2 SMC ABC for HMMs Given θ for k = 1, . . . , n do 1 1 N N generate proposals (xk , yk ), . . . , (xk , yk ) from the model weigh each proposal with ωk l =I l B(yk + ζk , ) (yk ) 0 l renormalise the weights and sample the xk ’s accordingly end for approximate the likelihood by n N ωlk N k=1 l=1 [Jasra, Singh, Martin, & McCoy, 2010]
  • 104. Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistics Starting from a large collection of summary statistics is available, Joyce and Marjoram (2008) consider the sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test Not taking into account the sequential nature of the tests Depends on parameterisation Order of inclusion matters likelihood ratio test?!
  • 105. Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistics Starting from a large collection of summary statistics is available, Joyce and Marjoram (2008) consider the sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test Not taking into account the sequential nature of the tests Depends on parameterisation Order of inclusion matters likelihood ratio test?!
  • 106. Which summary? Fundamental difficulty of the choice of the summary statistic when there is no non-trivial sufficient statistics Starting from a large collection of summary statistics is available, Joyce and Marjoram (2008) consider the sequential inclusion into the ABC target, with a stopping rule based on a likelihood ratio test Not taking into account the sequential nature of the tests Depends on parameterisation Order of inclusion matters likelihood ratio test?!
  • 107. Which summary for model choice? Depending on the choice of η(·), the Bayes factor based on this insufficient statistic, η π1 (θ1 )f1η (η(y)|θ1 ) dθ1 B12 (y) = , π2 (θ2 )f2η (η(y)|θ2 ) dθ2 is consistent or not. [X, Cornuet, Marin, & Pillai, 2012] Consistency only depends on the range of Ei [η(y)] under both models. [Marin, Pillai, X, & Rousseau, 2012]
  • 108. Which summary for model choice? Depending on the choice of η(·), the Bayes factor based on this insufficient statistic, η π1 (θ1 )f1η (η(y)|θ1 ) dθ1 B12 (y) = , π2 (θ2 )f2η (η(y)|θ2 ) dθ2 is consistent or not. [X, Cornuet, Marin, & Pillai, 2012] Consistency only depends on the range of Ei [η(y)] under both models. [Marin, Pillai, X, & Rousseau, 2012]
  • 109. Semi-automatic ABC Fearnhead and Prangle (2010) study ABC and the selection of the summary statistic in close proximity to Wilkinson’s proposal ABC considered as inferential method and calibrated as such randomised (or ‘noisy’) version of the summary statistics ˜ η(y) = η(y) + τ derivation of a well-calibrated version of ABC, i.e. an algorithm that gives proper predictions for the distribution associated with this randomised summary statistic
  • 110. Summary [of F&P/statistics) optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)! use of the standard quadratic loss function (θ − θ0 )T A(θ − θ0 ) . recent extension to model choice, optimality of Bayes factor B12 (y) [F&P, ISBA 2012 talk]
  • 111. Summary [of F&P/statistics) optimality of the posterior expectation E[θ|y] of the parameter of interest as summary statistics η(y)! use of the standard quadratic loss function (θ − θ0 )T A(θ − θ0 ) . recent extension to model choice, optimality of Bayes factor B12 (y) [F&P, ISBA 2012 talk]
  • 112. Conclusion Choice of summary statistics is paramount for ABC validation/performance At best, ABC approximates π(. | η(y)) Model selection feasible with ABC [with caution!] For estimation, consistency if {θ; µ(θ) = µ0 } = θ0 For testing consistency if {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅ [Marin et al., 2011]
  • 113. Conclusion Choice of summary statistics is paramount for ABC validation/performance At best, ABC approximates π(. | η(y)) Model selection feasible with ABC [with caution!] For estimation, consistency if {θ; µ(θ) = µ0 } = θ0 For testing consistency if {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅ [Marin et al., 2011]
  • 114. Conclusion Choice of summary statistics is paramount for ABC validation/performance At best, ABC approximates π(. | η(y)) Model selection feasible with ABC [with caution!] For estimation, consistency if {θ; µ(θ) = µ0 } = θ0 For testing consistency if {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅ [Marin et al., 2011]
  • 115. Conclusion Choice of summary statistics is paramount for ABC validation/performance At best, ABC approximates π(. | η(y)) Model selection feasible with ABC [with caution!] For estimation, consistency if {θ; µ(θ) = µ0 } = θ0 For testing consistency if {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅ [Marin et al., 2011]
  • 116. Conclusion Choice of summary statistics is paramount for ABC validation/performance At best, ABC approximates π(. | η(y)) Model selection feasible with ABC [with caution!] For estimation, consistency if {θ; µ(θ) = µ0 } = θ0 For testing consistency if {µ1 (θ1 ), θ1 ∈ Θ1 } ∩ {µ2 (θ2 ), θ2 ∈ Θ2 } = ∅ [Marin et al., 2011]
  • 117. Empirical likelihood (EL) Introduction ABC ABC as an inference machine ABCel ABC and EL Composite likelihood Illustrations
  • 118. Empirical likelihood (EL) Dataset x made of n independent replicates x = (x1 , . . . , xn ) of some X ∼ F Generalized moment condition model EF h(X , φ) = 0, where h is a known function, and φ an unknown parameter Corresponding empirical likelihood n Lel (φ|x) = max pi p i=1 for all p such that 0 pi 1, pi = 1, i pi h(xi , φ) = 0. [Owen, 1988, Bio’ka; Owen, 2001]
  • 119. Empirical likelihood (EL) Dataset x made of n independent replicates x = (x1 , . . . , xn ) of some X ∼ F Generalized moment condition model EF h(X , φ) = 0, where h is a known function, and φ an unknown parameter Corresponding empirical likelihood n Lel (φ|x) = max pi p i=1 for all p such that 0 pi 1, pi = 1, i pi h(xi , φ) = 0. [Owen, 1988, Bio’ka; Owen, 2001]
  • 120. Convergence of EL [3.4] Theorem 3.4 Let X , Y1 , . . . , Yn be independent rv’s with common distribution F0 . For θ ∈ Θ, and the function h(X , θ) ∈ Rs , let θ0 ∈ Θ be such that Var(h(Yi , θ0 )) is finite and has rank q > 0. If θ0 satisfies E(h(X , θ0 )) = 0, then Lel (θ0 |Y1 , . . . , Yn ) −2 log → χ2 (q) n−n in distribution when n → ∞. [Owen, 2001]
  • 121. Convergence of EL [3.4] “...The interesting thing about Theorem 3.4 is what is not there. It ^ includes no conditions to make θ a good estimate of θ0 , nor even conditions to ensure a unique value for θ0 , nor even that any solution θ0 exists. Theorem 3.4 applies in the just determined, over-determined, and under-determined cases. When we can prove that our estimating ^ equations uniquely define θ0 , and provide a consistent estimator θ of it, then confidence regions and tests follow almost automatically through Theorem 3.4.”. [Owen, 2001]
  • 122. Raw ABCel sampler We act as if EL was an exact likelihood for i = 1 → N do generate φi from the prior distribution π(·) set the weight ωi = Lel (φi |xobs ) end for return (φi , ωi ), i = 1, . . . , N The output is sample of parameters of size N with associated weights [Cornuet et al., 2012]
  • 123. Raw ABCel sampler We act as if EL was an exact likelihood for i = 1 → N do generate φi from the prior distribution π(·) set the weight ωi = Lel (φi |xobs ) end for return (φi , ωi ), i = 1, . . . , N Performance of the output evaluated through effective sample size  2 N  N  ESS = 1 ωi ωj   i=1 j=1 [Cornuet et al., 2012]
  • 124. Raw ABCel sampler We act as if EL was an exact likelihood for i = 1 → N do generate φi from the prior distribution π(·) set the weight ωi = Lel (φi |xobs ) end for return (φi , ωi ), i = 1, . . . , N Other classical sampling algorithms might be adapted to use EL. We resorted to the adaptive multiple importance sampling (AMIS) of Cornuet to speed up computations [Cornuet et al., 2012]
  • 125. Moment condition in population genetics? EL does not require a fully defined and often complex (hence debatable) parametric model Main difficulty Derive a constraint EF h(X , φ) = 0, on the parameters of interest φ when X is made of the genotypes of the sample of individuals at a given locus E.g., in phylogeography, φ is composed of dates of divergence between populations, ratio of population sizes, mutation rates, etc. None of them are moments of the distribution of the allelic states of the sample
  • 126. Moment condition in population genetics? EL does not require a fully defined and often complex (hence debatable) parametric model Main difficulty Derive a constraint EF h(X , φ) = 0, on the parameters of interest φ when X is made of the genotypes of the sample of individuals at a given locus c h pairwise composite scores whose zero is the pairwise maximum likelihood estimator
  • 127. Pairwise composite likelihood The intra-locus pairwise likelihood j 2 (xk |φ) 2 (xk , xk |φ) i = i<j 1 n with xk , . . . , xk : allelic states of the gene sample at the k-th locus The pairwise score function j φ log 2 (xk |φ) φ log 2 (xk , xk |φ) i = i<j Composite likelihoods are often much narrower than the original likelihood of the model Safe with EL because we only use position of its mode
• 128. Pairwise likelihood: a simple case
Assumptions
    sample ⊂ closed, panmictic population at equilibrium
    marker: microsatellite
    mutation rate: θ/2
If x_k^i and x_k^j are two genes of the sample, ℓ₂(x_k^i, x_k^j|θ) depends only on δ = x_k^i − x_k^j:
    ℓ₂(δ|θ) = ρ(θ)^{|δ|} / √(1 + 2θ),    ρ(θ) = θ / (1 + θ + √(1 + 2θ))
Pairwise score function
    ∂_θ log ℓ₂(δ|θ) = −1/(1 + 2θ) + |δ| / (θ √(1 + 2θ))
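Both the pairwise likelihood and its score are immediate to code from these formulas; a minimal numpy sketch (function names are illustrative):

```python
import numpy as np

def rho(theta):
    """rho(theta) = theta / (1 + theta + sqrt(1 + 2*theta))."""
    return theta / (1.0 + theta + np.sqrt(1.0 + 2.0 * theta))

def pairwise_lik(delta, theta):
    """l2(delta | theta) = rho(theta)^|delta| / sqrt(1 + 2*theta)."""
    return rho(theta) ** np.abs(delta) / np.sqrt(1.0 + 2.0 * theta)

def pairwise_score(delta, theta):
    """d/dtheta log l2(delta | theta) = -1/(1+2*theta) + |delta|/(theta*sqrt(1+2*theta))."""
    return -1.0 / (1.0 + 2.0 * theta) + np.abs(delta) / (theta * np.sqrt(1.0 + 2.0 * theta))

# e.g. score contribution of a pair of genes differing by 3 repeats, at theta = 1
print(pairwise_score(3, 1.0))
```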
• 131. Pairwise likelihood: 2 diverging populations
[Figure: populations a and b diverging from their MRCA at time τ]
Assumptions
    τ: divergence date of pop. a and b
    θ/2: mutation rate
Let x_k^i and x_k^j be two genes coming resp. from pop. a and b; set δ = x_k^i − x_k^j
Then
    ℓ₂(δ|θ, τ) = (e^{−τθ} / √(1 + 2θ)) Σ_{k=−∞}^{+∞} ρ(θ)^{|k|} I_{δ−k}(τθ)
where I_n(z) is the nth-order modified Bessel function of the first kind
• 132. Pairwise likelihood: 2 diverging populations
[Figure: populations a and b diverging from their MRCA at time τ]
Assumptions
    τ: divergence date of pop. a and b
    θ/2: mutation rate
Let x_k^i and x_k^j be two genes coming resp. from pop. a and b; set δ = x_k^i − x_k^j
A 2-dim score function
    ∂_τ log ℓ₂(δ|θ, τ) = −θ + (θ/2) [ℓ₂(δ − 1|θ, τ) + ℓ₂(δ + 1|θ, τ)] / ℓ₂(δ|θ, τ)
    ∂_θ log ℓ₂(δ|θ, τ) = −τ − 1/(1 + 2θ) + (τ/2) [ℓ₂(δ − 1|θ, τ) + ℓ₂(δ + 1|θ, τ)] / ℓ₂(δ|θ, τ) + q(δ|θ, τ) / ℓ₂(δ|θ, τ)
where
    q(δ|θ, τ) := (e^{−τθ} / √(1 + 2θ)) (ρ'(θ)/ρ(θ)) Σ_{k=−∞}^{+∞} |k| ρ(θ)^{|k|} I_{δ−k}(τθ)
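In practice the infinite sum over k has to be truncated; a sketch of the pairwise likelihood using scipy's modified Bessel function I_n(z), where the truncation bound K is an arbitrary illustrative choice:

```python
import numpy as np
from scipy.special import iv   # modified Bessel function of the first kind I_n(z)

def rho(theta):
    return theta / (1.0 + theta + np.sqrt(1.0 + 2.0 * theta))

def pairwise_lik_divergence(delta, theta, tau, K=200):
    """l2(delta | theta, tau) for two genes sampled in populations that
    diverged tau ago, truncating the sum over k to |k| <= K."""
    k = np.arange(-K, K + 1)
    terms = rho(theta) ** np.abs(k) * iv(delta - k, tau * theta)
    return np.exp(-tau * theta) / np.sqrt(1.0 + 2.0 * theta) * terms.sum()

# e.g. probability of a difference of 2 repeats with theta = 1, tau = 0.5
print(pairwise_lik_divergence(2, 1.0, 0.5))
```

The two score components can be obtained by the same truncated sums, following the formulas on the slide.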
• 133. Example: normal posterior
ABCel with two constraints
[Figure: posterior densities of θ over 15 replicated datasets, with effective sample sizes ranging from about 73 to 134]
Sample sizes are 21 (column 3), 41 (column 1) and 61 (column 2) observations
• 134. Example: normal posterior
ABCel with three constraints
[Figure: posterior densities of θ over 15 replicated datasets, with effective sample sizes ranging from about 134 to 370]
Sample sizes are 21 (column 3), 41 (column 1) and 61 (column 2) observations
• 135. Example: Superposition of gamma processes
Example of superposition of N renewal processes with waiting times τ_ij (i = 1, ..., N, j = 1, 2, ...) ∼ G(α, β), when N is unknown.
Renewal processes
    ζ_i1 = τ_i1, ζ_i2 = ζ_i1 + τ_i2, ...
with observations made of the first n values of the ζ_ij's,
    z_1 = min{ζ_ij}, z_2 = min{ζ_ij ; ζ_ij > z_1}, ...,
ending with z_n = min{ζ_ij ; ζ_ij > z_{n−1}}.
[Cox & Kartsonaki, B'ka, 2012]
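A data-generating sketch for this model (assuming a Gamma(α, scale β) parameterisation, which is an illustrative choice): simulate each renewal process, pool and sort the event times, and keep the first n.

```python
import numpy as np

def simulate_superposition(N, alpha, beta, n, rng, max_events_per_process=1000):
    """First n event times z_1 < ... < z_n of N superposed renewal processes
    with Gamma(alpha, scale=beta) waiting times.

    Generating a generous number of waiting times per process is a shortcut:
    it is only valid when each process has passed z_n within that horizon.
    """
    taus = rng.gamma(alpha, beta, size=(N, max_events_per_process))
    zetas = np.cumsum(taus, axis=1).ravel()   # event times zeta_ij per process
    return np.sort(zetas)[:n]

rng = np.random.default_rng(2)
z = simulate_superposition(N=5, alpha=2.0, beta=1.0, n=50, rng=rng)
print(z[:5])
```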
• 136. Example: Superposition of gamma processes (ABC)
Interesting testing ground for ABCel since the data (z_t) are neither iid nor Markov
Recovery of an iid structure by
    1. simulating a pseudo-dataset (z*_1, ..., z*_n) as in regular ABC
    2. deriving the sequence of indicators (ν_1, ..., ν_n), as z*_1 = ζ_{ν_1 1}, z*_2 = ζ_{ν_2 j_2}, ...
    3. exploiting that those indicators are distributed from the prior distribution on the ν_t's, leading to an iid sample of G(α, β) variables
[Figure: comparison of ABCel (top) and regular ABC (bottom) posteriors on α, β and N]
• 138. Pop'gen': A first experiment
Evolutionary scenario: two populations POP 0 and POP 1 diverging from their MRCA at time τ
Dataset: 50 genes per population, 100 microsat. loci
Assumptions: Ne identical over all populations, φ = (log10 θ, log10 τ), uniform prior over (−1, 1.5) × (−1, 1)
Comparison of the original ABC with ABCel (ESS = 7034)
[Figure: posteriors of log(theta) and log(tau1); histogram = ABCel, curve = original ABC, vertical line = "true" parameter]
• 140. ABC vs. ABCel on 100 replicates of the 1st experiment
Accuracy:
              log10 θ              log10 τ
           ABC      ABCel       ABC      ABCel
    (1)    0.097    0.094       0.315    0.117
    (2)    0.071    0.059       0.272    0.077
    (3)    0.68     0.81        1.0      0.80
(1) Root Mean Square Error of the posterior mean
(2) Median Absolute Deviation of the posterior median
(3) Coverage of the credibility interval of probability 0.8
Computation time: on a recent 6-core computer (C++/OpenMP)
    ABC ≈ 4 hours    ABCel ≈ 2 minutes
• 141. Pop'gen': Second experiment
Evolutionary scenario: three populations POP 0, POP 1 and POP 2, with divergence times τ1 and τ2 back to the MRCA
Dataset: 50 genes per population, 100 microsat. loci
Assumptions: Ne identical over all populations, φ = (log10 θ, log10 τ1, log10 τ2), non-informative uniform prior
Comparison of the original ABC with ABCel
[Figure: posteriors; histogram = ABCel, curve = original ABC, vertical line = "true" parameter]
• 145. ABC vs. ABCel on 100 replicates of the 2nd experiment
Accuracy:
              log10 θ              log10 τ1             log10 τ2
           ABC      ABCel       ABC      ABCel       ABC      ABCel
    (1)    0.0059   0.0794      0.472    0.483       29.3     4.76
    (2)    0.048    0.053       0.32     0.28        4.13     3.36
    (3)    0.79     0.76        0.88     0.76        0.89     0.79
(1) Root Mean Square Error of the posterior mean
(2) Median Absolute Deviation of the posterior median
(3) Coverage of the credibility interval of probability 0.8
Computation time: on a recent 6-core computer (C++/OpenMP)
    ABC ≈ 6 hours    ABCel ≈ 8 minutes
• 146. Why?
On large datasets, ABCel gives more accurate results than ABC
ABC simplifies the dataset through summary statistics
Due to the large dimension of x, the original ABC algorithm estimates π(θ | η(x_obs)), where η(x_obs) is some (non-linear) projection of the observed dataset on a space with smaller dimension
→ Some information is lost
ABCel simplifies the model through a generalized moment condition model
→ Here, the moment condition model is based on the pairwise composite likelihood