Monte Carlo and quasi-Monte Carlo Integration
John D. Cook
M. D. Anderson Cancer Center
July 24, 2002
Trapezoid rule in one dimension

  Error bound proportional to product of
     Step size squared
     Second derivative of integrand
  N = number of function evaluations
  Step size h = N^(-1)
  Error proportional to N^(-2)
Simpson’s rule in one dimension

  Error bound proportional to product of
     Step size to the fourth power
     Fourth derivative of integrand
  Step size h = N^(-1)
  Error proportional to N^(-4)
  All bets are off if integrand doesn’t have
  a fourth derivative.
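The two convergence orders are easy to check numerically. A minimal sketch (my own illustration, not from the talk), integrating sin on [0, pi], where the exact answer is 2:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n subintervals."""
    h = (b - a) / n
    interior = sum(f(a + i * h) for i in range(1, n))
    return h * (0.5 * (f(a) + f(b)) + interior)

def simpson(f, a, b, n):
    """Composite Simpson's rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + i * h) for i in range(1, n, 2))
    s += 2 * sum(f(a + i * h) for i in range(2, n, 2))
    return h * s / 3

# Doubling n should cut the trapezoid error by about 2^2 = 4
# and the Simpson error by about 2^4 = 16.
e_trap = [abs(trapezoid(math.sin, 0, math.pi, n) - 2) for n in (16, 32)]
e_simp = [abs(simpson(math.sin, 0, math.pi, n) - 2) for n in (16, 32)]
```

The observed error ratios between n = 16 and n = 32 land close to 4 and 16, matching the N^(-2) and N^(-4) rates.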
Product rules
  In two dimensions, trapezoid error
  proportional to N^(-1).
  In d dimensions, trapezoid error
  proportional to N^(-2/d).
  If a 1-dimensional rule has error N^(-p), the
  d-dimensional product rule has error N^(-p/d).
Dimension in a nutshell
  Assume the number of integration
  points N is fixed, as well as the order of
  the integration rule p.
  Moving from 1 dimension to
  d dimensions divides the number of
  correct figures by d.
Monte Carlo to the rescue
  Error proportional to N^(-1/2),
  independent of dimension!
  Convergence is slow, but doesn’t get
  worse as dimension increases.
  Quadruple points to double accuracy.
How many figures can you get
with a million integration points?
Dimension   Trapezoid   Monte Carlo
1           12          3
2           6           3
3           4           3
4           3           3
6           2           3
12          1           3
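The table follows from the exponents alone, ignoring proportionality constants: with N = 10^6 points, the trapezoid product rule gives about (2/d) * log10(N) = 12/d correct figures, while Monte Carlo gives (1/2) * log10(N) = 3 in any dimension. A quick sanity check:

```python
digits = 6  # log10(N) for N = 10**6 points
for d in (1, 2, 3, 4, 6, 12):
    trap_figs = 2 * digits // d  # trapezoid error ~ N^(-2/d)
    mc_figs = digits // 2        # MC error ~ N^(-1/2), any dimension
    print(d, trap_figs, mc_figs)
```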
Fine print
  Error estimate means something
  different for product rules than for MC.
  Proportionality factors other than
  number of points very important.
  Different factors improve performance
  of the two methods.
Interpreting error bounds
  Trapezoid rule has deterministic error
  bounds: if you know an upper bound on
  the second derivative, you can bracket
  the error.
  Monte Carlo error is probabilistic.
  Roughly a 2/3 chance that the true integral
  lies within one standard deviation of the
  estimate.
Proportionality factors
  Error bound in classical methods
  depends on maximum of derivatives.
  MC error proportional to the variance of the
  function, E[f^2] – E[f]^2
Contrasting proportionality
  Classical methods improve with smooth
  integrands
  Monte Carlo doesn’t depend on
  differentiability at all, but improves with
  overall “flatness”.
Good MC, bad trapezoid

  [Plot: an integrand on roughly (1, 3) with values
  between 0.2 and 1; flat overall but not smooth.]
Good trapezoid, bad MC

  [Plot: a smooth integrand on (-3, 3) with values
  up to 8.]
Simple Monte Carlo

  If x_i is a sequence of independent samples from
  a uniform random variable on [0, 1], then

     ∫₀¹ f(x) dx ≈ (1/N) Σ f(x_i)
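In code, the simple estimator is just an average of function values at uniform samples. A minimal sketch (the integrand x^2 is my choice of illustration):

```python
import random

def mc_integrate(f, n, seed=0):
    """Simple Monte Carlo estimate of the integral of f over [0, 1]."""
    rng = random.Random(seed)
    return sum(f(rng.random()) for _ in range(n)) / n

# The integral of x**2 over [0, 1] is 1/3; the error shrinks like
# N^(-1/2), so quadrupling n roughly halves it.
est = mc_integrate(lambda x: x * x, 100_000)
```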
Importance Sampling

  Suppose X is a random variable with PDF p and x_i is a
  sequence of independent samples from X. Then

     ∫ f(x) dx = ∫ (f(x)/p(x)) p(x) dx ≈ (1/N) Σ f(x_i)/p(x_i)
Variance reduction (example)

  If an integrand f is well approximated by a PDF p that is
  easy to sample from, use the equation

     ∫ f(x) dx = ∫ (f(x)/p(x)) p(x) dx

  and apply importance sampling.

  The variance of the ratio f/p will be small, and so
  convergence will be fast.
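A sketch of the extreme case (the function names and the Beta example are my own, not the talk's): when f is exactly proportional to the sampling density p, the ratio f/p is constant and the estimator has zero variance.

```python
import random

def importance_sample(f, sample_p, pdf_p, n, seed=0):
    """Estimate the integral of f as the average of f(x)/p(x), x ~ p."""
    rng = random.Random(seed)
    return sum(f(x) / pdf_p(x) for x in (sample_p(rng) for _ in range(n))) / n

# f(x) = 2x on [0, 1] is itself the Beta(2, 1) density, so f/p == 1:
# every sample returns the exact integral, 1, and the variance is zero.
est = importance_sample(
    f=lambda x: 2 * x,
    sample_p=lambda rng: rng.betavariate(2, 1),
    pdf_p=lambda x: 2 * x,
    n=100,
)
```

In practice p only approximates the shape of f, so the variance is small rather than zero, but the convergence constant improves accordingly.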
MC Good news / Bad news
 MC doesn’t get any worse when the
 integrand is not smooth.
 MC doesn’t get any better when the
 integrand is smooth.
  MC converges like N^(-1/2) in the worst
  case.
  MC converges like N^(-1/2) in the best case.
Quasi-random vs. Pseudo-random

  Both are deterministic.
  Pseudo-random numbers mimic the
  statistical properties of truly random
  numbers.
  Quasi-random numbers mimic the
  space-filling properties of random
  numbers, and improve on them.
120 Point Comparison

  [Two scatter plots of 120 points each on the unit
  square: left, a Sobol’ sequence; right, Excel’s PRNG.]
Quasi-random pros and cons
  The asymptotic convergence rate is more like
  N^(-1) than N^(-1/2).
  Actually, it’s more like (log N)^d N^(-1).
  These bounds are very pessimistic in practice.
  QMC always beats MC eventually.
  Whether “eventually” is good enough
  depends on the problem and the particular
  QMC sequence.
MC-QMC compromise
 Randomized QMC
 Evaluate the integral using a number of randomly
 shifted QMC sequences.
 Return average of estimates as integral.
 Return standard deviation of estimates as
 error estimate.
 Maybe better than MC or QMC!
 Can view as a variance reduction technique.
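The scheme can be sketched in one dimension with a van der Corput sequence and random shifts mod 1 (my own minimal illustration, not the software described later):

```python
import random
import statistics

def van_der_corput(i, base=2):
    """Radical-inverse sequence: reverse the base-b digits of i about the radix point."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def randomized_qmc(f, n, shifts, seed=0):
    """Average f over several randomly shifted copies of a QMC point set.

    Returns (estimate, standard error); the error estimate comes from
    the spread of the independent shifted estimates."""
    rng = random.Random(seed)
    estimates = []
    for _ in range(shifts):
        u = rng.random()  # random shift, applied mod 1
        estimates.append(
            sum(f((van_der_corput(i) + u) % 1.0) for i in range(n)) / n
        )
    mean = statistics.mean(estimates)
    stderr = statistics.stdev(estimates) / len(estimates) ** 0.5
    return mean, stderr

est, err = randomized_qmc(lambda x: x * x, n=1024, shifts=10)
```

Each shifted estimate keeps QMC accuracy, while the shifts are independent, so their spread gives an honest MC-style error bar.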
Some quasi-random sequences

  Halton – bit reversal in relatively prime
  bases
  Hammersley – finite sequence with one
  uniform component
  Sobol’ – common in practice, based on
  primitive polynomials over the binary field
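The Halton construction is short enough to sketch directly: reverse the digits of the index in a different base per coordinate (a hedged illustration; bases 2 and 3 give two dimensions).

```python
def radical_inverse(i, base):
    """Reverse the base-b digits of i across the radix point: 6 = 110_2 -> 0.011_2 = 0.375."""
    x, denom = 0.0, 1.0
    while i > 0:
        i, digit = divmod(i, base)
        denom *= base
        x += digit / denom
    return x

def halton(i, bases=(2, 3)):
    """i-th Halton point; the bases must be pairwise relatively prime."""
    return tuple(radical_inverse(i, b) for b in bases)

pts = [halton(i) for i in range(1, 5)]
# pts[0] == (0.5, 1/3), pts[1] == (0.25, 2/3), pts[2] == (0.75, 1/9), ...
```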
Sequence recommendations
 Experiment!
 Hammersley probably best for low dimensions
 if you know up front how many points you’ll
 need. Must go through the entire cycle or
 coverage will be uneven in one coordinate.
 Halton probably best for low dimensions.
 Sobol’ probably best for high dimensions.
Lattice Rules
  Nothing remotely random about them
  “Low discrepancy”
  Periodic functions on a unit cube
  There are standard transformations to
  reduce other integrals to this form
Lattice Example
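A rank-1 lattice rule can be sketched in a few lines; the Fibonacci lattice below (n = 89, z = (1, 55)) is my own choice of illustration, not necessarily the talk's. For a smooth 1-periodic integrand its low-order Fourier terms are integrated exactly:

```python
import math

def rank1_lattice(n, z):
    """Rank-1 lattice rule: the n points {(i * z) / n mod 1}, i = 0..n-1."""
    return [tuple((i * zj / n) % 1.0 for zj in z) for i in range(n)]

def lattice_integrate(f, n, z):
    return sum(f(p) for p in rank1_lattice(n, z)) / n

# A smooth 1-periodic integrand on the unit square; exact integral is 1.
def f(p):
    x, y = p
    return (1 + math.sin(2 * math.pi * x)) * (1 + math.sin(2 * math.pi * y))

est = lattice_integrate(f, n=89, z=(1, 55))  # Fibonacci lattice: 89 = F_11, 55 = F_10
```

Here the rule reproduces the exact value 1 to rounding error, illustrating why lattices shine on smooth periodic integrands.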
Advantages and disadvantages
  Lattices work very well for smooth integrands
  Don’t work so well for discontinuous
  integrands
  Have good projections onto the coordinate
  axes
  Finite sequences
  Good a posteriori error estimates
  Some a priori estimates, sometimes
  pessimistic
Software written
  QMC integration implemented for
  generic sequence generator
  Generators implemented: Sobol’,
  Halton, Hammersley
  Randomized QMC
  Lattice rules
  Randomized lattice rules
Randomization approaches
  Randomized lattice uses specified lattice size,
  randomize until error goal met
  RQMC uses specified number of
  randomizations, generate QMC until error
  goal met
  Lattice rules require this approach: they’re
  finite, and new ones found manually.
  QMC sequences can be expensive to compute
  (Halton, not Sobol’), so compute once and
  reuse.
Future development
  Variance reduction. Good
  transformations make any technique
  work better.
  Need for lots of experiments.
Contact
  http://www.JohnDCook.com
