Efficient Monte Carlo

Are low-discrepancy sequences better than pseudo-random numbers for Monte Carlo simulation?

Ashwin Rao
“Random” numbers for Monte Carlo

• Say you have 10,000 paths with quarterly time steps for 30 years
• Say you have a 4-factor model (say 4 uncorrelated dW's at every time step)
• We pick 10,000 points uniformly distributed over [0,1]^480 (480 = 4 factors x 120 quarterly steps)
• This gives us 480 uncorrelated dW's on every path (via the inverse Gaussian CDF), as sketched after this list
• On each path: generate the diffusion, calculate the payoffs, discount => path PV
• The path price function is a map f: [0,1]^480 → R
• We approximate the price E[f(x)] by averaging f(x) over the 10,000 480-dimensional points
• To get an accurate price, we need to pick “good” points in the [0,1]^480 space
• The rest of the talk is about picking “good” points in [0,1]^s using
 PRN: Pseudo-random number generation
 QRN: Quasi-random number generation (i.e., low-discrepancy sequences)
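As a concrete illustration of this pipeline, here is a minimal sketch assuming NumPy and SciPy: each 480-dimensional uniform point is mapped through the inverse Gaussian CDF to 480 standard normals and scaled into quarterly Brownian increments. The payoff routine path_pv is a hypothetical placeholder for the diffusion/payoff/discount step.

    import numpy as np
    from scipy.stats import norm

    n_paths, n_steps, n_factors = 10_000, 120, 4     # quarterly steps for 30 years, 4 factors
    dt = 0.25
    s = n_steps * n_factors                          # s = 480 dimensions per path

    # Each row is one point in [0,1]^480 (pseudo-random here; a QRN sequence would slot in identically)
    u = np.random.default_rng(seed=0).random((n_paths, s))

    # The inverse Gaussian CDF turns uniforms into standard normals, scaled to Brownian increments
    dW = norm.ppf(u).reshape(n_paths, n_steps, n_factors) * np.sqrt(dt)

    def path_pv(dw):
        """Hypothetical placeholder: diffuse the factors, compute the payoffs, discount."""
        return dw.sum()

    price_estimate = np.mean([path_pv(dw) for dw in dW])   # average of f(x) over the 10,000 points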
What’s the key objective?

• Generate points in the s-dimensional cube [0,1]^s
• Ideally, you want uniformity and independence of the points
• PRN: Tries to deterministically mimic a truly random uniform i.i.d. sequence
• PRN: Moderate success in passing uniformity tests
• PRN: Moderate success in passing independence tests
• QRN: Arranges “well-spaced” points uniformly over the cube
• QRN: Aims to minimize clusters and holes
• QRN: Doesn’t attempt to make points independent
Fundamental Methodology

• PRN: Cycles over large finite fields
• PRN: The powers of a primitive root of Z_p generate the nonzero elements of Z_p
• PRN: A primitive polynomial over Z_p generates an extension field of Z_p
• QRN: Van der Corput sequences
• QRN: In multiple dimensions, generate permutations of VdC sequences
• QRN: (t,m,s)-nets and (t,s)-sequences
Multi-dimension

• PRN: Points are generated in one dimension
• PRN: Successive 1-D points are pieced together to form multi-D points
• QRN: Points are generated directly for a specific number of dimensions
• QRN: Arranging points uniformly gets harder as the dimension increases
• QRN: Curse of dimensionality
Big picture view

• PRN: Passes independence tests reasonably well
• PRN: Does okay on uniformity in lower dimensions
• PRN: In high dimensions, points end up lying on lower-dimensional hyperplanes
• PRN: Mersenne Twister is an exception, doing quite well in high dimensions
• QRN: Uniformity is its USP
• QRN: Correlation between points & between neighboring coordinates
• QRN: This correlation causes difficulties in high dimensions
• QRN: Sobol is an exception, doing quite well in high dimensions
Theoretical Error Convergence

• PRN: O(1/sqrt(n))
• PRN: Doesn’t depend on s
• PRN: sqrt(n) convergence is fundamentally slow
• QRN: O((log n)^s / n), and note that this bound does depend on s
Practical performance

• QRN is much better in lower dimensions
• But QRN performance deteriorates very quickly as the dimension goes up
• Basic QRN methods do worse than basic PRN methods in high dimensions
• However, Sobol and Generalized Faure do very well in high dimensions (see the comparison sketch below)
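To make these claims concrete, here is a small comparison sketch, assuming NumPy and SciPy >= 1.7 (for scipy.stats.qmc). The integrand is an arbitrary smooth test function with known integral 1, chosen purely for illustration.

    import numpy as np
    from scipy.stats import qmc

    def f(x):
        # Smooth test integrand on [0,1]^d with exact integral 1 (each factor averages to 1)
        d = x.shape[1]
        return np.prod(1.0 + (x - 0.5) / np.arange(1, d + 1), axis=1)

    d, n = 8, 2 ** 14
    prn = np.random.default_rng(0).random((n, d))          # pseudo-random points
    qrn = qmc.Sobol(d=d, scramble=True, seed=0).random(n)  # Sobol low-discrepancy points

    print("PRN error:", abs(f(prn).mean() - 1.0))
    print("QRN error:", abs(f(qrn).mean() - 1.0))   # typically much smaller for this smooth, moderate-d integrand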
Abstract Algebra Basics

• Group: Set with a closed operator (+), Associativity, Identity (0), Inverses
• Monoid: Take away “existence of inverses” from a Group
• Semigroup: Take away “existence of identity” from a Monoid
• Ring: Group under +, Monoid under *, Distributivity of * over +
• Field: Group under +, nonzero elements form a Group under *, Distributivity of * over +
• Z is a Ring; Q and R are Fields
• Z_m is a group under + modulo m for any m > 1
• Z_p is a field under + and * modulo p for any prime p
Finite Fields

• Multiplying Z_p by any nonzero a ∈ Z_p simply permutes Z_p
• Hence, every nonzero element of Z_p has a multiplicative inverse
• Fermat’s Little Theorem: a^(p-1) ≡ 1 (mod p) for every nonzero a ∈ Z_p
• Primitive Root of Z_p: any a ∈ Z_p whose powers generate all the nonzero elements of Z_p (checked in the sketch below)
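A tiny pure-Python check of these facts, for an arbitrarily chosen small prime p = 11:

    p = 11
    nonzero = range(1, p)

    # Fermat's Little Theorem: a^(p-1) = 1 (mod p) for every nonzero a
    assert all(pow(a, p - 1, p) == 1 for a in nonzero)

    # Primitive roots: elements whose powers generate all of the nonzero residues
    prim_roots = [a for a in nonzero
                  if {pow(a, k, p) for k in range(1, p)} == set(nonzero)]
    print(prim_roots)   # [2, 6, 7, 8] for p = 11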
Finite Field Extensions

• Polynomials over Z_p of degree < r, taken modulo an irreducible polynomial f(x) of degree r
• This set is clearly a ring
• Multiplying this ring by any nonzero element of the ring permutes it
• Hence, each such nonzero polynomial has a multiplicative inverse modulo f(x)
• Hence, this set of p^r polynomials forms a field K extending Z_p
• For any α ∈ K s.t. f(α) = 0, “attaching” α to Z_p generates K
• Essentially, the polynomial symbol x represents α
• If the powers of α generate the nonzero elements of K, f(x) is called a primitive polynomial
• Then the powers of x modulo f(x) generate all nonzero polynomials of degree < r (sketched below)
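A minimal sketch of such an extension, in pure Python: polynomials over Z_2 are encoded as bit masks (bit k is the coefficient of x^k) and reduced modulo f(x) = x^3 + x + 1, which is primitive, so the powers of x sweep out all 2^3 - 1 = 7 nonzero elements of the field.

    F = 0b1011          # f(x) = x^3 + x + 1, primitive over Z_2
    DEG = 3

    def mulmod(a, b, f=F, deg=DEG):
        """Multiply two polynomials over Z_2 (carry-less), then reduce modulo f(x)."""
        prod = 0
        while b:
            if b & 1:
                prod ^= a
            a <<= 1
            b >>= 1
        for shift in range(prod.bit_length() - 1, deg - 1, -1):
            if prod & (1 << shift):
                prod ^= f << (shift - deg)
        return prod

    # Powers of x (= 0b10) cycle through all 7 nonzero elements of the extension field
    elem, seen = 1, []
    for _ in range(7):
        elem = mulmod(elem, 0b10)
        seen.append(elem)
    print(sorted(seen))   # [1, 2, 3, 4, 5, 6, 7]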
Simple PRN: Linear Congruential Generator

• s_n = a * s_{n-1} + b (mod p)
• a ≥ 1, b ≥ 0
• Seed 0 ≤ s_0 < p
• Identify a and b s.t. we cycle through all elements of Z_p
• This is called a generator with a “full period”
• For the case b = 0, choose a to be any primitive root of Z_p (the cycle is then over the p - 1 nonzero elements); a minimal sketch follows
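A minimal LCG sketch. The specific parameters are the classic Lehmer / “minimal standard” choices (p = 2^31 - 1, a = 16807, b = 0), used here purely for illustration; since 16807 is a primitive root of Z_p, the multiplicative generator has period p - 1. Production code would reach for a modern generator instead.

    def lcg(a, b, p, seed):
        """s_n = a*s_{n-1} + b (mod p), yielding s_n / p in [0, 1)."""
        s = seed
        while True:
            s = (a * s + b) % p
            yield s / p

    gen = lcg(a=16807, b=0, p=2**31 - 1, seed=12345)
    sample = [next(gen) for _ in range(5)]
    print(sample)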
Higher-order Linear Recursive Generators

• s_n = c_{r-1} s_{n-1} + c_{r-2} s_{n-2} + … + c_0 s_{n-r} (mod p)
• Define f(x) = x^r - c_{r-1} x^{r-1} - c_{r-2} x^{r-2} - … - c_0
• Vector v_n = < s_{n+r-1}, s_{n+r-2}, …, s_n >
• v_n is associated with the polynomial
   g_{v_n}(x) = s_{n+r-1} x^{r-1} + s_{n+r-2} x^{r-2} + … + s_n
• Multiplying g_{v_n}(x) by x modulo f(x) gives g_{v_{n+1}}(x)
• If f(x) is a primitive polynomial, this generation has a full period of p^r - 1
  (all nonzero states are visited); a minimal sketch follows
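A minimal sketch of the recurrence, reusing the primitive polynomial x^3 + x + 1 over Z_2 from above (i.e., (c_0, c_1, c_2) = (1, 1, 0)) to demonstrate the full period p^r - 1 = 7. This is illustrative only; a real generator would use a large prime p and a high order r.

    def linear_recursive_states(coeffs, p, seed):
        """State sequence of s_n = c_{r-1} s_{n-1} + ... + c_0 s_{n-r} (mod p).
        coeffs = (c_0, ..., c_{r-1}); seed = (s_0, ..., s_{r-1}), not all zero."""
        state = tuple(seed)
        while True:
            yield state
            s_next = sum(c * s for c, s in zip(coeffs, state)) % p
            state = state[1:] + (s_next,)

    # f(x) = x^3 + x + 1 is primitive over Z_2, so s_n = s_{n-2} + s_{n-3} (mod 2)
    # should visit all 2^3 - 1 = 7 nonzero states before repeating
    gen = linear_recursive_states(coeffs=(1, 1, 0), p=2, seed=(1, 0, 0))
    states = [next(gen) for _ in range(7)]
    assert len(set(states)) == 7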
QRN - Definition of Discrepancy

• Discrepancy: deviation from uniformity
• Given a point set {x_1, …, x_n}, each x_i ∈ [0,1]^s
• Given a collection S of Lebesgue-measurable subsets of [0,1]^s
• D(x_1, …, x_n; S) = sup_{A ∈ S} | #{x_i ∈ A} / n - volume(A) |
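A common special case is the star discrepancy, where S is the collection of anchored boxes [0, t). In one dimension it has a simple closed form over the sorted points, which the following pure-Python sketch uses:

    import random

    def star_discrepancy_1d(points):
        """Exact 1-D star discrepancy over anchored intervals [0, t):
        D*_n = max_i max( i/n - x_(i), x_(i) - (i-1)/n ) for sorted x_(1) <= ... <= x_(n)."""
        xs = sorted(points)
        n = len(xs)
        return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

    # Equally spaced midpoints (2i+1)/(2n) achieve the minimum possible value 1/(2n) ...
    print(star_discrepancy_1d([(2 * i + 1) / 20 for i in range(10)]))   # 0.05
    # ... whereas 10 pseudo-random points typically do noticeably worse
    print(star_discrepancy_1d([random.random() for _ in range(10)]))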
Van der Corput Sequence

• For each n = 0, 1, 2, … in sequence, write n in base p
• Flip the base-p representation across the radix point
• The sequence of resulting [0,1] numbers is low-discrepancy (a minimal generator sketch follows)
• Each time n passes a power p^r, we fill [0,1] at a finer level
• The period of cycling over [0,1] is p
• Hence, lower values of p are desirable
• Note the connection with finite field extensions
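A minimal generator sketch; it reproduces the two tables that follow.

    def van_der_corput(n, base):
        """Radical inverse: write n in the given base and flip the digits across the radix point."""
        value, denom = 0.0, 1.0
        while n > 0:
            n, digit = divmod(n, base)
            denom *= base
            value += digit / denom
        return value

    print([van_der_corput(n, 3) for n in range(9)])    # 0, 1/3, 2/3, 1/9, 4/9, 7/9, 2/9, 5/9, 8/9
    print([van_der_corput(n, 10) for n in range(5)])   # 0.0, 0.1, 0.2, 0.3, 0.4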
Van der Corput Sequence in base 3

(base-3 representation, value) pairs for n = 0, …, 26, reading down the columns:
0      0     0.001    1/27   0.002    2/27
0.1    1/3   0.101   10/27   0.102   11/27
0.2    2/3   0.201   19/27   0.202   20/27
0.01   1/9   0.011    4/27   0.012    5/27
0.11   4/9   0.111   13/27   0.112   14/27
0.21   7/9   0.211   22/27   0.212   23/27
0.02   2/9   0.021    7/27   0.022    8/27
0.12   5/9   0.121   16/27   0.122   17/27
0.22   8/9   0.221   25/27   0.222   26/27
Van der Corput Sequence in base 10

Values for n = 0, …, 26, reading down the columns:
     0         0.9       0.81
     0.1       0.01      0.91
     0.2       0.11      0.02
     0.3       0.21      0.12
     0.4       0.31      0.22
     0.5       0.41      0.32
     0.6       0.51      0.42
     0.7       0.61      0.52
     0.8       0.71      0.62
(t,m,s)-nets and (t,s)-sequences in base b

• b-ary box (elementary interval) in base b: Π_{i=1..s} [ a_i / b^{j_i}, (a_i + 1) / b^{j_i} )
  with j_i ∈ {0, 1, …} and a_i ∈ {0, 1, …, b^{j_i} - 1}
• The volume of an elementary interval is 1 / b^{j_1 + … + j_s}
• A (t,m,s)-net in base b is a set of b^m points in [0,1]^s such that
  exactly b^t points fall in each b-ary box of volume b^{t-m}
• The net correctly estimates the volume of each such b-ary box
• A sequence of points x_1, x_2, … in [0,1]^s is a (t,s)-sequence in base b
  if for all m > t, each segment {x_i : j b^m ≤ i < (j+1) b^m}, j = 0, 1, …,
  is a (t,m,s)-net in base b
• Smaller t is better, and smaller b is better (a small net check is sketched below)
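A small sanity check of the net property in pure Python: the first b^m points of the base-b Van der Corput sequence form a (0, m, 1)-net, i.e., every elementary interval of width 1/b^m contains exactly one point. Working with integer digit-reversals keeps the check exact.

    def radical_inverse_digits(n, base, m):
        """Digit-reversal of n written with m base-b digits; the VdC value is this integer / base**m."""
        rev = 0
        for _ in range(m):
            n, d = divmod(n, base)
            rev = rev * base + d
        return rev

    b, m = 3, 3
    # The point with digit-reversal a sits at a/b^m, i.e. in the elementary interval [a/b^m, (a+1)/b^m)
    boxes = [radical_inverse_digits(n, b, m) for n in range(b ** m)]
    print(sorted(boxes) == list(range(b ** m)))   # True: exactly b^0 = 1 point per box, a (0, m, 1)-net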
Multi-dimensional QRN

• Halton sequences:
- Each dimension is a Van der Corput sequence (sketched below)
- Each dimension has a different prime base
- So the highest base is the s-th prime number (a high base is bad!)
- Successive dimensions are highly collinear
- Halton sequences are not (t,s)-sequences
• Faure sequences:
- Common prime base (≥ s) across dimensions
- Permute the VdC sequence across dimensions
- Permutation at every level of granularity
- Think of the permutation as multiplying by a finite field element
- Faure sequences are (0,s)-sequences in a prime base ≥ s
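A minimal Halton sketch: the prime bases are passed in explicitly, and no scrambling is applied (library implementations such as scipy.stats.qmc.Halton also scramble to mitigate the collinearity noted above).

    def van_der_corput(n, base):
        value, denom = 0.0, 1.0
        while n > 0:
            n, digit = divmod(n, base)
            denom *= base
            value += digit / denom
        return value

    def halton(n_points, primes):
        """Halton sequence: dimension i is the Van der Corput sequence in base primes[i]."""
        return [[van_der_corput(n, p) for p in primes] for n in range(n_points)]

    # First 3 points in 3 dimensions, using the first 3 primes as bases
    for point in halton(3, primes=(2, 3, 5)):
        print(point)
    # [0.0, 0.0, 0.0]
    # [0.5, 0.333..., 0.2]
    # [0.25, 0.666..., 0.4]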
Sobol sequences

• The base-2 Van der Corput sequence in the 1st dimension
• The other dimensions are permuted VdC sequences
• Permutation at every level of granularity
• Think of the permutation as multiplying by a finite field element
• The permutation matrix consists of “direction numbers”
• Primitive polynomials are used to construct the direction numbers
• A different primitive polynomial is used in every dimension
• A Gray code representation makes the algorithm very efficient (sketched below)
• Sobol sequences are (t,s)-sequences in base 2
• But t is a function of s
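The following is not a full Sobol generator (that requires per-dimension direction numbers derived from the primitive polynomials mentioned above); it is a minimal sketch of just the Gray-code update, shown for the first dimension, where every direction number is simply 1/2^k. It generates the same points as the base-2 Van der Corput sequence, in a Gray-code-shuffled order. For real use, a library implementation such as scipy.stats.qmc.Sobol (SciPy >= 1.7), which ships direction numbers for many dimensions and supports scrambling, is the practical choice.

    BITS = 32   # fixed-point resolution of the generated fractions

    def sobol_dim1(n_points, bits=BITS):
        """Gray-code (Antonov-Saleev) update for the first Sobol dimension only,
        where the direction numbers are v_k = 1/2^k; other dimensions would derive
        their direction numbers from a primitive polynomial instead."""
        v = [1 << (bits - k) for k in range(1, bits + 1)]   # v_k = 1/2^k in fixed point
        x, out = 0, []
        for n in range(n_points):
            out.append(x / 2 ** bits)
            c = (~n & (n + 1)).bit_length() - 1   # index of the lowest zero bit of n
            x ^= v[c]                             # x_{n+1} = x_n XOR v_c
        return out

    print(sobol_dim1(8))   # [0.0, 0.5, 0.75, 0.25, 0.375, 0.875, 0.625, 0.125]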
