Hierarchical Deterministic Quadrature Methods
for Option Pricing under the Rough Bergomi Model
Chiheb Ben Hammouda
Christian Bayer, Raúl Tempone
SIAM Conference on Financial Mathematics and Engineering
(FM21)
June 1-4, 2021
1 Option Pricing under the Rough Bergomi Model: Motivation &
Challenges
2 Our Approach Based on Hierarchical Deterministic Quadrature
Methods (Bayer, Ben Hammouda, and Tempone 2020a)
3 Numerical Experiments and Results
4 Conclusions and Future Work
Rough Volatility
(Jim Gatheral, Thibault Jaisson, and Mathieu Rosenbaum. “Volatility is rough”. In: Quantitative Finance 18.6 (2018), pp. 933–949)
The Rough Bergomi Model
(Bayer, Friz, and Gatheral 2016)
This model, under a pricing measure, is given by
dS_t = √(v_t) S_t dZ_t,
v_t = ξ0(t) exp(η W̃^H_t − ½ η² t^{2H}),
Z_t := ρ W^1_t + ρ̄ W^⊥_t ≡ ρ W^1_t + √(1 − ρ²) W^⊥_t,   (1)
(W^1, W^⊥): two independent standard Brownian motions.
W̃^H is a Riemann–Liouville process, defined by
W̃^H_t = ∫_0^t K_H(t − s) dW^1_s,  t ≥ 0,   with   K_H(t − s) = √(2H) (t − s)^{H−1/2},  ∀ 0 ≤ s ≤ t.   (2)
H ∈ (0, 1/2] controls the roughness of the paths, ρ ∈ [−1, 1], and η > 0.
t ↦ ξ0(t): forward variance curve at time 0.
B The model allows an accurate fit to market prices for different ranges of maturities, with few parameters.
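The sketch below is a minimal, self-contained illustration of these dynamics using a crude left-point Riemann-sum discretization of the Volterra integral in (2). It is meant only to show the structure of the model; it is neither the exact (covariance-based) nor the hybrid scheme discussed later, its accuracy for small H is poor, and all function and parameter names are illustrative.

```python
import numpy as np

def simulate_rbergomi_paths(n_paths, N, T, H, eta, rho, xi0, S0, seed=0):
    """Crude Riemann-sum sketch of the rBergomi dynamics (1)-(2).

    Not the hybrid or exact scheme from the talk; only an illustration of
    the Volterra structure of W^H with a flat forward variance xi0.
    """
    rng = np.random.default_rng(seed)
    dt = T / N
    t = np.linspace(0.0, T, N + 1)                        # grid t_0 = 0 < ... < t_N = T
    dW1 = rng.standard_normal((n_paths, N)) * np.sqrt(dt)
    dWp = rng.standard_normal((n_paths, N)) * np.sqrt(dt)

    # W^H_{t_i} ~ sum_{j<i} K_H(t_i - t_j) dW1_j (left-point sum, no singularity on the grid)
    WH = np.zeros((n_paths, N + 1))
    for i in range(1, N + 1):
        k = np.sqrt(2 * H) * (t[i] - t[:i]) ** (H - 0.5)
        WH[:, i] = dW1[:, :i] @ k

    # variance process and log-Euler step for the price
    v = xi0 * np.exp(eta * WH - 0.5 * eta**2 * t ** (2 * H))
    dZ = rho * dW1 + np.sqrt(1 - rho**2) * dWp
    logS = np.log(S0) + np.cumsum(np.sqrt(v[:, :-1]) * dZ - 0.5 * v[:, :-1] * dt, axis=1)
    S = np.concatenate([np.full((n_paths, 1), S0), np.exp(logS)], axis=1)
    return S, v
```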
Model Challenges
Numerically:
▸ The model is non-Markovian and non-affine ⇒ standard numerical methods (PDEs, transform methods) seem inapplicable.
▸ The prevalent pricing method for vanilla options is Monte Carlo (MC), which is still computationally expensive.
▸ Discretization methods have a poor strong convergence rate (of order H) (Neuenkirch and Shalaiko 2016) ⇒ variance reduction methods, such as multilevel Monte Carlo, are inefficient.
▸ Improvements to the available numerical methods are based on deep learning.
Theoretically:
▸ The available simulation methods are hard to analyse,² due to
▸ non-Markovianity ⇒ infinite-dimensional state;
▸ the singularity of the kernel K_H(·) in (2).
² (Bayer, Hall, and Tempone 2020) provides a preliminary weak error analysis, but not for the standard (exact and hybrid) schemes.
Option Pricing Challenges
European option price: V(S, 0) = e^{−rT} E_Q[g(S_T)].
The integration problem is challenging:
Issue 1: Time-discretization of the rough Bergomi process requires a large number of time steps N ⇒ S takes values in a high-dimensional space ⇒ curse of dimensionality³ when using standard deterministic quadrature methods.
Issue 2: The payoff function g is typically not smooth ⇒ low regularity ⇒ deterministic quadrature methods suffer from slow convergence.
³ Curse of dimensionality: an exponential growth of the work (number of function evaluations) in terms of the dimension of the integration problem.
1 Option Pricing under the Rough Bergomi Model: Motivation &
Challenges
2 Our Approach Based on Hierarchical Deterministic Quadrature
Methods (Bayer, Ben Hammouda, and Tempone 2020a)
3 Numerical Experiments and Results
4 Conclusions and Future Work
Methodology
We design efficient fast hierarchical pricing methods, for options whose
underlyings follow the rough Bergomi model, based on
1 Analytic smoothing to uncover the available regularity.
2 Approximating the resulting integral of the smoothed payoff using
deterministic quadrature methods
▸ Adaptive sparse grids quadrature (ASGQ).
▸ Quasi Monte Carlo (QMC).
3 Combining our methods with hierarchical representations
▸ Brownian bridges as a Wiener path generation method ⇒ ↘ the
effective dimension of the problem.
▸ Richardson Extrapolation (Condition: stable weak error
expansion in ∆t to extrapolate) ⇒ Faster convergence of the weak
error ⇒ ↘ number of time steps (smaller input dimension).
Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone. “Hierarchical adaptive sparse grids and quasi-Monte Carlo for option pricing under the rough Bergomi model”. In: Quantitative Finance 20.9 (2020), pp. 1457–1473.
Conditional Expectation for Analytic Smoothing
C_rB(T, K) = E[(S_T − K)^+]
           = E[ E[(S_T − K)^+ | σ(W^1(t), t ≤ T)] ]
           = E[ C_BS( S_0 = exp(ρ ∫_0^T √(v_t) dW^1_t − ½ ρ² ∫_0^T v_t dt),  k = K,  σ² = (1 − ρ²) ∫_0^T v_t dt ) ]
           ≈ ∫_{R^{2N}} C_BS(G_rB(w^(1), w^(2))) ρ_N(w^(1)) ρ_N(w^(2)) dw^(1) dw^(2) := C^N_rB.   (3)
Idea: (3) is obtained by using the orthogonal decomposition of S_t into
S^1_t = E(ρ ∫_0^t √(v_s) dW^1_s),   S^2_t = E(√(1 − ρ²) ∫_0^t √(v_s) dW^⊥_s),
and then applying conditional log-normality.
Notation:
C_BS(S_0, k, σ²): the Black-Scholes call price, for initial spot price S_0, strike price k, and total variance σ².
G_rB maps the 2N independent standard Gaussian random inputs to the parameters fed to the Black-Scholes formula; G_rB depends on the simulation scheme.
ρ_N: the N-dimensional multivariate standard Gaussian density; N: number of time steps.
For a continuous (semi)martingale Z, E(Z)_t = exp(Z_t − Z_0 − ½ [Z,Z]_{0,t}), where [Z,Z]_{0,t} is the quadratic variation of Z.
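As an illustration of the smoothed integrand in (3), the sketch below evaluates the conditional Black-Scholes price for one realization of the W^1-path, assuming a zero interest rate and that the variance values v on the grid are already given; the slide's formula corresponds to S0 = 1. The helper names black_scholes_call and smoothed_payoff are illustrative, not from the paper.

```python
import numpy as np
from scipy.stats import norm

def black_scholes_call(S0, K, sigma2):
    """Black-Scholes call price with zero rate; sigma2 is the total variance over the maturity."""
    if sigma2 <= 0.0:
        return max(S0 - K, 0.0)
    s = np.sqrt(sigma2)
    d1 = (np.log(S0 / K) + 0.5 * sigma2) / s
    d2 = d1 - s
    return S0 * norm.cdf(d1) - K * norm.cdf(d2)

def smoothed_payoff(v, dW1, dt, rho, K, S0=1.0):
    """Integrand of (3): Black-Scholes price conditional on one W^1 path.

    v   : variance values on the left endpoints of the time grid (length N)
    dW1 : increments of W^1 over the same grid (length N)
    """
    int_sqrt_v_dW1 = np.sum(np.sqrt(v) * dW1)      # approximates int_0^T sqrt(v_t) dW^1_t
    int_v_dt = np.sum(v) * dt                      # approximates int_0^T v_t dt
    S0_eff = S0 * np.exp(rho * int_sqrt_v_dW1 - 0.5 * rho**2 * int_v_dt)
    sigma2_eff = (1.0 - rho**2) * int_v_dt
    return black_scholes_call(S0_eff, K, sigma2_eff)
```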
Simulation of the Rough Bergomi Dynamics
Goal: Simulate jointly (W^1_t, W̃^H_t : 0 ≤ t ≤ T), resulting in W^1_{t_1}, ..., W^1_{t_N} and W̃^H_{t_1}, ..., W̃^H_{t_N} along a given grid t_1 < ⋯ < t_N.
1 Covariance-based approach (Bayer, Friz, and Gatheral 2016)
▸ Input: a 2N-dimensional Gaussian random vector.
▸ Based on the Cholesky decomposition of the covariance matrix of the 2N-dimensional Gaussian random vector (W^1_{t_1}, ..., W^1_{t_N}, W̃^H_{t_1}, ..., W̃^H_{t_N}).
▸ Exact method, but slow.
▸ Work: O(N²).
2 The hybrid scheme (Bennedsen, Lunde, and Pakkanen 2017)
▸ Input: a 2N-dimensional Gaussian random vector.
▸ Based on an Euler discretization.
▸ Approximates the kernel function in (2) by a power function near zero and by a step function elsewhere.
▸ Accurate scheme that is much faster than the covariance-based approach.
▸ Work: O(N), up to logarithmic factors.
On the Choice of the Simulation Scheme
Figure 2.1: Convergence of the relative weak error E_B, using MC with 6 × 10^6 samples, for the example parameters H = 0.07, K = 1, S0 = 1, T = 1, ρ = −0.9, η = 1.9, ξ0 = 0.0552. The upper and lower bounds are 95% confidence intervals.
(a) With the hybrid scheme: |E[g(X_Δt) − g(X)]| versus Δt (log-log), with observed rate Δt^1.02 and reference rate Δt^1.00.
(b) With the exact scheme: observed rate Δt^0.76 and reference rate Δt^1.00.
B The weak error behavior in Figure 2.1 was observed consistently for different sets of parameters.
Sparse Grids Quadrature (I)
Aim: Approximate E[F], with F : R^d → R.
Notation:
▸ A multi-index β ∈ N^d_+.
▸ F_β := Q^{m(β)}[F]: a quadrature operator based on a Cartesian quadrature grid (Q^{m(β)} := ⊗^d_{j=1} Q^{m(β_j)}; m(β_j) quadrature points along dimension y_j; m(·) : N_+ → N_+ an increasing function).
Issue: Approximating E[F] with F_β is not an appropriate option due to the well-known curse of dimensionality.
Alternative idea: A quadrature estimate of E[F] is M_{I_ℓ}[F]:
E[F] = ∑_{β∈N^d_+} Δ[F_β] ≈ ∑_{β∈I_ℓ} Δ[F_β] := M_{I_ℓ}[F],
where I_ℓ ⊂ N^d_+ is a properly chosen index set.
▸ The mixed (first-order tensor) difference operators:
Δ[F_β] = ⊗^d_{i=1} Δ_i F_β := ∑_{α∈{0,1}^d} (−1)^{∑^d_{i=1} α_i} F_{β−α}.   (4)
▸ The first-order difference operators:
Δ_i F_β = F_β − F_{β−e_i} if β_i > 1,   Δ_i F_β = F_β if β_i = 1,   (5)
where e_i denotes the ith d-dimensional unit vector.
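A minimal sketch of the building blocks (4)-(5), assuming integration against a standard Gaussian density via Gauss-Hermite tensor grids and the example choice m(k) = 2^k − 1; the names and the choice of m(·) are illustrative, not the paper's.

```python
import itertools
import numpy as np

def gauss_hermite_rule(m):
    """Gauss-Hermite nodes/weights rescaled to integrate against the standard normal density."""
    x, w = np.polynomial.hermite_e.hermegauss(m)   # probabilists' Hermite, weight exp(-x^2/2)
    return x, w / np.sqrt(2 * np.pi)

def tensor_quadrature(F, beta, m=lambda k: 2 ** k - 1):
    """F_beta = Q^{m(beta)}[F]: tensor-product quadrature with m(beta_j) points per dimension."""
    rules = [gauss_hermite_rule(m(bj)) for bj in beta]
    total = 0.0
    for nodes_weights in itertools.product(*[list(zip(x, w)) for x, w in rules]):
        pts = np.array([nw[0] for nw in nodes_weights])
        wt = np.prod([nw[1] for nw in nodes_weights])
        total += wt * F(pts)
    return total

def mixed_difference(F, beta, m=lambda k: 2 ** k - 1):
    """Delta[F_beta] from (4): alternating sum of tensor quadratures over the corners beta - alpha."""
    total = 0.0
    for alpha in itertools.product([0, 1], repeat=len(beta)):
        gamma = [b - a for b, a in zip(beta, alpha)]
        if min(gamma) < 1:           # Delta_i F_beta = F_beta when beta_i = 1, see (5)
            continue
        total += (-1) ** sum(alpha) * tensor_quadrature(F, gamma, m)
    return total
```

Summing mixed_difference(F, β) over a downward-closed index set I_ℓ then gives the sparse estimate M_{I_ℓ}[F] of E[F].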
Sparse Grids Quadrature (II)
E[F] ≈ M_{I_ℓ}[F] = ∑_{β∈I_ℓ} Δ[F_β].
Product approach: I_ℓ = {‖β‖_∞ ≤ ℓ; β ∈ N^d_+} ⇒ E_Q(M) = O(M^{−r/d}) (for functions with bounded total derivatives up to order r).
Regular sparse grids: I_ℓ = {‖β‖_1 ≤ ℓ + d − 1; β ∈ N^d_+} ⇒ E_Q(M) = O(M^{−s} (log M)^{(d−1)(s+1)}) (for functions with bounded mixed derivatives up to order s).
Adaptive sparse grids quadrature (ASGQ): I_ℓ = I^{ASGQ} (defined on the next slides) ⇒ E_Q(M) = O(M^{−s_w}) (for functions with bounded weighted mixed derivatives up to order s_w).
B Notation: M: number of quadrature points; E_Q: quadrature error.
Figure 2.2: Left: product grids Δ_{β_1} ⊗ Δ_{β_2} for 1 ≤ β_1, β_2 ≤ 3. Right: the corresponding sparse grids construction.
ASGQ in Practice
E[F] ≈ M_{I^{ASGQ}}[F] = ∑_{β∈I^{ASGQ}} Δ[F_β].
The construction of I^{ASGQ} is done by profit thresholding:
I^{ASGQ}(T) = {β ∈ N^d_+ : P_β ≥ T}.
Profit of a hierarchical surplus: P_β = |ΔE_β| / ΔW_β.
Error contribution: ΔE_β = |M_{I∪{β}} − M_I|.
Work contribution: ΔW_β = Work[M_{I∪{β}}] − Work[M_I].
Figure 2.3: A posteriori, adaptive construction as in (Beck et al. 2012; Haji-Ali et al. 2016): given an index set I_k, compute the profits of the neighbor indices and select the most profitable one.
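A minimal sketch of this profit-driven loop, reusing mixed_difference and tensor_quadrature from the previous sketch. It is not the authors' implementation: the work contribution ΔW_β is approximated here by the tensor-grid size of the new index, and profits are recomputed from scratch at each step instead of being cached.

```python
import numpy as np

def asgq(F, d, tol, max_iter=200, m=lambda k: 2 ** k - 1):
    """Greedy ASGQ sketch: add the admissible neighbor with the largest profit until it drops below tol."""
    def work(beta):                      # stand-in for Delta W_beta: evaluations on the new tensor grid
        return int(np.prod([m(bj) for bj in beta]))

    def backward_neighbors(beta):
        return [tuple(b - (1 if j == i else 0) for j, b in enumerate(beta))
                for i in range(len(beta)) if beta[i] > 1]

    index_set = {(1,) * d}
    estimate = mixed_difference(F, (1,) * d, m)

    for _ in range(max_iter):
        # forward neighbors whose backward neighbors all lie in the current (downward-closed) index set
        forward = {tuple(b + (1 if j == i else 0) for j, b in enumerate(beta))
                   for beta in index_set for i in range(d)}
        candidates = [c for c in forward - index_set
                      if all(nb in index_set for nb in backward_neighbors(c))]

        deltas = {c: mixed_difference(F, c, m) for c in candidates}   # Delta E_beta = |Delta[F_beta]|
        profits = {c: abs(deltas[c]) / work(c) for c in candidates}
        best = max(profits, key=profits.get)
        if profits[best] < tol:
            break
        index_set.add(best)
        estimate += deltas[best]
    return estimate, index_set
```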
Randomized QMC
A (rank-1) lattice rule (Sloan 1985; Nuyens 2014) with n points:
Q_n(F) := (1/n) ∑_{k=0}^{n−1} F( (k z mod n) / n ),
where z = (z_1, ..., z_d) ∈ N^d is the generating vector.
A randomly shifted lattice rule (for practical error estimates):
Q_{n,q}(F) = (1/q) ∑_{i=0}^{q−1} Q^{(i)}_n(F) = (1/q) ∑_{i=0}^{q−1} ( (1/n) ∑_{k=0}^{n−1} F( ((k z + Δ^{(i)}) mod n) / n ) ),   (6)
where {Δ^{(i)}}_{i=1}^q are independent random shifts, and M^{QMC} = q × n.
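A minimal sketch of a randomly shifted rank-1 lattice rule. It uses the standard fractional-shift form, which matches (6) when Δ^(i) is drawn uniformly on [0, n)^d; the generating vector z is assumed given (good vectors come from constructions such as Nuyens'), and Gaussian integrands are handled by composing F with the inverse normal CDF.

```python
import numpy as np

def shifted_lattice_rule(F, z, n, q, rng=None):
    """Randomly shifted rank-1 lattice rule, cf. (6): returns the estimator and a standard error.

    F is evaluated at points in [0,1)^d; for Gaussian integrands, map the points
    through scipy.stats.norm.ppf componentwise inside F.
    """
    rng = np.random.default_rng() if rng is None else rng
    z = np.asarray(z, dtype=np.int64)
    d = z.size
    k = np.arange(n)[:, None]                  # k = 0, ..., n-1
    base = (k * z[None, :] % n) / n            # unshifted lattice points in [0,1)^d
    estimates = []
    for _ in range(q):                         # q independent random shifts Delta^(i)
        pts = (base + rng.random(d)) % 1.0     # shifted points, kept in [0,1)^d
        estimates.append(np.mean([F(x) for x in pts]))
    estimates = np.array(estimates)
    return estimates.mean(), estimates.std(ddof=1) / np.sqrt(q)
```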
Wiener Path Generation Methods
{t_i}^N_{i=0}: grid of time steps; {B_{t_i}}^N_{i=0}: Brownian motion values on the grid.
Random walk
▸ Proceeds incrementally: given B_{t_i},
B_{t_{i+1}} = B_{t_i} + √(Δt) Z_i,  Z_i ∼ N(0,1).
▸ All components of Z = (Z_1, ..., Z_N) have the same scale of importance (in terms of variance): isotropic.
Hierarchical Brownian bridge
▸ Given a past value B_{t_i} and a future value B_{t_k}, the value B_{t_j} (with t_i < t_j < t_k) can be generated according to (with ρ = (j − i)/(k − i))
B_{t_j} = (1 − ρ) B_{t_i} + ρ B_{t_k} + Z_j √(ρ(1 − ρ)(k − i)Δt),  Z_j ∼ N(0,1).   (7)
▸ The most important values (capturing a large part of the total variance) are the first components of Z = (Z_1, ..., Z_N).
▸ ↘ the effective dimension (number of important dimensions) by ↗ the anisotropy between different directions ⇒ faster convergence of deterministic quadrature methods (see the construction sketch below).
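A minimal sketch of the hierarchical Brownian bridge construction based on (7), assuming the number of steps N is a power of two; Z[0] fixes the terminal value and later components fill midpoints, so the leading coordinates of Z carry most of the variance.

```python
import numpy as np

def brownian_bridge_path(Z, T):
    """Hierarchical Brownian bridge: build B on a grid of N = len(Z) steps (N a power of two)."""
    N = len(Z)
    dt = T / N
    B = np.zeros(N + 1)                 # B[0] = B_{t_0} = 0
    B[N] = np.sqrt(T) * Z[0]            # terminal value set first (largest variance)
    z_idx, step = 1, N
    while step > 1:
        half = step // 2
        for left in range(0, N, step):  # fill the midpoint between left and left + step
            right, mid = left + step, left + half
            mean = 0.5 * (B[left] + B[right])
            std = np.sqrt(half * dt / 2.0)   # (t_m - t_l)(t_r - t_m)/(t_r - t_l), cf. (7) with rho = 1/2
            B[mid] = mean + std * Z[z_idx]
            z_idx += 1
        step = half
    return B
```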
Richardson Extrapolation (Talay and Tubaro 1990)
Conjecture
Let (X_t)_{0≤t≤T} denote a stochastic process following the rBergomi dynamics, and (X̂^{Δt}_{t_i})_{0≤t_i≤T} its approximation using the hybrid scheme with time step Δt. Then, for sufficiently small Δt and a suitably smooth function f, we assume that
E[f(X̂^{Δt}_T)] = E[f(X_T)] + c Δt + O(Δt²).   (8)
⇒ 2 E[f(X̂^{Δt/2}_T)] − E[f(X̂^{Δt}_T)] = E[f(X_T)] + O(Δt²).
General formulation
{Δt_J = Δt_0 2^{−J}}_{J≥0}: grid sizes; K_R: level of Richardson extrapolation; I(J, K_R): approximation of E[f(X_T)] by terms up to level K_R:
I(J, K_R) = ( 2^{K_R} I(J, K_R − 1) − I(J − 1, K_R − 1) ) / ( 2^{K_R} − 1 ),  J = 1, 2, ...,  K_R = 1, 2, ...   (9)
Advantage
Applying level K_R of Richardson extrapolation dramatically reduces the bias ⇒ ↘ the number of time steps N needed to achieve a certain error tolerance ⇒ ↘ the total dimension of the integration problem.
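A minimal sketch of the extrapolation table (9); I_level0[J] is assumed to hold the crude approximation I(J, 0) computed with time step Δt_0 · 2^(−J).

```python
def richardson_extrapolate(I_level0):
    """Build the Richardson table from (9); table[K][i] holds I(i + K, K).

    The most extrapolated value is table[-1][-1].
    """
    table = [list(I_level0)]
    for K in range(1, len(I_level0)):
        prev = table[K - 1]
        table.append([(2 ** K * prev[J] - prev[J - 1]) / (2 ** K - 1)
                      for J in range(1, len(prev))])
    return table
```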
1 Option Pricing under the Rough Bergomi Model: Motivation &
Challenges
2 Our Approach Based on Hierarchical Deterministic Quadrature
Methods (Bayer, Ben Hammouda, and Tempone 2020a)
3 Numerical Experiments and Results
4 Conclusions and Future Work
14
Numerical Experiments
Table 1: Reference solution of the call option price under the rough Bergomi model, for different parameter constellations. The numbers in parentheses are the statistical error estimates.

Set 1: H = 0.07, K = 1,   S0 = 1, T = 1, ρ = −0.9, η = 1.9, ξ0 = 0.0552 | reference solution 0.0791 (5.6e−05)
Set 2: H = 0.02, K = 1,   S0 = 1, T = 1, ρ = −0.7, η = 0.4, ξ0 = 0.1    | reference solution 0.1246 (9.0e−05)
Set 3: H = 0.02, K = 0.8, S0 = 1, T = 1, ρ = −0.7, η = 0.4, ξ0 = 0.1    | reference solution 0.2412 (5.4e−05)
Set 4: H = 0.02, K = 1.2, S0 = 1, T = 1, ρ = −0.7, η = 0.4, ξ0 = 0.1    | reference solution 0.0570 (8.0e−05)

Set 1 is the closest to the empirical findings of (Gatheral, Jaisson, and Rosenbaum 2018), which suggest H ≈ 0.1. The choice η = 1.9 and ρ = −0.9 is justified by (Bayer, Friz, and Gatheral 2016).
For the remaining three sets, we test the potential of our method in a very rough case, where variance reduction methods are inefficient.
Error Comparison
E_tot: the total error of approximating the expectation in (3).
When using the ASGQ estimator Q_N:
E_tot ≤ |C_rB − C^N_rB| + |C^N_rB − Q_N| ≤ E_B(N) + E_Q(TOL_ASGQ, N),
where E_Q is the quadrature error, E_B is the bias, and TOL_ASGQ is a user-selected tolerance for the ASGQ method.
When using the randomized QMC or MC estimator Q^{MC (QMC)}_N:
E_tot ≤ |C_rB − C^N_rB| + |C^N_rB − Q^{MC (QMC)}_N| ≤ E_B(N) + E_S(M, N),
where E_S is the statistical error and M is the number of samples used for the MC or randomized QMC method.
M^{QMC} and M^{MC} are chosen so that
E_{S,QMC}(M^{QMC}) = E_{S,MC}(M^{MC}) = E_B(N) = E_tot / 2.
Relative Errors and Computational Gains
Table 2: Computational gains achieved by ASGQ and QMC over the MC method to meet a given error tolerance. The ratios (ASGQ/MC) and (QMC/MC) are CPU-time ratios in %, computed for the best configuration with Richardson extrapolation for each method.

Parameter set | Relative error | CPU time ratio (ASGQ/MC) in % | CPU time ratio (QMC/MC) in %
Set 1         | 1%             | 7%                            | 10%
Set 2         | 0.2%           | 5%                            | 1%
Set 3         | 0.4%           | 4%                            | 5%
Set 4         | 2%             | 20%                           | 10%
Computational Work of the MC Method
with Different Configurations
Figure 3.1: Computational work of the MC method with the different configurations in terms of the Richardson extrapolation level, for parameter set 1 in Table 1. [Plot: CPU time versus relative error (log-log); legend: MC, slope = −3.33; MC+Rich(level 1), slope = −2.51; MC+Rich(level 2).]
Computational Work of the QMC Method
with Different Configurations
Figure 3.2: Computational work of the QMC method with the different configurations in terms of the Richardson extrapolation level, for parameter set 1 in Table 1. [Plot: CPU time versus relative error (log-log); legend: QMC, slope = −1.98; QMC+Rich(level 1), slope = −1.57; QMC+Rich(level 2).]
Computational Work of the ASGQ Method
with Different Configurations
Figure 3.3: Computational work of the ASGQ method with the different configurations in terms of the Richardson extrapolation level, for parameter set 1 in Table 1. [Plot: CPU time versus relative error (log-log); legend: ASGQ, slope = −5.18; ASGQ+Rich(level 1), slope = −1.79; ASGQ+Rich(level 2), slope = −1.27.]
Computational Work of the Different Methods
with their Best Configurations
Figure 3.4: Computational work comparison of the different methods with their best configurations, for parameter set 1 in Table 1. [Plot: CPU time versus relative error (log-log); legend: MC+Rich(level 1), slope = −2.51; QMC+Rich(level 1), slope = −1.57; ASGQ+Rich(level 2), slope = −1.27.]
1 Option Pricing under the Rough Bergomi Model: Motivation &
Challenges
2 Our Approach Based on Hierarchical Deterministic Quadrature
Methods (Bayer, Ben Hammouda, and Tempone 2020a)
3 Numerical Experiments and Results
4 Conclusions and Future Work
21
Conclusions and Contributions
1 Proposed novel fast option pricers, for options whose underlyings
follow the rough Bergomi model, based on
▸ Conditional expectations for analytic smoothing.
▸ Hierarchical deterministic quadrature methods.
2 For different parameter constellations, our proposed methods
significantly outperform the MC approach, which is the prevalent
method in this context.
3 We propose an original way to overcome the high dimensionality
of the integration domain, by
▸ Reducing the total input dimension using Richardson extrapolation.
▸ Combining the Brownian bridge construction with ASGQ or QMC.
22
Conclusions and Contributions
4 Our proposed methodology shows a robust performance with
respect to the values of the Hurst parameter H.
5 Our approach is applicable to a wide class of stochastic volatility
models, in particular rough volatility models.
6 More details can be found in
Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Hierarchical adaptive sparse grids and quasi-Monte Carlo for
option pricing under the rough Bergomi model”. In: Quantitative
Finance 20.9 (2020), pp. 1457–1473.
23
Future Work and Remarks
1 For cases where analytic smoothing is not possible, one can use
similar numerical smoothing techniques as proposed in (Bayer,
Ben Hammouda, and Tempone 2020b).
2 Accelerating our novel approach can be achieved by using
▸ Better versions of the ASGQ and QMC methods.
▸ Alternative simulation schemes for rough volatility models.
B Using the fast scheme based on Donsker type approximation
(Horvath, Jacquier, and Muguruza 2017) is not helpful since the
observed weak error rate is of order H for small values of H.
24
References I
Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Hierarchical adaptive sparse grids and quasi-Monte Carlo for
option pricing under the rough Bergomi model”. In: Quantitative
Finance 20.9 (2020), pp. 1457–1473.
Christian Bayer, Chiheb Ben Hammouda, and Raúl Tempone.
“Numerical smoothing and hierarchical approximations for
efficient option pricing and density estimation”. In: arXiv
preprint arXiv:2003.05708 (2020).
Christian Bayer, Peter Friz, and Jim Gatheral. “Pricing under
rough volatility”. In: Quantitative Finance 16.6 (2016),
pp. 887–904.
Christian Bayer, Eric Joseph Hall, and Raúl Tempone. “Weak
error rates for option pricing under the rough Bergomi model”.
In: arXiv preprint arXiv:2009.01219 (2020).
25
References II
Joakim Beck et al. “On the optimal polynomial approximation of
stochastic PDEs by Galerkin and collocation methods”. In:
Mathematical Models and Methods in Applied Sciences 22.09
(2012), p. 1250023.
Mikkel Bennedsen, Asger Lunde, and Mikko S Pakkanen.
“Hybrid scheme for Brownian semistationary processes”. In:
Finance and Stochastics 21.4 (2017), pp. 931–965.
Jim Gatheral, Thibault Jaisson, and Mathieu Rosenbaum.
“Volatility is rough”. In: Quantitative Finance 18.6 (2018),
pp. 933–949.
Abdul-Lateef Haji-Ali et al. “Multi-index stochastic collocation
for random PDEs”. In: Computer Methods in Applied Mechanics
and Engineering (2016).
26
References III
Blanka Horvath, Antoine Jacquier, and Aitor Muguruza.
“Functional central limit theorems for rough volatility”. In:
Available at SSRN 3078743 (2017).
Andreas Neuenkirch and Taras Shalaiko. “The Order Barrier for
Strong Approximation of Rough Volatility Models”. In: arXiv
preprint arXiv:1606.03854 (2016).
Dirk Nuyens. The construction of good lattice rules and
polynomial lattice rules. 2014.
Ian H Sloan. “Lattice methods for multiple integration”. In:
Journal of Computational and Applied Mathematics 12 (1985),
pp. 131–143.
Denis Talay and Luciano Tubaro. “Expansion of the global error
for numerical schemes solving stochastic differential equations”.
In: Stochastic analysis and applications 8.4 (1990), pp. 483–509.
27
Thank you for your attention
27
