Initial covariance matrix for the Kalman Filter
Alexander Litvinenko
Group of Raul Tempone, SRI UQ, and Group of David Keyes,
Extreme Computing Research Center KAUST
Center for Uncertainty
Quantification
http://sri-uq.kaust.edu.sa/
Two variants
Either we assume that a matrix of snapshots,
[q(x, θ1), ..., q(x, θ_nq)],
is given, or we assume that the covariance function is of a certain type.
The Matérn class of covariance functions is defined as

C(r) := C_{ν,ℓ}(r) = (2^{1−ν} σ² / Γ(ν)) (√(2ν) r / ℓ)^ν K_ν(√(2ν) r / ℓ),  (1)

where ℓ is the covariance length, Γ the gamma function, and K_ν the modified Bessel function of the second kind.
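As a quick sanity check, Eq. (1) can be evaluated with SciPy's modified Bessel function K_ν. This is an illustrative sketch (the function name and defaults are mine, not from the talk); the r = 0 limit is handled separately, since K_ν diverges there.

```python
import numpy as np
from scipy.special import gamma, kv  # kv = modified Bessel function K_nu


def matern(r, nu=0.5, ell=1.0, sigma=1.0):
    """Matern covariance C_{nu,ell}(r) as in Eq. (1); C(0) = sigma**2."""
    r = np.asarray(r, dtype=float)
    with np.errstate(invalid="ignore"):  # kv diverges at r = 0
        s = np.sqrt(2.0 * nu) * r / ell
        c = sigma**2 * 2.0**(1.0 - nu) / gamma(nu) * s**nu * kv(nu, s)
    return np.where(r == 0.0, sigma**2, c)
```

For ν = 1/2 this reduces to the exponential covariance σ² exp(−r/ℓ), which gives a convenient cross-check.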
[Two plots of the Matérn covariance on r ∈ [−2, 2]. Left: ν = 1, σ = 0.5, covariance lengths ℓ ∈ {0.5, 0.3, 0.2, 0.1}. Right: ν ∈ {0.15, 0.3, 0.5, 1, 2, 30}.]

Figure: Matérn function for different parameters (computed in sglib).
Types of Matérn covariance
For half-integer smoothness, ν = p + 1/2 with p a non-negative integer, the Matérn covariance is the product of an exponential and a polynomial of order p:

C_{ν=p+1/2}(r) = exp(−√(2ν) r / ℓ) · (Γ(p + 1) / Γ(2p + 1)) · Σ_{i=0}^{p} [(p + i)! / (i!(p − i)!)] (√(8ν) r / ℓ)^{p−i}.  (2)

The most interesting cases are ν = 3/2, for which

C_{ν=3/2}(r) = (1 + √3 r / ℓ) exp(−√3 r / ℓ),  (3)

and ν = 5/2, for which

C_{ν=5/2}(r) = (1 + √5 r / ℓ + 5r² / (3ℓ²)) exp(−√5 r / ℓ).  (4)
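As a consistency check (my own sketch, not from the slides), the closed forms (3) and (4) can be compared numerically against the general Bessel-function definition (1) with σ = 1:

```python
import numpy as np
from scipy.special import gamma, kv


def matern_bessel(r, nu, ell=1.0):
    # General Matern covariance, Eq. (1), with sigma = 1.
    s = np.sqrt(2.0 * nu) * r / ell
    return 2.0**(1.0 - nu) / gamma(nu) * s**nu * kv(nu, s)


def matern_32(r, ell=1.0):
    # Eq. (3): nu = 3/2.
    s = np.sqrt(3.0) * r / ell
    return (1.0 + s) * np.exp(-s)


def matern_52(r, ell=1.0):
    # Eq. (4): nu = 5/2; note 5 r^2 / (3 ell^2) = s^2 / 3 with s = sqrt(5) r / ell.
    s = np.sqrt(5.0) * r / ell
    return (1.0 + s + s**2 / 3.0) * np.exp(-s)
```

Both pairs agree to machine precision, which confirms that the polynomial factors in (3) and (4) follow from the sum in (2) with p = 1 and p = 2.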
Comparison
The snapshot matrix is approximated in low-rank form, [q(x, θ1), ..., q(x, θ_nq)] ≈ AB^T.

n        | rank k | size, MB (C̃ / C) | t, sec. (C̃ / C) | ε for C̃ | max_{i=1..10} |λ_i − λ̃_i|, i | ε₂
4.0·10³  | 10     | 48 / 3            | 0.8 / 0.08       | 7·10⁻³   | 7.0·10⁻², 9                   | 2.0·10⁻⁴
1.05·10⁴ | 18     | 439 / 19          | 7.0 / 0.4        | 7·10⁻⁴   | 5.5·10⁻², 2                   | 1.0·10⁻⁴
2.1·10⁴  | 25     | 2054 / 64         | 45.0 / 1.4       | 1·10⁻⁵   | 5.0·10⁻², 9                   | 4.4·10⁻⁶

Table: Accuracy of the H-matrix approximation, ℓ1 = ℓ3 = 0.1, ℓ2 = 0.5.
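A rank-k factorization Q ≈ AB^T can be illustrated with a dense truncated SVD; a minimal sketch under made-up sizes (the results in the table above use H-matrix arithmetic, which never forms the dense matrix, so this is for intuition only):

```python
import numpy as np

# Build a snapshot matrix of exact rank k, then recover the AB^T factorization.
rng = np.random.default_rng(1)
n, n_q, k = 300, 80, 10
Q = rng.standard_normal((n, k)) @ rng.standard_normal((k, n_q))  # exactly rank k

U, s, Vt = np.linalg.svd(Q, full_matrices=False)
A = U[:, :k] * s[:k]      # n   x k factor (singular values folded into A)
B = Vt[:k, :].T           # n_q x k factor
err = np.linalg.norm(Q - A @ B.T, 2)  # spectral-norm factorization error
```

Because Q has exact rank k here, the error is at the level of machine precision; for a general matrix the truncated SVD gives the best rank-k approximation in the spectral norm.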
Storage cost and computing time
(left)
k | size, MB | t, sec.
1 | 1548     | 33
2 | 1865     | 42
3 | 2181     | 50
4 | 2497     | 59
6 | nem      | -

(right)
k  | size, MB | t, sec.
4  | 463      | 11
8  | 850      | 22
12 | 1236     | 32
16 | 1623     | 43
20 | nem      | -

Table: Dependence of the computing time and storage requirement on the H-matrix rank k. (left) ℓ1 = 0.1, ℓ2 = 0.5, n = 2.3·10⁵; (right) ℓ1 = 0.1, ℓ2 = 0.5, ℓ3 = 0.1, n = 4.61·10⁵.
Examples of H-matrix approximation
[Figure: block partitionings of two H-matrix approximations of the covariance matrix; the number printed in each block (here between 7 and 32) indicates its size or local rank.]
Kullback-Leibler divergence (KLD)
D_KL(P ‖ Q) is a measure of the information lost when the distribution Q is used to approximate P:

D_KL(P ‖ Q) = Σ_i P(i) ln( P(i) / Q(i) ),    D_KL(P ‖ Q) = ∫_{−∞}^{∞} p(x) ln( p(x) / q(x) ) dx,

where p, q are the densities of P and Q. For multivariate normal distributions N0(µ0, Σ0) and N1(µ1, Σ1) on R^k,

2 D_KL(N0 ‖ N1) = tr(Σ1^{−1} Σ0) + (µ1 − µ0)^T Σ1^{−1} (µ1 − µ0) − k − ln( det Σ0 / det Σ1 ).
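The Gaussian formula above translates directly into a few lines of NumPy; an illustrative helper (my naming), assuming both covariances are symmetric positive definite:

```python
import numpy as np


def kld_gauss(mu0, S0, mu1, S1):
    """KLD between N(mu0, S0) and N(mu1, S1), per the formula above."""
    k = len(mu0)
    d = mu1 - mu0
    tr_term = np.trace(np.linalg.solve(S1, S0))   # tr(Sigma_1^{-1} Sigma_0)
    maha = d @ np.linalg.solve(S1, d)             # (mu1-mu0)^T Sigma_1^{-1} (mu1-mu0)
    _, logdet0 = np.linalg.slogdet(S0)            # ln det Sigma_0 (stable)
    _, logdet1 = np.linalg.slogdet(S1)
    return 0.5 * (tr_term + maha - k - (logdet0 - logdet1))
```

Using linear solves and `slogdet` instead of explicit inverses and determinants keeps the evaluation stable even for the ill-conditioned covariances that appear in the tables below.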
Convergence of the KLD with increasing rank k
k  | KLD (L=0.25 / 0.75) | ‖C − C^H‖₂ (L=0.25 / 0.75) | ‖C(C^H)^{−1} − I‖₂ (L=0.25 / 0.75)
5  | 0.51 / 2.3          | 4.0e-2 / 0.1               | 4.8 / 63
6  | 0.34 / 1.6          | 9.4e-3 / 0.02              | 3.4 / 22
8  | 5.3e-2 / 0.4        | 1.9e-3 / 0.003             | 1.2 / 8
10 | 2.6e-3 / 0.2        | 7.7e-4 / 7.0e-4            | 6.0e-2 / 3.1
12 | 5.0e-4 / 2e-2       | 9.7e-5 / 5.6e-5            | 1.6e-2 / 0.5
15 | 1.0e-5 / 9e-4       | 2.0e-5 / 1.1e-5            | 8.0e-4 / 0.02
20 | 4.5e-7 / 4.8e-5     | 6.5e-7 / 2.8e-7            | 2.1e-5 / 1.2e-3
50 | 3.4e-13 / 5e-12     | 2.0e-13 / 2.4e-13          | 4e-11 / 2.7e-9

Table: Dependence of the KLD on the H-matrix approximation rank k; Matérn covariance with parameters L ∈ {0.25, 0.75} and ν = 0.5, domain G = [0, 1]², ‖C(L=0.25, 0.75)‖₂ = {212, 568}.
Convergence of the KLD with increasing rank k
k  | KLD (L=0.25 / 0.75) | ‖C − C^H‖₂ (L=0.25 / 0.75) | ‖C(C^H)^{−1} − I‖₂ (L=0.25 / 0.75)
5  | nan / nan           | 0.05 / 6e-2                | 2.1e+13 / 1e+28
10 | 10 / 10e+17         | 4e-4 / 5.5e-4              | 276 / 1e+19
15 | 3.7 / 1.8           | 1.1e-5 / 3e-6              | 112 / 4e+3
18 | 1.2 / 2.7           | 1.2e-6 / 7.4e-7            | 31 / 5e+2
20 | 0.12 / 2.7          | 5.3e-7 / 2e-7              | 4.5 / 72
30 | 3.2e-5 / 0.4        | 1.3e-9 / 5e-10             | 4.8e-3 / 20
40 | 6.5e-8 / 1e-2       | 1.5e-11 / 8e-12            | 7.4e-6 / 0.5
50 | 8.3e-10 / 3e-3      | 2.0e-13 / 1.5e-13          | 1.5e-7 / 0.1

Table: Dependence of the KLD on the H-matrix approximation rank k; Matérn covariance with parameters L ∈ {0.25, 0.75} and ν = 1.5, domain G = [0, 1]², ‖C(L=0.25, 0.75)‖₂ = {720, 1068}.
Application of large covariance matrices
1. Kriging estimate: ŝ := C_sy C_yy^{−1} y.
2. Estimation of the variance σ̂²: the diagonal of the conditional covariance matrix, C_ss|y = diag( C_ss − C_sy C_yy^{−1} C_ys ).
3. Geostatistical optimal design: φ_A := n^{−1} trace( C_ss|y ), φ_C := c^T ( C_ss − C_sy C_yy^{−1} C_ys ) c.
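Items 1 and 2 can be sketched densely as follows (hypothetical helper names; in practice these products are carried out in H-matrix arithmetic, since C_yy is large):

```python
import numpy as np


def kriging_estimate(C_sy, C_yy, y):
    # hat{s} = C_sy C_yy^{-1} y; a linear solve avoids forming C_yy^{-1} explicitly.
    return C_sy @ np.linalg.solve(C_yy, y)


def conditional_variance(C_ss, C_sy, C_yy):
    # diag(C_ss - C_sy C_yy^{-1} C_ys), with C_ys = C_sy^T by symmetry.
    return np.diag(C_ss - C_sy @ np.linalg.solve(C_yy, C_sy.T))
```

A quick sanity check: if s coincides with the observed y (so C_ss = C_sy = C_yy), the kriging estimate returns y itself and the conditional variance vanishes.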
Mean and variance in the rank-k format
ū := (1/Z) Σ_{i=1}^{Z} u_i = (1/Z) Σ_{i=1}^{Z} A b_i = A b̄.  (5)

The cost is O(k(Z + n)).

C = (1/(Z − 1)) W_c W_c^T ≈ (1/(Z − 1)) U_k Σ_k Σ_k^T U_k^T.  (6)

The cost is O(k²(Z + n)).

Lemma: Let ‖W − W_k‖₂ ≤ ε and let ū_k be the rank-k approximation of the mean ū. Then
a) ‖ū − ū_k‖ ≤ ε / √Z,
b) ‖C − C_k‖ ≤ ε² / (Z − 1).
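Eqs. (5) and (6) can be sketched with a dense SVD (sizes are illustrative; since the snapshots are u_i = A b_i, the centered matrix W_c has rank at most k, so the rank-k truncation is exact in this toy setting):

```python
import numpy as np

rng = np.random.default_rng(0)
n, Z, k = 200, 50, 5
A = rng.standard_normal((n, k))
B = rng.standard_normal((k, Z))                 # columns are the vectors b_i
W = A @ B                                       # snapshots u_i = A b_i

u_mean = A @ B.mean(axis=1)                     # Eq. (5): A b_bar, cost O(k(Z + n))
Wc = W - u_mean[:, None]                        # centered snapshot matrix W_c
U, s, _ = np.linalg.svd(Wc, full_matrices=False)
Uk, sk = U[:, :k], s[:k]
C_k = (Uk * sk**2) @ Uk.T / (Z - 1)             # Eq. (6): U_k S_k S_k^T U_k^T / (Z - 1)
C_full = Wc @ Wc.T / (Z - 1)                    # dense reference covariance
```

The point of the rank-k format is that ū and C_k are assembled from the factors A, U_k, Σ_k alone, never from the full n × n covariance.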