Data sparse approximation of the Karhunen-Loève expansion
A. Litvinenko, joint work with B. Khoromskij and H. G. Matthies
Institut für Wissenschaftliches Rechnen, Technische Universität Braunschweig,
0531-391-3008, litvinen@tu-bs.de
December 5, 2008
Outline
Introduction
KLE
Hierarchical Matrices
Low Kronecker rank approximation
Application
Stochastic PDE
We consider
− div(κ(x, ω)∇u) = f(x, ω) in G,
u = 0 on ∂G,
with stochastic coefficient κ(x, ω), x ∈ G ⊆ R^d, and ω belonging to the
space of random events Ω.
Figure: Examples of computational domains G with a non-rectangular grid.
Covariance functions
Describing the random field f(x, ω) requires specifying its spatial correlation structure
cov_f(x, y) = E[(f(x, ·) − µ_f(x))(f(y, ·) − µ_f(y))].
Let h = √(Σ_{i=1}^{3} h_i²/ℓ_i²), where h_i := x_i − y_i, i = 1, 2, 3, and ℓ_i are the covariance lengths.
Examples: Gaussian cov(h) = exp(−h²), exponential cov(h) = exp(−h).
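As a concrete illustration (not part of the original slides), the following Python/NumPy sketch assembles such a covariance matrix on a regular grid; the grid size, the covariance lengths and the helper name covariance_matrix are illustrative choices.

```python
import numpy as np

def covariance_matrix(points, lengths, kind="exponential"):
    """Dense covariance matrix for the scaled distance
    h = sqrt(sum_i (x_i - y_i)^2 / l_i^2)."""
    diff = points[:, None, :] - points[None, :, :]      # all pairwise x - y
    h = np.sqrt(((diff / lengths) ** 2).sum(axis=-1))   # scaled distances
    if kind == "gaussian":
        return np.exp(-h ** 2)
    return np.exp(-h)                                   # exponential kernel

# illustrative 32 x 32 grid on [0,1]^2 with covariance lengths 0.1 and 0.5
n1d = 32
x = np.linspace(0.0, 1.0, n1d)
X, Y = np.meshgrid(x, x, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])           # n = 32^2 points
C = covariance_matrix(pts, lengths=np.array([0.1, 0.5]))
```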
Outline
Introduction
KLE
Hierarchical Matrices
Low Kronecker rank approximation
Application
KLE
The Karhunen-Loève expansion is the series
κ(x, ω) = µ_k(x) + Σ_{i=1}^{∞} √λ_i φ_i(x) ξ_i(ω),
where ξ_i(ω) are uncorrelated random variables and φ_i are basis functions in L²(G).
The eigenpairs (λ_i, φ_i) are the solutions of
Tφ_i = λ_i φ_i,   φ_i ∈ L²(G),   i ∈ N,
where T : L²(G) → L²(G),   (Tφ)(x) := ∫_G cov_k(x, y) φ(y) dy.
Discrete eigenvalue problem
Let
W_ij := Σ_{k,m} ∫_G b_i(x) b_k(x) dx  C_km  ∫_G b_j(y) b_m(y) dy,
M_ij := ∫_G b_i(x) b_j(x) dx.
Then we solve
W φ_ℓ^h = λ_ℓ M φ_ℓ^h,   where W := MCM.
Approximate C and M in
◮ the H-matrix format
◮ the low Kronecker rank format
and use the Lanczos method to compute the m largest eigenvalues.
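A minimal sketch of this discrete step, assuming C is available as a dense NumPy array (e.g. from the previous sketch) and using a crude diagonal stand-in for the finite element mass matrix; SciPy's eigsh supplies the Lanczos iteration for the generalized problem W φ = λ M φ.

```python
import numpy as np
from scipy.sparse.linalg import eigsh

# assumed setup: covariance matrix C on a 32 x 32 grid (see previous sketch)
n1d = 32
x = np.linspace(0.0, 1.0, n1d)
X, Y = np.meshgrid(x, x, indexing="ij")
pts = np.column_stack([X.ravel(), Y.ravel()])
d = pts[:, None, :] - pts[None, :, :]
C = np.exp(-np.sqrt((d[..., 0] / 0.1) ** 2 + (d[..., 1] / 0.5) ** 2))

n = C.shape[0]
M = np.eye(n) / n                    # stand-in (lumped) mass matrix
W = M @ C @ M                        # W := M C M

m = 20                               # number of KLE terms
lam, phi = eigsh(W, k=m, M=M, which="LM")   # Lanczos for W phi = lambda M phi
order = np.argsort(lam)[::-1]        # sort eigenpairs by decreasing eigenvalue
lam, phi = lam[order], phi[:, order]
```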
Outline
Introduction
KLE
Hierarchical Matrices
Low Kronecker rank approximation
Application
Examples of H-matrix approximations of cov(x, y) = e^{−2|x−y|}
[Two block-structure diagrams; the number printed in each block indicates its rank. See the caption below.]
Figure: H-matrix approximations C̃ ∈ R^{n×n}, n = 32², with standard (left) and
weak (right) admissibility block partitionings. The biggest dense (dark) blocks
are of size 32 × 32; the maximal block rank is k = 4 (left) and k = 13 (right).
H-matrices: numerics
To assemble the low-rank blocks we use ACA [Bebendorf et al.].
Dependence of the computational time and storage requirements of C̃ on the
rank k, n = 32²:

  k    time (sec.)   memory (MB)   ‖C − C̃‖₂/‖C‖₂
  2    0.04          2             3.5e−5
  6    0.1           4             1.4e−5
  9    0.14          5.4           1.4e−5
  12   0.17          6.8           3.1e−7
  17   0.23          9.3           6.3e−8

For the dense matrix C the time is 3.3 sec. and the storage is 140 MB.
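For orientation, a bare-bones ACA sketch with partial pivoting; this is not the authors' implementation, and the entry access get_entry, the stopping rule and the pivot strategy are simplified illustrations.

```python
import numpy as np

def aca(get_entry, m, n, eps=1e-4, max_rank=50):
    """Adaptive cross approximation with partial pivoting (simplified).

    get_entry(i, j) returns one kernel value cov(x_i, y_j); the m x n block
    is returned as factors U (m x r), V (n x r) with block ~ U @ V.T.
    """
    U, V, used_rows = [], [], set()
    i_star = 0
    while len(U) < max_rank:
        used_rows.add(i_star)
        row = np.array([get_entry(i_star, j) for j in range(n)], float)
        for u, v in zip(U, V):                 # residual of row i_star
            row -= u[i_star] * v
        j_star = int(np.argmax(np.abs(row)))
        pivot = row[j_star]
        if abs(pivot) < 1e-14:
            break
        col = np.array([get_entry(i, j_star) for i in range(m)], float)
        for u, v in zip(U, V):                 # residual of column j_star
            col -= v[j_star] * u
        u_k, v_k = col / pivot, row
        U.append(u_k)
        V.append(v_k)
        if np.linalg.norm(u_k) * np.linalg.norm(v_k) < eps:
            break                              # simplified stopping criterion
        rest = [i for i in range(m) if i not in used_rows]
        if not rest:
            break
        i_star = max(rest, key=lambda i: abs(u_k[i]))   # next row pivot
    if not U:
        return np.zeros((m, 0)), np.zeros((n, 0))
    return np.column_stack(U), np.column_stack(V)

# usage sketch with the exponential kernel on hypothetical point sets xs, ys:
# U, V = aca(lambda i, j: np.exp(-np.linalg.norm(xs[i] - ys[j])), len(xs), len(ys))
```

In the H-matrix construction such a routine is applied only to the admissible (low-rank) blocks; the tolerance eps plays the role of the block-wise accuracy.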
H-matrices: numerics

  Standard admissibility          Weak admissibility
  k   size, MB   t, sec.          k    size, MB   t, sec.
  1   1548       33               4    463        11
  2   1865       42               8    850        22
  3   2181       50               12   1236       32
  4   2497       59               16   1623       43
  6   nem        -                20   nem        -

Table: Computing times and storage requirements depending on the H-matrix rank k
for the exponential covariance function. Left: standard admissibility condition,
geometry shown in Fig. 1 (middle), ℓ1 = 0.1, ℓ2 = 0.5, n = 2.3 · 10⁵. Right: weak
admissibility condition, geometry shown in Fig. 1 (right), ℓ1 = 0.1, ℓ2 = 0.5,
ℓ3 = 0.1, n = 4.61 · 10⁵.
H-matrices: numerics

  n       2.4 · 10⁴         3.5 · 10⁴         6.8 · 10⁴        2.3 · 10⁵
  k       t1         t2     t1         t2     t1        t2     t1          t2
  3       3 · 10⁻³   0.2    6.0 · 10⁻³ 0.4    1 · 10⁻²  1      5.0 · 10⁻²  4
  6       6 · 10⁻³   0.4    1.1 · 10⁻² 0.7    2 · 10⁻²  2      9.0 · 10⁻²  7
  9       8 · 10⁻³   0.5    1.5 · 10⁻² 1.0    3 · 10⁻²  3      1.3 · 10⁻¹  11
  full    0.62              2.48              10               140

Table: t1 - computing times (in sec.) required for an H-matrix and a dense
matrix-vector multiplication, t2 - times to set up C̃ ∈ R^{n×n}.
H-matrices: numerics
Exponential covariance cov(h) = exp(−h); covariance matrix C ∈ R^{n×n}, n = 65².

  ℓ1     ℓ2     ‖C − C̃‖₂/‖C‖₂
  0.01   0.02   3 · 10⁻²
  0.1    0.2    8 · 10⁻³
  1      2      2.8 · 10⁻⁶
m eigenvalues

  matrix info (MB, sec.)                      m
  n          k    C̃, MB   C̃, sec.    2      5      10     20     40     80
  2.4 · 10⁴  4    12       0.2         0.6    0.9    1.3    2.3    4.2    8
  6.8 · 10⁴  8    95       2           2.4    3.8    5.6    8.4    18.0   28
  2.3 · 10⁵  12   570      11          10.0   17.0   24.0   39.0   70.0   150

Table: Time required for computing m eigenpairs of the exponential covariance
function with ℓ1 = ℓ3 = 0.1, ℓ2 = 0.5. The geometry is shown in Fig. 1 (right).
Outline
Introduction
KLE
Hierarchical Matrices
Low Kronecker rank approximation
Application
Sparse tensor decompositions of kernels
cov(x, y) = cov(x − y)
We want to approximate C ∈ R^{N×N}, N = n^d, by
C_r = Σ_{k=1}^{r} V_k^1 ⊗ ... ⊗ V_k^d
such that ‖C − C_r‖ ≤ ε. The storage of C is O(N²) = O(n^{2d}), whereas the
storage of C_r is O(rdn²).
To define the factors V_k^i use the SVD.
Approximate all V_k^i in the H-matrix format ⇒ HKT (hierarchical Kronecker
tensor) format.
See the basic arithmetic in [Hackbusch, Khoromskij, Tyrtyshnikov].
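For d = 2 the factors can be obtained from an SVD of a rearranged C (the Van Loan–Pitsianis construction); a small sketch assuming C is given densely with N = n², where the helper name kron_rank_r is chosen here.

```python
import numpy as np

def kron_rank_r(C, n, r):
    """Kronecker rank-r approximation C_r = sum_k kron(A_k, B_k) of a matrix
    C in R^{n^2 x n^2}, via an SVD of the rearranged (block-vectorized) C."""
    # kron(A, B) turns into the rank-1 matrix vec(A) vec(B)^T after rearranging
    R = C.reshape(n, n, n, n).transpose(0, 2, 1, 3).reshape(n * n, n * n)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A = [np.sqrt(s[k]) * U[:, k].reshape(n, n) for k in range(r)]
    B = [np.sqrt(s[k]) * Vt[k, :].reshape(n, n) for k in range(r)]
    return A, B

# usage sketch (small n): relative error of the Kronecker rank-r approximation
# A, B = kron_rank_r(C, n1d, r=5)
# C_r = sum(np.kron(Ak, Bk) for Ak, Bk in zip(A, B))
# print(np.linalg.norm(C - C_r) / np.linalg.norm(C))
```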
Tensor approximation
W φ_ℓ^h = λ_ℓ M φ_ℓ^h,   where W := MCM.
Approximate
M ≈ Σ_{ν=1}^{d} M_ν^(1) ⊗ M_ν^(2),   C ≈ Σ_{ν=1}^{q} C_ν^(1) ⊗ C_ν^(2),   φ ≈ Σ_{ν=1}^{r} φ_ν^(1) ⊗ φ_ν^(2),
where M_ν^(j), C_ν^(j) ∈ R^{n×n} and φ_ν^(j) ∈ R^n.
Example: for the mass matrix M ∈ R^{N×N} it holds that
M = M^(1) ⊗ I + I ⊗ M^(1),
where M^(1) ∈ R^{n×n} is the one-dimensional mass matrix.
Hypothesis: the Kronecker rank of M stays small even for a more general
domain with a non-regular grid.
Suppose C = Σ_{ν=1}^{q} C_ν^(1) ⊗ C_ν^(2) and φ = Σ_{j=1}^{r} φ_j^(1) ⊗ φ_j^(2). Then the
tensor-vector product is defined as
Cφ = Σ_{ν=1}^{q} Σ_{j=1}^{r} (C_ν^(1) φ_j^(1)) ⊗ (C_ν^(2) φ_j^(2)).
The complexity is O(qrkn log n).
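A sketch of this product for dense Kronecker factors; with H-matrix factors each factor product C_ν^(i) φ_j^(i) would cost O(kn log n) instead of O(n²). The function name and the list-of-factors layout are illustrative.

```python
import numpy as np

def kron_matvec(C1, C2, p1, p2):
    """y = C phi for C = sum_nu kron(C1[nu], C2[nu]) and
    phi = sum_j kron(p1[j], p2[j]), kept in Kronecker (low-rank) form.

    Returns the factors of y = sum_{nu,j} kron(C1[nu] @ p1[j], C2[nu] @ p2[j]);
    the full N-vector is never formed.
    """
    y1 = [A @ v for A in C1 for v in p1]   # all products C1[nu] p1[j]
    y2 = [B @ w for B in C2 for w in p2]   # matching products C2[nu] p2[j]
    return y1, y2                          # y = sum_k kron(y1[k], y2[k])

# check against the explicitly assembled matrix and vector (small n only):
# C_full   = sum(np.kron(A, B) for A, B in zip(C1, C2))
# phi_full = sum(np.kron(v, w) for v, w in zip(p1, p2))
# y1, y2   = kron_matvec(C1, C2, p1, p2)
# y_full   = sum(np.kron(a, b) for a, b in zip(y1, y2))
# assert np.allclose(C_full @ phi_full, y_full)
```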
Numerical examples of tensor approximations
The Gaussian kernel exp(−h²) has Kronecker rank 1.
The exponential kernel exp(−h) can be approximated by a tensor of low
Kronecker rank:

  r                     1      2      3     4      5       6       10
  ‖C − C_r‖∞/‖C‖∞       11.5   1.7    0.4   0.14   0.035   0.007   2.8e−8
  ‖C − C_r‖₂/‖C‖₂       6.7    0.52   0.1   0.03   0.008   0.001   5.3e−9
Example
Let G = [0, 1]² and let L_h be the stiffness matrix computed with the five-point
formula. Then ‖L_h‖₂ ≤ 8h⁻² cos²(πh/2) < 8h⁻².
Lemma
The (n − 1)² eigenvectors of L_h are u_νµ (1 ≤ ν, µ ≤ n − 1):
u_νµ(x, y) = sin(νπx) sin(µπy),   (x, y) ∈ G_h.
The corresponding eigenvalues are
λ_νµ = 4h⁻² (sin²(νπh/2) + sin²(µπh/2)),   1 ≤ ν, µ ≤ n − 1.
Use the Lanczos method with the matrix in the HKT format to compute the eigenpairs of
L_h v_i = λ_i v_i,   i = 1, ..., N.
Then we compare the computed eigenpairs with the analytically known ones.
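A small self-contained check of the lemma in Python/SciPy, using a plain sparse five-point matrix rather than the HKT format; the grid parameter n is an arbitrary illustrative choice.

```python
import numpy as np
from scipy.sparse import diags, identity, kron
from scipy.sparse.linalg import eigsh

n = 50                                   # illustrative choice, h = 1/n
h = 1.0 / n
# 1D second-difference matrix on the n-1 interior points, scaled by h^{-2}
T = diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n - 1, n - 1)) / h ** 2
I = identity(n - 1)
Lh = kron(T, I) + kron(I, T)             # five-point stiffness matrix on (0,1)^2

m = 6                                    # compare a few smallest eigenvalues
lam = np.sort(eigsh(Lh.tocsc(), k=m, sigma=0.0, which="LM",
                    return_eigenvectors=False))

nu = np.arange(1, n)
s = np.sin(nu * np.pi * h / 2.0) ** 2    # analytic formula from the lemma
exact = np.sort((4.0 / h ** 2 * (s[:, None] + s[None, :])).ravel())[:m]
print(np.max(np.abs(lam - exact) / exact))
```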
Outline
Introduction
KLE
Hierarchical Matrices
Low Kronecker rank approximation
Application
Higher order moments
Let the operator K be deterministic and
Ku(θ) = Σ_{α∈J} K u^(α) H_α(θ) = f̃(θ) = Σ_{α∈J} f^(α) H_α(θ),
with u^(α) = [u_1^(α), ..., u_N^(α)]^T. Projecting onto each H_α we obtain
K u^(α) = f^(α).
The KLE of f(θ) is
f(θ) = f̄ + Σ_ℓ √λ_ℓ φ_ℓ(θ) f_ℓ = Σ_ℓ Σ_α √λ_ℓ φ_ℓ^(α) H_α(θ) f_ℓ = Σ_α H_α(θ) f^(α),
where f^(α) = Σ_ℓ √λ_ℓ φ_ℓ^(α) f_ℓ.
The third moment of u is
M_u^(3) = E[ Σ_{α,β,γ} u^(α) ⊗ u^(β) ⊗ u^(γ) H_α H_β H_γ ] = Σ_{α,β,γ} u^(α) ⊗ u^(β) ⊗ u^(γ) c_{α,β,γ},
where
c_{α,β,γ} := E(H_α(θ) H_β(θ) H_γ(θ)) = c_{α,β}^(γ) · γ!,   and
c_{α,β}^(γ) := α! β! / ((g − α)! (g − β)! (g − γ)!),   g := (α + β + γ)/2.
Using u^(α) = K⁻¹ f^(α) = Σ_ℓ √λ_ℓ φ_ℓ^(α) K⁻¹ f_ℓ and u_ℓ := K⁻¹ f_ℓ, we obtain
M_u^(3) = Σ_{p,q,r} t_{p,q,r} u_p ⊗ u_q ⊗ u_r,   where
t_{p,q,r} := √(λ_p λ_q λ_r) Σ_{α,β,γ} φ_p^(α) φ_q^(β) φ_r^(γ) c_{α,β,γ}.
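The triple products c_{α,β,γ} can be evaluated directly from this formula, applied componentwise to the multi-indices; the sketch below uses helper names chosen here, and the zero cases follow from requiring non-negative integer factorial arguments.

```python
import numpy as np
from math import factorial

def mfact(alpha):
    """Multi-index factorial alpha! = prod_i alpha_i!."""
    return np.prod([factorial(int(a)) for a in alpha])

def c_triple(alpha, beta, gamma):
    """c_{alpha,beta,gamma} = E[H_alpha H_beta H_gamma] via the factorial
    formula above; all operations act componentwise on the multi-indices."""
    al, be, ga = np.array(alpha), np.array(beta), np.array(gamma)
    total = al + be + ga
    if np.any(total % 2 != 0):
        return 0.0                  # g = (alpha+beta+gamma)/2 must be integer
    g = total // 2
    if np.any(g < al) or np.any(g < be) or np.any(g < ga):
        return 0.0                  # otherwise a negative factorial argument
    return (mfact(al) * mfact(be) * mfact(ga)
            / (mfact(g - al) * mfact(g - be) * mfact(g - ga)))

# univariate sanity check: E[H_1 H_1 H_2] = 2 for probabilists' Hermite
# print(c_triple((1,), (1,), (2,)))
```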
Literature
1. B. N. Khoromskij, A. Litvinenko, H. G. Matthies, Application of
hierarchical matrices for computing the Karhunen-Loève expansion,
Computing, 2008, Springer Wien,
http://dx.doi.org/10.1007/s00607-008-0018-3
2. B. N. Khoromskij, A. Litvinenko, Data Sparse Computation of the
Karhunen-Loève Expansion, 2008, AIP Conference Proceedings,
1048-1, pp. 311-314.
3. H. G. Matthies, Uncertainty Quantification with Stochastic Finite
Elements, Encyclopedia of Computational Mechanics, Wiley, 2007.
4. W. Hackbusch, B. N. Khoromskij, S. A. Sauter, and E. E.
Tyrtyshnikov, Use of Tensor Formats in Elliptic Eigenvalue
Problems, Preprint 78/2008, MPI for Mathematics in the Sciences, Leipzig.
Thank you for your attention!
Questions?