Motivation
Objectives
Background of Supporting Algorithms and Theory
Numerical tests and results
Conclusions
Acknowledgements
Bibliography
Thanks
Multi-level Reduced Order Modeling with Robust Error
Bounds
Mohammad G. Abdo
and
Hany S. Abdel-Khalik
School of Nuclear Engineering
Purdue University
mgabdo@ncsu.edu and abdelkhalik@purdue.edu
June 10, 2015
Motivation
ROM is indispensable for analyses that require repetitive model executions.
ROM is premised on the assumption that the intrinsic dimensionality is much smaller than the nominal dimensionality.
ROM discards components with negligible impact on the reactor attributes of interest, and hence must be equipped with error metrics.
Active subspaces can be extracted from a reduced-complexity model that undergoes similar physics.
Objectives
Apply the reduction and identify the active subspaces with a much more efficient methodology.
Equip the reduced model with a robust error bound that can test the representativeness of the active subspaces, and hence define a validation domain that includes different conditions and different scenarios.
ROM
Algorithms
Error Estimation
Definition
A nonlinear function f is said to be reducible if there exist matrices \(U_{r_x} \in \mathbb{R}^{n \times r_x}\) and/or \(U_{r_y} \in \mathbb{R}^{m \times r_y}\) such that:

\[
\frac{\left\| f(x) - U_{r_y} U_{r_y}^T\, f\!\left(U_{r_x} U_{r_x}^T x\right) \right\|}{\left\| f(x) \right\|} \le \epsilon ,
\]

for a user-specified tolerance \(\epsilon\).
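As a concrete illustration, the relative error in this definition can be evaluated numerically. The sketch below is an editorial NumPy version; the function name `reduction_error` and its interface are illustrative assumptions, not part of the presented algorithms:

```python
import numpy as np

def reduction_error(f, x, Urx=None, Ury=None):
    """Relative error of the reduced model at one parameter sample x.

    Urx (n x rx) and/or Ury (m x ry) are assumed to have orthonormal
    columns spanning the active parameter/response subspaces.
    """
    fx = f(x)
    # Project the parameters onto the active parameter subspace, if given.
    x_red = Urx @ (Urx.T @ x) if Urx is not None else x
    f_red = f(x_red)
    # Project the response onto the active response subspace, if given.
    if Ury is not None:
        f_red = Ury @ (Ury.T @ f_red)
    return np.linalg.norm(fx - f_red) / np.linalg.norm(fx)
```

f is then reducible (to tolerance ε) exactly when this quantity stays below ε over the parameter domain of interest.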
Reduction Algorithms
In our context, reduction algorithms refer to two different algorithms [4], each used at a different interface:
Gradient-free snapshot reduction algorithm (reduces the response interface).
Gradient-based reduction algorithm (reduces the parameter interface).
Gradient-free Snapshot Reduction Algorithm
Consider the reducible model under inspection to be described by:

\[
\phi = f(x), \qquad (1)
\]

1. Generate k random parameter realizations \(\{x_i\}_{i=1}^{k}\).
2. Execute the forward model k times, record the corresponding k responses \(\{\phi_i = f(x_i)\}_{i=1}^{k}\), and aggregate them into the snapshot matrix \(\Phi = \left[\, \phi_1 \;\; \phi_2 \;\; \cdots \;\; \phi_k \,\right] \in \mathbb{R}^{m \times k}\).
Snapshot Reduction (cont.)
3. Calculate the singular value decomposition (SVD): \(\Phi = U_y S_y V_y^T\), where \(U_y \in \mathbb{R}^{m \times k}\).
4. Collect the first \(r_y\) columns of \(U_y\) in \(U_{r_y}\) to span the active response subspace, where \(r_y \le \min(m, k)\).
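The four steps above can be sketched in NumPy as follows. This is a minimal sketch under assumed interfaces (`f` maps R^n to R^m; Phi denotes the snapshot matrix as in the SVD step):

```python
import numpy as np

def snapshot_reduction(f, n, k, ry, seed=None):
    rng = np.random.default_rng(seed)
    # Step 1: generate k random parameter realizations.
    X = rng.normal(size=(n, k))
    # Step 2: run the forward model k times and aggregate the snapshots.
    Phi = np.column_stack([f(X[:, i]) for i in range(k)])  # m x k
    # Step 3: singular value decomposition of the snapshot matrix.
    Uy, Sy, VyT = np.linalg.svd(Phi, full_matrices=False)
    # Step 4: the first ry left singular vectors span the active response subspace.
    return Uy[:, :ry]
```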
Gradient-based Reduction
This algorithm may be described by the following steps:
1. Execute the adjoint model k times to get:

\[
\mathbf{G} = \begin{bmatrix} \left.\dfrac{dR_1^{\mathrm{pseudo}}}{dx}\right|_{x_1} & \cdots & \left.\dfrac{dR_k^{\mathrm{pseudo}}}{dx}\right|_{x_k} \end{bmatrix}.
\]

2. From the SVD \(\mathbf{G} = U_x S_x V_x^T\), one can pick the first \(r_x\) columns of \(U_x\) (denoted by \(U_{r_x}\)) to span the active parameter subspace.
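Under the assumption that an adjoint solve returns the gradient of a pseudo response at a given sample, the two steps can be sketched as follows (the names `grad` and `gradient_reduction` are illustrative):

```python
import numpy as np

def gradient_reduction(grad, n, k, rx, seed=None):
    rng = np.random.default_rng(seed)
    # Step 1: k adjoint solves, each returning a gradient dR/dx at a sample x_i.
    X = rng.normal(size=(n, k))
    G = np.column_stack([grad(X[:, i]) for i in range(k)])  # n x k
    # Step 2: SVD of G; the first rx left singular vectors span the
    # active parameter subspace.
    Ux, Sx, VxT = np.linalg.svd(G, full_matrices=False)
    return Ux[:, :rx]
```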
Error Estimation
To estimate the error resulting from the reduction, the \(ij\)-th entry of the operator \(\mathbf{E}\) can be written as:

\[
[\mathbf{E}]_{ij} = \frac{ f_i(x_j) - U_{r_y}(i,:)\, U_{r_y}^T\, f\!\left(U_{r_x} U_{r_x}^T x_j\right) }{ f_i(x_j) },
\]

where \(U_{r_x} \in \mathbb{R}^{n \times r_x}\) and \(U_{r_y} \in \mathbb{R}^{m \times r_y}\) are matrices whose orthonormal columns span the parameter and response subspaces, respectively.
We need to estimate an upper bound for the error in each response.
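A direct (if expensive) way to evaluate E over N test samples, assuming the same hypothetical interfaces as in the sketches above:

```python
import numpy as np

def error_operator(f, X, Urx, Ury):
    """X is n x N; returns the m x N relative error operator E."""
    N = X.shape[1]
    F = np.column_stack([f(X[:, j]) for j in range(N)])          # full responses
    X_red = Urx @ (Urx.T @ X)                                    # reduced parameters
    F_red = np.column_stack([f(X_red[:, j]) for j in range(N)])
    F_red = Ury @ (Ury.T @ F_red)                                # reduced responses
    return (F - F_red) / F                                       # entrywise relative error
```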
Error Estimation (cont.)
The 2-norm of \(\mathbf{E}\) (or of each row of \(\mathbf{E}\)) can be estimated using:

\[
P\left( \|\mathbf{E}\| \le \eta \max_{i=1,2,\dots,s} \left\| \mathbf{E}\, w^{(i)} \right\| \right) \ge 1 - \left( \int_0^{1/\eta^2} \mathrm{pdf}_{w_1^2}(t)\, dt \right)^{s}, \qquad (2)
\]

where \(\mathbf{E} \in \mathbb{R}^{m \times N}\), with N the number of sampled responses, and w is an N-dimensional random vector sampled from a known distribution D.
This probabilistic statement has its roots in Dixon's theory [1983], where the \(w^{(i)}\) are sampled from a standard normal distribution and an analytic value is found for the probability in terms of the multiplier η.
Intuitively, if the user presets a probability of success, the value of the multiplier η depends solely on the distribution D from which w is sampled.
Careful inspection showed that the estimated error can be multiple orders of magnitude larger than the actual error (i.e., unnecessarily conservative).
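The bound in Eq. (2) is cheap to evaluate: only s matrix-vector products. A sketch with w drawn from a standard normal, as in Dixon's setting (the binomial variant discussed next simply changes the sampling distribution):

```python
import numpy as np

def randomized_norm_bound(E, s=10, eta=10.0, seed=None):
    """Probabilistic upper bound on ||E||_2 from s random probes."""
    rng = np.random.default_rng(seed)
    N = E.shape[1]
    # max_i ||E w^(i)|| over s probe vectors w ~ N(0, I).
    probe_max = max(np.linalg.norm(E @ rng.standard_normal(N)) for _ in range(s))
    return eta * probe_max
```

Larger η makes the bound hold with higher probability but also makes it looser, which is the conservatism the distribution selection below tries to reduce.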
Error Estimation (cont.)
This motivated the numerical inspection of many distributions and the selection of the most practical one (i.e., the one giving the smallest multiplier η).
The inspection showed that the distribution giving the smallest η is the binomial distribution [2, 3].
Distribution Selection
Figure : Uniform Distribution. Figure : Gaussian Distribution.
Estimated norm is orders of magnitude off.
12 / 51
Distribution Selection [cont.]
The binomial distribution shows a linear structure around the 45-degree solid line.
This means that even in a failure case (i.e., the actual norm is greater than the bound), the estimated norm will still be very close to the actual norm.
This appealing behaviour motivates the use of the binomial distribution to get rid of unnecessarily conservative bounds.
Figure: Binomial Distribution.
Case Study 1
Case Study 2
Case Study 3
Case Study 1
Benchmark lattice for Peach Bottom Atomic Power Station Unit 2 (PB-2), an 1112 MWe BWR; the benchmark was designed by OECD/NEA and the plant was manufactured by General Electric.
Figure: 7x7 BWR Benchmark.
Case Study 1 [cont.]
The idea is that, for such an assembly, running the forward and assembly models \(r_x\) times to identify the active parameter subspace is still very expensive.
Can we extract the active subspace from running only a subdomain of the problem, or a reduced-complexity model?
Figure: Calculation Levels.
http://www.nrc.gov/about-nrc/emerg-preparedness/images/fuel-pellet-assembly.jpg
Case Study 1 [cont.]
Why don't we extract the parameter active subspaces for each of the nine pin cells and try to visualize them? (Note that \(x \in \mathbb{R}^{49784}\)!)
We are not visualizing to see what the topology of this n-dimensional surface looks like; we are only trying to see how far each subspace is from the others, i.e., how different they are.
Figure: Scatter Visualization for the active subspaces.
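One standard way to quantify how far two subspaces are from each other is via the principal angles between their orthonormal bases; this is an editorial illustration, not necessarily the metric behind the scatter plots:

```python
import numpy as np

def principal_angles(Ua, Ub):
    """Principal angles (radians) between the subspaces spanned by the
    orthonormal columns of Ua and Ub; all zeros means identical subspaces."""
    s = np.linalg.svd(Ua.T @ Ub, compute_uv=False)
    # The singular values are the cosines of the principal angles.
    return np.arccos(np.clip(s, -1.0, 1.0))
```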
Case Study 1 [cont.]
The previous figure suggests that the active parameter subspaces for the 9 pins are quite close.
This motivates the idea that the active parameter subspace for the whole assembly might be revealed by sampling one pin or more.
Tests description:
Identify the parameter active subspace for one (or more) pin cells.
Construct an error bound for each response.
Test the identified subspace on different pins, then on the whole assembly.
If successful, test it under different conditions.
Case Study 1 [cont.]
Nominal dimension of the parameter space: n = 127 nuclides × 7 reactions × 56 energy groups = 49,784.
Three pins are depleted to 30 GWd/MTU and then used to extract the subspace (2.93% UO2 with 3% Gd, 1.94% UO2, 2.93% UO2), with \(r_x\) taken to be 1500.
Test 1: The subspace is tested on the highest-enrichment pin and a completely different one. The first two figures show the error in flux within two selective energy ranges, 1.85-3.0 MeV and 0.625-1.01 eV, for the most dominant pin cell. A third figure shows the maximum and mean errors over all energies, which are taken as the bounds.
Test 2: This is repeated for another pin cell (Mixture 4).
Test 3: The extracted subspace is then employed on the whole assembly depleted to the same point (30 GWd/MTU). The results of this test are shown in 9 figures giving the maximum and mean error for the 9 unique mixtures.
Test 1
Figure: Fast Flux Error (Mix 500, LF). Figure: Thermal Flux Error (Mix 500, LF).
Test 2
Figure: Error Bounds (Mix 500, LF). Figure: Actual Errors (Mix 4, LF).
The left figure shows the typical bounds do not exceed 3% for the maximum error and are less than 0.7% for the mean error.
Test 3
Figure: Actual Errors (Mix 1, HF). Figure: Actual Errors (Mix 2, HF).
Test 3 [cont.]
Figure: Actual Errors (Mix 4, HF). Figure: Actual Errors (Mix 500, HF).
Test 3 [cont.]
Figure: Actual Errors (Mix 201, HF). Figure: Actual Errors (Mix 202, HF).
Test 3 [cont.]
Figure: Actual Errors (Mix 203, HF). Figure: Actual Errors (Mix 212, HF).
Test 3 [cont.]
Figure: Actual Errors (Mix 213, HF).
The figures show that the subspace extracted from the low-fidelity model fully represents the full assembly at 30 GWd/MTU and hot conditions.
The maximum error did not exceed 3% and the mean error is always less than 0.7%, exactly consistent with what we had from the low-fidelity model.
Case Study 2
The assembly is depleted to 60 GWd/MTU.
The next 9 figures show the maximum and mean errors for the 9 different mixtures.
This test aims to inspect the behaviour of the active subspace extracted from the low-fidelity model at a different composition due to different burnup.
Case Study 2 [cont.]
Figure: Actual Errors (Mix 1, HF). Figure: Actual Errors (Mix 2, HF).
Case Study 2 [cont.]
Figure: Actual Errors (Mix 4, HF). Figure: Actual Errors (Mix 500, HF).
Case Study 2 [cont.]
Figure: Actual Errors (Mix 201, HF). Figure: Actual Errors (Mix 202, HF).
Case Study 2 [cont.]
Figure: Actual Errors (Mix 203, HF). Figure: Actual Errors (Mix 212, HF).
Case Study 2 [cont.]
Figure: Actual Errors (Mix 213, HF).
The figures show that the subspace extracted from the low-fidelity model fully represents the full assembly at 60 GWd/MTU and hot conditions.
The maximum error did not exceed 3% and the mean error is always less than 0.7%, exactly consistent with what we had from the low-fidelity model.
Case Study 3
The assembly is depleted to the end of the first cycle (20 GWd/MTU) and is at cold conditions.
The next 9 figures show the maximum and mean errors for the 9 different mixtures.
This test aims to inspect the behaviour of the active subspace extracted from the low-fidelity model at a different composition due to different burnup, and at a different temperature.
Case Study 3 [cont.]
Figure: Actual Errors (Mix 1, HF). Figure: Actual Errors (Mix 2, HF).
Case Study 3 [cont.]
Figure: Actual Errors (Mix 4, HF). Figure: Actual Errors (Mix 500, HF).
Case Study 3 [cont.]
Figure: Actual Errors (Mix 201, HF). Figure: Actual Errors (Mix 202, HF).
Case Study 3 [cont.]
Figure: Actual Errors (Mix 203, HF). Figure: Actual Errors (Mix 212, HF).
Case Study 3 [cont.]
Figure: Actual Errors (Mix 213, HF).
The figures show that the subspace extracted from the low-fidelity model fully represents the full assembly at 20 GWd/MTU and cold conditions.
The maximum error did not exceed 3% and the mean error is always less than 0.7%, exactly consistent with what we had from the low-fidelity model.
Conclusions
ROM errors are reliably quantified using realistic bounds (i.e., the actual error is close to the error bound).
This provides a first-of-a-kind approach to quantify errors resulting from dimensionality reduction in nuclear reactor calculations.
It can be used to experiment with different ROM techniques to determine optimum performance for the application of interest.
It can quantify errors resulting from homogenization theory (a form of dimensionality reduction), and supports multi-physics ROM, where one physics determines the active subspace for the next physics.
MLROM efficiently renders active subspaces for expensive models.
Using MLROM enables the application of ROM to all models that were too expensive to execute for subspace extraction.
This enables the determination of the validation space, and one can make a statement about how good the subspace is when used under conditions different from those used in the sampling process.
Acknowledgements
I’d like to acknowledge the support of the Department of Nuclear Engineering at North Carolina State University in completing this work in support of my PhD.
Bibliography I
SCALE: A Comprehensive Modeling and Simulation Suite for Nuclear Safety Analysis and Design, ORNL/TM-2005/39, Version 6.1, Oak Ridge National Laboratory, Oak Ridge, Tennessee, June 2011. Available from the Radiation Safety Information Computational Center at Oak Ridge National Laboratory as CCC-785.
M. G. Abdo and H. S. Abdel-Khalik, Propagation of error bounds due to active subspace reduction, ANS, 110 (2014), pp. 196–199.
M. G. Abdo and H. S. Abdel-Khalik, Further investigation of error bounds for reduced order modeling, ANS MC2015 (2015).
Y. Bang, J. Hite, and H. S. Abdel-Khalik, Hybrid reduced order modeling applied to nonlinear models, IJNME, 91 (2012), pp. 929–949.
J. D. Dixon, Estimating extremal eigenvalues and condition numbers of matrices, SIAM, 20 (1983), pp. 812–814.
Bibliography II
N. Halko, P. G. Martinsson, and J. A. Tropp, Finding structure with randomness: probabilistic algorithms for constructing approximate matrix decompositions, SIAM, 53 (2011), pp. 217–288.
P. G. Martinsson, V. Rokhlin, and M. Tygert, A randomized algorithm for the approximation of matrices, tech. report, Yale University.
J. A. Tropp, User-friendly tools for random matrices.
S. S. Wilks, Mathematical Statistics, John Wiley, New York, 1st ed., 1962.
F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert, A fast randomized algorithm for the approximation of matrices, preliminary report, Yale University.
Questions/Suggestions?