von Karman Institute for Fluid Dynamics
Chaussée de Waterloo, 72
B - 1640 Rhode Saint Genèse - Belgium
Stagiaire Report
POD AND DMD DECOMPOSITION OF
NUMERICAL AND EXPERIMENTAL DATA
Author: K. Zdybal
Supervisor: M. A. Mendez
September 2016
Acknowledgements
I am afraid I will never be able to thank everyone enough for these wonderful two months
of my life, but I would at least like to try.
I would like to thank my supervisor Miguel for giving me a chance to work on this
amazing topic and letting me discover the power of linear algebra.
I would like to thank Claire for her kindness and for all the chances to work in
the garden, both of which have made my stay in Saint-Genèse so pleasant.
Abstract
Matrix decomposition is a useful tool for approximating large data matrices which
might be obtained from experimental or numerical results. The decomposition
methods give a powerful insight into the underlying physical phenomena hidden in the collected data. The present work discusses two decomposition methods: Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) (Chapter 1).
Two examples of data decomposition applications are presented in this work:
approximating the pulsating velocity profile of the Poiseuille flow (Chapter 2) and
approximating the flow behind a cylinder (Chapter 3). The results are obtained using Matlab software; various codes for performing the POD and DMD and for post-processing are shown in the appendices.
The development of the Graphical User Interface (GUI) program in Matlab for
decomposing data with POD and DMD is presented in Chapter 4.
The concepts of data decomposition lie in linear algebra and linear dynamical systems. Additional exercises in the form of ideas and problem-solving are presented in Chapter 5.
Keywords: Linear Algebra, Linear Dynamical Systems, Proper Orthogonal De-
composition, Dynamic Mode Decomposition, Matlab
Contents
1 Introduction to Data Decomposition 8
1.1 Setting the Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
1.2 Data Matrix Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
1.3 Proper Orthogonal Decomposition (POD) . . . . . . . . . . . . . . . . . . . . . . 9
1.4 Dynamic Mode Decomposition (DMD) . . . . . . . . . . . . . . . . . . . . . . . . 10
1.5 Criteria for the Choice of the Approximation Rank . . . . . . . . . . . . . . . . . 12
1.5.1 Rank for POD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
1.5.2 Rank for DMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
1.6 Comparison of POD and DMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
2 Pulsating Poiseuille Flow 15
2.1 Test Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Asymptotic Complex Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Eigenfunction Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Discrete Proper Orthogonal Decomposition (POD) . . . . . . . . . . . . . . . . . 17
2.5 Discrete Dynamic Mode Decomposition (DMD) . . . . . . . . . . . . . . . . . . . 17
2.6 Comparison of the Three Approximations . . . . . . . . . . . . . . . . . . . . . . 18
2.6.1 Initial Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6.2 Comparison of the Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
2.6.3 Amplitude Decay Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.6.4 Eigenvalues Circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
2.6.5 The First Three Modes of the POD and DMD Approximations . . . . . . 20
2.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
3 Dynamic Mode Decomposition of 2D Data 23
3.1 DMD and POD Approximation to the Flow Behind a Cylinder . . . . . . . . . . 23
3.1.1 Initial Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
3.1.2 Dynamic Mode Decomposition . . . . . . . . . . . . . . . . . . . . . . . . 25
3.1.3 Proper Orthogonal Decomposition . . . . . . . . . . . . . . . . . . . . . . 26
3.1.4 Conclusions and Comparison of the Two Decomposition Methods . . . . . 28
4 GUI Beta Version 29
4.1 Scheme of the Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
4.2 Function Executions Inside the Program . . . . . . . . . . . . . . . . . . . . . . . 33
4.3 Tutorial on Using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3.1 Tutorial Folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
4.3.2 1D Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
4.3.3 2D Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
5 Additional Exercises 38
5.1 Discrete and Continuous Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
5.2 The Phase Shift Ψ Between Two Sine Functions . . . . . . . . . . . . . . . . . . . 41
5.3 A Note on the Sizes of Component Matrices in the SVD . . . . . . . . . . . . . . 43
5.3.1 SVD on a General Matrix D . . . . . . . . . . . . . . . . . . . . . . . . . 43
5.3.2 SVD on the Matrix X1 from DMD . . . . . . . . . . . . . . . . . . . . . . 43
5.3.3 SVD with POD Approximation on Matrices D and X1 . . . . . . . . . . . 43
5.4 A Note on the Linear Propagator Matrix . . . . . . . . . . . . . . . . . . . . . . 44
5.5 A Note on the Similarity of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 46
5.6 A Note on the Linear Dynamical Systems . . . . . . . . . . . . . . . . . . . . . . 50
A Pulsating Poiseuille Flow 51
B 1D and 2D POD Functions 60
C 1D and 2D DMD Functions 62
D 1D and 2D POD Results Plotting 67
E 1D and 2D DMD Results Plotting 74
F List of Useful Matlab Commands 81
G Complete List of Codes Produced 82
Chapter 1
Introduction to Data Decomposition
1.1 Setting the Stage
The present work revolves around large data matrices and the ways to deal with them. They can come from experimental results, such as PIV (Particle Image Velocimetry) or wind tunnel tests, or from numerical results such as CFD (Computational Fluid Dynamics), and hence they are matrices that contain information about the evolution of a
certain physical quantity in space and time. Their structure is therefore the following: their
rows are linked to a particular coordinate in space and their columns to a particular moment
in time. The space coordinate can represent any dimension, but typically it will be a 1D, 2D or 3D space coordinate. Each entry inside a data matrix D, say Dij, gives the numerical value of a certain quantity at position pi and at time tj. This value could for instance be velocity, pressure or any other physical quantity of choice. A single, full column extracted from a data matrix is called a snapshot. It represents the full field of a physical quantity at a single instant of time - it is indeed like a picture. Using Matlab notation we can refer to it as D(:, m) for any time tm.
A graphical representation of a general data matrix is given in Figure 1.1.
Figure 1.1: Structure of a data matrix D.
An important property that describes a data matrix is its rank. The rank of a matrix specifies how many linearly independent columns or rows a matrix has. A matrix of size np × nt is of full rank when its rank is equal to min(np, nt). When a matrix has a smaller rank than min(np, nt), some of its rows (or columns) can be expressed as a linear combination of its other rows (or columns).
Usually, and for the data matrices shown later in this work, we assume that the number of spatial coordinates is much larger than the number of moments in time: np > nt. Hence, if a matrix is of full rank, its rank will be equal to nt.
An important concept now emerges: imagine we could approximate a full-rank data matrix well enough with another matrix of a lower rank. Suppose we wanted to share that data matrix with someone else; instead of sending the full-rank matrix with all its entries, we could send only the linearly independent rows or columns and a rule to figure out the remaining, linearly dependent ones. This decreases the amount of data that we have to store or share, but of course comes at the cost of a certain approximation error. The goal then is to minimize the error while minimizing the rank of the data matrix at the same time.
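As a small illustration of this structure, the following Matlab sketch (a hypothetical example, not one of the codes listed in the appendices) builds a data matrix from a travelling wave, extracts a snapshot and checks the rank:

% Hypothetical example of a data matrix built from an analytical field.
np = 200;  nt = 50;                 % number of space points and time instants
y  = linspace(-1, 1, np)';          % space coordinate
t  = linspace(0, 10, nt);           % time coordinate
D  = cos(2*pi*y) * sin(2*pi*t);     % D(i,j) = value at position y(i) and time t(j)

snapshot = D(:, 10);                % full field at the 10th time instant
r = rank(D);                        % here r = 1: every column is a multiple of cos(2*pi*y)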
1.2 Data Matrix Decomposition
Two methods of a data matrix decomposition and approximation are discussed in this work.
By matrix decomposition we mean writing the original matrix D as a product of three
matrices:
D = ABC (1.1)
Such a product is not unique, meaning that we can find several sets of matrices A, B and C that will satisfy equation (1.1). In order to create a unique decomposition, we have to impose some additional constraints on the component matrices A, B and C.
The nature of these constraints is what creates a new matrix decomposition method. Two
such methods, described below, are the main subject of this work.
In the methods to follow, matrices A, B and C have a special physical meaning. A product of
corresponding columns of the three component matrices is called a mode. Matrix A is a matrix
of spatial structures, which gives a "shape" to the mode. Matrix B is a matrix of amplitudes,
which specifies the importance of every mode in the solution. Matrix C is a matrix of temporal
structures and gives dynamics to the mode - it specifies how fast spatial structures evolve from
one to another. It should therefore be noticed that, in addition to reducing the rank of a data matrix and approximating it, the decomposition gives better insight into the underlying physical phenomena hidden in the data obtained.
1.3 Proper Orthogonal Decomposition (POD)
The concept of the Proper Orthogonal Decomposition (POD) on a discrete data set (therefore a matrix) coincides with the Singular Value Decomposition (SVD); see Section 5.3 for additional information on the SVD. More information is given in [2] and [3].
We therefore write:

D = U Σ V^T     (1.2)
The constraints of the SVD are that the matrices U and V are both orthogonal and orthonormal.
Figure 1.2: Computation of matrix A_i.
The orthogonality of a matrix means that the inner product (see Section 5.1 for more information on inner products) of any two different columns of this matrix is zero, and the orthonormality means that the inner product of any of the columns with itself is 1. Putting it in other words, the products V^T V and U^T U both give an identity matrix. The orthogonality condition also means that the matrices U and V are of full rank and none of their columns can be obtained as a linear combination of their other columns. The SVD also requires that the matrix Σ is diagonal, with the elements on its diagonal denoted as σ_i.
Once the SVD is computed, we approximate the original matrix of rank d by a matrix D_r of a lower rank r, which we write as a sum of "simple" matrices A_i multiplied by the corresponding amplitudes σ_i, extracted from the diagonal of the matrix Σ:

D ≈ D_r = A_1 σ_1 + A_2 σ_2 + · · · + A_r σ_r     (1.3)

Each matrix A_i has rank equal to 1 and the same size as the original matrix. In order to obtain these matrices, we take the outer product of the corresponding columns of U and V. In the Matlab notation, the computation of A_i is: A_i = U(:, i) * V(:, i)'.
We are therefore guaranteed that the matrix A_i is of rank 1: each column of A_i is the i-th column of U multiplied by the corresponding element of the vector V(:, i)' (and so is just a multiple of the i-th column of U!). We are also guaranteed that any two matrices A_i and A_j for i ≠ j will be orthogonal to each other, since any two columns U(:, i) and U(:, j) are, from the definition of the SVD, orthogonal. In the Matlab notation we write equation (1.3) as

D ≈ D_r = U(:, 1:r) * Σ(1:r, 1:r) * V(:, 1:r)'
Hence, by keeping r terms of the sum in (1.3), where r < d, we create a matrix of lower rank r that is an approximation to the original matrix D. The error we make by truncating the sum after r elements, computed in the L2 norm, is:

||D − D_r||_2 = σ_{r+1}     (1.4)

with D_r = Σ_{i=1}^{r} A_i σ_i. This is known as the Eckart-Young theorem. D_r denotes the approximation to the matrix D, the subscript r specifies the number of elements of the sum that we decide to keep, and σ_{r+1} is the amplitude of the first term left out.
1.4 Dynamic Mode Decomposition (DMD)
The background of the Dynamic Mode Decomposition lies in linear dynamical systems. On a practical level we might think of the DMD method as fitting a linear dynamical system to the data matrix; a more general framework is given in [4], [5], [6] and [7]. This implies that any snapshot taken from the data matrix D at time i + 1 is a linear transformation of the previous snapshot at time i. In vector form we write:

x_{i+1} = A x_i     (1.5)

where x_{i+1} and x_i are the (i + 1)-th and the i-th columns of the matrix D, respectively. The matrix A is a linear propagator matrix that tells us how the system evolves from one snapshot to the next, for all columns inside the matrix D.
Figure 1.3: Extracted data sets.
In a general case, it is impossible to fit a linear system to the data matrix that we have, as the underlying data might not be linear in the first place.
We extract two data sets from the matrix D. The data set X_1 is created from all the columns of D except the last one, and the data set X_2 is created from all the columns of D except the first one. In the Matlab notation we can write them as X_1 = D(:, 1:end-1) and X_2 = D(:, 2:end). Therefore, in general:

X_2 = A X_1     (1.6)

where every i-th column of D is linked by the linear propagator matrix to the (i + 1)-th column of D. Equation (1.6) is equivalent to equation (1.5), except that it is written for all columns inside D (it is interesting to think when, or whether, equation (1.6) could be solved for A explicitly; this issue is discussed in Section 5.4).
We perform the SVD and later the POD approximation of the matrix X_1:

X_1 = U Σ V^T,     X_1 ≈ X_{1,r}     (1.7)
Since finding the matrix A is in general impossible, it is enough to extract certain properties of that matrix, such as its eigenvalues. We therefore seek a matrix S that is similar to the matrix A. Similar matrices have the same eigenvalues and share a few other properties. We use the matrix similarity condition, which says that if two matrices satisfy the equation S = Q^{-1} A Q for some invertible matrix Q, then S and A are similar (more on matrix similarity can be found in Section 5.5).
In order to find the matrix S, we perform algebraic operations on equation (1.6), substituting the POD approximation from equation (1.7):

X_2 = A U Σ V^T                              (1.8a)
U^T X_2 = U^T A U Σ V^T                      (1.8b)   (left-multiply (1.8a) by U^T)
U^T X_2 V = U^T A U Σ V^T V                  (1.8c)   (right-multiply (1.8b) by V)
U^T X_2 V Σ^{-1} = U^T A U                   (1.8d)   (right-multiply (1.8c) by Σ^{-1})

NOTE 1: in equation (1.8c) the product V^T V becomes an identity matrix, since V is an orthogonal and orthonormal matrix.
NOTE 2: the matrix Σ must be invertible, which also means that it has to be a square matrix (which is accomplished by performing the POD approximation of the matrix X_1).
We now define S = U^T X_2 V Σ^{-1} and observe that in equation (1.8d) we arrive at the condition of similarity of two matrices:

S = U^T A U     (1.9)

where the orthogonal matrix U satisfies the condition U^T = U^{-1} and so plays the role of the matrix Q. However, it should be noted here that the matrix U might not be a square matrix. The matrix S is of size r × r.
We perform the eigenvalue decomposition of S:

S = Φ M Φ^{-1}     (1.10)

where Φ is the matrix of eigenvectors of S and M is the (diagonal) matrix of eigenvalues of S. In general, Φ and M are complex matrices. Since S is of size r × r, typically much smaller than the matrix A, we can only approximate r eigenvalues of A. Notice also that similar matrices do not share eigenvectors (there exists, however, a relationship; see Section 5.5 for more details).
We extract the elements from the diagonal of the matrix M into a vector µ. Since these elements are in general complex, they can be written as µ_i = e^{ω_i}, where ω_i is a complex frequency in terms of pulsation.
We find the vector of frequencies ω by computing the natural logarithm of µ:

ω = ln(µ)     (1.11)

Next, we extract the DMD spatial modes:

φ = U Φ     (1.12)

The above equation can be read as projecting vectors onto the basis made from the orthogonal columns of U; the coefficients of the projection are extracted from the eigenvector matrix Φ.
We compute the initial DMD amplitudes b by solving the equation x_1 = φ b in the least-squares sense:

b = (φ^T φ)^{-1} φ^T x_1     (1.13)

where x_1 is the first column of the matrix X_1.
We compute the DMD temporal modes in the form of a matrix T_modes, which has the size r × n_t. Each i-th column of this matrix is computed using the following relation (the products are taken element-wise over the r modes):

t_i = b e^{ω k} = b e^{ω t_i / ∆t}     (1.14)

with the number k = t_i / ∆t, where t_i is a particular moment in time and ∆t is the time step at which the snapshots in the data matrix D have been taken. Notice that the DMD method hence requires ∆t to be a constant sampling interval for the whole matrix D; if the matrix D represents experimental data, the data must have been sampled at equal intervals of time. The number k is an integer and comes directly from the linear dynamical system character of the DMD method (this is further discussed in Section 5.6).
Notice that each j-th row of the matrix T_modes corresponds to only one frequency ω_j and only one amplitude b_j.
Figure 1.4: Structure of the matrix Tmodes.
The DMD approximation to the original data matrix D is finally computed as a product of the spatial and temporal modes:

D_r = φ · T_modes     (1.15)

Notice that the resemblance to the decomposition equation (1.1) is not evident, since we arrived at a product of only two matrices, but the third matrix of amplitudes b is hidden in the matrix T_modes. To write equation (1.15) in the form of (1.1), one should write T_modes as the product of a diagonal matrix and a Vandermonde matrix [4].
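To make the algorithm concrete, the following Matlab sketch (a generic outline of the steps of this section, under the assumption that a data matrix D sampled with a constant time step dt is available; it is not one of the appendix codes) collects the procedure:

% DMD of a data matrix D sampled with a constant time step dt.
X1 = D(:, 1:end-1);                  % data set X1
X2 = D(:, 2:end);                    % data set X2, shifted by one snapshot

r = 3;                               % chosen truncation rank
[U, S, V] = svd(X1, 'econ');         % POD of X1
U = U(:, 1:r);  S = S(1:r, 1:r);  V = V(:, 1:r);

Smat   = U' * X2 * V / S;            % S = U^T X2 V Sigma^{-1}, similar to A, eq. (1.9)
[Phi, M] = eig(Smat);                % eigenvectors and eigenvalues of S, eq. (1.10)
mu     = diag(M);                    % eigenvalues mu_i
omega  = log(mu);                    % complex frequencies, eq. (1.11)
phi    = U * Phi;                    % DMD spatial modes, eq. (1.12)
b      = phi \ X1(:, 1);             % initial amplitudes: least-squares solution of x1 = phi*b

t = (0:size(D, 2) - 1) * dt;         % time instants
Tmodes = (b * ones(1, numel(t))) .* exp(omega * (t / dt));   % r x nt temporal modes, eq. (1.14)
Dr = phi * Tmodes;                   % DMD approximation of D, eq. (1.15)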
1.5 Criteria for the Choice of the Approximation Rank
The rank r of the approximation in the decomposition methods is perhaps the most important, as well as the most mysterious, parameter that a person performing POD or DMD has to decide upon.
1.5.1 Rank for POD
In the POD method the choice is rather simple. Since the amplitudes σ_i are ordered, we are sure that increasing r will result in a better approximation. We might then decide that for some value r_i we are satisfied with the approximation. Two situations are possible: either the amplitudes converge to zero, or the amplitudes converge to some constant value c. More information is given in [2].
We can plot the graph of σ_i(r), which is composed of discrete lines, as the rank r is an integer.
Figure 1.5: Graph of σi(r).
σmin method
In the case when the amplitudes converge to zero, we can reduce the error of the approximation as much as we want. This error is equal to σ_{r+1} and, since the values σ_i converge to zero, the error converges to zero as well. In the σ_min method we find a rank r such that the error is not greater than a specified value σ_min. This rank can be found by increasing r one by one and each time computing the error of the approximation. Once the error is smaller than the value specified:

σ_{r+1} ≤ σ_min     (1.16)

we can stop the approximation.
Slope method
In the case when the amplitudes converge to a constant value c, it is better to look at the changing slope of the graph σ_i(r). We can never reach an error smaller than c; we can only get as close to c as we like. The slope of the graph decreases strongly for the initial values of r and later on is close to 0 and almost constant. Once two consecutive slopes of the graph σ_i(r) do not differ by more than some specified value, we can stop the approximation.
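Both criteria are easy to automate. A minimal Matlab sketch (illustrative only, assuming the amplitudes are stored in a vector sigma and the tolerances are chosen by the user) might read:

% Choice of the POD rank from a vector of amplitudes sigma.
sigma = sigma / sigma(1);            % normalize so that the largest amplitude is 1

% sigma_min method: smallest r such that the error sigma(r+1) <= sigma_min.
sigma_min = 1e-3;
r_sigma = find(sigma(2:end) <= sigma_min, 1);

% Slope method: stop when two consecutive slopes of sigma_i(r) barely change.
slopes  = diff(sigma);               % slope between consecutive amplitudes
tol     = 1e-3;
r_slope = find(abs(diff(slopes)) <= tol, 1) + 1;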
1.5.2 Rank for DMD
The criterion for the choice of the rank in the DMD method is still an open question, but a more complete approach can be found in [8].
In the DMD method the choice of the rank r is made at the level of performing the POD on the extracted data set X_1. One idea is to perform the POD of the matrix X_1 with a large rank r, proceed with the DMD and compute the complex eigenvalues of the matrix S. Each eigenvalue is denoted by µ_i and is an element of a vector µ.
Since in general they are complex numbers, we can extract their real part λ_r and imaginary part λ_i and write each of them as a sum:

µ_i = λ_r + i λ_i     (1.17)

Each eigenvalue creates a point (λ_r, λ_i) on the complex plane. If we plot these points with respect to a unit circle on the complex plane, we may apply the reasoning from linear dynamical systems. From it we know that the position of an eigenvalue with respect to the unit circle has a special meaning. Points lying exactly on the circle create a dynamically stable solution. Points lying inside the circle represent a system decaying in time and, in the DMD approximation, have the meaning of noise in the data. They are therefore less relevant to the approximation than points lying on the circle.
The choice of the rank is sometimes not evident. Since the amplitudes b_i are not ordered, we are unsure how many terms we should keep. Some eigenvalues lying inside the circle may have large amplitudes, even though the corresponding modes eventually decay in time.
Circle based method
As we increase the rank r, we get more points on the complex plane. The points lying on the circle or close to the circle boundary are the most important. Usually, the approximation can be stopped once noise starts to appear in the data. The criterion then might be to create a smaller circle of radius R < 1 (this radius can be chosen by us). Once eigenvalues start appearing inside that smaller circle, we stop the approximation and retrieve the largest rank r for which all the eigenvalues are outside the small circle.
Figure 1.6: Circle based choice of the rank r.
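A minimal Matlab sketch of this criterion (illustrative only; dmd_eigenvalues below is a hypothetical helper returning the eigenvalues µ of the matrix S for a given rank) could be:

% Circle-based choice of the DMD rank: increase r until an eigenvalue
% falls inside a circle of radius R < 1.
R     = 0.9;
r_max = 20;
r_ok  = 1;
for r = 1:r_max
    mu = dmd_eigenvalues(D, r);      % hypothetical helper returning the eigenvalues of S
    if any(abs(mu) < R)              % an eigenvalue entered the small circle
        break
    end
    r_ok = r;                        % largest rank with all eigenvalues outside the small circle
end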
1.6 Comparison of POD and DMD
The POD method imposes the orthogonality and orthonormality restriction on the spatial and
temporal structures. In the POD, several frequencies per mode might be present.
The DMD method introduces single-frequency dynamics into the temporal modes. This method is hence especially informative if we know that a particular phenomenon is occurring at a certain frequency.
The rank of the approximation used for the SVD, both in the POD and DMD methods, can be chosen manually at the start of the computations and later adjusted (lowered or increased) once an implemented criterion decides that the approximation has reached a satisfactory level.
Chapter 2
Pulsating Poiseuille Flow
The test case considered in this chapter is the Poiseuille flow where the pressure gradient be-
tween two sections is changing in time. This definition of pressure results in a velocity profile
which deforms in time and differs from the parabolic one obtained for a constant pressure gra-
dient. This flow situation is analyzed in detail in [1], where the analytic solution to the velocity
profile function is derived in terms of the Asymptotic Complex Solution method and the Eigenfunction Expansion method. In the present chapter, three methods are adopted to approximate the analytical solution. The three methods are: the aforementioned Eigenfunction Expansion method, the Proper Orthogonal Decomposition (POD) and the Dynamic Mode Decomposition (DMD).
The scope of this chapter is to compare the three methods and analyze the behaviour of their
solutions.
2.1 Test Case
The test case for the purpose of this assignment is the flow between two parallel plates known as the Poiseuille flow. We analyze the situation with a pulsating pressure gradient between two sections. The variation of pressure in time is thus described by:

p(t) = p_M + p_A cos(ωt)     (2.1)
Figure 2.1: Pulsating Poiseuille flow. Figure taken from [1].
The momentum balance of the Navier-Stokes equations in the stream-wise direction is considered:

∂u/∂t = −(1/ρ) ∂p/∂x + ν ∂²u/∂y²     (2.2)
This is a partial differential equation which links the pressure gradient with the velocity distribution. In [1], the Navier-Stokes equation (2.2) is nondimensionalized for simplicity and, later on, the Eigenfunction Expansion method and the Asymptotic Complex Solution method are applied to derive the solution. Two important nondimensional constants appear: the Womersley number W and the dimensionless pressure amplitude p̂_A.
In the present chapter we approximate the solution to equation (2.2) using three methods and compare the results of these approximations.
2.2 Asymptotic Complex Solution
The analytical solution that is used as a base for the approximations to follow is obtained in [1]
with the Asymptotic Complex Solution method.
Figure 2.2: Structure of the matrix U_R.
In this method, the solution is the real part of the complex velocity function:

U_R(ŷ, t̂) = Re{ũ(ŷ, t̂)}     (2.3)

where ŷ is a dimensionless space coordinate and t̂ is a dimensionless time.
The result is a matrix U_R, which in general might be a full-rank, rectangular matrix. Its rows are linked to the 1D spatial coordinates and its columns are linked to the consecutive time steps. The size of this matrix is therefore n_y × n_t. Depending on how we discretize space and time, we might end up with a matrix of a large size. Typically, we discretize ŷ and t̂ in such a way that the number of rows n_y is larger than the number of columns n_t.
The vector ŷ, specifying the discretization of space, contains n_y entries ranging from -1 to 1.
The vector t̂, specifying the discretization of time, contains n_t entries ranging from 0 to some t_end, which is the final time of the simulation that we wish to perform.
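In Matlab, the matrix U_R can be assembled snapshot by snapshot; the sketch below only illustrates the bookkeeping, with a placeholder standing in for the complex solution ũ(ŷ, t̂) derived in [1]:

% Discretization of the dimensionless space and time (illustrative values).
dy = 0.05;  dt = 0.05;  t_end = 20;
y_hat = (-1:dy:1)';                  % n_y space points between the plates
t_hat = 0:dt:t_end;                  % n_t time instants

u_tilde = @(y, t) zeros(size(y));    % placeholder for the asymptotic complex solution of [1]

UR = zeros(numel(y_hat), numel(t_hat));
for j = 1:numel(t_hat)
    UR(:, j) = real(u_tilde(y_hat, t_hat(j)));   % snapshot at time t_hat(j), eq. (2.3)
end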
2.3 Eigenfunction Expansion
The Eigenfunction Expansion solution is also obtained in [1]. In the Eigenfunction Expansion method the solution is written in terms of the sum:

U_A(ŷ, t̂) = Σ_{i=1}^{n} φ_i(ŷ) a_i Ψ_i(t̂) + U_M     (2.4)

where φ_i(ŷ) is a spatial structure, a_i is an amplitude and Ψ_i(t̂) is a temporal structure. U_M is the mean flow.
Figure 2.3: Graphical representation of equation (2.5).
The number n describes the number of modes that we decide to keep in the approximation. When n = max(n_y, n_t), the solution is exact. The product φ_i(ŷ) a_i Ψ_i(t̂), computed for any i, is the i-th mode of the approximation.
Computing the sum in (2.4) is equivalent to multiplying three matrices:

U_A(ŷ, t̂) = Y A T^T     (2.5)

where Y is the matrix of spatial structures of size n_y × n and T is the matrix of temporal structures of size n_t × n. The matrix A is a diagonal matrix of amplitudes of size n × n.
Each column of the spatial structures matrix Y is made up of a cosine function of a particular frequency. The spatial structures give a shape to the solution. The entries a_i of the amplitude matrix A depend on the dimensionless pressure amplitude p̂_A and the Womersley number W. Each amplitude describes how important every mode is in the summation. Each column of the temporal matrix T (or each row of its transpose) represents the time evolution and can be viewed as giving the dynamics to the system.
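In matrix form, the truncated expansion is a simple triple product; a minimal Matlab sketch (assuming the matrices Y, A and T and the mean flow UM have been built following [1], with UM of a size compatible with the product) is:

% Truncated eigenfunction expansion, eq. (2.5).
n_keep = 3;                                              % number of modes kept
UA = Y(:, 1:n_keep) * A(1:n_keep, 1:n_keep) * T(:, 1:n_keep)' + UM;

% The i-th mode alone is the rank-1 matrix:
i = 1;
mode_i = Y(:, i) * A(i, i) * T(:, i)';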
2.4 Discrete Proper Orthogonal Decomposition (POD)
In the POD method we start with the data matrix U_R, as defined in Section 2.2, and approximate it with a lower-rank matrix U_POD of the same size.
We perform the SVD on the matrix U_R and decompose it into a product of three matrices:

U_R = U Σ V^T     (2.6)
The SVD imposes an orthogonality and orthonormality constraint on the matrices U and V. U is the matrix of spatial structures and V is the matrix of temporal structures. Unlike in the Eigenfunction Expansion method, the POD may introduce several frequencies per mode.
The matrix Σ is a diagonal matrix. The entries σ_i on its diagonal have the meaning of amplitudes which, similarly to the entries a_i, specify the importance of each POD mode.
Next, the POD approximation is performed by computing the truncated sum:

U_R ≈ U_POD = u_1 v_1^T σ_1 + u_2 v_2^T σ_2 + · · · + u_r v_r^T σ_r     (2.7)

where u_i is the i-th column of U and v_i is the i-th column of V.
It is important to note that in the SVD the elements σ_i are ordered, and for any i it always holds that σ_i ≥ σ_{i+1}. This condition is very important in the POD approximation: on truncating the sum after the r-th term, we are sure that every subsequent element of the sum will contribute less than any of the elements that we have kept.
Since the L2-norm error of the POD is equal to the amplitude of the first term left out, it is important for the accuracy of the approximation that the amplitudes σ_i converge to zero. Otherwise, if they converge to some constant value, the error norm will remain almost constant, no matter how many terms we add to the approximation. When the amplitudes converge to zero, the error converges to zero as well. The faster it converges to zero, the closer the matrix U_R is to being rank deficient.
2.5 Discrete Dynamic Mode Decomposition (DMD)
In the DMD method we start again with the data matrix U_R, from which we extract the data sets X_1 and X_2, as defined in Section 1.4. The rank r is chosen at the level of computing the POD of the matrix X_1.
The analysis that follows is analogous to the one described in Section 1.4.
The vector µ of the eigenvalues of the matrix S is decomposed into its real and imaginary parts:

µ = Re(µ) + i Im(µ) = λ_r + i λ_i     (2.8)

where λ_r = Re(µ) and λ_i = Im(µ). The vector µ has length r, and so do the vector of frequencies ω and the vector of initial amplitudes b.
As a result we obtain the DMD approximation to the original data matrix:

U_R ≈ U_DMD = φ · T_modes     (2.9)

An important thing to note is that the amplitudes in the DMD method (unlike in the POD method) are not ordered. This means that for some i we may have b_{i+1} > b_i, and therefore we often cannot approximate the matrix well with only the first few modes, as some further modes may be important as well.
2.6 Comparison of the Three Approximations
2.6.1 Initial Parameters
A Matlab code (Appendix A) was developed to simulate the three approximation methods.
The approximations are performed for the Womersley number W = 10 and for the dimensionless pressure coefficient p̂_A = 60 (case C2 from [1]).
The matrix U_R in the present test case has rank 3. Hence, only three approximations are considered: for r = 1, r = 2 and r = 3. The POD and DMD methods should give an exact result when r = 3. For the purpose of plotting the amplitude decay rate and the eigenvalue circle, the number of modes is later increased to 20.
The time step is dt = 0.05. The smaller the time step, the smoother the graphs of the temporal structures we obtain.
The space step is dy = 0.05. The smaller the space step, the smoother the graphs of the spatial structures we obtain.
The results presented below are obtained for the total time of simulation T = 20.
2.6.2 Comparison of the Modes
The time evolution of the pulsating velocity profile between two parallel plates is obtained, in the form of a movie, for the analytic solution and the three approximation methods. The time of the simulation can be chosen arbitrarily. In Figures 2.4 and 2.5 the first three modes of the approximations are drawn together with the analytic solution, for two different time instants of the simulation.
Figure 2.4: Approximation of the asymptotic complex solution with the first 1, 2 and 3 modes. Drawn for t = 2.
The first mode U1 approximates the mean flow in all three approximation methods. In the Eigenfunction Expansion and in the POD method, the first two approximations follow the original solution in time. In the DMD method, the first two approximations often find themselves moving in the opposite direction to the original solution.
The approximation with the third mode U3 follows the original solution exactly in the POD and in the DMD method. In the Eigenfunction Expansion, the approximation gets better as we add more modes, but it is still not exact for the third mode U3.
Figure 2.5: Approximation of the asymptotic complex solution with the first 1, 2 and 3 modes. Drawn for t = 3.5.
2.6.3 Amplitude Decay Rate
Two amplitude decay rate curves are obtained by plotting the elements a_i from the diagonal of the matrix A for the eigenfunction expansion and the elements σ_i from the diagonal of the matrix Σ for the POD method. In Figure 2.6 they are plotted versus the number of modes taken into account. The amplitudes are normalized so that the largest amplitude has the value 1.
Figure 2.6: The normalized amplitude decay rate.
Some interesting facts can be observed. Firstly, the decay rate is faster for the POD method than for the eigenfunction expansion method. The faster the amplitude decay rate, the better we can approximate the solution with a low number of modes.
Secondly, the amplitudes of the POD are zero for numbers of modes larger than 3. This is due to the rank of the matrix U_R being equal to 3. With 3 modes of the POD we are already recovering the full data set, and the L2 error, equal to the amplitude of the first term left out, is 0%.
The amplitudes converge to zero for both approximation methods and hence adding more terms in the eigenfunction expansion will keep improving the approximation. With the first three terms of the eigenfunction expansion we read the error to be about 10%.
2.6.4 Eigenvalues Circle
As one of the results of the DMD method we draw the eigenvalues of the matrix S on the complex plane versus the corresponding amplitudes b_i. Thus, Figure 2.7 is made up of the points (λ_r, λ_i), positioned at a height equal to the normalized amplitude b_i. The first 20 modes of the DMD are drawn. We also include a unit circle to represent the stability boundary of the linear dynamical system. The position of an eigenvalue with respect to that circle has a special meaning.
Figure 2.7: The first 20 normalized DMD amplitudes.
The points lying exactly on the circle create a dynamically stable solution. The points lying inside the circle represent the noise in the data: the corresponding eigenvalues are less relevant in the approximation and they eventually die out. Some points, however, might be lying inside the circle but close to the circle boundary; they correspond to a slowly decaying part of the approximation. Points lying inside the circle, very close to the origin of the complex plane, decay almost immediately. Any points lying outside the circle represent an exploding dynamical system and would make the approximation diverge.
For the Poiseuille flow test case, we have three eigenvalues that lie exactly on the circle and 17 lying very close to the point (0 Re, 0 Im). The first DMD mode is represented by an eigenvalue with only a real part, which approximates the mean flow. The second DMD mode introduces one complex eigenvalue and the third DMD mode introduces its complex conjugate. Notice that every eigenvalue with a nonzero imaginary part will have its complex conjugate partner.
2.6.5 The First Three Modes of the POD and DMD Approximations
The first three spatial and temporal structures of POD and DMD approximations are plotted
in Figures 2.8-2.13.
The first spatial modes are the same for the POD and the DMD method and they reconstruct
the mean flow between the parallel plates.
The POD method introduces several frequencies in the spatial structures. The temporal
modes of the POD are shifted in phase.
The first two DMD temporal modes are the same and the third mode is constant.
Figure 2.8: POD 1st mode. Figure 2.9: DMD 1st mode.
Figure 2.10: POD 2nd mode. Figure 2.11: DMD 2nd mode.
Figure 2.12: POD 3rd mode. Figure 2.13: DMD 3rd mode.
2.7 Conclusions
A few interesting facts can be observed from the approximations performed.
The approximation methods satisfy the boundary conditions, even though this information was not explicitly implemented in the procedures.
The mean flow is reconstructed in both the POD and the DMD. It appears in the first mode of the approximation. The first spatial structure has the same shape for the POD and DMD methods, but the temporal structures are different.
In the DMD method, the filtering (the choice of the rank r) does not necessarily have to be performed at the POD level. From the eigenvalue circle we observe that the DMD can also filter the noise later on. We might therefore include many modes at the POD level, let the DMD method be performed for all of them and then, from an implemented criterion, correct the approximation to a lower value of r. The purpose of restricting r at the POD level is that the DMD can then start from a smaller set of data. Filtering at the POD level might therefore be useful, as it decreases the sizes of those matrices whose size depends on r. This in turn reduces the use of memory during the computations.
Finally, it is worth noticing that, in general, the columns creating the matrices Y and T in the Eigenfunction Expansion method might not be orthogonal to each other nor orthonormal. In this test case it happened that the matrix Y was orthogonal, though T was not; none of these matrices were orthonormal. In the POD method we require the columns of the matrices U and V to be orthogonal, and this is one of the reasons why the POD approximation converges faster.
Chapter 3
Dynamic Mode Decomposition of
2D Data
The flow behind a cylinder was simulated by another student using the OpenFOAM CFD software [10], to produce two large matrices corresponding to the two velocity components in the 2D plane. The region of the flow was discretized into a mesh in the x and y directions. In this chapter, the Dynamic Mode Decomposition method and the Proper Orthogonal Decomposition are applied to approximate the simulated flow behind a cylinder, analyze the behaviour of the approximations and compare the two decomposition methods.
3.1 DMD and POD Approximation to the Flow Behind a
Cylinder
3.1.1 Initial Data
In this chapter we deal with the 2D flow case, hence the amount of data that we have to process is significantly larger than in the Poiseuille flow case. This time we need two velocity components u and v for every point in the xy-plane.
Figure 3.1: Velocity components.
The data obtained from the CFD consists of four matrices:
1. U matrix of u-components of velocity U.mat
2. V matrix of v-components of velocity V.mat
3. X vector of coordinates on the x-axis X.mat
4. Y vector of coordinates on the y-axis Y.mat
The region of the flow is discretized into 301 points along the x-axis and 201 points along the y-axis. The total number of spatial points is therefore:

n_p = 301 × 201 = 60501     (3.1)

The time of the simulation is discretized into 313 time steps.
The vector X contains 301 x-coordinates, ranging from -0.3287 to -0.0287.
The vector Y contains 201 y-coordinates, ranging from -0.1108 to +0.0892.
The matrix U is of size 60501 × 313.
The matrix V is of size 60501 × 313.
Other parameters of the CFD simulation are presented below:

Free stream velocity                 10 [m/s]
Cylinder diameter                    0.015 [m]
Turbulence intensity                 5% [-]
Kinematic viscosity                  1.43 · 10^-5 [m²/s]
Strouhal number                      0.2 [-]
Length of the region in the x-axis   0.3 [m]
Length of the region in the y-axis   0.2 [m]
The lengths of the matrices U and V correspond to a 2D space, and hence the rows of these matrices have a special structure. The first 201 rows correspond to each entry of the vector Y and to the first entry of the vector X. The next 201 rows correspond again to each entry of the vector Y, but now to the second entry of the vector X. In general, it can be viewed as 301 copies of the Y vector placed one after the other, each of them corresponding to only one x-coordinate. This structure makes it possible to represent the 2D data within a single column of the data matrix. It is also graphically represented in Figure 3.2.
Figure 3.2: Structure of both data matrices U and V.
The cylinder itself is masked in the data matrices: the velocity components are set to NaN where the cylinder is present.
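Given this row ordering, a single snapshot can be reshaped back into a 2D field with standard Matlab commands; a minimal sketch (assuming the matrices U, X and Y described above are loaded) is:

% Reshape one snapshot of the stacked data back into a 2D field.
ny = numel(Y);  nx = numel(X);       % 201 and 301 grid points
m  = 10;                             % index of the chosen time instant

u_snap  = U(:, m);                   % one column: the full u-field at time m
u_field = reshape(u_snap, ny, nx);   % ny x nx field; column i holds the profile at X(i)

% pcolor(X, Y, u_field); shading interp   % one possible way to visualize the snapshot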
3.1.2 Dynamic Mode Decomposition
The Dynamic Mode Decomposition is performed on the data matrices U and V joined together.
The time step chosen for the approximation is 0.01.
The procedure for performing the 2D DMD is then the same as described in Section 1.4.
The rank r chosen for the analysis is 4.
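One way of decomposing the two velocity components together (a sketch of the general idea, not necessarily the exact implementation of the appendix codes) is to stack them into a single data matrix and split the resulting spatial modes afterwards:

% Stack both velocity components into one data matrix for the decomposition.
np = size(U, 1);                     % 60501 spatial points per component
D  = [U; V];                         % 2*np x nt joint data matrix

% NaN entries (the masked cylinder) would break the SVD; one possible
% treatment is to set them to zero before decomposing.
D(isnan(D)) = 0;

% ... perform the DMD (or POD) of D as in Chapter 1, giving spatial modes phi ...

% Each spatial mode then splits back into its u- and v-components:
% phi_u = phi(1:np, :);   phi_v = phi(np+1:end, :);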
DMD Results
Figure 3.3: DMD spatial mode for r = 1. Figure 3.4: DMD temporal mode for r = 1.
Figure 3.5: DMD spatial mode for r = 2. Figure 3.6: DMD temporal mode for r = 2.
Figure 3.7: DMD spatial mode for r = 3. Figure 3.8: DMD temporal mode for r = 3.
Figure 3.9: DMD spatial mode for r = 4. Figure 3.10: DMD temporal mode for r = 4.
3.1.3 Proper Orthogonal Decomposition
The Proper Orthogonal Decomposition is performed on the data matrices U and V joined
together.
The time step chosen for the approximation is 0.01.
The procedure for performing the 2D POD is then the same as described in Section 1.3.
The rank r chosen for the analysis is 4.
POD Results
Figure 3.11: POD spatial mode for r = 1. Figure 3.12: POD temporal mode for r = 1.
Figure 3.13: POD spatial mode for r = 2. Figure 3.14: POD temporal mode for r = 2.
Figure 3.15: POD spatial mode for r = 3. Figure 3.16: POD temporal mode for r = 3.
Figure 3.17: POD spatial mode for r = 4. Figure 3.18: POD temporal mode for r = 4.
3.1.4 Conclusions and Comparison of the Two Decomposition Methods
The third spatial structure of the DMD and the first spatial structure of the POD approximate the mean flow. The temporal structures associated with the mean flow are almost constant for both approximation methods.
The first two temporal modes of the DMD have the same wavelength.
The POD temporal modes (apart from the constant one) are shifted in phase.
Even though in the DMD method the amplitudes of the approximation are not ordered, with the first two modes we already see the vortex pattern of the flow behind the cylinder.
The time evolution of the approximated flow behind the cylinder is very similar in both methods.
Chapter 4
GUI Beta Version
In this chapter we present a description of a beta-version GUI program developed in Matlab to load data, perform the POD or DMD, post-process and save the results. This version of the GUI is under development and not all of its elements are yet implemented or guaranteed to work without error.
4.1 Scheme of the Program
The scheme of the program is presented in Figure 4.2.
The main menu of the GUI is POD_DMD_beta_1. This menu needs two string variables
specified by the user in the two preceding windows:
String_An_Type which specifies the analysis type
String_Dec_Type which specifies the decomposition type
The user has three choices for the analysis type:
1DS which is 1D data scalar analysis
(associates one physical quantity p to one coordinate in space x)
2DS which is 2D data scalar analysis
(associates one physical quantity p to two coordinates in space x and y)
2DV which is 2D data vector analysis
(associates two physical quantities p and q to two coordinates in space x and y)
Figure 4.1: Analysis type: 1D scalar, 2D scalar, 2D vector.
The user has two choices for the decomposition type:
POD which is Proper Orthogonal Decomposition
DMD which is Dynamic Mode Decomposition
The choices made will appear as a reminder in the Matlab command window.
Once in the main menu, four buttons are available:
Import Data opens an IMPORT menu
Decompose opens a window to input a variable dt
and then opens either POD_CRITERIA or DMD_CRITERIA menu
Export Results opens EXPORT_DATA menu
Exit exits the GUI
The IMPORT menu has three buttons where the user can choose the type of data to load into the program:
Sampled OpenFOAM are OpenFOAM datasets
TxT Dataset are files with the .txt extension
Mat Files are Matlab files with the .mat extension
So far, the TxT Dataset is not implemented. Selecting Sampled OpenFOAM or Mat Files is
possible.
When the user chooses Sampled OpenFOAM, another GUI developed by another student [10] opens, where the user can decide to apply a mask to the OpenFOAM dataset or to downsample directly without creating a mask.
When the user chooses Mat Files, a line appears in the Matlab command window as a reminder of what type of data the program expects for a given type of analysis. Notice that the data file names and the variable names inside the files must agree with the names requested by the program. They are:
D.mat
y.mat
for 1D scalar analysis,
U.mat
X.mat
Y.mat
for 2D scalar analysis, and:
U.mat
V.mat
X.mat
Y.mat
for 2D vector analysis.
A pop-up window then appears, where the user can select data to be imported. The function
used to import data is called uipickfiles.m. Multiple selection is possible.
Once the data is imported, the user should select the Decompose button. A pop-up window
appears where the user can enter the time step dt of the data. The choice of the time step is often arbitrary, but note that the data should have been sampled at equal time steps dt.
Next, according to the decomposition method chosen, either the POD_CRITERIA or DMD_CRITERIA
window appears. These two windows allow the user to select the criterion for the choice of the rank r.
Figure 4.2: Scheme of the POD and DMD GUI.
Figure 4.3: Legend of the scheme elements.
In the POD_CRITERIA three buttons are available:
Manual where the manual selection of r is made
Automatic: Slope where the automatic slope selection of r is made
Automatic: Sigma min where the automatic σmin selection of r is made
In the DMD_CRITERIA two buttons are available:
POD Preprocessed where the manual selection of r is made
Circle Based where the automatic circle based selection of r is made
So far, only the Manual and POD Preprocessed selections are possible, both of which require the user to choose the rank r manually.
When the manual selection is chosen, for both decomposition methods, the user receives a graph of the amplitude decay rate of the imported data, which can be zoomed in on in the representative region and which in turn helps the user make the initial choice of r. Once the user is ready with the choice, Enter should be pressed in the Matlab command window. A pop-up window appears, where the user can enter the value of the rank.
Finally, the Export Results button opens the EXPORT_DATA menu. Four buttons are avail-
able, regardless of the decomposition method and analysis type:
Specify case name which allows the user to enter the name of the case
Save Approximation which saves the new approximated matrices as .mat files
Gif which saves the .gif file of the approximation
Export Modes which saves the .png files with graphs
of the modes of the approximation
The user must start with the first button for specifying the case name. A pop-up window
appears, where a name of the case can be entered. This name will then appear in the name of a
created folder, where all the results will be saved. The name of the folder will always have the
following pattern:
[Analysis type]_[Decomposition type]_[User entered case name]
After the name is entered, the approximated matrices can be saved in the created folder by clicking Save Approximation. Notice that this may take a long time if the matrices are large. If not needed, the user may skip saving the approximated matrices.
Then, a .gif file with the graphical representation of the results can be viewed and saved by clicking Gif.
Finally, to export the modes of the approximation, the user can press the Export Modes button. A pop-up window appears, where the user can select which mode to export. The number that the user enters should be less than or equal to the rank r chosen before.
Notice that the user can decide to save only some of the results; in particular, only the Gif and Export Modes options might be chosen.
After saving the results, the user may close the GUI by pressing Exit. A note that the GUI is closed will appear in the Matlab command window.
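For reference, saving an animation as a .gif in Matlab is typically done frame by frame with imwrite; the sketch below shows this general pattern for a 1D approximation (assuming an approximated matrix Dr and a space vector y; it is not the GUI's actual implementation):

% Generic pattern for writing an animated .gif of a sequence of plots.
filename = 'GIF_1D_POD_r3.gif';      % example output name
for m = 1:size(Dr, 2)
    plot(y, Dr(:, m));               % plot the m-th approximated snapshot
    ylim([min(Dr(:)), max(Dr(:))]);  % fixed axes so that the frames align
    drawnow;
    frame = getframe(gcf);
    [im, map] = rgb2ind(frame2im(frame), 256);
    if m == 1
        imwrite(im, map, filename, 'gif', 'LoopCount', Inf, 'DelayTime', 0.05);
    else
        imwrite(im, map, filename, 'gif', 'WriteMode', 'append', 'DelayTime', 0.05);
    end
end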
4.2 Function Executions Inside the Program
There are external Matlab functions, which are utilised by the GUI. They have to be placed in
the same directory as the GUI files in order for all the elements to work properly. The following
functions are used:
uigetfiles.m function for selecting multiple data files and loading them into Matlab
POD_2D_S.m function that performs 2D scalar POD
POD_2D_V.m function that performs 2D vector POD
DMD_1D.m function that performs 1D DMD
DMD_2D.m function that performs 2D scalar and vector DMD
POD_1D_PLOT_GIF.m function that plots a .gif file of the 1D POD approximation
POD_1D_PLOT_MODES.m function that plots a .png file of the 1D POD modes
POD_2D_PLOT_GIF.m function that plots a .gif file of the 2D POD approximation
POD_2D_PLOT_MODES.m function that plots a .png file of the 2D POD modes
DMD_1D_PLOT_GIF.m function that plots a .gif file of the 1D DMD approximation
DMD_1D_PLOT_MODES.m function that plots a .png file of the 1D DMD modes
DMD_2D_PLOT_GIF.m function that plots a .gif file of the 2D DMD approximation
DMD_2D_PLOT_MODES.m function that plots a .png file of the 2D DMD modes
AXIS.m function for setting the plot style
The function executions inside the menus are presented in Figure 4.4. It specifies which menus execute a particular function. This figure is useful when any input or output variables are to be changed in any of these functions; they then have to be adjusted in the corresponding menu as well. Note also that the plotting functions executed by the EXPORT_DATA menu use
outputs from the functions performing 1D and 2D POD and DMD. If the output variables inside
these functions are changed, they have to be adapted in the plotting functions. Function AXIS.m
is used inside every plotting function.
Figure 4.4: Function executions inside GUI.
4.3 Tutorial on Using the GUI
This section is a small tutorial on running the GUI program and performing the POD and DMD on sample data coming from the Poiseuille flow (the 1D scalar case) and the flow behind a cylinder (the 2D vector case).
4.3.1 Tutorial Folder
To successfully run this tutorial you need a tutorial folder named POD_DMD_Vbeta1. Inside this
folder you should see the following data folders:
folder 1D_Data_full_set:
- D.mat
- extracting_1D_data_set.m
- y.mat
folder 2D_Data_full_set_S:
- U.mat
- X.mat
- Y.mat
folder 2D_Data_full_set_V:
- U.mat
- V.mat
- X.mat
- Y.mat
along with all the Matlab functions that create the GUI.
4.3.2 1D Data
Folder 1D_Data_full_set contains sample data from the Poiseuille flow described in Chapter 2.
This data has been prepared in the form of two matrices: D.mat and y.mat, for fixed parameters:
TIME = 10 total time of the simulation
W = 10 Womersley number
Pa = 60 dimensionless pressure coefficient
dt = 0.1 time step
dy = 0.01 space step
In case you want to extract data with different parameters, a script for doing so is included, where you can change the parameters and create new data sets: you can use the code extracting_1D_data_set.m.
For now, this definition of parameters results in a matrix D of size 201 × 101, which has
the same structure as a general data matrix from Figure 1.1 - the rows are related to space
coordinates and the columns are related to time coordinates.
The time vector has entries ranging from 0 to TIME, with a time step 0.1. It contains 101
elements.
The space vector has entries ranging from -1 to +1, with a space step 0.01. It contains 201
elements. It represents the position between two parallel plates in the Poiseuille flow.
Open Matlab and change the working directory to the tutorial folder:
POD_DMD_Vbeta1
Then type Main_MENU in the Matlab command window.
NOTE: While proceeding with this tutorial, observe also some feedback information that
appears in the Matlab command window when you interact with the GUI.
The first window should appear, where you are asked to select the analysis type. Click 1D Scalar. In the second window you are asked to select the decomposition method. Let's say we will perform the 1D analysis with the POD method, so click Proper Orthogonal... (POD).
Now you are in the main menu and you have to follow the order of the buttons. First, we
want to import our data, so click Import Data. You are now asked to choose the type of data that you want to import; for now, click Mat Files, since our matrices are stored with the extension .mat.
Find the folder 1D_Data_full_set inside the tutorial folder and select both files (by mouse
or by holding Shift) D.mat and y.mat.
Once the data is imported, click the next button, Decompose. A pop-up window appears, where you are asked to enter the time step of your data. You are in fact free to choose whatever time step you want, but if you want to be consistent with how the data was prepared, change the default value to 0.1 (otherwise the total time of the simulation will differ from TIME = 10). A window with the choice of the criterion for the rank r appears. For the moment, we will choose the rank manually, so click Manual. You now receive a graph of the amplitude decay rate of the imported data. This graph helps to make a choice of the rank r. You see that the first mode has the largest amplitude, the second and the third modes have lower amplitudes and the further modes have zero (or almost zero) amplitudes. In general, you can zoom in on the region of the graph that is of interest to you to help you make the decision on r. For this imported data it is enough to take the first three modes. The program is waiting for your response, and once you're ready to type the rank, press Enter in the Matlab command window.
A pop-up window appears, where you can type 3 and click OK. The graph now disappears
and the rank is chosen.
The GUI is now ready to process and save your data! Click Export Results.
In the Export Data menu we have to start with specifying the name of your case, so click
Specify case name. In the pop-up window you are asked to create a folder name for saving
your case. You are free to enter whatever you want. The program is smart though, and always adds a prefix before your case name. So, even if you create a meaningless name, like AAAA, you will still know what kind of analysis was performed, as the folder name will be 1D_POD_AAAA.
After the name is entered, you have three results that you can save:
1. the approximated matrix D_POD.mat
2. the .gif file with the movie of a pulsating velocity profile with its approximation
3. the .png file with the graph of the spatial and temporal structures of the approximation
In general, you can decide to save only the things that you need, and you can skip e.g. saving the .mat files (as this might sometimes take a long time). But this time, just to test whether everything is working properly, we will save all the results.
Click Save Approximation. Once clicked, the results should be saved as .mat files automat-
ically.
Move on to the Gif button. A pop-up window appears, where you can specify the ranges of the
x and y axes. The default values are adjusted to the Poiseuille case, so you can simply click OK.
You should now see a nice movie of the pulsating velocity profile of the Poiseuille flow. When
the window with the Matlab figure closes, the .gif file is saved.
The final thing is to save the modes of the approximation. The maximum number of modes that you can save is equal to the rank r previously entered. Each time you can choose which mode to export.
You can simply click the Export Modes button and, in the pop-up window, type the number of the mode that you want to save (1, 2 or 3). Just to try it out, it is recommended that you save all three, one by one.
You can now click Exit to close the GUI.
Go now to the tutorial folder:
POD_DMD_Vbeta1
and notice that a new folder with your case name was created there. In that folder you
should see five files:
D_POD.mat             .mat file with the approximated matrix
GIF_1D_POD_r3.gif     .gif file with the movie of the pulsating velocity profile
MODES_1D_POD_r1.png   .png file with the graph of the 1st POD mode
MODES_1D_POD_r2.png   .png file with the graph of the 2nd POD mode
MODES_1D_POD_r3.png   .png file with the graph of the 3rd POD mode
You can double-click the .gif file to watch it and you can view the .png files with the POD
modes.
As an additional exercise, you can run the 1D case with the DMD method, which will be
analogous to the POD. You can also move on to testing the 2D data, where we will perform
DMD.
4.3.3 2D Data
To test the 2D case inside the GUI, we use the sample data from the flow behind a cylinder. We use data from a reduced region, as the full set might take a long time to process. In any case, please be patient with the GUI this time, as the matrices are larger than they were in the 1D case!
The 2D vector data is prepared in the form of four matrices: U.mat, V.mat, X.mat and Y.mat,
obtained from the CFD simulation.
Matrices U and V have the same structure as presented in the Figure 3.2.
The matrix U has size 1476 × 128.
The matrix V has size 1476 × 128.
The vector X has 41 elements.
The vector Y has 36 elements.
Make sure you are still in the working directory of the tutorial folder:
POD_DMD_Vbeta1
Type Main_MENU in the Matlab command window.
This time choose 2D Vector in the first window and Dynamic Mode... (DMD) in the second
window.
Once in the main menu, import the 2D dataset: click Import Data, then choose Mat Files.
Find the folder 2D_Data_full_set_V inside the tutorial folder and select four files U.mat, V.mat,
X.mat and Y.mat.
Next, click Decompose and specify the timestep dt. You can enter any value you want, e.g.
type 0.05.
In the choice of criteria on r menu, click POD Preprocessed. This will allow you to enter the
rank manually. You will first see the amplitude decay rate of the imported data; it is
recommended that you zoom in on the first few amplitudes to see them precisely. The amplitudes
decay towards zero and the first 7 amplitudes seem to be the largest. Once you are ready with your
choice, press Enter in the Matlab command window. In the pop-up window type 7.
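The POD Preprocessed option essentially shows the singular value (amplitude) decay of the data. If you prefer to inspect it outside the GUI, a minimal sketch, assuming a snapshot matrix D is already loaded in the workspace, could be:
% Sketch: inspect the amplitude decay to choose the approximation rank r.
% D is assumed to be a snapshot matrix (space along the rows, time along the columns).
s = svd(D);                      % singular values, sorted in decreasing order
semilogy(s/s(1), 'o-');          % normalized decay on a logarithmic scale
xlabel('mode index'); ylabel('normalized amplitude');
% Choose r where the curve levels off, e.g. r = 7 for this tutorial case.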
It's time to export our results. Click Export Results. Start by specifying the name for
your case; you can type anything you want.
Click Save Approximation to export the approximated matrices U_DMD.mat and V_DMD.mat.
Click Gif to draw the movie of the flow behind a cylinder. In the pop-up window you can
enter the range of your data. You can simply click OK.
You should now be seeing a nice colour plot!
Once it is saved, click Export Modes. This time we will not save all 7 modes. Suppose we
only want to save the first one, so type 1 in the pop-up window.
You first see the spatial mode and, by pressing Enter in the Matlab command window, you
move on to the temporal mode.
Press Enter again and you see the graph of the eigenvalues on the complex plane. The red
mode is the one that you are exporting at the moment.
Press Enter one more time and the graphs will be closed and saved.
You can exit the GUI by clicking Exit.
As the final thing, check if the results are saved. Go to the tutorial folder:
POD_DMD_Vbeta1
and find a new folder corresponding to your 2D vector analysis. Inside it, you should see five
files:
U_DMD.mat                     .mat file with the approximated matrix U
V_DMD.mat                     .mat file with the approximated matrix V
GIF_2D_DMD_r7.gif             .gif file with the movie of the approximated flow behind a cylinder
Spatial_Modes_2D_DMD_r1.png   .png file with the graph of the 1st DMD spatial mode
Temp_Modes_2D_DMD_r1.png      .png file with the graph of the 1st DMD temporal mode
Chapter 5
Additional Exercises
5.1 Discrete and Continuous Norms
In the following exercise we seek a relationship between the norm calculated in a discrete way
(as defined in the Euclidean space) and in a continuous way (as defined in the Hilbert space).
In Euclidean space we define an operation between two vectors x and y called the inner product:

\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i \qquad (5.1)

which results in a scalar.
We also define the square of the norm of a vector as the inner product of that vector
with itself:

\| x \|^2 = \langle x, x \rangle \qquad (5.2)
The concept of an inner product and computing the norm carries over from Euclidean space
to the Hilbert space, where it is defined for continuous functions within a certain domain.
We have therefore an inner product between two functions f(x) and g(x):

\langle f(x), g(x) \rangle = \int_{-1}^{1} f(x)\, g(x)\, dx \qquad (5.3)

and, once again, the square of the norm of a function f(x) is defined as the inner product of
the function with itself:

\| f(x) \|^2 = \int_{-1}^{1} f^2(x)\, dx \qquad (5.4)
The functions chosen for this exercise are two Legendre polynomials, L3 and L4:

L_3 = \frac{1}{2}\left(5x^3 - 3x\right) \qquad (5.5a)

L_4 = \frac{1}{8}\left(35x^4 - 30x^2 + 3\right) \qquad (5.5b)
For the purpose of discrete computing we discretize space (x-axis) into nx points. Functions
L3 and L4 become vectors with the number of elements equal to nx. The square of the norm of
each function is calculated in three ways for comparison:
1. using the command trapz() which approximates the definite integral by calculating the
sum of areas of small trapezoids. It is in fact a discrete calculation of a continuous case.
2. using the command norm() which computes the norm of a vector
3. using the command dot() which computes a dot product of each function with itself
In the continuous case, the integrals are computed analytically and evaluated in the code for
the specified integration boundaries.
NOTE: The inner product of the functions L3 and L4 is also computed and, since the Legendre
polynomials form an orthogonal basis, we expect it to be zero:

\langle L_3, L_4 \rangle = 0 \quad \text{in the continuous case} \qquad (5.6a)
\langle L_3, L_4 \rangle \approx 0 \quad \text{in the discrete case} \qquad (5.6b)
Listing 5.1: Matlab code to test the relationship between a discrete and continuous evaluation
of the norm of two Legendre polynomials. disc_cont_exercise_1.m
%% Finding a Relationship Between a Discrete and Continuous ==========
%  Evaluation of a Norm of Two Functions
% ====================================================================
% NOTE: This code actually evaluates the square of the norm
%       for simplicity.
% ====================================================================
clc, clear

% Number of points to discretize the integration interval into:
n_x = 500;

%% Discretizing the integration interval: ============================
a = -1; b = 1;              % integration boundaries
step = (b-a)/(n_x - 1);     % discretization step on the interval
x = [a:step:b];             % x-axis as a vector

%% Functions to be calculated (two Legendre polynomials): ============
L3 = 1/2 * (5*x.^3 - 3*x);              % L3 as a vector
L4 = 1/8 * (35*x.^4 - 30*x.^2 + 3);     % L4 as a vector

%% Approximation of a definite integral of L3 and L4: ================
Area_L3 = trapz(x, L3);     % approximation to the area below L3
Area_L4 = trapz(x, L4);     % approximation to the area below L4

%% Approximation of the inner product of L3 and L4: ==================
IP = L3 .* L4;              % multiplying the two functions L3 and L4

% Discrete calculation of the inner product:
IP_t = trapz(x, IP);        % discrete using trapz()
IP_d = dot(L3, L4);         % discrete using dot()

%% Approximation of the norm of function L3: =========================
L3_L3 = L3.^2;              % multiplying L3 with itself

% Discrete calculation of the norm:
NL3_t = trapz(x, L3_L3);            % discrete using trapz()
NL3_n = norm(L3)^2/(n_x/2);         % discrete using norm()
NL3_d = dot(L3, L3)/(n_x/2);        % discrete using dot()

% Continuous calculation of the norm:
NL3_a = b^3*(25*b^4 - 42*b^2 + 21)/28 - a^3*(25*a^4 - 42*a^2 + 21)/28;

%% Approximation of the norm of function L4: =========================
L4_L4 = L4.^2;              % multiplying L4 with itself

% Discrete calculation of the norm:
NL4_t = trapz(x, L4_L4);            % discrete using trapz()
NL4_n = norm(L4)^2/(n_x/2);         % discrete using norm()
NL4_d = dot(L4, L4)/(n_x/2);        % discrete using dot()

% Continuous calculation of the norm:
NL4_a = b*(1225*b^8 - 2700*b^6 + 1998*b^4 - 540*b^2 + 81)/576 - ...
        a*(1225*a^8 - 2700*a^6 + 1998*a^4 - 540*a^2 + 81)/576;
The numerical results of the code are presented below:
IP_t = -5.2042e-018
IP_d = 1.4433e-015
NL3_t = 0.28575
NL3_n = 0.28917
NL3_d = 0.28917
NL3_a = 0.28571
NL4_t = 0.22228
NL4_n = 0.22583
NL4_d = 0.22583
NL4_a = 0.22222
It is seen, therefore, that in order to match the norms calculated from the analytic solution
with the ones calculated using the discrete methods, we had to divide the latter by nx/2.
As a result of this exercise we can therefore conclude that:

\int_{-1}^{1} f^2(x)\,dx \approx \frac{\|f\|^2}{n_x/2} \qquad (5.7)

and hence, in reverse:

\|f\| \approx \sqrt{\frac{n_x}{2}\int_{-1}^{1} f^2(x)\,dx} \qquad (5.8)

where f is a discrete vector and f(x) is a continuous function. We can also put it in words:

\left(\text{discrete norm}\right)^2 \approx \frac{n_x}{2}\,\left(\text{continuous norm}\right)^2 \qquad (5.9)
The approximation gets better as we increase the number of discretization points nx.
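The factor nx/2 can be traced back to the discretization step used in the code above; the derivation below is a short sketch for the interval [−1, 1] discretized into nx points.

\Delta x = \frac{b-a}{n_x-1} = \frac{2}{n_x-1} \approx \frac{2}{n_x},
\qquad
\|f\|^2 = \sum_{i=1}^{n_x} f_i^2
        = \frac{1}{\Delta x}\sum_{i=1}^{n_x} f_i^2\,\Delta x
        \approx \frac{1}{\Delta x}\int_{-1}^{1} f^2(x)\,dx
        \approx \frac{n_x}{2}\int_{-1}^{1} f^2(x)\,dx .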
This result might be useful for approximating integrals which can be written in the general form

\int_{-1}^{1} f^2(x)\,dx .
5.2 The Phase Shift Ψ Between Two Sine Functions
In this exercise we retrieve a phase shift Ψ between two sine functions by computing the inner
product of two discrete vectors y1 and y2, defined in the following way:
y1 = sin (x) (5.10a)
y2 = sin (x + Ψ) (5.10b)
where x is a vector obtained by discretizing the x-axis and Ψ is the pre-specified phase.
Figure 5.1: Two sine functions shifted in phase by Ψ = 2.
The idea is to find the correlation coefficient ρ between the two vectors y1 and y2. The
correlation coefficient is defined as the cosine of the angle Ψ between the two vectors and is
computed by means of an inner product:

\rho = \cos(\Psi) = \frac{\langle y_1, y_2 \rangle}{\|y_1\| \cdot \|y_2\|} \qquad (5.11)
The correlation coefficient is hence a number between −1 and +1 and is related to the phase
shift. For example, it is zero when the phase is π/2 or 3π/2, -1 when the phase is π and 1 when
the phase is 0 or 2π.
Finally, as we take the arccos (ρ) we get the phase back, since:
arccos (ρ) = arccos (cos (Ψ)) = Ψ (5.12)
There is, however, a problem with retrieving the phase shift exactly. This is due to the
symmetry of the correlation coefficient with respect to the phase Ψ. This symmetry is captured
in the figure below:
Figure 5.2: Symmetry in the correlation coefficient.
For a certain correlation coefficient ρ we cannot tell whether the phase shift is Ψ or 2π − Ψ.
We therefore compute both. In the graphical representation, this means that we cannot tell
whether vector y2 was shifted to the left or to the right of vector y1.
Listing 5.2: Matlab code for retrieving the phase shift between two sine functions.
phase_shift_of_two_sines.m
%% Phase shift between two sine functions ============================
%  Retrieving the phase shift by computing an inner product.
% ====================================================================
clc
clear

%% USER INPUT: =======================================================
PHASE  = 2;         % pre-specified phase
xstart = -10;       % start interval on the x-axis
xend   = -xstart;   % end interval on the x-axis
step   = 0.01;      % step on the x-axis
% END OF USER INPUT ==================================================

%% Computing the actual phase: =======================================
Actual_phase = rem(PHASE, 2*pi);
disp(['The pre-specified phase is: ', num2str(Actual_phase)])

%% Define the span on the x-axis: ====================================
x = [xstart:step:xend];

%% Define the vectors: ===============================================
y1 = sin(x);
y2 = sin(x + PHASE);

%% Find the inner product of two vectors y1 and y2: ==================
Inner_prod = dot(y1, y2);
NORMy1 = norm(y1);
NORMy2 = norm(y2);
CORRELATION = Inner_prod/(NORMy1*NORMy2);
disp(['The correlation coefficient is: ', num2str(CORRELATION)]);

%% Retrieving the phase: =============================================
PHASE_back   = acos(CORRELATION);
mirror_phase = pi + (pi - PHASE_back);
disp(['Retrieved phase: ', num2str(PHASE_back)]);
disp(['or: ', num2str(mirror_phase)]);
In the Matlab code presented above, we calculate the actual (minimum) phase from the pre-
specified one, taking into account that shifting the sine function by any multiple of 2π can be neglected.
The numerical results of the code are presented below:
The pre-specified phase is: 2
The correlation coefficient is: -0.40054
Retrieved phase: 1.9829
or: 4.3003
NOTE: As we increase the span on the x-axis, and/or decrease the step in vector x, the
approximation to the phase gets better.
5.3 A Note on the Sizes of Component Matrices in the SVD
5.3.1 SVD on a General Matrix D
When the SVD is performed on a general matrix D of size n × m the sizes of the resultant
matrices are as follows:
The matrix U is n × n and is always a square matrix.
The matrix Σ is n × m and is the same size as the matrix D.
The matrix V is m × m and is always a square matrix.
The matrix D can be written by means of:

D = U \Sigma V^T \qquad (5.13)

The matrix multiplication from equation (5.13) is represented graphically in Figure 5.3.
5.3.2 SVD on the Matrix X1 from DMD
When the SVD is performed on a general matrix X1 of size np × (nt − 1) the sizes of the resultant
matrices are as follows:
The matrix U is np × np and is always a square matrix.
The matrix Σ is np × (nt − 1) and is the same size as the matrix X1.
The matrix V is (nt − 1) × (nt − 1) and is always a square matrix.
The matrix X1 can be written by means of:

X_1 = U \Sigma V^T \qquad (5.14)

The matrix multiplication from equation (5.14) is represented graphically in Figure 5.4.
5.3.3 SVD with POD Approximation on Matrices D and X1
After approximating the matrices D and X1 with the POD method, the sizes of decomposition
matrices change. Suppose the rank of the approximation is r.
D_r \approx U(:, 1\!:\!r)\,\Sigma(1\!:\!r, 1\!:\!r)\,V(:, 1\!:\!r)^T \qquad (5.15)

The matrix multiplication from equation (5.15) is represented graphically in Figure 5.5.

X_{1,r} \approx U(:, 1\!:\!r)\,\Sigma(1\!:\!r, 1\!:\!r)\,V(:, 1\!:\!r)^T \qquad (5.16)

The matrix multiplication from equation (5.16) is represented graphically in Figure 5.6.
The sizes of Dr and X1,r are the same as the sizes of D and X1.
When the POD approximation is performed on a general matrix D of size np × nt the sizes
of the resultant matrices are as follows:
The matrix U is np × r and is in general a rectangular matrix.
The matrix Σ is r × r and becomes a square matrix.
The matrix V is nt × r and is in general a rectangular matrix.
When the POD approximation is performed on a general matrix X1 of size np × (nt − 1) the
sizes of the resultant matrices are as follows:
The matrix U is np × r and is in general a rectangular matrix.
The matrix Σ is r × r and becomes a square matrix.
The matrix V is (nt − 1) × r and is in general a rectangular matrix.
Notice also that, with the assumption that np > nt, the maximum rank of the matrix D is
nt, so the rank of the approximation Dr is at most r = nt. The maximum rank of the matrix
X1 is nt − 1 and, analogously, the rank of the approximation X1,r is at most r = nt − 1. This
assumption restricts what the maximum sizes of the component matrices can be.
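These sizes can be verified quickly in Matlab; the snippet below is only a sketch with a small synthetic matrix, and the variable names are illustrative.
% Sketch: verify the sizes of the SVD factors for a synthetic matrix
% with n_p = 6 rows (space) and n_t = 4 columns (time snapshots).
np = 6; nt = 4; r = 2;          % r is the chosen approximation rank
D  = rand(np, nt);              % synthetic data matrix

[U, Sigma, V] = svd(D);         % full SVD
size(U)                         % np x np
size(Sigma)                     % np x nt
size(V)                         % nt x nt

% Rank-r truncation used for the POD approximation:
Ur = U(:, 1:r); Sr = Sigma(1:r, 1:r); Vr = V(:, 1:r);
size(Ur)                        % np x r
size(Sr)                        % r x r
size(Vr)                        % nt x r

D_r = Ur * Sr * Vr';            % approximation, same size as D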
Figure 5.3: Graphical representation of the
equation (5.13).
Figure 5.4: Graphical representation of the
equation (5.14).
Figure 5.5: Graphical representation of the
equation (5.15).
Figure 5.6: Graphical representation of the
equation (5.16).
5.4 A Note on the Linear Propagator Matrix
In this section we look more closely at fitting a linear system to the data matrix D, and in
particular at equation (1.6), which we rewrite below:
X2 = AX1 (1.6)
We investigate whether it is possible to find matrix A explicitly.
First, let’s analyze the sizes of each of the matrices in the above equation. Matrices X1 and
X2 have size np × (nt − 1) and since the matrix A multiplies the matrix X1 to give a matrix of
the same size as X1, it has to be size np × np.
We then want to find an equation from which the matrix A can be solved. Since in general
the matrix X1 is not square (except in the rare case np = nt − 1), we cannot compute its inverse
directly. We seek a special kind of inverse, denoted by X_1^{-1*}, so that:

X_2 X_1^{-1*} = A \qquad (5.17)
One idea might be to use a Moore-Penrose inverse and compute the matrix A by means of
the least-squares method. We perform a few algebraic operations on equation (1.6):

X_2 = A X_1 \quad \Big|\; \cdot X_1^T \qquad (5.18a)

X_2 X_1^T = A X_1 X_1^T \quad \Big|\; \cdot (X_1 X_1^T)^{-1} \qquad (5.18b)

X_2 X_1^T (X_1 X_1^T)^{-1} = A \qquad (5.18c)

This will only hold when the product X_1 X_1^T is invertible.
Figure 5.7: Graphical representation of the equation (1.6).
Listing 5.3: Matlab code for attempting to find a linear propagator matrix A.
fitting_linear_systems.m
%% Investigating the linear system of the form X2 = A X1 =============
%  We attempt to find matrix A for a nonlinear set of data and then
%  to retrieve matrix A for an artificially created set.
% ====================================================================
clc
clear

%% Attempt at fitting a linear propagator matrix A ===================
% into nonlinear data:
% Generation of a data matrix of full rank:
D = [2 0 0 0 0; 0 8 2 0 0; 0 0 1 1 0; ...
     0 0 0 0 5; 0 2 0 0 1; 5 0 0 0 1; 0 0 0 1 0];

% Checking the rank of matrix D:
rank_D = rank(D);

% Extracting data sets:
X1_D = D(:, 1:end-1);
X2_D = D(:, 2:end);

% Finding a Moore-Penrose inverse:
square_D  = X1_D * X1_D';
det(square_D);
inverse_D = inv(square_D);

% Attempt at finding A:
A_D = X2_D * X1_D' * inverse_D;

%% Attempt at retrieving the linear propagator matrix A: =============
% Creating matrix A:
A = [5 2 1 5; 1 2 2 2; 3 6 9 5; 1 0 2 2];

% Creating data set X1:
X1_A = [1 2 3; 5 6 1; 0 0 2; 2 0 0];

% Computing data set X2:
X2_A = A*X1_A;

% Finding a Moore-Penrose inverse:
square_A  = X1_A*X1_A';
det(square_A);
inverse_A = inv(square_A);

% Attempt at getting back A:
A_A = X2_A * X1_A' * inverse_A;
In the Matlab code presented above we try to find the matrix A for the data matrix D defined
at the top of the listing. The code produces a warning that the matrix inverse_D is singular, which
means that the determinant of the product X_1 X_1^T is equal to or close to 0 (the 7 × 7 product
has rank at most 4). The matrix A_D is hence not computed reliably by Matlab.
In the second part of the code, we attempt to retrieve the matrix A, specifying it at the
beginning. We therefore go in reverse and compute the matrix X2 that satisfies equation
(1.6). The code produces the same warning about the matrix inverse_A. The matrix A_A does get
computed, but it does not in any way recover the original matrix A. The results are the following:
A =
5 2 1 5
1 2 2 2
3 6 9 5
1 0 2 2
A_A =
-2.0000 1.5000 -8.0000 4.7500
0 2.2500 0 2.0000
6.0000 5.0000 -2.0000 8.5000
1.2500 0.2500 3.0000 1.8750
We might conclude that, in general, the linear propagator matrix of a linear system cannot be
found in this way without an error.
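As a side note (not part of the original code), Matlab's pinv() computes the Moore-Penrose pseudoinverse directly and handles the rank-deficient case through a least-squares solution; a minimal sketch, assuming the matrices A, X1_A and X2_A defined in Listing 5.3, could look like this:
% Sketch: least-squares estimate of the propagator using pinv().
% X1_A is 4x3 (rank at most 3), so A cannot be recovered exactly;
% pinv() returns the minimum-norm least-squares solution instead.
X1_A = [1 2 3; 5 6 1; 0 0 2; 2 0 0];
A    = [5 2 1 5; 1 2 2 2; 3 6 9 5; 1 0 2 2];
X2_A = A*X1_A;

A_ls = X2_A * pinv(X1_A);            % least-squares propagator estimate
residual = norm(X2_A - A_ls*X1_A);   % close to zero on the data, even though A_ls ~= A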
5.5 A Note on the Similarity of Matrices
In this section we analyze in closer detail the similarity condition for two matrices, which is
an important concept in the DMD method, where instead of finding the linear propagator matrix
A, we find a similar matrix S.
Two matrices are similar if they satisfy the equation:

S = Q^{-1} A Q \qquad (5.19)

for any matrix Q that has an inverse.
In the DMD method, the orthogonal matrix U plays the role of the matrix Q. The
matrix U has an inverse, and it is equal to its transpose:

U^{-1} = U^T \qquad (5.20)

We have therefore the equation (1.9) from Section 1.4:

S = U^T A U \qquad (1.9)

We find the size of matrix S by computing it from the left-hand side of equation
(1.8d), which we recall below:

U^T X_2 V \Sigma^{-1} = U^T A U \qquad (1.8d)
Matrices U, V and Σ have already been approximated with the POD method, so the
dimensions from Section 5.3.3 apply.
Hence we have the following size multiplication:
size(S) = (r × np) · (np × (nt − 1)) · ((nt − 1) × r) · (r × r) (5.21a)
size(S) = (r × r) (5.21b)
We then check whether this size agrees with the equation (1.9) and with the size of the
matrix A obtained in the section 5.4. We have therefore:
size(S) = (r × np) · (np × np) · (np × r) (5.22a)
size(S) = (r × r) (5.22b)
In general, the size of matrix A is larger than the size of matrix S. The maximum size of
matrix S is achieved when r = nt − 1 and is in that case equal to (nt − 1) × (nt − 1). We then
find the eigenvalues of matrix S, which approximate some of the eigenvalues of A. Producing a
matrix S of size r × r, we can only retrieve r of them.
In the rare case when np = nt − 1 = r, the sizes of matrices S and A are the same and we can
retrieve all eigenvalues of A.
In the Matlab code presented below we investigate the behaviour of the eigenvalues of a similar
matrix S, first when the matrix S is of the same size as the matrix A, and next when the matrix
S is of a reduced size r × r.
Listing 5.4: Matlab code to investigate the similarity condition. similar_matrices.m
%% Similar Matrices ==================================================
%  In this code we investigate the similarity condition:
%  S = U^-1 A U, for two cases:
%  - when matrix U is of the same size as matrix A.
%  - when matrix U is of reduced size, and approximates only
%    r eigenvalues of matrix A.
% ====================================================================
clc
clear

%% USER INPUT: =======================================================
% Choice of rank to decrease the size of matrix S:
r = 7;
% END OF USER INPUT ==================================================

%% Initial data:
% Generation of a dummy data matrix:
D = [2 0 0 0 0; 0 8 2 0 0; 0 0 1 1 0; ...
     0 0 0 0 5; 0 2 0 0 1; 5 0 0 0 1; 0 0 0 1 0];

len_D = size(D, 1);     % length of matrix D
wid_D = size(D, 2);     % width of matrix D

% Generation of a linear propagator matrix A:
A = zeros(len_D, len_D);

for i = 1:1:(len_D*len_D)
    A(i) = i - 1;
end

for i = 1:4:(len_D*len_D)
    A(i) = i*2;
end

% Finding the eigenvectors and eigenvalues of A:
[eigvec_A, eigval_A] = eig(A);
E_A = diag(eigval_A)

%% Full size: ========================================================
% Creating an orthogonal matrix U:
[U, Sigma, V] = svd(D);

% Creating a similar matrix S:
S = U' * A * U;

% Finding the eigenvectors and eigenvalues of S:
[eigvec_S, eigval_S] = eig(S);
E_S = diag(eigval_S)

%% Reduced size: =====================================================
% Extracting r columns of U:
U_app = U(:, 1:1:r);

% Creating a similar matrix S:
S_app = U_app' * A * U_app;

% Finding the eigenvectors and eigenvalues of S:
[eigvec_S_app, eigval_S_app] = eig(S_app);
E_S_app = diag(eigval_S_app);
% Sorting the eigenvalues by absolute values:
[~, n] = sort(abs(E_S_app), 'descend');
E_S_app = E_S_app(n)

%% Reconstructing eigenvectors of S:
EV_S = U' * eigvec_A
eigvec_S
ERR = norm(abs(EV_S) - abs(eigvec_S))
The eigenvalues of a matrix A of size 7 × 7
are:
E_A =
230.1355
61.1446
43.4887
28.9760
-9.5426
-2.4386
-1.7637
The eigenvalues of a matrix S of full size 7 × 7
are:
E_S =
230.1355
61.1446
43.4887
28.9760
-9.5426
-2.4386
-1.7637
Next, we perform the approximations of the eigenvalues of A by increasing the rank r from
1 to 7.
For r = 1:
E_S_app =
32.9295
For r = 2:
E_S_app =
145.9173
-1.5572
For r = 3:
E_S_app =
145.9611
30.7503
-3.4328
For r = 4:
E_S_app =
191.3768
54.7387
29.5563
-3.4428
For r = 5:
E_S_app =
231.9506
59.2763
29.5688
7.8134
-3.7689
For r = 6:
E_S_app =
227.7946
60.9257
32.8918
14.4950
-9.8219
-2.3767
For r = 7:
E_S_app =
230.1355
61.1446
43.4887
28.9760
-9.5426
-2.4386
-1.7637
The eigenvectors of matrices A and S are different; however, they are linked by the following
relationship:

\text{eigenvectors}(S) = U^{-1}\, \text{eigenvectors}(A) \qquad (5.23)
This relationship is checked in the Matlab code above and produces the following results.
The eigenvectors of matrix S reconstructed with the relation (5.23):
EV_S =
0.4488 0.4735 -0.1361 -0.1386 0.4951 0.6765 0.4071
-0.6016 -0.5088 -0.0648 -0.3259 -0.0558 0.3550 0.0981
-0.0148 0.2253 -0.2696 -0.8201 0.1092 -0.2603 -0.0056
0.5369 -0.6507 -0.3023 -0.0148 0.2808 0.1577 -0.4903
0.3209 -0.1834 0.3233 -0.2662 -0.4042 -0.2030 0.4880
0.1544 -0.0895 0.6101 -0.0923 0.7003 -0.5239 0.1950
0.1459 0.0346 0.5800 -0.3499 0.0849 0.0900 -0.5550
The eigenvectors of matrix S computed directly with eig():
eigvec_S =
-0.4488 0.4735 -0.1361 0.1386 0.4951 0.6765 -0.4071
0.6016 -0.5088 -0.0648 0.3259 -0.0558 0.3550 -0.0981
0.0148 0.2253 -0.2696 0.8201 0.1092 -0.2603 0.0056
-0.5369 -0.6507 -0.3023 0.0148 0.2808 0.1577 0.4903
-0.3209 -0.1834 0.3233 0.2662 -0.4042 -0.2030 -0.4880
-0.1544 -0.0895 0.6101 0.0923 0.7003 -0.5239 -0.1950
-0.1459 0.0346 0.5800 0.3499 0.0849 0.0900 0.5550
Notice that they are the same in absolute value (some entries only differ in sign). The L2
norm of the error computed from the absolute values of the above matrices is:
ERR =
1.7422e-014
5.6 A Note on the Linear Dynamical Systems
In linear dynamical systems, any kth column of the matrix D can be represented by the initial
column D0 multiplied by the kth power of the linear propagator matrix S:

D_k = S^k D_0 \qquad (5.24)

Substituting the eigendecomposition of the matrix S, we get:

D_k = \Phi M^k \Phi^{-1} D_0 \qquad (5.25)

Writing the above equation as a sum, we get:

D_k = \sum_{j=1}^{r} \phi_j \mu_j^k b_j \qquad (5.26)

Substituting \mu_j = e^{\omega_j}:

D_k = \sum_{j=1}^{r} \phi_j \left(e^{\omega_j}\right)^k b_j = \sum_{j=1}^{r} \phi_j e^{\omega_j k} b_j \qquad (5.27)

Since every column of the matrix D is linked to a particular moment in time, the integer k can
be written in terms of the time it corresponds to, divided by the timestep in our data: k = t_i/\Delta t.
Substituting this back into (5.27), we get:

D_r = \sum_{j=1}^{r} \phi_j e^{\omega_j t/\Delta t} b_j \qquad (5.28)
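A minimal sketch of this reconstruction, written directly from equation (5.26), is given below; it assumes the DMD modes Phi, the eigenvalues mu and the amplitudes b are already available (for instance from the DMD codes in the appendices), and the variable names are only illustrative.
% Sketch of the reconstruction in equation (5.26), assuming:
%   Phi is n_p x r (DMD spatial modes), mu and b are r x 1 (eigenvalues, amplitudes).
n_t   = 100;                          % number of snapshots to reconstruct
D_rec = zeros(size(Phi, 1), n_t);     % reconstructed data matrix

for k = 0:n_t-1
    % k-th column: sum over the r modes of phi_j * mu_j^k * b_j
    D_rec(:, k+1) = Phi * (mu.^k .* b);
end
D_rec = real(D_rec);                  % keep the real part of the reconstruction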
Appendix A
Pulsating Poiseuille Flow
Listing A.1: Matlab code to approximate the Poiseuille flow with three different methods.
pulsating_poiseuille_approximations.m
1 %% Pulsating P o i s e u i l l e Flow ========================================
2 % Approximating the v e l o c i t y p r o f i l e with eigenfunction expansion ,
3 % POD and DMD.
4 % ===================================================================
5 c l c
6 c l e a r
7 c l o s e a l l
8
9 %% I n i t i a l data : ====================================================
10 % USER INPUT ========================================================
11 TIME = 10; % t o t a l time of the animation
12 W = 10; % Womersley number
13 Pa = 60; % dimensionless pressure c o e f f i c i e n t
14 FONT = 10; % f o n t s i z e fo r graphs
15 modes = 5; % number of modes f or a l l approximations (max 7)
16 dt = 0 . 0 5 ; % time step
17 dy = 0 . 0 5 ; % space step
18 % END OF USER INPUT =================================================
19 MODES = modes ; % number of modes in eigenfunction approximation
20 RANK = modes ; % number of modes in POD approximation
21 r = modes ; % number of modes in DMD approximation
22 nm = modes ; % number of f i r s t modes to draw
23
24 % D efin ition of colours f o r pl ot t in g :
25 red = [236 68 28] ./ 255; % used f or eigenfunction
26 blue = [3 105 172] ./ 255; % used f o r POD
27 green = [34 139 34] ./ 255; % used f o r DMD
28
29 % I n i t i a l i z e error matrices :
30 ERROR_EIG = zeros (MODES, 1) ;
31 ERROR_POD = zeros (RANK, 1) ;
32 ERROR_DMD = zeros ( r , 1) ;
33
34 %% Analytical r e s u l t from asymptotic complex solution : ==============
35 t = [ 0 : dt :TIME ] ; n_t = length ( t ) ; % time d i s c r e t i z a t i o n
36 y = [ −1:dy : 1 ] ; n_y = length (y) ; % space d i s c r e t i z a t i o n
37 u_A_r = zeros (n_y, n_t) ; % i n i t i a l i z e solution
48
38
39 % Construct r e a l and imaginary parts :
40 f or j = 1: length ( t )
41
42 Y = (1 − cosh (W∗ sqrt (1 i ) .∗ y) . / ( cosh (W∗ sqrt (1 i ) ) ) ) ∗1 i ∗Pa/W.^2;
43 u_A_r( : , j ) = r e a l (Y.∗ exp (1 i ∗ t ( j ) ) ) ;
44 end
45
46 % Adding the mean flow component :
47 u_Mb = (1 − y .^2) ∗ 0 . 5 ; % mean flow
48 u_M = repmat (u_Mb, length ( t ) , 1) ; % repeat solution
49 u_A_R = u_M’ + u_A_r; % r e a l a n a l y t i c a l solution
50
51 %% Eigenfunction approximation : =====================================
52 % Extended solution matrix ( fo r p lo t ti ng ) :
53 u_T_extend = zeros (n_y, MODES∗n_t) ;
54
55 % Calculate the solution matrix f o r each mode :
56 f or j = 1 : 1 :MODES
57
58 n = [ 1 : 1 : j ] ; % matrix of modes to include in the summation
59
60 % I n i t i a l i z e matrices :
61 Y = zeros (n_y, j ) ; % i n i t i a l i z e s p a c i a l basis
62 A_n = zeros ( j , j ) ; % i n i t i a l i z e amplitude matrix
63 T = zeros (n_t , j ) ; % i n i t i a l i z e temporal basis
64 U_A = zeros (n_t , n_y) ; % i n i t i a l i z e PDE solution
65
66 % Construct s p a t i a l basis :
67 f or i = 1 : 1 : length (n)
68
69 N = 2∗n( i ) − 1; % odd number in the s e r i e s
70 Y( : , i ) = cos (N∗ pi ∗y/2) ;
71 end
72
73 % Construct the amplitudes :
74 f or i = 1 : 1 : length (n)
75
76 N = 2∗n( i ) − 1; % odd number in the s e r i e s
77 A_n( i , i ) = (16∗Pa) / (N∗ pi ∗ sqrt ((2∗W)^4 + N^4∗ pi ^4) ) ;
78 end
79
80 % Construct the temporal modes :
81 f or i = 1 : 1 : length (n)
82
83 N = 2∗n( i ) − 1; % odd number in the s e r i e s
84 T( : , i ) = (−1)^(n( i ) ) ∗ cos ( t − atan ((4∗W^2) / (N^2∗ pi ^2) ) ) ;
85 end
86
87 % Assembly solution :
88 U_A = Y ∗ A_n ∗ T’ ;
89
90 % Adding the mean flow component :
91 u_Mb = (1 − y .^2) ∗ 0 . 5 ; % mean flow
49
92 u_M = repmat (u_Mb, length ( t ) , 1) ; % repeat solution
93 u_T = U_A + u_M’ ; % eigenfunction solution
94
95 % Paste solution to the large matrix ( fo r p lo t ti ng ) :
96 u_T_extend ( : , (( j − 1) ∗n_t + 1) : 1 : ( j ∗n_t) ) = u_T;
97
98 % Compute the error of the current approximation :
99 ERROR_EIG( j ) = abs (norm(u_T − u_A_R) ) ;
100 end
101
102 % Obtain elements from the diagonal :
103 sigma_A_n = diag (A_n) ;
104
105 %% POD approximation : ===============================================
106 % SVD of the o r i g i n a l solution matrix :
107 [U_POD, S_POD, V_POD] = svd (u_A_R) ;
108
109 f or j = 1 : 1 :RANK
110
111 % Create a POD approximation :
112 U_POD_approx = U_POD( : , 1 : 1 : j ) ∗ . . .
113 S_POD( 1 : 1 : j , 1 : 1 : j ) ∗ V_POD( : , 1 : 1 : j ) ’ ;
114 % Compute the error of the current approximation :
115 ERROR_POD( j ) = abs (norm(U_POD_approx − u_A_R) ) ;
116 end
117
118 % Obtain elements from the diagonal :
119 sigma_POD = diag (S_POD) ;
120
121 %% DMD approximation : ===============================================
122 % Extended solution matrix ( f or p lo t ti ng ) :
123 U_DMD_extend = zeros (n_y, r ∗n_t) ;
124
125 % Calculate the solution matrix f o r each mode :
126 f or j = 1 : 1 : r
127
128 % Define matrix D:
129 D = u_A_R;
130
131 % Construct data s e t s X1 and X2:
132 X1 = D( : , 1: end−1) ; X2 = D( : , 2: end ) ;
133
134 % Compute the POD (SVD) of X1:
135 [U, Sigma , V] = svd (X1, ’ econ ’ ) ;
136
137 % Approximate matrix X1 keeping only r elements of the sum :
138 U = U( : , 1 : 1 : j ) ; % retain only r modes in U
139 Sigma = Sigma ( 1 : j , 1: j ) ; % retain only r modes in Sigma
140 V = V( : , 1 : 1 : j ) ; % retain only r modes in V
141
142 % Construct the propagator S :
143 S = U’ ∗ X2 ∗ V ∗ inv ( Sigma ) ;
144
145 % Compute eigenvalues and eigenvectors of the matrix S :
50
146 [ PHI , MU] = eig (S) ;
147
148 % Extract f r e q u e n c i e s from the diagonal :
149 mu = diag (MU) ;
150
151 % Extract r e a l and imaginary parts of f r e q u e n c i e s :
152 lambda_r = r e a l (mu) ; lambda_i = imag (mu) ;
153
154 % Frequency in terms of pulsation :
155 omega = log (mu) /dt ;
156
157 % Compute the DMD s p a t i a l modes :
158 Phi = U ∗ PHI ;
159
160 % Compute amplitudes with the least −squares method :
161 b2 = inv ( Phi ’ ∗ Phi ) ∗ Phi ’ ∗ X1( : , 1 ) ;
162
163 % Compute the DMD temporal modes :
164 T_modes = zeros ( j , n_t) ;
165 f or i = 1: length ( t )
166
167 T_modes ( : , i ) = b2 .∗ exp (omega ∗ t ( i ) ) ;
168 end
169
170 % Get the f u l l DMD reconstruction :
171 U_DMD = r e a l ( Phi ∗ T_modes) ;
172
173 % Paste solution to the large matrix ( f or p lo t ti ng ) :
174 U_DMD_extend( : , (( j − 1) ∗n_t + 1) : 1 : ( j ∗n_t) ) = U_DMD;
175
176 % Compute the error of the current approximation :
177 ERROR_DMD( j ) = abs (norm(U_DMD − u_A_R) ) ;
178
179 %% Plot of the f i r s t modes : =====================================
180 hfig1 = f i g u r e (1) ;
181 set ( hfig1 , ’ units ’ , ’ normalized ’ , ’ outerposition ’ , [0 0 1 1 ] ) ;
182
183 % DMD s p a t i a l modes :
184 subplot (2 , nm , j ) ;
185 plot (y , r e a l ( Phi ( : , j ) ) /norm( Phi ) , ’ color ’ , green , . . .
186 ’ LineStyle ’ , ’ : ’ , ’ LineWidth ’ , 1.5) ;
187 [M] = AXIS(FONT) ;
188 set ( gcf , ’ color ’ , ’w ’ ) ;
189 t i t l e ( [ ’ phi_{ ’ , num2str ( j ) , ’ } ’ ] ) ;
190 hold on
191
192 % DMD temporal modes :
193 subplot (2 , nm, j+nm) ;
194 plot ( t , r e a l (T_modes ( 1 , : ) ) /norm(T_modes) , ’ color ’ , . . .
195 green , ’ LineStyle ’ , ’− ’ ) ;
196 [M] = AXIS(FONT) ;
197 set ( gcf , ’ color ’ , ’w ’ ) ;
198 t i t l e ( [ ’ psi_{ ’ , num2str ( j ) , ’ } ’ ] ) ;
199 hold on
51
200 end
201
202 %% Plot of the f i r s t modes : =========================================
203 hfig1 = f i g u r e (1) ;
204
205 % Plot f o r modes from eigenfunction expansion :
206 % Spatial :
207 f or j = 1 : 1 :nm
208
209 subplot (2 , nm, j ) ;
210 plot (y , Y( : , j ) /norm(Y) , ’ color ’ , red , ’ LineStyle ’ , ’− ’ ) ;
211 hold on
212 [M] = AXIS(FONT) ;
213 set ( gcf , ’ color ’ , ’w ’ ) ;
214 t i t l e ( [ ’ phi_{ ’ , num2str ( j ) , ’ } ’ ] ) ;
215 end
216
217 % Temporal :
218 f or j = 1 : 1 :nm
219
220 subplot (2 , nm, j+nm) ;
221 plot ( t , T( : , j ) /norm(T) , ’ color ’ , red , ’ LineStyle ’ , ’− ’ ) ;
222 hold on
223 [M] = AXIS(FONT) ;
224 set ( gcf , ’ color ’ , ’w ’ ) ;
225 t i t l e ( [ ’ psi_{ ’ , num2str ( j ) , ’ } ’ ] ) ;
226 end
227
228 % Plot f o r modes from POD:
229 % Spatial :
230 f or j = 1 : 1 :nm
231
232 subplot (2 , nm, j ) ;
233 plot (y , U_POD( : , j ) /norm(U_POD) , ’ color ’ , blue , ’ LineStyle ’ , ’− ’ )
234 [M] = AXIS(FONT) ;
235 set ( gcf , ’ color ’ , ’w ’ ) ;
236 end
237
238 % Temporal :
239 f or j = 1 : 1 :nm
240
241 subplot (2 , nm, j+nm) ;
242 plot ( t , V_POD( : , j ) /norm(V_POD) , ’ color ’ , blue , ’ LineStyle ’ , ’− ’ )
243 [M] = AXIS(FONT) ;
244 set ( gcf , ’ color ’ , ’w ’ ) ;
245 end
246
247 % Save the plot :
248 print ( ’−dpng ’ , ’−r500 ’ , [ ’Modes_T ’ , num2str (TIME) , ’ . png ’ ] )
249
250 %% Plot f or amplitude decay f or both approximations : ================
251 hfig2 = f i g u r e (2) ;
252 LABEL = { ’ Eigenfunction . . . ’ , ’POD’ };
253 MARKER = { ’ o ’ , ’ s ’ };
52
254
255 % Get the f i r s t element to normalize the plot :
256 norm_POD = sigma_POD(1) ;
257 norm_EIG = sigma_A_n(1) ;
258
259 % Use only MODES number of terms from the diagonal :
260 Line_POD = sigma_POD( 1 :MODES) /norm_POD;
261 Line_A_n = sigma_A_n ( 1 :MODES) /norm_EIG ;
262
263 f or j = 1 : 1 :MODES
264
265 % Plot f o r eigenfunction amplitude decay :
266 plot ( j , sigma_A_n( j ) /norm_EIG, MARKER{1} , ’ color ’ , red ) ;
267 [M] = AXIS(FONT) ;
268 set ( gcf , ’ color ’ , ’w ’ ) ;
269 hold on
270
271 % Plot f o r POD amplitude decay :
272 plot ( j , sigma_POD( j ) /norm_POD, MARKER{2} , ’ color ’ , blue ) ;
273 [M] = AXIS(FONT) ;
274 set ( gcf , ’ color ’ , ’w ’ ) ;
275 legend (LABEL) ;
276 l i n e ( [ 1 :MODES] , [ Line_A_n ] , ’ color ’ , red ) ;
277 l i n e ( [ 1 :MODES] , [Line_POD] , ’ color ’ , blue ) ;
278 t i t l e ( [ ’ Amplitude decay rate ( normalized ) ’ ] ) ;
279 ylim ([ −0.1 1 . 1 ] ) ;
280 xlim ( [ 0 . 9 ( modes + 0.1) ] ) ;
281 end
282
283 % Save the plot :
284 print ( ’−dpng ’ , ’−r500 ’ , ’ Amplitude_decay . png ’ )
285
286 %% Plot of the eigenvalues c i r c l e : ==================================
287 hfig3 = f i g u r e (3) ;
288
289 % D efin ition of the c i r c l e in a complex plane :
290 radius = 1; z_Circle = radius ∗ exp ( ( 0 : 0 . 1 : ( 2 ∗ pi ) ) ∗ sqrt (−1) ) ;
291
292 % Eigenvalues and complex c i r c l e :
293 stem3 (lambda_r , lambda_i , r e a l ( abs ( b2 ) ) , ’ color ’ , green ) ;
294 hold on
295 plot ( r e a l ( z_Circle ) , imag ( z_Circle ) , ’k−’ )
296 [M] = AXIS(FONT) ;
297 set ( gcf , ’ color ’ , ’w ’ ) ;
298 t i t l e ( [ ’ Eigenvalues c i r c l e fo r DMD with r = ’ , num2str ( r ) ] ) ;
299
300 % Save the plot :
301 print ( ’−dpng ’ , ’−r500 ’ , ’ Eigenvalues_circle . png ’ )
302
303 %% Movie of a pulsating v e l o c i t y p r o f i l e with approximations : =======
304 hfig4 = f i g u r e (4) ;
305 MARKER = { ’ : ’ , ’−−’ , ’ −. ’ , ’−−’ , ’−−’ , ’−−’ , ’−−’ };
306 LABEL = { ’ Original . . ’ , ’U_1 ’ , ’U_2 ’ , ’U_3 ’ };
307
53
308 f or i = 1 : 1 : length ( t )
309
310 hold o f f
311
312 % Plot f o r eigenfunction approximation :
313 subplot (1 ,3 ,1)
314 ORIGINAL = u_A_R( : , i ) ;
315 plot (y , ORIGINAL, ’k−’ , ’ linewidth ’ , 1) ;
316 [M] = AXIS(FONT) ;
317 set ( gcf , ’ color ’ , ’w ’ ) ;
318 hold on
319 f or j = 1 : 1 :MODES
320
321 EIGEN = u_T_extend ( : , (n_t∗( j − 1) + i ) ) ;
322 plot (y , EIGEN, MARKER{ j } , ’ color ’ , red ) ;
323 [M] = AXIS(FONT) ;
324 set ( gcf , ’ color ’ , ’w ’ ) ;
325 end
326 t i t l e ( [ ’ Eigenfunction ’ ] ) ;
327 i f modes <= 3
328
329 legend (LABEL) ;
330 end
331 ylim ([ −1 1 . 5 ] ) ;
332 xlim ([ −1 1 ] ) ;
333 hold o f f
334
335 % Plot of the POD approximation :
336 subplot (1 ,3 ,2)
337 plot (y , ORIGINAL, ’k−’ , ’ linewidth ’ , 1) ;
338 [M] = AXIS(FONT) ;
339 set ( gcf , ’ color ’ , ’w ’ ) ;
340 hold on
341 f or j = 1 : 1 :RANK
342
343 U_POD_approx = U_POD( : , 1 : 1 : j ) ∗ . . .
344 S_POD( 1 : 1 : j , 1 : 1 : j ) ∗ V_POD( : , 1 : 1 : j ) ’ ;
345 POD = U_POD_approx( : , i ) ;
346 plot (y , POD, MARKER{ j } , ’ color ’ , blue ) ;
347 [M] = AXIS(FONT) ;
348 set ( gcf , ’ color ’ , ’w ’ ) ;
349 end
350 t i t l e ( [ ’POD’ ] ) ;
351 i f modes <= 3
352
353 legend (LABEL) ;
354 end
355 ylim ([ −1 1 . 5 ] ) ;
356 xlim ([ −1 1 ] ) ;
357 hold o f f
358
359 % Plot of the DMD approximation :
360 subplot (1 ,3 ,3)
361 plot (y , ORIGINAL, ’k−’ , ’ linewidth ’ , 1) ;
54
362 [M] = AXIS(FONT) ;
363 set ( gcf , ’ color ’ , ’w ’ ) ;
364 hold on
365 f or j = 1 : 1 : r
366
367 DMD_APPROX = U_DMD_extend( : , (n_t∗( j −1)+i ) ) ;
368 plot (y , DMD_APPROX, MARKER{ j } , ’ color ’ , green ) ;
369 [M] = AXIS(FONT) ;
370 set ( gcf , ’ color ’ , ’w ’ ) ;
371 end
372 t i t l e ( [ ’DMD’ ] ) ;
373 i f modes <= 3
374
375 legend (LABEL) ;
376 end
377 ylim ([ −1 1 . 5 ] ) ;
378 xlim ([ −1 1 ] ) ;
379 hold o f f
380 drawnow
381
382 % Save the g i f :
383 frame = getframe (4) ;
384 im = frame2im ( frame ) ;
385 [ imind , cm] = rgb2ind (im , 256) ;
386 filename = ’ Pulsating_Poiseuille_Movie . g i f ’ ;
387 i f i == 1;
388 imwrite ( imind , cm, filename , ’ g i f ’ , ’ Loopcount ’ , i n f ) ;
389 e l s e
390 imwrite ( imind , cm, filename , ’ g i f ’ , ’ WriteMode ’ , . . .
391 ’ append ’ , ’ DelayTime ’ , 0.1) ;
392 end
393 end
394
395 %% Plot of the error of each approximation : =========================
396 hfig5 = f i g u r e (5) ;
397
398 % Maximum of each error :
399 max_eig = max(ERROR_EIG) ;
400 max_pod = max(ERROR_POD) ;
401 max_dmd = max(ERROR_DMD) ;
402
403 f or j = 1 : 1 : modes
404
405 LABEL = { ’ Eigenfunction . . . ’ , ’POD’ , ’DMD’ };
406 plot ( j , ERROR_EIG( j ) /max_eig , ’ color ’ , red , ’ LineStyle ’ , ’ o ’ )
407 [M] = AXIS(FONT) ;
408 set ( gcf , ’ color ’ , ’w ’ ) ;
409 hold on
410 plot ( j , ERROR_POD( j ) /max_pod, ’ color ’ , blue , ’ LineStyle ’ , ’ s ’ )
411 [M] = AXIS(FONT) ;
412 set ( gcf , ’ color ’ , ’w ’ ) ;
413 plot ( j , ERROR_DMD( j ) /max_dmd, ’ color ’ , green , ’ LineStyle ’ , ’^ ’ )
414 [M] = AXIS(FONT) ;
415 set ( gcf , ’ color ’ , ’w ’ ) ;
55
416 legend (LABEL) ;
417 l i n e ( [ 1 : modes ] , [ERROR_EIG] . / max_eig , ’ color ’ , red ) ;
418 l i n e ( [ 1 : modes ] , [ERROR_POD] . / max_pod, ’ color ’ , blue ) ;
419 l i n e ( [ 1 : modes ] , [ERROR_DMD] . /max_dmd, ’ color ’ , green ) ;
420 t i t l e ( [ ’ Normalized error of each approximation ’ ] ) ;
421 ylim ([ −0.1 1 . 1 ] ) ;
422 xlim ( [ 0 . 9 ( modes + 0.1) ] ) ;
423 end
424
425 % Save the plot :
426 print ( ’−dpng ’ , ’−r500 ’ , ’ Error . png ’ )
427
428 %% Ending : ==========================================================
429 c l o s e a l l
430 c l c
Appendix B
1D and 2D POD Functions
Listing B.1: Matlab function for performing 1D POD. POD_1D.m
%% POD_1D function ===================================================
%  POD done on a 1D data set created by:
%  - matrix D
%  - vector y
% ====================================================================

function [D_POD, U_POD, S_POD, V_POD] = POD_1D(D, r)
    % SVD of original solution matrix:
    [U_POD, S_POD, V_POD] = svd(D);

    % POD Approximation:
    D_POD = U_POD(:, 1:1:r) * S_POD(1:1:r, 1:1:r) * V_POD(:, 1:1:r)';
end
Listing B.2: Matlab function for performing 2D POD. POD_2D.m
1 %% POD_2D function ==================================================
2 % POD done on a 2D data set created by :
3 % − matrix U
4 % − matrix V
5 % − vector X
6 % − vector Y
7 % ===================================================================
8
9 function [U_POD, V_POD, UU_POD, VU_POD, UV_POD, VV_POD] = . . .
10 POD_2D(U, V, X, Y, r )
11
12 %% Check the s i z e s of matrices U, V, X and Y: =======================
13 len_U = s i z e (U, 1 ) ;
14 len_V = s i z e (V, 1 ) ;
15 wid_U = s i z e (U, 2 ) ;
16 wid_V = s i z e (V, 2 ) ;
17 len_X = length (X) ;
18 len_Y = length (Y) ;
19
20 i f len_U == len_V && len_U == len_X ∗ len_Y && wid_U == wid_V
21 %% I n i t i a l d e f i n i t i o n s : =============================================
22 % Changing NaN to zeros fo r the s o l i d parts :
23 V( isnan (V) ) = 0;
24 U( isnan (U) ) = 0;
25
26 SPACE_X = X; % extracting space X−coordinates
27 SPACE_Y = Y; % extracting space Y−coordinates
28 n_x = length (SPACE_X) ; % number of X−coordinates
29 n_y = length (SPACE_Y) ; % number of Y−coordinates
30 n_t = s i z e (V, 2 ) ; % number of time steps
31
32 %% POD approximation : ===============================================
33 % SVD of o r i g i n a l solution matrix :
34 [UU_POD, SU_POD, VU_POD] = svd (U, ’ econ ’ ) ;
35 [UV_POD, SV_POD, VV_POD] = svd (V, ’ econ ’ ) ;
36
37 % POD Approximation :
38 U_POD = UU_POD( : , 1 : 1 : r ) ∗ SU_POD( 1 : 1 : r , 1 : 1 : r ) ∗ VU_POD( : , 1 : 1 : r ) ’ ;
39 V_POD = UV_POD( : , 1 : 1 : r ) ∗ SV_POD( 1 : 1 : r , 1 : 1 : r ) ∗ VV_POD( : , 1 : 1 : r ) ’ ;
40
41 % Change zeros to NaN to re−create s o l i d parts :
42 U_POD(U_POD == 0) = NaN;
43 V_POD(V_POD == 0) = NaN;
44
45 e l s e
46 disp ( [ ’Wrong matrix dimensions . ’ ] )
47
48 U_POD = NaN; V_POD = NaN; UU_POD = NaN; VU_POD = NaN; . . .
49 UV_POD = NaN; VV_POD = NaN;
50 end
Appendix C
1D and 2D DMD Functions
Listing C.1: Matlab function for performing 1D DMD. DMD_1D.m
1 %% DMD_1D function ==================================================
2 % DMD done on a 1D data set created by :
3 % − matrix D
4 % − vector y
5 % ===================================================================
6
7 function [ lambda_r , lambda_i , D_DMD_extend, bD, Phi_extend , . . .
8 T_modes_extend ] = DMD_1D(D, y , dt , r )
9
10 %% Check the s i z e s of matrices D and y : =============================
11 len_D = s i z e (D, 1 ) ;
12 len_y = length (y) ;
13
14 i f len_D == len_y
15 %% I n i t i a l d e f i n i t i o n s : =============================================
16 n_t = s i z e (D, 2 ) ; % number of time steps
17 n_y = s i z e (D, 1 ) ; % number of space y−coordinates
18
19 %% Create extended output matrices to containt a l l ranks ============
20 % approximations up to r :
21 D_DMD_extend = zeros (n_y, r ∗n_t) ; % solution
22 T_modes_extend = zeros ((1+ r ) ∗ r /2 , n_t) ; % temporal structures
23 Phi_extend = zeros (n_y, (1+r ) ∗ r /2) ; % s p a t i a l structures
24 s t a r t = 1;
25
26 %% DMD approximation : ===============================================
27 % Construct data s e t s X1 and X2:
28 X1 = D( : , 1: end − 1) ; X2 = D( : , 2: end ) ;
29
30 f or i = 1 : 1 : r
31 % Compute the POD (SVD) of X1:
32 [UD, SigmaD , VD] = svd (X1, ’ econ ’ ) ;
33
34 % Approximate matrix X1 keeping only r elements of the sum :
35 UD = UD( : , 1 : 1 : i ) ; % f i l t e r : retain only r modes in U
36 SigmaD = SigmaD ( 1 : 1 : i , 1 : 1 : i ) ; % f i l t e r : retain only r modes in Sigma
37 VD = VD( : , 1 : 1 : i ) ; % f i l t e r : retain only r modes in V
38
59
39 % Construct the propagator S :
40 S = UD’ ∗ X2 ∗ VD ∗ inv (SigmaD) ;
41
42 % Compute eigenvalues and eigenvectors of the matrix S :
43 [ PHI , MU] = eig (S) ; % eigenvalue decomposition of matrix S
44
45 % Extract f r e q u e n c i e s :
46 mu = diag (MU) ;
47
48 % Extract r e a l and imaginary parts of f r e q u e n c i e s :
49 lambda_r = r e a l (mu) ; lambda_i = imag (mu) ;
50
51 % Frequency in terms of pulsation :
52 omega = log (mu) /dt ; % computing natural log
53
54 % Compute DMD s p a t i a l modes :
55 Phi = UD ∗ PHI ;
56
57 % Compute DMD amplitudes with the least −squares method :
58 bD = inv ( Phi ’ ∗ Phi ) ∗ Phi ’ ∗ X1( : , 1 ) ;
59
60 % Define temporal behaviour :
61 TIME = [ 0 : 1 : n_t − 1] ∗ dt ;
62
63 % I n i t i a l i z e temporal modes matrix :
64 T_modes = zeros ( i , n_t) ;
65
66 % Compute DMD temporal modes :
67 f or j = 1: length (TIME)
68 T_modes ( : , j ) = bD .∗ exp (omega ∗ TIME( j ) ) ;
69 end
70
71 % Get the r e a l part of the DMD reconstruction :
72 D_DMD = r e a l ( Phi ∗ T_modes) ;
73
74 % Paste current s o l u t i o n s into the right place in extended matrices :
75 D_DMD_extend( : , (( i − 1) ∗n_t + 1) : 1 : ( i ∗n_t) ) = D_DMD;
76 s t a r t = s t a r t + ( i −1) ;
77 T_modes_extend (( s t a r t : 1 : s t a r t +(i −1)) , : ) = r e a l (T_modes) ;
78 Phi_extend ( : , ( s t a r t : 1 : s t a r t +(i −1)) ) = r e a l ( Phi ) ;
79 end
80
81 e l s e
82 disp ( [ ’Wrong matrix dimensions . ’ ] )
83
84 lambda_r = NaN; lambda_i = NaN; D_DMD = NaN; bD = NaN;
85 end
Listing C.2: Matlab function for performing 2D DMD. DMD_2D.m
1 %% DMD_2D function ==================================================
2 % DMD done on a 2D data set created by :
3 % − matrix U
4 % − matrix V
5 % − vector X
6 % − vector Y
7 % ===================================================================
8
9 function [ lambdaU_r , lambdaU_i , lambdaV_r , lambdaV_i , U_DMD, . . .
10 V_DMD, bU, bV, Phi_U, Phi_V, T_modesU, T_modesV] . . .
11 = DMD_2D(U, V, X, Y, dt , r )
12
13 %% Check the s i z e s of matrices U, V, X and Y: =======================
14 len_U = s i z e (U, 1 ) ;
15 len_V = s i z e (V, 1 ) ;
16 wid_U = s i z e (U, 2 ) ;
17 wid_V = s i z e (V, 2 ) ;
18 len_X = length (X) ;
19 len_Y = length (Y) ;
20
21 i f len_U == len_V && len_U == len_X ∗ len_Y && wid_U == wid_V
22 %% I n i t i a l d e f i n i t i o n s : =============================================
23 % Changing NaN to zeros fo r the s o l i d parts :
24 V( isnan (V) ) = 0;
25 U( isnan (U) ) = 0;
26
27 SPACE_X = X; % extracting space X−coordinates
28 SPACE_Y = Y; % extracting space Y−coordinates
29 n_x = length (SPACE_X) ; % number of X−coordinates
30 n_y = length (SPACE_Y) ; % number of Y−coordinates
31 n_t = s i z e (V, 2 ) ; % number of time steps
32
33 %% DMD approximation : ===============================================
34 % Construct data s e t s X1 and X2:
35 X1U = U( : , 1: end − 1) ; X2U = U( : , 2: end ) ;
36 X1V = V( : , 1: end − 1) ; X2V = V( : , 2: end ) ;
37
38 % Compute the POD (SVD) of X1:
39 [UU, SigmaU , VU] = svd (X1U, ’ econ ’ ) ;
40 [UV, SigmaV , VV] = svd (X1V, ’ econ ’ ) ;
41
42 % Approximate matrix X1 keeping only r elements of the sum :
43 UUn = UU( : , 1 : 1 : r ) ; % f i l t e r : retain only r modes in U
44 SigmaUn = SigmaU ( 1 : r , 1: r ) ; % f i l t e r : retain only r modes in Sigma
45 VUn = VU( : , 1 : 1 : r ) ; % f i l t e r : retain only r modes in V
46
47 UVn = UV( : , 1 : 1 : r ) ; % f i l t e r : retain only r modes in U
48 SigmaVn = SigmaV ( 1 : r , 1: r ) ; % f i l t e r : retain only r modes in Sigma
49 VVn = VV( : , 1 : 1 : r ) ; % f i l t e r : retain only r modes in V
50
51 % Construct the propagator S :
52 SU = UUn’ ∗ X2U ∗ VUn ∗ inv (SigmaUn) ;
53 SV = UVn’ ∗ X2V ∗ VVn ∗ inv (SigmaVn) ;
61
54
55 % Compute eigenvalues and eigenvectors of the matrix S :
56 [PHIU, MUU] = eig (SU) ; % eigenvalue decomposition of matrix S
57 [PHIV, MUV] = eig (SV) ; % eigenvalue decomposition of matrix S
58
59 % Extract f r e q u e n c i e s :
60 muU = diag (MUU) ;
61 muV = diag (MUV) ;
62
63 % Extract r e a l and imaginary parts of f r e q u e n c i e s :
64 lambdaU_r = r e a l (muU) ; % r e a l
65 lambdaU_i = imag (muU) ; % imaginary
66 lambdaV_r = r e a l (muV) ; % r e a l
67 lambdaV_i = imag (muV) ; % imaginary
68
69 % Frequency in terms of pulsation :
70 omegaU = log (muU) /dt ; % computing natural log
71 omegaV = log (muV) /dt ; % computing natural log
72
73 % Compute DMD s p a t i a l modes :
74 Phi_U = UUn ∗ PHIU;
75 Phi_V = UVn ∗ PHIV;
76
77 % Compute DMD amplitudes with the least −squares method :
78 bU = inv (Phi_U’ ∗ Phi_U) ∗ Phi_U’ ∗ X1U( : , 1 ) ;
79 bV = inv (Phi_V’ ∗ Phi_V) ∗ Phi_V’ ∗ X1V( : , 1 ) ;
80
81 % Define temporal behaviour :
82 TIME = [ 0 : 1 : n_t − 1] ∗ dt ;
83
84 % I n i t i a l i z e temporal modes matrix :
85 T_modesU = zeros ( r , n_t) ;
86 T_modesV = zeros ( r , n_t) ;
87
88 % Compute DMD temporal modes :
89 f or i = 1: length (TIME)
90 T_modesU ( : , i ) = bU .∗ exp (omegaU ∗ TIME( i ) ) ;
91 T_modesV ( : , i ) = bV .∗ exp (omegaV ∗ TIME( i ) ) ;
92 end
93
94 % Get the r e a l part of the DMD reconstruction :
95 U_DMD = r e a l (Phi_U ∗ T_modesU) ; % r e a l part taken
96 V_DMD = r e a l (Phi_V ∗ T_modesV) ; % r e a l part taken
97
98 % Change zeros to NaN to re−create s o l i d parts :
99 U_DMD(U_DMD == 0) = NaN;
100 V_DMD(V_DMD == 0) = NaN;
101
102 e l s e
103 disp ( [ ’Wrong matrix dimensions . ’ ] )
104
105 lambdaU_r = NaN; lambdaU_i = NaN; lambdaV_r = NaN; . . .
106 lambdaV_i = NaN; U_DMD = NaN; V_DMD = NaN; bU = NaN; . . .
107 bV = NaN; Phi_U = NaN; Phi_V = NaN; T_modesU = NaN; . . .
62
108 T_modesV = NaN;
109 end
Appendix D
1D and 2D POD Results Plotting
Listing D.1: Matlab function for plotting the .gif file of the 1D POD approximation.
POD_1D_PLOT_GIF.m
1 %% POD_1D_PLOT_GIF function =========================================
2 % This function plots the . g i f f i l e of the approximation
3 % ===================================================================
4
5 function POD_1D_PLOT_GIF(D, rank_POD, U_POD, S_POD, V_POD, . . .
6 y , dt , Curr_Dir )
7 hfig1 = f i g u r e (1) ;
8 MARKER = { ’ : ’ , ’−−’ , ’ −. ’ , ’−−’ , ’−−’ };
9 LABEL = { ’ Original . . ’ , ’U_1 ’ , ’U_2 ’ , ’U_3 ’ , ’U_4 ’ , ’U_5 ’ };
10
11 % D efin ition of time span :
12 n_t = s i z e (D, 2 ) ;
13 t = [ 0 : 1 : n_t−1]∗ dt ;
14
15 filename = ( [ ’GIF_1D_POD_r ’ , num2str (rank_POD) , ’ . g i f ’ ] ) ;
16
17 %% . g i f of a pulsating v e l o c i t y p r o f i l e with approximations : ========
18 f or i = 1 : 1 : length ( t )
19 hold o f f
20
21 % Plot of the o r i g i n a l matrix D:
22 plot (y , D( : , i ) , ’k−’ ) ;
23 [M] = AXIS(12) ;
24 set ( gcf , ’ color ’ , ’w ’ ) ;
25
26 % Plot of the POD approximation :
27 hold on
28 f or j = 1 : 1 :rank_POD
29 POD1D_approx = U_POD( : , 1 : 1 : j ) ∗ . . .
30 S_POD( 1 : 1 : j , 1 : 1 : j ) ∗ V_POD( : , 1 : 1 : j ) ’ ;
31 plot (y , POD1D_approx ( : , i ) , MARKER{ j } , ’ color ’ , ’b ’ ) ;
32 [M] = AXIS(12) ;
33 set ( gcf , ’ color ’ , ’w ’ ) ;
34 end
35 t i t l e ( [ ’POD approximation ’ ] ) ;
36 drawnow
37 xlim ( [ min(y) ∗1.1 max(y) ∗ 1 . 1 ] ) ;
64
38 ylim ( [ min(min(D) ) ∗1.1 max(max(D) ) ∗ 1 . 1 ] ) ;
39
40 % Save the g i f :
41 cd ( Curr_Dir ) ;
42 frame = getframe (1) ;
43 im = frame2im ( frame ) ;
44 [ imind , cm] = rgb2ind (im , 256) ;
45
46 i f i == 1;
47 imwrite ( imind , cm, filename , ’ g i f ’ , ’ Loopcount ’ , i n f ) ;
48 e l s e
49 imwrite ( imind , cm, filename , ’ g i f ’ , ’ WriteMode ’ , . . .
50 ’ append ’ , ’ DelayTime ’ , 0.1) ;
51 end
52 cd . .
53 end
54
55 c l o s e ( hfig1 )
Listing D.2: Matlab function for plotting the .gif file of the 2D POD approximation.
POD_2D_PLOT_GIF.m
1 %% POD_2D_PLOT_GIF function =========================================
2 % This function plots the . g i f f i l e of the approximation
3 % ===================================================================
4
5 function POD_2D_PLOT_GIF(U_POD, V_POD, rank_POD, X, Y, dt , Curr_Dir )
6
7 % D efin ition of time span :
8 n_t = s i z e (U_POD, 2 ) ;
9 t = [ 0 : 1 : n_t−1]∗ dt ;
10
11 SPACE_X = X;
12 SPACE_Y = Y;
13 n_X = length (SPACE_X) ;
14 n_Y = length (SPACE_Y) ;
15
16 filename = ( [ ’GIF_2D_POD_r ’ , num2str (rank_POD) ’ . g i f ’ ] ) ;
17
18 % Limit on the time of simulation :
19 i f length ( t ) <=30
20 sim_t = length ( t ) ;
21 e l s e
22 sim_t = 30;
23 end
24
25 % . g i f of a pulsating v e l o c i t y p r o f i l e with approximations : =========
26 f or i = 1 : 1 : sim_t
27
28 % Extract i−th column from U_DMD and V_DMD:
29 U_extr = U_POD( : , i ) ;
30 V_extr = V_POD( : , i ) ;
31
32 % Prepare the v e l o c i t y components fo r p lo tt in g :
33 U_vel = zeros (n_Y, n_X) ;
34 V_vel = zeros (n_Y, n_X) ;
35
36 % Re−structuring the matrix U_vel fo r the purpose of pl o tt in g :
37 f or j = 1 : 1 :n_X
38 U_vel ( : , j ) = U_extr ( ( ( j −1)∗n_Y+1) : 1 : ( ( j ) ∗n_Y) , : ) ;
39 V_vel ( : , j ) = V_extr ( ( ( j −1)∗n_Y+1) : 1 : ( ( j ) ∗n_Y) , : ) ;
40 end
41
42 VELOCITY = sqrt (U_vel.^2 + V_vel .^2) ;
43 VELOCITY(VELOCITY == 0) = NaN;
44
45 % Plotting the colour plot :
46 hfig1 = f i g u r e (1) ;
47 pcolor (SPACE_X, SPACE_Y, VELOCITY) ;
48 [M] = AXIS(12) ;
49 set ( gcf , ’ color ’ , ’w ’ ) ;
50 shading interp
51 colorbar
52 axis equal
66
53 grid o f f
54 daspect ( [ 1 1 1 ] )
55 t i t l e ( [ ’POD approximation with r = ’ , num2str (rank_POD) ] ) ;
56 ylim ( [ min(SPACE_Y) max(SPACE_Y) ] ) ;
57 xlim ( [ min(SPACE_X) max(SPACE_X) ] ) ;
58 [M] = AXIS(12) ;
59 set ( gcf , ’ color ’ , ’w ’ ) ;
60 drawnow
61 hold o f f
62
63 % Save the g i f :
64 cd ( Curr_Dir ) ;
65 frame = getframe (1) ;
66 im = frame2im ( frame ) ;
67 [ imind , cm] = rgb2ind (im , 256) ;
68
69 i f i == 1;
70 imwrite ( imind , cm, filename , ’ g i f ’ , ’ Loopcount ’ , i n f ) ;
71 e l s e
72 imwrite ( imind , cm, filename , ’ g i f ’ , ’ WriteMode ’ , . . .
73 ’ append ’ , ’ DelayTime ’ , 0.1) ;
74 end
75 cd . .
76 end
77
78 c l o s e ( hfig1 )
Listing D.3: Matlab function for plotting the .png file of the modes of the 1D POD
approximation. POD_1D_PLOT_MODES.m
1 %% POD_1D_PLOT_MODES function =======================================
2 % This function plots the . png f i l e of the modes of the approximation
3 % ===================================================================
4
5 function POD_1D_PLOT_MODES(D, rank_POD, U_POD, S_POD, V_POD, . . .
6 y , dt , Curr_Dir )
7
8 % D efin ition of time span :
9 n_t = s i z e (D, 2 ) ;
10 t = [ 0 : 1 : n_t−1]∗ dt ;
11
12 %% Modes of the approximation : =====================================
13 hfig2 = f i g u r e (2) ;
14
15 f or j = 1 : 1 :rank_POD
16
17 subplot (2 , rank_POD, j )
18 plot (y , U_POD( : , j ) /norm(U_POD) , ’b−’ )
19 [M] = AXIS(12) ;
20 set ( gcf , ’ color ’ , ’w ’ ) ;
21 t i t l e ( [ ’ Phi_{ ’ , num2str ( j ) , ’ } ’ ] ) ;
22
23 subplot (2 , rank_POD, j+rank_POD)
24 plot ( t , V_POD( : , j ) /norm(V_POD) , ’b−’ )
25 [M] = AXIS(12) ;
26 set ( gcf , ’ color ’ , ’w ’ ) ;
27 t i t l e ( [ ’ psi_{ ’ , num2str ( j ) , ’ } ’ ] ) ;
28
29 drawnow
30
31 end
32
33 cd ( Curr_Dir ) ;
34 print ( ’−dpng ’ , ’−r500 ’ , [ ’MODES_1D_POD_r’ , . . .
35 num2str (rank_POD) , ’ . png ’ ] )
36 cd . .
37
38 c l o s e ( hfig2 )
Listing D.4: Matlab function for plotting the .png file of the modes of the 2D POD
approximation. POD_2D_PLOT_MODES.m
%% POD_2D_PLOT_MODES function =======================================
% This function plots the .png file of the modes of the approximation
% ===================================================================

function POD_2D_PLOT_MODES(r, UU_POD, VU_POD, UV_POD, ...
                           VV_POD, V, X, Y, dt, Curr_Dir)

% Definition of time span:
n_t = size(V, 2);
t = [0:1:n_t-1]*dt;

% Definition of space:
SPACE_X = X;
SPACE_Y = Y;
n_Y = length(SPACE_Y);
n_X = length(SPACE_X);

%% Plotting spatial structures:
hfig1 = figure(1);
% Extract single columns representing a single spatial structure:
U_extr = UU_POD(:, r);
V_extr = UV_POD(:, r);

% Prepare the velocity components for plotting:
U_vel = zeros(n_Y, n_X);
V_vel = zeros(n_Y, n_X);

% Re-structuring the matrix U_vel for the purpose of plotting:
for j = 1:1:n_X
    U_vel(:, j) = U_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
    V_vel(:, j) = V_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
end

MODE = sqrt(U_vel.^2 + V_vel.^2);
MODE = MODE/(max(max(abs(MODE))));
MODE(MODE == 0) = NaN;

% Spatial structures:
pcolor(SPACE_X, SPACE_Y, MODE);
[M] = AXIS(12);
set(gcf, 'color', 'w');
shading interp
colorbar
axis equal
grid off
daspect([1 1 1])
title(['POD spatial mode for r = ', num2str(r)]);
ylim([min(SPACE_Y) max(SPACE_Y)]);
xlim([min(SPACE_X) max(SPACE_X)]);
[M] = AXIS(12);
set(gcf, 'color', 'w');

cd(Curr_Dir);
print('-dpng', '-r500', ['Spatial_Modes_2D_POD_r', ...
    num2str(r), '.png'])
cd ..

%% Plotting temporal structures:
hfig2 = figure(2);
temp_mode = sqrt((VU_POD(:, r)).^2 + (VV_POD(:, r)).^2);
temp_mode = temp_mode/(max(max(abs(temp_mode))));

plot(t, temp_mode, 'k-')
[M] = AXIS(12);
set(gcf, 'color', 'w');
title(['POD temporal mode for r = ', num2str(r)]);
ylim([min(temp_mode) max(temp_mode)]);
xlim([min(t) max(t)]);

cd(Curr_Dir);
print('-dpng', '-r500', ['Temp_Modes_2D_POD_r', ...
    num2str(r), '.png'])
cd ..

close(hfig1)
close(hfig2)
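A remark on the re-structuring loop used above: the rows of the 2D data matrices stack the n_Y points of every x-station one after another, which matches Matlab's column-major storage order. The loop could therefore presumably be replaced by a single reshape call; this shortcut is a suggestion and not part of the original code:

U_vel = reshape(U_extr, n_Y, n_X);   % equivalent to the column-by-column loop
V_vel = reshape(V_extr, n_Y, n_X);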
Appendix E
1D and 2D DMD Results Plotting
Listing E.1: Matlab function for plotting the .gif file of the 1D DMD approximation.
DMD_1D_PLOT_GIF.m
%% DMD_1D_PLOT_GIF function =========================================
% This function plots the .gif file of the approximation
% ===================================================================

function DMD_1D_PLOT_GIF(D, rank_DMD, D_DMD_extend, y, dt, Curr_Dir)
hfig1 = figure(1);
MARKER = {':', '--', '-.', '--', '--'};
LABEL = {'Original', 'U_1', 'U_2', 'U_3', 'U_4', 'U_5'};

% Definition of time span:
n_t = size(D, 2);
t = [0:1:n_t-1]*dt;

filename = (['GIF_1D_DMD_r', num2str(rank_DMD), '.gif']);

%% .gif of a pulsating velocity profile with approximations: ========
for i = 1:1:length(t)
    hold off

    % Plot of the original matrix D:
    plot(y, D(:, i), 'k-');
    [M] = AXIS(12);
    set(gcf, 'color', 'w');

    % Plot of the DMD approximation:
    hold on
    for j = 1:1:rank_DMD
        DMD_APPROX = D_DMD_extend(:, (n_t*(j-1)+i));
        plot(y, DMD_APPROX, MARKER{j}, 'color', 'r');
        [M] = AXIS(12);
        set(gcf, 'color', 'w');
    end
    title(['DMD approximation']);
    drawnow
    xlim([min(y)*1.1 max(y)*1.1]);
    ylim([min(min(D))*1.1 max(max(D))*1.1]);

    % Save the gif:
    cd(Curr_Dir);
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);

    if i == 1
        imwrite(imind, cm, filename, 'gif', 'Loopcount', inf);
    else
        imwrite(imind, cm, filename, 'gif', 'WriteMode', ...
            'append', 'DelayTime', 0.1);
    end
    cd ..
end

close(hfig1)
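A minimal (hypothetical) call of this routine is sketched below. The extended approximation matrix D_DMD_extend, which holds one block of n_t columns per retained mode (as indexed inside the function), is assumed to have been produced beforehand by DMD_1D.m (Appendix C).

% Hypothetical call (assumed variables, for illustration only):
rank_DMD = 3;           % number of DMD modes kept
Curr_Dir = 'RESULTS';   % existing output sub-folder for the .gif file
DMD_1D_PLOT_GIF(D, rank_DMD, D_DMD_extend, y, dt, Curr_Dir);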
Listing E.2: Matlab function for plotting the .gif file of the 2D DMD approximation.
DMD_2D_PLOT_GIF.m
%% DMD_2D_PLOT_GIF function =========================================
% This function plots the .gif file of the approximation
% ===================================================================

function DMD_2D_PLOT_GIF(U_DMD, V_DMD, rank_DMD, X, Y, dt, Curr_Dir)

% Definition of time span:
n_t = size(U_DMD, 2);
t = [0:1:n_t-1]*dt;

SPACE_X = X;
SPACE_Y = Y;
n_X = length(SPACE_X);
n_Y = length(SPACE_Y);

filename = (['GIF_2D_DMD_r', num2str(rank_DMD), '.gif']);

% Limit on the time of simulation:
if length(t) <= 30
    sim_t = length(t);
else
    sim_t = 30;
end

% .gif of the approximated velocity field: ==========================
for i = 1:1:sim_t

    % Extract i-th column from U_DMD and V_DMD:
    U_extr = U_DMD(:, i);
    V_extr = V_DMD(:, i);

    % Prepare the velocity components for plotting:
    U_vel = zeros(n_Y, n_X);
    V_vel = zeros(n_Y, n_X);

    % Re-structuring the matrix U_vel for the purpose of plotting:
    for j = 1:1:n_X
        U_vel(:, j) = U_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
        V_vel(:, j) = V_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
    end

    VELOCITY = sqrt(U_vel.^2 + V_vel.^2);
    VELOCITY(VELOCITY == 0) = NaN;

    % Plotting the colour plot:
    hfig1 = figure(1);
    pcolor(SPACE_X, SPACE_Y, VELOCITY);
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    shading interp
    colorbar
    axis equal
    grid off
    daspect([1 1 1])
    title(['DMD approximation with r = ', num2str(rank_DMD)]);
    ylim([min(SPACE_Y) max(SPACE_Y)]);
    xlim([min(SPACE_X) max(SPACE_X)]);
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    drawnow
    hold off

    % Save the gif:
    cd(Curr_Dir);
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);

    if i == 1
        imwrite(imind, cm, filename, 'gif', 'Loopcount', inf);
    else
        imwrite(imind, cm, filename, 'gif', 'WriteMode', ...
            'append', 'DelayTime', 0.1);
    end
    cd ..
end

close(hfig1)
Listing E.3: Matlab function for plotting the .png file of the modes of the 1D DMD approximation. DMD_1D_PLOT_MODES.m
%% DMD_1D_PLOT_MODES function =======================================
% This function plots the .png file of the modes of the approximation
% ===================================================================

function DMD_1D_PLOT_MODES(D, rank_DMD, Phi_extend, ...
                           T_modes_extend, y, dt, Curr_Dir)

% Definition of time span:
n_t = size(D, 2);
t = [0:1:n_t-1]*dt;

%% Modes of the approximation: ======================================
hfig2 = figure(2);
start = 1;

for j = 1:1:rank_DMD

    start = start + (j-1);

    subplot(2, rank_DMD, j)
    plot(y, Phi_extend(:, start), 'r-');
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    title(['Phi_{', num2str(j), '}']);

    subplot(2, rank_DMD, j+rank_DMD)
    plot(t, T_modes_extend(start, :), 'r-');
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    title(['psi_{', num2str(j), '}']);

    drawnow

end

cd(Curr_Dir);
print('-dpng', '-r500', ['MODES_1D_DMD_r', ...
    num2str(rank_DMD), '.png'])
cd ..

close(hfig2)
Listing E.4: Matlab function for plotting the .png file of the modes of the 2D DMD approximation. DMD_2D_PLOT_MODES.m
%% DMD_2D_PLOT_MODES function =======================================
% This function plots the .png file of the modes of the approximation
% ===================================================================

function DMD_2D_PLOT_MODES(r, Phi_U, Phi_V, T_modesU, ...
                           T_modesV, V, X, Y, dt, Curr_Dir)

% Definition of time span:
n_t = size(V, 2);
t = [0:1:n_t-1]*dt;

% Definition of space:
SPACE_X = X;
SPACE_Y = Y;
n_Y = length(SPACE_Y);
n_X = length(SPACE_X);

% Taking real values of the modes:
Phi_U = real(Phi_U);
Phi_V = real(Phi_V);
T_modesU = real(T_modesU);
T_modesV = real(T_modesV);

%% Plotting spatial structures:
hfig1 = figure(1);
% Extract single columns representing a single spatial structure:
U_extr = Phi_U(:, r);
V_extr = Phi_V(:, r);

% Prepare the velocity components for plotting:
U_vel = zeros(n_Y, n_X);
V_vel = zeros(n_Y, n_X);

% Re-structuring the matrix U_vel for the purpose of plotting:
for j = 1:1:n_X
    U_vel(:, j) = U_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
    V_vel(:, j) = V_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
end

MODE = sqrt(U_vel.^2 + V_vel.^2);
MODE = MODE/(max(max(abs(MODE))));
MODE(MODE == 0) = NaN;

% Spatial structures:
pcolor(SPACE_X, SPACE_Y, MODE);
[M] = AXIS(12);
set(gcf, 'color', 'w');
shading interp
colorbar
axis equal
grid off
daspect([1 1 1])
title(['DMD spatial mode for r = ', num2str(r)]);
ylim([min(SPACE_Y) max(SPACE_Y)]);
xlim([min(SPACE_X) max(SPACE_X)]);
[M] = AXIS(12);
set(gcf, 'color', 'w');

cd(Curr_Dir);
print('-dpng', '-r500', ['Spatial_Modes_2D_DMD_r', ...
    num2str(r), '.png'])
cd ..

%% Plotting temporal structures:
hfig2 = figure(2);
temp_mode = sqrt((T_modesU(r, :)).^2 + (T_modesV(r, :)).^2);
temp_mode = temp_mode/(max(max(abs(temp_mode))));

plot(t, temp_mode, 'k-')
[M] = AXIS(12);
set(gcf, 'color', 'w');
title(['DMD temporal mode for r = ', num2str(r)]);
ylim([min(temp_mode) max(temp_mode)]);
xlim([min(t) max(t)]);

cd(Curr_Dir);
print('-dpng', '-r500', ['Temp_Modes_2D_DMD_r', ...
    num2str(r), '.png'])
cd ..

close(hfig1)
close(hfig2)
Appendix F
List of Useful Matlab Commands
For any matrix A:

norm(A) computes a norm of the matrix A
[U, S, V] = svd(A) performs a singular value decomposition of the matrix A
[U, S, V] = svd(A, 'econ') performs a singular value decomposition of the matrix A and computes economy-sized matrices
repmat(A, m, n) creates an m × n tiling of copies of the matrix A
reshape(A, [a, b]) reshapes the elements of the matrix A into a matrix of size a × b; the number of elements in A must be equal to a · b
diag(A) extracts the elements from the diagonal of the matrix A
size(A) computes the size of the matrix A
A' computes the (conjugate) transpose of the matrix A

For a square matrix A:

[U, S] = eig(A) computes the eigenvectors U and the eigenvalues S of the matrix A
inv(A) computes the inverse of the matrix A

For vectors x and y:

norm(x) computes a norm of the vector x
dot(x, y) computes the inner (dot) product of the vectors x and y
trapz(x, y) computes an approximation of the integral of y over the interval x
length(x) computes the length of the vector x

References to matrix elements:

A(i, j) extracts the single element in the ith row and jth column
A(n:m, i:j) extracts rows n to m of columns i to j
A(:, i) extracts the full ith column
A(:, n:end) extracts the full columns from the nth to the last one
A(:, n:m) extracts the full columns from the nth to the mth
A(i, :) extracts the full ith row
A(n:end, :) extracts the full rows from the nth to the last one
A(n:m, :) extracts the full rows from the nth to the mth
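To illustrate how several of these commands combine in the context of this report, the short sketch below (not part of the original appendix; the data matrix is a random placeholder) performs an economy-size SVD, rebuilds a rank-r approximation and checks the truncation error against the first neglected singular value:

% Hedged example combining svd, diag, norm and matrix indexing:
D = rand(200, 50);                        % placeholder data matrix, np x nt
[U, S, V] = svd(D, 'econ');               % economy-sized SVD
r = 10;                                   % truncation rank
D_r = U(:, 1:r)*S(1:r, 1:r)*V(:, 1:r)';   % rank-r POD approximation
sigma = diag(S);                          % vector of singular values
err = norm(D - D_r);                      % L2 error, equal to sigma(r+1)
fprintf('Error: %.4e (sigma_{r+1} = %.4e)\n', err, sigma(r+1))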
Appendix G
Complete List of Codes Produced
pulsating_poiseuille_approximations.m
POD_1D.m
POD_2D.m
DMD_1D.m
DMD_2D.m
POD_1D_PLOT_GIF.m
POD_2D_PLOT_GIF.m
POD_1D_PLOT_MODES.m
POD_2D_PLOT_MODES.m
DMD_1D_PLOT_GIF.m
DMD_2D_PLOT_GIF.m
DMD_1D_PLOT_MODES.m
DMD_2D_PLOT_MODES.m
disc_cont_exercise_1.m
phase_shift_of_two_sines.m
fitting_linear_systems.m
similar_matrices.m
GUI Components:
DMD_CRITERIA.m
EXPORT_DATA.m
IMPORT.m
Main_MENU.m
POD_CRITERIA.m
POD_DMD_beta_1.m
POD_OR_DMD.m
Bibliography
[1] M.A. Mendez, J.-M. Buchlin, "Notes on 2D Pulsatile Poiseuille Flows: An Introduction to
Eigenfunction Expansion and Complex Variables using Matlab," VKI Technical Notes: TN
215, February 2016
[2] M.A. Mendez, M. Raiola, A. Masullo, S. Discetti, A. Ianiro, R. Theunissen, J.-M. Buchlin,
"POD-based background removal for particle image velocimetry," Experimental Thermal
and Fluid Science, January 2017
[3] P.J. Schmid, "Advanced Post-Processing of Experimental and Numerical Data," VKI Lec-
ture Series 2014-01, November 2013
[4] P.J. Schmid, "Dynamic mode decomposition of numerical and experimental data," Journal
of Fluid Mechanics, July 2010
[5] B.O. Koopman, "Hamiltonian Systems and Transformation in Hilbert Space," Proceedings of the National Academy of Sciences, Vol. 17, 1931
[6] C.W. Rowley, I. Mezic, S. Bagheri, P. Schlatter, D.S. Henningson, "Spectral analysis of
nonlinear flows," Journal of Fluid Mechanics, September 2009
[7] I. Mezic, "Spectral properties of dynamical systems, model reduction and decompositions,"
Nonlinear Dynamics, June 2004
[8] M.R. Jovanovic, P.J. Schmid, J.W. Nichols, "Sparsity-promoting dynamic mode decompo-
sition," Physics of Fluids, February 2014
[9] E.R. Scheinerman, Invitation to Dynamical Systems, 1996
[10] D. Dumoulin, "Numerical Characterization of an Impinging Jet Flow," VKI Stagiaire Re-
port, September 2016
[11] http://www.mathworks.com/moler/eigs.pdf
Final Report

  • 1. von Karman Institute for Fluid Dynamics Chaussée de Waterloo, 72 B - 1640 Rhode Saint Genèse - Belgium Stagiaire Report POD AND DMD DECOMPOSITION OF NUMERICAL AND EXPERIMENTAL DATA Author: K. Zdybal Supervisor: M. A. Mendez September 2016
  • 2. Acknowledgements I am afraid I will never be able to thank enough for these wonderful two months of my life. But I at least want to give it a try. I would like to thank my supervisor Miguel for giving me a chance to work on this amazing topic and letting me discover the power of linear algebra. I would like to thank Claire for her kindness and for all the chances to work in the garden, both of which have made my stay in Saint-Genèse so pleasant.
  • 3. Abstract Matrix decomposition is a useful tool for approximating large data matrices which might be obtained from experimental or numerical results. The decomposition methods give a powerful insight into the underlying physical phenomena hid- den in the collected data. The present work discusses two decomposition meth- ods: Proper Orthogonal Decomposition (POD) and Dynamic Mode Decomposition (DMD) (Chapter 1). Two examples of data decomposition applications are presented in this work: approximating the pulsating velocity profile of the Poiseuille flow (Chapter 2) and approximating the flow behind a cylinder (Chapter 3). The results are obtained using Matlab software and various codes for performing POD and DMD and for post-processing are shown in the appendices. The development of the Graphical User Interface (GUI) program in Matlab for decomposing data with POD and DMD is presented in Chapter 4. The concepts of data decomposition lie in linear algebra and linear dynamical sys- tems. Additional exercises in the form of ideas and problem-solving are presented in Chapter 5. Keywords: Linear Algebra, Linear Dynamical Systems, Proper Orthogonal De- composition, Dynamic Mode Decomposition, Matlab
  • 4. Contents 1 Introduction to Data Decomposition 8 1.1 Setting the Stage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8 1.2 Data Matrix Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9 1.3 Proper Orthogonal Decomposition (POD) . . . . . . . . . . . . . . . . . . . . . . 9 1.4 Dynamic Mode Decomposition (DMD) . . . . . . . . . . . . . . . . . . . . . . . . 10 1.5 Criteria for the Choice of the Approximation Rank . . . . . . . . . . . . . . . . . 12 1.5.1 Rank for POD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12 1.5.2 Rank for DMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13 1.6 Comparison of POD and DMD . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14 2 Pulsating Poiseuille Flow 15 2.1 Test Case . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15 2.2 Asymptotic Complex Solution . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.3 Eigenfunction Expansion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16 2.4 Discrete Proper Orthogonal Decomposition (POD) . . . . . . . . . . . . . . . . . 17 2.5 Discrete Dynamic Mode Decomposition (DMD) . . . . . . . . . . . . . . . . . . . 17 2.6 Comparison of the Three Approximations . . . . . . . . . . . . . . . . . . . . . . 18 2.6.1 Initial Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.6.2 Comparison of the Modes . . . . . . . . . . . . . . . . . . . . . . . . . . . 18 2.6.3 Amplitude Decay Rate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19 2.6.4 Eigenvalues Circle . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20 2.6.5 The First Three Modes of the POD and DMD Approximations . . . . . . 20 2.7 Conclusions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22 3 Dynamic Mode Decomposition of 2D Data 23 3.1 DMD and POD Approximation to the Flow Behind a Cylinder . . . . . . . . . . 23 3.1.1 Initial Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23 3.1.2 Dynamic Mode Decomposition . . . . . . . . . . . . . . . . . . . . . . . . 25 3.1.3 Proper Orthogonal Decomposition . . . . . . . . . . . . . . . . . . . . . . 26 3.1.4 Conclusions and Comparison of the Two Decomposition Methods . . . . . 28 4 GUI Beta Version 29 4.1 Scheme of the Program . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29 4.2 Function Executions Inside the Program . . . . . . . . . . . . . . . . . . . . . . . 33 4.3 Tutorial on Using the GUI . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 4.3.1 Tutorial Folder . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34 4.3.2 1D Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35 4.3.3 2D Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36 3
  • 5. 5 Additional Exercises 38 5.1 Discrete and Continuous Norms . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38 5.2 The Phase Shift Ψ Between Two Sine Functions . . . . . . . . . . . . . . . . . . . 41 5.3 A Note on the Sizes of Component Matrices in the SVD . . . . . . . . . . . . . . 43 5.3.1 SVD on a General Matrix D . . . . . . . . . . . . . . . . . . . . . . . . . 43 5.3.2 SVD on the Matrix X1 from DMD . . . . . . . . . . . . . . . . . . . . . . 43 5.3.3 SVD with POD Approximation on Matrices D and X1 . . . . . . . . . . . 43 5.4 A Note on the Linear Propagator Matrix . . . . . . . . . . . . . . . . . . . . . . 44 5.5 A Note on the Similarity of Matrices . . . . . . . . . . . . . . . . . . . . . . . . . 46 5.6 A Note on the Linear Dynamical Systems . . . . . . . . . . . . . . . . . . . . . . 50 A Pulsating Poiseuille Flow 51 B 1D and 2D POD Functions 60 C 1D and 2D DMD Functions 62 D 1D and 2D POD Results Plotting 67 E 1D and 2D DMD Results Plotting 74 F List of Useful Matlab Commands 81 G Complete List of Codes Produced 82
  • 6. Chapter 1 Introduction to Data Decomposition 1.1 Setting the Stage The present work revolves around large data matrices and the ways to deal with them. They can come from experimental results, such as PIV1 or wind tunnel tests, or from numerical results such as CFD2 , and hence they are matrices that contain information about the evolution of a certain physical quantity in space and time. Their structure is therefore the following: their rows are linked to a particular coordinate in space and their columns to a particular moment in time. The space coordinate can represent any dimension but typically it will be 1D, 2D or 3D space coordinate. Each entry inside a data matrix D, say Dij, gives a numerical value to a certain quantity at position pi and at time tj. This value could for instance be velocity, pressure or any other physical quantity of choice. A single, full column extracted from a data matrix is called a snapshot. It represents the full field of a physical quantity at a single instance of time - it is indeed like a picture. Using Matlab notation we can refer to it as D(:, m) for any time tm. The graphical representation of a general data matrix is captured in the Figure 1.1. Figure 1.1: Structure of a data matrix D. An important property that describes a data matrix is its rank. The rank of a matrix specifies how many linearly indepen- dent columns or rows a matrix has. A matrix of size np × nt is of full-rank when its rank is defined as min (np, nt). When a matrix has a smaller rank than min (np, nt) some of its rows (or columns) can be expressed in terms of a linear combination of its other rows (or columns). Usually, and for the data matrices shown later in this work, we assume that the number of spatial coordinates is much larger than the number of moments in time: np > nt, hence if a matrix is of full-rank it will have the rank equal to nt. An important concept now emerges: imagine we could ap- proximate a data matrix of full-rank well enough with another matrix of a lower rank. Suppose we wanted to share that data matrix with someone else, and instead of sending a matrix of full-rank with all its entries inside, we could send only the lin- early independent rows or columns and a rule to figure out the remaining, linearly dependent ones. This decreases the amount of data that we have to store or share, but of course comes at a certain error of approximation. The goal then is to minimize the error while minimizing the rank of a data matrix at the same time. 1Particle Image Velocimetry 2Computational Fluid Dynamics 5
  • 7. 1.2 Data Matrix Decomposition Two methods of a data matrix decomposition and approximation are discussed in this work. By matrix decomposition we mean writing the original matrix D as a product of three matrices: D = ABC (1.1) Such product is not unique, meaning that we can find several sets of matrices A, B and C that will satisfy the equation (1.1). In order to create a unique decomposition, we have to impose some additional constraints on component matrices A, B and C. The nature of these constraints is what creates a new matrix decomposition method. Two such methods, described below, are the main subject of this work. In the methods to follow, matrices A, B and C have a special physical meaning. A product of corresponding columns of the three component matrices is called a mode. Matrix A is a matrix of spatial structures, which gives a "shape" to the mode. Matrix B is a matrix of amplitudes, which specifies the importance of every mode in the solution. Matrix C is a matrix of temporal structures and gives dynamics to the mode - it specifies how fast spatial structures evolve from one to another. It should therefore be noticed, that in addition to reducing the rank of a data matrix and approximating it, decomposition gives a better insight into the underlying physical phenomena which are hidden in the data obtained. 1.3 Proper Orthogonal Decomposition (POD) The concept of the Proper Orthogonal Decomposition (POD) on a discrete data set (therefore a matrix) coincides with the Singular Value Decomposition (SVD)3 . More information is given in [2] and [3]. We write therefore: D = UΣVT (1.2) The constraints of SVD are, that the matrices U and V are both orthogonal and orthonormal. Figure 1.2: Computation of matrix Ai. The orthogonality of a matrix means that the inner product4 of any two different columns of this matrix is zero and the orthonor- mality means that the inner product of any of the column with itself is 1. Putting it in other words, the products VVT and UUT both give an identity matrix. The orthogonality condition also means that the matrices U and V are of full-rank and none of its columns can be obtained by a linear combination of their other columns. The SVD also requires that the matrix Σ is diagonal with elements on the diagonal denoted as σi. Once the SVD decomposition is made, we approximate the orig- inal matrix of rank d by a matrix Dr of a lower rank r, which we write as a sum of "simple" matrices Ai multiplied by the corre- sponding amplitude σi, extracted from the diagonal of matrix Σ. D ≈ Dr = A1σ1 + A2σ2 + · · · + Arσr (1.3) Each matrix Ai has rank equal to 1 and the same size as the original matrix. In order to obtain these matrices we multiply the corresponding columns of U and V with an outer product. Using the Matlab notation, the computation of Ai is the following: Ai = U(:, i) · V(:, i)T . We are therefore guaranteed that the matrix Ai 3Check section 5.3 for additional information on SVD. 4Check section 5.1 for more information on the inner products. 6
  • 8. is of rank 1: each column of Ai is the ith column of U multiplied by a corresponding element inside the vector V(:, i)T (and so is just a linear combination of the ith column of U!). We are also guaranteed that any matrices Ai and Aj for i = j will be orthogonal to each other, since any two columns U(:, i) and U(:, j) are, from the definition of SVD, linearly independent. In the Matlab notation we write the equation 1.3 as Dr ≈ U(:, 1 : r)Σ(1 : r, 1 : r)V(:, 1 : r)T . Hence, by keeping r terms of the sum in (1.3), where r < d, we create a matrix of lower rank r that is an approximation to the original matrix D. The error we make by truncating the sum after r elements, computed as the L2 norm is: ||D − Dr||2 = σr+1 (1.4) with Dr = r i=1 Aiσi. This is known as the Eckart-Young theorem. The tilde denotes that it is the approximation to matrix D and the r in the subscript specifies the number of elements in the sum that we decide to keep; σr+1 is an amplitude of the first term left out. 1.4 Dynamic Mode Decomposition (DMD) The background of the Dynamic Mode Decomposition lies in the linear dynamical systems. On a practical level we might think of the DMD method as fitting a linear system into the data matrix. A more general framework, however, is given in [4], [5], [6] and [7]. This implies that any snapshot taken from the data matrix D at time i+1 is a linear combination of the previous snapshot at time i. In a vector form we write that: xi+1 = Axi (1.5) where xi+1 and xi are the (i + 1)th and the ith column from the matrix D respectively. The matrix A is a linear propagator matrix that tells us how the system will evolve from one snapshot to another, for all columns inside matrix D. Figure 1.3: Extracted data sets. In a general case, it is impossible to fit the linear system into the data matrix that we have, as this data might not be linear in the first place. We extract two data sets from the matrix D. The data set X1 is created by all the columns of D except the last one and the data set X2 is created by all the columns of D except the first one. In the Matlab notation we can write them as: X1 = D(:, 1 : end − 1), X2 = D(:, 2 : end). Therefore in general: X2 = AX1 (1.6) where every ith column of D is linked by the linear propagator matrix to every (i+1)th column of D. The equation (1.6) is equiva- lent to the equation (1.5), except it is written for all columns inside D.5 We perform the SVD and later the POD approximation of ma- trix X1: X1 = UΣVT X1 ≈ X1,r (1.7) Since finding the matrix A is in general impossible, it is enough to extract certain properties of that matrix, like its eigenvalues. We seek therefore a matrix S that is similar to matrix A. Similar ma- trices have the same eigenvalues and share a few other properties. We use the matrix similarity 5It is interesting to think when or whether the equation 1.6 could be solved for A explicitly. This issue is discussed in section 5.4. 7
  • 9. condition, which says that if matrices satisfy the equation: S = Q−1 AQ for any matrix Q that has an inverse, then S and A are similar6 . In order to find the matrix S, we perform algebraic operations on the equation (1.6), substi- tuting the POD approximation from the equation (1.7): X2 = AUΣVT UT × (1.8a) UT X2 = UT AUΣVT × V (1.8b) UT X2V = UT AUΣVT V × Σ−1 (1.8c) UT X2VΣ−1 = UT AU (1.8d) NOTE 1: in the equation (1.8c) the product VT V becomes a unity matrix, since V is an orthogonal and orthonormal matrix. NOTE 2: matrix Σ must be an invertible matrix, which also means that it has to be a square matrix (which is accomplished by performing the POD approximation of matrix X1). We now call the product UT X2VΣ−1 = S and observe, that in the equation (1.8d) we arrive at the condition of similarity of two matrices: S = UT AU (1.9) where the orthogonal matrix U satisfies the condition UT = U−1 and so plays a role of a matrix Q. However, it should be noted here that the matrix U might not be a square matrix. The matrix S is size r × r. We perform the eigenvalue decomposition of S: S = ΦMΦ−1 (1.10) where Φ is a matrix of eigenvectors of S and M is a matrix of eigenvalues of S. In general, Φ and M are complex matrices. Since S is the size r × r, and typically smaller size than the matrix A, we can only approximate r eigenvalues of A. Notice also, that similar matrices don’t share eigenvectors7 . We extract the elements from the diagonal of matrix M into a vector µ. Since these elements are in general complex, they can be written as µi = eωi , where ωi is a complex frequency in terms of pulsation. We find a vector of frequencies ω by computing the natural logarithm of µ: ω = ln (µ) (1.11) Next, we extract the DMD spatial modes: φ = UΦ (1.12) The above equation has a meaning of projecting vectors onto the basis made from orthogonal columns of U. The coefficients of the projection are extracted from the eigenvectors matrix Φ. We compute the initial DMD amplitudes b by solving the equation x1 = φb with the least- squares approach: b = (φT φ)−1 φT x1 (1.13) where x1 is the full first column of the matrix X1. We compute the DMD temporal modes in a form of a matrix Tmodes, which will have the size of r × nt. Each ith column of this matrix is computed using the following relation: ti = beωk = beωti/∆t (1.14) 6More on matrix similarity can be found in section 5.5. 7There exists, however, a relationship, see section 5.5 for more details. 8
  • 10. with the number k = ti ∆t , where ti is a particular moment in time and ∆t is a time step at which snapshots from the data matrix D have been taken. Notice that the DMD method hence requires that the ∆t is a constant sampling interval for the whole matrix D. If the matrix D represents experimental data, they must have been sampled at equal intervals of time. k is an integer and comes directly from the linear dynamical system character of the DMD method8 . Notice, that each jth row of matrix Tmodes corresponds to only one jth frequency and only one jth amplitude. Figure 1.4: Structure of the matrix Tmodes. The DMD approximation to the original data matrix D is finally computed as a product of spatial and temporal modes: Dr = φ · Tmodes (1.15) Notice that the resemblance with the decomposition equation (1.1) is not evident, since we arrived at the product of only two matrices, but the third matrix of amplitudes b is hidden in the matrix Tmodes. To write the equation (1.15) in the form (1.1), one should write Tmodes as a product of a diagonal matrix and a Vandermonde matrix [4]. 1.5 Criteria for the Choice of the Approximation Rank The rank r of the approximation in the decomposition methods is perhaps the most important and as well the most mysterious parameter that a person performing POD or DMD has to decide upon. 1.5.1 Rank for POD In the POD method the choice is rather simple. Since the amplitudes σi are ordered, we are sure that increasing r will result in a better approximation. We might then decide, that for some value ri we are satisfied with the approximation. Two situations are possible: that the amplitudes converge to zero and that the amplitudes converge to some constant value c. More information is given in [2]. We can plot the graph of σi(r), which is composed of discrete lines as the rank r is an integer. 8This is further discussed in section 5.6. 9
  • 11. Figure 1.5: Graph of σi(r). σmin method In the case when the amplitudes converge to zero, we can reduce the error of the approximation as much as we want. This error is equal to σi+1 and since the values σi converge to zero, the error converges to zero as well. In the σmin method we find such rank r that the error is not greater then a specified value σmin. This rank can be found by increasing r from 1 to ri + 1 and each time computing the error of the approximation. Once the error is smaller than the value specified: σi+1 ≤ σmin (1.16) we can stop the approximation. Slope method In the case when the amplitudes converge to a constant value c, it is better to look at the changing slopes of the graph σi(r). We can never reach the error smaller then c, we can only get as close to c as we like. The slope of the graph is strongly decreasing for the initial values of r and later on it is close to 0 and almost constant. Once the two consecutive slopes of the graph σi(r) don’t change more then some specified value, we can stop the approximation. 1.5.2 Rank for DMD The criteria for the choice of the rank in the DMD method is still an open question but more complete approach can be found in [8]. In the DMD method the choice of the rank r is made at the level of performing the POD on the extracted data set X1. One idea is to perform the POD of matrix X1 with a large number r, proceed with DMD and compute complex eigenvalues of the matrix S. Each eigenvalue is denoted by µi and is an element of a vector µ. Since in general they are complex numbers, we can extract their real λr and imaginary λi part and write each of them as a sum: µi = λr + λi (1.17) Each eigenvalue creates a point on a complex plane (λr, λi). If we plot these points with respect to a unit circle on a complex plane, we may apply the reasoning from linear dynamical systems. From it we know that the position of the eigenvalue with respect to the unit circle has a special meaning. Points laying exactly on the circle create a dynamically stable solution. Points laying inside the circle represent a system decaying in time and in the DMD approximation have a meaning of noise in the data. They are therefore less relevant to the approximation than points laying on the circle. The choice on the rank is sometimes not evident. Since the amplitudes bi are not ordered, we are unsure how many terms we should keep. Some eigenvalues laying inside the circle may have large amplitudes, which means that they eventually decay with time. 10
  • 12. Circle based method Increasing the rank number r we are getting more points on a complex plane. The points laying on the circle or close to the circle boundary are the most important. Usually, the approximation can be stopped once noise start to appear in the data. The criteria then might be, to create a smaller circle of radius R < 1 (this radius can be chosen by us). Once the eigenvalues start appearing inside that circle we stop the approximation and retrieve the largest rank r for which all the eigenvalues are outside the small circle. Figure 1.6: Circle based choice of the rank r. 1.6 Comparison of POD and DMD The POD method imposes the orthogonality and orthonormality restriction on the spatial and temporal structures. In the POD, several frequencies per mode might be present. The DMD method introduces a single frequency dynamics into the temporal modes. This method is hence especially informative if we know that a particular phenomenon is occuring at a certain frequency. The rank of the approximation for the purpose of performing the SVD, both in the POD and DMD methods, can be chosen manually at the start of computations and later adjusted (lowered or increased) once an implemented criteria decides, that the approximation has reached the satisfactory level. 11
  • 13. Chapter 2 Pulsating Poiseuille Flow The test case considered in this chapter is the Poiseuille flow where the pressure gradient be- tween two sections is changing in time. This definition of pressure results in a velocity profile which deforms in time and differs from the parabolic one obtained for a constant pressure gra- dient. This flow situation is analyzed in detail in [1], where the analytic solution to the velocity profile function is derived in terms of the Asymptotic Complex Solution method and the Eigen- function Expansion method. In the present chapter, three methods are adopted to approximate the analytical solution. The three methods are: the former Eigenfunction Expansion method, the Proper Orthogonal Decomposition (POD) and the Dynamic Mode Decomposition (DMD). The scope of this chapter is to compare the three methods and analyze the behaviour of their solutions. 2.1 Test Case The test case for the purpose of this assignemnt is a flow between two parallel plates known as the Poiseuille flow. We analyze the situation with the pulsating pressure gradient between two sections. The variation of pressure in time is thus described by: p(t) = pM + pA cos(ωt) (2.1) Figure 2.1: Pulsating Poiseuille flow. Figure taken from [1]. The momentum balance in the Navier-Stokes equation in the stream-wise direction is concerned: ∂u ∂t = − 1 ρ ∂p ∂x + ν ∂2 u ∂y2 (2.2) This is a partial differential equation which links the pressure gradient with the velocity distribution. In [1], the Navier-Stokes equation from (2.2) is nondimensionalized for simplicty and later on, the Eigenfunction Expansion method and the Asymptotic Complex Solution method are applied to derive the solution. Two important nondimensional con- stants appear: the Womersley number W and the dimension- less pressure amplitude ˆpA. In the present chapter we approximate the solution to the equation (2.2) using three methods and compare the results of these approximations. 12
  • 14. 2.2 Asymptotic Complex Solution The analytical solution that is used as a base for the approximations to follow is obtained in [1] with the Asymptotic Complex Solution method. Figure 2.2: Structure of the matrix UR. In this method, the solution is a real part of the complex velocity function: UR(ˆy, ˆt) = Re{˜u(ˆy, ˆt)} (2.3) where ˆy is a dimensionless space coordinate and ˆt is a dimen- sionless time. The result is a matrix UR, which in general might be a full-rank, rectangular matrix. Its rows are linked to the 1D spatial coordinates and its columns are linked to the consecutive time steps. The size of this matrix is therefore ny ×nt. Depending on how we descretize space and time, we might end up with a matrix of a large size. Typically, we descretize ˆy and ˆt in such way, that the number of rows ny is larger than the number of columns nt. A vector ˆy, specifying the discretization of space, contains ny entries ranging from -1 to 1. A vector ˆt, specifying the discretization of time, contains nt entries ranging from 0 to some tend which in this case is the final time of the simulation that we wish to perform. 2.3 Eigenfunction Expansion The Eigenfunction Expansion solution is also obtained in [1]. In the Eigenfunction Expansion method the solution is written in terms of the sum: UA(ˆy, ˆt) = n i=1 φi(ˆy)aiΨi(ˆt) + UM (2.4) where φi(ˆy) is a spatial structure, ai is an amplitude and Ψi(ˆt) is a temporal structure. UM is the mean flow. Figure 2.3: Graphical rep- resentation of the equa- tion (2.5). A number n describes the number of modes that we decide to keep in the approximation. When n = max(ny, nt), the solution is exact. The product φi(ˆy)aiΨi(ˆt), computed for any i is the ith mode of the approximation. Computing the sum in (2.4) is equivalent to multiplying three matrices: UA(ˆy, ˆt) = YATT (2.5) where Y is the matrix of spatial structures of size ny × n, T is the matrix of temporal structures of size nt × n. Matrix A is a diagonal matrix of amplitudes of size n × n. Each column of the spatial structures matrix Y is made up from a cosine function of a particular frequency. Spatial structures give a shape to the solution. The entries ai in the amplitude matrix A depend on the dimensionless pressure amplitude ˆpA and the Wom- ersley number W. Each amplitude describes how important every mode is in the summation. Each column of the temporal matrix T (or each row of its transpose) represents the time evolution and can be viewed as giving the dynamics to the system. 13
  • 15. 2.4 Discrete Proper Orthogonal Decomposition (POD) In the POD method we start with the data matrix UR, as defined in 2.2, and approximate it with a lower rank matrix UPOD of the same size. We perform the SVD on the matrix UR and decompose it into a product of three matrices: UR = UΣVT (2.6) The SVD decomposition is imposing an orthogonality and orthonormality constraint on matrices U and V. U is the matrix of spacial structures and V is the matrix of temporal structures. Unlike in the Eigenfunction Expansion method, the POD may introduce several frequencies per mode. Matrix Σ is a diagonal matrix. The entries σi on its diagonal have the meaning of amplitudes which, similarly to the entries ai, specify the importance of each POD mode. Next, the POD approximation is performed by computing the truncated sum: UR ≈ UPOD = u1vT 1 σ1 + u2vT 2 σ2 + · · · + urvT r σr (2.7) where ui is the ith column of U and vi is the ith column of V. It is important to note that in the SVD the elements σi are ordered and for any i it always holds that: σi > σi+1. This condition is very important in the POD approximation: on truncat- ing the sum after the rth term, we are sure that every next element of the sum will contribute less then any element from the ones that we have kept. Since the L2 norm error of POD is equal to the amplitude of the first term left out, it is important for the accuracy of the approximation that the amplitudes σi converge to zero. Otherwise, if they converge to some constant value, the norm will remain almost constant after some time, no matter how many terms we add to the approximation. When the amplitudes converge to zero, the error converges to zero as well. The faster it converges to zero, the more the matrix UR is close to be rank deficient. 2.5 Discrete Dynamic Mode Decomposition (DMD) In the DMD method we start again with the data matrix UR from which we extract the data sets X1 and X2, as defined in the section 1.4. The rank r is chosen at the level of computing the POD of a matrix X1. The analysis that follows is analogous to the one described in section 1.4. The vector µ of the eigenvalues of the matrix S is decomposed into the real and imaginary part: µ = Re(µ) + Im(µ) = λr + λi (2.8) where λr = Re(µ) and λi = Im(µ). The vector µ is length r, and so is the vector of frequencies ω and the vector of initial amplitudes b. As a result we obtain the DMD approximation to the original data matrix: UR ≈ UDMD = φ · Tmodes (2.9) An important thing to note is that the amplitudes in the DMD method (unlike in the POD method) are not ordered. This means that some bi+1 > bi and therefore often we cannot approximate the matrix well with only first few modes, as some further modes may appear important as well. 14
  • 16. 2.6 Comparison of the Three Approximations 2.6.1 Initial Parameters A Matlab code [App.A] is developed to simulate the three approximation methods. The approximations are performed for the Womersley number W = 10 and for the dimen- sionless pressure coefficient ˆpA = 60 (case C2 from [1]). The matrix UR in the present test case has rank = 3. Hence, only three approximations are considered: for r = 1, r = 2 and r = 3. The POD and DMD methods should give an exact result when r = 3. For the purpose of plotting the amplitude decay rate and the eigenvalues circle, the number of modes is later increased to 20. The time step is dt = 0.05. The smaller the time step, the smoother graphs of the temporal structures we obtain. The space step is dy = 0.05. The smaller the space step, the smoother graphs of the spatial structures we obtain. The results presented below are obtained for the total time of simulation T = 20. 2.6.2 Comparison of the Modes A time evolution of the pulsating velocity profile between two parallel plates is obtained, in a form of a movie, for the analytic solution and three approximation methods. The time of the simulation can be chosen arbitrarily. In Figures 2.4 and 2.5 three first modes of the approxi- mations are drawn with respect to the analytic solution, for two different time moments in the simulation. Figure 2.4: Approximation of the asymptotic complex solution with 1, 2 and 3 first modes. Drawn for t = 2. The first mode U1 is approximating the mean flow in all three approximation methods. In the Eigenfunction Expansion and in the POD method, the two first approximations follow in time the original solution. In the DMD method, the first two approximations often find themselves moving in the opposite direction to the original solution. Approximation with the third mode U3 follows exactly the original solution in the POD and in the DMD method. In the Eigenfunction Expansion, the approximation gets better, as we add more modes, but is still not exact for the third mode U3. 15
  • 17. Figure 2.5: Approximation of the asymptotic complex solution with 1, 2 and 3 first modes. Drawn for t = 3.5. 2.6.3 Amplitude Decay Rate Two functions of the amplitude decay rate are obtained by plotting the elements ai from the diagonal of matrix A for the eigenfunction expansion and the elements σi from the diagonal of matrix Σ for the POD method. In Figure 2.6 they are plotted versus the number of modes taken into account. The amplitudes are normalized so that the largest amplitude has the value 1. Figure 2.6: The normalized amplitude decay rate. Some interesting facts can be observed. Firstly, the decay rate is faster for the POD method then the eigenfunction expansion method. The faster the amplitude decay rate, the better we can approximate the solution with a low number of modes. Secondly, the amplitudes of POD are zero for the number of modes larger than 3. This is due to the rank of matrix UR being equal to 3. With 3 modes of POD we are already recovering 16
  • 18. the full data set and the L2 error, equal to the largest term left-out is 0%. The amplitudes converge to zero for both approximation methods and hence adding more terms in the eigenfunction expansion will keep improving the approximation. With the first three terms of the eigenfunction expansion we read the error to be about 10%. 2.6.4 Eigenvalues Circle As one of the results of the DMD method we draw the eigenvalues of the matrix S on a complex plane versus the corresponding amplitude bi. Thus, the Figure 2.7 is made up from the points (λr, λi), positioned at the height equal to the normalized amplitude bi. 20 first modes of the DMD are drawn. We also include a unit circle to represent the boundary of the linear dynamical system. Position of the eigenvalue with respect to that circle will have a special meaning. Figure 2.7: The first 20 normalized DMD amplitudes. The points laying exactly on the circle create a dynamically stable solution. The points laying inside the circle represent the noise in the data: the corresponding eigenvalues are less relevant in the approximation and they eventually die out. Some points however, might be laying inside the circle but close to the circle boundary. They have a meaning of a slowly decaying approximation. Points laying inside the circle, very close to the complex plane origin Any points laying outside the circle represent an exploding dynamical system and would diverge the approximation. For the Poiseuille flow test case, we have three eigenvalues that lie exactly on the circle and 17 ones very close to the point (0 Re, 0 Im). The first DMD mode is represented by an eigenvalue with only a real part, which approximates the mean flow. The second DMD mode introduces one complex eigenvalue and the third DMD mode introduces its complex conjugate. Notice that every eigenvalue with nonzero imaginary part will have its complex conjugate partner. 2.6.5 The First Three Modes of the POD and DMD Approximations The first three spatial and temporal structures of POD and DMD approximations are plotted in Figures 2.8-2.13. The first spatial modes are the same for the POD and the DMD method and they reconstruct the mean flow between the parallel plates. The POD method introduces several frequencies in the spatial structures. The temporal modes of the POD are shifted in phase. The first two DMD temporal modes are the same and the third mode is constant. 17
  • 19. Figure 2.8: POD 1st mode. Figure 2.9: DMD 1st mode. Figure 2.10: POD 2nd mode. Figure 2.11: DMD 2nd mode. 18
  • 20. Figure 2.12: POD 3rd mode. Figure 2.13: DMD 3rd mode. 2.7 Conclusions Few interesting facts can be observed from the approximations performed. Approximation methods satisfy the boundary condition, even though this information was not implemented in the procedures. The mean flow is reconstructed in both the POD and DMD. It appears in the first mode of the approximation. The first spatial structure has the same shape for the POD and DMD method but the temporal structures are different. In the DMD method, the filtering (the choice of the rank r) does not necessarily have to be performed at the POD level. From the eigenvalue circle we observe that the DMD can filter the noise as well later on. We might therefore include many modes at the POD level, let the DMD method be performed for all of them and then from an implemented criteria correct the approximation for a lower number of r. The purpose of restricting r at the POD level is so that the DMD can start at a smaller set of data. Filtering at the POD level might therefore be useful, as it decreases the sizes of those matrices, whose size is dependent on r. This in turn reduces the use of memory during computations. Finally, it is worth noticing that in general, columns creating matrices Y and T in the Eigenfunction Expansion method, might not be orthogonal to each other nor be orthonormal. In this test case it happened that the matrix Y was orthogonal, though T was not. None of these matrices were orthonormal. In the POD method we require the columns of matrices U and V to be orthogonal and this is one of the reasons why POD creates a faster approximation. 19
  • 21. Chapter 3 Dynamic Mode Decomposition of 2D Data The flow behind a cylinder was simulated by other student using the OpenFOAM CFD software [10], to produce two large matrices corresponding to two velocity components in the 2D plane. The region of the flow was discretized into a mesh in the x and y axis. In this chapter, the Dynamic Mode Decomposition method and the Proper Orthogonal Decomposition are applied to approximate the simulated flow behind a cylinder, analyze the behaviour of the approximations and compare the two decomposition methods. 3.1 DMD and POD Approximation to the Flow Behind a Cylinder 3.1.1 Initial Data In this chapter we deal with the 2D flow case, hence the amount of data that we have to process is significantly larger then in the Poiseuille flow case. This time we need two velocity components u and υ, corresponding to a point in the xy-plane. Figure 3.1: Velocity compo- nents. The data obtained from the CFD consists of four matrices: 1. U matrix of u-components of velocity U.mat 2. V matrix of υ-components of velocity V.mat 3. X vector of coordinates on the x-axis X.mat 4. Y vector of coordinates on the y-axis Y.mat The region of the flow is discretized in the x-axis into 301 points and in the y-axis into 201 points. The total number of spatial points is therefore: np = 301 × 201 = 60501 (3.1) The time of simulation is discretized into 313 timesteps. Vector X contains 301 x-coordinates. Their range is from -0.3287 to -0.0287. Vector Y contains 201 y-coordinates. Their range is from -0.1108 to +0.0892. Matrix U is size 60501 × 313. 20
  • 22. Matrix V is size 60501 × 313. Other parameters of the CFD simulation are presented below: Free stream velocity 10 [m s ] Cylinder diameter 0.015 [m] Turbulence intensity 5% [−] Kinematic viscosity 1.43 · 10−5 [m2 s ] Strouhal number 0.2 [−] Length of the region in the x-axis 0.3 [m] Length of the region in the y-axis 0.2 [m] The lengths of matrices U and V correspond to a 2D space, and hence the rows of these matrices have a special structure. The first 201 rows correspond to each entry inside vector Y and to the first entry inside vector X. The second 201 rows correspond again to each entry inside vector Y but now to the second entry inside vector X. In general, it can be viewed as 301 Y vectors placed one after the other, each of them corresponding to only one x-coordinate. This structure allows to represent 2D data in a single length of the data matrix. It is also graphically represented in the Figure 3.2. Figure 3.2: Structure of both data matrices U and V. The cylinder itself is masked in the data matrices and the velocity components are set to NaN, where the cylinder is present. 21
• 23. 3.1.2 Dynamic Mode Decomposition The Dynamic Mode Decomposition is performed on the data matrices U and V joined together. The time step chosen for the approximation is 0.01. The procedure for performing the 2D DMD is then the same as described in Section 1.4. The rank r chosen for the analysis is 4. DMD Results Figure 3.3: DMD spatial mode for r = 1. Figure 3.4: DMD temporal mode for r = 1. Figure 3.5: DMD spatial mode for r = 2. Figure 3.6: DMD temporal mode for r = 2. 22
• 24. Figure 3.7: DMD spatial mode for r = 3. Figure 3.8: DMD temporal mode for r = 3. Figure 3.9: DMD spatial mode for r = 4. Figure 3.10: DMD temporal mode for r = 4. 3.1.3 Proper Orthogonal Decomposition The Proper Orthogonal Decomposition is performed on the data matrices U and V joined together. The time step chosen for the approximation is 0.01. The procedure for performing the 2D POD is then the same as described in Section 1.3. The rank r chosen for the analysis is 4. 23
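Both decompositions in this chapter act on U and V joined together. A minimal sketch of this joint treatment, under the assumption that the two matrices are simply stacked on top of each other before the SVD (the GUI functions of the appendices may organise the two components differently), is:

% Joint treatment of the two velocity components (illustrative sketch):
% stack U and V into a single data matrix before the decomposition.
load('U.mat'); load('V.mat');
U(isnan(U)) = 0;  V(isnan(V)) = 0;   % masked cylinder: set NaN to zero, as in the appendix functions

D = [U; V];                          % size (2*np) x nt: u-part on top, v-part below
r = 4;                               % rank used in this chapter

[Us, Ss, Vs] = svd(D, 'econ');                  % economy-size SVD (POD of the joined data)
D_r = Us(:, 1:r) * Ss(1:r, 1:r) * Vs(:, 1:r)';  % rank-r approximation

U_r = D_r(1:size(U,1), :);           % approximated u-component
V_r = D_r(size(U,1)+1:end, :);       % approximated v-component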
  • 25. POD Results Figure 3.11: POD spatial mode for r = 1. Figure 3.12: POD temporal mode for r = 1. Figure 3.13: POD spatial mode for r = 2. Figure 3.14: POD temporal mode for r = 2. 24
• 26. Figure 3.15: POD spatial mode for r = 3. Figure 3.16: POD temporal mode for r = 3. Figure 3.17: POD spatial mode for r = 4. Figure 3.18: POD temporal mode for r = 4. 3.1.4 Conclusions and Comparison of the Two Decomposition Methods The third spatial structure of the DMD and the first spatial structure of the POD approximate the mean flow. The temporal structures associated with the mean flow are almost constant for both approximation methods. The first two temporal modes of the DMD have the same wavelength. The POD temporal modes (apart from the constant one) are shifted in phase. Even though in the DMD method the amplitudes of the approximation are not ordered, with the first two modes we already see the vortex pattern of the flow behind the cylinder. The time evolution of the approximated flow behind the cylinder is very similar in both methods. 25
• 27. Chapter 4 GUI Beta Version In this chapter we describe a beta version of a GUI program developed in Matlab to load data, perform the POD or the DMD, post-process and save the results. This version of the GUI is still under development: not all elements are implemented yet, nor are they all guaranteed to work without error. 4.1 Scheme of the Program The scheme of the program is presented in Figure 4.2. The main menu of the GUI is POD_DMD_beta_1. This menu needs two string variables specified by the user in the two preceding windows:
String_An_Type which specifies the analysis type
String_Dec_Type which specifies the decomposition type
The user has three choices for the analysis type:
1DS which is a 1D scalar data analysis (associates one physical quantity p with one spatial coordinate x)
2DS which is a 2D scalar data analysis (associates one physical quantity p with two spatial coordinates x and y)
2DV which is a 2D vector data analysis (associates two physical quantities p and q with two spatial coordinates x and y)
Figure 4.1: Analysis type: 1D scalar, 2D scalar, 2D vector. The user has two choices for the decomposition type:
POD which is the Proper Orthogonal Decomposition
DMD which is the Dynamic Mode Decomposition
The choices made will appear as a reminder in the Matlab command window. 26
• 28. Once in the main menu, four buttons are available:
Import Data opens the IMPORT menu
Decompose opens a window to input the variable dt and then opens either the POD_CRITERIA or the DMD_CRITERIA menu
Export Results opens the EXPORT_DATA menu
Exit exits the GUI
The IMPORT menu has three buttons where the user can choose the type of data to load into the program:
Sampled OpenFOAM for OpenFOAM datasets
TxT Dataset for files with the .txt extension
Mat Files for Matlab files with the .mat extension
So far, the TxT Dataset option is not implemented; selecting Sampled OpenFOAM or Mat Files is possible. When the user chooses Sampled OpenFOAM, another GUI developed by another student [10] opens, where the user can decide to apply a mask to the OpenFOAM dataset or to downsample directly without creating a mask. When the user chooses Mat Files, a line appears in the Matlab command window to remind what type of data the program is expecting for the given type of analysis. Notice that the data file names and the variable names inside the files must agree with the names requested by the program. They are: D.mat y.mat for the 1D scalar analysis, U.mat X.mat Y.mat for the 2D scalar analysis, and: U.mat V.mat X.mat Y.mat for the 2D vector analysis. A pop-up window then appears, where the user can select the data to be imported. The function used to import data is called uipickfiles.m. Multiple selection is possible. Once the data is imported, the user should select the Decompose button. A pop-up window appears where the user can enter the time step dt of the data. The choice of the time step is often arbitrary, but note that the data are assumed to have been sampled at equal time steps dt. Next, according to the decomposition method chosen, either the POD_CRITERIA or the DMD_CRITERIA window appears. These two windows allow the user to select the criterion for the choice of the rank r. 27
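As a side note on the naming convention described above, the .mat files can be prepared directly in Matlab with the save command, making sure that the variable stored inside each file carries the expected name. A minimal sketch for the 1D scalar case (the matrix D and the vector y are assumed to already exist in the workspace) is:

% Prepare the input files for the 1D scalar analysis:
% the variable name inside each .mat file must match the file name.
save('D.mat', 'D');   % data matrix (rows: space, columns: time)
save('y.mat', 'y');   % space coordinate vector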
  • 29. Figure 4.2: Scheme of the POD and DMD GUI. 28
• 30. Figure 4.3: Legend of the scheme elements. In the POD_CRITERIA menu three buttons are available:
Manual where the manual selection of r is made
Automatic: Slope where the automatic slope-based selection of r is made
Automatic: Sigma min where the automatic σmin-based selection of r is made
In the DMD_CRITERIA menu two buttons are available:
POD Preprocessed where the manual selection of r is made
Circle Based where the automatic circle-based selection of r is made
So far, only the Manual and POD Preprocessed selections are possible, both of which require the user to choose the rank r manually. When the manual selection is chosen, for both decomposition methods, the user receives a graph of the amplitude decay rate of the imported data, which can be zoomed in on the representative region and which in turn helps the user to make the initial choice of r. Once the user is ready with the choice, Enter should be pressed in the Matlab command window. A pop-up window appears, where the user can enter the value of the rank. Finally, the Export Results button opens the EXPORT_DATA menu. Four buttons are available, regardless of the decomposition method and analysis type:
Specify case name which allows the user to enter the name of the case
Save Approximation which saves the new approximated matrices as .mat files
Gif which saves the .gif file of the approximation
Export Modes which saves the .png files with graphs of the modes of the approximation
The user must start with the first button to specify the case name. A pop-up window appears, where the name of the case can be entered. This name will then appear in the name of a created folder, where all the results will be saved. The name of the folder will always have the following pattern: [Analysis type]_[Decomposition type]_[User entered case name] 29
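As a small illustration of this naming pattern, the results folder could be assembled in Matlab as sketched below. The variable names are illustrative and the tokens follow the tutorial example of Section 4.3 (folder 1D_POD_AAAA); this is not the exact code of the GUI.

% Assemble the results folder name according to the pattern
% [Analysis type]_[Decomposition type]_[User entered case name].
An_Type   = '1D';        % analysis type token
Dec_Type  = 'POD';       % decomposition type token
Case_Name = 'AAAA';      % name entered by the user

Folder_Name = [An_Type, '_', Dec_Type, '_', Case_Name];   % -> '1D_POD_AAAA'
mkdir(Folder_Name);      % create the folder where all the results are saved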
• 31. After the name is entered, the approximated matrices can be saved in the created folder by clicking Save Approximation. Notice that this may take a long time if the matrices are large. If not needed, the user may skip saving the approximated matrices. Then, a .gif file with the graphical representation of the results can be viewed and saved by clicking Gif. Finally, to export the modes of the approximation, the user can press the Export Modes button. A pop-up window appears, where the user can select which mode to export. The number that the user enters should be less than or equal to the rank r chosen before. Notice that the user can decide to save only some of the results; in particular, only the Gif and Export Modes options might be chosen. After saving the results, the user may close the GUI by pressing Exit. A note that the GUI has been closed will appear in the Matlab command window. 4.2 Function Executions Inside the Program There are external Matlab functions which are utilised by the GUI. They have to be placed in the same directory as the GUI files in order for all the elements to work properly. The following functions are used:
uigetfiles.m function for selecting multiple data files and loading them into Matlab
POD_2D_S.m function that performs 2D scalar POD
POD_2D_V.m function that performs 2D vector POD
DMD_1D.m function that performs 1D DMD
DMD_2D.m function that performs 2D scalar and vector DMD
POD_1D_PLOT_GIF.m function that plots a .gif file of the 1D POD approximation
POD_1D_PLOT_MODES.m function that plots a .png file of the 1D POD modes
POD_2D_PLOT_GIF.m function that plots a .gif file of the 2D POD approximation
POD_2D_PLOT_MODES.m function that plots a .png file of the 2D POD modes
DMD_1D_PLOT_GIF.m function that plots a .gif file of the 1D DMD approximation
DMD_1D_PLOT_MODES.m function that plots a .png file of the 1D DMD modes
DMD_2D_PLOT_GIF.m function that plots a .gif file of the 2D DMD approximation
DMD_2D_PLOT_MODES.m function that plots a .png file of the 2D DMD modes
AXIS.m function for setting the plot style
The function executions inside the menus are presented in Figure 4.4, which specifies which menus execute a particular function. This figure is useful when any input or output variables are to be changed in any of these functions: they then have to be adjusted in the corresponding menu as well. Note also that the plotting functions executed by the EXPORT_DATA menu use outputs from the functions performing the 1D and 2D POD and DMD. If the output variables inside these functions are changed, they have to be adapted in the plotting functions. Function AXIS.m is used inside every plotting function. 30
• 32. Figure 4.4: Function executions inside GUI. 4.3 Tutorial on Using the GUI This section is a small tutorial on running the GUI program and performing the POD and DMD on sample data coming from the Poiseuille flow (for the 1D scalar case) and from the flow behind a cylinder (for the 2D vector case). 4.3.1 Tutorial Folder To successfully run this tutorial you need a tutorial folder named POD_DMD_Vbeta1. Inside this folder you should see the following data folders:
folder 1D_Data_full_set: - D.mat - extracting_1D_data_set.m - y.mat
folder 2D_Data_full_set_S: - U.mat - X.mat - Y.mat
folder 2D_Data_full_set_V: - U.mat - V.mat - X.mat - Y.mat
along with all the Matlab functions that create the GUI. 31
• 33. 4.3.2 1D Data Folder 1D_Data_full_set contains sample data from the Poiseuille flow described in Chapter 2. This data has been prepared in the form of two files, D.mat and y.mat, for fixed parameters:
TIME = 10 total time of the simulation
W = 10 Womersley number
Pa = 60 dimensionless pressure coefficient
dt = 0.1 time step
dy = 0.01 space step
In case you want to extract data with different parameters, the code extracting_1D_data_set.m is included, where you can change the parameters and create new data sets. For now, this definition of parameters results in a matrix D of size 201 × 101, which has the same structure as the general data matrix from Figure 1.1 - the rows are related to the space coordinates and the columns are related to the time coordinates. The time vector has entries ranging from 0 to TIME, with a time step of 0.1; it contains 101 elements. The space vector has entries ranging from -1 to +1, with a space step of 0.01; it contains 201 elements and represents the position between the two parallel plates of the Poiseuille flow. Open Matlab and change the working directory to the tutorial folder: POD_DMD_Vbeta1 Then type Main_MENU in the Matlab command window. NOTE: While proceeding with this tutorial, observe also the feedback information that appears in the Matlab command window when you interact with the GUI. The first window should appear, where you are asked to select the analysis type. Click 1D Scalar. In the second window you are asked to select the decomposition method. Let's say we will perform the 1D analysis with the POD method, so click Proper Orthogonal... (POD). Now you are in the main menu and you have to follow the order of the buttons. First, we want to import our data, so click Import Data. You are now asked to choose the type of data that you want to import; for now, click Mat Files, since our matrices are stored with the extension .mat. Find the folder 1D_Data_full_set inside the tutorial folder and select both files (with the mouse or by holding Shift): D.mat and y.mat. Once the data is imported, click the next button, Decompose. A pop-up window appears, where you are asked to enter the time step of your data. You are in fact free to choose whatever time step you want, but if you want to be consistent with how the data was prepared, change the default value to 0.1 (otherwise the total time of the simulation will differ from TIME = 10). A window with the choice of the criterion for the rank r appears. For the moment, we will choose the rank manually, so click Manual. You now receive a graph of the amplitude decay rate of the imported data. This graph helps to make a choice of the rank r. You see that the first mode has the largest amplitude, the second and the third modes have lower amplitudes, and the further modes have zero (or almost zero) amplitudes. In general, you can zoom in on the region of the graph that is of interest to you to help you make the decision on r. For this imported data it is enough to take the first three modes. The program is waiting for your response, and once you are ready to type the rank, press Enter in the Matlab command window. A pop-up window appears, where you can type 3 and click OK. The graph now disappears and the rank is chosen. The GUI is now ready to process and save your data! Click Export Results. In the Export Data menu we have to start with specifying the name of your case, so click Specify case name. In the pop-up window you are asked to create a folder name for saving your case. 
You are free to enter whatever you want. The program is smart though, and always 32
• 34. adds a prefix before your case name. So, even if you create a meaningless name, like AAAA, you will still know what kind of analysis was performed, as the folder name will be 1D_POD_AAAA. After the name is entered, you have three results that you can save: 1. the approximated matrix D_POD.mat 2. the .gif file with the movie of the pulsating velocity profile together with its approximation 3. the .png files with the graphs of the spatial and temporal structures of the approximation In general, you can decide to save only the things that you need, and you can skip, e.g., saving the .mat files (as this might sometimes take a long time). But this time, just to test whether everything is working properly, we will save all the results. Click Save Approximation. Once clicked, the results should be saved as .mat files automatically. Move on to the Gif button. A pop-up window appears, where you can specify the ranges of the x and y axes. The default values are adjusted to the Poiseuille case, so you can simply click OK. You should now see a nice movie of the pulsating velocity profile of the Poiseuille flow. When the window with the Matlab figure closes, the .gif file is saved. The final thing is to save the modes of the approximation. The maximum number of the first modes that you can save is equal to the rank r previously entered. Each time you can choose which mode to export. You can simply click the button Export Modes and in the pop-up window type the number of the mode that you want to save (1, 2 or 3). Just to try it out, it is recommended that you save all three, one by one. You can now click Exit to close the GUI. Go now to the tutorial folder: POD_DMD_Vbeta1 and notice that a new folder with your case name was created there. In that folder you should see five files:
D_POD.mat .mat file with the approximated matrix
GIF_1D_POD_r3.gif .gif file with the movie of the pulsating velocity profile
MODES_1D_POD_r1.png .png file with the graph of the 1st POD mode
MODES_1D_POD_r2.png .png file with the graph of the 2nd POD mode
MODES_1D_POD_r3.png .png file with the graph of the 3rd POD mode
You can double-click the .gif file to watch it and you can view the .png files with the POD modes. As an additional exercise, you can run the 1D case with the DMD method, which is analogous to the POD. You can also move on to testing the 2D data, where we will perform the DMD. 4.3.3 2D Data To test the 2D case inside the GUI, we use the sample data from the flow behind a cylinder. We use reduced-region data, as the full set might take a long time to process. In any case, please be patient with the GUI this time, as the matrices are larger than they were in the 1D case! The 2D vector data is prepared in the form of four files: U.mat, V.mat, X.mat and Y.mat, obtained from the CFD simulation. Matrices U and V have the same structure as presented in Figure 3.2. The matrix U has size 1476 × 128. The matrix V has size 1476 × 128. The vector X has 41 elements. The vector Y has 36 elements. 33
• 35. Make sure you are still in the working directory of the tutorial folder: POD_DMD_Vbeta1 Type Main_MENU in the Matlab command window. This time choose 2D Vector in the first window and Dynamic Mode... (DMD) in the second window. Once in the main menu, import the 2D dataset: click Import Data, then choose Mat Files. Find the folder 2D_Data_full_set_V inside the tutorial folder and select the four files U.mat, V.mat, X.mat and Y.mat. Next, click Decompose and specify the timestep dt. You can enter any value you want, e.g. type 0.05. In the menu with the choice of the criterion for r, click POD Preprocessed. This will allow you to enter the rank manually. You will first see the amplitude decay rate of the imported data, and it is now recommended that you zoom in on the first few amplitudes to see them precisely. The amplitudes decay towards zero, and the first 7 amplitudes seem to be the largest. Once you are ready with your choice, press Enter in the Matlab command window. In the pop-up window type 7. It is time to export our results. Click Export Results. Start with specifying the name of your case - you can type anything you want. Click Save Approximation to export the approximated matrices U_DMD.mat and V_DMD.mat. Click Gif to draw the movie of the flow behind a cylinder. In the pop-up window you can enter the range of your data; you can simply click OK. You should now be seeing a nice colour plot! Once it is saved, click Export Modes. This time we will not save all 7 modes. Suppose we only want to save the first one, so type 1 in the pop-up window. You first see the spatial mode and, by pressing Enter in the Matlab command window, you move on to the temporal mode. Press Enter again and you see the graph of the eigenvalues on the complex plane. The red mode is the one that you are exporting at the moment. Press Enter one more time and the graphs will be closed and saved. You can exit the GUI by clicking Exit. As the final step, check whether the results are saved. Go to the tutorial folder: POD_DMD_Vbeta1 and find the new folder corresponding to your 2D vector analysis. Inside it, you should see five files:
U_DMD.mat .mat file with the approximated matrix
V_DMD.mat .mat file with the approximated matrix
GIF_2D_DMD_r7.gif .gif file with the movie of the approximated flow behind a cylinder
Spatial_Modes_2D_DMD_r1.png .png file with the graph of the 1st DMD spatial mode
Temp_Modes_2D_DMD_r1.png .png file with the graph of the 1st DMD temporal mode 34
• 36. Chapter 5 Additional Exercises 5.1 Discrete and Continuous Norms In the following exercise we seek a relationship between the norm calculated in a discrete way (as defined in the Euclidean space) and in a continuous way (as defined in the Hilbert space). In the Euclidean space we define an operation between two vectors x and y called the inner product:
⟨x, y⟩ = Σ_{i=1}^{n} x_i y_i (5.1)
which results in a scalar. We also define the square of the norm of a vector by taking the inner product of that vector with itself:
||x||² = ⟨x, x⟩ (5.2)
The concept of an inner product and of computing the norm carries over from the Euclidean space to the Hilbert space, where it is defined for continuous functions within a certain domain. We have therefore an inner product between two functions f(x) and g(x):
⟨f(x), g(x)⟩ = ∫_{−1}^{1} f(x) g(x) dx (5.3)
and, once again, the square of the norm of a function f(x) is defined as the inner product of the function with itself:
||f(x)||² = ∫_{−1}^{1} f²(x) dx (5.4)
The functions chosen for this exercise are the two Legendre polynomials L3 and L4:
L3 = (1/2)(5x³ − 3x) (5.5a)
L4 = (1/8)(35x⁴ − 30x² + 3) (5.5b)
For the purpose of the discrete computation we discretize the space (x-axis) into nx points. The functions L3 and L4 become vectors with a number of elements equal to nx. The square of the norm of each function is calculated in three ways for comparison: 1. using the command trapz(), which approximates the definite integral by calculating the sum of the areas of small trapezoids. It is in fact a discrete calculation of the continuous case. 35
  • 37. 2. using the command norm() which computes the norm of a vector 3. using the command dot() which computes a dot product of each function with itself In the continuous case, the integrals are computed analytically and evaluated in the code for the specified integration boundaries. NOTE: An inner product of functions L3 and L4 is also computed and since the Legendre polynomials form an orthogonal basis, we expect it to be zero. L3, L4 = 0 in the continuous case (5.6a) L3, L4 ≈ 0 in the discrete case (5.6b) Listing 5.1: Matlab code to test the relationship between a discrete and continuous evaluation of the norm of two Legendre polynomials. disc_cont_exercise_1.m 1 %% Finding a Relationship Between a Discrete and Continuous ========= 2 % Evaluation of a Norm of Two Functions 3 % =================================================================== 4 % NOTE: This code i s actually evaluating a square of the norm 5 % f or s i m p l i c i t y . 6 % =================================================================== 7 clc , c l e a r 8 9 % Number of points to d i s c r e t i z e the i n t e g r a t i o n i n t e r v a l into : 10 n_x = 500; 11 12 %% D i s c r e t i z i n g the i n t e g r a t i o n i n t e r v a l : =========================== 13 a = −1; b = 1; % i n t e g r a t i o n boundaries 14 step = (b−a ) /(n_x − 1) ; % d i s c r e t i z a t i o n step on the i n t e r v a l 15 x = [ a : step : b ] ; % x−axis as a vector 16 17 %% Functions to be calculated ( two Legendre polynomials ) : =========== 18 L3 = 1/2 ∗ (5∗x.^3 − 3∗x) ; % L3 as a vector 19 L4 = 1/8 ∗ (35∗x.^4 − 30∗x.^2 + 3) ; % L4 as a vector 20 21 %% Approximation of a d e f i n i t e i n t e g r a l of L3 and L4 function : ====== 22 Area_L3 = trapz (x , L3) ; % approximation to the area below L3 23 Area_L4 = trapz (x , L4) ; % approximation to the area below L4 24 25 %% Approximation of the inner product of L3 and L4 : ================= 26 IP = L3 .∗ L4 ; % multiplying the two functions L3 and L4 27 28 % Discrete c a l c u l a t i o n of the inner product : 29 IP_t = trapz (x , IP ) ; % d i s c r e t e using trapz () 30 IP_d = dot (L3 , L4) ; % d i s c r e t e using dot () 31 32 %% Approximation of the norm of function L3 : ======================== 33 L3_L3 = L3 .^2; % multiplying L3 with i t s e l f 34 35 % Discrete c a l c u l a t i o n of the norm : 36 NL3_t = trapz (x , L3_L3) ; % d i s c r e t e using trapz () 37 NL3_n = norm(L3) ^2/(n_x/2) ; % d i s c r e t e using norm () 36
• 38. 38 NL3_d = dot(L3, L3)/(n_x/2); % discrete using dot()
39
40 % Continuous calculation of the norm:
41 NL3_a = b^3*(25*b^4-42*b^2+21)/28 - a^3*(25*a^4-42*a^2+21)/28;
42
43 %% Approximation of the norm of function L4: ========================
44 L4_L4 = L4.^2; % multiplying L4 with itself
45
46 % Discrete calculation of the norm:
47 NL4_t = trapz(x, L4_L4); % discrete using trapz()
48 NL4_n = norm(L4)^2/(n_x/2); % discrete using norm()
49 NL4_d = dot(L4, L4)/(n_x/2); % discrete using dot()
50
51 % Continuous calculation of the norm:
52 NL4_a = b*(1225*b^8-2700*b^6+1998*b^4-540*b^2+81)/576 - ...
53 a*(1225*a^8-2700*a^6+1998*a^4-540*a^2+81)/576;
The numerical results of the code are presented below:
IP_t = -5.2042e-018
IP_d = 1.4433e-015
NL3_t = 0.28575
NL3_n = 0.28917
NL3_d = 0.28917
NL3_a = 0.28571
NL4_t = 0.22228
NL4_n = 0.22583
NL4_d = 0.22583
NL4_a = 0.22222
It is seen, therefore, that in order to match the norms calculated from the analytic solution with the ones calculated using the discrete methods, we had to divide the latter by nx/2. As a result of this exercise we can therefore conclude that:
∫_{−1}^{1} f²(x) dx ≈ ||f||² / (nx/2) (5.7)
and hence, in reverse:
||f|| ≈ sqrt( (nx/2) ∫_{−1}^{1} f²(x) dx ) (5.8)
where f is a discrete vector and f(x) is a continuous function. We can also put it in words:
(discrete norm)² ≈ (nx/2) × (continuous norm)² (5.9)
The approximation gets better as we increase the number of discretization points nx. This result might be useful for approximating integrals which can be written in the general form ∫_{−1}^{1} f²(x) dx. 37
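The factor nx/2 can be traced back to the grid spacing: the discrete sum of f(xi)² approximates the integral only after it is multiplied by the step ∆x = (b − a)/(nx − 1) ≈ 2/nx on the interval [−1, 1]. A short check of this interpretation, reusing the variables of Listing 5.1, might be:

% The discrete sum times the grid spacing approximates the integral
% (a Riemann-type sum), which is where the factor n_x/2 comes from:
NL3_step = dot(L3, L3) * step;              % step = (b-a)/(n_x-1), close to 2/n_x
disp([NL3_step, trapz(x, L3.^2), NL3_a])    % the three values should be close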
• 39. 5.2 The Phase Shift Ψ Between Two Sine Functions In this exercise we retrieve the phase shift Ψ between two sine functions by computing the inner product of two discrete vectors y1 and y2, defined in the following way:
y1 = sin(x) (5.10a)
y2 = sin(x + Ψ) (5.10b)
where x is a vector obtained by discretizing the x-axis and Ψ is the pre-specified phase. Figure 5.1: Two sine functions shifted in phase by Ψ = 2. The idea is to find the correlation coefficient ρ between these two vectors y1 and y2. The correlation coefficient is defined as the cosine of the angle Ψ between the two vectors and is computed by means of an inner product:
ρ = cos(Ψ) = ⟨y1, y2⟩ / (||y1|| · ||y2||) (5.11)
The correlation coefficient is hence a number between −1 and +1 and is related to the phase shift. For example, it is zero when the phase is π/2 or 3π/2, −1 when the phase is π, and 1 when the phase is 0 or 2π. Finally, by taking arccos(ρ) we get the phase back, since:
arccos(ρ) = arccos(cos(Ψ)) = Ψ (5.12)
There is, however, a problem with retrieving the phase shift exactly. This is due to the symmetry of the correlation coefficient with respect to the phase Ψ. This symmetry is captured in the figure below: Figure 5.2: Symmetry in the correlation coefficient. 38
  • 40. For a certain correlation coefficient ρ we cannot tell whether the phase shift is Ψ or 2π − Ψ. We therefore compute both. In the graphical representation, this means that we cannot tell whether vector y2 was shifted to the left or to the right of vector y1. Listing 5.2: Matlab code for retrieving the phase shift between two sine functions. phase_shift_of_two_sines.m 1 %% Phase s h i f t between two sine functions =========================== 2 % Retrieving the phase s h i f t by computing an inner product . 3 % =================================================================== 4 c l c 5 c l e a r 6 7 %% USER INPUT: ====================================================== 8 PHASE = 2; % pre−s p e c i f i e d phase 9 xstart = −10; % s t a r t i n t e r v a l on the x−axis 10 xend = −xstart ; % end i n t e r v a l on the x−axis 11 step = 0 . 0 1 ; % step on the x−axis 12 % END OF USER INPUT ================================================= 13 14 %% Computing the actual phase : ====================================== 15 Actual_phase = rem(PHASE, 2∗ pi ) ; 16 disp ( [ ’The pre−s p e c i f i e d phase i s : ’ , num2str ( Actual_phase ) ] ) 17 18 %% Define the span on the x−axis : =================================== 19 x = [ xstart : step : xend ] ; 20 21 %% Define the vectors : ============================================== 22 y1 = sin (x) ; 23 y2 = sin (x + PHASE) ; 24 25 %% Find the inner product of two vectors y1 and y2 : ================= 26 Inner_prod = dot (y1 , y2 ) ; 27 NORMy1 = norm( y1 ) ; 28 NORMy2 = norm( y2 ) ; 29 CORRELATION = Inner_prod /(NORMy1∗NORMy2) ; 30 disp ( [ ’The c o r r e l a t i o n c o e f f i c i e n t i s : ’ , num2str (CORRELATION) ] ) ; 31 32 %% Retrieving the phase : ============================================ 33 PHASE_back = acos (CORRELATION) ; 34 mirror_phase = pi + ( pi−PHASE_back) ; 35 disp ( [ ’ Retrieved phase : ’ , num2str (PHASE_back) ] ) ; 36 disp ( [ ’ or : ’ , num2str ( mirror_phase ) ] ) ; In the Matlab code presented above, we calculate the actual, minimum phase, from the pre- specified one, taking into account that shifting the sine function by any 2kπ can be neglected. The numerical results of the code are presented below: The pre-specified phase is: 2 The correlation coefficient is: -0.40054 Retrieved phase: 1.9829 or: 4.3003 NOTE: As we increase the span on the x-axis, and/or decrease the step in vector x, the approximation to the phase gets better. 39
• 41. 5.3 A Note on the Sizes of Component Matrices in the SVD 5.3.1 SVD on a General Matrix D When the SVD is performed on a general matrix D of size n × m, the sizes of the resultant matrices are as follows: The matrix U is n × n and is always a square matrix. The matrix Σ is n × m and has the same size as the matrix D. The matrix V is m × m and is always a square matrix. The matrix D can be written as:
D = U Σ V^T (5.13)
The matrix multiplication from equation (5.13) is represented graphically in Figure 5.3. 5.3.2 SVD on the Matrix X1 from DMD When the SVD is performed on a matrix X1 of size np × (nt − 1), the sizes of the resultant matrices are as follows: The matrix U is np × np and is always a square matrix. The matrix Σ is np × (nt − 1) and has the same size as the matrix X1. The matrix V is (nt − 1) × (nt − 1) and is always a square matrix. The matrix X1 can be written as:
X1 = U Σ V^T (5.14)
The matrix multiplication from equation (5.14) is represented graphically in Figure 5.4. 5.3.3 SVD with POD Approximation on Matrices D and X1 After approximating the matrices D and X1 with the POD method, the sizes of the decomposition matrices change. Suppose the rank of the approximation is r.
Dr ≈ U(:, 1:r) Σ(1:r, 1:r) V(:, 1:r)^T (5.15)
The matrix multiplication from equation (5.15) is represented graphically in Figure 5.5.
X1,r ≈ U(:, 1:r) Σ(1:r, 1:r) V(:, 1:r)^T (5.16)
The matrix multiplication from equation (5.16) is represented graphically in Figure 5.6. The sizes of Dr and X1,r are the same as the sizes of D and X1. When the POD approximation is performed on a general matrix D of size np × nt, the sizes of the resultant matrices are as follows: The matrix U is np × r and is in general a rectangular matrix. The matrix Σ is r × r and becomes a square matrix. The matrix V is nt × r and is in general a rectangular matrix. When the POD approximation is performed on a general matrix X1 of size np × (nt − 1), the sizes of the resultant matrices are as follows: The matrix U is np × r and is in general a rectangular matrix. The matrix Σ is r × r and becomes a square matrix. The matrix V is (nt − 1) × r and is in general a rectangular matrix. Notice also that, under the assumption that np > nt, the maximum rank of the matrix D is nt, so the rank of the approximation Dr is at most r = nt. The maximum rank of the matrix 40
  • 42. X1 is nt − 1 and analogously the rank of the approximation X1,r is at most r = nt − 1. This assumption restricts what the maximum sizes of the component matrices can be. Figure 5.3: Graphical representation of the equation (5.13). Figure 5.4: Graphical representation of the equation (5.14). Figure 5.5: Graphical representation of the equation (5.15). Figure 5.6: Graphical representation of the equation (5.16). 5.4 A Note on the Linear Propagator Matrix In this section we look deeper into fitting a linear system into the data matrix D, and especially we look at the equation (1.6), which we rewrite below: X2 = AX1 (1.6) We investigate whether it is possible to find matrix A explicitly. First, let’s analyze the sizes of each of the matrices in the above equation. Matrices X1 and X2 have size np × (nt − 1) and since the matrix A multiplies the matrix X1 to give a matrix of the same size as X1, it has to be size np × np. We then want to find an equation from which the matrix A can be solved. Since in general matrix X1 is not square (unless in a rare case np = nt − 1), we cannot compute its inverse directly. We seek a way to find a special kind of inverse, denoted by X−1∗ 1 , so that: X2X−1∗ 1 = A (5.17) One idea might be to use a Moore-Penrose inverse and compute the matrix A by means of the least-squares method. We perform a few algebraic operations on the equation (1.6): X2 = AX1 × XT 1 (5.18a) 41
  • 43. X2XT 1 = AX1XT 1 × (X1XT 1 )−1 (5.18b) X2XT 1 (X1XT 1 )−1 = A (5.18c) This will only hold when the product X1XT 1 is invertible. Figure 5.7: Graphical representation of the equation (1.6). Listing 5.3: Matlab code for attempting to find a linear propagator matrix A. fit- ting_linear_systems.m 1 %% I n v e s t i g a t i n g the l i n e a r system of a form X2 = A X1 ============== 2 % We attempt to find matrix A fo r a nonlinear set of data and then 3 % to r e t r i e v e matrix A f o r an a r t i f i c i a l l y created set . 4 % =================================================================== 5 c l c 6 c l e a r 7 8 %% Attempt at f i t t i n g a l i n e a r propagator matrix A ================== 9 % into nonlinear data : 10 % Generation of a data matrix of f u l l −rank : 11 D = [2 0 0 0 0 ; 0 8 2 0 0 ; 0 0 1 1 0 ; . . . 12 0 0 0 0 5 ; 0 2 0 0 1 ; 5 0 0 0 1 ; 0 0 0 1 0 ] ; 13 14 % Checking the rank of matrix D: 15 rank_D = rank (D) ; 16 17 % Extracting data s e t s : 18 X1_D = D( : , 1 : end−1) ; 19 X2_D = D( : , 2 : end ) ; 20 21 % Finding a Moore−Penrose inverse : 22 square_D = X1_D ∗ X1_D’ ; 23 det ( square_D ) ; 24 inverse_D = inv ( square_D ) ; 25 26 % Attempt at finding A: 27 A_D = X2_D ∗ X1_D’ ∗ inverse_D ; 28 29 %% Attempt at r e t r i e v i n g the l i n e a r propagator matrix A: ============ 30 % Creating matrix A: 42
  • 44. 31 A = [5 2 1 5 ; 1 2 2 2 ; 3 6 9 5 ; 1 0 2 2 ] ; 32 33 % Creating data set X1: 34 X1_A = [1 2 3 ; 5 6 1 ; 0 0 2 ; 2 0 0 ] ; 35 36 % Computing data set X2: 37 X2_A = A∗X1_A; 38 39 % Finding a Moore−Penrose inverse : 40 square_A = X1_A∗X1_A’ ; 41 det ( square_A ) ; 42 inverse_A = inv ( square_A ) ; 43 44 % Attempt at getting back A: 45 A_A = X2_A ∗ X1_A’ ∗ inverse_A ; In the Matlab code presented above we try to find matrix A for a data set matrix D of rank 1. The code produces a warning that the matrix inverse_D is singular, which means that the determinant of the product X1XT 1 is equal to or close to 0. Matrix A_D is hence not computed by Matlab. In the second part of the code, we attempt to retrieve the matrix A, specifying it at the begining. We therefore go in reverse, and compute the matrix X2 that satisfies the equation (1.6). The code produces the same warning about matrix inverse_A. The matrix A_A gets computed but is in no way retrieving the original matrix A. The results are the following: A = 5 2 1 5 1 2 2 2 3 6 9 5 1 0 2 2 A_A = -2.0000 1.5000 -8.0000 4.7500 0 2.2500 0 2.0000 6.0000 5.0000 -2.0000 8.5000 1.2500 0.2500 3.0000 1.8750 We might conclude that finding a linear propagator matrix of a linear system is in general impossible without an error. 5.5 A Note on the Similarity of Matrices In this section we analyze in a closer detail the condition of similarity of two matrices, which is an important concept in the DMD method, where instead of finding a linear propagator matrix A, we find a similar matrix S. Two matrices are similar if they satifsy the equation: S = Q−1 AQ (5.19) for any matrix Q that has an inverse. In the DMD method, we use the orthogonal matrix U to play a role of a matrix Q. The matrix U has got an inverse, and it is equal to its transpose: U−1 = UT (5.20) 43
  • 45. We have therefore the equation (1.9) from section 1.4: S = UT AU (1.9) We find the size of matrix S, and we compute it from the left hand side of the equation (1.8d), which we recall below: UT X2VΣ−1 = UT AU (1.8d) Matrices U, V and Σ have already been approximated with the POD method, so the di- mension from section 5.3.3 apply. Hence we have the following size multiplication: size(S) = (r × np) · (np × (nt − 1)) · ((nt − 1) × r) · (r × r) (5.21a) size(S) = (r × r) (5.21b) We then check whether this size agrees with the equation (1.9) and with the size of the matrix A obtained in the section 5.4. We have therefore: size(S) = (r × np) · (np × np) · (np × r) (5.22a) size(S) = (r × r) (5.22b) In general, the size of matrix A is larger then the size of matrix S. The maximum size of matrix S can be achieved when r = nt −1 and is in that case equal to (nt −1)×(nt −1). We then find the eigenvalues of matrix S, which approximate some of the eigenvalues of A. Producing a matrix S of size r × r we can only retrieve r of them. In a rare case when np = nt − 1 = r, the sizes of matrices S and A are the same and we can retrieve all eigenvalues of A. In a Matlab code presented below we investigate the behaviour of the eigenvalues of a similar matrix S, first, when the matrix S is of the same size as matrix A, and next, when the matrix S is of a reduced size r × r. Listing 5.4: Matlab code to investigate the similarity condition. similar_matrices.m 1 %% Similar Matrices ================================================= 2 % In t h i s code we i n v e s t i g a t e the s i m i l a r i t y condition : 3 % S = U^−1 A U, fo r two cases : 4 % − when matrix U i s of the same s i z e as matrix A. 5 % − when matrix U i s of reduced size , and approximates only 6 % r eigenvalues of matrix A. 7 % =================================================================== 8 c l c 9 c l e a r 10 11 %% USER INPUT: ====================================================== 12 % Choice of rank to decrease the s i z e of matrix S : 13 r = 7; 14 % END OF USER INPUT ================================================= 15 16 %% I n i t i a l data : 17 % Generation of a dummy data matrix : 18 D = [2 0 0 0 0 ; 0 8 2 0 0 ; 0 0 1 1 0 ; . . . 19 0 0 0 0 5 ; 0 2 0 0 1 ; 5 0 0 0 1 ; 0 0 0 1 0 ] ; 20 21 len_D = s i z e (D, 1 ) ; % length of matrix D 22 wid_D = s i z e (D, 2 ) ; % width of matrix D 44
  • 46. 23 24 % Generation of a l i n e a r propagator matrix A: 25 A = zeros (len_D , len_D) ; 26 27 f or i = 1 : 1 : ( len_D∗len_D) 28 A( i ) = i −1; 29 end 30 31 f or i = 1 : 4 : ( len_D∗len_D) 32 A( i ) = i ∗2; 33 end 34 35 % Finding the eigenvectors and eigenvalues of A: 36 [ eigvec_A , eigval_A ] = eig (A) ; 37 E_A = diag ( eigval_A ) 38 39 %% Full s i z e : ======================================================= 40 % Creating an orthogonal matrix U: 41 [U, Sigma , V] = svd (D) ; 42 43 % Creating a s i m i l a r matrix S : 44 S = U’ ∗ A ∗ U; 45 46 % Finding the eigenvectors and eigenvalues of S : 47 [ eigvec_S , eigval_S ] = eig (S) ; 48 E_S = diag ( eigval_S ) 49 50 %% Reduced s i z e : ==================================================== 51 % Extracting r columns of U: 52 U_app = U( : , 1 : 1 : r ) ; 53 54 % Creating a s i m i l a r matrix S : 55 S_app = U_app’ ∗ A ∗ U_app; 56 57 % Finding the eigenvectors and eigenvalues of S : 58 [ eigvec_S_app , eigval_S_app ] = eig (S_app) ; 59 E_S_app = diag ( eigval_S_app ) ; 60 % Sorting the eigenvalues by absolute values : 61 [~ ,n ] = sort ( abs (E_S_app) , ’ descend ’ ) ; 62 E_S_app = E_S_app(n) 63 64 %% Reconstructing eigenvectors of S : 65 EV_S = U’ ∗ eigvec_A 66 eigvec_S 67 ERR = norm( abs (EV_S) − abs ( eigvec_S ) ) 45
  • 47. The eigenvalues of a matrix A of size 7 × 7 are: E_A = 230.1355 61.1446 43.4887 28.9760 -9.5426 -2.4386 -1.7637 The eigenvalues of a matrix S of full size 7 × 7 are: E_S = 230.1355 61.1446 43.4887 28.9760 -9.5426 -2.4386 -1.7637 Next, we perform the approximations of the eigenvalues of A by increasing the rank r from 1 to 7. For r = 1: E_S_app = 32.9295 For r = 2: E_S_app = 145.9173 -1.5572 For r = 3: E_S_app = 145.9611 30.7503 -3.4328 For r = 4: E_S_app = 191.3768 54.7387 29.5563 -3.4428 For r = 5: E_S_app = 231.9506 59.2763 29.5688 7.8134 -3.7689 For r = 6: E_S_app = 227.7946 60.9257 32.8918 14.4950 -9.8219 -2.3767 For r = 7: E_S_app = 230.1355 61.1446 43.4887 28.9760 -9.5426 -2.4386 -1.7637 The eigenvectors of matrices A are different, however, they are linked by the following relationship: eigenvectors(S) = U−1 eigenvectors(A) (5.23) This relationship is checked in the Matlab code above and produces the following results. The eigenvectors of matrix S: EV_S = 0.4488 0.4735 -0.1361 -0.1386 0.4951 0.6765 0.4071 -0.6016 -0.5088 -0.0648 -0.3259 -0.0558 0.3550 0.0981 -0.0148 0.2253 -0.2696 -0.8201 0.1092 -0.2603 -0.0056 0.5369 -0.6507 -0.3023 -0.0148 0.2808 0.1577 -0.4903 0.3209 -0.1834 0.3233 -0.2662 -0.4042 -0.2030 0.4880 0.1544 -0.0895 0.6101 -0.0923 0.7003 -0.5239 0.1950 0.1459 0.0346 0.5800 -0.3499 0.0849 0.0900 -0.5550 46
• 48. The eigenvectors of matrix S reconstructed with the relation (5.23): eigvec_S =
-0.4488 0.4735 -0.1361 0.1386 0.4951 0.6765 -0.4071
0.6016 -0.5088 -0.0648 0.3259 -0.0558 0.3550 -0.0981
0.0148 0.2253 -0.2696 0.8201 0.1092 -0.2603 0.0056
-0.5369 -0.6507 -0.3023 0.0148 0.2808 0.1577 0.4903
-0.3209 -0.1834 0.3233 0.2662 -0.4042 -0.2030 -0.4880
-0.1544 -0.0895 0.6101 0.0923 0.7003 -0.5239 -0.1950
-0.1459 0.0346 0.5800 0.3499 0.0849 0.0900 0.5550
Notice that they are the same in absolute value (some of them only differ in sign). The L2 norm error computed from the absolute values of the above matrices is: ERR = 1.7422e-014 5.6 A Note on the Linear Dynamical Systems In linear dynamical systems, any kth column of the matrix D can be represented by the initial column D0 multiplied by the kth power of the linear propagator matrix S:
Dk = S^k D0 (5.24)
Substituting the eigendecomposition of the matrix S, we get:
Dk = Φ M^k Φ^{-1} D0 (5.25)
Writing the above equation as a sum, we get:
Dk = Σ_{j=1}^{r} φ_j µ_j^k b_j (5.26)
Substituting µ_j = e^{ω_j}:
Dk = Σ_{j=1}^{r} φ_j (e^{ω_j})^k b_j = Σ_{j=1}^{r} φ_j e^{ω_j k} b_j (5.27)
Since every column of the matrix D is linked to a particular moment in time, the integer k can be written in terms of the time it corresponds to, divided by the timestep of our data: k = t_i/∆t. Substituting this back into (5.27), we get:
D_r = Σ_{j=1}^{r} φ_j e^{ω_j t/∆t} b_j (5.28) 47
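A minimal Matlab sketch of this reconstruction, assuming that the DMD spatial modes Phi, the amplitudes (called b2 in the code of Appendix A) and the continuous frequencies omega = log(mu)/dt have already been computed, might be as follows; with this definition of omega, exp(omega*t) equals the factor e^(ωj t/∆t) of equation (5.28).

% Reconstruct the k-th snapshot from the DMD triplets (Phi, omega, b):
% D_k = sum_j phi_j * exp(omega_j * t_k) * b_j
b   = b2;                            % amplitudes, as computed in Appendix A
k   = 1;                             % illustrative snapshot index
t_k = (k - 1) * dt;                  % time instant of the k-th column (time starts at 0)
D_k = real( Phi * (b .* exp(omega * t_k)) );

% Or the whole reconstructed data matrix at once, for the time row vector t:
D_DMD = real( Phi * diag(b) * exp(omega * t) );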
  • 49. Appendix A Pulsating Poiseuille Flow Listing A.1: Matlab code to approximate the Poiseuille flow with three different methods. pulsating_poiseuille_approximations.m 1 %% Pulsating P o i s e u i l l e Flow ======================================== 2 % Approximating the v e l o c i t y p r o f i l e with eigenfunction expansion , 3 % POD and DMD. 4 % =================================================================== 5 c l c 6 c l e a r 7 c l o s e a l l 8 9 %% I n i t i a l data : ==================================================== 10 % USER INPUT ======================================================== 11 TIME = 10; % t o t a l time of the animation 12 W = 10; % Womersley number 13 Pa = 60; % dimensionless pressure c o e f f i c i e n t 14 FONT = 10; % f o n t s i z e fo r graphs 15 modes = 5; % number of modes f or a l l approximations (max 7) 16 dt = 0 . 0 5 ; % time step 17 dy = 0 . 0 5 ; % space step 18 % END OF USER INPUT ================================================= 19 MODES = modes ; % number of modes in eigenfunction approximation 20 RANK = modes ; % number of modes in POD approximation 21 r = modes ; % number of modes in DMD approximation 22 nm = modes ; % number of f i r s t modes to draw 23 24 % D efin ition of colours f o r pl ot t in g : 25 red = [236 68 28] ./ 255; % used f or eigenfunction 26 blue = [3 105 172] ./ 255; % used f o r POD 27 green = [34 139 34] ./ 255; % used f o r DMD 28 29 % I n i t i a l i z e error matrices : 30 ERROR_EIG = zeros (MODES, 1) ; 31 ERROR_POD = zeros (RANK, 1) ; 32 ERROR_DMD = zeros ( r , 1) ; 33 34 %% Analytical r e s u l t from asymptotic complex solution : ============== 35 t = [ 0 : dt :TIME ] ; n_t = length ( t ) ; % time d i s c r e t i z a t i o n 36 y = [ −1:dy : 1 ] ; n_y = length (y) ; % space d i s c r e t i z a t i o n 37 u_A_r = zeros (n_y, n_t) ; % i n i t i a l i z e solution 48
  • 50. 38 39 % Construct r e a l and imaginary parts : 40 f or j = 1: length ( t ) 41 42 Y = (1 − cosh (W∗ sqrt (1 i ) .∗ y) . / ( cosh (W∗ sqrt (1 i ) ) ) ) ∗1 i ∗Pa/W.^2; 43 u_A_r( : , j ) = r e a l (Y.∗ exp (1 i ∗ t ( j ) ) ) ; 44 end 45 46 % Adding the mean flow component : 47 u_Mb = (1 − y .^2) ∗ 0 . 5 ; % mean flow 48 u_M = repmat (u_Mb, length ( t ) , 1) ; % repeat solution 49 u_A_R = u_M’ + u_A_r; % r e a l a n a l y t i c a l solution 50 51 %% Eigenfunction approximation : ===================================== 52 % Extended solution matrix ( fo r p lo t ti ng ) : 53 u_T_extend = zeros (n_y, MODES∗n_t) ; 54 55 % Calculate the solution matrix f o r each mode : 56 f or j = 1 : 1 :MODES 57 58 n = [ 1 : 1 : j ] ; % matrix of modes to include in the summation 59 60 % I n i t i a l i z e matrices : 61 Y = zeros (n_y, j ) ; % i n i t i a l i z e s p a c i a l basis 62 A_n = zeros ( j , j ) ; % i n i t i a l i z e amplitude matrix 63 T = zeros (n_t , j ) ; % i n i t i a l i z e temporal basis 64 U_A = zeros (n_t , n_y) ; % i n i t i a l i z e PDE solution 65 66 % Construct s p a t i a l basis : 67 f or i = 1 : 1 : length (n) 68 69 N = 2∗n( i ) − 1; % odd number in the s e r i e s 70 Y( : , i ) = cos (N∗ pi ∗y/2) ; 71 end 72 73 % Construct the amplitudes : 74 f or i = 1 : 1 : length (n) 75 76 N = 2∗n( i ) − 1; % odd number in the s e r i e s 77 A_n( i , i ) = (16∗Pa) / (N∗ pi ∗ sqrt ((2∗W)^4 + N^4∗ pi ^4) ) ; 78 end 79 80 % Construct the temporal modes : 81 f or i = 1 : 1 : length (n) 82 83 N = 2∗n( i ) − 1; % odd number in the s e r i e s 84 T( : , i ) = (−1)^(n( i ) ) ∗ cos ( t − atan ((4∗W^2) / (N^2∗ pi ^2) ) ) ; 85 end 86 87 % Assembly solution : 88 U_A = Y ∗ A_n ∗ T’ ; 89 90 % Adding the mean flow component : 91 u_Mb = (1 − y .^2) ∗ 0 . 5 ; % mean flow 49
  • 51. 92 u_M = repmat (u_Mb, length ( t ) , 1) ; % repeat solution 93 u_T = U_A + u_M’ ; % eigenfunction solution 94 95 % Paste solution to the large matrix ( fo r p lo t ti ng ) : 96 u_T_extend ( : , (( j − 1) ∗n_t + 1) : 1 : ( j ∗n_t) ) = u_T; 97 98 % Compute the error of the current approximation : 99 ERROR_EIG( j ) = abs (norm(u_T − u_A_R) ) ; 100 end 101 102 % Obtain elements from the diagonal : 103 sigma_A_n = diag (A_n) ; 104 105 %% POD approximation : =============================================== 106 % SVD of the o r i g i n a l solution matrix : 107 [U_POD, S_POD, V_POD] = svd (u_A_R) ; 108 109 f or j = 1 : 1 :RANK 110 111 % Create a POD approximation : 112 U_POD_approx = U_POD( : , 1 : 1 : j ) ∗ . . . 113 S_POD( 1 : 1 : j , 1 : 1 : j ) ∗ V_POD( : , 1 : 1 : j ) ’ ; 114 % Compute the error of the current approximation : 115 ERROR_POD( j ) = abs (norm(U_POD_approx − u_A_R) ) ; 116 end 117 118 % Obtain elements from the diagonal : 119 sigma_POD = diag (S_POD) ; 120 121 %% DMD approximation : =============================================== 122 % Extended solution matrix ( f or p lo t ti ng ) : 123 U_DMD_extend = zeros (n_y, r ∗n_t) ; 124 125 % Calculate the solution matrix f o r each mode : 126 f or j = 1 : 1 : r 127 128 % Define matrix D: 129 D = u_A_R; 130 131 % Construct data s e t s X1 and X2: 132 X1 = D( : , 1: end−1) ; X2 = D( : , 2: end ) ; 133 134 % Compute the POD (SVD) of X1: 135 [U, Sigma , V] = svd (X1, ’ econ ’ ) ; 136 137 % Approximate matrix X1 keeping only r elements of the sum : 138 U = U( : , 1 : 1 : j ) ; % retain only r modes in U 139 Sigma = Sigma ( 1 : j , 1: j ) ; % retain only r modes in Sigma 140 V = V( : , 1 : 1 : j ) ; % retain only r modes in V 141 142 % Construct the propagator S : 143 S = U’ ∗ X2 ∗ V ∗ inv ( Sigma ) ; 144 145 % Compute eigenvalues and eigenvectors of the matrix S : 50
  • 52. 146 [ PHI , MU] = eig (S) ; 147 148 % Extract f r e q u e n c i e s from the diagonal : 149 mu = diag (MU) ; 150 151 % Extract r e a l and imaginary parts of f r e q u e n c i e s : 152 lambda_r = r e a l (mu) ; lambda_i = imag (mu) ; 153 154 % Frequency in terms of pulsation : 155 omega = log (mu) /dt ; 156 157 % Compute the DMD s p a t i a l modes : 158 Phi = U ∗ PHI ; 159 160 % Compute amplitudes with the least −squares method : 161 b2 = inv ( Phi ’ ∗ Phi ) ∗ Phi ’ ∗ X1( : , 1 ) ; 162 163 % Compute the DMD temporal modes : 164 T_modes = zeros ( j , n_t) ; 165 f or i = 1: length ( t ) 166 167 T_modes ( : , i ) = b2 .∗ exp (omega ∗ t ( i ) ) ; 168 end 169 170 % Get the f u l l DMD reconstruction : 171 U_DMD = r e a l ( Phi ∗ T_modes) ; 172 173 % Paste solution to the large matrix ( f or p lo t ti ng ) : 174 U_DMD_extend( : , (( j − 1) ∗n_t + 1) : 1 : ( j ∗n_t) ) = U_DMD; 175 176 % Compute the error of the current approximation : 177 ERROR_DMD( j ) = abs (norm(U_DMD − u_A_R) ) ; 178 179 %% Plot of the f i r s t modes : ===================================== 180 hfig1 = f i g u r e (1) ; 181 set ( hfig1 , ’ units ’ , ’ normalized ’ , ’ outerposition ’ , [0 0 1 1 ] ) ; 182 183 % DMD s p a t i a l modes : 184 subplot (2 , nm , j ) ; 185 plot (y , r e a l ( Phi ( : , j ) ) /norm( Phi ) , ’ color ’ , green , . . . 186 ’ LineStyle ’ , ’ : ’ , ’ LineWidth ’ , 1.5) ; 187 [M] = AXIS(FONT) ; 188 set ( gcf , ’ color ’ , ’w ’ ) ; 189 t i t l e ( [ ’ phi_{ ’ , num2str ( j ) , ’ } ’ ] ) ; 190 hold on 191 192 % DMD temporal modes : 193 subplot (2 , nm, j+nm) ; 194 plot ( t , r e a l (T_modes ( 1 , : ) ) /norm(T_modes) , ’ color ’ , . . . 195 green , ’ LineStyle ’ , ’− ’ ) ; 196 [M] = AXIS(FONT) ; 197 set ( gcf , ’ color ’ , ’w ’ ) ; 198 t i t l e ( [ ’ psi_{ ’ , num2str ( j ) , ’ } ’ ] ) ; 199 hold on 51
  • 53. 200 end 201 202 %% Plot of the f i r s t modes : ========================================= 203 hfig1 = f i g u r e (1) ; 204 205 % Plot f o r modes from eigenfunction expansion : 206 % Spatial : 207 f or j = 1 : 1 :nm 208 209 subplot (2 , nm, j ) ; 210 plot (y , Y( : , j ) /norm(Y) , ’ color ’ , red , ’ LineStyle ’ , ’− ’ ) ; 211 hold on 212 [M] = AXIS(FONT) ; 213 set ( gcf , ’ color ’ , ’w ’ ) ; 214 t i t l e ( [ ’ phi_{ ’ , num2str ( j ) , ’ } ’ ] ) ; 215 end 216 217 % Temporal : 218 f or j = 1 : 1 :nm 219 220 subplot (2 , nm, j+nm) ; 221 plot ( t , T( : , j ) /norm(T) , ’ color ’ , red , ’ LineStyle ’ , ’− ’ ) ; 222 hold on 223 [M] = AXIS(FONT) ; 224 set ( gcf , ’ color ’ , ’w ’ ) ; 225 t i t l e ( [ ’ psi_{ ’ , num2str ( j ) , ’ } ’ ] ) ; 226 end 227 228 % Plot f o r modes from POD: 229 % Spatial : 230 f or j = 1 : 1 :nm 231 232 subplot (2 , nm, j ) ; 233 plot (y , U_POD( : , j ) /norm(U_POD) , ’ color ’ , blue , ’ LineStyle ’ , ’− ’ ) 234 [M] = AXIS(FONT) ; 235 set ( gcf , ’ color ’ , ’w ’ ) ; 236 end 237 238 % Temporal : 239 f or j = 1 : 1 :nm 240 241 subplot (2 , nm, j+nm) ; 242 plot ( t , V_POD( : , j ) /norm(V_POD) , ’ color ’ , blue , ’ LineStyle ’ , ’− ’ ) 243 [M] = AXIS(FONT) ; 244 set ( gcf , ’ color ’ , ’w ’ ) ; 245 end 246 247 % Save the plot : 248 print ( ’−dpng ’ , ’−r500 ’ , [ ’Modes_T ’ , num2str (TIME) , ’ . png ’ ] ) 249 250 %% Plot f or amplitude decay f or both approximations : ================ 251 hfig2 = f i g u r e (2) ; 252 LABEL = { ’ Eigenfunction . . . ’ , ’POD’ }; 253 MARKER = { ’ o ’ , ’ s ’ }; 52
  • 54. 254 255 % Get the f i r s t element to normalize the plot : 256 norm_POD = sigma_POD(1) ; 257 norm_EIG = sigma_A_n(1) ; 258 259 % Use only MODES number of terms from the diagonal : 260 Line_POD = sigma_POD( 1 :MODES) /norm_POD; 261 Line_A_n = sigma_A_n ( 1 :MODES) /norm_EIG ; 262 263 f or j = 1 : 1 :MODES 264 265 % Plot f o r eigenfunction amplitude decay : 266 plot ( j , sigma_A_n( j ) /norm_EIG, MARKER{1} , ’ color ’ , red ) ; 267 [M] = AXIS(FONT) ; 268 set ( gcf , ’ color ’ , ’w ’ ) ; 269 hold on 270 271 % Plot f o r POD amplitude decay : 272 plot ( j , sigma_POD( j ) /norm_POD, MARKER{2} , ’ color ’ , blue ) ; 273 [M] = AXIS(FONT) ; 274 set ( gcf , ’ color ’ , ’w ’ ) ; 275 legend (LABEL) ; 276 l i n e ( [ 1 :MODES] , [ Line_A_n ] , ’ color ’ , red ) ; 277 l i n e ( [ 1 :MODES] , [Line_POD] , ’ color ’ , blue ) ; 278 t i t l e ( [ ’ Amplitude decay rate ( normalized ) ’ ] ) ; 279 ylim ([ −0.1 1 . 1 ] ) ; 280 xlim ( [ 0 . 9 ( modes + 0.1) ] ) ; 281 end 282 283 % Save the plot : 284 print ( ’−dpng ’ , ’−r500 ’ , ’ Amplitude_decay . png ’ ) 285 286 %% Plot of the eigenvalues c i r c l e : ================================== 287 hfig3 = f i g u r e (3) ; 288 289 % D efin ition of the c i r c l e in a complex plane : 290 radius = 1; z_Circle = radius ∗ exp ( ( 0 : 0 . 1 : ( 2 ∗ pi ) ) ∗ sqrt (−1) ) ; 291 292 % Eigenvalues and complex c i r c l e : 293 stem3 (lambda_r , lambda_i , r e a l ( abs ( b2 ) ) , ’ color ’ , green ) ; 294 hold on 295 plot ( r e a l ( z_Circle ) , imag ( z_Circle ) , ’k−’ ) 296 [M] = AXIS(FONT) ; 297 set ( gcf , ’ color ’ , ’w ’ ) ; 298 t i t l e ( [ ’ Eigenvalues c i r c l e fo r DMD with r = ’ , num2str ( r ) ] ) ; 299 300 % Save the plot : 301 print ( ’−dpng ’ , ’−r500 ’ , ’ Eigenvalues_circle . png ’ ) 302 303 %% Movie of a pulsating v e l o c i t y p r o f i l e with approximations : ======= 304 hfig4 = f i g u r e (4) ; 305 MARKER = { ’ : ’ , ’−−’ , ’ −. ’ , ’−−’ , ’−−’ , ’−−’ , ’−−’ }; 306 LABEL = { ’ Original . . ’ , ’U_1 ’ , ’U_2 ’ , ’U_3 ’ }; 307 53
  • 55. 308 f or i = 1 : 1 : length ( t ) 309 310 hold o f f 311 312 % Plot f o r eigenfunction approximation : 313 subplot (1 ,3 ,1) 314 ORIGINAL = u_A_R( : , i ) ; 315 plot (y , ORIGINAL, ’k−’ , ’ linewidth ’ , 1) ; 316 [M] = AXIS(FONT) ; 317 set ( gcf , ’ color ’ , ’w ’ ) ; 318 hold on 319 f or j = 1 : 1 :MODES 320 321 EIGEN = u_T_extend ( : , (n_t∗( j − 1) + i ) ) ; 322 plot (y , EIGEN, MARKER{ j } , ’ color ’ , red ) ; 323 [M] = AXIS(FONT) ; 324 set ( gcf , ’ color ’ , ’w ’ ) ; 325 end 326 t i t l e ( [ ’ Eigenfunction ’ ] ) ; 327 i f modes <= 3 328 329 legend (LABEL) ; 330 end 331 ylim ([ −1 1 . 5 ] ) ; 332 xlim ([ −1 1 ] ) ; 333 hold o f f 334 335 % Plot of the POD approximation : 336 subplot (1 ,3 ,2) 337 plot (y , ORIGINAL, ’k−’ , ’ linewidth ’ , 1) ; 338 [M] = AXIS(FONT) ; 339 set ( gcf , ’ color ’ , ’w ’ ) ; 340 hold on 341 f or j = 1 : 1 :RANK 342 343 U_POD_approx = U_POD( : , 1 : 1 : j ) ∗ . . . 344 S_POD( 1 : 1 : j , 1 : 1 : j ) ∗ V_POD( : , 1 : 1 : j ) ’ ; 345 POD = U_POD_approx( : , i ) ; 346 plot (y , POD, MARKER{ j } , ’ color ’ , blue ) ; 347 [M] = AXIS(FONT) ; 348 set ( gcf , ’ color ’ , ’w ’ ) ; 349 end 350 t i t l e ( [ ’POD’ ] ) ; 351 i f modes <= 3 352 353 legend (LABEL) ; 354 end 355 ylim ([ −1 1 . 5 ] ) ; 356 xlim ([ −1 1 ] ) ; 357 hold o f f 358 359 % Plot of the DMD approximation : 360 subplot (1 ,3 ,3) 361 plot (y , ORIGINAL, ’k−’ , ’ linewidth ’ , 1) ; 54
  • 56. 362 [M] = AXIS(FONT) ; 363 set ( gcf , ’ color ’ , ’w ’ ) ; 364 hold on 365 f or j = 1 : 1 : r 366 367 DMD_APPROX = U_DMD_extend( : , (n_t∗( j −1)+i ) ) ; 368 plot (y , DMD_APPROX, MARKER{ j } , ’ color ’ , green ) ; 369 [M] = AXIS(FONT) ; 370 set ( gcf , ’ color ’ , ’w ’ ) ; 371 end 372 t i t l e ( [ ’DMD’ ] ) ; 373 i f modes <= 3 374 375 legend (LABEL) ; 376 end 377 ylim ([ −1 1 . 5 ] ) ; 378 xlim ([ −1 1 ] ) ; 379 hold o f f 380 drawnow 381 382 % Save the g i f : 383 frame = getframe (4) ; 384 im = frame2im ( frame ) ; 385 [ imind , cm] = rgb2ind (im , 256) ; 386 filename = ’ Pulsating_Poiseuille_Movie . g i f ’ ; 387 i f i == 1; 388 imwrite ( imind , cm, filename , ’ g i f ’ , ’ Loopcount ’ , i n f ) ; 389 e l s e 390 imwrite ( imind , cm, filename , ’ g i f ’ , ’ WriteMode ’ , . . . 391 ’ append ’ , ’ DelayTime ’ , 0.1) ; 392 end 393 end 394 395 %% Plot of the error of each approximation : ========================= 396 hfig5 = f i g u r e (5) ; 397 398 % Maximum of each error : 399 max_eig = max(ERROR_EIG) ; 400 max_pod = max(ERROR_POD) ; 401 max_dmd = max(ERROR_DMD) ; 402 403 f or j = 1 : 1 : modes 404 405 LABEL = { ’ Eigenfunction . . . ’ , ’POD’ , ’DMD’ }; 406 plot ( j , ERROR_EIG( j ) /max_eig , ’ color ’ , red , ’ LineStyle ’ , ’ o ’ ) 407 [M] = AXIS(FONT) ; 408 set ( gcf , ’ color ’ , ’w ’ ) ; 409 hold on 410 plot ( j , ERROR_POD( j ) /max_pod, ’ color ’ , blue , ’ LineStyle ’ , ’ s ’ ) 411 [M] = AXIS(FONT) ; 412 set ( gcf , ’ color ’ , ’w ’ ) ; 413 plot ( j , ERROR_DMD( j ) /max_dmd, ’ color ’ , green , ’ LineStyle ’ , ’^ ’ ) 414 [M] = AXIS(FONT) ; 415 set ( gcf , ’ color ’ , ’w ’ ) ; 55
  • 57. 416 legend (LABEL) ; 417 l i n e ( [ 1 : modes ] , [ERROR_EIG] . / max_eig , ’ color ’ , red ) ; 418 l i n e ( [ 1 : modes ] , [ERROR_POD] . / max_pod, ’ color ’ , blue ) ; 419 l i n e ( [ 1 : modes ] , [ERROR_DMD] . /max_dmd, ’ color ’ , green ) ; 420 t i t l e ( [ ’ Normalized error of each approximation ’ ] ) ; 421 ylim ([ −0.1 1 . 1 ] ) ; 422 xlim ( [ 0 . 9 ( modes + 0.1) ] ) ; 423 end 424 425 % Save the plot : 426 print ( ’−dpng ’ , ’−r500 ’ , ’ Error . png ’ ) 427 428 %% Ending : ========================================================== 429 c l o s e a l l 430 c l c 56
Appendix B

1D and 2D POD Functions

Listing B.1: Matlab function for performing 1D POD. POD_1D.m

%% POD_1D function ==================================================
% POD done on a 1D data set created by:
% - matrix D
% - vector y
% ===================================================================

function [D_POD, U_POD, S_POD, V_POD] = POD_1D(D, r)
    % SVD of the original solution matrix:
    [U_POD, S_POD, V_POD] = svd(D);

    % POD approximation of rank r:
    D_POD = U_POD(:, 1:1:r) * S_POD(1:1:r, 1:1:r) * V_POD(:, 1:1:r)';
end
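The listing above can be exercised with a short driver. The following sketch is not part of the produced codes; it only illustrates a possible call, assuming a snapshot matrix D (one velocity profile per column) is already available in the workspace and taking r = 3 as an arbitrary rank:

% Illustrative driver for POD_1D (D is assumed to be an n_y x n_t snapshot matrix):
r = 3;                                       % truncation rank (arbitrary choice)
[D_POD, U_POD, S_POD, V_POD] = POD_1D(D, r);
rel_err = norm(D - D_POD)/norm(D);           % relative error of the rank-3 approximation
disp(['Relative error: ', num2str(rel_err)]);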
Listing B.2: Matlab function for performing 2D POD. POD_2D.m

%% POD_2D function ==================================================
% POD done on a 2D data set created by:
% - matrix U
% - matrix V
% - vector X
% - vector Y
% ===================================================================

function [U_POD, V_POD, UU_POD, VU_POD, UV_POD, VV_POD] = ...
    POD_2D(U, V, X, Y, r)

%% Check the sizes of matrices U, V, X and Y: =======================
len_U = size(U, 1);
len_V = size(V, 1);
wid_U = size(U, 2);
wid_V = size(V, 2);
len_X = length(X);
len_Y = length(Y);

if len_U == len_V && len_U == len_X * len_Y && wid_U == wid_V
%% Initial definitions: =============================================
% Changing NaN to zeros for the solid parts:
V(isnan(V)) = 0;
U(isnan(U)) = 0;

SPACE_X = X;               % extracting space X-coordinates
SPACE_Y = Y;               % extracting space Y-coordinates
n_x = length(SPACE_X);     % number of X-coordinates
n_y = length(SPACE_Y);     % number of Y-coordinates
n_t = size(V, 2);          % number of time steps

%% POD approximation: ===============================================
% SVD of the original solution matrices:
[UU_POD, SU_POD, VU_POD] = svd(U, 'econ');
[UV_POD, SV_POD, VV_POD] = svd(V, 'econ');

% POD approximation of rank r:
U_POD = UU_POD(:, 1:1:r) * SU_POD(1:1:r, 1:1:r) * VU_POD(:, 1:1:r)';
V_POD = UV_POD(:, 1:1:r) * SV_POD(1:1:r, 1:1:r) * VV_POD(:, 1:1:r)';

% Change zeros to NaN to re-create solid parts:
U_POD(U_POD == 0) = NaN;
V_POD(V_POD == 0) = NaN;

else
    disp(['Wrong matrix dimensions.'])

    U_POD = NaN; V_POD = NaN; UU_POD = NaN; VU_POD = NaN; ...
        UV_POD = NaN; VV_POD = NaN;
end
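A possible call of POD_2D is sketched below. It is only an illustration, not one of the produced codes; it assumes that U and V store one flattened velocity-component snapshot per column and that X and Y are the corresponding coordinate vectors:

% Illustrative driver for POD_2D (U, V, X, Y assumed to be loaded beforehand):
r = 5;                                                % truncation rank (arbitrary choice)
[U_POD, V_POD, UU_POD, VU_POD, UV_POD, VV_POD] = ...
    POD_2D(U, V, X, Y, r);
% The outputs can be passed to the plotting functions of Appendix D, e.g.:
% POD_2D_PLOT_GIF(U_POD, V_POD, r, X, Y, dt, pwd);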
Appendix C

1D and 2D DMD Functions

Listing C.1: Matlab function for performing 1D DMD. DMD_1D.m

%% DMD_1D function ==================================================
% DMD done on a 1D data set created by:
% - matrix D
% - vector y
% ===================================================================

function [lambda_r, lambda_i, D_DMD_extend, bD, Phi_extend, ...
    T_modes_extend] = DMD_1D(D, y, dt, r)

%% Check the sizes of matrices D and y: =============================
len_D = size(D, 1);
len_y = length(y);

if len_D == len_y
%% Initial definitions: =============================================
n_t = size(D, 2);    % number of time steps
n_y = size(D, 1);    % number of space y-coordinates

%% Create extended output matrices to contain all rank ==============
% approximations up to r:
D_DMD_extend = zeros(n_y, r*n_t);           % solution
T_modes_extend = zeros((1 + r)*r/2, n_t);   % temporal structures
Phi_extend = zeros(n_y, (1 + r)*r/2);       % spatial structures
start = 1;

%% DMD approximation: ===============================================
% Construct data sets X1 and X2:
X1 = D(:, 1:end - 1); X2 = D(:, 2:end);

for i = 1:1:r
    % Compute the POD (SVD) of X1:
    [UD, SigmaD, VD] = svd(X1, 'econ');

    % Approximate matrix X1 keeping only the first i elements of the sum:
    UD = UD(:, 1:1:i);              % filter: retain only i modes in U
    SigmaD = SigmaD(1:1:i, 1:1:i);  % filter: retain only i modes in Sigma
    VD = VD(:, 1:1:i);              % filter: retain only i modes in V

    % Construct the propagator S:
    S = UD' * X2 * VD * inv(SigmaD);

    % Compute eigenvalues and eigenvectors of the matrix S:
    [PHI, MU] = eig(S);   % eigenvalue decomposition of matrix S

    % Extract frequencies:
    mu = diag(MU);

    % Extract real and imaginary parts of the frequencies:
    lambda_r = real(mu); lambda_i = imag(mu);

    % Frequency in terms of pulsation:
    omega = log(mu)/dt;   % computing natural log

    % Compute DMD spatial modes:
    Phi = UD * PHI;

    % Compute DMD amplitudes with the least-squares method:
    bD = inv(Phi' * Phi) * Phi' * X1(:, 1);

    % Define temporal behaviour:
    TIME = [0:1:n_t - 1] * dt;

    % Initialize temporal modes matrix:
    T_modes = zeros(i, n_t);

    % Compute DMD temporal modes:
    for j = 1:length(TIME)
        T_modes(:, j) = bD .* exp(omega * TIME(j));
    end

    % Get the real part of the DMD reconstruction:
    D_DMD = real(Phi * T_modes);

    % Paste current solutions into the right place in the extended matrices:
    D_DMD_extend(:, ((i - 1)*n_t + 1):1:(i*n_t)) = D_DMD;
    start = start + (i - 1);
    T_modes_extend((start:1:start + (i - 1)), :) = real(T_modes);
    Phi_extend(:, (start:1:start + (i - 1))) = real(Phi);
end

else
    disp(['Wrong matrix dimensions.'])

    lambda_r = NaN; lambda_i = NaN; D_DMD = NaN; bD = NaN;
end
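A call of DMD_1D might look as follows. This is only an illustration (not one of the produced codes), with placeholder values for the time step and the rank, and it assumes the snapshot matrix D and the coordinate vector y are already defined:

% Illustrative driver for DMD_1D (D, y assumed available; dt and r are placeholders):
dt = 0.1;     % sampling time step (placeholder value)
r  = 4;       % maximum rank of the reconstruction (placeholder value)
[lambda_r, lambda_i, D_DMD_extend, bD, Phi_extend, T_modes_extend] = ...
    DMD_1D(D, y, dt, r);
% The rank-r reconstruction occupies the last block of n_t columns:
n_t = size(D, 2);
D_DMD_r = D_DMD_extend(:, (r - 1)*n_t + 1 : r*n_t);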
Listing C.2: Matlab function for performing 2D DMD. DMD_2D.m

%% DMD_2D function ==================================================
% DMD done on a 2D data set created by:
% - matrix U
% - matrix V
% - vector X
% - vector Y
% ===================================================================

function [lambdaU_r, lambdaU_i, lambdaV_r, lambdaV_i, U_DMD, ...
    V_DMD, bU, bV, Phi_U, Phi_V, T_modesU, T_modesV] ...
    = DMD_2D(U, V, X, Y, dt, r)

%% Check the sizes of matrices U, V, X and Y: =======================
len_U = size(U, 1);
len_V = size(V, 1);
wid_U = size(U, 2);
wid_V = size(V, 2);
len_X = length(X);
len_Y = length(Y);

if len_U == len_V && len_U == len_X * len_Y && wid_U == wid_V
%% Initial definitions: =============================================
% Changing NaN to zeros for the solid parts:
V(isnan(V)) = 0;
U(isnan(U)) = 0;

SPACE_X = X;               % extracting space X-coordinates
SPACE_Y = Y;               % extracting space Y-coordinates
n_x = length(SPACE_X);     % number of X-coordinates
n_y = length(SPACE_Y);     % number of Y-coordinates
n_t = size(V, 2);          % number of time steps

%% DMD approximation: ===============================================
% Construct data sets X1 and X2:
X1U = U(:, 1:end - 1); X2U = U(:, 2:end);
X1V = V(:, 1:end - 1); X2V = V(:, 2:end);

% Compute the POD (SVD) of X1:
[UU, SigmaU, VU] = svd(X1U, 'econ');
[UV, SigmaV, VV] = svd(X1V, 'econ');

% Approximate matrix X1 keeping only r elements of the sum:
UUn = UU(:, 1:1:r);           % filter: retain only r modes in U
SigmaUn = SigmaU(1:r, 1:r);   % filter: retain only r modes in Sigma
VUn = VU(:, 1:1:r);           % filter: retain only r modes in V

UVn = UV(:, 1:1:r);           % filter: retain only r modes in U
SigmaVn = SigmaV(1:r, 1:r);   % filter: retain only r modes in Sigma
VVn = VV(:, 1:1:r);           % filter: retain only r modes in V

% Construct the propagator S:
SU = UUn' * X2U * VUn * inv(SigmaUn);
SV = UVn' * X2V * VVn * inv(SigmaVn);

% Compute eigenvalues and eigenvectors of the matrix S:
[PHIU, MUU] = eig(SU);   % eigenvalue decomposition of matrix SU
[PHIV, MUV] = eig(SV);   % eigenvalue decomposition of matrix SV

% Extract frequencies:
muU = diag(MUU);
muV = diag(MUV);

% Extract real and imaginary parts of the frequencies:
lambdaU_r = real(muU);   % real
lambdaU_i = imag(muU);   % imaginary
lambdaV_r = real(muV);   % real
lambdaV_i = imag(muV);   % imaginary

% Frequency in terms of pulsation:
omegaU = log(muU)/dt;    % computing natural log
omegaV = log(muV)/dt;    % computing natural log

% Compute DMD spatial modes:
Phi_U = UUn * PHIU;
Phi_V = UVn * PHIV;

% Compute DMD amplitudes with the least-squares method:
bU = inv(Phi_U' * Phi_U) * Phi_U' * X1U(:, 1);
bV = inv(Phi_V' * Phi_V) * Phi_V' * X1V(:, 1);

% Define temporal behaviour:
TIME = [0:1:n_t - 1] * dt;

% Initialize temporal modes matrices:
T_modesU = zeros(r, n_t);
T_modesV = zeros(r, n_t);

% Compute DMD temporal modes:
for i = 1:length(TIME)
    T_modesU(:, i) = bU .* exp(omegaU * TIME(i));
    T_modesV(:, i) = bV .* exp(omegaV * TIME(i));
end

% Get the real part of the DMD reconstruction:
U_DMD = real(Phi_U * T_modesU);   % real part taken
V_DMD = real(Phi_V * T_modesV);   % real part taken

% Change zeros to NaN to re-create solid parts:
U_DMD(U_DMD == 0) = NaN;
V_DMD(V_DMD == 0) = NaN;

else
    disp(['Wrong matrix dimensions.'])

    lambdaU_r = NaN; lambdaU_i = NaN; lambdaV_r = NaN; ...
        lambdaV_i = NaN; U_DMD = NaN; V_DMD = NaN; bU = NaN; ...
        bV = NaN; Phi_U = NaN; Phi_V = NaN; T_modesU = NaN; ...
        T_modesV = NaN;
end
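The 2D variant can be called in the same spirit. The sketch below is only an illustration with placeholder values, not one of the produced codes; it also shows how the returned eigenvalues could be compared with the unit circle, as in the eigenvalue plots of Chapter 2:

% Illustrative driver for DMD_2D (U, V, X, Y assumed available; dt and r are placeholders):
dt = 0.01;    % acquisition time step (placeholder value)
r  = 10;      % truncation rank (placeholder value)
[lambdaU_r, lambdaU_i, lambdaV_r, lambdaV_i, U_DMD, V_DMD, ...
    bU, bV, Phi_U, Phi_V, T_modesU, T_modesV] = DMD_2D(U, V, X, Y, dt, r);

% Eigenvalues of the propagator compared with the unit circle:
theta = linspace(0, 2*pi, 200);
plot(cos(theta), sin(theta), 'k--'); hold on
plot(lambdaU_r, lambdaU_i, 'bo');
axis equal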
Appendix D

1D and 2D POD Results Plotting

Listing D.1: Matlab function for plotting the .gif file of the 1D POD approximation. POD_1D_PLOT_GIF.m

%% POD_1D_PLOT_GIF function =========================================
% This function plots the .gif file of the approximation
% ===================================================================

function POD_1D_PLOT_GIF(D, rank_POD, U_POD, S_POD, V_POD, ...
    y, dt, Curr_Dir)

hfig1 = figure(1);
MARKER = {':', '--', '-.', '--', '--'};
LABEL = {'Original', 'U_1', 'U_2', 'U_3', 'U_4', 'U_5'};

% Definition of time span:
n_t = size(D, 2);
t = [0:1:n_t-1]*dt;

filename = (['GIF_1D_POD_r', num2str(rank_POD), '.gif']);

%% .gif of a pulsating velocity profile with approximations: ========
for i = 1:1:length(t)
    hold off

    % Plot of the original matrix D:
    plot(y, D(:, i), 'k-');
    [M] = AXIS(12);
    set(gcf, 'color', 'w');

    % Plot of the POD approximation:
    hold on
    for j = 1:1:rank_POD
        POD1D_approx = U_POD(:, 1:1:j) * ...
            S_POD(1:1:j, 1:1:j) * V_POD(:, 1:1:j)';
        plot(y, POD1D_approx(:, i), MARKER{j}, 'color', 'b');
        [M] = AXIS(12);
        set(gcf, 'color', 'w');
    end
    title(['POD approximation']);
    drawnow
    xlim([min(y)*1.1 max(y)*1.1]);
    ylim([min(min(D))*1.1 max(max(D))*1.1]);

    % Save the gif:
    cd(Curr_Dir);
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);

    if i == 1
        imwrite(imind, cm, filename, 'gif', 'Loopcount', inf);
    else
        imwrite(imind, cm, filename, 'gif', 'WriteMode', ...
            'append', 'DelayTime', 0.1);
    end
    cd ..
end

close(hfig1)
Listing D.2: Matlab function for plotting the .gif file of the 2D POD approximation. POD_2D_PLOT_GIF.m

%% POD_2D_PLOT_GIF function =========================================
% This function plots the .gif file of the approximation
% ===================================================================

function POD_2D_PLOT_GIF(U_POD, V_POD, rank_POD, X, Y, dt, Curr_Dir)

% Definition of time span:
n_t = size(U_POD, 2);
t = [0:1:n_t-1]*dt;

SPACE_X = X;
SPACE_Y = Y;
n_X = length(SPACE_X);
n_Y = length(SPACE_Y);

filename = (['GIF_2D_POD_r', num2str(rank_POD), '.gif']);

% Limit on the time of simulation:
if length(t) <= 30
    sim_t = length(t);
else
    sim_t = 30;
end

% .gif of the velocity field with approximations: ===================
for i = 1:1:sim_t

    % Extract the i-th column from U_POD and V_POD:
    U_extr = U_POD(:, i);
    V_extr = V_POD(:, i);

    % Prepare the velocity components for plotting:
    U_vel = zeros(n_Y, n_X);
    V_vel = zeros(n_Y, n_X);

    % Re-structuring the matrices U_vel and V_vel for the purpose of plotting:
    for j = 1:1:n_X
        U_vel(:, j) = U_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
        V_vel(:, j) = V_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
    end

    VELOCITY = sqrt(U_vel.^2 + V_vel.^2);
    VELOCITY(VELOCITY == 0) = NaN;

    % Plotting the colour plot:
    hfig1 = figure(1);
    pcolor(SPACE_X, SPACE_Y, VELOCITY);
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    shading interp
    colorbar
    axis equal
    grid off
    daspect([1 1 1])
    title(['POD approximation with r = ', num2str(rank_POD)]);
    ylim([min(SPACE_Y) max(SPACE_Y)]);
    xlim([min(SPACE_X) max(SPACE_X)]);
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    drawnow
    hold off

    % Save the gif:
    cd(Curr_Dir);
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);

    if i == 1
        imwrite(imind, cm, filename, 'gif', 'Loopcount', inf);
    else
        imwrite(imind, cm, filename, 'gif', 'WriteMode', ...
            'append', 'DelayTime', 0.1);
    end
    cd ..
end

close(hfig1)
Listing D.3: Matlab function for plotting the .png file of the modes of the 1D POD approximation. POD_1D_PLOT_MODES.m

%% POD_1D_PLOT_MODES function =======================================
% This function plots the .png file of the modes of the approximation
% ===================================================================

function POD_1D_PLOT_MODES(D, rank_POD, U_POD, S_POD, V_POD, ...
    y, dt, Curr_Dir)

% Definition of time span:
n_t = size(D, 2);
t = [0:1:n_t-1]*dt;

%% Modes of the approximation: ======================================
hfig2 = figure(2);

for j = 1:1:rank_POD

    subplot(2, rank_POD, j)
    plot(y, U_POD(:, j)/norm(U_POD), 'b-')
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    title(['Phi_{', num2str(j), '}']);

    subplot(2, rank_POD, j+rank_POD)
    plot(t, V_POD(:, j)/norm(V_POD), 'b-')
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    title(['psi_{', num2str(j), '}']);

    drawnow

end

cd(Curr_Dir);
print('-dpng', '-r500', ['MODES_1D_POD_r', ...
    num2str(rank_POD), '.png'])
cd ..

close(hfig2)
Listing D.4: Matlab function for plotting the .png file of the modes of the 2D POD approximation. POD_2D_PLOT_MODES.m

%% POD_2D_PLOT_MODES function =======================================
% This function plots the .png file of the modes of the approximation
% ===================================================================

function POD_2D_PLOT_MODES(r, UU_POD, VU_POD, UV_POD, ...
    VV_POD, V, X, Y, dt, Curr_Dir)

% Definition of time span:
n_t = size(V, 2);
t = [0:1:n_t-1]*dt;

% Definition of space:
SPACE_X = X;
SPACE_Y = Y;
n_Y = length(SPACE_Y);
n_X = length(SPACE_X);

%% Plotting spatial structures:
hfig1 = figure(1);
% Extract single columns representing a single spatial structure:
U_extr = UU_POD(:, r);
V_extr = UV_POD(:, r);

% Prepare the velocity components for plotting:
U_vel = zeros(n_Y, n_X);
V_vel = zeros(n_Y, n_X);

% Re-structuring the matrices U_vel and V_vel for the purpose of plotting:
for j = 1:1:n_X
    U_vel(:, j) = U_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
    V_vel(:, j) = V_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
end

MODE = sqrt(U_vel.^2 + V_vel.^2);
MODE = MODE/(max(max(abs(MODE))));
MODE(MODE == 0) = NaN;

% Spatial structures:
pcolor(SPACE_X, SPACE_Y, MODE);
[M] = AXIS(12);
set(gcf, 'color', 'w');
shading interp
colorbar
axis equal
grid off
daspect([1 1 1])
title(['POD spatial mode for r = ', num2str(r)]);
ylim([min(SPACE_Y) max(SPACE_Y)]);
xlim([min(SPACE_X) max(SPACE_X)]);
[M] = AXIS(12);
set(gcf, 'color', 'w');

cd(Curr_Dir);
print('-dpng', '-r500', ['Spatial_Modes_2D_POD_r', ...
    num2str(r), '.png'])
cd ..

%% Plotting temporal structures:
hfig2 = figure(2);
temp_mode = sqrt((VU_POD(:, r)).^2 + (VV_POD(:, r)).^2);
temp_mode = temp_mode/(max(max(abs(temp_mode))));

plot(t, temp_mode, 'k-')
[M] = AXIS(12);
set(gcf, 'color', 'w');
title(['POD temporal mode for r = ', num2str(r)]);
ylim([min(temp_mode) max(temp_mode)]);
xlim([min(t) max(t)]);

cd(Curr_Dir);
print('-dpng', '-r500', ['Temp_Modes_2D_POD_r', ...
    num2str(r), '.png'])
cd ..

close(hfig1)
close(hfig2)
Appendix E

1D and 2D DMD Results Plotting

Listing E.1: Matlab function for plotting the .gif file of the 1D DMD approximation. DMD_1D_PLOT_GIF.m

%% DMD_1D_PLOT_GIF function =========================================
% This function plots the .gif file of the approximation
% ===================================================================

function DMD_1D_PLOT_GIF(D, rank_DMD, D_DMD_extend, y, dt, Curr_Dir)

hfig1 = figure(1);
MARKER = {':', '--', '-.', '--', '--'};
LABEL = {'Original', 'U_1', 'U_2', 'U_3', 'U_4', 'U_5'};

% Definition of time span:
n_t = size(D, 2);
t = [0:1:n_t-1]*dt;

filename = (['GIF_1D_DMD_r', num2str(rank_DMD), '.gif']);

%% .gif of a pulsating velocity profile with approximations: ========
for i = 1:1:length(t)
    hold off

    % Plot of the original matrix D:
    plot(y, D(:, i), 'k-');
    [M] = AXIS(12);
    set(gcf, 'color', 'w');

    % Plot of the DMD approximation:
    hold on
    for j = 1:1:rank_DMD
        DMD_APPROX = D_DMD_extend(:, (n_t*(j-1) + i));
        plot(y, DMD_APPROX, MARKER{j}, 'color', 'r');
        [M] = AXIS(12);
        set(gcf, 'color', 'w');
    end
    title(['DMD approximation']);
    drawnow
    xlim([min(y)*1.1 max(y)*1.1]);
    ylim([min(min(D))*1.1 max(max(D))*1.1]);

    % Save the gif:
    cd(Curr_Dir);
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);

    if i == 1
        imwrite(imind, cm, filename, 'gif', 'Loopcount', inf);
    else
        imwrite(imind, cm, filename, 'gif', 'WriteMode', ...
            'append', 'DelayTime', 0.1);
    end
    cd ..
end

close(hfig1)
Listing E.2: Matlab function for plotting the .gif file of the 2D DMD approximation. DMD_2D_PLOT_GIF.m

%% DMD_2D_PLOT_GIF function =========================================
% This function plots the .gif file of the approximation
% ===================================================================

function DMD_2D_PLOT_GIF(U_DMD, V_DMD, rank_DMD, X, Y, dt, Curr_Dir)

% Definition of time span:
n_t = size(U_DMD, 2);
t = [0:1:n_t-1]*dt;

SPACE_X = X;
SPACE_Y = Y;
n_X = length(SPACE_X);
n_Y = length(SPACE_Y);

filename = (['GIF_2D_DMD_r', num2str(rank_DMD), '.gif']);

% Limit on the time of simulation:
if length(t) <= 30
    sim_t = length(t);
else
    sim_t = 30;
end

% .gif of the velocity field with approximations: ===================
for i = 1:1:sim_t

    % Extract the i-th column from U_DMD and V_DMD:
    U_extr = U_DMD(:, i);
    V_extr = V_DMD(:, i);

    % Prepare the velocity components for plotting:
    U_vel = zeros(n_Y, n_X);
    V_vel = zeros(n_Y, n_X);

    % Re-structuring the matrices U_vel and V_vel for the purpose of plotting:
    for j = 1:1:n_X
        U_vel(:, j) = U_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
        V_vel(:, j) = V_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
    end

    VELOCITY = sqrt(U_vel.^2 + V_vel.^2);
    VELOCITY(VELOCITY == 0) = NaN;

    % Plotting the colour plot:
    hfig1 = figure(1);
    pcolor(SPACE_X, SPACE_Y, VELOCITY);
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    shading interp
    colorbar
    axis equal
    grid off
    daspect([1 1 1])
    title(['DMD approximation with r = ', num2str(rank_DMD)]);
    ylim([min(SPACE_Y) max(SPACE_Y)]);
    xlim([min(SPACE_X) max(SPACE_X)]);
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    drawnow
    hold off

    % Save the gif:
    cd(Curr_Dir);
    frame = getframe(1);
    im = frame2im(frame);
    [imind, cm] = rgb2ind(im, 256);

    if i == 1
        imwrite(imind, cm, filename, 'gif', 'Loopcount', inf);
    else
        imwrite(imind, cm, filename, 'gif', 'WriteMode', ...
            'append', 'DelayTime', 0.1);
    end
    cd ..
end

close(hfig1)
Listing E.3: Matlab function for plotting the .png file of the modes of the 1D DMD approximation. DMD_1D_PLOT_MODES.m

%% DMD_1D_PLOT_MODES function =======================================
% This function plots the .png file of the modes of the approximation
% ===================================================================

function DMD_1D_PLOT_MODES(D, rank_DMD, Phi_extend, ...
    T_modes_extend, y, dt, Curr_Dir)

% Definition of time span:
n_t = size(D, 2);
t = [0:1:n_t-1]*dt;

%% Modes of the approximation: ======================================
hfig2 = figure(2);
start = 1;

for j = 1:1:rank_DMD

    start = start + (j-1);

    subplot(2, rank_DMD, j)
    plot(y, Phi_extend(:, start), 'r-');
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    title(['Phi_{', num2str(j), '}']);

    subplot(2, rank_DMD, j+rank_DMD)
    plot(t, T_modes_extend(start, :), 'r-');
    [M] = AXIS(12);
    set(gcf, 'color', 'w');
    title(['psi_{', num2str(j), '}']);

    drawnow

end

cd(Curr_Dir);
print('-dpng', '-r500', ['MODES_1D_DMD_r', ...
    num2str(rank_DMD), '.png'])
cd ..

close(hfig2)
Listing E.4: Matlab function for plotting the .png file of the modes of the 2D DMD approximation. DMD_2D_PLOT_MODES.m

%% DMD_2D_PLOT_MODES function =======================================
% This function plots the .png file of the modes of the approximation
% ===================================================================

function DMD_2D_PLOT_MODES(r, Phi_U, Phi_V, T_modesU, ...
    T_modesV, V, X, Y, dt, Curr_Dir)

% Definition of time span:
n_t = size(V, 2);
t = [0:1:n_t-1]*dt;

% Definition of space:
SPACE_X = X;
SPACE_Y = Y;
n_Y = length(SPACE_Y);
n_X = length(SPACE_X);

% Taking real values of the modes:
Phi_U = real(Phi_U);
Phi_V = real(Phi_V);
T_modesU = real(T_modesU);
T_modesV = real(T_modesV);

%% Plotting spatial structures:
hfig1 = figure(1);
% Extract single columns representing a single spatial structure:
U_extr = Phi_U(:, r);
V_extr = Phi_V(:, r);

% Prepare the velocity components for plotting:
U_vel = zeros(n_Y, n_X);
V_vel = zeros(n_Y, n_X);

% Re-structuring the matrices U_vel and V_vel for the purpose of plotting:
for j = 1:1:n_X
    U_vel(:, j) = U_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
    V_vel(:, j) = V_extr(((j-1)*n_Y+1):1:((j)*n_Y), :);
end

MODE = sqrt(U_vel.^2 + V_vel.^2);
MODE = MODE/(max(max(abs(MODE))));
MODE(MODE == 0) = NaN;

% Spatial structures:
pcolor(SPACE_X, SPACE_Y, MODE);
[M] = AXIS(12);
set(gcf, 'color', 'w');
shading interp
colorbar
axis equal
grid off
daspect([1 1 1])
title(['DMD spatial mode for r = ', num2str(r)]);
ylim([min(SPACE_Y) max(SPACE_Y)]);
xlim([min(SPACE_X) max(SPACE_X)]);
[M] = AXIS(12);
set(gcf, 'color', 'w');

cd(Curr_Dir);
print('-dpng', '-r500', ['Spatial_Modes_2D_DMD_r', ...
    num2str(r), '.png'])
cd ..

%% Plotting temporal structures:
hfig2 = figure(2);
temp_mode = sqrt((T_modesU(r, :)).^2 + (T_modesV(r, :)).^2);
temp_mode = temp_mode/(max(max(abs(temp_mode))));

plot(t, temp_mode, 'k-')
[M] = AXIS(12);
set(gcf, 'color', 'w');
title(['DMD temporal mode for r = ', num2str(r)]);
ylim([min(temp_mode) max(temp_mode)]);
xlim([min(t) max(t)]);

cd(Curr_Dir);
print('-dpng', '-r500', ['Temp_Modes_2D_DMD_r', ...
    num2str(r), '.png'])
cd ..

close(hfig1)
close(hfig2)
Appendix F

List of Useful Matlab Commands

For any matrix A:
norm(A)                      computes a norm of matrix A
[U, S, V] = svd(A)           performs a singular value decomposition of matrix A
[U, S, V] = svd(A, 'econ')   performs a singular value decomposition of matrix A and returns economy-sized matrices
repmat(A, m, n)              creates a large matrix consisting of an m × n tiling of copies of matrix A
reshape(A, [a, b])           reshapes the elements of matrix A into a matrix of size a × b; the number of elements in A must be equal to a · b
diag(A)                      extracts the elements on the diagonal of matrix A
size(A)                      returns the size of matrix A
A'                           computes the (conjugate) transpose of matrix A

For a square matrix A:
[U, S] = eig(A)              computes the eigenvectors U and a diagonal matrix S of eigenvalues of matrix A
inv(A)                       computes the inverse of matrix A

For vectors x and y:
norm(x)                      computes a norm of vector x
dot(x, y)                    computes the inner (dot) product of vectors x and y
trapz(x, y)                  computes an approximation of the integral of y over the interval spanned by x (trapezoidal rule)
length(x)                    returns the length of vector x

References to matrix elements:
A(i, j)                      extracts the single element in the ith row and jth column
A(n:m, i:j)                  extracts rows n to m of columns i to j
A(:, i)                      extracts the full ith column
A(:, n:end)                  extracts full columns from the nth to the last one
A(:, n:m)                    extracts full columns from the nth to the mth
A(i, :)                      extracts the full ith row
A(n:end, :)                  extracts full rows from the nth to the last one
A(n:m, :)                    extracts full rows from the nth to the mth
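The commands above can be combined in a few lines; the following small example (with arbitrary values, not part of the produced codes) shows some of them in context:

% Small worked example with the commands listed above:
A = [1 2; 3 4; 5 6];                 % a 3 × 2 matrix
[U, S, V] = svd(A, 'econ');          % economy-size SVD: U is 3 × 2, S and V are 2 × 2
sigma = diag(S);                     % vector of singular values
A1 = U(:, 1) * S(1, 1) * V(:, 1)';   % rank-1 approximation of A
err = norm(A - A1)/norm(A);          % relative truncation error
B = reshape(A, [2, 3]);              % the same 6 elements rearranged into a 2 × 3 matrix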
Appendix G

Complete List of Codes Produced

pulsating_poiseuille_approximations.m
POD_1D.m
POD_2D.m
DMD_1D.m
DMD_2D.m
POD_1D_PLOT_GIF.m
POD_2D_PLOT_GIF.m
POD_1D_PLOT_MODES.m
POD_2D_PLOT_MODES.m
DMD_1D_PLOT_GIF.m
DMD_2D_PLOT_GIF.m
DMD_1D_PLOT_MODES.m
DMD_2D_PLOT_MODES.m
disc_cont_exercise_1.m
phase_shift_of_two_sines.m
fitting_linear_systems.m
similar_matrices.m

GUI Components:
DMD_CRITERIA.m
EXPORT_DATA.m
IMPORT.m
Main_MENU.m
POD_CRITERIA.m
POD_DMD_beta_1.m
POD_OR_DMD.m