Proper Orthogonal Decomposition Methods for Partial Differential Equations
Mathematics in Science and Engineering
Proper Orthogonal Decomposition Methods for Partial Differential Equations
Zhendong Luo
School of Mathematics and Physics
North China Electric Power University
Beijing, China
Goong Chen
Department of Mathematics
Texas A&M University
College Station, TX, USA
Series Editor
Goong Chen
Academic Press is an imprint of Elsevier
125 London Wall, London EC2Y 5AS, United Kingdom
525 B Street, Suite 1650, San Diego, CA 92101, United States
50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States
The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom
Copyright © 2019 Elsevier Inc. All rights reserved.
No part of this publication may be reproduced or transmitted in any form or by any means, electronic or
mechanical, including photocopying, recording, or any information storage and retrieval system, without
permission in writing from the publisher. Details on how to seek permission, further information about
the Publisher’s permissions policies and our arrangements with organizations such as the Copyright
Clearance Center and the Copyright Licensing Agency, can be found at our website:
www.elsevier.com/permissions.
This book and the individual contributions contained in it are protected under copyright by the Publisher
(other than as may be noted herein).
Notices
Knowledge and best practice in this field are constantly changing. As new research and experience
broaden our understanding, changes in research methods, professional practices, or medical treatment
may become necessary.
Practitioners and researchers must always rely on their own experience and knowledge in evaluating and
using any information, methods, compounds, or experiments described herein. In using such information
or methods they should be mindful of their own safety and the safety of others, including parties for
whom they have a professional responsibility.
To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any
liability for any injury and/or damage to persons or property as a matter of products liability, negligence
or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in
the material herein.
Library of Congress Cataloging-in-Publication Data
A catalog record for this book is available from the Library of Congress
British Library Cataloguing-in-Publication Data
A catalogue record for this book is available from the British Library
ISBN: 978-0-12-816798-4
For information on all Academic Press publications
visit our website at https://www.elsevier.com/books-and-journals
Publisher: Candice Janco
Acquisition Editor: Scott J. Bentley
Editorial Project Manager: Devlin Person
Production Project Manager: Maria Bernard
Designer: Matthew Limbert
Typeset by VTeX
Foreword and Introduction
We are living in an era of the internet, WiFi, mobile apps, Facebook, Twitter, Instagram, selfies, clouds, and more. Surely we have not named them all, but one thing is certain: these all deal with digital and computer-generated data. There is a trendy name for the virtual space of all these things together: big data. The size of the big data space is ever-growing at an exponential rate. Major issues such as analytics, effective processing, storage, mining, prediction, visualization, compression, and encryption/decryption of big data have become problems of major interest in contemporary technology.
This book aims at treating numerical methods for partial differential equa-
tions (PDEs) in science and engineering. Applied mathematicians, scientists,
and engineers are always dealing with big data, all the time. Where do their big
data originate? They come, mostly, from problems and solutions of equations of
physical and technological systems. A large number of the modeling equations
are PDEs. Therefore, effective methods and algorithms for processing and re-
solving such data are much in demand. This is not a treatise on general big data.
However, the main objective is to develop a methodology that can effectively
help resolve the challenges of dealing with large data sets and with speedup in
treating time-dependent PDEs.
Our approach here is not the way in which standard textbooks on numerical PDEs are written. The central theme of this book is the technical treatment of effective methods that can generate numerical solutions for time-dependent PDEs involving only a small set of data, yet yield decent solutions that are accurate and suitable for applications. This reduces data storage, CPU time, and, especially, computational complexity by several orders of magnitude. The key idea and methodology is proper orthogonal decomposition (POD), derived from properties of eigensolutions of a problem involving a large data set. (Indeed, POD was known as an effective method for big data even before the term big data was coined.) In the process, we have developed the necessary mathematical methods and techniques adapting POD to the fundamental numerical PDE methods of finite differences, finite elements, and finite volumes, in connection with various numerical schemes for a wide class of time-dependent PDEs as showcases.
PDES AND THEIR NUMERICAL SOLUTIONS
Physical, biological, and engineering processes are commonly described by
PDEs. Such processes are naturally dynamic, meaning that their time evolution
constitutes the main features, properties, and significance of the system under
investigation or observation. The spatial domains of definition of the PDEs are
usually multidimensional and irregular in shape. Thus, there are fundamental
difficulties involving geometry and dimensionality. The PDEs themselves can
also take rather complex forms, involving a large variety of nonlinearities, sys-
tem couplings, and source terms. These are inherent difficulties of the PDEs
that compound those due to geometry and dimensionality. In general, exact (an-
alytic) nontrivial solutions to PDEs are rarely available. Numerical methods and
algorithms must be developed and then implemented on computers to render ap-
proximate numerical solutions. Therefore, computations become the only way
to treat PDE problems quantitatively. The study of numerical solutions for PDEs
is now a major field in computational science that includes computational math-
ematics, physics, engineering, chemistry, biology, atmospheric, geophysical and
ocean sciences, etc.
Computational PDEs represent an active, prosperous field. New methods
and developments are constantly emerging. However, three canonical schemes
stand out: finite difference (FD), finite element (FE), and finite volume element
(FVE) methods. These methods all require the division of the computational do-
main into meshes. Thus, they involve many degrees of freedom, i.e., unknowns,
which are related to the number of nodes of mesh partition of the computa-
tional domains. For a routine, real-world problem in engineering, the number of
unknowns can easily reach hundreds of thousands or even millions. Thus, the
amount of computational load and complexity is extremely high. The accuracy
of numerical solutions is also affected as truncation errors tend to accumulate.
For a large-scale problem, the CPU time on a supercomputer may require days,
weeks, or even months. For example, if we use these canonical methods of FD, FE, and FVE to simulate a weather forecast in atmospheric science, then after a protracted period of computer calculation the output numerical results may have already lost their significance because the days of interest are bygone.
There are two ways of thinking for the resolution of these difficulties. First,
one can think of computer speedup by building the best supercomputers with
continuous refinement. As of June 2016, the world’s fastest supercomputer
on the TOP500 (http://top500.org) supercomputer list is the Sunway Taihu-
Light in China, with a LINPACK benchmark score of 93 PFLOPS (Peta, or
1015, FLOPS), exceeding the previous record holder, Tianhe-2, by around 59
PFLOPS. Tianhe-2 had its peak electric power consumption at 17.8 MW, and
its annual electricity bill is more than $14 million or 100 million Chinese Yuan.
Thus, most tier-1 universities cannot afford to pay such a high expense. The
second option is to instead develop highly effective computational methods that
can reduce the degrees of freedom for the canonical FD, FE, and FVE schemes,
lighten the computational load, and reduce the running CPU time and the accu-
mulation of truncation errors in the computational processes. This approach is
based on cost-optimal, rational, and mathematical thinking and will be the one
taken by us here.
The focal topic of the book, the POD method (see [56,60]), is one of the
most effective methods that aims exactly at helping computational PDEs.
THE ADVANTAGES AND BENEFITS OF POD
POD reduces the degrees of freedom of numerical computational models for time-dependent PDEs, alleviates the calculation load, reduces the accumulation of truncation errors in the computational process, and saves CPU computing time and resources for large-scale scientific computing.
POD in a Nutshell
The POD method essentially provides an orthogonal basis for representing a
given set of data in a certain least-squares optimal sense, i.e., it offers ways to
find optimal lower-dimensional approximations for the given data set.
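To make this concrete, here is a minimal sketch (our own illustration, not code from the book; the snapshot data are made up) that computes a POD basis of a snapshot matrix via the singular value decomposition and verifies that projecting onto the leading d modes is least-squares optimal in the sense described above:

```python
import numpy as np

# Snapshot matrix: each column is one snapshot of a field sampled at
# 200 spatial points (a traveling wave plus a small growing perturbation).
x = np.linspace(0.0, 1.0, 200)
snapshots = np.column_stack(
    [np.sin(2 * np.pi * (x - 0.05 * n)) + 0.01 * n * np.sin(6 * np.pi * x)
     for n in range(20)]
)

# The POD basis consists of the left singular vectors of the snapshot matrix.
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
d = 2                  # number of retained POD modes
Phi = U[:, :d]         # orthonormal POD basis: Phi.T @ Phi = I

# Projection onto the POD subspace gives the least-squares optimal
# rank-d approximation of the snapshot set (Eckart-Young theorem).
reconstructed = Phi @ (Phi.T @ snapshots)
rel_err = np.linalg.norm(snapshots - reconstructed) / np.linalg.norm(snapshots)
tail = np.sqrt(np.sum(s[d:] ** 2)) / np.linalg.norm(snapshots)
print(rel_err, tail)   # the two numbers agree
```

The printed reconstruction error equals the norm of the discarded singular values, which is exactly the least-squares optimality property the POD basis is built on.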
A Brief Prior History of the Development of POD
The POD method has a long history. The predecessor of the POD method was
an eigenvector analysis method, which was initially presented by K. Pearson in
1901 and was used to extract targeted, main ingredients of huge amounts of data
(see [132]). (The trendy name of such data is “big data”.) Pearson’s data mining,
sample analysis, and data processing techniques are relevant even today.
However, the method of snapshots for POD was first presented by Sirovich
in 1987 (see [150]). The POD method has been widely and successfully ap-
plied to numerous fields, including signal analysis and pattern recognition (see
[43]), statistics (see [60]), geophysical fluid dynamics or meteorology (see
also [60] or [78]), and biomedical engineering (see [48]). For a long
time since 1987, the POD method was mainly used to perform the princi-
pal component analysis in statistical computations and to search for certain
major behavior of dynamic systems (see reduced-order Galerkin methods for
PDEs, proposed in the excellent work in 2001 by Kunisch and Volkwein [62,63]). From then on, model reduction and reduced-basis numerical computational methods based on POD for PDEs underwent rapid development, providing improved efficiency for finding numerical solutions to PDEs (see [2,11,15,19,45,48,54,57,58,135,137,138,141,166,167,170,171,184–186,199]). At first, Kunisch–Volkwein's POD-based reduced-order
Galerkin methods were applied to reduced-order models of numerical solutions
for PDEs with error estimates presented in [62,63]. Those error estimates con-
sist of some uncertain matrix norms. In particular, they took all the numerical
solutions of the classical Galerkin method on the total time span [0,T ] in the
formulation of the POD basis, used them to establish the POD-based reduced-
order models, and then recomputed the numerical solutions on the same time
span [0,T ]. This produces some repetitive computation but not much extra gain.
One begins to ponder how to improve this and, furthermore, how to generalize
the methodology initiated by Kunisch–Volkwein's work beyond the Galerkin
FE method to other FE methods and also to FD and FVE schemes. This book
aims exactly at answering these questions.
Development of POD for Time-Dependent PDEs
The first author, Zhendong Luo, was attracted to the study of reduced-order nu-
merical methods based on POD for PDEs at the beginning of 2003. At that time,
few or no comprehensive accounts existed and only fragmentary introductions
about POD were available. He spent three years (2003–2005) studying the un-
derlying optimization methods, statistical principles, and numerical solutions
for POD. Then, in 2006, he and his collaborators published their first two pa-
pers for POD methods (see [26,27]). These dealt with oceanic models and data
assimilation.
Afterwards, Luo and his coauthors have established some POD-based
reduced-order FD schemes (see [5,38,40,91,113,118,122,155]) as well as FE
formulations (see [37,39,70,88–90,92,93,100,101,103,109,112,119,123,124,
164]). Since 2007, in a series of papers, they have deduced error estimates for POD-based reduced-order solutions of PDEs of various types. They also proposed
some POD-based reduced-order formulations and relevant error estimates for
POD-based reduced-order FVE solutions (see [71,104,106,108,120]) for PDEs
in another series of papers beginning in 2011. These POD-based reduced-order
methods were specific to the classical FD schemes, FE methods, and FVE meth-
ods for the construction of the reduced-order models, in which they extracted
one from every ten classical numerical solutions as snapshots, significantly different from Kunisch–Volkwein's methods, in which numerical solutions from
the classical Galerkin method were extracted at all instants on the total time
span [0,T ]. Therefore, these POD-based reduced-order methods constitute improvements, generalizations, and extensions of Kunisch–Volkwein's methods
in [62,63]. The reduced-order methods in the above cited work need only repeat
part of the computations on the same time span [0,T ].
Since 2012, Luo and his collaborators have established the following three
main methods:
i. PODROEFD: POD-based reduced-order extrapolation FD schemes (see [6,
7,79,81,94–96,102,110,111,117,121,127,154,158]);
ii. PODROEFE: POD-based reduced-order extrapolation FE methods (see [69,
75,82,83,97,116,159–161,165,179]);
iii. PODROEFVE: POD-based reduced-order extrapolation FVE methods (see
[84,85,87,98,99,114,115,162,163]).
These POD-based reduced-order extrapolation methods need only adopt the
standard numerical solutions on some initial, rather short time span [0, t_0] (t_0 ≪ T) of, respectively, the classical FD, FE, and FVE schemes as snapshots in
order to formulate the POD basis. Therefore, they have significantly improved
the previous, existing version of the reduced-order models. They do not have
to repeat wholesale computations. The physical significance is that one can use
existing data to forecast the future evolution of nature. Furthermore, our PO-
DROEFD, PODROEFE, and PODROEFVE methods can be treated in a similar
way as the classical FD, FE, and FVE methods, leading to error estimates with
concrete orders of convergence. The application of these POD-based extrapola-
tion methods will provide anyone with the advantages and benefits of POD
mentioned earlier.
The second author, Goong Chen, has strong interests in the computation
of numerical solutions of PDEs arising from real-world applications. He has
constantly been faced with the challenges to deal with the needs for large
data storage, process speedup, effective reduction of order, and the extraction
of prominent physical features from the supercomputer numerical solution of
PDEs. When he noticed that Zhendong Luo had already done significant work
on the POD methods for time-dependent PDEs fitting many of his needs, he
got very excited and proposed that a book project be prepared to publish and
promote this very important topic. This started the collaboration of the authors,
with this book as the outcome. Our collaboration is ongoing, hoping more re-
search papers will be produced in the near future demonstrating the advantages
of POD-based methods. However, G. Chen happily acknowledges that all tech-
nical contributions in this book are to be credited to the first author alone. He
has learned tremendously from the collaboration – this by itself makes the book
project worthwhile and satisfying as far as the second author is concerned.
ORGANIZATION OF THE BOOK
In this book, we aim to provide the technical details of the construction, theoreti-
cal analysis, and implementations of algorithms and examples for PODROEFD,
PODROEFE, and PODROEFVE methods for a broad class of dynamic PDEs.
It is organized into the following four chapters.
Chapter 1 includes four sections. In the first section, we review the basic the-
ory of classical FD schemes. It is intended to ensure the self-containedness of the
book. Then, in the subsequent three sections, we introduce the construction, the
theoretical analysis, and the implementations of algorithms for the PODROEFD
schemes for the following two-dimensional (2D) PDEs: the parabolic equation,
the nonstationary Stokes equation, and the shallow water equation with sedi-
ment concentration, respectively. Examples and discussions are also given for
each equation.
Chapter 2 is similarly structured as Chapter 1, with four sections. There
we begin by reviewing the basic theory of Sobolev spaces and elliptic theory,
the classical FE method, and the mixed FE (MFE) method. Then we describe
the construction, theoretical analysis, and implementations of algorithms for the
PODROEFE methods for the following 2D PDEs: the viscoelastic wave equa-
tion, the Burgers equation, and the nonstationary parabolized Navier–Stokes
equation (for which the stabilized Crank–Nicolson extrapolation scheme is
used), respectively. Numerical examples and graphics are again illustrated.
Chapter 3 contains three sections, aiming at the treatment of PODROEFVE. We first introduce the basics of FVE. Then the construction, theoretical error analysis, and implementations of algorithms for the PODROEFVE methods are developed for the following three 2D dynamic PDEs: the hyperbolic equation, the Sobolev equation, and the incompressible Boussinesq equation, respectively, with concrete examples and illustrations.
Numerical results on these model equations as presented in the book have
demonstrated the effectiveness and accuracy of our POD methods.
Finally, Chapter 4 is a short epilogue and outlook, consisting of concluding
remarks and forward-looking statements.
The book is written to be as self-contained as possible. Readers and students need only an undergraduate-level background in applied and numerical mathematics to understand this book. Many parts can be used as a standard
graduate-level textbook on numerical PDEs. The theory, methods, and com-
putational algorithms will be valuable to students and practitioners in science,
engineering, and technology.
ACKNOWLEDGMENTS
The authors thank all collaborators, colleagues, and institutions that have gen-
erously supported our work. In particular, the authors are delighted to acknowl-
edge the partial financial support over the years by the National Natural Science
Foundation of China (under grant #11671106), the Qatar National Research
Fund (under grant #NPRP 8-028-1-001), the North China Electric Power University, and Texas A&M University.
Zhendong Luo
Beijing, China
Goong Chen
College Station, TX, USA
Chapter 1
Reduced-Order Extrapolation Finite Difference Schemes Based on Proper Orthogonal Decomposition
The key objective of this book is to develop the numerical treatments of
proper orthogonal decomposition (POD) for partial differential equations
(PDEs). Among numerical methods for PDEs, the finite difference (FD) method essentially constitutes the basis of them all. In order to introduce how POD works, it is natural to start with the FD method.
For the sake of proper self-containedness, in this chapter we first review the
basic theory of classical FD schemes. We then introduce the construction, theo-
retical analysis, and implementations of algorithms for the POD-based reduced-
order extrapolation FD (PODROEFD) schemes for the two-dimensional (2D)
parabolic equation, 2D nonstationary Stokes equation, and 2D shallow water equation with sediment concentration. Finally, we provide some numerical examples to show the advantages the PODROEFD schemes have over the classical FD schemes. Moreover, it is shown that the PODROEFD schemes are reliable and effective for solving the above-mentioned PDEs.
The numerical models treated here include both simple equations and cou-
pled systems. The systematic approach we take in this chapter, namely, fol-
lowing the logical sequence of rudiments, modeling equations, error estimates-
stability-convergence, POD methods, error estimates for POD solutions, and
finally concrete numerical examples, will be the standard for all chapters.
1.1 REVIEW OF CLASSICAL BASIC FINITE DIFFERENCE THEORY
1.1.1 Approximation of Derivative
The FD schemes use difference quotients to approximate derivatives. Denote $u_{i,j}^n = u(i\Delta x, j\Delta y, n\Delta t) = u(x_i, y_j, t_n)$. Then $u_{i\pm 1,j\pm 1}^n = u(x_i \pm \Delta x, y_j \pm \Delta y, t_n)$ and $u_{i,j}^{n\pm 1} = u(x_i, y_j, t_n \pm \Delta t)$.
Derivative approximations usually take the following four forms.
Chapter DOI: https://doi.org/10.1016/B978-0-12-816798-4.00006-1
1. Approximation to the first-order derivative by a forward difference
We have
$$\left(\frac{\partial u}{\partial x}\right)_{i,j}^{n} = \lim_{\Delta x \to 0}\frac{u(x_i+\Delta x, y_j, t_n)-u(x_i, y_j, t_n)}{\Delta x} = \frac{u(x_i+\Delta x, y_j, t_n)-u(x_i, y_j, t_n)}{\Delta x}+O(\Delta x)$$
$$= \frac{u_{i+1,j}^{n}-u_{i,j}^{n}}{\Delta x}+O(\Delta x) \approx \frac{u_{i+1,j}^{n}-u_{i,j}^{n}}{\Delta x}. \qquad (1.1.1)$$
2. Approximation to the first-order derivative by a backward difference
We have
$$\left(\frac{\partial u}{\partial x}\right)_{i,j}^{n} = \lim_{\Delta x \to 0}\frac{u(x_i, y_j, t_n)-u(x_i-\Delta x, y_j, t_n)}{\Delta x} = \frac{u(x_i, y_j, t_n)-u(x_i-\Delta x, y_j, t_n)}{\Delta x}+O(\Delta x)$$
$$= \frac{u_{i,j}^{n}-u_{i-1,j}^{n}}{\Delta x}+O(\Delta x) \approx \frac{u_{i,j}^{n}-u_{i-1,j}^{n}}{\Delta x}. \qquad (1.1.2)$$
3. Approximation to the first-order derivative by a central difference
We have
$$\left(\frac{\partial u}{\partial x}\right)_{i,j}^{n} = \frac{u_{i+1,j}^{n}-u_{i-1,j}^{n}}{2\Delta x}+O(\Delta x^2) \approx \frac{u_{i+1,j}^{n}-u_{i-1,j}^{n}}{2\Delta x}. \qquad (1.1.3)$$
4. Approximation to the second derivative by a second-order central difference
We have
$$\left(\frac{\partial^2 u}{\partial x^2}\right)_{i,j}^{n} = \frac{u_{i+1,j}^{n}-2u_{i,j}^{n}+u_{i-1,j}^{n}}{\Delta x^2}+O(\Delta x^2) \approx \frac{u_{i+1,j}^{n}-2u_{i,j}^{n}+u_{i-1,j}^{n}}{\Delta x^2}. \qquad (1.1.4)$$
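As a quick sanity check of these truncation orders (our own sketch, not from the book), halving the step roughly halves the error of the one-sided formula (1.1.1) and quarters the errors of the central formulas (1.1.3) and (1.1.4):

```python
import math

u = math.sin                     # smooth test function u(x)
du = math.cos                    # exact first derivative u'(x)
d2u = lambda z: -math.sin(z)     # exact second derivative u''(x)

x = 0.7
for h in (0.1, 0.05, 0.025):
    fwd = (u(x + h) - u(x)) / h                       # (1.1.1): O(h) accurate
    ctr = (u(x + h) - u(x - h)) / (2.0 * h)           # (1.1.3): O(h^2) accurate
    ctr2 = (u(x + h) - 2.0 * u(x) + u(x - h)) / h**2  # (1.1.4): O(h^2) accurate
    print(h, abs(fwd - du(x)), abs(ctr - du(x)), abs(ctr2 - d2u(x)))
```

The central differences are visibly more accurate at the same step size, which is why they are preferred whenever the scheme allows them.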
1.1.2 Difference Operators
1. The definitions of difference operators
We define the operators $I$, $E_x$, $E_x^{-1}$, $\Delta_x$, $\nabla_x$, $\delta_x$, $\mu_x$ by the following:
$I u_j^n = u_j^n$; $I$ is known as the unit operator;
$E_x u_j^n = u_{j+1}^n$; $E_x$ is known as the forward shift operator and is also denoted $E_x = E_x^{+1}$;
$E_x^{-1} u_j^n = u_{j-1}^n$; $E_x^{-1}$ is known as the backward shift operator and is also denoted $E_x^{-} = E_x^{-1}$;
$\Delta_x u_j^n = u_{j+1}^n - u_j^n$; $\Delta_x$ is known as the forward difference operator and satisfies $\Delta_x = E_x - I$;
$\nabla_x u_j^n = u_j^n - u_{j-1}^n$; $\nabla_x$ is known as the backward difference operator and satisfies $\nabla_x = I - E_x^{-}$;
$\delta_x u_j^n = u_{j+1/2}^n - u_{j-1/2}^n$; $\delta_x$ is known as the one-step central difference operator and satisfies $\delta_x = E_x^{1/2} - E_x^{-1/2}$;
$\mu_x u_j^n = \frac{1}{2}(u_{j+1/2}^n + u_{j-1/2}^n)$; $\mu_x$ is known as the average operator and satisfies $\mu_x = \frac{1}{2}(E_x^{1/2} + E_x^{-1/2})$.
2. Composite difference operators
We have
i. $(\mu\delta)_x = \frac{1}{2}(E_x - E_x^{-1}) = \frac{1}{2}(\Delta_x + \nabla_x)$;
ii. $(\delta_x)^2 = \delta_x \delta_x = (E_x^{1/2} - E_x^{-1/2})^2 = E_x - 2I + E_x^{-1}$;
iii. $(\delta_x)^n = \delta_x(\delta_x^{n-1})$; $\Delta_x^n = \Delta_x \Delta_x^{n-1} = \cdots$; $\nabla_x^n = \nabla_x \nabla_x^{n-1} = \cdots$.
3. Derivative relations with difference operators
We have
i. $\left(\dfrac{\partial u}{\partial x}\right)_j^n = \dfrac{\Delta_x u_j^n}{\Delta x} + O(\Delta x)$ (forward difference) $= \dfrac{\nabla_x u_j^n}{\Delta x} + O(\Delta x)$ (backward difference) $= \dfrac{\delta_x u_j^n}{\Delta x} + O(\Delta x^2)$ (central difference);
ii. $\left(\dfrac{\partial^2 u}{\partial x^2}\right)_j^n = \dfrac{\delta_x^2 u_j^n}{\Delta x^2} + O(\Delta x^2)$ (second-order central difference) $= \dfrac{\Delta_x^2 u_j^n}{\Delta x^2} + O(\Delta x)$ (second-order forward difference) $= \dfrac{\nabla_x^2 u_j^n}{\Delta x^2} + O(\Delta x)$ (second-order backward difference).
1.1.3 The Formation of Difference Equations
1. Explicit FD schemes
An explicit FD scheme is one in which the value at the most advanced time level appears only once. For example,
$$\frac{u_j^{n+1}-u_j^n}{\Delta t} = \frac{\delta}{\Delta x^2}\left(u_{j+1}^n - 2u_j^n + u_{j-1}^n\right)$$
is an explicit FD scheme for $\dfrac{\partial u}{\partial t} = \delta\dfrac{\partial^2 u}{\partial x^2}$, where $\delta$ is a positive "diffusion coefficient".
2. Implicit FD schemes
An implicit FD scheme is one in which the values at the most advanced time level appear more than once. For example,
$$\frac{u_j^{n+1}-u_j^n}{\Delta t} = \frac{\delta}{\Delta x^2}\left(u_{j+1}^{n+1} - 2u_j^{n+1} + u_{j-1}^{n+1}\right)$$
is an implicit FD scheme for $\dfrac{\partial u}{\partial t} = \delta\dfrac{\partial^2 u}{\partial x^2}$.
3. Semidiscretized difference schemes
A semidiscretized difference scheme is one in which only the spatial variable is discretized while the time variable is left continuous. For example,
$$\left(\frac{du}{dt}\right)_j = \frac{\delta}{\Delta x^2}\left(u_{j+1} - 2u_j + u_{j-1}\right)$$
is a semidiscretized difference scheme for $\dfrac{\partial u}{\partial t} = \delta\dfrac{\partial^2 u}{\partial x^2}$.
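For concreteness, the explicit scheme above can be marched in time as in the following sketch (ours, not from the book), using homogeneous Dirichlet boundary values, the initial condition $u(x,0) = \sin(\pi x)$, and a mesh ratio $\delta\Delta t/\Delta x^2 = 0.4$, which lies within the classical stability bound $1/2$ for this scheme:

```python
import numpy as np

# Explicit FTCS scheme for u_t = delta * u_xx on [0, 1] with
# u(0, t) = u(1, t) = 0 and u(x, 0) = sin(pi * x).
delta = 1.0
nx = 50
dx = 1.0 / nx
dt = 0.4 * dx**2 / delta          # mesh ratio r = 0.4 <= 1/2 (stable)
r = delta * dt / dx**2

x = np.linspace(0.0, 1.0, nx + 1)
u = np.sin(np.pi * x)

t = 0.0
while t < 0.1:
    # One explicit step: new interior values from old neighbors only.
    u[1:-1] += r * (u[2:] - 2.0 * u[1:-1] + u[:-2])
    t += dt

# Compare with the exact solution u(x, t) = exp(-delta*pi^2*t) * sin(pi*x).
exact = np.exp(-delta * np.pi**2 * t) * np.sin(np.pi * x)
print(np.max(np.abs(u - exact)))
```

Because the right-hand side involves only known values at time level $n$, each step is a single vectorized update; an implicit scheme would instead require solving a linear system per step.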
1.1.4 The Effectiveness of Finite Difference Schemes
1. The error of an FD scheme
For the equation
$$\frac{\partial u}{\partial t} = \delta\frac{\partial^2 u}{\partial x^2},$$
consider an FD scheme discretized by a forward difference in time and a central difference in space (FTCS) as follows:
$$\frac{u_j^{n+1}-u_j^n}{\Delta t} = \frac{\delta}{\Delta x^2}\left(u_{j+1}^n - 2u_j^n + u_{j-1}^n\right),$$
which is expanded at the reference point $(x_j, t_n)$ by Taylor's formula into
$$\left(\frac{\partial u}{\partial t} - \delta\frac{\partial^2 u}{\partial x^2}\right)_j^n + \left[\frac{\Delta t}{2}\left(\frac{\partial^2 u}{\partial t^2}\right)_j^n - \delta\frac{\Delta x^2}{12}\left(\frac{\partial^4 u}{\partial x^4}\right)_j^n + \cdots\right] = 0.$$
The above equation is known as the modified PDE, where the parenthesized part is the source equation (i.e., the PDE) and the bracketed part is called the remainder (R) or truncation error (TE), denoted R = TE.
The TE equals the source equation (PDE) minus the FD equation (FDE), i.e., TE = PDE − FDE.
The discretization error (DE, i.e., the error of the FD scheme) equals the TE plus the boundary error (BE), i.e., DE = TE + BE. Thus DE = BE + PDE − FDE.
The round-off error (ROE) denotes the rounding error of the computing procedure on the computer, and the total error is denoted CE. Then
CE = DE + ROE = BE + PDE − FDE + ROE.
However, ROE and BE are usually omitted; the error of an FD scheme is mainly measured by the TE, which is obtained directly from the derivative approximations (1.1.1)–(1.1.4).
2. The consistency of an FD scheme
Definition 1.1.1. (1) Let the PDE
$$u_t = Lu \qquad (1.1.5)$$
be discretized by an FD scheme as follows:
$$\sum_{\mu} \alpha_{\mu}\, u_{j+\mu}^{n+1} = \sum_{\gamma} \beta_{\gamma}\, u_{j+\gamma}^{n}. \qquad (1.1.6)$$
When the FD grid is indefinitely refined, if $R = TE$ tends to zero, i.e.,
$$\lim_{\Delta x \to 0,\ \Delta t \to 0} R = \lim_{\Delta x \to 0,\ \Delta t \to 0} TE = 0, \qquad (1.1.7)$$
then the FD scheme is said to be compatible with the source equation.
(2) If, under the condition $\Delta t = o(\Delta x^{\gamma})$ ($\gamma > 0$), i.e., $\lim_{\Delta x \to 0,\ \Delta t \to 0} \Delta t/\Delta x^{\gamma} = 0$, we have $R \to 0$, then the FD scheme is said to be compatible with the source equation under the condition $\Delta t = o(\Delta x^{\gamma})$ ($\gamma > 0$).
3. The stability of an FD scheme
Definition 1.1.2. i. Suppose an error disturbance $\varepsilon_j^n = \tilde{u}_j^n - u_j^n$ is added at a certain time level $t = t_n$, i.e., $\tilde{u}_j^n = u_j^n + \varepsilon_j^n$. If the errors $\varepsilon_j^{n+1} = \tilde{u}_j^{n+1} - u_j^{n+1}$ of the solutions $\tilde{u}_j^{n+1}$ and $u_j^{n+1}$ obtained from the FD scheme (1.1.6) do not produce excessive overall growth, i.e., there is a constant $K > 0$ independent of $n$ and $j$ such that
$$\|\varepsilon_j^{n+1}\| \leqslant K \|\varepsilon_j^n\|, \quad \text{or, omitting } j, \quad \|\varepsilon^{n+1}\| \leqslant K \|\varepsilon^n\|,$$
where $\|\cdot\|$ represents a norm, then, when $0 < K < 1$, the FD scheme (1.1.6) is said to be strongly stable; otherwise (when $K \geqslant 1$) the FD scheme (1.1.6) is said to be weakly stable.
ii. If there is no restriction between the time step and the spatial step in the FD scheme (1.1.6), it is said to be absolutely stable or unconditionally stable. In this case the time step $\Delta t$ can take a larger size.
iii. If the stability of the FD scheme (1.1.6) is restricted by some relationship between the time step and the spatial steps (usually constrained in the form of some inequalities, for example, $\Delta t = o(\Delta x^{\gamma})$ ($\gamma > 0$)), then it is said to be conditionally stable.
We have the following criteria for the stability of the FD scheme (1.1.6) (see [192,194]).
Theorem 1.1.1. The FD scheme (1.1.6) is stable if and only if there are two positive constants $\Delta t_0$ and $\Delta x_0$ as well as a constant $K > 0$ independent of $n$ and $j$ such that, whenever $\Delta t \leqslant \Delta t_0$ and $\Delta x \leqslant \Delta x_0$, the solutions of the FD scheme (1.1.6) satisfy
$$\|u_j^n\| \leqslant K, \quad n = 1, 2, \ldots.$$
Theorem 1.1.2. If the FD scheme (1.1.6) of the PDE (1.1.5) satisfies
$$\|u_j^{n+1}\| \cdot \|u_j^n\| \neq 0, \qquad \frac{\|u_j^{n+1}\|}{\|u_j^n\|} \leqslant 1, \quad n = 1, 2, \cdots, \qquad (1.1.8)$$
where $\|\cdot\|$ is a discrete norm, then the FD scheme (1.1.6) is stable.
4. Equivalence between stability and convergency of FD schemes
Definition 1.1.3. i. Let u∗(x,t) be an exact solution for PDE (1.1.5) and
un
j the approximate solutions of the FD scheme (1.1.6) compatible with the
PDE (1.1.5). If when t → 0,x → 0 (i.e., grid is infinitely refined), for any
sequence (xj ,tn) → (x∗,t∗) ∈ , we have
lim
x,t→0
un
j = u∗
(x∗
,t∗
), (1.1.9)
then the FD scheme (1.1.6) is said to be convergent.
The stability and convergence of FD schemes are their intrinsic properties
with the following important equivalence (see [192] or [194, Theorem 1.3.18]).
Reduced-Order Extrapolation Finite Difference Schemes Chapter | 1 7
Theorem 1.1.3. If the PDE (1.1.5) is well posed, i.e., it has a unique solution
that depends continuously on the initial and boundary value conditions, and it is
compatible with its FD scheme (1.1.6), then the stability of the FD scheme (1.1.6) is
equivalent to its convergence.
Theorem 1.1.3 is briefly stated as "if a PDE is well posed and compatible
with its FD scheme, then the stability of the FD scheme is equivalent to
its convergence".
Remark 1.1.1. For linear PDEs, if their FD schemes are stable, then their con-
vergence is ensured. Therefore, it is only necessary to discuss their stability.
However, for nonlinear PDEs, because of their complexity, at present one can
only carry out local stability analysis and a correspondingly local convergence
analysis instead of a global analysis of convergence.
5. Von Neumann’s stability analysis of FD schemes
Von Neumann’s stability analysis of FD schemes is also known as the
Fourier analysis method. In order to analyze the stability of FD schemes, we
first discuss the stability of exact solutions.
i. Stability analysis of the exact solution
The initial value problem
$$\begin{cases} u_t = L\Big(\dfrac{\partial}{\partial x}, \dfrac{\partial^2}{\partial x^2}, \cdots\Big)u, & x \in \mathbb{R},\\ u(x,0) = \varphi(x) \end{cases} \qquad (1.1.10)$$
is said to be well posed if it has a unique and stable solution. A so-called stable
solution means that the solution depends continuously on the initial value
and keeps small disturbances bounded.
Assume that the initial value $\varphi(x)$ is periodic and expandable into the fol-
lowing Fourier series:
$$\varphi(x) = \sum_k f_k e^{ikx}, \qquad (1.1.11)$$
where the $f_k$s are the Fourier coefficients. Assume that the exact solution $u(x,t)$
of the initial value problem (1.1.10) is also expandable into the following Fourier
series:
$$u(x,t) = \sum_k F_k(t)e^{ikx}, \qquad (1.1.12)$$
where the $F_k(t)$s are the Fourier coefficients of the series, depending on $t$.
By substituting (1.1.12) into the first equation of (1.1.10), we obtain
$$\sum_k F_k'(t)e^{ikx} = L\Big(\frac{\partial}{\partial x}, \frac{\partial^2}{\partial x^2}, \cdots\Big)\sum_k F_k(t)e^{ikx} = \sum_k L\Big(\frac{\partial}{\partial x}, \frac{\partial^2}{\partial x^2}, \cdots\Big)e^{ikx}\cdot F_k(t). \qquad (1.1.13)$$
If $L$ is a linear differential operator, we can rewrite (1.1.13) as follows:
$$\sum_k F_k'(t)e^{ikx} = \sum_k L(ik,(ik)^2,\cdots)e^{ikx}\cdot F_k(t). \qquad (1.1.14)$$
By comparing the coefficients of $e^{ikx}$ on the LHS and the RHS of (1.1.14)
and setting them equal, we have
$$F_k'(t) = L(ik,(ik)^2,\cdots)\cdot F_k(t). \qquad (1.1.15)$$
Eq. (1.1.15) has the following general solution:
$$F_k(t) = c_k e^{L(ik,(ik)^2,\cdots)t}. \qquad (1.1.16)$$
By using the initial condition $u(x,0) = \varphi(x)$ of (1.1.10), we obtain
$\sum_k f_k e^{ikx} = \sum_k F_k(0)e^{ikx}$, which implies $c_k = F_k(0) = f_k$. Thus, (1.1.12) can
be rewritten as follows:
$$u(x,t) = \sum_k f_k e^{L(ik,(ik)^2,\cdots)t}\cdot e^{ikx}. \qquad (1.1.17)$$
Note that the exact solution $u(x,t)$ is said to be stable if there is a nonnegative
constant $M$, independent of $u$, $t$, and $x$, such that
$$\|u(x,t)\|_{L^2} \leqslant M\|u(x,0)\|_{L^2}. \qquad (1.1.18)$$
Because $\{e^{ikx}\}$ is an orthonormal basis of $L^2(-\pi,\pi)$, we have
$$\|u(x,t)\|_{L^2}^2 = \sum_k \big|f_k e^{L(ik,(ik)^2,\cdots)t}\big|^2 \leqslant \sum_k |f_k|^2 \sup_k \big|e^{L(ik,(ik)^2,\cdots)t}\big|^2, \qquad (1.1.19)$$
$$\|u(x,0)\|_{L^2}^2 = \sum_k |f_k|^2, \qquad (1.1.20)$$
where $\|u(x,t)\|_{L^2} = \big(\int_{-\pi}^{\pi}|u(x,t)|^2\,dx\big)^{1/2}$ is the norm of $u(\cdot,t)$ in $L^2(-\pi,\pi)$.
From (1.1.19)–(1.1.20), we know that
$$\big|e^{L(ik,(ik)^2,\cdots)t}\big| \leqslant M \qquad (1.1.21)$$
holds if and only if
$$\|u(x,t)\|_{L^2}^2 = \sum_k \big|f_k e^{L(ik,(ik)^2,\cdots)t}\big|^2 \leqslant M^2\sum_k |f_k|^2 = M^2\|u(x,0)\|_{L^2}^2. \qquad (1.1.22)$$
Thus, the exact solution $u(x,t)$ of (1.1.10) is stable if and only if there is a
nonnegative constant $M$ independent of $u$, $t$, and $x$ such that $|e^{L(ik,(ik)^2,\cdots)t}| \leqslant M$.
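As a standard worked illustration (not spelled out in the text above), take the heat equation $u_t = u_{xx}$, for which this stability criterion is immediate:

```latex
% Heat equation: u_t = u_{xx}, so L(ik,(ik)^2,\cdots) = (ik)^2 = -k^2.
% By (1.1.17) each Fourier mode evolves as f_k e^{-k^2 t} e^{ikx}, and
\left| e^{L(ik,(ik)^2,\cdots)\,t} \right| = e^{-k^2 t} \leqslant 1,
\qquad \text{for all } k \text{ and } t \geqslant 0,
% so the criterion holds with M = 1 and the exact solution is stable.
```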
ii. Stability analysis of an FD scheme
In the following, we analyze the von Neumann stability conditions of an FD
scheme.
Let (1.1.10) have the following FD scheme:
$$\sum_\mu \alpha_\mu u_{j+\mu}^{n+1} = \sum_\gamma \beta_\gamma u_{j+\gamma}^{n}. \qquad (1.1.23)$$
Let the initial time level ($n = 0$) solution be expanded as
$$u_j^0 = \varphi(x_j) = \sum_k g_k^0 e^{ikx_j} = \sum_k g_k^0 e^{ikj\Delta x}. \qquad (1.1.24)$$
Let the $n$th time level solution $u_j^n$ be expanded as
$$u_j^n = \sum_k g_k^n e^{ikx_j} = \sum_k g_k^n e^{ikj\Delta x}. \qquad (1.1.25)$$
By inserting (1.1.25) into (1.1.23), and by the orthonormality of
$\{e^{ikx}\}$, we obtain
$$g_k^{n+1} = G g_k^n, \quad k \in \mathbb{Z}, \qquad (1.1.26)$$
where $\mathbb{Z}$ is the set of all integers and $G$ is known as the growth factor, given
by the following equation:
$$G = \Big(\sum_\mu \alpha_\mu e^{ik\mu\Delta x}\Big)^{-1}\cdot\Big(\sum_\gamma \beta_\gamma e^{ik\gamma\Delta x}\Big). \qquad (1.1.27)$$
Let $g = (\cdots,g_{-k},\cdots,g_{-2},g_{-1},g_0,g_1,g_2,\cdots,g_k,\cdots)$. We have
$$g^n = G g^{n-1}, \quad n = 1,2,\cdots. \qquad (1.1.28)$$
By using Eq. (1.1.28), we have
$$\|g^n\|_2 = \|Gg^{n-1}\|_2 = \|G^2 g^{n-2}\|_2 = \cdots = \|G^n g^0\|_2, \quad n = 1,2,\cdots, \qquad (1.1.29)$$
where $\|\cdot\|_2$ represents the norm in $l^2$. Thus, by (1.1.24) and (1.1.25), from
(1.1.29) we obtain
$$\|u_j^n\|_2 \leqslant \|G^n\|\,\|u_j^0\|_2 = \|G^n\|\,\|\varphi(x_j)\|_2, \quad n = 1,2,\cdots. \qquad (1.1.30)$$
Then, by Theorem 1.1.1, we obtain the following result.
Theorem 1.1.4. The FD scheme (1.1.23) is stable if and only if there are two
positive constants $\Delta t_0$ and $\Delta x_0$ as well as a constant $K > 0$ inde-
pendent of $n$ and $j$ such that its growth factor $G$ satisfies
$$\|G^n\| \leqslant K, \quad n = 1,2,\cdots. \qquad (1.1.31)$$
Corollary 1.1.5. The FD scheme (1.1.23) is stable if and only if its growth
factor $G$ satisfies
$$|G| \leqslant 1 + O(\Delta t). \qquad (1.1.32)$$
iii. Von Neumann's stability analysis of FD schemes
The condition (1.1.32) in Corollary 1.1.5 is usually known as the von
Neumann stability condition. It follows that, in order to determine the stability
of the FD scheme (1.1.23), it is necessary to compute its growth factor $G$ by
formula (1.1.27) and determine the $\Delta t$ and $\Delta x$ such that (1.1.32) or $|G| \leqslant 1$ is
satisfied.
For a specific FD scheme, it is easy to compute the growth factor $G$ of
(1.1.27): by the orthonormality of $\{e^{ikx}\}$, one need only substitute
$$u_j^n = G^n e^{ikx_j}, \quad n = 1,2,\cdots \qquad (1.1.33)$$
into the FD scheme (1.1.23) and then eliminate the common factors.
For an FD scheme for two-dimensional linear PDEs, we need only substitute
$u_{j,m}^n = G^n e^{ikx_j}e^{iky_m}$ $(n = 1,2,\cdots)$ into the FD scheme and then simplify,
so we can also obtain the growth factor $G$.
Some related examples can be found in [192].
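As a concrete sketch (the heat-equation scheme and all parameters here are illustrative assumptions, not taken from the text), substituting (1.1.33) into the explicit scheme $u_j^{n+1} = u_j^n + r(u_{j+1}^n - 2u_j^n + u_{j-1}^n)$, $r = \Delta t/\Delta x^2$, and canceling the common factor $G^n e^{ikx_j}$ gives $G = 1 - 4r\sin^2(k\Delta x/2)$, so $|G| \leqslant 1$ for all $k$ exactly when $r \leqslant 1/2$. This can be checked numerically:

```python
import numpy as np

def growth_factor(r, k_dx):
    """Growth factor G of the explicit heat scheme
    u_j^{n+1} = u_j^n + r (u_{j+1}^n - 2 u_j^n + u_{j-1}^n),
    obtained by substituting u_j^n = G^n e^{i k x_j} as in (1.1.33)
    and canceling the common factor G^n e^{i k x_j}."""
    return 1.0 + r * (np.exp(1j * k_dx) - 2.0 + np.exp(-1j * k_dx))

# |G| = |1 - 4 r sin^2(k dx / 2)| <= 1 for all k iff r <= 1/2.
k_dx = np.linspace(0.0, 2.0 * np.pi, 401)
assert np.all(np.abs(growth_factor(0.4, k_dx)) <= 1.0 + 1e-12)  # stable
assert np.any(np.abs(growth_factor(0.6, k_dx)) > 1.0)           # unstable
```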
1.2 A POD-BASED REDUCED-ORDER EXTRAPOLATION
FINITE DIFFERENCE SCHEME FOR THE 2D PARABOLIC
EQUATION
In this section, we introduce the PODROEFD scheme for the two-dimensional
(2D) parabolic equation. The work is based on Luo et al. [111].
For convenience and without loss of generality, let $\Omega = (a_x,b_x)\times(c_y,d_y)$
and consider the following 2D parabolic equation.
Find $u$ such that
$$\begin{cases} u_t - \Delta u = f, & (x,y,t)\in\Omega\times(0,T),\\ u(x,y,t) = g(x,y,t), & (x,y,t)\in\partial\Omega\times[0,T),\\ u(x,y,0) = s(x,y), & (x,y)\in\Omega, \end{cases} \qquad (1.2.1)$$
where $f(x,y,t)$, $g(x,y,t)$, and $s(x,y)$ are the given source term, boundary func-
tion, and initial value function, respectively, and $T$ is the total time duration.
The main motivation and physical background of the parabolic equation are
the modeling of heat conduction and diffusion phenomena.
1.2.1 A Classical Finite Difference Scheme for the 2D Parabolic Equation
Let $\Delta x$ and $\Delta y$ be the spatial steps in the $x$ and $y$ directions, respectively, $\Delta t$ the
time step, and $u_{j,k}^n$ the value of $u$ at the points $(x_j,y_k,t_n)$ $(0\leqslant j\leqslant J = [(b_x-a_x)/\Delta x]$,
$0\leqslant k\leqslant K = [(d_y-c_y)/\Delta y]$, $0\leqslant n\leqslant N = [T/\Delta t]$, where
$[(b_x-a_x)/\Delta x]$, $[(d_y-c_y)/\Delta y]$, and $[T/\Delta t]$ denote the integer parts of
$(b_x-a_x)/\Delta x$, $(d_y-c_y)/\Delta y$, and $T/\Delta t$, respectively).
Thus, the forward difference explicit scheme for the 2D parabolic equa-
tion (1.2.1) at the reference point $(x_j,y_k,t_n)$ is given by
$$u_{j,k}^{n+1} = u_{j,k}^{n} + \frac{\Delta t}{\Delta x^2}\big(u_{j+1,k}^{n} - 2u_{j,k}^{n} + u_{j-1,k}^{n}\big) + \frac{\Delta t}{\Delta y^2}\big(u_{j,k+1}^{n} - 2u_{j,k}^{n} + u_{j,k-1}^{n}\big) + \Delta t\, f_{j,k}^{n}. \qquad (1.2.2)$$
For the FD scheme (1.2.2), we have the following stability and convergence
result.
Theorem 1.2.1. If $4\Delta t/\Delta x^2 \leqslant 1$ and $4\Delta t/\Delta y^2 \leqslant 1$, the FD scheme (1.2.2) is
stable. Further, we have the following error estimates:
$$u_{j,k}^n - u(x_j,y_k,t_n) = O(\Delta t, \Delta x^2, \Delta y^2), \quad 1\leqslant n\leqslant N. \qquad (1.2.3)$$
Proof. If $4\Delta t/\Delta x^2 \leqslant 1$ and $4\Delta t/\Delta y^2 \leqslant 1$, we have
$$|u_{j,k}^{n+1}| \leqslant \Big(1 - \frac{2\Delta t}{\Delta x^2} - \frac{2\Delta t}{\Delta y^2}\Big)|u_{j,k}^n| + \frac{\Delta t}{\Delta x^2}\big(|u_{j+1,k}^n| + |u_{j-1,k}^n|\big) + \frac{\Delta t}{\Delta y^2}\big(|u_{j,k+1}^n| + |u_{j,k-1}^n|\big) + \Delta t\,|f_{j,k}^n| \leqslant \|u^n\|_\infty + \Delta t\|f\|_\infty, \qquad (1.2.4)$$
where $\|\cdot\|_\infty$ is the $L^\infty(\Omega)$ norm. Thus, from (1.2.4), we obtain
$$\|u^{n+1}\|_\infty \leqslant \|u^n\|_\infty + \Delta t\|f\|_\infty, \quad n = 0,1,2,\cdots,N-1. \qquad (1.2.5)$$
By summing (1.2.5) from $0$ to $n-1$, we obtain
$$\|u^n\|_\infty \leqslant \|u^0\|_\infty + n\Delta t\|f\|_\infty, \quad n = 1,2,\cdots,N. \qquad (1.2.6)$$
Because $n\Delta t \leqslant T$, from (1.2.6) we obtain
$$\|u^n\|_\infty \leqslant \|s(x,y)\|_\infty + T\|f\|_\infty, \quad n = 1,2,\cdots,N, \qquad (1.2.7)$$
which shows that the solutions of the FD scheme (1.2.2) are bounded and depend contin-
uously on the initial value $s(x,y)$ and the source term $f(x,y,t)$. Thus,
by Theorem 1.1.1, we deduce that the FD scheme (1.2.2) is stable. Furthermore,
the error estimates (1.2.3) are obtained directly from the approximation of the
derivatives by difference quotients.
Thus, as long as the time step $\Delta t$, the spatial steps $\Delta x$ and $\Delta y$, $f(x,y,t)$,
$g(x,y,t)$, and $s(x,y)$ are given, we can obtain the classical FD solutions $u_{j,k}^n$
$(0\leqslant j\leqslant J$, $0\leqslant k\leqslant K$, $0\leqslant n\leqslant N)$ for the 2D parabolic equation (1.2.1) by comput-
ing the FD scheme (1.2.2).
1.2.2 Formulation of the POD Basis
Set $u_i^n = u_{j,k}^n$ and $f_i^n = f_{j,k}^n$ $(1\leqslant i\leqslant m \equiv (J+1)(K+1)$, $i = k(J+1)+j+1$,
$0\leqslant j\leqslant J$, $0\leqslant k\leqslant K$, $0\leqslant n\leqslant N)$. Then the classical FD solutions of the FD
scheme (1.2.2) can be denoted by $\{u_i^n\}_{n=1}^{N}$ $(1\leqslant i\leqslant m)$. We extract the first $L$
solutions $\{u_i^n\}_{n=1}^{L}$ $(1\leqslant i\leqslant m$, $L\ll N)$ as snapshots. Further, we
formulate the $m\times L$ snapshot matrix
$$A = \begin{pmatrix} u_1^1 & u_1^2 & \cdots & u_1^L\\ u_2^1 & u_2^2 & \cdots & u_2^L\\ \vdots & \vdots & \ddots & \vdots\\ u_m^1 & u_m^2 & \cdots & u_m^L \end{pmatrix}. \qquad (1.2.8)$$
By the singular value decomposition, the snapshot matrix $A$ has the factoriza-
tion
$$A = U\begin{pmatrix} \Sigma_{l\times l} & O_{l\times(L-l)}\\ O_{(m-l)\times l} & O_{(m-l)\times(L-l)} \end{pmatrix}V^T, \qquad (1.2.9)$$
where $\Sigma_{l\times l} = \mathrm{diag}\{\sigma_1,\sigma_2,\cdots,\sigma_l\}$ is a diagonal matrix consisting of the sin-
gular values of $A$ in decreasing order $\sigma_1\geqslant\sigma_2\geqslant\cdots\geqslant\sigma_l > 0$,
$U = (\varphi_1,\varphi_2,\cdots,\varphi_m)$ is an $m\times m$ orthogonal matrix consisting of the orthogo-
nal eigenvectors of $AA^T$, $V = (\phi_1,\phi_2,\cdots,\phi_L)$ is an $L\times L$ orthogonal
matrix consisting of the orthogonal eigenvectors of $A^TA$, and $O$ is a zero ma-
trix.
Because the number of mesh points $m$ is much larger than the number $L$ of snap-
shots extracted, the order $m$ of the matrix $AA^T$ is much larger than the
order $L$ of the matrix $A^TA$, but the positive eigenvalues $\lambda_j$ $(j = 1,2,\cdots,l)$
of $A^TA$ and $AA^T$ are identical and satisfy $\lambda_j = \sigma_j^2$ $(j = 1,2,\cdots,l)$. There-
fore, we may first find the eigenvalues $\lambda_1\geqslant\lambda_2\geqslant\cdots\geqslant\lambda_l > 0$ $(l = \mathrm{rank}\,A)$ of
the matrix $A^TA$ and the corresponding eigenvectors $\phi_j$. Then, by the relationship
$$\varphi_j = A\phi_j/\sqrt{\lambda_j}, \quad j = 1,2,\ldots,l,$$
we obtain the eigenvectors $\varphi_j$ $(j = 1,2,\ldots,l)$ corresponding to the nonzero
eigenvalues of the matrix $AA^T$.
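The eigenvalue shortcut just described (solve the small $L\times L$ eigenproblem of $A^TA$, then map the eigenvectors through $A$) can be sketched as follows, with a random matrix standing in for the snapshot matrix:

```python
import numpy as np

def pod_basis(A, M):
    """First M POD basis vectors of the m x L snapshot matrix A via the
    small L x L eigenproblem of A^T A (method of snapshots), using the
    relationship varphi_j = A phi_j / sqrt(lambda_j)."""
    lam, Phi_small = np.linalg.eigh(A.T @ A)       # ascending eigenvalues
    order = np.argsort(lam)[::-1]                  # reorder to decreasing
    lam, Phi_small = lam[order], Phi_small[:, order]
    Phi = A @ Phi_small[:, :M] / np.sqrt(lam[:M])  # m x M POD basis
    return Phi, lam

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))     # stand-in for FD snapshot data
Phi, lam = pod_basis(A, 4)
# The POD basis is orthonormal, and lambda_j = sigma_j^2 as in (1.2.9).
assert np.allclose(Phi.T @ Phi, np.eye(4), atol=1e-10)
sigma = np.linalg.svd(A, compute_uv=False)
assert np.allclose(np.sqrt(lam[:4]), sigma[:4])
```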
Take
$$A_M = U\begin{pmatrix} \Sigma_{M\times M} & O_{M\times(L-M)}\\ O_{(m-M)\times M} & O_{(m-M)\times(L-M)} \end{pmatrix}V^T, \qquad (1.2.10)$$
where $\Sigma_{M\times M} = \mathrm{diag}\{\sigma_1,\sigma_2,\cdots,\sigma_M\}$ is the diagonal matrix consisting of the
first $M$ positive singular values of the diagonal matrix $\Sigma_{l\times l}$. Define the norm
of the matrix $A$ as $\|A\|_{2,2} = \sup_{u\in\mathbb{R}^L}\|Au\|_2/\|u\|_2$ (where $\|u\|_2$ is the $l^2$ norm of the
vector $u$).
We have the following result.
Lemma 1.2.2. Let $\Phi = (\varphi_1,\varphi_2,\cdots,\varphi_M)$ consist of the first $M$ eigenvectors of
$U = (\varphi_1,\varphi_2,\cdots,\varphi_m)$. Then we have
$$A_M = \sum_{i=1}^{M}\sigma_i\varphi_i\phi_i^T = \Phi\Phi^T A. \qquad (1.2.11)$$
Proof. According to
$$A_M = \sum_{i=1}^{M}\sigma_i\varphi_i\phi_i^T = (\varphi_1\ \cdots\ \varphi_M)\,\mathrm{diag}(\sigma_1,\cdots,\sigma_M)\begin{pmatrix}\phi_1^T\\ \vdots\\ \phi_M^T\end{pmatrix},\qquad A = (\varphi_1\ \cdots\ \varphi_l)\,\mathrm{diag}(\sigma_1,\cdots,\sigma_l)\begin{pmatrix}\phi_1^T\\ \vdots\\ \phi_l^T\end{pmatrix},$$
we have
$$\Phi\Phi^T A = \Phi\begin{pmatrix}\varphi_1^T\\ \vdots\\ \varphi_M^T\end{pmatrix}(\varphi_1\ \cdots\ \varphi_l)\,\mathrm{diag}(\sigma_1,\cdots,\sigma_l)\begin{pmatrix}\phi_1^T\\ \vdots\\ \phi_l^T\end{pmatrix} = \Phi\,(I_M\ \ O)\,\mathrm{diag}(\sigma_1,\cdots,\sigma_l)\begin{pmatrix}\phi_1^T\\ \vdots\\ \phi_l^T\end{pmatrix}$$
$$= \Phi\begin{pmatrix}\sigma_1 & \cdots & 0 & \cdots & 0\\ \vdots & \ddots & \vdots & & \vdots\\ 0 & \cdots & \sigma_M & \cdots & 0\end{pmatrix}\begin{pmatrix}\phi_1^T\\ \vdots\\ \phi_l^T\end{pmatrix} = (\varphi_1\ \cdots\ \varphi_M)\,\mathrm{diag}(\sigma_1,\cdots,\sigma_M)\begin{pmatrix}\phi_1^T\\ \vdots\\ \phi_M^T\end{pmatrix} = A_M.$$
Thus, by the relationship between the matrix norm and its spectral radius,
we have
$$\min_{\mathrm{rank}(B)\leqslant M}\|A - B\|_{2,2} = \|A - A_M\|_{2,2} = \|A - \Phi\Phi^T A\|_{2,2} = \sqrt{\lambda_{M+1}}, \qquad (1.2.12)$$
where $\Phi = (\varphi_1,\varphi_2,\cdots,\varphi_M)$ consists of the first $M$ eigenvectors of $U = (\varphi_1,\varphi_2,\cdots,\varphi_m)$.
If the $L$ column vectors of $A$ are denoted by $u^n = (u_1^n,u_2^n,\cdots,u_m^n)^T$
$(n = 1,2,\cdots,L)$, we have
$$\|u^n - u_M^n\|_2 = \|(A - \Phi\Phi^T A)\varepsilon^n\|_2 \leqslant \|A - \Phi\Phi^T A\|_{2,2}\|\varepsilon^n\|_2 = \sqrt{\lambda_{M+1}}, \qquad (1.2.13)$$
where $u_M^n = \sum_{j=1}^{M}(\varphi_j,u^n)\varphi_j$ represents the projection of $u^n$ onto $\Phi =
(\varphi_1,\varphi_2,\cdots,\varphi_M)$, $(\varphi_j,u^n)$ is the inner product of $\varphi_j$ and $u^n$, and $\varepsilon^n$ denotes
the unit vector with the $n$th component being $1$. The inequality (1.2.13) shows
that $u_M^n$ is an optimal approximation of $u^n$ whose error is no more than
$\sqrt{\lambda_{M+1}}$. Thus, $\Phi$ is just an orthonormal optimal POD basis of $A$.
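The optimality property (1.2.12)–(1.2.13) can be verified numerically; in this sketch a random matrix stands in for $A$, and $\Phi$ is taken from the left singular vectors:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((30, 8))   # stand-in for the snapshot matrix
U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
M = 3
Phi = U[:, :M]                     # first M left singular vectors (POD basis)
A_M = Phi @ Phi.T @ A              # rank-M projection, as in (1.2.11)
# (1.2.12): the projection error in the spectral norm equals
# sqrt(lambda_{M+1}) = sigma_{M+1}.
err = np.linalg.norm(A - A_M, ord=2)
assert np.isclose(err, sigma[M])
```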
1.2.3 Establishment of the POD-Based Reduced-Order Finite Difference Scheme for the 2D Parabolic Equation
We still denote the classical FD solution vectors of the FD scheme (1.2.2) by
$u^n = (u_1^n,u_2^n,\cdots,u_m^n)^T$ $(n = 1,2,\cdots,L,L+1,\cdots,N)$. Thus, we can write
the FD scheme (1.2.2) in the following vector form:
$$u^{n+1} = u^n + \frac{\Delta t}{\Delta x^2}Bu^n + \frac{\Delta t}{\Delta y^2}Cu^n + \Delta t F_m^n, \qquad (1.2.14)$$
where $F_m^n = (f_1^n,f_2^n,\cdots,f_m^n)^T$, and
$$B = \begin{pmatrix} -1 & 1 & 0 & 0 & \cdots & 0 & 0\\ 1 & -2 & 1 & 0 & \cdots & 0 & 0\\ 0 & 1 & -2 & 1 & \cdots & 0 & 0\\ 0 & 0 & 1 & -2 & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & 0 & \cdots & -2 & 1\\ 0 & 0 & 0 & 0 & \cdots & 1 & -1 \end{pmatrix}_{m\times m},$$
and $C$ is the $m\times m$ matrix with the same diagonal entries $-1,-2,\cdots,-2,-1$ as $B$
and with entries $1$ on the $(J+1)$st sub- and superdiagonals, i.e., $C_{i,i\pm(J+1)} = 1$
(each $1$ is separated from the diagonal by $J$ zeros), all other entries being zero;
$C$ couples the grid points that are neighbors in the $y$ direction.
In order to estimate the norms of $B$ and $C$, we introduce the following lemma on
tridiagonal matrices (see [192, Theorems 1.3.1 and 1.3.2]).
Lemma 1.2.3. The tridiagonal matrix
$$\tilde{B} = \begin{pmatrix} d & b & 0 & 0 & \cdots & 0 & 0\\ c & a & b & 0 & \cdots & 0 & 0\\ 0 & c & a & b & \cdots & 0 & 0\\ 0 & 0 & c & a & \cdots & 0 & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\ 0 & 0 & 0 & 0 & \cdots & a & b\\ 0 & 0 & 0 & 0 & \cdots & c & d \end{pmatrix}_{m\times m}$$
has the following eigenvalues:
$$\tilde{\lambda}_j = a + 2\sqrt{bc}\cos[(2j-1)\pi/(2m+1)], \quad j = 1,2,\cdots,m.$$
Therefore, by using the relationship between the eigenvalues of a matrix and its
norm, we have
$$\|B\|_{2,2} = \|C\|_{2,2} = |2 - 2\cos[(2m-1)\pi/(2m+1)]| = |2 - 2\cos[\pi - 2\pi/(2m+1)]| = |2 + 2\cos[2\pi/(2m+1)]| < 4. \qquad (1.2.15)$$
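A quick numerical check of the bound (1.2.15) on a small instance of $B$ (the size $m = 12$ is an arbitrary illustrative choice):

```python
import numpy as np

# Build the m x m matrix B defined above: -2 on the diagonal (with -1 at the
# two corners) and 1 on the first sub- and superdiagonals.
m = 12
B = (np.diag(-2.0 * np.ones(m))
     + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1))
B[0, 0] = B[-1, -1] = -1.0
# Its spectral norm stays below 4, consistent with (1.2.15).
assert np.linalg.norm(B, ord=2) < 4.0
```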
If we replace $u^n$ of (1.2.14) with $u^{*n} = \Phi\Phi^T A\varepsilon^n$ $(n = 1,2,\cdots,L)$ and
$u^{*n} = \Phi\alpha^n$ $(n = L+1,L+2,\cdots,N)$, we obtain the PODROEFD scheme as
follows:
$$\begin{cases} u^{*n} = \Phi\Phi^T A\varepsilon^n, & n = 1,2,\cdots,L,\\[4pt] \Phi\alpha^{n+1} = \Phi\alpha^n + \dfrac{\Delta t}{\Delta x^2}B\Phi\alpha^n + \dfrac{\Delta t}{\Delta y^2}C\Phi\alpha^n + \Delta t F_m^n, & n = L,L+1,\cdots,N-1, \end{cases} \qquad (1.2.16)$$
where $\alpha^n = (\alpha_1^n,\alpha_2^n,\cdots,\alpha_M^n)^T$ are vectors yet to be determined.
By multiplying Eq. (1.2.16) by the matrix $\Phi^T$ of orthonormal vectors, we obtain
$$\begin{cases} \alpha^n = \Phi^T u^n, & n = 1,2,\cdots,L,\\[4pt] \alpha^{n+1} = \alpha^n + \dfrac{\Delta t}{\Delta x^2}\Phi^T B\Phi\alpha^n + \dfrac{\Delta t}{\Delta y^2}\Phi^T C\Phi\alpha^n + \Delta t\Phi^T F_m^n, & n = L,L+1,\cdots,N-1. \end{cases} \qquad (1.2.17)$$
After $\alpha^n$ $(n = L,L+1,\cdots,N)$ are obtained from the system of Eqs. (1.2.17),
we can obtain the PODROEFD solution vectors for Eq. (1.2.1) as follows:
$$u^{*n} = \Phi\alpha^n, \quad n = 1,2,\cdots,L,L+1,\cdots,N. \qquad (1.2.18)$$
Further, we can obtain the PODROEFD solution components for Eq. (1.2.1)
as follows:
$$u_{j,k}^{*n} = u_i^{*n}, \quad 0\leqslant j\leqslant J,\ 0\leqslant k\leqslant K,\ 1\leqslant i = k(J+1)+j+1\leqslant m = (K+1)(J+1). \qquad (1.2.19)$$
Remark 1.2.1. It is easily seen that the classical FD scheme (1.2.2) contains $m$
unknown quantities at each time level, whereas the system of Eqs. (1.2.17)–
(1.2.18) at the same time level (when $n > L$) contains only $M$ unknown quanti-
ties ($M\ll L\ll m$; usually $M = 6$, but $m = O(10^4)\sim O(10^6)$). Therefore, the
PODROEFD model (1.2.17)–(1.2.18) includes very few degrees of freedom and
does not involve repeated computations. Here we extract the snapshots from the
first $L$ classical FD solutions; but when we solve real-world problems we may,
instead, extract snapshots from samples of experiments of the physical system
trajectories.
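The reduced-order march (1.2.17) can be sketched as follows; the matrices `B` and `C` below are small stand-ins built with the same diagonal pattern (the true $C$ couples entries $J+1$ apart), and the orthonormal `Phi` is random rather than a genuine POD basis:

```python
import numpy as np

def pod_march(Phi, B, C, F, alpha, dt, dx, dy, steps):
    """March the reduced coefficients alpha by the projected scheme (1.2.17):
    alpha^{n+1} = alpha^n + dt/dx^2 Phi^T B Phi alpha^n
                          + dt/dy^2 Phi^T C Phi alpha^n + dt Phi^T F^n."""
    Br = Phi.T @ B @ Phi          # M x M reduced matrices, computed once
    Cr = Phi.T @ C @ Phi
    Fr = Phi.T @ F
    for _ in range(steps):
        alpha = (alpha + dt / dx**2 * (Br @ alpha)
                       + dt / dy**2 * (Cr @ alpha) + dt * Fr)
    return alpha

# Illustrative stand-ins (not the chapter's example problem).
m, M = 50, 3
B = (np.diag(-2.0 * np.ones(m)) + np.diag(np.ones(m - 1), 1)
     + np.diag(np.ones(m - 1), -1))
B[0, 0] = B[-1, -1] = -1.0
C = B.copy()                       # stand-in for the y-direction coupling
rng = np.random.default_rng(1)
Phi, _ = np.linalg.qr(rng.standard_normal((m, M)))  # some orthonormal basis
alpha0 = Phi.T @ np.ones(m)
alpha = pod_march(Phi, B, C, np.zeros(m), alpha0, dt=1e-3, dx=0.1, dy=0.1, steps=10)
# Only M = 3 unknowns are marched, instead of m = 50.
assert alpha.shape == (M,)
```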
1.2.4 Error Estimates of the Reduced-Order Finite Difference Solutions for the 2D Parabolic Equation
First, from (1.2.13), we obtain
$$\|u^n - u^{*n}\|_2 = \|u^n - u_M^n\|_2 = \|(A - \Phi\Phi^T A)\varepsilon^n\|_2 \leqslant \sqrt{\lambda_{M+1}}, \quad n = 1,2,\cdots,L. \qquad (1.2.20)$$
Rewrite the second equation of the system of Eqs. (1.2.16) as follows:
$$u^{*n+1} = u^{*n} + \frac{\Delta t}{\Delta x^2}Bu^{*n} + \frac{\Delta t}{\Delta y^2}Cu^{*n} + \Delta t F_m^n, \quad n = L,L+1,\cdots,N-1. \qquad (1.2.21)$$
Put $\delta = \Delta t\|B\|_{2,2}/\Delta x^2 + \Delta t\|C\|_{2,2}/\Delta y^2$. By subtracting (1.2.21) from
(1.2.14) and taking norms, from (1.2.20) we have
$$\|u^{n+1} - u^{*n+1}\|_2 \leqslant (1+\delta)\|u^n - u^{*n}\|_2 \leqslant \cdots \leqslant (1+\delta)^{n+1-L}\|u^L - u^{*L}\|_2 \leqslant (1+\delta)^{n+1-L}\sqrt{\lambda_{M+1}}, \quad n = L,L+1,\cdots,N-1. \qquad (1.2.22)$$
By summarizing the above discussion and noting that the absolute value of
each vector component is less than the vector norm, from (1.2.3) we obtain the
following theorem.
Theorem 1.2.4. The errors between the solutions $u_{jk}^n$ of the classical FD
scheme and the solutions $u_{jk}^{*n}$ of the PODROEFD scheme (1.2.17)–(1.2.18)
satisfy the following estimates:
$$|u_{jk}^n - u_{jk}^{*n}| \leqslant E(n)\sqrt{\lambda_{M+1}}, \quad 1\leqslant j\leqslant J,\ 1\leqslant k\leqslant K,\ 1\leqslant n\leqslant N, \qquad (1.2.23)$$
where $E(n) = 1$ $(1\leqslant n\leqslant L)$, whereas $E(n) = (1+\delta)^{n-L}$ $(L+1\leqslant n\leqslant N)$,
$\delta = \Delta t\|B\|_{2,2}/\Delta x^2 + \Delta t\|C\|_{2,2}/\Delta y^2$. Further, the errors between the exact
solution $u(x,y,t)$ of Eq. (1.2.1) and the solutions $u_{jk}^{*n}$ of the POD-based
reduced-order FD scheme (1.2.17)–(1.2.18) satisfy the following estimates:
$$|u(x_j,y_k,t_n) - u_{jk}^{*n}| = O\big(E(n)\sqrt{\lambda_{M+1}}, \Delta t, \Delta x^2, \Delta y^2\big), \quad 1\leqslant j\leqslant J,\ 1\leqslant k\leqslant K,\ 1\leqslant n\leqslant N. \qquad (1.2.24)$$
Remark 1.2.2. The error terms containing $\sqrt{\lambda_{M+1}}$ in Theorem 1.2.4 are
caused by the POD-based order reduction of the classical FD scheme, and they can
be used to select the number of POD basis vectors, i.e., it is necessary to
take $M$ such that $\sqrt{\lambda_{M+1}} = O(\Delta t, \Delta x^2, \Delta y^2)$. The factors $E(n) = (1+\delta)^{n-L}$
$(L+1\leqslant n\leqslant N)$ are caused by the extrapolating iterations and can be
used as a guide for renewing the POD basis, i.e., if $(1+\delta)^{n-L}\sqrt{\lambda_{M+1}} >
\max(\Delta t, \Delta x^2, \Delta y^2)$, it is necessary to update the POD basis. If we take $\lambda_{M+1}$
satisfying $(1+\delta)^{N-L}\sqrt{\lambda_{M+1}} = O(\Delta t, \Delta x^2, \Delta y^2)$, then the PODROEFD
scheme (1.2.17)–(1.2.18) is convergent and, thus, we do not have to update the
POD basis.
1.2.5 The Implementation of the Algorithm of the POD-Based Reduced-Order Finite Difference Scheme for the 2D Parabolic Equation
In order to facilitate the use of the PODROEFD scheme for the 2D parabolic
equation, the following implementation steps of the algorithm for the PO-
DROEFD scheme (1.2.17)–(1.2.18) are helpful.
Step 1. Classical FD computation and extraction of snapshots
Write the classical FD scheme (1.2.2) in the vector form (1.2.14) and find the so-
lution vectors $u^n = (u_1^n,u_2^n,\cdots,u_m^n)^T$ $(n = 1,2,\cdots,L)$ of (1.2.14) at the first
few $L$ steps (in the following, say, take $L = 20$ as in Section 1.2.6).
Step 2. Snapshot matrix $A$ and eigenvalues of $A^TA$
Formulate the snapshot matrix $A = (u_i^n)_{m\times L}$ and compute the eigenvalues $\lambda_1\geqslant
\lambda_2\geqslant\cdots\geqslant\lambda_l > 0$ $(l = \mathrm{rank}\,A)$ and the eigenvectors $\phi_j$ $(j = 1,2,\cdots,\tilde{M})$ of
the matrix $A^TA$.
Step 3. Choice of POD basis
For the error tolerance $\mu = O(\Delta t, \Delta x^2, \Delta y^2)$, decide the number $M$ ($M\leqslant
\tilde{M}$) of POD basis vectors such that $\sqrt{\lambda_{M+1}}\leqslant\mu$ and formulate the POD basis $\Phi =
(\varphi_1,\varphi_2,\cdots,\varphi_M)$ (where $\varphi_j = A\phi_j/\sqrt{\lambda_j}$, $j = 1,2,\cdots,M$).
Step 4. Solve/compute the PODROEFD model
Solve the PODROEFD scheme (1.2.17)–(1.2.18) to obtain the reduced-order so-
lution vectors $u^{*n} = (u_1^{*n},u_2^{*n},\cdots,u_m^{*n})^T$; further, obtain the component forms
$u_{j,k}^{*n} = u_i^{*n}$ $(0\leqslant j\leqslant J$, $0\leqslant k\leqslant K$, $i = k(J+1)+j+1$, $1\leqslant i\leqslant m =
(K+1)(J+1))$.
Step 5. Check accuracy and renew the POD basis to continue
Set $\delta = \Delta t\|B\|_{2,2}/\Delta x^2 + \Delta t\|C\|_{2,2}/\Delta y^2$. If $(1+\delta)^{n-L}\sqrt{\lambda_{M+1}}\leqslant\mu$, then
$u^{*n} = (u_1^{*n},u_2^{*n},\cdots,u_m^{*n})^T$ are just the solution vectors of the PODROEFD
scheme (1.2.17)–(1.2.18) that satisfy the accuracy requirement. Else, i.e., if $(1+
\delta)^{n-L}\sqrt{\lambda_{M+1}} > \mu$, put $u^1 = u^{*(n-L)}$, $u^2 = u^{*(n-L+1)}$, $\ldots$, $u^L = u^{*(n-1)}$ and
return to Step 2.
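Steps 3 and 5 can be sketched as follows (the eigenvalue list, tolerance $\mu$, and the parameters $n$, $L$, $\delta$ below are illustrative stand-ins):

```python
import numpy as np

def choose_num_modes(lam, mu):
    """Step 3: smallest M with sqrt(lambda_{M+1}) <= mu, where
    lam = [lambda_1, lambda_2, ...] is sorted in decreasing order."""
    for M in range(1, len(lam)):
        if np.sqrt(lam[M]) <= mu:   # lam[M] is lambda_{M+1} (0-based indexing)
            return M
    return len(lam)

def needs_renewal(n, L, delta, lam_M1, mu):
    """Step 5: renew the POD basis when (1+delta)^(n-L) sqrt(lambda_{M+1}) > mu."""
    return (1.0 + delta) ** (n - L) * np.sqrt(lam_M1) > mu

lam = np.array([1e-1, 1e-3, 1e-5, 1e-7, 1e-9])   # illustrative spectrum
mu = 1e-3                                         # illustrative tolerance
M = choose_num_modes(lam, mu)
assert M == 3                     # sqrt(lambda_4) = sqrt(1e-7) <= 1e-3
assert not needs_renewal(n=25, L=20, delta=0.01, lam_M1=lam[M], mu=mu)
assert needs_renewal(n=200, L=20, delta=0.05, lam_M1=lam[M], mu=mu)
```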
FIGURE 1.2.1 The solution u obtained via the classical FD scheme.
1.2.6 A Numerical Example for the 2D Parabolic Equation
A numerical example is presented here to demonstrate the advantage of the
PODROEFD scheme (1.2.17)–(1.2.18). To this end, we take $f(x,y,t) = 0$,
$u(x,y,0) = s(x,y) = \sin\pi x\sin\pi y$, and the boundary value $g(x,y,t) = 0$ in Eq. (1.2.1). The com-
putational domain is taken as $\Omega = \{(x,y);\ 0\leqslant x\leqslant 2,\ 0\leqslant y\leqslant 2\}$. The spatial
steps are taken as $\Delta x = \Delta y = 0.02$ and the time step as $\Delta t = 0.00005$. Thus, we
have $8\Delta t/\Delta x^2 \leqslant 1$ and $8\Delta t/\Delta y^2 \leqslant 1$, so that $\delta < 1$.
We first find the classical FD solution at $t = 2$ by means of the classical FD
scheme (1.2.2), which is depicted graphically in Fig. 1.2.1. Next, we take the 20
classical FD solutions of the classical FD scheme (1.2.2) at the first 20 steps
as snapshots. By direct computation, we find $\sqrt{\lambda_7}\leqslant 4\times10^{-4}$. It can
be shown that, as long as we take the first six POD basis vectors, the theoretical accu-
racy requirement can be satisfied. Next, we find the reduced-order FD solution
at $t = 2$ ($n = 40000$) by means of the PODROEFD scheme (1.2.17)–(1.2.18),
where it is unnecessary to update the POD basis; the solution is depicted
graphically in Fig. 1.2.2. Because the PODROEFD scheme (1.2.17)–(1.2.18)
includes only six unknown quantities and uses six optimal data of the first 20
classical solutions as initial values, it saves computing time, reduces the degrees
of freedom of the numerical computations, and alleviates the TE accumulation;
therefore, the PODROEFD solution is more efficient.
FIGURE 1.2.2 The solution u obtained via the PODROEFD scheme.
FIGURE 1.2.3 The ($\log_{10}$) error plot between the classical FD solutions and the reduced-order FD solutions with different numbers (up to 20) of POD bases at $t = 2$.
Fig. 1.2.3 is the ($\log_{10}$) error chart between the classical FD solutions and
the reduced-order FD solutions with different numbers (up to 20) of POD bases at
$t = 2$. It shows that, as long as we take $M = 6$, the reduced-order FD solutions
obtained satisfy the accuracy requirement (i.e., the error does not exceed $4\times
10^{-4}$). Thus, the theoretical results are consistent with those of the numerical
calculations (they are all $O(10^{-4})$). Hence, it is shown that the PODROEFD
scheme (1.2.17)–(1.2.18) is effective for the 2D parabolic equation. See the
advantages and benefits of POD in the Foreword and Introduction of the book.
1.3 A POD-BASED REDUCED-ORDER EXTRAPOLATION
FINITE DIFFERENCE SCHEME FOR THE 2D
NONSTATIONARY STOKES EQUATION
In this section, a PODROEFD scheme is given for the 2D nonstationary Stokes
equation. The PODROEFD scheme producing the solutions on the time span
$[T_0,T]$ $(T_0\ll T)$ is obtained by extrapolation and iteration from the information
on the short initial time span $[0,T_0]$. The guidelines for choosing the number of POD
basis vectors and renewing the POD bases are provided, and an implementation for solv-
ing the PODROEFD scheme is given. Some numerical experiments are provided
to illustrate the feasibility and efficiency of the PODROEFD scheme for simu-
lating a channel flow with local expansion. The PODROEFD scheme for the 2D
nonstationary Stokes equation is based on the work in [117].
FIGURE 1.3.1 The domain of Problem 1.3.1.
1.3.1 Background for the 2D Nonstationary Stokes Equation
The channel flow with local expansion in this section is motivated by applica-
tions, for example, the capillary blood vessel flow in the human
body, where microchannel flows with expansion geometries occur. It can be
simplified into a nonstationary Stokes channel flow with local expansion [16,34,
144]. Its geometry is approximately made of two squares at the top and bottom
of the channel. The flow domain is shown in Fig. 1.3.1. Thus, the mathematical
model for the channel flow with local expansion is described by the following
system of the nonstationary Stokes equation.
Problem 1.3.1. Find $(u,v)$ and $p$ such that, for $T > 0$,
$$\begin{cases} \dfrac{\partial u}{\partial t} - \dfrac{1}{Re}\Big(\dfrac{\partial^2 u}{\partial x^2} + \dfrac{\partial^2 u}{\partial y^2}\Big) + \dfrac{\partial p}{\partial x} = f, & (x,y,t)\in\Omega\times(0,T],\\[6pt] \dfrac{\partial v}{\partial t} - \dfrac{1}{Re}\Big(\dfrac{\partial^2 v}{\partial x^2} + \dfrac{\partial^2 v}{\partial y^2}\Big) + \dfrac{\partial p}{\partial y} = g, & (x,y,t)\in\Omega\times(0,T],\\[6pt] \dfrac{\partial u}{\partial x} + \dfrac{\partial v}{\partial y} = 0, & (x,y,t)\in\Omega\times(0,T],\\[6pt] u(x,y,t) = \varphi_1(x,y,t),\ v(x,y,t) = \varphi_2(x,y,t), & (x,y,t)\in\partial\Omega\times(0,T],\\[4pt] u(x,y,0) = u_0(x,y),\ v(x,y,0) = v_0(x,y), & (x,y)\in\Omega, \end{cases} \qquad (1.3.1)$$
where $(u,v)$ is the velocity vector, $p$ is the pressure, $T$ is the total time duration,
$Re$ is the Reynolds number, $f(x,y,t)$ and $g(x,y,t)$ are the given body forces
in the $x$-direction and $y$-direction, respectively, and $\varphi_1(x,y,t)$, $\varphi_2(x,y,t)$,
$u_0(x,y)$, and $v_0(x,y)$ are four given functions.
The nonstationary Stokes equation constitutes an important system of equa-
tions in fluid dynamics with many applications beyond the channel flow with
local expansion [8,16,18,34,42,144,146,183]. For Problem 1.3.1, generally there
are no analytical solutions. One has to rely on numerical solutions. The classical
FD scheme with second-order accuracy in time and spatial variable discretiza-
tions is one of the simplest and most convenient high accuracy methods for
finding numerical solutions of the nonstationary Stokes equation. However, the
approach involves a large number of degrees of freedom (unknown quantities).
Especially, due to the TE accumulation in the computational process, it may
also appear not to converge after several computation steps. Thus, an important
task is to lessen the degrees of freedom so as to reduce the computational load
and save CPU time in the process in a way that guarantees sufficiently accurate
numerical solutions. Here, we employ the POD method to reduce its degrees of
freedom.
1.3.2 A Classical Finite Difference Scheme for the 2D Nonstationary Stokes Equation and the Generation of Snapshots
Let $N$ be a positive integer, $\Delta t = T/N$ the time step increment, $\Delta x$ and
$\Delta y$ the spatial step increments in the $x$- and $y$-directions, respectively, and let
$u^n_{j+\frac12,k}$, $v^n_{j,k+\frac12}$, and $p^n_{j,k}$ denote the values of the functions $u$, $v$, and $p$ at the points
$(x_{j+\frac12},y_k,t_n)$, $(x_j,y_{k+\frac12},t_n)$, and $(x_j,y_k,t_n)$ $(0\leqslant j\leqslant J$, $0\leqslant k\leqslant K$, $0\leqslant n\leqslant
N)$, respectively. Then Problem 1.3.1 is known to have the following classical
FD scheme with second-order accuracy:
$$\left[\frac{u_{j+\frac12,k} - u_{j-\frac12,k}}{\Delta x} + \frac{v_{j,k+\frac12} - v_{j,k-\frac12}}{\Delta y}\right]^{n+1} = 0, \qquad (1.3.2)$$
$$u^{n+1}_{j+\frac12,k} = F^n_{j+\frac12,k} - \frac{2\Delta t}{\Delta x}\big(p^n_{j+1,k} - p^n_{j,k}\big) + 2\Delta t\, f^n_{j+\frac12,k}, \qquad (1.3.3)$$
$$v^{n+1}_{j,k+\frac12} = G^n_{j,k+\frac12} - \frac{2\Delta t}{\Delta y}\big(p^n_{j,k+1} - p^n_{j,k}\big) + 2\Delta t\, g^n_{j,k+\frac12}, \qquad (1.3.4)$$
where
$$F^n_{j+\frac12,k} = u^{n-1}_{j+\frac12,k} + \frac{2\Delta t}{Re}\left[\frac{u_{j+\frac12,k-1} - 2u_{j+\frac12,k} + u_{j+\frac12,k+1}}{\Delta y^2} + \frac{u_{j-\frac12,k} - 2u_{j+\frac12,k} + u_{j+\frac32,k}}{\Delta x^2}\right]^n,$$
$$G^n_{j,k+\frac12} = v^{n-1}_{j,k+\frac12} + \frac{2\Delta t}{Re}\left[\frac{v_{j-1,k+\frac12} - 2v_{j,k+\frac12} + v_{j+1,k+\frac12}}{\Delta x^2} + \frac{v_{j,k-\frac12} - 2v_{j,k+\frac12} + v_{j,k+\frac32}}{\Delta y^2}\right]^n.$$
Inserting (1.3.3) and (1.3.4) into (1.3.2), one can obtain the approximate FD
scheme of the Poisson equation for $p$ as follows:
$$\left[\frac{p_{j-1,k} - 2p_{j,k} + p_{j+1,k}}{\Delta x^2} + \frac{p_{j,k-1} - 2p_{j,k} + p_{j,k+1}}{\Delta y^2}\right]^n = R, \qquad (1.3.5)$$
where
$$R = \frac{1}{2\Delta t\Delta x}\Big[2\Delta t\big(f_{j+\frac12,k} - f_{j-\frac12,k}\big) + F_{j+\frac12,k} - F_{j-\frac12,k}\Big]^n + \frac{1}{2\Delta t\Delta y}\Big[2\Delta t\big(g_{j,k+\frac12} - g_{j,k-\frac12}\big) + G_{j,k+\frac12} - G_{j,k-\frac12}\Big]^n.$$
If $\Delta t\,Re \leqslant 4$ and $4\Delta t \leqslant \min\{Re\Delta x^2, Re\Delta y^2\}$, or $\Delta t = o(Re\Delta x^2, Re\Delta y^2)$,
by taking the same approach as in the proof of Theorem 1.2.1, we can easily prove
that the FD scheme (1.3.3)–(1.3.5) is stable (see also [34,192]). If the solution
triple $(u,v,p)$ of Problem 1.3.1 is sufficiently regular, we have the following
error estimates:
$$\big\|\big(u(x_{j+\frac12},y_k,t_n),\ v(x_j,y_{k+\frac12},t_n),\ p(x_j,y_k,t_n)\big) - \big(u^n_{j+\frac12,k},\ v^n_{j,k+\frac12},\ p^n_{j,k}\big)\big\|_2 = O(\Delta t^2, \Delta x^2, \Delta y^2), \quad n = 1,2,\cdots,N, \qquad (1.3.6)$$
where $\|\cdot\|_2$ denotes the $l^2$ norm of the vector.
where  · 2 denotes the l2 norm of the vector.
If the Reynolds number $Re$, the body forces $f(x,y,t)$ in the $x$-direction and
$g(x,y,t)$ in the $y$-direction, the boundary value functions $\varphi_1(x,y,t)$
and $\varphi_2(x,y,t)$, the initial value functions $u_0(x,y)$ and $v_0(x,y)$, the time step incre-
ment $\Delta t$, and the spatial step increments $\Delta x$ and $\Delta y$ are given, let
$u^0_{j+\frac12,k} = u^1_{j+\frac12,k} = u_0(x_{j+\frac12},y_k)$ and $v^0_{j,k+\frac12} = v^1_{j,k+\frac12} = v_0(x_j,y_{k+\frac12})$. By solving the FD
scheme (1.3.3)–(1.3.5), we can obtain the classical FD solutions $u^n_{j+\frac12,k}$, $v^n_{j,k+\frac12}$,
and $p^n_{j,k}$ $(0\leqslant j\leqslant J$, $0\leqslant k\leqslant K$, $1\leqslant n\leqslant N)$.
Set $u_i^n = u^n_{j+\frac12,k}$, $v_i^n = v^n_{j,k+\frac12}$, and $p_i^n = p^n_{j,k}$ $(i = kJ+j+1$, $1\leqslant i\leqslant m$,
$m = (J+1)(K+1)$, $0\leqslant j\leqslant J-1$, $0\leqslant k\leqslant K-1)$, respectively. We may
choose the first $L$ solutions from the set $\{u_i^n, v_i^n, p_i^n\}_{n=1}^{N}$ $(1\leqslant i\leqslant m)$, containing
$N\times m$ elements, to construct a set $\{u_i^l, v_i^l, p_i^l\}_{l=1}^{L}$ $(1\leqslant i\leqslant m$, $L\ll N)$ containing
$L\times m$ elements, which are just the snapshots.
Remark 1.3.1. The snapshots are drawn from the first $L$ FD solutions here.
However, when one computes real-world problems, one may instead choose
the ensemble of snapshots from physical system trajectories by drawing samples
from experiments.
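Forming the three snapshot matrices $A_s$ $(s = u, v, p)$ from stored FD fields can be sketched as follows; the helper name and the toy data below are assumptions for illustration only:

```python
import numpy as np

def snapshot_matrices(u_hist, v_hist, p_hist, L):
    """Form the three m x L snapshot matrices A_s = (s_i^l), s = u, v, p,
    from the first L FD solutions; each *_hist is a sequence of flattened
    field arrays, one per time level."""
    A = {}
    for name, hist in (("u", u_hist), ("v", v_hist), ("p", p_hist)):
        A[name] = np.column_stack([np.ravel(h) for h in hist[:L]])
    return A

# Toy check with m = 6 grid values and N = 5 > L = 3 time levels.
hist = [np.arange(6) + n for n in range(5)]
A = snapshot_matrices(hist, hist, hist, L=3)
assert A["u"].shape == (6, 3)
assert np.all(A["p"][:, 1] == np.arange(6) + 1)
```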
1.3.3 Formulations of the POD Basis and the POD-Based Reduced-Order Extrapolating Finite Difference Scheme
In the following, we first formulate a set of POD bases and then establish the
PODROEFD scheme with second-order accuracy for the nonstationary Stokes
equation.
The set of snapshots $\{u_i^l, v_i^l, p_i^l\}_{l=1}^{L}$ $(1\leqslant i\leqslant m)$ in Section 1.3.2 can
be expressed as the following three $m\times L$ matrices $A_s = (s_i^l)_{m\times L}$ $(s = u,v,p)$.
Let $\lambda_{s1}\geqslant\lambda_{s2}\geqslant\cdots\geqslant\lambda_{s\tilde{M}_s} > 0$ $(\tilde{M}_s = \mathrm{rank}(A_s))$ be the positive
eigenvalues of the matrices $A_sA_s^T$ $(s = u,v,p)$ and let the matrices $U_s =
(\phi_{s1},\phi_{s2},\cdots,\phi_{sm})$ and $V_s = (\varphi_{s1},\varphi_{s2},\cdots,\varphi_{sL})$ consist of the
orthonormal eigenvectors of the matrices $A_sA_s^T$ and $A_s^TA_s$, respectively. Then
it follows easily from linear algebra that $A_s = U_sD_{\tilde{M}_s}V_s^T$ (where $D_{\tilde{M}_s} =
\mathrm{diag}\{\sqrt{\lambda_{s1}},\sqrt{\lambda_{s2}},\cdots,\sqrt{\lambda_{s\tilde{M}_s}},0,\cdots,0\}$). Thus, $U_s = (\phi_{s1},\phi_{s2},\cdots,\phi_{sm})$ $(s =
u,v,p)$ make up three sets of POD bases.
The number of mesh points is far larger than the number of snapshots drawn,
that is, $m\gg L$, so the order $m$ of the matrices $A_sA_s^T$ is far larger than the order $L$
of the matrices $A_s^TA_s$. But the numbers of their positive eigenvalues are identi-
cal, so we may first solve the eigenequation corresponding to the matrices $A_s^TA_s$
to find the eigenvectors $\varphi_{sj}$ $(j = 1,2,\ldots,\tilde{M}_s)$, and then, by the relationship
$\phi_{sj} = A_s\varphi_{sj}/\sqrt{\lambda_{sj}}$ $(j = 1,2,\ldots,\tilde{M}_s$, $s = u,v,p)$, we obtain the eigenvectors
$\phi_{sj}$ $(j = 1,2,\ldots,\tilde{M}_s)$ corresponding to the nonzero eigenvalues of the matrices
$A_sA_s^T$. Thus, by using the same technique as in Section 1.2.2, three optimal
orthonormal POD bases $\Phi_s = (\phi_{s1},\phi_{s2},\cdots,\phi_{sM_s})$ $(M_s\ll L$, $s = u,v,p)$
are formed by the first $M_s$ $(0 < M_s\leqslant\tilde{M}_s\leqslant L)$ columns of the three groups of
POD bases $U_s = (\phi_{s1},\phi_{s2},\cdots,\phi_{sm})$. Further, by the properties of the norm of
a matrix (see (1.2.12) in Section 1.2.2), the following error estimates hold:
$$\|A_s - A_{M_s}\|_{2,2} = \|A_s - \Phi_s\Phi_s^TA_s\|_{2,2} = \sqrt{\lambda_{s(M_s+1)}}, \quad s = u,v,p, \qquad (1.3.7)$$
where $\|A\|_{2,2} = \sup_x \|Ax\|_2/\|x\|_2$, $\|\cdot\|_2$ is the $l^2$ vector norm, $A_{M_s} =
U_sD_{M_s}V_s^T$, and $D_{M_s} = \mathrm{diag}\{\sqrt{\lambda_{s1}},\sqrt{\lambda_{s2}},\cdots,\sqrt{\lambda_{sM_s}},0,\cdots,0\}$ $(0 < M_s\leqslant
\tilde{M}_s\leqslant L)$.
Let $\varepsilon^l$ $(l = 1,2,\ldots,L)$ denote the unit column vectors whose $l$th component
is $1$. Set
$$s_m^n = (s_1^n, s_2^n, \cdots, s_m^n)^T, \quad s = u,v,p,\ n = 1,2,\cdots,N. \qquad (1.3.8)$$
Then, from (1.3.7), the following error estimates hold:
$$\|s_m^l - \Phi_s\Phi_s^Ts_m^l\|_2 = \|(A_s - \Phi_s\Phi_s^TA_s)\varepsilon^l\|_2 \leqslant \|A_s - \Phi_s\Phi_s^TA_s\|_{2,2}\|\varepsilon^l\|_2 = \sqrt{\lambda_{s(M_s+1)}}, \quad s = u,v,p,\ l = 1,2,\cdots,L, \qquad (1.3.9)$$
which shows that $\Phi_s\Phi_s^Ts_m^l$ is the optimal approximation of $s_m^l$ with error
$\sqrt{\lambda_{s(M_s+1)}}$ $(s = u,v,p)$.
By using the notation of (1.3.8), the classical FD scheme (1.3.3)–(1.3.5) is
now written in the following vector scheme:
$$\begin{cases} p_m^n = \tilde{F}_1(u_m^n, v_m^n), & 0\leqslant n\leqslant N,\\[4pt] \big(u_m^{n+1}, v_m^{n+1}\big)^T = \big(u_m^{n-1}, v_m^{n-1}\big)^T + \tilde{F}(u_m^n, v_m^n, p_m^n), & 1\leqslant n\leqslant N-1, \end{cases} \qquad (1.3.10)$$
where $\tilde{F}$ and $\tilde{F}_1$ are determined from (1.3.3)–(1.3.4) and (1.3.5), respectively.
Set
$$\big(u_m^{*n}, v_m^{*n}, p_m^{*n}\big)^T = \big(\Phi_u\alpha_{M_u}^n,\ \Phi_v\beta_{M_v}^n,\ \Phi_p\gamma_{M_p}^n\big)^T, \qquad (1.3.11)$$
where $u_m^{*n} = (u_1^{*n},u_2^{*n},\cdots,u_m^{*n})^T$, $v_m^{*n} = (v_1^{*n},v_2^{*n},\cdots,v_m^{*n})^T$, and $p_m^{*n} = (p_1^{*n},
p_2^{*n},\cdots,p_m^{*n})^T$ are three column vectors corresponding to $u$, $v$, and $p$, re-
spectively, and $\alpha_{M_u}^n = (\alpha_1,\alpha_2,\cdots,\alpha_{M_u})^T$, $\beta_{M_v}^n = (\beta_1,\beta_2,\cdots,\beta_{M_v})^T$, and
$\gamma_{M_p}^n = (\gamma_1,\gamma_2,\cdots,\gamma_{M_p})^T$. If $u_m^n$, $v_m^n$, and $p_m^n$ in (1.3.10) are approximately re-
placed with $u_m^{*n}$, $v_m^{*n}$, and $p_m^{*n}$ of (1.3.11) $(n = 0,1,2,\cdots,N)$, by noting that
the three matrices $\Phi_u$, $\Phi_v$, and $\Phi_p$ are formed with orthonormal eigenvec-
tors, the PODROEFD scheme for the channel flow with local expansion, with
$M_u + M_v + M_p$ $(M_u,M_v,M_p\ll L\ll m)$ unknowns, is given by
$$\begin{cases}
\alpha^n_{M_u} = \Phi_u^T u^n_m,\;\; \beta^n_{M_v} = \Phi_v^T v^n_m,\;\; \gamma^n_{M_p} = \Phi_p^T p^n_m, & n = 1,2,\cdots,L,\\[2pt]
\gamma^n_{M_p} = \Phi_p^T\tilde{F}_1\big(\Phi_u\alpha^{n-1}_{M_u},\Phi_v\beta^{n-1}_{M_v}\big), & n = L,L+1,\cdots,N,\\[2pt]
\big(\alpha^{n+1}_{M_u},\beta^{n+1}_{M_v}\big)^T = \big(\alpha^{n-1}_{M_u},\beta^{n-1}_{M_v}\big)^T + \tilde{G}\big(\alpha^n_{M_u},\beta^n_{M_v},\gamma^n_{M_p}\big), & n = L,L+1,\cdots,N-1,
\end{cases} \tag{1.3.12}$$
where $\tilde{G}\big(\alpha^n_{M_u},\beta^n_{M_v},\gamma^n_{M_p}\big) = \big(\Phi_u^T,\Phi_v^T\big)^T\tilde{F}\big(\Phi_u\alpha^n_{M_u},\Phi_v\beta^n_{M_v},\Phi_p\gamma^n_{M_p}\big)$.
After $\alpha^n_{M_u}$, $\beta^n_{M_v}$, and $\gamma^n_{M_p}$ are obtained from (1.3.12), the reduced-order FD solutions for the PODROEFD scheme are obtained by
$$u^{*n}_m = \Phi_u\alpha^n_{M_u},\quad v^{*n}_m = \Phi_v\beta^n_{M_v},\quad p^{*n}_m = \Phi_p\gamma^n_{M_p},\quad n = 0,1,2,\cdots,N. \tag{1.3.13}$$
Thus, in component form, $u^{*n}_{j+\frac12,k} = u^{*n}_i$, $v^{*n}_{j,k+\frac12} = v^{*n}_i$, and $p^{*n}_{j,k} = p^{*n}_i$ ($0 \leqslant j \leqslant J$, $0 \leqslant k \leqslant K$, $i = k(J+1)+j+1$, $1 \leqslant i \leqslant m = (K+1)(J+1)$); the PODROEFD scheme with second-order accuracy is attained.
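The two-stage structure of (1.3.12)–(1.3.13) — project the first $L$ classical solutions, then march only the reduced coefficients — can be sketched as follows. This is a toy illustration with a made-up linear one-step map standing in for $\tilde{F}$ (the book's scheme is a two-level leapfrog with separate $u$, $v$, $p$ fields; a single field and a one-step update keep the sketch short):

```python
import numpy as np

rng = np.random.default_rng(2)
m, L, Ms, N = 50, 6, 3, 20

# Hypothetical full-order one-step map standing in for F-tilde in (1.3.10);
# a mild contraction keeps the toy iteration bounded.
B = 0.1 * rng.standard_normal((m, m))
F = lambda w: B @ w

# "Classical" solutions on the first few levels provide the snapshots.
w = rng.standard_normal(m)
snaps = []
for n in range(L):
    w = w + F(w)
    snaps.append(w)
A = np.column_stack(snaps)

lam, V = np.linalg.eigh(A.T @ A)
lam, V = lam[::-1], V[:, ::-1]
Phi = A @ V[:, :Ms] / np.sqrt(lam[:Ms])        # POD basis

# First L levels: project (first line of (1.3.12)); afterwards: march only
# the Ms reduced coefficients, as in the remaining lines of (1.3.12).
alpha = [Phi.T @ A[:, n] for n in range(L)]
for n in range(L, N):
    alpha.append(alpha[-1] + Phi.T @ F(Phi @ alpha[-1]))

w_star = Phi @ alpha[-1]                       # reconstruction (1.3.13)
assert w_star.shape == (m,) and alpha[-1].shape == (Ms,)
```

After level $L$, each step updates only $M_s$ unknowns instead of $m$, which is the source of the cost reduction the section claims.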
Remark 1.3.2. The classical FD scheme (1.3.3)–(1.3.5) includes $3m$ degrees of freedom at each time level, while the system of Eqs. (1.3.12) and (1.3.13) at each time level (when $n > L$) contains only $M_u + M_v + M_p$ degrees of freedom ($M_u,M_v,M_p \ll L \ll m$; for example, in Section 1.3.6, $L = 20$, $M_u = M_v = M_p = 6$, but $m = 136\times 10^4$). Hence the system of Eqs. (1.3.12) and (1.3.13) is a PODROEFD scheme with far fewer degrees of freedom and second-order accuracy, and it involves no repetitive computations. Thus, it has the same advantages and efficiency as the PODROEFD scheme in Section 1.2.
1.3.4 Error Estimates and a Criterion for Renewing the POD
Basis
In the following, we estimate the errors between the reduced-order FD solutions
for (1.3.12) and (1.3.13) and the classical FD solutions for (1.3.3)–(1.3.5).
26 Proper Orthogonal Decomposition Methods for Partial Differential Equations
By using (1.3.13), we write the second and third equations of (1.3.12) in the following vector form:
$$p^{*n}_m = \tilde{F}_1\big(u^{*n-1}_m,v^{*n-1}_m\big),\quad L+1 \leqslant n \leqslant N,$$
$$\big(u^{*n+1}_m,v^{*n+1}_m\big)^T = \big(u^{*n-1}_m,v^{*n-1}_m\big)^T + \tilde{F}\big(u^{*n}_m,v^{*n}_m,p^{*n}_m\big),\quad L \leqslant n \leqslant N-1, \tag{1.3.14}$$
where their stability conditions are also made to satisfy $\Delta t\,Re \leqslant 4$ and $4\Delta t \leqslant \min\{Re\,\Delta x^2, Re\,\Delta y^2\}$ (see [34] or [192]). Set $e^n = (u^n_m,v^n_m,p^n_m)^T - (u^{*n}_m,v^{*n}_m,p^{*n}_m)^T$. By the first equation of (1.3.12) and (1.3.13), we obtain
$$e^n = (u^n_m,v^n_m,p^n_m)^T - \big(\Phi_u\Phi_u^T u^n_m,\Phi_v\Phi_v^T v^n_m,\Phi_p\Phi_p^T p^n_m\big)^T, \tag{1.3.15}$$
where $n = 1,2,\cdots,L$. From (1.3.9) and (1.3.15), we obtain
$$\|e^n\|_2 \leqslant \sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}},\quad n = 1,2,\cdots,L. \tag{1.3.16}$$
By (1.3.10) and (1.3.14), we have
$$\|e^{n+1}\|_2 \leqslant \|e^{n-1}\|_2 + M\|e^n\|_2,\quad n = L,L+1,\cdots,N-1, \tag{1.3.17}$$
where $M = \max\{\Delta t\,Re/4,\,4\Delta t/(Re\,\Delta x^2),\,4\Delta t/(Re\,\Delta y^2)\} \leqslant 1$ under the stability conditions $\Delta t\,Re \leqslant 4$ and $4\Delta t \leqslant \min\{Re\,\Delta x^2, Re\,\Delta y^2\}$. If $\Delta t = o(\Delta x,\Delta y,Re\,\Delta x^2,Re\,\Delta y^2)$, then $M$ is a small positive constant. Summing (1.3.17) from $L$ to $n-1$ yields
$$\|e^n\|_2 \leqslant \|e^{L-1}\|_2 + \|e^L\|_2 + M\sum_{i=L}^{n-1}\|e^i\|_2,\quad n = L+1,L+2,\cdots,N. \tag{1.3.18}$$
If we write $\xi_n = M\sum_{i=L}^{n-1}\|e^i\|_2 + \|e^{L-1}\|_2 + \|e^L\|_2$ and $\delta = M+1$, then we have $\|e^n\|_2 \leqslant \xi_n$ and $\xi_n - \xi_{n-1} = M\|e^{n-1}\|_2$ ($n \geqslant 2$). Therefore, we get
$$\xi_n \leqslant (M+1)\xi_{n-1} = \delta\xi_{n-1} \leqslant \delta^2\xi_{n-2} \leqslant \cdots \leqslant \delta^{n-L}\xi_L = \delta^{n-L}\big(\|e^{L-1}\|_2 + \|e^L\|_2\big). \tag{1.3.19}$$
Set $C(\delta^n) = 2\delta^{n-L}$. We obtain from (1.3.16)
$$\|e^n\|_2 \leqslant 2\delta^{n-L}\Big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\Big] = C(\delta^n)\Big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\Big], \tag{1.3.20}$$
where $n = L+1,L+2,\ldots,N$. Synthesizing the above discussions yields the following result.
Theorem 1.3.1. If $(u^n_m,v^n_m,p^n_m)^T$ ($n = 1,2,\cdots,N$) are the solution vectors formed by the solutions of the classical FD scheme (1.3.3)–(1.3.5) and $(u^{*n}_m,v^{*n}_m,p^{*n}_m)^T$ ($n = 1,2,\cdots,N$) are the reduced-order FD solutions of the PODROEFD scheme (1.3.12) and (1.3.13), then we have the following error estimates:
$$\big\|(u^n_m,v^n_m,p^n_m) - (u^{*n}_m,v^{*n}_m,p^{*n}_m)\big\|_2 \leqslant C(\delta^n)\Big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\Big],\quad n = 1,2,\cdots,N,$$
where $C(\delta^n) = 1$ ($1 \leqslant n \leqslant L$) and $C(\delta^n) = 2(1+M)^{n-L}$ ($L+1 \leqslant n \leqslant N$), together with $M = \max\{\Delta t\,Re/4,\,4\Delta t/(Re\,\Delta x^2),\,4\Delta t/(Re\,\Delta y^2)\}$.
Since the absolute value of each component of a vector is no greater than the norm of the vector, combining (1.3.6) with Theorem 1.3.1 yields the following result.

Theorem 1.3.2. The exact solution of the nonstationary Stokes equation and the reduced-order FD solutions obtained from the PODROEFD scheme (1.3.12) and (1.3.13) satisfy the following error estimates: for $1 \leqslant n \leqslant N$,
$$\begin{aligned}
&|u(x_{j+\frac12},y_k,t_n) - u^{*n}_{j+\frac12,k}| + |v(x_j,y_{k+\frac12},t_n) - v^{*n}_{j,k+\frac12}| + |p(x_j,y_k,t_n) - p^{*n}_{j,k}|\\
&\quad = O\Big(C(\delta^n)\Big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\Big],\,\Delta t^2,\Delta x^2,\Delta y^2\Big).
\end{aligned}$$
Remark 1.3.3. The error estimates of Theorem 1.3.2 provide a guide for choosing the number of POD bases, namely, take $M_u$, $M_v$, and $M_p$ such that $\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}} = O(\Delta t^2,\Delta x^2,\Delta y^2)$. Here the factors $C(\delta^n) = 2(1+M)^{n-L}$ ($L+1 \leqslant n \leqslant N$) are caused by the extrapolating iteration, and they may act as a guide for renewing the POD bases, namely, when $C(\delta^n)\big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\big] > \max(\Delta t^2,\Delta x^2,\Delta y^2)$, it is time to renew the POD bases.
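The renewal test of Remark 1.3.3 is cheap to evaluate at run time. A sketch with illustrative numbers (the projection error, growth factor, and tolerance below are assumptions for the demonstration, not values from the book's experiment):

```python
import math

# Assumed illustrative numbers: projection error at the chosen cutoff,
# growth factor M from the stability bound, start level L, and the
# tolerance playing the role of max(dt^2, dx^2, dy^2).
proj_err = 4e-4          # sqrt(lam_u) + sqrt(lam_v) + sqrt(lam_p) at cutoff
M, L, tol = 0.05, 20, 1e-2

def needs_renewal(n):
    """Renewal test of Remark 1.3.3: C(delta^n) * proj_err > tol."""
    C = 2.0 * (1.0 + M) ** (n - L)
    return C * proj_err > tol

# Find the first time level at which the POD basis must be renewed.
n = L + 1
while not needs_renewal(n):
    n += 1
```

The extrapolation error factor grows geometrically in $n - L$, so the basis renewal happens at predictable intervals once $M$ and the eigenvalue tail are known.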
1.3.5 Implementation for the POD-Based Reduced-Order
Extrapolating Finite Difference Scheme
The implementation for the PODROEFD scheme (1.3.12) and (1.3.13) consists
of the following five steps.
Step 1. Classical FD computation and formulation of snapshots
Solving the classical FD scheme (1.3.3)–(1.3.5) at the first few $L$ steps (empirically, say, $L = 20$) yields the classical FD solutions $u^n_{j+\frac12,k}$, $v^n_{j,k+\frac12}$, and $p^n_{j,k}$ ($0 \leqslant j \leqslant J$, $0 \leqslant k \leqslant K$, $1 \leqslant n \leqslant L$) and further produces a set of snapshots $\{u^l_i,v^l_i,p^l_i\}_{l=1}^{L}$ ($1 \leqslant i \leqslant m$) with $L\times m$ elements, where $u^n_i = u^n_{j+\frac12,k}$, $v^n_i = v^n_{j,k+\frac12}$, and $p^n_i = p^n_{j,k}$ ($i = k(J+1)+j+1$, $1 \leqslant i \leqslant m$, $m = (J+1)(K+1)$, $0 \leqslant j \leqslant J$, $0 \leqslant k \leqslant K$), respectively.
Step 2. Snapshot matrices $A_s$ and eigenvalues of $A_s^T A_s$
Let $s^n_m = (s^n_1,s^n_2,\cdots,s^n_m)^T$ ($s = u,v,p$, $n = 1,2,\cdots,N$). Formulate the snapshot matrices $A_s = (s^l_i)_{m\times L}$ ($s = u,v,p$) and find the eigenvalues $\lambda_{s1} \geqslant \lambda_{s2} \geqslant \cdots \geqslant \lambda_{s\tilde{M}_s} > 0$ ($\tilde{M}_s = \operatorname{rank} A_s$) and corresponding eigenvectors $\varphi_{sj}$ ($j = 1,2,\cdots,\tilde{M}_s$, $s = u,v,p$) of $A_s^T A_s$.
Step 3. Choice of POD bases
For the desired error tolerance $\mu = O(\Delta t^2,\Delta x^2,\Delta y^2)$, decide the numbers $M_s$ ($M_s \leqslant \tilde{M}_s$, $s = u,v,p$) of POD bases such that $\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}} \leqslant \mu$, and formulate the POD bases $\Phi_s = (\phi_{s1},\phi_{s2},\cdots,\phi_{sM_s})$ (where $\phi_{sj} = A_s\varphi_{sj}/\sqrt{\lambda_{sj}}$, $j = 1,2,\cdots,M_s$, $s = u,v,p$).
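Step 3 reduces to scanning the eigenvalue tails. The sketch below uses assumed, rapidly decaying spectra and picks one common cutoff $M$ for all three fields (the book allows different $M_u$, $M_v$, $M_p$; a common cutoff keeps the sketch simple):

```python
import numpy as np

# Assumed illustrative spectra (rapidly decaying) for s = u, v, p.
spectra = {
    "u": np.array([1.0, 1e-2, 1e-4, 1e-6, 1e-8, 1e-10]),
    "v": np.array([0.5, 5e-3, 5e-5, 5e-7, 5e-9, 5e-11]),
    "p": np.array([0.2, 2e-3, 2e-5, 2e-7, 2e-9, 2e-11]),
}
mu = 1e-3   # tolerance mu = O(dt^2, dx^2, dy^2)

def choose_modes(spectra, mu):
    """Smallest common cutoff M with sum_s sqrt(lambda_{s,M+1}) <= mu (Step 3)."""
    Lmax = min(len(v) for v in spectra.values())
    for M in range(1, Lmax):
        if sum(np.sqrt(v[M]) for v in spectra.values()) <= mu:
            return M
    return Lmax

M = choose_modes(spectra, mu)
```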
Step 4. Solve the PODROEFD model
Solving the PODROEFD scheme (1.3.12) and (1.3.13) yields the reduced-order solution vectors $u^{*n}_m = (u^{*n}_1,u^{*n}_2,\cdots,u^{*n}_m)$, $v^{*n}_m = (v^{*n}_1,v^{*n}_2,\cdots,v^{*n}_m)$, and $p^{*n}_m = (p^{*n}_1,p^{*n}_2,\cdots,p^{*n}_m)$; further, the process produces the components $u^{*n}_{j+\frac12,k} = u^{*n}_i$, $v^{*n}_{j,k+\frac12} = v^{*n}_i$, and $p^{*n}_{j,k} = p^{*n}_i$ ($0 \leqslant j \leqslant J$, $0 \leqslant k \leqslant K$, $i = k(J+1)+j+1$, $1 \leqslant i \leqslant m = (K+1)(J+1)$).
Step 5. Check accuracy and renew the POD bases to continue
Set $M = \max\{0.25\Delta t\,Re,\,4\Delta t/(Re\,\Delta x^2),\,4\Delta t/(Re\,\Delta y^2)\}$. If
$$2(1+M)^{n-L}\Big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\Big] \leqslant \mu,$$
then $u^{*n}_m = (u^{*n}_1,u^{*n}_2,\cdots,u^{*n}_m)$, $v^{*n}_m = (v^{*n}_1,v^{*n}_2,\cdots,v^{*n}_m)$, and $p^{*n}_m = (p^{*n}_1,p^{*n}_2,\cdots,p^{*n}_m)$ ($n = 1,2,\cdots,N$) are exactly the solutions satisfying the desired accuracy. Otherwise, i.e., if
$$2(1+M)^{n-L}\Big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\Big] > \mu,$$
put $(s^l_1,s^l_2,\cdots,s^l_m) = \big(s^{*(n-l)}_1,s^{*(n-l)}_2,\cdots,s^{*(n-l)}_m\big)$ ($l = 1,2,\cdots,L$, $s = u,v,p$) and return to Step 2.
Remark 1.3.4. Step 5 could be adapted as follows: if $\|u^{*n-1}_m - u^{*n}_m\|_2 \geqslant \|u^{*n}_m - u^{*n+1}_m\|_2$, $\|v^{*n-1}_m - v^{*n}_m\|_2 \geqslant \|v^{*n}_m - v^{*n+1}_m\|_2$, and $\|p^{*n-1}_m - p^{*n}_m\|_2 \geqslant \|p^{*n}_m - p^{*n+1}_m\|_2$ ($n = L,L+1,\cdots,N-1$), then $(u^{*n}_m,v^{*n}_m,p^{*n}_m)$ ($n = 1,2,\cdots,N$) are the reduced-order solution vectors for the PODROEFD scheme (1.3.12) and (1.3.13) satisfying the desired accuracy. Otherwise, i.e., if $\|u^{*n-1}_m - u^{*n}_m\|_2 < \|u^{*n}_m - u^{*n+1}_m\|_2$, $\|v^{*n-1}_m - v^{*n}_m\|_2 < \|v^{*n}_m - v^{*n+1}_m\|_2$, or $\|p^{*n-1}_m - p^{*n}_m\|_2 < \|p^{*n}_m - p^{*n+1}_m\|_2$ ($n = L,L+1,\cdots,N-1$), let $s^i_m = s^{*(n-i)}_m$ ($i = 1,2,\cdots,L$, $s = u,v,p$) and return to Step 2.
FIGURE 1.3.2 When Re = 1000, the top chart (A) and the bottom chart (B) are the contours of
the classical FD solution and the reduced-order FD solution of the velocity (u,v) at time instant
t = 9, respectively.
1.3.6 Some Numerical Experiments for the 2D Nonstationary
Stokes Equation
In the following, we present some numerical experiments for the channel flow with local expansions, i.e., with two square protrusions at the middle of the top and the bottom of the channel, to validate the feasibility and efficiency of the PODROEFD scheme with second-order accuracy and to show that the numerical results are consistent with the theoretical estimates.
We assume that the computational domain $\bar{\Omega}$ is given as in Fig. 1.3.1. Take $Re = 1000$ and $f = g = 0$. Except for the inflow on the left boundary with the periodic flow velocity $(u,v) = (0.1(y-2)(8-y)\sin 2\pi t,\,0)$ ($2 \leqslant y \leqslant 8$) and the outflow on the right boundary with velocity $(u,v)$ satisfying $v = 0$ and $\partial u/\partial x = 0$, all initial and boundary value conditions are taken as 0. We divide $\bar{\Omega}$ into a mesh by taking the spatial step increments as $\Delta x = \Delta y = 10^{-2}$ and the time step increment as $\Delta t = 0.001$.
We have obtained a numerical solution $\big(u^n_{j+\frac12,k}, v^n_{j,k+\frac12}\big)$ of the velocity $(u,v)$ and a numerical solution $p^n_{j,k}$ of the pressure $p$ by the classical FD schemes (1.3.3)–(1.3.5) with $n = 9000$ (i.e., $t = 9$), which are depicted graphically as the contours in Figs. 1.3.2(A) and 1.3.3(A), respectively.
We have employed only the first $L = 20$ numerical solutions $\big(u^n_{j+\frac12,k}, v^n_{j,k+\frac12}, p^n_{j,k}\big)$ ($n = 1,2,\cdots,20$) from the classical FD scheme, forming 20 snapshot vectors $(u^n_m,v^n_m,p^n_m)$ ($n = 1,2,\cdots,20$, $m = 136\times 10^4$). Afterwards, by Step 2 in Section 1.3.5, we have found three groups of 20 eigenvalues $\lambda_{uj}$, $\lambda_{vj}$, and $\lambda_{pj}$ ($j = 1,2,\cdots,20$), arranged in nonincreasing order, and three groups of 20 corresponding eigenvectors $\varphi_{uj}$, $\varphi_{vj}$, and $\varphi_{pj}$ ($j = 1,2,\cdots,20$). By computing, we have found that
FIGURE 1.3.3 When Re = 1000, the top chart (A) and the bottom chart (B) are the classical
FD solution and the reduced-order FD solution of the pressure field p at the time instant t = 9,
respectively.
the eigenvalues satisfy $\sqrt{\lambda_{u7}} + \sqrt{\lambda_{v7}} + \sqrt{\lambda_{p7}} \leqslant 4\times 10^{-4}$, which guides us to take the first six eigenvectors in each of the three groups as the POD bases by Step 3 in Section 1.3.5, i.e., the POD bases are taken as $\Phi_u = (\phi_{u1},\phi_{u2},\cdots,\phi_{u6})$ (with $\phi_{uj} = A_u\varphi_{uj}/\sqrt{\lambda_{uj}}$, $j = 1,2,\cdots,6$), $\Phi_v = (\phi_{v1},\phi_{v2},\cdots,\phi_{v6})$ (with $\phi_{vj} = A_v\varphi_{vj}/\sqrt{\lambda_{vj}}$, $j = 1,2,\cdots,6$), and $\Phi_p = (\phi_{p1},\phi_{p2},\cdots,\phi_{p6})$ (with $\phi_{pj} = A_p\varphi_{pj}/\sqrt{\lambda_{pj}}$, $j = 1,2,\cdots,6$). Finally, when $n = 9000$ (i.e., at $t = 9$) and $Re = 10^3$, the numerical solutions $u^{*n}_{j+\frac12,k}$, $v^{*n}_{j,k+\frac12}$, and $p^{*n}_{j,k}$ are obtained from the PODROEFD scheme by Step 4 in Section 1.3.5. It is necessary to renew the POD bases once, at $t = 5$. The reduced-order FD solutions at $t = 9$ are depicted graphically in Figs. 1.3.2(B) and 1.3.3(B), respectively. Each pair of charts (A) and (B) in Figs. 1.3.2 and 1.3.3 exhibits close similarity, but the reduced-order FD solutions obtained from the PODROEFD scheme are computed with higher efficiency than the classical FD solutions, since the PODROEFD scheme needs far fewer degrees of freedom and thus can also significantly reduce the accumulation of truncation errors (TEs) in the computational process.
Fig. 1.3.4 shows the mean absolute errors (MAEs) between the reduced-order FD solutions obtained from the second-order accurate PODROEFD scheme with different numbers of POD bases and the solutions obtained from the classical FD schemes (1.3.3)–(1.3.5) when Re = 1000 at t = 9.

FIGURE 1.3.4 The MAEs between the reduced-order FD solutions with different numbers of POD bases and the classical FD solution when Re = 1000 at t = 9.

Comparing the classical FD scheme with the PODROEFD scheme containing six optimal bases in numerical simulations with Re = 1000 at t = 9, we have found that, for the classical FD schemes (1.3.3)–(1.3.5), containing 3 × 136 × 10⁴ unknown quantities at each time level, the required computing time was about 48 minutes on a laptop, while, for the PODROEFD scheme with six optimal bases, including only 3 × 6 unknown quantities at the same time level, the corresponding time was only 16 seconds on the same laptop; the speedup ratio is thus 180:1, while the errors between their solutions do not exceed 4 × 10⁻⁴. Though our experiments in a sense recompute what we have already computed by the classical FD scheme at the first few L = 20 steps, when we treat actual problems, we may construct the snapshots and the POD basis from experimental samples and then solve the second-order accurate PODROEFD scheme directly, without solving the classical FD schemes (1.3.3)–(1.3.5). Thus, considerable computing time and computational resources are saved. This also shows that finding the approximate solutions of the nonstationary Stokes equation by the second-order accurate PODROEFD scheme is computationally effective. The numerical results are consistent with the theoretical ones, as neither the theoretical nor the numerical errors exceed 4 × 10⁻⁴.
In addition, if one uses the reduced-order FD scheme with first-order time accuracy, as in [113], to find the reduced-order FD solution at t = 9, it is necessary to take the time step as Δt = 10⁻⁴ and carry out 9 × 10⁴ steps in order to obtain the same accuracy as here. Thus, its computing load is 10 times that of the second-order accurate PODROEFD scheme, its TE accumulation in the computational process increases significantly, and it repeats the computations of the classical first-order time accurate FD scheme on the same time interval [0,T]. Therefore, the second-order accurate PODROEFD scheme here is different from the existing reduced-order schemes and offers an improvement over the existing reduced-order methods [2,5,11,19,38,40,45,48,51,52,57,58,62,63,91,113,118,122,128,135,137,141,155,156,166,167,170,171,184–186,199].
1.4 POD-BASED REDUCED-ORDER EXTRAPOLATING FINITE
DIFFERENCE SCHEME FOR 2D SHALLOW WATER
EQUATION
In this section, we employ the POD method to establish a PODROEFD scheme with very few degrees of freedom for the 2D shallow water equations (SWEs) with sediment concentration. We also provide the error estimates between the exact solution and the classical FD solutions as well as those between the exact solution and the PODROEFD solutions. Moreover, we present two numerical simulations to illustrate that the PODROEFD scheme can greatly reduce the computational load. Thus, both the feasibility and efficiency of the PODROEFD scheme are validated. The main reference for the work here is [96].
1.4.1 Model Background and Survey for the 2D Shallow Water
Equation
A system of SWEs can be used to describe the propagation and evolution of short waves in shallow waters; it is also referred to as the Saint-Venant system (see [36]). It has extensive applications in ocean, environmental, and hydraulic engineering. In particular, in coastal engineering, [46] discusses applications to various problems in open-channel flows in rivers and reservoirs, tidal flows in estuary and coastal water regions, bore wave propagation, and the stationary hydraulic jump in rivers. Because the SWEs are a system of nonlinear PDEs, they generally have no analytical solutions, and one has to rely on numerical solutions.
Here we mention a number of references on the study of numerical solutions
for the 2D SWEs including only the continuity equation and the momentum
equation, modeling the effects of the water depth and the velocity of fluid; for
example, the finite volume (FV) method on unstructured triangular meshes in
Anatasiou and Chan [9], the upwind methods in Bermudez and Vazquez [17],
the parallel block preconditioning techniques in Cai and Navon [22], the opti-
mal control technique of finite element (FE) limited-area in Chen and Navon
[30], the least-squares FE method in Liang and Hsu [74], the FD Lax–Wendroff
weighted essentially nonoscillatory (WENO) schemes in Lu and Qiu [77], the
FE simulation technique in Navon [131], the FD WENO schemes in Qiu and
Shu [136], the Roe approximate Riemann solver technique in Rogers et al.
[142], the essentially nonoscillatory and WENO schemes with the exact conser-
vation property in Vukovic and Sopta [172], the explicit multiconservation FD
scheme in Wang [173], the composite FV method on unstructured meshes in
Wang and Liu [174], the high-order FD WENO schemes in Xing and Shu [180],
the high-order well-balanced FV WENO schemes and discontinuous Galerkin
(DG) methods in Xing and Shu [181], the positivity preserving high-order well-
balanced DG methods in Xing et al. [182], the dispersion–correction FD scheme
in Yoon et al. [189], the nonoscillatory FV method in Yuan and Song [190], the
surface gradient method in Zhou et al. [196], and the total variation diminishing FD scheme in Wang et al. [175]. Nevertheless, the transport and sedimentation of silt and sand are important processes causing change to the natural environment, such as the formation and evolution of deltas, the expansion of alluvial plains, and the detouring of rivers. They also cause serious problems that should be carefully addressed in hydraulic works, such as irrigation systems, transportation channels, hydroelectric stations, ports, and other coastal engineering works. A model for the 2D SWEs including sediment concentration is available in [191], with some numerical methods based on an optimal control approach (see [198]) and a mixed FE technique (see [125,126]).
It is well known that the model based on the classical FD scheme in [191] is one of the simplest and most convenient methods for solving the 2D SWEs with sediment concentration. However, it contains many degrees of freedom, i.e., unknown quantities. Its computational complexity is therefore high, with the previously mentioned adverse effects of error accumulation. It is thus advantageous to build a reduced-order FD scheme with sufficiently high accuracy and very few degrees of freedom. Here, we continue with the PODROEFD scheme for the 2D SWEs with sediment concentration.
Some POD-based reduced-order models for the 2D SWEs already exist (see, e.g., [29,151,152,199]), but these models do not include the sediment concentration effect. They also employ the numerical solutions obtained from classical numerical methods on the entire time span $[0,T]$ to formulate the POD basis, build the POD-based reduced-order models, and then recompute the solutions on the same time span $[0,T]$; this, too, amounts to repeated computations.
Here we thoroughly improve the existing methods: we adopt only the first few snapshots of given classical FD numerical solutions for the 2D SWEs on a very short time span $[0,T_0]$ ($T_0 \ll T$) to formulate the POD basis and build the PODROEFD scheme, before finding the numerical solutions on the total time span $[0,T]$ by extrapolation and iteration as well as POD basis updates. Thus, an important advantage of the POD method is kept, i.e., using the given data on a very short time span $[0,T_0]$ to predict the future physics on the whole time span $[T_0,T]$. So this study is useful as a motivating example for real-world computational problems and big data.
In the following, we first devote ourselves to the formulation of the snap-
shots and the POD basis from the classical FD solutions for the 2D SWEs with
sediment concentration and the PODROEFD scheme, and then we provide the
error estimates of solutions and the implementation of the algorithm for the PO-
DROEFD scheme. We will provide two numerical simulation examples to verify
the reliability and effectiveness for our PODROEFD scheme.
1.4.2 The Governing Equations and the Classical FD Scheme for
the 2D Shallow Water Equation Including Sediment
Concentration
Let $\Omega \subset R^2$ be a bounded and connected domain. The governing equations for the 2D SWEs with sediment concentration are as follows (see [46,191], with notational adaptation):
$$\frac{\partial Z}{\partial t} + \frac{\partial(Zu)}{\partial x} + \frac{\partial(Zv)}{\partial y} = \gamma\left(\frac{\partial^2 Z}{\partial x^2} + \frac{\partial^2 Z}{\partial y^2}\right),\quad (x,y,t)\in\Omega\times(0,T), \tag{1.4.1}$$
$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} - fv = A\left(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\right) - g\frac{\partial(Z+z_b)}{\partial x} - \frac{C_D u\sqrt{u^2+v^2}}{Z},\quad (x,y,t)\in\Omega\times(0,T), \tag{1.4.2}$$
$$\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + fu = A\left(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\right) - g\frac{\partial(Z+z_b)}{\partial y} - \frac{C_D v\sqrt{u^2+v^2}}{Z},\quad (x,y,t)\in\Omega\times(0,T), \tag{1.4.3}$$
$$\frac{\partial S}{\partial t} + u\frac{\partial S}{\partial x} + v\frac{\partial S}{\partial y} = \varepsilon\left(\frac{\partial^2 S}{\partial x^2} + \frac{\partial^2 S}{\partial y^2}\right) + \frac{\alpha\omega(S - S^*)}{Z},\quad (x,y,t)\in\Omega\times(0,T), \tag{1.4.4}$$
$$\frac{\partial z_b}{\partial t} + g_b\left(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\right) = \frac{\alpha\omega(S - S^*)}{\rho},\quad (x,y,t)\in\Omega\times(0,T), \tag{1.4.5}$$
where $\gamma$ (m²/s) and $A$ (m²/s) are two viscosity coefficients, $(u,v)$ (m/s) is the velocity vector, $Z = z - z_b$ (m) the water depth, $z$ (m) the surface height, $z_b$ (m) the height of the river bed (see Fig. 1.4.1), $f$ (1/s) the Coriolis constant, $g$ (m/s²) the gravitational constant, $C_D$ (nondimensional) the coefficient of bottom drag, $\varepsilon$ (m²/s) the diffusion coefficient of sand, $\omega$ (m/s) the falling speed of suspended sediment particles, $S$ (kg/m³) the concentration of sediment in water, $\rho$ (kg/m³) the density of dry sand (taken as a constant), $\alpha$ (nondimensional) the constant of sediment variety, $S^* = K[(u^2+v^2)^{3/2}/(g\omega Z)]^l$ the capability of sediment transport in a bottom bed (a given empirical function), $g_b = \Lambda(u^2+v^2)^{3/2}Z^p d^q[1 - v_c/(u^2+v^2)^{1/2}]$ also a given empirical function, $v_c$ (m/s) the velocity of sediment mass transport (a given function, too), $d$ (m) the diameter of sediment, and $K$ (kg/m³), $l$ (nondimensional), $\Lambda$ (s³/m²), $p$ (nondimensional), and $q = -p$ are all empirical constants.
The boundary conditions are assumed as follows:
$$Z(x,y,t) = Z_0(x,y,t),\; u(x,y,t) = u_0(x,y,t),\; v(x,y,t) = v_0(x,y,t),\; S(x,y,t) = S_0(x,y,t),\; z_b(x,y,t) = z_{b0}(x,y,t),\quad (x,y,t)\in\partial\Omega\times(0,T), \tag{1.4.6}$$

FIGURE 1.4.1 Water profile.

where $Z_0(x,y,t)$, $u_0(x,y,t)$, $v_0(x,y,t)$, $S_0(x,y,t)$, and $z_{b0}(x,y,t)$ are all given functions. The initial conditions are assumed as follows:
$$Z(x,y,0) = Z^0(x,y),\; u(x,y,0) = u^0(x,y),\; v(x,y,0) = v^0(x,y),\; S(x,y,0) = S^0(x,y),\; z_b(x,y,0) = z_b^0(x,y),\quad (x,y)\in\Omega, \tag{1.4.7}$$
where $Z^0(x,y)$, $u^0(x,y)$, $v^0(x,y)$, $S^0(x,y)$, and $z_b^0(x,y)$ are also all given functions.
Let $\Delta t$ be the time step, let $\Delta x$ and $\Delta y$ be the spatial steps, and let $N = T/\Delta t$. By discretizing (1.4.1), (1.4.4), and (1.4.5) at a reference point $(x_j,y_k,t_n)$, (1.4.2) at a reference point $(x_{j+\frac12},y_k,t_n)$, and (1.4.3) at a reference point $(x_j,y_{k+\frac12},t_n)$, we obtain the classical FD scheme for the 2D SWEs with sediment concentration as follows:
$$\begin{aligned}
Z^{n+1}_{j,k} = {}& \Delta t\gamma\left(\frac{Z^n_{j+1,k} - 2Z^n_{j,k} + Z^n_{j-1,k}}{\Delta x^2} + \frac{Z^n_{j,k+1} - 2Z^n_{j,k} + Z^n_{j,k-1}}{\Delta y^2}\right)\\
&- \Delta t\left(\frac{u^n_{j+\frac12,k}Z^n_{j+\frac12,k} - u^n_{j-\frac12,k}Z^n_{j-\frac12,k}}{\Delta x} + \frac{v^n_{j,k+\frac12}Z^n_{j,k+\frac12} - v^n_{j,k-\frac12}Z^n_{j,k-\frac12}}{\Delta y}\right) + Z^n_{j,k},
\end{aligned} \tag{1.4.8}$$
$$\begin{aligned}
u^{n+1}_{j+\frac12,k} = {}& -\frac{\Delta t\,C_D\,u^n_{j+\frac12,k}\sqrt{(u^n_{j+\frac12,k})^2 + (v^n_{j+\frac12,k})^2}}{Z^n_{j+\frac12,k}}\\
&+ \Delta t A\left(\frac{u^n_{j+\frac32,k} - 2u^n_{j+\frac12,k} + u^n_{j-\frac12,k}}{\Delta x^2} + \frac{u^n_{j+\frac12,k+1} - 2u^n_{j+\frac12,k} + u^n_{j+\frac12,k-1}}{\Delta y^2}\right)\\
&- \Delta t\left(\frac{u^n_{j+\frac12,k}(u^n_{j+1,k} - u^n_{j,k})}{\Delta x} + \frac{v^n_{j+\frac12,k}(u^n_{j+\frac12,k+\frac12} - u^n_{j+\frac12,k-\frac12})}{\Delta y}\right)\\
&- g\Delta t\,\frac{Z^n_{j+1,k} + z^n_{b,j+1,k} - Z^n_{j,k} - z^n_{b,j,k}}{\Delta x} + u^n_{j+\frac12,k} + \Delta t\,(fv)^n_{j+\frac12,k},
\end{aligned} \tag{1.4.9}$$
$$\begin{aligned}
v^{n+1}_{j,k+\frac12} = {}& -\frac{\Delta t\,C_D\,v^n_{j,k+\frac12}\sqrt{(u^n_{j,k+\frac12})^2 + (v^n_{j,k+\frac12})^2}}{Z^n_{j,k+\frac12}} + v^n_{j,k+\frac12}\\
&+ \Delta t A\left(\frac{v^n_{j+1,k+\frac12} - 2v^n_{j,k+\frac12} + v^n_{j-1,k+\frac12}}{\Delta x^2} + \frac{v^n_{j,k+\frac32} - 2v^n_{j,k+\frac12} + v^n_{j,k-\frac12}}{\Delta y^2}\right)\\
&- \Delta t\left(\frac{u^n_{j,k+\frac12}(v^n_{j+\frac12,k+\frac12} - v^n_{j-\frac12,k+\frac12})}{\Delta x} + \frac{v^n_{j,k+\frac12}(v^n_{j,k+1} - v^n_{j,k})}{\Delta y}\right)\\
&- g\Delta t\,\frac{Z^n_{j,k+1} + z^n_{b,j,k+1} - Z^n_{j,k} - z^n_{b,j,k}}{\Delta y} - \Delta t\,(fu)^n_{j,k+\frac12},
\end{aligned} \tag{1.4.10}$$
$$\begin{aligned}
S^{n+1}_{j,k} = {}& \Delta t\varepsilon\left(\frac{S^n_{j+1,k} - 2S^n_{j,k} + S^n_{j-1,k}}{\Delta x^2} + \frac{S^n_{j,k+1} - 2S^n_{j,k} + S^n_{j,k-1}}{\Delta y^2}\right)\\
&- \frac{\Delta t\,u^n_{j,k}(S^n_{j+\frac12,k} - S^n_{j-\frac12,k})}{\Delta x} - \frac{\Delta t\,v^n_{j,k}(S^n_{j,k+\frac12} - S^n_{j,k-\frac12})}{\Delta y}\\
&+ \frac{\alpha\omega\Delta t\,(S^n_{j,k} - S^{*n}_{j,k})}{Z^n_{j,k}} + S^n_{j,k},
\end{aligned} \tag{1.4.11}$$
$$z^{n+1}_{b,j,k} = z^n_{b,j,k} - \frac{\Delta t\,g^n_{b,j,k}(u^n_{j+\frac12,k} - u^n_{j-\frac12,k})}{\Delta x} - \frac{\Delta t\,g^n_{b,j,k}(v^n_{j,k+\frac12} - v^n_{j,k-\frac12})}{\Delta y} + \frac{\alpha\omega\Delta t\,(S^n_{j,k} - S^{*n}_{j,k})}{\rho}, \tag{1.4.12}$$
where $n = 1,2,\cdots,N$, $j = 1,2,\cdots,J$, $k = 1,2,\cdots,K$, $J = \max\{[\,|x_1-x_2|/\Delta x\,]: (x_1,y),(x_2,y)\in\Omega\}$, and $K = \max\{[\,|y_1-y_2|/\Delta y\,]: (x,y_1),(x,y_2)\in\Omega\}$.
In order to prove the stability of the FD scheme (1.4.8)–(1.4.12), it is necessary to introduce the following discrete Gronwall lemma (see [80]).

Lemma 1.4.1 (The discrete Gronwall lemma). If $\{a_n\}$ and $\{b_n\}$ are two nonnegative sequences and $\{c_n\}$ is a positive monotone sequence that satisfy
$$a_n + b_n \leqslant c_n + \bar\lambda\sum_{i=0}^{n-1}a_i \;\;(\bar\lambda > 0),\qquad a_0 + b_0 \leqslant c_0,$$
then
$$a_n + b_n \leqslant c_n\exp(n\bar\lambda),\quad n = 0,1,2,\cdots.$$
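The lemma can be illustrated numerically: take $b_n = 0$, a constant $c_n = c$, and let $a_n$ saturate the hypothesis with equality; the conclusion $a_n \leqslant c\,e^{n\bar\lambda}$ then holds at every index (the sequence below is an assumed example, not from the text):

```python
import math

lam, c = 0.1, 1.0
a = [c]                                   # a_0 <= c_0 = c
for n in range(1, 30):
    a.append(c + lam * sum(a))            # saturate the hypothesis with equality

# a_n = c*(1+lam)^n here, and (1+lam)^n <= exp(n*lam), so the bound holds.
for n, an in enumerate(a):
    assert an <= c * math.exp(n * lam) + 1e-12
```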
For the classical FD scheme (1.4.8)–(1.4.12), we have the following result.
Theorem 1.4.2. Under the conditions $4\Delta t\,(|u| + |v|) \leqslant \min\{\gamma,\varepsilon,A\}$ and $4\Delta t\max\{\gamma,A,\varepsilon\} \leqslant \min\{\Delta x^2,\Delta y^2\}$, the classical FD scheme (1.4.8)–(1.4.12) is locally stable. Further, we have the following error estimates:
$$\begin{aligned}
&|Z(x_j,y_k,t_n) - Z^n_{j,k}| + |u(x_{j+\frac12},y_k,t_n) - u^n_{j+\frac12,k}| + |v(x_j,y_{k+\frac12},t_n) - v^n_{j,k+\frac12}|\\
&\quad + |S(x_j,y_k,t_n) - S^n_{j,k}| + |z_b(x_j,y_k,t_n) - z^n_{b,j,k}| = O(\Delta t,\Delta x^2,\Delta y^2),\\
&\qquad 1\leqslant n\leqslant N,\; 1\leqslant j\leqslant J,\; 1\leqslant k\leqslant K.
\end{aligned} \tag{1.4.13}$$
Proof. If $\gamma\Delta t/\Delta x^2 \leqslant 1/4$, $\gamma\Delta t/\Delta y^2 \leqslant 1/4$, and $4\Delta t(|u|+|v|) \leqslant \min\{\gamma,\varepsilon,A\}$, implying $4\Delta t(\|u\|_\infty + \|v\|_\infty) \leqslant \min\{\gamma,\varepsilon,A\}$, by (1.4.8), we have
$$\begin{aligned}
|Z^{n+1}_{j,k}| \leqslant{}& \left(1 - \frac{2\gamma\Delta t}{\Delta x^2} - \frac{2\gamma\Delta t}{\Delta y^2}\right)|Z^n_{j,k}| + \frac{\gamma\Delta t}{\Delta x^2}\big(|Z^n_{j+1,k}| + |Z^n_{j-1,k}|\big) + \frac{\gamma\Delta t}{\Delta y^2}\big(|Z^n_{j,k+1}| + |Z^n_{j,k-1}|\big)\\
&+ \frac{\Delta t}{\Delta x}\big(|u^n_{j+\frac12,k}|\cdot|Z^n_{j+\frac12,k}| + |u^n_{j-\frac12,k}|\cdot|Z^n_{j-\frac12,k}|\big) + \frac{\Delta t}{\Delta y}\big(|v^n_{j,k+\frac12}|\cdot|Z^n_{j,k+\frac12}| + |v^n_{j,k-\frac12}|\cdot|Z^n_{j,k-\frac12}|\big)\\
\leqslant{}& \left(1 + \frac{2\Delta t}{\Delta x}\|u\|_\infty + \frac{2\Delta t}{\Delta y}\|v\|_\infty\right)\|Z^n\|_\infty \leqslant \left(1 + \frac{\gamma}{2\Delta x} + \frac{\gamma}{2\Delta y}\right)\|Z^n\|_\infty,
\end{aligned} \tag{1.4.14}$$
where $\|\cdot\|_\infty$ is the $L^\infty(\Omega)$ norm. Thus, from (1.4.14), we obtain
$$\|Z^n\|_\infty \leqslant \left(1 + \frac{\gamma}{2\Delta x} + \frac{\gamma}{2\Delta y}\right)\|Z^{n-1}\|_\infty,\quad n = 1,2,\cdots,N. \tag{1.4.15}$$
By summing (1.4.15) from 1 to $n$, we obtain
$$\|Z^n\|_\infty \leqslant \|Z^0\|_\infty + \left(\frac{\gamma}{2\Delta x} + \frac{\gamma}{2\Delta y}\right)\sum_{j=0}^{n-1}\|Z^j\|_\infty,\quad n = 1,2,\cdots,N. \tag{1.4.16}$$
By applying the discrete Gronwall lemma (Lemma 1.4.1) to (1.4.16), we obtain
$$\|Z^n\|_\infty \leqslant \|Z^0\|_\infty\exp\left(\frac{n\gamma}{2\Delta x} + \frac{n\gamma}{2\Delta y}\right),\quad n = 1,2,\cdots,N, \tag{1.4.17}$$
showing that the sequence $\{Z^{n+1}\}$ is locally stable when the time interval $[0,T]$ is finite. Further, it is convergent by the stability theories of FD schemes (see [34] or [76]). As the water depth is positive, there are two positive constants $\beta_1$ and $\beta_2$ such that
$$\beta_1 \leqslant \|Z^n\|_\infty \leqslant \beta_2,\quad n = 0,1,2,\cdots,N. \tag{1.4.18}$$
If $4A\Delta t \leqslant \min\{\Delta x^2,\Delta y^2\}$ and $4\varepsilon\Delta t \leqslant \min\{\Delta x^2,\Delta y^2\}$, by using the same technique as in proving (1.4.15), from (1.4.9)–(1.4.12) and (1.4.18), we obtain
$$\|u^{n+1}\|_\infty \leqslant \left(1 + \frac{A}{2\Delta x} + \frac{A}{2\Delta y}\right)\|u^n\|_\infty + \frac{2g\Delta t}{\Delta x}\|Z^n\|_\infty + \frac{2g\Delta t}{\Delta x}\|z^n_b\|_\infty + \Delta t\|f\|_\infty\|v^n\|_\infty + \frac{C_D A}{4\beta_1}\big(\|u^n\|_\infty + \|v^n\|_\infty\big), \tag{1.4.19}$$
$$\|v^{n+1}\|_\infty \leqslant \left(1 + \frac{A}{2\Delta x} + \frac{A}{2\Delta y}\right)\|v^n\|_\infty + \frac{2g\Delta t}{\Delta y}\|Z^n\|_\infty + \frac{2g\Delta t}{\Delta y}\|z^n_b\|_\infty + \Delta t\|f\|_\infty\|u^n\|_\infty + \frac{C_D A}{4\beta_1}\big(\|u^n\|_\infty + \|v^n\|_\infty\big), \tag{1.4.20}$$
$$\|S^{n+1}\|_\infty \leqslant \left(1 + \frac{\varepsilon}{2\Delta x} + \frac{\varepsilon}{2\Delta y}\right)\|S^n\|_\infty + \frac{\alpha\omega\Delta t}{\beta_1}\big(\|S^n\|_\infty + \|S^{*n}\|_\infty\big), \tag{1.4.21}$$
$$\|z^{n+1}_b\|_\infty \leqslant \|z^n_b\|_\infty + 2\Delta t\|g^n_b\|_\infty\left(\frac{\|u^n\|_\infty}{\Delta x} + \frac{\|v^n\|_\infty}{\Delta y}\right) + \frac{\alpha\omega\Delta t}{\rho}\big(\|S^n\|_\infty + \|S^{*n}\|_\infty\big), \tag{1.4.22}$$
where $n = 0,1,2,\cdots,N-1$.
Note that
$$\|S^{*n}\|_\infty \leqslant K\big[\big(\|u^n\|^2_\infty + \|v^n\|^2_\infty\big)^{3/2}/(g\omega\beta_1)\big]^{l} \leqslant K\big[(A/\Delta t)^{2l}/(g\omega\beta_1)^{l}\big]\big(\|u^n\|_\infty + \|v^n\|_\infty\big)$$
and
$$\|g^n_b\|_\infty \leqslant \Lambda\beta_2^{p}d^{q}\big(\|u^n\|^2_\infty + \|v^n\|^2_\infty\big)^{3/2} \leqslant \Lambda\beta_2^{p}d^{q}(A/\Delta t)^3.$$
Set
$$\begin{aligned}
\Pi = \max\big\{&K\big[(A/\Delta t)^{2l}/(g\omega\beta_1)^{l}\big]\alpha\omega\Delta t/(\beta_1+\rho) + A/(2\Delta x) + A/(2\Delta y) + \Delta t\|f\|_\infty + 2C_D A/(4\beta_1) + 2\Lambda\beta_2^{p}d^{q}A^3/(\Delta x\,\Delta t^2),\\
&2g\Delta t/\Delta x + 2g\Delta t/\Delta y + K\big[(A/\Delta t)^{2l}/(g\omega\beta_1)^{l}\big]\alpha\omega\Delta t/(\beta_1+\rho),\\
&A/(2\Delta x) + A/(2\Delta y) + \Delta t\|f\|_\infty + 2\Lambda\beta_2^{p}d^{q}A^3/(\Delta y\,\Delta t^2) + 2C_D A/(4\beta_1),\\
&\varepsilon/(2\Delta x) + \varepsilon/(2\Delta y) + \alpha\omega\Delta t/(\beta_1+\rho)\big\}.
\end{aligned}$$
By (1.4.19)–(1.4.22), we obtain
$$\|u^n\|_\infty + \|v^n\|_\infty + \|S^n\|_\infty + \|z^n_b\|_\infty \leqslant (1+\Pi)\big(\|u^{n-1}\|_\infty + \|v^{n-1}\|_\infty + \|S^{n-1}\|_\infty + \|z^{n-1}_b\|_\infty\big) + \left(\frac{2g\Delta t}{\Delta x} + \frac{2g\Delta t}{\Delta y}\right)\|Z^0\|_\infty\exp\left(\frac{N\gamma}{2\Delta x} + \frac{N\gamma}{2\Delta y}\right), \tag{1.4.23}$$
where $n = 1,2,\cdots,N$. By summing (1.4.23) from 1 to $n$ and using the discrete Gronwall lemma (Lemma 1.4.1), we obtain
$$\begin{aligned}
\|u^n\|_\infty + \|v^n\|_\infty + \|S^n\|_\infty + \|z^n_b\|_\infty \leqslant{}& \big(\|u^0\|_\infty + \|v^0\|_\infty + \|S^0\|_\infty + \|z^0_b\|_\infty\big)\exp(n\Pi)\\
&+ \left(\frac{2gn\Delta t}{\Delta x} + \frac{2gn\Delta t}{\Delta y}\right)\|Z^0\|_\infty\exp\left(\frac{N\gamma}{2\Delta x} + \frac{N\gamma}{2\Delta y}\right)\exp(n\Pi),
\end{aligned}$$
for $n = 1,2,\cdots,N$. (1.4.24)
As the time interval $[0,T]$ is finite, the RHS of (1.4.24) is bounded. Thus, from the stability theories of FD schemes (see Theorem 1.1.1 or [34,76]) and (1.4.24), we conclude that the classical FD scheme (1.4.8)–(1.4.12) is locally stable, and so its FD solutions are convergent.
By Taylor's formula, expanding (1.4.8), (1.4.11), and (1.4.12) at the reference point $(x_j,y_k,t_n)$, (1.4.9) at the reference point $(x_{j+\frac12},y_k,t_n)$, and (1.4.10) at the reference point $(x_j,y_{k+\frac12},t_n)$, or from the approximations of the difference quotients to the derivatives, we obtain the error estimates (1.4.13).
Remark 1.4.1. The classical FD scheme (1.4.8)–(1.4.12) is only first-order accurate in time. If one wants higher-order time approximation accuracy, it is necessary to replace the time difference quotients on the LHSs of (1.4.8)–(1.4.12) with higher-order ones (for example, central differences or second-order differences).
Remark 1.4.2. The Coriolis constant $f$, the gravitational acceleration $g$, the viscosity coefficients $\gamma$ and $A$, the bottom drag coefficient $C_D$, the sand diffusion coefficient $\varepsilon$, the falling speed of suspended sediment particles $\omega$, the sediment mass transport velocity $v_c$, the sediment diameter $d$, the empirical constants $K$, $l$, $\Lambda$, $p$, and $q$, the boundary value functions $Z_0(x,y,t)$, $u_0(x,y,t)$, $v_0(x,y,t)$, $S_0(x,y,t)$, and $z_{b0}(x,y,t)$, the initial value functions $Z^0(x,y)$, $u^0(x,y)$, $v^0(x,y)$, $S^0(x,y)$, and $z^0_b(x,y)$, the time step increment $\Delta t$, and the spatial step increments $\Delta x$ and $\Delta y$ are the requisites for computing the classical FD solutions $u^n_{j+\frac12,k}$, $v^n_{j,k+\frac12}$, $S^n_{j,k}$, $Z^n_{j,k}$, and $z^n_{b,j,k}$ ($0 \leqslant j \leqslant J$, $0 \leqslant k \leqslant K$, $1 \leqslant n \leqslant N$) for the 2D SWEs with sediment concentration through the FD scheme (1.4.8)–(1.4.12).
on the bottom, fold, omelet fashion, turn on to a hot platter, garnish.
Serve plain or with cream sauce or with thin drawn butter. Or, grind
oyster plant, cook in a small quantity of water, add cream or butter
and mix with plain potato. Finely-sliced raw celery or chopped raw
onion and parsley may be used in the potato sometimes.
Baked Potatoes and Milk
Wash potatoes well, scrubbing with vegetable brush. Cut out any
imperfect spots. Bake until just done. Break up, skins and all, into
nice rich milk and eat like bread and milk for supper. A favorite dish
of some of the early settlers in Michigan.
Bread and Milk with Sweet Fruits
Add nice ripe blueberries to bread and milk for supper, also ripe
black raspberries or baked sweet apples. They are all delicious.
★ Apples in Oil
Simmer finely-sliced onion in oil 5–10 m. without browning; add salt
and a little water, then apples which have been washed, quartered,
cored and sliced without paring. Sprinkle lightly with salt. Cover and
cook until apples are just tender, not broken. Serve for breakfast or
supper, or with a meat dish instead of a vegetable, for luncheon or
dinner.
The onion may be omitted. Use a little sugar when apples are
very sour.
Onion Apples
Simmer sliced onions in oil, with salt, in baking pan. Place apples,
pared and cored, on top of the onions; sprinkle with sugar and put
¼ teaspn. in each cavity. Cover, bake; uncover and brown. Serve for
luncheon, or as garnish for meat dish.
TRUE MEATS
“And God said, Behold I have given you every herb bearing seed,
which is upon the face of all the earth, and every tree, in the which
is the fruit of a tree yielding seed; to you it shall be for meat.” Gen.
1:29.
“The food which God gave Adam in his sinless state is the best
for man’s use as he seeks to regain that sinless state.
“The intelligence displayed by many dumb animals approaches so
closely to human intelligence that it is a mystery.
“The animals see and hear and love and fear and suffer.
“They manifest sympathy and tenderness toward their
companions in suffering.
“They form attachments for man which are not broken without
great suffering to them.
“Think of the cruelty to animals that meat eating involves and its
effect on those who inflict and those who behold it. How it destroys
the tenderness with which we should regard these creatures of
God!”
The high price of flesh foods, the knowledge of the waste matter
in the blood of even healthy animals which remains in their flesh
after death, and the well authenticated reports of the increasing
prevalence of most loathsome diseases among them, cause a
growing desire among thinking people to take their food at first
hand, before it has become a part of the body of some lower animal.
So, the great food question of the day is—“What shall we use in
the place of meat?”
Nuts, legumes (peas, beans, lentils and peanuts) and eggs
contain, as do flesh meats, an excess of the proteid or muscle-
building elements (nuts and legumes a much larger proportion than
flesh), so we may combine these with fruits, vegetables and some of
the cereals (rice, for instance) and have a perfect proportion of food
elements.
It must be borne in mind, however, that proteid foods must be
used sparingly, since an excess of these foods causes some of the
most serious diseases.
The bulk of our foods should be made up of fruits and vegetables
and some of the less hearty cereals and breads.
NUTS
As nuts occupy the highest round of the true meat ladder, we give a
variety of recipes for their use, following with legumes and eggs in
their order.
With nuts, as with other foods, the simplest way to use them is
the best. There are greater objections to foods than that they are
difficult of digestion, and in the case of nuts, that objection is
overcome by thorough mastication; in fact, they are an aid to the
cultivation of that important function in eating.
For those who are not able to chew their food, nuts may be
ground into butter.
Another aid to the digestion of nuts is the use with them of an
abundance of acid fruits. Fruits and nuts seem to be each the
complement of the other, the nuts, as well, preventing the
unpleasant effects felt by some from the free use of fruits.
“No investigations have been found on record which demonstrate
any actual improvement in the digestibility of nuts due to salt.”—
M. E. Jaffa, M. S., Professor of Nutrition, University of California.
Be sure that nuts are fresh. Rancid nuts are no better than rancid
butter. Shelled nuts do not keep as well as those in the shell.
Almonds stand at the head of the nut family. It is better to buy
them in the shell as shelled almonds are apt to have bitter ones
among them. Almonds should not be partaken of largely with the
brown covering on, but are better to be blanched.
To Blanch Almonds—Throw them into perfectly boiling water, let
them come to the boiling point again, drain, pour cold water over
them and slip the skins off with the thumb and finger. Drop the
meats on to a dry towel, and when they are all done, roll them in
the towel for a moment, then spread them on plates or trays to dry.
They must be dried slowly as they color easily, and the sweet
almond flavor is gone when even a delicate color is developed. For
butter they must be very dry, really brittle.
Brazil Nuts—castanas—cream nuts, do not require blanching, as
their covering does not seem to be objectionable. They are rich in oil
and are most valuable nuts. Slice and dry them for grinding.
Filberts—hazelnuts—cobnuts—Barcelonas, also may be eaten
without blanching, though they may be heated in the oven (without
browning) or put into boiling water and much of the brown covering
removed. They are at their best unground, as they do not give an
especially agreeable flavor to cooked foods. They may be made into
butter.
Brazil nuts and filberts often agree with those who cannot use
English walnuts and peanuts.
English Walnuts—The covering of the English walnut is
irritating and would better be removed when practicable. This is
done by the hot water method, using a knife instead of the thumb
and finger. The unblanched nuts may, however, be used in
moderation by nearly every one.
Butternuts and black walnuts blanch more easily than the English
walnut.
When whole halves of such nuts as hickory nuts, pecans or
English walnuts are required, throw the nuts into boiling water for
two or three minutes, or steam them for three or four minutes, or
wrap them in woolen cloths wrung out of boiling water. Crack, and
remove meats at once. Do not leave nuts in water long enough to
soak the meats.
Pinenuts come all ready blanched. When they require washing,
pour boiling water over them first, then cold water. Drain, dry in
towels, then on plates in warm oven.
Peanuts—ground nuts—because of their large proportion of oil,
and similarity in other respects to nuts, are classed with them,
though they are truly legumes.
The Spanish peanut contains more oil than the Virginia, but the
flavor of the Virginia is finer and its large size makes it easier to
prepare. The “Jumbos” are the cheapest.
To blanch Spanish peanuts the usual way, heat for some time,
without browning, in a slow oven, stirring often. When cool rub
between the hands or in a bag to remove the skins. The best way to
blow the hulls away after they are removed is to turn the nuts from
one pan to another in the wind.
Spanish peanuts can be obtained all ready blanched from the nut
food factories.
The Virginias, not being so rich in oil, must always be blanched
the same as almonds. Be sure to let them boil well before draining. I
prefer to blanch the Spanish ones that way, too, the results are so
much more satisfactory.
When peanuts are partly dried, break them apart and remove the
germ, which is disagreeable and unwholesome; then finish drying.
A FEW SUGGESTIVE COMBINATIONS
For Using Nuts in the Simplest Ways
Brazil nuts, filberts or blanched almonds with:—
Fresh apples, pears or peaches;
Dried, steamed or stewed figs, raisins, dates, prunes, apple
sauce, baked apples or baked quinces;
Celery, lettuce, cabbage, tender inside leaves of spinach, grated
raw carrot or turnip;
Breakfast cereals, parched or popped corn, well browned
granella, crackers, gems, zwieback, Boston brown and other
breads;
Stewed green peas, string beans, asparagus, corn, greens,
potatoes, squash, cauliflower, all vegetables;
Pies, cakes and different desserts when used.
Nut Butter
A good nut butter mill is an excellent thing to have, but butter can
be made with the food cutters found nowadays in almost every
home. If the machine has a nut butter attachment, so much the
better; otherwise the nuts will need to be ground repeatedly until
the desired fineness is reached.
For almond butter, blanch and dry the almonds according to
directions, adjust the nut butter cutter, not too tight, put two or
three nuts into the mill at a time, and grind. When the almonds are
thoroughly dried they will work nicely if the mill is not fed too fast.
Brazil nuts and filberts need to be very dry for butter.
Pine nuts are usually dry enough as they come to us.
All nuts grind better when first dried.
Raw peanut butter is a valuable adjunct to cookery. To make,
grind blanched dried nuts; pack in tins or jars and keep in a dry
place.
For steamed butter, put raw butter without water into a double
boiler or close covered tins and steam 3–5 hours. Use without
further cooking in recipes calling for raw nut butter.
Or, grind dried boiled nuts the same as raw nuts. For immediate
use, boiled nuts may be ground without drying.
When roasted nut butter is used, it should be in small quantities
only, for flavoring soups, sauces or desserts.
My experience is that the best way to roast nuts for butter is to
heat them, after they are blanched and dried, in a slow oven, stirring
often, until of a cream or delicate straw color. By this method they
are more evenly colored all through. Do not salt the butter, as salt
spoils it for use with sweet dried fruits as a confection, and many
prefer it without salt on their bread.
The objection to roasted nuts is the same as for browning any
oil. Raising the oil of the nuts to a temperature high enough to
brown it, decomposes it and develops a poisonous acid.
Hardly too much can be said of the evil effects of the free use of
roasted nut butter.
“There are many persons who find that roasted peanuts eaten in
any quantity are indigestible in the sense of bringing on pain and
distress.... Sometimes this distress seems to be due to eating
peanuts which are roasted until they are very brown.”
—Mary Hinman Abel, Farmers’ Bulletin, No. 121, U.S. Department
of Agriculture.
Nut Meal
Nut meal is made the same as nut butter except that the nuts are
ground fewer times through the finest cutter of the mill, or once only
through the nut butter cutter loosely adjusted. Either cooked or raw
peanuts may be used, but a cooked peanut meal is very desirable.
The nuts may be cooked, dried and ground, or cooked without
water, after grinding, the same as steamed nut butter.
When one has no mill, meal of many kinds of nuts may be made
in the following manner:
Pound a few at a time in a small strong muslin bag; sift them
through a wire strainer and return the coarse pieces to the bag
again with the next portion. Be sure that not the smallest particle of
shell is left with the meats.
A dear friend of mine used to keep jars of different nut meals
prepared in this way on hand long before any manufactured ones
were on the market.
One writer says: “The children enjoy cracking the nuts and
picking out the meats, and it is a short task to prepare a cupful.”
Cooked nuts and some raw ones may be rubbed through the
colander for meal.
Nut meals are used for shortening pie crust, crackers and sticks;
and all except peanut, are delightful sprinkled over stewed fruits or
breakfast foods.
Nut Butter for Bread
Nut butters (except raw peanut) may be used on bread as they are
ground; but are usually stirred up with water to an agreeable butter-
like consistency, and salt added.
Strained tomato may be used instead of water for a change. This
is especially nice for sandwiches. With peanut butter made from
boiled or steamed nuts it has a flavor similar to cheese.
Nut butter is more attractive for the table when pressed through
a pastry tube in roses on to individual dishes. Use a cloth (not
rubber) pastry bag.
While pure nut butter, if kept in a dry place, will keep almost
indefinitely, it will sour as quickly as milk after water is added to it.
Nut Cream and Milk
Add water to nut butter until of the desired consistency, for cream;
then still more, for milk.
Almond milk makes a delightful drink and can be used by many
who cannot take dairy milk. It may be heated and a trifle of salt
added.
Cocoanut Milk
If you have not a cocoanut scraper, grate fresh cocoanut, one with
milk in it, or grind it four or five times through the finest cutter of a
mill. Pour over it an equal bulk or twice its bulk, of boiling water,
according to the richness of the milk desired or the quality of the
cocoanut. Stir and mix well and strain through cheese cloth or a wire
strainer. Add a second quantity of hot water and strain again,
wringing or pressing very dry. Throw the fibre away.
Use cocoanut milk or cream for vegetable or pudding sauces or
in almost any way that dairy milk and cream are used. Stir before
using. To break the nut in halves, take it in the left hand and strike it
with a hammer in a straight line around the center. It may be sawed
in two if the cups are desired for use.
Cocoanut Butter
Place milk on ice for a few hours when the butter will rise to the top
and can be skimmed off.
Ground or Grated Cocoanut
Is delightful on breakfast cereals, or eaten with bread in place of
butter. The brown covering of the meat should first be taken off.
Shredded Cocoanut
Put any left-overs of prepared cocoanut on a plate and set in the sun
or near the stove to dry. Keep in glass jars in a dry place. This
unsweetened cocoanut can be used for shortening and in many
places where sweet is not desirable.
Milk and Rich Cream of Raw Peanuts
May be prepared the same as cocoanut milk, except that cold or
lukewarm water is used instead of hot.
To raw nut meal (not butter) add one half more of water than
you have of meal. Mix and beat well, strain through a thin cloth,
squeeze as dry as possible. Let milk stand in a cool place and a very
rich cream will rise which may be used for shortening pie crust,
crackers and sticks, or in place of dairy cream in other ways. The
skimmed milk will be suitable for soups, stews or gravies. It may be
cooked before using if more convenient. The pulp also may be used
in soups. It should be thoroughly cooked.
Nut Relish
Different nut butters and meals may be combined in varying
proportions. For instance, 2 parts Brazil nuts, 1 part each pine nuts
and almonds; or 1 part each Brazil nuts, almonds, pecans, and pine
nuts. Dry nuts well and grind all together or combine after grinding.
Press into tumblers or small tins and stand in cool place. Unmold to
serve. The relish may be used in combinations suggested for whole
nuts, and it is a great improvement over cheese, with apple pie.
Toasted Almonds
When blanched almonds are thoroughly dried, put them into a slow
oven and let them come gradually to a delicate cream color, not
brown. These may be served in place of salted almonds.
Sweetmeats of fruits and nuts will be found among confections.
COOKED NUT DISHES
Nut Croquettes
1 cup chopped nuts (not too fine), hickory, pecan, pine or
butternuts, or a mixture of two with some almonds if desired; 2 cups
boiled rice or hominy, 1½ tablespn. oil or melted butter, salt, sage.
Mix, shape into rolls about 1 in. in diameter and 2½ in. in length.
Egg and crumb; bake in quick oven until just heated through and
delicately browned, 8 to 10 m. Serve plain or with any desired sauce
or vegetable.
Nut Croquettes No. 2
1 cup chopped nuts, 1 cup cooked rice, any desired seasoning or
none, salt; mix.
Sauce—
2 tablespns. oil
½ cup flour
1–1¼ cup milk
1 egg or yolk only or no egg
salt
Heat but do not brown the oil, add half the flour, then the milk,
and when smooth, the salt and the remainder of the flour, and
combine with mixed nuts and rice. Cool, shape, egg, crumb, bake.
Crumb also before dipping in egg the same as Trumese croquettes, if
necessary. Bake only until beginning to crack. Serve at once.
Savory Nut Croquettes
1 cup stale, quite dry, bread crumbs, ½ cup (scant) milk or
consommé, ¼–½ level teaspn. powdered leaf sage or winter savory,
½ cup black walnut or butternut meats, salt. Mix, shape, egg,
crumb, bake.
1 cup chopped mixed nuts may be used and celery salt or no
flavoring. Hickory nut meats alone, require no flavoring.
Nut and Sweet Potato Cutlets
1 cup chopped nut meats
2 cups chopped boiled sweet potato
1 tablespn. butter
1 egg
salt
Mix while warm. Pack in brick-shaped tin until cold. Unmold, slice,
egg, crumb or flour. Brown in quick oven or on oiled griddle. Serve
plain or with sauce 16 or 17.
★ Baked Pine Nuts
After picking out the pieces of shell, pour boiling water over 2 lbs. of
pine nuts in a fine colander. Rinse in cold water and put into the
bean pot, with 2 large onions sliced fine, 1–1⅓ cup strained tomato
and 2–2½ teaspns. salt. Heat quite rapidly at first; boil gently for a
half hour, then simmer slowly in the oven 10–12 hours or longer.
Leave just juicy for serving.
Black Walnut and Potato Mound
Mix 1 qt. nicely seasoned, well beaten mashed potato, ½–1 cup
chopped black walnut meats and 2 or 3 tablespns. grated onion. Pile
in rocky mound on baking pan or plate. Sprinkle with crumbs or not.
Bake in quick oven until delicately browned. Garnish and serve with
sauce 6 or 16.
Nut and Rice Roast or Timbale
1–2 cups chopped nuts, one kind or mixed (no English walnuts
unless blanched), 2 cups boiled or steamed rice, 1½–3 tablespns. oil
or melted butter, salt.
Mix ingredients and put into well oiled timbale mold or individual
molds or brick shaped tin. Bake covered, in pan of water ¾–1½ hr.
according to size of mold. Uncover large mold a short time at the
last. Let stand a few minutes after removing from oven, unmold, and
serve with creamed celery or peas or with sauce 16 (cocoanut cream
if convenient) or 34.
Loaf may be flavored, and served with any suitable sauce.
Loaf of Nuts
2 tablespns. raw nut butter
⅓ cup whole peanuts cooked almost tender
½ cup each chopped or ground pecans, almonds and filberts (or
butternuts, hazelnuts, and hickory nuts)
2 cups stale bread crumbs pressed firmly into the cup
salt
¾–1 cup water or 1 of milk
The quantity of liquid will depend upon the crumbs and other
conditions. Put into oiled mold or can, cover, steam 3 hours. Or, have
peanuts cooked tender, form into oval loaf, bake on tin in oven,
basting occasionally with butter and water or salted water only.
Serve with sauce 9, 10, 57, 59, or 69. Loaf may be served cold in
slices, or dipped in egg, and crumbed, and baked as cutlets.
Other nuts may be substituted for peanuts.
One-half cup black walnuts and 1½ cup cooked peanuts,
chopped, make a good combination. A delicate flavoring of sage,
savory or onion is not out of place with these.
To Boil Peanuts
Put blanched, shelled peanuts into boiling water and boil
continuously, for from 3–5 hrs., or until tender. (When the altitude is
not great it takes Virginias 4 or 5 hours and Spanish about 3 to cook
tender).
Drain, saving the liquid for soup stock, and use when boiled
peanuts are called for.
Nut Soup Stock
Use the liquid, well diluted, poured off from boiled peanuts, for
soups. Large quantities may be boiled down to a jelly and kept for a
long time in a dry place. If paraffine is poured over the jelly, it will
keep still better. Use 1 tablespn. only of this jelly for each quart of
soup.
Peanuts with Green Peas
Boil 1 cup blanched peanuts 1–2 hrs., drain off the water and save
for soup. Put fresh water on to the peanuts, add salt and finish
cooking. Just before serving add 1 pt. of drained, canned peas. Heat
well. Add more salt if necessary, and serve. Or, 1 pt. of fresh green
peas may be cooked with the nuts at the last. Small new potatoes
would be a suitable addition also.
★ Peanuts Baked like Beans
1 lb. (¾ qt.) blanched peanuts
¼ cup strained tomato
½–1 tablespn. browned flour
1¼–1½ teaspn. salt
Mix browned flour, tomato and salt, put into bean pot with the
nuts and a large quantity of boiling water. Boil rapidly ½ hr., then
bake in a slow oven 8–14 hours. Add boiling water without stirring,
when necessary. When done the peanuts should be slightly juicy.
Small dumplings steamed separately, may be served with baked
peanuts sometimes.
Baked Peanuts—Lemon Apples
Pile peanuts in center of platter or chop tray. Surround with lemon
apples, garnish with grape leaves and tendrils or with foliage plant
leaves.
Peanuts with Noodles or Vermicelli
Cook peanuts in bouillon with bay leaf and onions. Just before
serving, add cooked noodles or vermicelli.
Nut Chinese Stew
Use boiled peanuts instead of nutmese and raw nut butter, and rice
(not too much) in place of potato, in Nut Irish Stew.
Peanut Gumbo
Simmer sliced or chopped onion in butter; add 1 pt. stewed okra;
simmer 5–10 m. Add 1 pt. strained tomato, then ¾–1 qt. of baked
or boiled peanuts. Turn into a double boiler and add ½ cup boiled
rice. Heat 15–20 m.
Hot Pot of Peanuts
Put layers of sliced onion, sliced potatoes and boiled peanuts into
baking dish with salt and a slight sprinkling of sage. Cover the top
with halved potatoes. Stir a little raw nut butter with water and pour
over all. Cover with a plate or close fitting cover and bake 2 hours.
Remove cover and brown.
Peanut Hashes
Cooked peanuts, chopped very little if any, may be used in place of
trumese with potatoes or rice for hash.
Bread, cracker or zwieback crumbs may be substituted for potato
or rice.
Peanut German Chowder
1 pt. cooked peanuts
1 large onion
2 tablespns. chopped parsley
½ medium sized bay leaf
⅛ level teaspn. thyme
1 small carrot
1 level tablespn. browned flour
2 level tablespns. white flour
Welcome to our website – the perfect destination for book lovers and
knowledge seekers. We believe that every book holds a new world,
offering opportunities for learning, discovery, and personal growth.
That’s why we are dedicated to bringing you a diverse collection of
books, ranging from classic literature and specialized publications to
self-development guides and children's books.
More than just a book-buying platform, we strive to be a bridge
connecting you with timeless cultural and intellectual values. With an
elegant, user-friendly interface and a smart search system, you can
quickly find the books that best suit your interests. Additionally,
our special promotions and home delivery services help you save time
and fully enjoy the joy of reading.
Join us on a journey of knowledge exploration, passion nurturing, and
personal growth every day!
ebookbell.com

More Related Content

PDF
Modeling and analysis of modern fluid problems Zhang
PDF
Advanced Mathematics For Engineering Students The Essential Toolbox 1st Editi...
PDF
Numerical Methods 4th Edition George Lindfield John Penny
PDF
Numerical Analysis 2000 Volume 7 Partial Differential Equations 1st Edition K...
PDF
Separated Representations And Pgdbased Model Reduction Fundamentals And Appli...
PDF
Model Reduction And Approximation Theory And Algorithms Peter Benner
PPTX
Partial-Differential-Equations-An-Introduction-for-Engineers (2).pptx
PDF
Advanced Differential Quadrature Methods 1st Edition Zhi Zong
Modeling and analysis of modern fluid problems Zhang
Advanced Mathematics For Engineering Students The Essential Toolbox 1st Editi...
Numerical Methods 4th Edition George Lindfield John Penny
Numerical Analysis 2000 Volume 7 Partial Differential Equations 1st Edition K...
Separated Representations And Pgdbased Model Reduction Fundamentals And Appli...
Model Reduction And Approximation Theory And Algorithms Peter Benner
Partial-Differential-Equations-An-Introduction-for-Engineers (2).pptx
Advanced Differential Quadrature Methods 1st Edition Zhi Zong

Similar to Proper Orthogonal Decomposition Methods For Partial Differential Equations Chen (20)

PDF
Exploring Odes Lloyd Ntrefethen Asgeir Birkisson Tobin A Driscoll
PDF
Ordinary And Partial Differential Equation Routines C C Plus Plus Fortran Jav...
PPTX
diffrential eqation basics and application .pptx
PPTX
diffrencial eqaution application and basics .pptx
PDF
Computational Methods For Modelling Of Nonlinear Systems A Torokhti And P How...
PPTX
Benchmarking on aaa bbbbbbbbcccccccc.pptx
PDF
Decomposition Methods For Differential Equations Theory And Applications 1st ...
DOCX
Homework 21. Complete Chapter 3, Problem #1 under Project.docx
PPTX
LAPDE ..PPT (STANDARD TYPES OF PDE).pptx
PDF
Introductory Finite Difference Methods For Pdes D M Causon
DOCX
Applied Numerical Methodswith MATLAB® for Engineers and .docx
PDF
A Computational Approach For The Analytical Solving Of Partial Differential E...
PDF
Partial-Differential-Equations-An-Introduction-for-Engineers.pdf
PDF
Complex Dynamics Advanced System Dynamics In Complex Variables 1st Edition Iv...
PDF
Fractal Functions, Dimensions and Signal Analysis Santo Banerjee
PDF
Generalizing Scientific Machine Learning and Differentiable Simulation Beyond...
PDF
Calculus Research Lab 3: Differential Equations!
PDF
Numerical Methods for Engineers and Scientists 3rd Edition Amos Gilat
PDF
Download full ebook of Partial Differential Equations Cox R instant download pdf
PDF
Handbook Of Differential Equations Evolutionary Equations Volume 2 1st Editio...
Exploring Odes Lloyd Ntrefethen Asgeir Birkisson Tobin A Driscoll
Ordinary And Partial Differential Equation Routines C C Plus Plus Fortran Jav...
diffrential eqation basics and application .pptx
diffrencial eqaution application and basics .pptx
Computational Methods For Modelling Of Nonlinear Systems A Torokhti And P How...
Benchmarking on aaa bbbbbbbbcccccccc.pptx
Decomposition Methods For Differential Equations Theory And Applications 1st ...
Homework 21. Complete Chapter 3, Problem #1 under Project.docx
LAPDE ..PPT (STANDARD TYPES OF PDE).pptx
Introductory Finite Difference Methods For Pdes D M Causon
Applied Numerical Methodswith MATLAB® for Engineers and .docx
A Computational Approach For The Analytical Solving Of Partial Differential E...
Partial-Differential-Equations-An-Introduction-for-Engineers.pdf
Complex Dynamics Advanced System Dynamics In Complex Variables 1st Edition Iv...
Fractal Functions, Dimensions and Signal Analysis Santo Banerjee
Generalizing Scientific Machine Learning and Differentiable Simulation Beyond...
Calculus Research Lab 3: Differential Equations!
Numerical Methods for Engineers and Scientists 3rd Edition Amos Gilat
Download full ebook of Partial Differential Equations Cox R instant download pdf
Handbook Of Differential Equations Evolutionary Equations Volume 2 1st Editio...
Ad

More from mapacjuhel (7)

PDF
Download full ebook of Food 1st Edition Jennifer Clapp instant download pdf
PDF
Citizen Media And Public Spaces 1st Edition Mona Baker Bolette B Blaagaard
PDF
Jewish Philosophy Perspectives And Retrospectives Raphael Jospe
PDF
Philosophy Of Anthropology And Sociology Turner Sp Risjord Mw Eds
PDF
Philosophy Of Psychology And Cognitive Science 1st Ed Thagard P Ed
PDF
Philosophy Of Mathematics Selected Writings Peirce Charles Sanders Moore
PDF
Random Sets In Econometrics Molchanov Ilya S Molinari Francesca
Download full ebook of Food 1st Edition Jennifer Clapp instant download pdf
Citizen Media And Public Spaces 1st Edition Mona Baker Bolette B Blaagaard
Jewish Philosophy Perspectives And Retrospectives Raphael Jospe
Philosophy Of Anthropology And Sociology Turner Sp Risjord Mw Eds
Philosophy Of Psychology And Cognitive Science 1st Ed Thagard P Ed
Philosophy Of Mathematics Selected Writings Peirce Charles Sanders Moore
Random Sets In Econometrics Molchanov Ilya S Molinari Francesca
Ad

Recently uploaded (20)


Proper Orthogonal Decomposition Methods For Partial Differential Equations Chen

  • 6. Proper Orthogonal Decomposition Methods for Partial Differential Equations
  • 8. Mathematics in Science and Engineering. Proper Orthogonal Decomposition Methods for Partial Differential Equations. Zhendong Luo, School of Mathematics and Physics, North China Electric Power University, Beijing, China. Goong Chen, Department of Mathematics, Texas A&M University, College Station, TX, USA. Series Editor: Goong Chen
  • 9. Academic Press is an imprint of Elsevier 125 London Wall, London EC2Y 5AS, United Kingdom 525 B Street, Suite 1650, San Diego, CA 92101, United States 50 Hampshire Street, 5th Floor, Cambridge, MA 02139, United States The Boulevard, Langford Lane, Kidlington, Oxford OX5 1GB, United Kingdom Copyright © 2019 Elsevier Inc. All rights reserved. No part of this publication may be reproduced or transmitted in any form or by any means, electronic or mechanical, including photocopying, recording, or any information storage and retrieval system, without permission in writing from the publisher. Details on how to seek permission, further information about the Publisher’s permissions policies and our arrangements with organizations such as the Copyright Clearance Center and the Copyright Licensing Agency, can be found at our website: www.elsevier.com/permissions. This book and the individual contributions contained in it are protected under copyright by the Publisher (other than as may be noted herein). Notices Knowledge and best practice in this field are constantly changing. As new research and experience broaden our understanding, changes in research methods, professional practices, or medical treatment may become necessary. Practitioners and researchers must always rely on their own experience and knowledge in evaluating and using any information, methods, compounds, or experiments described herein. In using such information or methods they should be mindful of their own safety and the safety of others, including parties for whom they have a professional responsibility. To the fullest extent of the law, neither the Publisher nor the authors, contributors, or editors, assume any liability for any injury and/or damage to persons or property as a matter of products liability, negligence or otherwise, or from any use or operation of any methods, products, instructions, or ideas contained in the material herein. 
Library of Congress Cataloging-in-Publication Data A catalog record for this book is available from the Library of Congress British Library Cataloguing-in-Publication Data A catalogue record for this book is available from the British Library ISBN: 978-0-12-816798-4 For information on all Academic Press publications visit our website at https://www.elsevier.com/books-and-journals Publisher: Candice Janco Acquisition Editor: Scott J. Bentley Editorial Project Manager: Devlin Person Production Project Manager: Maria Bernard Designer: Matthew Limbert Typeset by VTeX
  • 10. Foreword and Introduction We are living in an era of internet, WiFi, mobile apps, Facebook, Twitter, Instagram, selfies, clouds, ... – surely we have not named them all, but one thing certain is that these all deal with digital and computer-generated data. There is a trendy name for the virtual space of all these things together: big data. The size of the big data space is ever-growing at an exponential rate. Major issues such as analytics, effective processing, storage, mining, prediction, visualization, compression, and encryption/decryption of big data have become problems of major interest in contemporary technology. This book aims at treating numerical methods for partial differential equations (PDEs) in science and engineering. Applied mathematicians, scientists, and engineers are always dealing with big data, all the time. Where do their big data originate? They come, mostly, from problems and solutions of equations of physical and technological systems. A large number of the modeling equations are PDEs. Therefore, effective methods and algorithms for processing and resolving such data are much in demand. This is not a treatise on general big data. However, the main objective is to develop a methodology that can effectively help resolve the challenges of dealing with large data sets and with speedup in treating time-dependent PDEs. Our approach here is not the way in which standard textbooks on numerical PDEs are written. The central theme of this book is actually the technical treatment of effective methods that can generate numerical solutions for time-dependent PDEs involving only a small set of data, but yield decent solutions that are accurate and suitable for applications. It reduces data storage, CPU time, and, especially, computational complexity – by several orders of magnitude. The key idea and methodology is proper orthogonal decomposition (POD), derived from properties of eigensolutions to a problem involving a large data set. 
(Indeed, POD has been known as an effective method for big data even before the term big data was coined.) In the process, we have developed the necessary mathematical methods and techniques adapting POD to the fundamental numerical PDE methods of finite differences, finite elements, and finite volumes in connection with various numerical schemes for a wide class of time-dependent PDEs as showcases.
  • 11. PDES AND THEIR NUMERICAL SOLUTIONS Physical, biological, and engineering processes are commonly described by PDEs. Such processes are naturally dynamic, meaning that their time evolution constitutes the main features, properties, and significance of the system under investigation or observation. The spatial domains of definition of the PDEs are usually multidimensional and irregular in shape. Thus, there are fundamental difficulties involving geometry and dimensionality. The PDEs themselves can also take rather complex forms, involving a large variety of nonlinearities, system couplings, and source terms. These are inherent difficulties of the PDEs that compound those due to geometry and dimensionality. In general, exact (analytic) nontrivial solutions to PDEs are rarely available. Numerical methods and algorithms must be developed and then implemented on computers to render approximate numerical solutions. Therefore, computation becomes the only way to treat PDE problems quantitatively. The study of numerical solutions for PDEs is now a major field in computational science that includes computational mathematics, physics, engineering, chemistry, biology, atmospheric, geophysical and ocean sciences, etc. Computational PDEs represent an active, prosperous field. New methods and developments are constantly emerging. However, three canonical schemes stand out: the finite difference (FD), finite element (FE), and finite volume element (FVE) methods. These methods all require the division of the computational domain into meshes. Thus, they involve many degrees of freedom, i.e., unknowns, which are related to the number of nodes of the mesh partition of the computational domain. For a routine, real-world problem in engineering, the number of unknowns can easily reach hundreds of thousands or even millions. Thus, the computational load and complexity are extremely high. 
The accuracy of numerical solutions is also affected, as truncation errors tend to accumulate. For a large-scale problem, the CPU time on a supercomputer may require days, weeks, or even months. It is possible, for example, if we use these canonical methods of FD, FE, and FVE to simulate the weather forecast in atmospheric science, that after a protracted period of computer calculations the output numerical results have already lost their significance, as the days of interest are bygone. There are two ways of thinking for the resolution of these difficulties. First, one can think of computer speedup by building the best supercomputers with continuous refinement. As of June 2016, the world's fastest supercomputer on the TOP500 (http://top500.org) supercomputer list was the Sunway TaihuLight in China, with a LINPACK benchmark score of 93 PFLOPS (Peta, or 10^15, FLOPS), exceeding the previous record holder, Tianhe-2, by around 59 PFLOPS. Tianhe-2 had its peak electric power consumption at 17.8 MW, and its annual electricity bill is more than $14 million, or 100 million Chinese Yuan. Thus, most tier-1 universities cannot afford to pay such a high expense. The second option is to instead develop highly effective computational methods that can reduce the degrees of freedom of the canonical FD, FE, and FVE schemes,
  • 12. lighten the computational load, and reduce the running CPU time and the accumulation of truncation errors in the computational process. This approach is based on cost-optimal, rational, and mathematical thinking and will be the one taken by us here. The focal topic of the book, the POD method (see [56,60]), is one of the most effective methods that aims exactly at helping computational PDEs. THE ADVANTAGES AND BENEFITS OF POD Reduce the degrees of freedom of numerical computational models for time-dependent PDEs, alleviate the calculation load, reduce the accumulation of truncation errors in the computational process, and save CPU computing time and resources for large-scale scientific computing. POD in a Nutshell The POD method essentially provides an orthogonal basis for representing a given set of data in a certain least-squares optimal sense, i.e., it offers ways to find optimal lower-dimensional approximations of the given data set. A Brief Prior History of the Development of POD The POD method has a long history. The predecessor of the POD method was an eigenvector analysis method, which was initially presented by K. Pearson in 1901 and was used to extract targeted, main ingredients of huge amounts of data (see [132]). (The trendy name of such data is "big data".) Pearson's data mining, sample analysis, and data processing techniques are relevant even today. The method of snapshots for POD was first presented by Sirovich in 1987 (see [150]). The POD method has been widely and successfully applied to numerous fields, including signal analysis and pattern recognition (see [43]), statistics (see [60]), geophysical fluid dynamics or meteorology (see [60] or [78]), and biomedical engineering (see [48]). 
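The least-squares optimality described in "POD in a Nutshell" can be made concrete with the singular value decomposition (SVD): the left singular vectors of a snapshot matrix form the POD basis, and truncating to d modes gives the best rank-d approximation of the data in the Frobenius norm. A minimal sketch, assuming NumPy is available and using a hypothetical set of snapshots of a traveling pulse:

```python
import numpy as np

# Minimal POD sketch: snapshots of a traveling Gaussian pulse (hypothetical data).
# Columns of S are snapshots; the left singular vectors are the POD basis.
x = np.linspace(0.0, 1.0, 200)
times = np.linspace(0.0, 1.0, 50)
S = np.column_stack([np.exp(-100.0 * (x - 0.2 - 0.5 * t) ** 2) for t in times])

U, sigma, Vt = np.linalg.svd(S, full_matrices=False)

# "Energy" captured by the first d modes (squared singular values).
energy = np.cumsum(sigma ** 2) / np.sum(sigma ** 2)
d = int(np.searchsorted(energy, 0.999)) + 1  # smallest d capturing 99.9% energy

# Rank-d reconstruction: optimal in the least-squares (Frobenius) sense.
S_d = U[:, :d] @ np.diag(sigma[:d]) @ Vt[:d, :]
rel_err = np.linalg.norm(S - S_d) / np.linalg.norm(S)
print(d, rel_err)
```

Here the squared singular values measure the energy of each mode, and d is chosen so the retained energy exceeds a threshold; a handful of modes typically represents the full snapshot set to within a fraction of a percent, which is the dimension reduction the book exploits.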
For a long time after 1987, the POD method was mainly used to perform principal component analysis in statistical computations and to search for certain major behaviors of dynamic systems (see the reduced-order Galerkin methods for PDEs proposed in the excellent work in 2001 by Kunisch and Volkwein [62,63]). From that moment forth, model reduction or reduced-basis numerical computational methods based on POD for PDEs underwent rapid development, providing improved efficiency for finding numerical solutions to PDEs (see [2,11,15,19,45,48,54,57,58,135,137,138,141,166,167,170,171,184–186,199]). At first, Kunisch–Volkwein's POD-based reduced-order Galerkin methods were applied to reduced-order models of numerical solutions for PDEs with error estimates presented in [62,63]. Those error estimates consist of some uncertain matrix norms. In particular, they took all the numerical solutions of the classical Galerkin method on the total time span [0,T] in the
  • 13. formulation of the POD basis, used them to establish the POD-based reduced-order models, and then recomputed the numerical solutions on the same time span [0,T]. This produces some repetitive computation but not much extra gain. One begins to ponder how to improve this and, furthermore, how to generalize the methodology initiated by Kunisch–Volkwein's work beyond the Galerkin FE method to other FE methods and also to FD and FVE schemes. This book aims exactly at answering these questions. Development of POD for Time-Dependent PDEs The first author, Zhendong Luo, was attracted to the study of reduced-order numerical methods based on POD for PDEs at the beginning of 2003. At that time, few or no comprehensive accounts existed and only fragmentary introductions to POD were available. He spent three years (2003–2005) studying the underlying optimization methods, statistical principles, and numerical solutions for POD. Then, in 2006, he and his collaborators published their first two papers on POD methods (see [26,27]). These dealt with oceanic models and data assimilation. Afterwards, Luo and his coauthors established some POD-based reduced-order FD schemes (see [5,38,40,91,113,118,122,155]) as well as FE formulations (see [37,39,70,88–90,92,93,100,101,103,109,112,119,123,124,164]). They deduced the error estimates for POD-based reduced-order solutions for PDEs of various types in a series of papers beginning in 2007. They also proposed some POD-based reduced-order formulations and relevant error estimates for POD-based reduced-order FVE solutions (see [71,104,106,108,120]) for PDEs in another series of papers beginning in 2011. 
These POD-based reduced-order methods were specific to the classical FD schemes, FE methods, and FVE methods for the construction of the reduced-order models, in which they extracted one from every ten classical numerical solutions as snapshots, significantly different from Kunisch–Volkwein's methods, in which numerical solutions from the classical Galerkin method were extracted at all instants on the total time span [0,T]. Therefore, these POD-based reduced-order methods constitute improvements, generalizations, and extensions of Kunisch–Volkwein's methods in [62,63]. The reduced-order methods in the above cited work need only repeat part of the computations on the same time span [0,T]. Since 2012, Luo and his collaborators have established the following three main methods: i. PODROEFD: POD-based reduced-order extrapolation FD schemes (see [6,7,79,81,94–96,102,110,111,117,121,127,154,158]); ii. PODROEFE: POD-based reduced-order extrapolation FE methods (see [69,75,82,83,97,116,159–161,165,179]); iii. PODROEFVE: POD-based reduced-order extrapolation FVE methods (see [84,85,87,98,99,114,115,162,163]).
  • 14. These POD-based reduced-order extrapolation methods need only adopt the standard numerical solutions on some initial, rather short time span [0,t0] (t0 ≪ T) of, respectively, the classical FD, FE, and FVE schemes as snapshots in order to formulate the POD basis. Therefore, they significantly improve the previous, existing versions of the reduced-order models: they do not have to repeat wholesale computations. The physical significance is that one can use existing data to forecast the future evolution of nature. Furthermore, our PODROEFD, PODROEFE, and PODROEFVE methods can be treated in a similar way as the classical FD, FE, and FVE methods, leading to error estimates with concrete orders of convergence. The application of these POD-based extrapolation methods will provide anyone with the advantages and benefits of POD mentioned earlier. The second author, Goong Chen, has strong interests in the computation of numerical solutions of PDEs arising from real-world applications. He has constantly been faced with the challenges of dealing with the needs for large data storage, process speedup, effective reduction of order, and the extraction of prominent physical features from supercomputer numerical solutions of PDEs. When he noticed that Zhendong Luo had already done significant work on the POD methods for time-dependent PDEs fitting many of his needs, he got very excited and proposed that a book project be prepared to publish and promote this very important topic. This started the collaboration of the authors, with this book as the outcome. Our collaboration is ongoing, and we hope more research papers will be produced in the near future demonstrating the advantages of POD-based methods. However, G. Chen happily acknowledges that all technical contributions in this book are to be credited to the first author alone. 
He has learned tremendously from the collaboration – this by itself makes the book project worthwhile and satisfying as far as the second author is concerned. ORGANIZATION OF THE BOOK In this book, we aim to provide the technical details of the construction, theoretical analysis, and implementations of algorithms and examples for the PODROEFD, PODROEFE, and PODROEFVE methods for a broad class of dynamic PDEs. It is organized into the following four chapters. Chapter 1 includes four sections. In the first section, we review the basic theory of classical FD schemes. It is intended to ensure the self-containedness of the book. Then, in the subsequent three sections, we introduce the construction, the theoretical analysis, and the implementations of algorithms for the PODROEFD schemes for the following two-dimensional (2D) PDEs: the parabolic equation, the nonstationary Stokes equation, and the shallow water equation with sediment concentration, respectively. Examples and discussions are also given for each equation. Chapter 2 is structured similarly to Chapter 1, with four sections. There we begin by reviewing the basic theory of Sobolev spaces and elliptic theory,
  • 15. the classical FE method, and the mixed FE (MFE) method. Then we describe the construction, theoretical analysis, and implementations of algorithms for the PODROEFE methods for the following 2D PDEs: the viscoelastic wave equation, the Burgers equation, and the nonstationary parabolized Navier–Stokes equation (for which the stabilized Crank–Nicolson extrapolation scheme is used), respectively. Numerical examples and graphics are again illustrated. Chapter 3 contains three sections, aiming at the treatment of PODROEFVE. We first introduce the basics of FVE. Then the construction, theoretical error analysis, and implementations of algorithms for the PODROEFVE methods for the following three 2D dynamic PDEs are developed: the hyperbolic equation, the Sobolev equation, and the incompressible Boussinesq equation, respectively, with concrete examples and illustrations. Numerical results on these model equations as presented in the book demonstrate the effectiveness and accuracy of our POD methods. Finally, Chapter 4 is a short epilogue and outlook, consisting of concluding remarks and forward-looking statements. The book is written to be as self-contained as possible. Readers and students need only an undergraduate level in applied and numerical mathematics to understand this book. Many parts can be used as a standard graduate-level textbook on numerical PDEs. The theory, methods, and computational algorithms will be valuable to students and practitioners in science, engineering, and technology. ACKNOWLEDGMENTS The authors thank all collaborators, colleagues, and institutions that have generously supported our work. 
In particular, the authors are delighted to acknowledge the partial financial support over the years by the National Natural Science Foundation of China (under grant #11671106), the Qatar National Research Fund (under grant #NPRP 8-028-1-001), the North China Electric Power University, and Texas A&M University. Zhendong Luo, Beijing, China; Goong Chen, College Station, TX, USA
  • 16. Chapter 1 Reduced-Order Extrapolation Finite Difference Schemes Based on Proper Orthogonal Decomposition The key objective of this book is to develop the numerical treatments of proper orthogonal decomposition (POD) for partial differential equations (PDEs). Among numerical methods for PDEs, the finite difference (FD) method essentially constitutes the basis of all numerical methods for PDEs, so it is natural to start with the FD method when introducing how POD works. For the sake of self-containedness, in this chapter we first review the basic theory of classical FD schemes. We then introduce the construction, theoretical analysis, and implementations of algorithms for the POD-based reduced-order extrapolation FD (PODROEFD) schemes for the two-dimensional (2D) parabolic equation, the 2D nonstationary Stokes equation, and the 2D shallow water equation with sediment concentration. Finally, we provide some numerical examples to show the advantages the PODROEFD schemes have over the classical FD schemes; they also show that the PODROEFD schemes are reliable and effective for solving the above-mentioned PDEs. The numerical models treated here include both simple equations and coupled systems. The systematic approach we take in this chapter, namely, following the logical sequence of rudiments, modeling equations, error estimates, stability, and convergence, POD methods, error estimates for POD solutions, and finally concrete numerical examples, will be the standard for all chapters. 1.1 REVIEW OF CLASSICAL BASIC FINITE DIFFERENCE THEORY 1.1.1 Approximation of Derivative The FD schemes use difference quotients to approximate derivatives. Denote $u^n_{i,j} = u(i\Delta x, j\Delta y, n\Delta t) = u(x_i, y_j, t_n)$. Then $u^n_{i\pm1,j\pm1} = u(x_i \pm \Delta x, y_j \pm \Delta y, t_n)$ and $u^{n\pm1}_{i,j} = u(x_i, y_j, t_n \pm \Delta t)$. Derivative approximations usually take the following four forms. 
Proper Orthogonal Decomposition Methods for Partial Differential Equations. https://doi.org/10.1016/B978-0-12-816798-4.00006-1 Copyright © 2019 Elsevier Inc. All rights reserved.
  • 17. 1. Approximation to the first-order derivative by a forward difference We have
$$\frac{\partial u}{\partial x}\bigg|^n_{i,j} = \lim_{\Delta x \to 0} \frac{u(x_i+\Delta x, y_j, t_n) - u(x_i, y_j, t_n)}{\Delta x} = \frac{u(x_i+\Delta x, y_j, t_n) - u(x_i, y_j, t_n)}{\Delta x} + O(\Delta x) = \frac{u^n_{i+1,j} - u^n_{i,j}}{\Delta x} + O(\Delta x) \approx \frac{u^n_{i+1,j} - u^n_{i,j}}{\Delta x}. \qquad (1.1.1)$$
2. Approximation to the first-order derivative by a backward difference We have
$$\frac{\partial u}{\partial x}\bigg|^n_{i,j} = \lim_{\Delta x \to 0} \frac{u(x_i, y_j, t_n) - u(x_i-\Delta x, y_j, t_n)}{\Delta x} = \frac{u(x_i, y_j, t_n) - u(x_i-\Delta x, y_j, t_n)}{\Delta x} + O(\Delta x) = \frac{u^n_{i,j} - u^n_{i-1,j}}{\Delta x} + O(\Delta x) \approx \frac{u^n_{i,j} - u^n_{i-1,j}}{\Delta x}. \qquad (1.1.2)$$
3. Approximation to the first-order derivative by a central difference We have
$$\frac{\partial u}{\partial x}\bigg|^n_{i,j} = \frac{u^n_{i+1,j} - u^n_{i-1,j}}{2\Delta x} + O(\Delta x^2) \approx \frac{u^n_{i+1,j} - u^n_{i-1,j}}{2\Delta x}. \qquad (1.1.3)$$
4. Approximation to the second derivative by a second-order central difference We have
$$\frac{\partial^2 u}{\partial x^2}\bigg|^n_{i,j} = \frac{u^n_{i+1,j} - 2u^n_{i,j} + u^n_{i-1,j}}{\Delta x^2} + O(\Delta x^2) \approx \frac{u^n_{i+1,j} - 2u^n_{i,j} + u^n_{i-1,j}}{\Delta x^2}. \qquad (1.1.4)$$
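The orders O(Δx) and O(Δx²) in (1.1.1)–(1.1.4) can be checked numerically: shrinking the step by a factor of 10 should shrink a first-order error about 10-fold and a second-order error about 100-fold. A small sketch (the smooth test function u = sin x and the evaluation point are arbitrary choices):

```python
import math

# Check the truncation-error orders of the difference quotients in
# (1.1.1)-(1.1.4) on u(x) = sin(x) at x0 = 1.0 (a smooth test function).
u, du, d2u = math.sin, math.cos, lambda x: -math.sin(x)
x0 = 1.0

def errors(h):
    fwd = (u(x0 + h) - u(x0)) / h                      # forward,  O(h)
    bwd = (u(x0) - u(x0 - h)) / h                      # backward, O(h)
    ctr = (u(x0 + h) - u(x0 - h)) / (2 * h)            # central,  O(h^2)
    ctr2 = (u(x0 + h) - 2 * u(x0) + u(x0 - h)) / h**2  # 2nd derivative, O(h^2)
    return (abs(fwd - du(x0)), abs(bwd - du(x0)),
            abs(ctr - du(x0)), abs(ctr2 - d2u(x0)))

e1, e2 = errors(1e-2), errors(1e-3)
# Shrinking h by 10x should shrink O(h) errors ~10x and O(h^2) errors ~100x.
rates = [math.log10(a / b) for a, b in zip(e1, e2)]
print(rates)  # approximately [1, 1, 2, 2]
```

The observed rates match the exponents of Δx in the remainder terms of (1.1.1)–(1.1.4), which is the standard empirical way to confirm a scheme's order of accuracy.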
  • 18. 1.1.2 Difference Operators 1. The definitions of difference operators We define the operators $I, E_x, E_x^{-1}, \Delta_x, \nabla_x, \delta_x, \mu_x$ by the following:
$Iu^n_j = u^n_j$; $I$ is known as the unit operator.
$E_x u^n_j = u^n_{j+1}$; $E_x$ is known as the forward shift operator and is also denoted by $E_x = E_x^{+1}$.
$E_x^{-1} u^n_j = u^n_{j-1}$; $E_x^{-1}$ is known as the backward shift operator and is also denoted by $E_x^{-} = E_x^{-1}$.
$\Delta_x u^n_j = u^n_{j+1} - u^n_j$; $\Delta_x$ is known as the forward difference operator and satisfies $\Delta_x = E_x - I$.
$\nabla_x u^n_j = u^n_j - u^n_{j-1}$; $\nabla_x$ is known as the backward difference operator and satisfies $\nabla_x = I - E_x^{-}$.
$\delta_x u^n_j = u^n_{j+1/2} - u^n_{j-1/2}$; $\delta_x$ is known as the one-step central difference operator and satisfies $\delta_x = E_x^{1/2} - E_x^{-1/2}$.
$\mu_x u^n_j = \frac{1}{2}(u^n_{j+1/2} + u^n_{j-1/2})$; $\mu_x$ is known as the average operator and satisfies $\mu_x = \frac{1}{2}(E_x^{1/2} + E_x^{-1/2})$.
2. Composite difference operators We have
i. $(\mu\delta)_x = \frac{1}{2}(E_x - E_x^{-1}) = \frac{1}{2}(\Delta_x + \nabla_x)$;
ii. $\delta_x^2 = \delta_x \delta_x = (E_x^{1/2} - E_x^{-1/2})^2 = E_x - 2I + E_x^{-1}$;
iii. $\delta_x^n = \delta_x(\delta_x^{n-1})$; $\Delta_x^n = \Delta_x \Delta_x^{n-1} = \cdots$; $\nabla_x^n = \nabla_x \cdot \nabla_x^{n-1} = \cdots$.
3. Derivative relations with difference operators We have
i. $\dfrac{\partial u}{\partial x}\Big|^n_j = \dfrac{\Delta_x u^n_j}{\Delta x} + O(\Delta x)$ (forward difference) $= \dfrac{\nabla_x u^n_j}{\Delta x} + O(\Delta x)$ (backward difference) $= \dfrac{\delta_x u^n_j}{\Delta x} + O(\Delta x^2)$ (central difference);
ii. $\dfrac{\partial^2 u}{\partial x^2}\Big|^n_j = \dfrac{\delta_x^2 u^n_j}{\Delta x^2} + O(\Delta x^2)$ (second-order central difference) $= \dfrac{\Delta_x^2 u^n_j}{\Delta x^2} + O(\Delta x)$ (second-order forward difference) $= \dfrac{\nabla_x^2 u^n_j}{\Delta x^2} + O(\Delta x)$ (second-order backward difference).
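The operator identities above ($\Delta_x = E_x - I$, $\nabla_x = I - E_x^{-1}$, $\delta_x^2 = E_x - 2I + E_x^{-1}$, and $(\mu\delta)_x = \frac{1}{2}(\Delta_x + \nabla_x)$) can be verified directly on a grid function. A small sketch with the hypothetical grid function $u_j = j^2$:

```python
# Difference operators on a 1-D grid function, mirroring the identities above.
# Grid values u[j]; the operators act on interior indices only.
u = [j ** 2 for j in range(8)]  # hypothetical grid function u_j = j^2

E  = lambda u, j: u[j + 1]      # forward shift  E_x u_j = u_{j+1}
Ei = lambda u, j: u[j - 1]      # backward shift E_x^{-1} u_j = u_{j-1}

fwd  = lambda u, j: E(u, j) - u[j]                  # Delta_x = E_x - I
bwd  = lambda u, j: u[j] - Ei(u, j)                 # nabla_x = I - E_x^{-1}
dlt2 = lambda u, j: E(u, j) - 2 * u[j] + Ei(u, j)   # delta_x^2 = E_x - 2I + E_x^{-1}

j = 3
print(fwd(u, j), bwd(u, j), dlt2(u, j))  # 7 5 2
# Check the composite identity (mu*delta)_x = (Delta_x + nabla_x)/2:
mudelta = (E(u, j) - Ei(u, j)) / 2
assert mudelta == (fwd(u, j) + bwd(u, j)) / 2
```

Note that $\delta_x^2$ applied at a whole-number index uses only whole-number neighbors, even though $\delta_x$ itself involves half-step shifts; this is exactly identity ii above.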
  • 19. 1.1.3 The Formation of Difference Equations 1. Explicit FD schemes In an explicit FD scheme, the value at the farthest time level appears only once. For example,
$$\frac{u^{n+1}_j - u^n_j}{\Delta t} = \frac{\delta}{\Delta x^2}\left(u^n_{j+1} - 2u^n_j + u^n_{j-1}\right)$$
is an explicit FD scheme for $\dfrac{\partial u}{\partial t} = \delta \dfrac{\partial^2 u}{\partial x^2}$, where $\delta$ is a positive "diffusion coefficient". 2. Implicit FD schemes In an implicit FD scheme, the values at the farthest time level appear more than once. For example,
$$\frac{u^{n+1}_j - u^n_j}{\Delta t} = \frac{\delta}{\Delta x^2}\left(u^{n+1}_{j+1} - 2u^{n+1}_j + u^{n+1}_{j-1}\right)$$
is an implicit FD scheme for $\dfrac{\partial u}{\partial t} = \delta \dfrac{\partial^2 u}{\partial x^2}$. 3. Semidiscretized difference schemes In a semidiscretized difference scheme, only the spatial variable is discretized; the time variable remains continuous. For example,
$$\frac{du}{dt}\bigg|_j = \frac{\delta}{\Delta x^2}\left(u_{j+1} - 2u_j + u_{j-1}\right)$$
is a semidiscretized difference scheme for $\dfrac{\partial u}{\partial t} = \delta \dfrac{\partial^2 u}{\partial x^2}$. 1.1.4 The Effectiveness of Finite Difference Schemes 1. The error of an FD scheme For the equation $\dfrac{\partial u}{\partial t} = \delta \dfrac{\partial^2 u}{\partial x^2}$, consider the FD scheme discretized by a forward difference in time and a central difference in space (FTCS):
$$\frac{u^{n+1}_j - u^n_j}{\Delta t} = \frac{\delta}{\Delta x^2}\left(u^n_{j+1} - 2u^n_j + u^n_{j-1}\right),$$
  • 20. which is expanded at the reference point $(x_j, t_n)$ by Taylor's formula into
$$\left(\frac{\partial u}{\partial t} - \delta\frac{\partial^2 u}{\partial x^2}\right)^n_j + \left[\frac{\Delta t}{2}\frac{\partial^2 u}{\partial t^2}\bigg|^n_j - \frac{\delta\,\Delta x^2}{12}\frac{\partial^4 u}{\partial x^4}\bigg|^n_j + \cdots\right] = 0.$$
The above equation is known as the modified PDE, where the parenthesis above is called the source equation (i.e., the PDE) and the bracket above is called the remainder (R) or truncation error (TE), denoted by R = TE. The TE is equal to the source equation (PDE) minus the FD equation (FDE), i.e., TE = PDE − FDE. The discretization error (DE, i.e., the error of the FD scheme) is equal to the TE plus the boundary error (BE), i.e., DE = TE + BE. Thus, DE = BE + PDE − FDE. The round-off error (ROE) denotes the rounding error of the computing procedure on the computer, and the total error is denoted by CE. Then CE = DE + ROE = BE + PDE − FDE + ROE. However, ROE and BE are usually omitted. The error analysis of FD schemes mainly considers the TE, which is directly obtained from the derivative approximations (1.1.1)–(1.1.4). 2. The consistency of an FD scheme Definition 1.1.1. (1) Let the PDE
$$u_t = Lu \qquad (1.1.5)$$
be discretized by an FD scheme as follows:
$$\sum_\mu \alpha_\mu u^{n+1}_{j+\mu} = \sum_\gamma \beta_\gamma u^n_{j+\gamma}. \qquad (1.1.6)$$
When the FD grid is indefinitely refined and R = TE tends to zero, i.e.,
$$\lim_{\Delta x \to 0,\, \Delta t \to 0} R = \lim_{\Delta x \to 0,\, \Delta t \to 0} TE = 0, \qquad (1.1.7)$$
then the FD scheme is said to be compatible with the source equation. (2) If R → 0 only when $\Delta t = o(\Delta x^\gamma)$ ($\gamma > 0$), i.e., $\lim_{\Delta x \to 0,\, \Delta t \to 0} \Delta t / \Delta x^\gamma = 0$, then the FD scheme is said to be compatible with the source equation under the condition $\Delta t = o(\Delta x^\gamma)$ ($\gamma > 0$). 3. The stability of an FD scheme
  • 21. Definition 1.1.2. i. Suppose an error disturbance $\varepsilon^n_j = \tilde u^n_j - u^n_j$ is added at a certain time level $t = t_n$, i.e., $\tilde u^n_j = u^n_j + \varepsilon^n_j$, and the errors $\varepsilon^{n+1}_j = \tilde u^{n+1}_j - u^{n+1}_j$ of the solutions $\tilde u^{n+1}_j$ and $u^{n+1}_j$ obtained from the FD scheme (1.1.6) do not produce excessive overall growth, i.e., there is a constant $K > 0$ independent of $n$ and $j$ such that
$$\|\varepsilon^{n+1}_j\| \leqslant K \|\varepsilon^n_j\|,$$
or, omitting $j$, $\|\varepsilon^{n+1}\| \leqslant K \|\varepsilon^n\|$, where $\|\cdot\|$ represents a norm. Then, when $0 < K < 1$, the FD scheme (1.1.6) is said to be strongly stable; otherwise (when $K \geqslant 1$) the FD scheme (1.1.6) is said to be weakly stable. ii. If there is no restriction between the time step and the spatial step in the FD scheme (1.1.6), it is said to be absolutely stable or unconditionally stable. In this case, the time step $\Delta t$ can take a larger size. iii. If the stability of the FD scheme (1.1.6) is restricted by some relationship between the time step and the spatial steps (usually constrained in the form of some inequalities, for example, $\Delta t = o(\Delta x^\gamma)$ ($\gamma > 0$)), then it is said to be conditionally stable. We have the following criteria for the stability of the FD scheme (1.1.6) (see [192,194]). Theorem 1.1.1. The FD scheme (1.1.6) is stable if and only if there are two positive constants $\Delta t_0$ and $\Delta x_0$ as well as a nonnegative constant $K$ independent of $n$ and $j$ such that the solutions of the FD scheme (1.1.6) satisfy
$$\|u^n_j\| \leqslant K, \quad n = 1, 2, \ldots.$$
Theorem 1.1.2. If the FD scheme (1.1.6) of the PDE (1.1.5) satisfies
$$u^{n+1}_j \cdot u^n_j > 0, \qquad \left\| \frac{u^{n+1}_j}{u^n_j} \right\| < 1, \quad n = 1, 2, \cdots, \qquad (1.1.8)$$
where $\|\cdot\|$ is a discrete norm, then the FD scheme (1.1.6) is stable. 4. Equivalence between stability and convergence of FD schemes Definition 1.1.3. i. Let $u^*(x,t)$ be an exact solution of the PDE (1.1.5) and $u^n_j$ the approximate solutions of the FD scheme (1.1.6) compatible with the PDE (1.1.5). 
If, when $\Delta t \to 0$ and $\Delta x \to 0$ (i.e., the grid is infinitely refined), for any sequence $(x_j, t_n) \to (x^*, t^*)$ in the computational domain we have
$$\lim_{\Delta x,\, \Delta t \to 0} u^n_j = u^*(x^*, t^*), \qquad (1.1.9)$$
then the FD scheme (1.1.6) is said to be convergent. The stability and convergence of FD schemes are intrinsic properties of the schemes, related by the following important equivalence (see [192] or [194, Theorem 1.3.18]).
Reduced-Order Extrapolation Finite Difference Schemes, Chapter 1

Theorem 1.1.3. If the PDE (1.1.5) is well posed, i.e., it has a unique solution that depends continuously on the initial and boundary value conditions, and it is compatible with its FD scheme (1.1.6), then the stability of the FD scheme (1.1.6) is equivalent to its convergence.

Theorem 1.1.3 is briefly summarized as "if a PDE is well posed and compatible with its FD scheme, then the stability of the FD scheme is equivalent to its convergence".

Remark 1.1.1. For linear PDEs, if their FD schemes are stable, then their convergence is ensured; therefore it is only necessary to discuss their stability. For nonlinear PDEs, however, because of their complexity, at present one can only investigate approximate convergence and local stability instead of a global analysis of convergence.

5. Von Neumann's stability analysis of FD schemes

Von Neumann's stability analysis of FD schemes is also known as the Fourier analysis method. In order to analyze the stability of FD schemes, we first discuss the stability of exact solutions.

i. Stability analysis of the exact solution

The initial value problem

  u_t = L(∂/∂x, ∂²/∂x², ···)u,  x ∈ ℝ,
  u(x, 0) = φ(x)   (1.1.10)

is said to be well posed if it has a unique and stable solution. A stable solution is one that depends continuously on the initial value and keeps small disturbances bounded.

Assume that the initial value φ(x) is periodic and expandable into the Fourier series

  φ(x) = Σ_k f_k e^{ikx},  (1.1.11)

where the f_k are the Fourier coefficients. Assume that the exact solution u(x,t) of the initial value problem (1.1.10) is also expandable into the Fourier series

  u(x,t) = Σ_k F_k(t) e^{ikx},  (1.1.12)

where the F_k(t) are Fourier coefficients depending on t.
By substituting (1.1.12) into the first equation of (1.1.10), we obtain

  Σ_k F_k′(t) e^{ikx} = L(∂/∂x, ∂²/∂x², ···) Σ_k F_k(t) e^{ikx} = Σ_k L(∂/∂x, ∂²/∂x², ···) e^{ikx} · F_k(t).  (1.1.13)

If L is a linear differential operator, we can rewrite (1.1.13) as follows:

  Σ_k F_k′(t) e^{ikx} = Σ_k L(ik, (ik)², ···) e^{ikx} · F_k(t).  (1.1.14)

By comparing the coefficients of e^{ikx} on the left- and right-hand sides of (1.1.14) and setting them equal, we have

  F_k′(t) = L(ik, (ik)², ···) · F_k(t).  (1.1.15)

Eq. (1.1.15) has the general solution

  F_k(t) = c_k e^{L(ik,(ik)²,···)t}.  (1.1.16)

By using the initial condition u(x,0) = φ(x) of (1.1.10), we obtain Σ_k f_k e^{ikx} = Σ_k F_k(0) e^{ikx}, which implies c_k = F_k(0) = f_k. Thus, (1.1.12) can be rewritten as follows:

  u(x,t) = Σ_k f_k e^{L(ik,(ik)²,···)t} · e^{ikx}.  (1.1.17)

Note that the exact solution u(x,t) is said to be stable if there is a nonnegative constant M, independent of u, t, and x, such that

  ‖u(x,t)‖_{L²} ⩽ M ‖u(x,0)‖_{L²}.  (1.1.18)

Because {e^{ikx}} is a set of orthonormal basis functions in L²(−π,π), we have

  ‖u(x,t)‖²_{L²} = Σ_k |f_k e^{L(ik,(ik)²,···)t}|² ⩽ Σ_k |f_k|² · sup_k |e^{L(ik,(ik)²,···)t}|²,  (1.1.19)

  ‖u(x,0)‖²_{L²} = Σ_k |f_k|²,  (1.1.20)

where ‖u(x,t)‖_{L²} = (∫_{−π}^{π} |u(x,t)|² dx)^{1/2} is the norm of u(·,t) in L²(−π,π).
From (1.1.19)–(1.1.20), we know that

  |e^{L(ik,(ik)²,···)t}| ⩽ M  (1.1.21)

holds if and only if

  ‖u(x,t)‖²_{L²} = Σ_k |f_k e^{L(ik,(ik)²,···)t}|² ⩽ M² Σ_k |f_k|² = M² ‖u(x,0)‖²_{L²}.  (1.1.22)

Thus, the exact solution u(x,t) of (1.1.10) is stable if and only if there is a nonnegative constant M, independent of u, t, and x, such that |e^{L(ik,(ik)²,···)t}| ⩽ M.

ii. Stability analysis of an FD scheme

In the following, we analyze the von Neumann stability conditions of an FD scheme. Let (1.1.10) have the following FD scheme:

  Σ_μ α_μ u_{j+μ}^{n+1} = Σ_γ β_γ u_{j+γ}^n.  (1.1.23)

Let the solution at the initial time level (n = 0) be expanded as

  u_j^0 = φ(x_j) = Σ_k g_k^0 e^{ikx_j} = Σ_k g_k^0 e^{ikjΔx}.  (1.1.24)

Let the nth time level solution u_j^n be expanded as

  u_j^n = Σ_k g_k^n e^{ikx_j} = Σ_k g_k^n e^{ikjΔx}.  (1.1.25)

By inserting (1.1.25) into (1.1.23), and by the orthonormality of {e^{ikx}}, we obtain

  g_k^{n+1} = G g_k^n,  k ∈ ℤ,  (1.1.26)

where ℤ is the set of all integers and G is known as the growth factor, given by

  G = (Σ_μ α_μ e^{ikμΔx})^{−1} · (Σ_γ β_γ e^{ikγΔx}).  (1.1.27)

Let g^n = (···, g_{−k}^n, ···, g_{−2}^n, g_{−1}^n, g_0^n, g_1^n, g_2^n, ···, g_k^n, ···). We have

  g^n = G g^{n−1},  n = 1, 2, ···.  (1.1.28)
By using Eq. (1.1.28), we have

  ‖g^n‖_2 = ‖G g^{n−1}‖_2 = ‖G² g^{n−2}‖_2 = ··· = ‖G^n g^0‖_2,  n = 1, 2, ···,  (1.1.29)

where ‖·‖_2 represents the norm in l². Thus, by (1.1.24) and (1.1.25), from (1.1.29) we obtain

  ‖u_j^n‖_2 ⩽ |G|^n ‖u_j^0‖_2 = |G|^n ‖φ(x_j)‖_2,  n = 1, 2, ···.  (1.1.30)

Then, by Theorem 1.1.1, we obtain the following result.

Theorem 1.1.4. The FD scheme (1.1.23) is stable if and only if there are two positive constants Δt_0 and Δx_0 as well as a constant K > 0, independent of n and j, such that its growth factor G satisfies

  |G|^n ⩽ K,  n = 1, 2, ···.  (1.1.31)

Corollary 1.1.5. The FD scheme (1.1.23) is stable if and only if its growth factor G satisfies

  |G| ⩽ 1 + O(Δt),  n = 1, 2, ···.  (1.1.32)

iii. Von Neumann's stability analysis of FD schemes

The condition (1.1.32) in Corollary 1.1.5 is usually known as the von Neumann stability condition. It follows that, in order to determine the stability of the FD scheme (1.1.23), it is necessary to compute its growth factor G by formula (1.1.27) and determine the Δt and Δx such that (1.1.32), or |G| ⩽ 1, is satisfied.

For specific FD schemes, it is easy to compute the growth factor G by (1.1.27). By the orthonormality of {e^{ikx}}, one substitutes

  u_j^n = G^n e^{ikx_j},  n = 1, 2, ···  (1.1.33)

into the FD scheme (1.1.23) and then, after eliminating common factors, obtains the growth factor G of (1.1.27).

For an FD scheme for two-dimensional linear PDEs, we need only substitute u_{j,m}^n = G^n e^{ikx_j} e^{iky_m} (n = 1, 2, ···) into the FD scheme and then simplify to obtain the growth factor G. Some relevant examples can be found in [192].

1.2 A POD-BASED REDUCED-ORDER EXTRAPOLATION FINITE DIFFERENCE SCHEME FOR THE 2D PARABOLIC EQUATION

In this section, we introduce the PODROEFD scheme for the two-dimensional (2D) parabolic equation. The work is based on Luo et al. [111].
For convenience and without loss of generality, let Ω = (a_x, b_x) × (c_y, d_y) and consider the following 2D parabolic equation. Find u such that

  u_t − Δu = f,  (x,y,t) ∈ Ω × (0,T),
  u(x,y,t) = g(x,y,t),  (x,y,t) ∈ ∂Ω × [0,T),
  u(x,y,0) = s(x,y),  (x,y) ∈ Ω,   (1.2.1)

where f(x,y,t), g(x,y,t), and s(x,y) are a given source term, boundary function, and initial value function, respectively, and T is the total time duration. The main motivation and physical background of the parabolic equation are the modeling of heat conduction and diffusion phenomena.

1.2.1 A Classical Finite Difference Scheme for the 2D Parabolic Equation

Let Δx and Δy be the spatial steps in the x and y directions, respectively, Δt the time step, and u_{j,k}^n the function value of u at the point (x_j, y_k, t_n) (0 ⩽ j ⩽ J = [(b_x − a_x)/Δx], 0 ⩽ k ⩽ K = [(d_y − c_y)/Δy], 0 ⩽ n ⩽ N = [T/Δt], where [(b_x − a_x)/Δx], [(d_y − c_y)/Δy], and [T/Δt] denote the integer parts of (b_x − a_x)/Δx, (d_y − c_y)/Δy, and T/Δt, respectively).

Thus, the forward difference explicit scheme for the 2D parabolic equation (1.2.1) at the reference point (x_j, y_k, t_n) is given by

  u_{j,k}^{n+1} = u_{j,k}^n + (Δt/Δx²)(u_{j+1,k}^n − 2u_{j,k}^n + u_{j−1,k}^n) + (Δt/Δy²)(u_{j,k+1}^n − 2u_{j,k}^n + u_{j,k−1}^n) + Δt f_{j,k}^n.  (1.2.2)

For the FD scheme (1.2.2), we have the following stability and convergence results.

Theorem 1.2.1. If 4Δt/Δx² ⩽ 1 and 4Δt/Δy² ⩽ 1, the FD scheme (1.2.2) is stable. Further, we have the following error estimates:

  ‖u_{j,k}^n − u(x_j, y_k, t_n)‖ = O(Δt, Δx², Δy²),  1 ⩽ n ⩽ N.  (1.2.3)

Proof. If 4Δt/Δx² ⩽ 1 and 4Δt/Δy² ⩽ 1, we have

  |u_{j,k}^{n+1}| ⩽ (1 − 2Δt/Δx² − 2Δt/Δy²)|u_{j,k}^n| + (Δt/Δx²)(|u_{j+1,k}^n| + |u_{j−1,k}^n|) + (Δt/Δy²)(|u_{j,k+1}^n| + |u_{j,k−1}^n|) + Δt|f_{j,k}^n| ⩽ ‖u^n‖_∞ + Δt‖f‖_∞,  (1.2.4)

where ‖·‖_∞ is the L^∞(Ω) norm. Thus, from (1.2.4), we obtain

  ‖u^{n+1}‖_∞ ⩽ ‖u^n‖_∞ + Δt‖f‖_∞,  n = 0, 1, 2, ···, N − 1.  (1.2.5)
By summing (1.2.5) from 0 to n − 1, we obtain

  ‖u^n‖_∞ ⩽ ‖u^0‖_∞ + nΔt‖f‖_∞,  n = 1, 2, ···, N.  (1.2.6)

Because nΔt ⩽ T, from (1.2.6) we obtain

  ‖u^n‖_∞ ⩽ ‖s(x,y)‖_∞ + T‖f‖_∞,  n = 1, 2, ···, N,  (1.2.7)

which shows that the solutions of the FD scheme (1.2.2) are bounded and depend continuously on the initial value s(x,y) and the source term f(x,y,t). Thus, by Theorem 1.1.1, we deduce that the FD scheme (1.2.2) is stable. Furthermore, the error estimates (1.2.3) are directly obtainable from the approximation of derivatives by difference quotients.

Thus, as long as the time step Δt, the spatial steps Δx and Δy, f(x,y,t), g(x,y,t), and s(x,y) are given, we can obtain the classical FD solutions u_{j,k}^n (0 ⩽ j ⩽ J, 0 ⩽ k ⩽ K, 0 ⩽ n ⩽ N) for the 2D parabolic equation (1.2.1) by computing the FD scheme (1.2.2).

1.2.2 Formulation of the POD Basis

Set u_i^n = u_{j,k}^n and f_i^n = f_{j,k}^n (1 ⩽ i ⩽ m ≡ (J+1)(K+1), i = k(J+1) + j + 1, 0 ⩽ j ⩽ J, 0 ⩽ k ⩽ K, 0 ⩽ n ⩽ N). Then, the classical FD solutions of the FD scheme (1.2.2) can be denoted by {u_i^n}_{n=1}^N (1 ⩽ i ⩽ m). We extract the first L solutions {u_i^n}_{n=1}^L (1 ⩽ i ⩽ m, L ≪ N) as snapshots. Further, we formulate the m × L snapshot matrix

  A =
    | u_1^1  u_1^2  ···  u_1^L |
    | u_2^1  u_2^2  ···  u_2^L |
    |   ⋮      ⋮     ⋱     ⋮   |
    | u_m^1  u_m^2  ···  u_m^L |.   (1.2.8)

By the singular value decomposition, the snapshot matrix A has the factorization

  A = U ( Σ_{l×l}        O_{l×(L−l)}
          O_{(m−l)×l}    O_{(m−l)×(L−l)} ) V^T,   (1.2.9)

where Σ_{l×l} = diag{σ_1, σ_2, ···, σ_l} is a diagonal matrix consisting of the singular values of A in decreasing order σ_1 ⩾ σ_2 ⩾ ··· ⩾ σ_l > 0, U = (ϕ_1, ϕ_2, ···, ϕ_m) is an m × m orthogonal matrix consisting of the orthogonal eigenvectors of AA^T, V = (φ_1, φ_2, ···, φ_L) is an L × L orthogonal matrix consisting of the orthogonal eigenvectors of A^TA, and O is a zero matrix.
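The construction just described, solving the small L × L eigenproblem for A^TA and lifting the eigenvectors, is often called the method of snapshots. The following sketch illustrates it with NumPy; the function name `pod_basis` and the synthetic rank-2 snapshot data are illustrative, not from the book:

```python
import numpy as np

def pod_basis(A, M):
    """Method of snapshots: first M POD basis vectors of an m x L snapshot matrix A.

    The eigenproblem is solved for the small L x L matrix A^T A; each basis
    vector is then lifted by phi_j = A w_j / sqrt(lambda_j), as in the text.
    """
    lam, W = np.linalg.eigh(A.T @ A)      # eigenpairs of A^T A, ascending order
    lam, W = lam[::-1], W[:, ::-1]        # reorder so lambda_1 >= lambda_2 >= ...
    lam = np.clip(lam, 0.0, None)         # guard against tiny negative round-off
    Phi = A @ W[:, :M] / np.sqrt(lam[:M])  # lift to the m-dimensional space
    return Phi, lam

# Synthetic check: rank-2 snapshots are reproduced exactly by M = 2 modes.
rng = np.random.default_rng(0)
A = rng.standard_normal((50, 2)) @ rng.standard_normal((2, 10))  # m = 50, L = 10
Phi, lam = pod_basis(A, 2)
err = np.linalg.norm(A - Phi @ (Phi.T @ A), 2)   # spectral-norm projection error
```

For a rank-2 matrix the error equals σ_3 = 0, which matches the bound ‖A − ΦΦ^TA‖_{2,2} = √λ_{M+1} derived in the next pages.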
Because the number of mesh points m is much larger than the number L of snapshots extracted, the order m of the matrix AA^T is much larger than the order L of the matrix A^TA, but the positive eigenvalues λ_j (j = 1, 2, ···, l) of A^TA and AA^T are identical and satisfy λ_j = σ_j² (j = 1, 2, ···, l). Therefore, we may first find the eigenvalues λ_1 ⩾ λ_2 ⩾ ··· ⩾ λ_l > 0 (l = rank A) of the matrix A^TA and the corresponding eigenvectors φ_j. Then, by the relationship

  ϕ_j = Aφ_j/√λ_j,  j = 1, 2, ..., l,

we obtain the eigenvectors ϕ_j (j = 1, 2, ..., l) corresponding to the nonzero eigenvalues of the matrix AA^T. Take

  A_M = U ( Σ_{M×M}        O_{M×(L−M)}
            O_{(m−M)×M}    O_{(m−M)×(L−M)} ) V^T,   (1.2.10)

where Σ_{M×M} = diag{σ_1, σ_2, ···, σ_M} is the diagonal matrix consisting of the first M positive singular values of Σ_{l×l}. Define the norm of the matrix A as ‖A‖_{2,2} = sup_{u∈ℝ^L} ‖Au‖_2/‖u‖_2 (where ‖u‖_2 is the l² norm of the vector u). We have the following.

Lemma 1.2.2. Let Φ = (ϕ_1, ϕ_2, ···, ϕ_M) consist of the first M eigenvectors of U = (ϕ_1, ϕ_2, ···, ϕ_m). Then we have

  A_M = Σ_{i=1}^M σ_i ϕ_i φ_i^T = ΦΦ^T A.  (1.2.11)

Proof. According to

  A_M = Σ_{i=1}^M σ_i ϕ_i φ_i^T = (ϕ_1 ··· ϕ_M) diag{σ_1, ···, σ_M} (φ_1 ··· φ_M)^T,

  A = (ϕ_1 ··· ϕ_l) diag{σ_1, ···, σ_l} (φ_1 ··· φ_l)^T,

we have
  ΦΦ^T A = Φ · (ϕ_1 ··· ϕ_M)^T (ϕ_1 ··· ϕ_l) diag{σ_1, ···, σ_l} (φ_1 ··· φ_l)^T
         = Φ · (I_M  O) diag{σ_1, ···, σ_l} (φ_1 ··· φ_l)^T
         = (ϕ_1 ··· ϕ_M) diag{σ_1, ···, σ_M} (φ_1 ··· φ_M)^T
         = A_M.

Thus, by the relationship between the matrix norm and its spectral radius, we have

  min_{rank(B)⩽M} ‖A − B‖_{2,2} = ‖A − A_M‖_{2,2} = ‖A − ΦΦ^TA‖_{2,2} = √λ_{M+1},  (1.2.12)

where Φ = (ϕ_1, ϕ_2, ···, ϕ_M) consists of the first M eigenvectors of U = (ϕ_1, ϕ_2, ···, ϕ_m). If the L column vectors of A are denoted by u^n = (u_1^n, u_2^n, ···, u_m^n)^T (n = 1, 2, ···, L), we have

  ‖u^n − u_M^n‖_2 = ‖(A − ΦΦ^TA)ε_n‖_2 ⩽ ‖A − ΦΦ^TA‖_{2,2}‖ε_n‖_2 = √λ_{M+1},  (1.2.13)

where u_M^n = Σ_{j=1}^M (ϕ_j, u^n)ϕ_j represents the projection of u^n onto Φ = (ϕ_1, ϕ_2, ···, ϕ_M), (ϕ_j, u^n) is the inner product of ϕ_j and u^n, and ε_n denotes the unit vector with nth component equal to 1. The inequality (1.2.13) shows that u_M^n is an optimal approximation of u^n whose error is no more than √λ_{M+1}. Thus, Φ is an orthonormal optimal POD basis of A.

1.2.3 Establishment of the POD-Based Reduced-Order Finite Difference Scheme for the 2D Parabolic Equation

We still denote the classical FD solution vectors of the FD scheme (1.2.2) by u^n = (u_1^n, u_2^n, ···, u_m^n)^T (n = 1, 2, ···, L, L+1, ···, N). Thus, we can write
the FD scheme (1.2.2) in vector form as follows:

  u^{n+1} = u^n + (Δt/Δx²)Bu^n + (Δt/Δy²)Cu^n + ΔtF_m^n,  (1.2.14)

where F_m^n = (f_1^n, f_2^n, ···, f_m^n)^T, B is the m × m tridiagonal matrix

  B =
    | −1   1                     |
    |  1  −2   1                 |
    |      1  −2   1             |
    |          ⋱   ⋱   ⋱        |
    |              1  −2   1     |
    |                  1  −1     |,

and C is the m × m matrix with the same pattern of diagonal entries (−1 in the first and last diagonal positions, −2 elsewhere) but with its off-diagonal entries 1 shifted J + 1 positions from the main diagonal, i.e., C_{i,i±(J+1)} = 1, corresponding to the second difference in the y direction.

In order to estimate a tridiagonal matrix, it is necessary to introduce the following lemma (see [192, Theorems 1.3.1 and 1.3.2]).

Lemma 1.2.3. The m × m tridiagonal matrix

  B̃ =
    | d   b                     |
    | c   a   b                 |
    |     c   a   b             |
    |         ⋱   ⋱   ⋱        |
    |             c   a   b     |
    |                 c   d     |
has the following eigenvalues:

  λ̃_j = a + 2√(bc) cos[(2j − 1)π/(2m + 1)],  j = 1, 2, ···, m.

Therefore, by using the relationship between the matrix eigenvalues and its norm, we have

  ‖B‖_{2,2} = ‖C‖_{2,2} = |2 − 2cos[(2m − 1)π/(2m + 1)]| = |2 − 2cos[π − 2π/(2m + 1)]| = |2 + 2cos[2π/(2m + 1)]| < 4.  (1.2.15)

If we replace u^n of (1.2.14) with u^{*n} = ΦΦ^TAε_n (n = 1, 2, ···, L) and u^{*n} = Φα^n (n = L+1, L+2, ···, N), we obtain the PODROEFD scheme as follows:

  u^{*n} = ΦΦ^TAε_n,  n = 1, 2, ···, L,
  Φα^{n+1} = Φα^n + (Δt/Δx²)BΦα^n + (Δt/Δy²)CΦα^n + ΔtF_m^n,  n = L, L+1, ···, N − 1,   (1.2.16)

where α^n = (α_1^n, α_2^n, ···, α_M^n)^T are vectors yet to be determined. By multiplying Eq. (1.2.16) with the orthonormal matrix Φ^T, we obtain

  α^n = Φ^Tu^n,  n = 1, 2, ···, L,
  α^{n+1} = α^n + (Δt/Δx²)Φ^TBΦα^n + (Δt/Δy²)Φ^TCΦα^n + ΔtΦ^TF_m^n,  n = L, L+1, ···, N − 1.   (1.2.17)

After α^n (n = L, L+1, ···, N) are obtained from the system of Eqs. (1.2.17), we can obtain the PODROEFD solution vectors for Eq. (1.2.1) as follows:

  u^{*n} = Φα^n,  n = 1, 2, ···, L, L+1, ···, N.  (1.2.18)

Further, we can obtain the PODROEFD solution components for Eq. (1.2.1) as follows:

  u_{j,k}^{*n} = u_i^{*n},  0 ⩽ j ⩽ J, 0 ⩽ k ⩽ K, 1 ⩽ i = k(J+1) + j + 1 ⩽ m = (K+1)(J+1).  (1.2.19)

Remark 1.2.1. It is easily seen that the classical FD scheme (1.2.2) contains m unknown quantities at each time level, whereas the system of Eqs. (1.2.17)–(1.2.18) at the same time level (when n > L) contains only M unknown quantities (M ≪ L ≪ m; usually M = 6, while m = O(10⁴) ∼ O(10⁶)). Therefore, the PODROEFD model (1.2.17)–(1.2.18) includes very few degrees of freedom and does not involve repeated computations. Here, we extract the snapshots from the
first L classical FD solutions; but when we solve real-world problems we may, instead, extract snapshots from samples of experiments on the physical system trajectories.

1.2.4 Error Estimates of the Reduced-Order Finite Difference Solutions for the 2D Parabolic Equation

First, from (1.2.13), we obtain

  ‖u^n − u^{*n}‖_2 = ‖u^n − u_M^n‖_2 = ‖(A − ΦΦ^TA)ε_n‖_2 ⩽ √λ_{M+1},  n = 1, 2, ···, L.  (1.2.20)

Rewrite the second equation of the system of Eqs. (1.2.16) as follows:

  u^{*n+1} = u^{*n} + (Δt/Δx²)Bu^{*n} + (Δt/Δy²)Cu^{*n} + ΔtF_m^n,  n = L, L+1, ···, N − 1.  (1.2.21)

Put δ = Δt‖B‖_{2,2}/Δx² + Δt‖C‖_{2,2}/Δy². By subtracting (1.2.21) from (1.2.14) and taking norms, from (1.2.20) we have

  ‖u^{n+1} − u^{*n+1}‖_2 ⩽ (1 + δ)‖u^n − u^{*n}‖_2 ⩽ ··· ⩽ (1 + δ)^{n+1−L}‖u^L − u^{*L}‖_2 ⩽ (1 + δ)^{n+1−L}√λ_{M+1},  n = L, L+1, ···, N − 1.  (1.2.22)

By summarizing the above discussion and noting that the absolute value of each vector component is less than the vector norm, from (1.2.3) we obtain the following theorem.

Theorem 1.2.4. The errors between the solutions u_{j,k}^n of the classical FD scheme and the solutions u_{j,k}^{*n} of the PODROEFD scheme (1.2.17)–(1.2.18) satisfy the following estimates:

  |u_{j,k}^n − u_{j,k}^{*n}| ⩽ E(n)√λ_{M+1},  1 ⩽ j ⩽ J, 1 ⩽ k ⩽ K, 1 ⩽ n ⩽ N,  (1.2.23)

where E(n) = 1 (1 ⩽ n ⩽ L) and E(n) = (1 + δ)^{n−L} (L+1 ⩽ n ⩽ N), with δ = Δt‖B‖_{2,2}/Δx² + Δt‖C‖_{2,2}/Δy². Further, the errors between the exact solution u(x,y,t) of Eq. (1.2.1) and the solutions u_{j,k}^{*n} of the POD-based reduced-order FD scheme (1.2.17)–(1.2.18) satisfy the following estimates:

  |u(x_j, y_k, t_n) − u_{j,k}^{*n}| = O(E(n)√λ_{M+1}, Δt, Δx², Δy²),  1 ⩽ j ⩽ J, 1 ⩽ k ⩽ K, 1 ⩽ n ⩽ N.  (1.2.24)
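The two factors in the bound, the truncation term √λ_{M+1} and the amplification E(n) = (1 + δ)^{n−L}, translate directly into numerical checks. The following sketch (function names and sample eigenvalues are illustrative, not from the book) picks the smallest mode count M with √λ_{M+1} below a tolerance and evaluates E(n); ‖B‖_{2,2} = ‖C‖_{2,2} < 4 by (1.2.15), so 4 is used as a safe bound:

```python
import numpy as np

def choose_pod_dim(lam, tol):
    """Smallest M with sqrt(lambda_{M+1}) <= tol; lam is sorted decreasingly,
    so lam[M] holds lambda_{M+1} in the 1-based notation of the text."""
    for M in range(len(lam)):
        if np.sqrt(lam[M]) <= tol:
            return M
    return len(lam)

def error_amplification(n, L, dt, dx, dy, normB=4.0, normC=4.0):
    """E(n) of Theorem 1.2.4: 1 for n <= L, (1+delta)^(n-L) for n > L."""
    delta = dt * normB / dx**2 + dt * normC / dy**2
    return 1.0 if n <= L else (1.0 + delta) ** (n - L)

# Hypothetical eigenvalue decay and the step sizes of Section 1.2.6.
lam = np.array([1e-1, 1e-3, 1e-5, 1e-7, 1e-9, 1e-11, 1e-13])
M = choose_pod_dim(lam, tol=4e-4)                      # sqrt(1e-7) <= 4e-4 -> M = 3
E = error_amplification(n=40, L=20, dt=5e-5, dx=0.02, dy=0.02)
```

With the step sizes above, δ ⩽ 1, so E(n) doubles at most once per extrapolated step; this growth is what motivates the basis-renewal criterion of Remark 1.2.2.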
Remark 1.2.2. The error terms containing √λ_{M+1} in Theorem 1.2.4 are caused by the POD-based order reduction of the classical FD scheme, and they can be used to select the number of POD basis vectors, i.e., one should take M such that √λ_{M+1} = O(Δt, Δx², Δy²). The factors E(n) = (1 + δ)^{n−L} (L+1 ⩽ n ⩽ N) are caused by the extrapolating iterations and can be used as a guide for renewing the POD basis, i.e., if (1 + δ)^{n−L}√λ_{M+1} > max(Δt, Δx², Δy²), it is necessary to update the POD basis. If we take λ_{M+1} satisfying (1 + δ)^{N−L}√λ_{M+1} = O(Δt, Δx², Δy²), then the PODROEFD scheme (1.2.17)–(1.2.18) is convergent and, thus, we do not have to update the POD basis.

1.2.5 The Implementation of the Algorithm of the POD-Based Reduced-Order Finite Difference Scheme for the 2D Parabolic Equation

In order to facilitate the use of the PODROEFD scheme for the 2D parabolic equation, the following implementation steps of the algorithm for the PODROEFD scheme (1.2.17)–(1.2.18) are helpful.

Step 1. Classical FD computation and extraction of snapshots. Write the classical FD scheme (1.2.2) in vector form (1.2.14) and find the solution vectors u^n = (u_1^n, u_2^n, ···, u_m^n)^T (n = 1, 2, ···, L) of (1.2.14) at the first few L steps (in the following, say, L = 20 in Section 1.2.6).

Step 2. Snapshot matrix A and eigenvalues of A^TA. Formulate the snapshot matrix A = (u_i^n)_{m×L} and compute the eigenvalues λ_1 ⩾ λ_2 ⩾ ··· ⩾ λ_l > 0 (l = rank A) and the eigenvectors φ_j (j = 1, 2, ···, M̃) of the matrix A^TA.

Step 3. Choice of POD basis. For the error tolerance μ = O(Δt, Δx², Δy²), determine the number M (M ⩽ M̃) of POD basis vectors such that √λ_{M+1} ⩽ μ, and formulate the POD basis Φ = (ϕ_1, ϕ_2, ···, ϕ_M) (where ϕ_j = Aφ_j/√λ_j, j = 1, 2, ···, M).

Step 4.
Solve/compute the PODROEFD model. Solve the PODROEFD scheme (1.2.17)–(1.2.18) to obtain the reduced-order solution vectors u^{*n} = (u_1^{*n}, u_2^{*n}, ···, u_m^{*n})^T; further, obtain the component forms u_{j,k}^{*n} = u_i^{*n} (0 ⩽ j ⩽ J, 0 ⩽ k ⩽ K, i = k(J+1) + j + 1, 1 ⩽ i ⩽ m = (K+1)(J+1)).

Step 5. Check accuracy and renew the POD basis to continue. Set δ = Δt‖B‖_{2,2}/Δx² + Δt‖C‖_{2,2}/Δy². If (1 + δ)^{n−L}√λ_{M+1} ⩽ μ, then u^{*n} = (u_1^{*n}, u_2^{*n}, ···, u_m^{*n})^T are the solution vectors of the PODROEFD scheme (1.2.17)–(1.2.18) satisfying the accuracy requirement. Else, i.e., if (1 + δ)^{n−L}√λ_{M+1} > μ, put u^1 = u^{*(n−L)}, u^2 = u^{*(n−L+1)}, ..., u^L = u^{*(n−1)} and return to Step 2.
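Steps 1–5 can be sketched end to end for a generic linear update u^{n+1} = step(u^n) standing in for the vector scheme (1.2.14). Everything here (the function `podroefd`, the decay map used in the example) is an illustrative sketch under that assumption, not the book's implementation:

```python
import numpy as np

def podroefd(step, u0, N, L, tol, delta):
    """Sketch of Steps 1-5: snapshots -> POD basis -> reduced extrapolation,
    renewing the basis whenever (1+delta)^(n-L)*sqrt(lam_{M+1}) > tol."""
    # Step 1: L classical FD steps give the snapshots.
    snaps = [u0]
    for _ in range(L - 1):
        snaps.append(step(snaps[-1]))
    while True:
        A = np.column_stack(snaps[-L:])
        # Steps 2-3: eigenpairs of A^T A, then the lifted POD basis Phi.
        lam, W = np.linalg.eigh(A.T @ A)
        lam, W = np.maximum(lam[::-1], 0.0), W[:, ::-1]
        M = max(next((i for i in range(L) if np.sqrt(lam[i]) <= tol), L), 1)
        Phi = A @ W[:, :M] / np.sqrt(lam[:M])
        # Step 4: march in the reduced coordinates alpha = Phi^T u.
        alpha = Phi.T @ snaps[-1]
        n = len(snaps)
        while n < N:
            alpha = Phi.T @ step(Phi @ alpha)   # reduced update, as in (1.2.17)
            snaps.append(Phi @ alpha)
            n += 1
            # Step 5: renew the basis from the last L solutions if needed.
            if M < L and (1 + delta) ** (n - L) * np.sqrt(lam[M]) > tol:
                break
        if n >= N:
            return Phi @ alpha

# Toy example: a pure decay step, whose snapshots have rank 1 (so M = 1 suffices).
u0 = np.ones(30)
result = podroefd(lambda u: 0.9 * u, u0, N=40, L=20, tol=1e-6, delta=0.01)
```

For the rank-1 toy problem the reduced march is exact, so after N = 40 levels the result equals 0.9^39 · u0.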
FIGURE 1.2.1 The solution u obtained via the classical FD scheme.

1.2.6 A Numerical Example for the 2D Parabolic Equation

A numerical example is presented here to demonstrate the advantage of the PODROEFD scheme (1.2.17)–(1.2.18). To this end, we take f(x,y,t) = 0, u(x,y,0) = s(x,y) = sin πx sin πy, and u(x,y,t) = 0 on ∂Ω in Eq. (1.2.1). The computational domain is taken as Ω = {(x,y); 0 ⩽ x ⩽ 2, 0 ⩽ y ⩽ 2}. The spatial steps are taken as Δx = Δy = 0.02 and the time step as Δt = 0.00005. Thus, we have 8Δt/Δx² ⩽ 1 and 8Δt/Δy² ⩽ 1, so that δ < 1.

We first find the classical FD solution at t = 2 by means of the classical FD scheme (1.2.2); it is depicted graphically in Fig. 1.2.1. Next, we take the 20 classical FD solutions of the classical FD scheme (1.2.2) at the first 20 steps as snapshots. By direct computation, we obtain √λ_7 ⩽ 4 × 10⁻⁴. It can be shown that, as long as we take the first six POD basis vectors, the theoretical accuracy requirement is satisfied. Next, we find the reduced-order FD solution at t = 2 (n = 40000) by means of the PODROEFD scheme (1.2.17)–(1.2.18), for which it is unnecessary to update the POD basis; the solution is depicted graphically in Fig. 1.2.2. Because the PODROEFD scheme (1.2.17)–(1.2.18) includes only six unknown quantities and uses six optimized data from the first 20 classical solutions as initial values, it saves computing time, reduces the degrees of freedom in the numerical computations, and alleviates the TE accumulation; therefore, the PODROEFD solution is more efficient.

Fig. 1.2.3 is the (log10) error chart between the classical FD solutions and the reduced-order FD solutions with different numbers of up to 20 POD bases at t = 2. It shows that, as long as we take M = 6, the reduced-order FD solutions obtained satisfy the accuracy requirement (i.e., the error does not exceed 4 × 10⁻⁴).
Thus, the theoretical results are consistent with the ones from numerical calculations (they are all O(10−4)). Hence, it is shown that the PODROEFD
FIGURE 1.2.2 The solution u obtained via the PODROEFD scheme.

FIGURE 1.2.3 The (log10) error plot between the classical FD solutions and the reduced-order FD solutions with different numbers of up to 20 POD bases at t = 2.

scheme (1.2.17)–(1.2.18) is effective for the 2D parabolic equation. See the advantages and benefits of POD in the Foreword and Introduction of the book.

1.3 A POD-BASED REDUCED-ORDER EXTRAPOLATION FINITE DIFFERENCE SCHEME FOR THE 2D NONSTATIONARY STOKES EQUATION

In this section, a PODROEFD scheme is given for the 2D nonstationary Stokes equation. The PODROEFD scheme producing the solutions on the time span [T₀, T] (T₀ < T) is obtained by extrapolation and iteration from the information on the short time span [0, T₀]. Guidelines for choosing the number of POD basis vectors and for renewing the POD basis are provided, and an implementation for solving the PODROEFD scheme is given. Some numerical experiments are provided to illustrate the feasibility and efficiency of the PODROEFD scheme for simulating a channel flow with local expansion. The PODROEFD scheme for the 2D nonstationary Stokes equation is based on the work in [117].
FIGURE 1.3.1 The domain of Problem 1.3.1.

1.3.1 Background for the 2D Nonstationary Stokes Equation

The channel flow with local expansion in this section is motivated by applications, for example, capillary blood vessel flow in the human body, where there is microchannel flow with expansion geometries. It can be simplified into a nonstationary Stokes channel flow with local expansion [16,34,144]. Its geometry is approximately made of two squares at the top and bottom of the channel. The flow domain is shown in Fig. 1.3.1.

Thus, the mathematical model for the channel flow with local expansion is described by the following system of the nonstationary Stokes equation.

Problem 1.3.1. Find (u,v) and p such that, for T > 0,

  ∂u/∂t − (1/Re)(∂²u/∂x² + ∂²u/∂y²) + ∂p/∂x = f,  (x,y,t) ∈ Ω × (0,T],
  ∂v/∂t − (1/Re)(∂²v/∂x² + ∂²v/∂y²) + ∂p/∂y = g,  (x,y,t) ∈ Ω × (0,T],
  ∂u/∂x + ∂v/∂y = 0,  (x,y,t) ∈ Ω × (0,T],
  u(x,y,t) = ϕ₁(x,y,t), v(x,y,t) = ϕ₂(x,y,t),  (x,y,t) ∈ ∂Ω × (0,T],
  u(x,y,0) = u⁰(x,y), v(x,y,0) = v⁰(x,y),  (x,y) ∈ Ω,   (1.3.1)

where (u,v) is the velocity vector, p is the pressure, T is the total time duration, Re is the Reynolds number, f(x,y,t) and g(x,y,t) are the given body forces in the x-direction and y-direction, respectively, and ϕ₁(x,y,t), ϕ₂(x,y,t), u⁰(x,y), and v⁰(x,y) are four given functions.

The nonstationary Stokes equation constitutes an important system of equations in fluid dynamics with many applications beyond the channel flow with local expansion [8,16,18,34,42,144,146,183]. For Problem 1.3.1 there are generally no analytical solutions, so one has to rely on numerical solutions. The classical FD scheme with second-order accuracy in the time and spatial variable discretizations is one of the simplest and most convenient high-accuracy methods for
finding numerical solutions of the nonstationary Stokes equation. However, the approach involves a large number of degrees of freedom (unknown quantities). Especially, due to the TE accumulation in the computational process, it may also fail to converge after several computation steps. Thus, an important task is to lessen the degrees of freedom so as to reduce the computational load and save CPU time in a way that guarantees sufficiently accurate numerical solutions. Here, we employ the POD method to reduce the degrees of freedom.

1.3.2 A Classical Finite Difference Scheme for the 2D Nonstationary Stokes Equation and the Generation of Snapshots

Let N be a positive integer, Δt = T/N the time step increment, Δx and Δy the spatial step increments in the x- and y-directions, respectively, and let u_{j+1/2,k}^n, v_{j,k+1/2}^n, and p_{j,k}^n denote the values of the functions u, v, and p at the points (x_{j+1/2}, y_k, t_n), (x_j, y_{k+1/2}, t_n), and (x_j, y_k, t_n) (0 ⩽ j ⩽ J, 0 ⩽ k ⩽ K, 0 ⩽ n ⩽ N), respectively. Then Problem 1.3.1 is known to have the following classical FD scheme with second-order accuracy:

  [(u_{j+1/2,k} − u_{j−1/2,k})/Δx + (v_{j,k+1/2} − v_{j,k−1/2})/Δy]^{n+1} = 0,  (1.3.2)

  u_{j+1/2,k}^{n+1} = F_{j+1/2,k}^n − (2Δt/Δx)(p_{j+1,k}^n − p_{j,k}^n) + 2Δt f_{j+1/2,k}^n,  (1.3.3)

  v_{j,k+1/2}^{n+1} = G_{j,k+1/2}^n − (2Δt/Δy)(p_{j,k+1}^n − p_{j,k}^n) + 2Δt g_{j,k+1/2}^n,  (1.3.4)

where

  F_{j+1/2,k}^n = u_{j+1/2,k}^{n−1} + (2Δt/Re)[(u_{j+1/2,k−1} − 2u_{j+1/2,k} + u_{j+1/2,k+1})/Δy² + (u_{j−1/2,k} − 2u_{j+1/2,k} + u_{j+3/2,k})/Δx²]^n,

  G_{j,k+1/2}^n = v_{j,k+1/2}^{n−1} + (2Δt/Re)[(v_{j−1,k+1/2} − 2v_{j,k+1/2} + v_{j+1,k+1/2})/Δx² + (v_{j,k−1/2} − 2v_{j,k+1/2} + v_{j,k+3/2})/Δy²]^n.

Inserting (1.3.3) and (1.3.4) into (1.3.2), one can obtain the approximate FD scheme of a Poisson equation for p as follows:

  [(p_{j−1,k} − 2p_{j,k} + p_{j+1,k})/Δx² + (p_{j,k−1} − 2p_{j,k} + p_{j,k+1})/Δy²]^n = R,  (1.3.5)
where

  R = (1/(ΔtΔx))[2Δt(f_{j+1/2,k} − f_{j−1/2,k}) + F_{j+1/2,k} − F_{j−1/2,k}]^n + (1/(ΔtΔy))[2Δt(g_{j,k+1/2} − g_{j,k−1/2}) + G_{j,k+1/2} − G_{j,k−1/2}]^n.

If ΔtRe ⩽ 4 and 4Δt ⩽ min{ReΔx², ReΔy²}, or Δt = o(ReΔx², ReΔy²), by taking the same approach as in the proof of Theorem 1.2.1 we easily prove that the FD scheme (1.3.3)–(1.3.5) is stable (see also [34,192]). If the solution triple (u,v,p) of Problem 1.3.1 is sufficiently regular, we have the following error estimates:

  ‖(u(x_{j+1/2}, y_k, t_n), v(x_j, y_{k+1/2}, t_n), p(x_j, y_k, t_n)) − (u_{j+1/2,k}^n, v_{j,k+1/2}^n, p_{j,k}^n)‖_2 = O(Δt², Δx², Δy²),  n = 1, 2, ···, N,  (1.3.6)

where ‖·‖_2 denotes the l² norm of the vector.

If the Reynolds number Re, the body forces f(x,y,t) and g(x,y,t) in the x- and y-directions, the boundary value functions ϕ₁(x,y,t) and ϕ₂(x,y,t), the initial value functions u⁰(x,y) and v⁰(x,y), the time step increment Δt, and the spatial step increments Δx and Δy are given, let u_{j+1/2,k}^0 = u_{j+1/2,k}^1 = u⁰(x_{j+1/2}, y_k) and v_{j,k+1/2}^0 = v_{j,k+1/2}^1 = v⁰(x_j, y_{k+1/2}). By solving the FD scheme (1.3.3)–(1.3.5), we can obtain the classical FD solutions u_{j+1/2,k}^n, v_{j,k+1/2}^n, and p_{j,k}^n (0 ⩽ j ⩽ J, 0 ⩽ k ⩽ K, 1 ⩽ n ⩽ N).

Set u_i^n = u_{j+1/2,k}^n, v_i^n = v_{j,k+1/2}^n, and p_i^n = p_{j,k}^n (i = kJ + j + 1, 1 ⩽ i ⩽ m, m = (J+1)(K+1), 0 ⩽ j ⩽ J − 1, 0 ⩽ k ⩽ K − 1). We may choose the first L solutions from the set {u_i^n, v_i^n, p_i^n}_{n=1}^N (1 ⩽ i ⩽ m), containing N × m elements, to construct a set {u_i^l, v_i^l, p_i^l}_{l=1}^L (1 ⩽ i ⩽ m, L ≪ N) containing L × m elements, which are just the snapshots.

Remark 1.3.1. The snapshots here are drawn from the first L FD solutions. However, when one computes real-world problems, one may also choose the ensemble of snapshots from physical system trajectories by drawing samples from the experiments.
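The bookkeeping step, flattening each staggered-grid field into a column and stacking the first L columns into the matrices used below, can be sketched as follows. The function name `snapshot_matrices` and the assumption that each field is stored as a 2D array indexed (k, j) are illustrative:

```python
import numpy as np

def snapshot_matrices(u_steps, v_steps, p_steps, L):
    """Arrange the first L staggered-grid FD solutions into three m x L
    snapshot matrices A_s (s = u, v, p).

    Each of u_steps[n], v_steps[n], p_steps[n] is assumed to be a 2D array
    indexed (k, j); row-major flattening then realizes i = k*(J+1) + j + 1.
    """
    A = {}
    for s, steps in (("u", u_steps), ("v", v_steps), ("p", p_steps)):
        A[s] = np.column_stack([steps[n].ravel(order="C") for n in range(L)])
    return A["u"], A["v"], A["p"]

# Tiny check with constant fields: snapshot l of A_u is the flattened level l.
u_steps = [np.full((2, 3), float(n)) for n in range(4)]   # 4 levels on a 2x3 grid
Au, Av, Ap = snapshot_matrices(u_steps, u_steps, u_steps, L=3)
```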
1.3.3 Formulations of the POD Basis and the POD-Based Reduced-Order Extrapolating Finite Difference Scheme

In the following, we first formulate a set of POD bases, and then establish the PODROEFD scheme with second-order accuracy for the nonstationary Stokes equation. The set of snapshots {u_i^l, v_i^l, p_i^l}_{l=1}^L (1 ⩽ i ⩽ m) in Section 1.3.2 can be expressed as three m × L matrices A_s = (s_i^l)_{m×L} (s = u, v, p). Let λ_{s1} ⩾ λ_{s2} ⩾ ··· ⩾ λ_{sM̃_s} > 0 (M̃_s = rank(A_s)) be the positive eigenvalues of the matrices A_sA_s^T (s = u, v, p), and let the matrices U_s = (φ_{s1}, φ_{s2}, ···, φ_{sm}) and V_s = (ϕ_{s1}, ϕ_{s2}, ···, ϕ_{sL}) consist of the
orthonormal eigenvectors of the matrices A_sA_s^T and A_s^TA_s, respectively. Then it follows easily from linear algebra that A_s = U_sD_{M̃_s}V_s^T (where D_{M̃_s} = diag{√λ_{s1}, √λ_{s2}, ···, √λ_{sM̃_s}, 0, ···, 0}). Thus, U_s = (φ_{s1}, φ_{s2}, ···, φ_{sm}) (s = u, v, p) make up three sets of POD bases.

The number of mesh points is far larger than that of the snapshots drawn, that is, m ≫ L, so the order m of the matrices A_sA_s^T is far larger than the order L of the matrices A_s^TA_s. But the numbers of their positive eigenvalues are identical, so we may first solve the eigenequation corresponding to the matrices A_s^TA_s to find the eigenvectors ϕ_{sj} (j = 1, 2, ..., M̃_s), and then, by the relationship φ_{sj} = A_sϕ_{sj}/√λ_{sj} (j = 1, 2, ..., M̃_s, s = u, v, p), obtain the eigenvectors φ_{sj} (j = 1, 2, ..., M̃_s) corresponding to the nonzero eigenvalues of the matrices A_sA_s^T. Thus, by using the same technique as in Section 1.2.2, three optimal orthonormal POD bases Φ_s = (φ_{s1}, φ_{s2}, ···, φ_{sM_s}) (M_s ≪ L, s = u, v, p) are formed by the first M_s (0 < M_s ⩽ M̃_s ⩽ L) columns of the three sets of POD bases U_s = (φ_{s1}, φ_{s2}, ···, φ_{sm}). Further, by the properties of the matrix norm (see (1.2.12) in Section 1.2.2), the following error estimates hold:

  ‖A_s − A_{M_s}‖_{2,2} = ‖A_s − Φ_sΦ_s^TA_s‖_{2,2} = √λ_{s(M_s+1)},  s = u, v, p,  (1.3.7)

where ‖A‖_{2,2} = sup_x ‖Ax‖_2/‖x‖_2, ‖·‖_2 is the l² vector norm, A_{M_s} = U_sD_{M_s}V_s^T, and D_{M_s} = diag{√λ_{s1}, √λ_{s2}, ···, √λ_{sM_s}, 0, ···, 0} (0 < M_s ⩽ M̃_s ⩽ L). Let ε_l (l = 1, 2, ..., L) denote the unit column vectors whose lth component is 1. Set

  s_m^n = (s_1^n, s_2^n, ···, s_m^n)^T,  s = u, v, p,  n = 1, 2, ···, N.  (1.3.8)

Then, from (1.3.7), the following error estimates hold:

  ‖s_m^l − Φ_sΦ_s^Ts_m^l‖_2 = ‖(A_s − Φ_sΦ_s^TA_s)ε_l‖_2 ⩽ ‖A_s − Φ_sΦ_s^TA_s‖_{2,2}‖ε_l‖_2 = √λ_{s(M_s+1)},  s = u, v, p,  l = 1, 2, ···, L,  (1.3.9)

which shows that Φ_sΦ_s^Ts_m^l is the optimal approximation of s_m^l with error √λ_{s(M_s+1)} (s = u, v, p).
By using the notation of (1.3.8), the classical FD scheme (1.3.3)–(1.3.5) is now written in the following vector scheme:

  p_m^n = F̃₁(u_m^n, v_m^n),  0 ⩽ n ⩽ N,
  (u_m^{n+1}, v_m^{n+1})^T = (u_m^{n−1}, v_m^{n−1})^T + F̃(u_m^n, v_m^n, p_m^n),  1 ⩽ n ⩽ N − 1,   (1.3.10)

where F̃ and F̃₁ are determined from (1.3.3)–(1.3.4) and (1.3.5), respectively.
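The two-step structure of (1.3.10) carries over directly to the reduced coordinates once the maps are projected onto the POD bases. The following sketch assumes generic linear maps F and F1 standing in for F̃ and F̃₁ (the function name `reduced_two_step` and the toy operators in the example are illustrative, not the book's discretization):

```python
import numpy as np

def reduced_two_step(Phi_uv, Phi_p, F, F1, uv0, uv1, n_steps):
    """March the two-step vector scheme in reduced coordinates:
        gamma^n  = Phi_p^T  F1(Phi_uv a^{n-1}),
        a^{n+1}  = a^{n-1} + Phi_uv^T F(Phi_uv a^n, Phi_p gamma^n),
    where Phi_uv and Phi_p are orthonormal POD bases for the stacked
    velocity pair and the pressure, and a = (alpha, beta) stacked."""
    a_prev = Phi_uv.T @ uv0   # reduced coefficients at level n-1
    a_curr = Phi_uv.T @ uv1   # reduced coefficients at level n
    for _ in range(n_steps):
        gamma = Phi_p.T @ F1(Phi_uv @ a_prev)
        a_next = a_prev + Phi_uv.T @ F(Phi_uv @ a_curr, Phi_p @ gamma)
        a_prev, a_curr = a_curr, a_next
    return Phi_uv @ a_curr    # lifted velocity vector at the final level

# With full (identity) bases the reduced march reproduces the full recursion.
rng = np.random.default_rng(1)
Muv, Mp, Cm = (0.1 * rng.standard_normal((4, 4)),
               0.1 * rng.standard_normal((4, 3)),
               rng.standard_normal((3, 4)))
F = lambda uv, p: Muv @ uv + Mp @ p
F1 = lambda uv: Cm @ uv
uv0, uv1 = rng.standard_normal(4), rng.standard_normal(4)
red = reduced_two_step(np.eye(4), np.eye(3), F, F1, uv0, uv1, 5)
```

When Φ_uv and Φ_p are truncated rather than identities, the same loop costs only O(M_u + M_v + M_p) unknowns per level, which is the point of the scheme derived next.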
Set

  (u_m^{*n}, v_m^{*n}, p_m^{*n})^T = (Φ_uα_{M_u}^n, Φ_vβ_{M_v}^n, Φ_pγ_{M_p}^n)^T,  (1.3.11)

where u_m^{*n} = (u_1^{*n}, u_2^{*n}, ···, u_m^{*n})^T, v_m^{*n} = (v_1^{*n}, v_2^{*n}, ···, v_m^{*n})^T, and p_m^{*n} = (p_1^{*n}, p_2^{*n}, ···, p_m^{*n})^T are three column vectors corresponding to u, v, and p, respectively, and α_{M_u}^n = (α_1, α_2, ···, α_{M_u})^T, β_{M_v}^n = (β_1, β_2, ···, β_{M_v})^T, and γ_{M_p}^n = (γ_1, γ_2, ···, γ_{M_p})^T. If u_m^n, v_m^n, p_m^n in (1.3.10) are approximately replaced with u_m^{*n}, v_m^{*n}, p_m^{*n} of (1.3.11) (n = 0, 1, 2, ···, N), then, noting that the three matrices Φ_u, Φ_v, and Φ_p are formed with orthonormal eigenvectors, the PODROEFD scheme for the channel flow with local expansion, with M_u + M_v + M_p (M_u, M_v, M_p ≪ L ≪ m) unknowns, is denoted by

  α_{M_u}^n = Φ_u^Tu_m^n,  β_{M_v}^n = Φ_v^Tv_m^n,  γ_{M_p}^n = Φ_p^Tp_m^n,  n = 1, 2, ···, L,
  γ_{M_p}^n = Φ_p^TF̃₁(Φ_uα_{M_u}^{n−1}, Φ_vβ_{M_v}^{n−1}),  n = L, L+1, ···, N,
  (α_{M_u}^{n+1}, β_{M_v}^{n+1})^T = (α_{M_u}^{n−1}, β_{M_v}^{n−1})^T + G̃(α_{M_u}^n, β_{M_v}^n, γ_{M_p}^n),  n = L, L+1, ···, N − 1,   (1.3.12)

where G̃(α_{M_u}^n, β_{M_v}^n, γ_{M_p}^n) = (Φ_u^T, Φ_v^T)^TF̃(Φ_uα_{M_u}^n, Φ_vβ_{M_v}^n, Φ_pγ_{M_p}^n). After α_{M_u}^n, β_{M_v}^n, and γ_{M_p}^n are obtained from (1.3.12), the reduced-order FD solutions of the PODROEFD scheme are obtained by

  u_m^{*n} = Φ_uα_{M_u}^n,  v_m^{*n} = Φ_vβ_{M_v}^n,  p_m^{*n} = Φ_pγ_{M_p}^n,  n = 0, 1, 2, ···, N.  (1.3.13)

Thus, in component form, u_{j+1/2,k}^{*n} = u_i^{*n}, v_{j,k+1/2}^{*n} = v_i^{*n}, and p_{j,k}^{*n} = p_i^{*n} (0 ⩽ j ⩽ J, 0 ⩽ k ⩽ K, i = k(J+1) + j + 1, 1 ⩽ i ⩽ m = (K+1)(J+1)); the PODROEFD scheme with second-order accuracy is attained.

Remark 1.3.2. Since the classical FD scheme (1.3.3)–(1.3.5) at each time level includes 3m degrees of freedom, while the system of Eqs. (1.3.12) and (1.3.13) at each time level (when n > L) contains only M_u + M_v + M_p degrees of freedom (M_u, M_v, M_p ≪ L ≪ m; for example, in Section 1.3.6, L = 20, M_u = M_v = M_p = 6, but m = 136 × 10⁴), the system of Eqs.
(1.3.12) and (1.3.13) is the PODROEFD scheme with far fewer degrees of freedom and second-order accuracy, and it involves no repetitive computations. Thus, it has the same advantages and efficiency as the PODROEFD scheme of Section 1.2.

1.3.4 Error Estimates and a Criterion for Renewing the POD Basis

In the following, we estimate the errors between the reduced-order FD solutions of (1.3.12) and (1.3.13) and the classical FD solutions of (1.3.3)–(1.3.5).
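As a companion sketch (hypothetical helper names, not the book's code), one extrapolation step of (1.3.12) lifts the reduced coordinates with the basis matrices $\Phi_u$, $\Phi_v$, $\Phi_p$ (assumed here to have orthonormal columns), applies the full-order operators, and projects the increment back:

```python
import numpy as np

def reduced_step(alpha, beta, alpha_prev, beta_prev,
                 Phi_u, Phi_v, Phi_p, F1, F):
    """One step of the PODROEFD scheme (1.3.12): gamma^n is recovered from
    the lifted previous velocities, then (alpha, beta) are advanced by the
    projected full-order increment G~ = (Phi_u^T, Phi_v^T) F~."""
    gamma = Phi_p.T @ F1(Phi_u @ alpha_prev, Phi_v @ beta_prev)
    du, dv = F(Phi_u @ alpha, Phi_v @ beta, Phi_p @ gamma)
    alpha_next = alpha_prev + Phi_u.T @ du
    beta_next = beta_prev + Phi_v.T @ dv
    return alpha_next, beta_next, gamma
```

Only the $M_u + M_v + M_p$ reduced coordinates are advanced in time; the full fields are recovered afterwards via (1.3.13).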
Proper Orthogonal Decomposition Methods for Partial Differential Equations

By using (1.3.13), we write the second and third equations of (1.3.12) in the following vector form:
$$p_m^{*n} = \tilde F_1(u_m^{*n-1}, v_m^{*n-1}), \quad L+1 \le n \le N,$$
$$\big(u_m^{*n+1}, v_m^{*n+1}\big)^T = \big(u_m^{*n-1}, v_m^{*n-1}\big)^T + \tilde F(u_m^{*n}, v_m^{*n}, p_m^{*n}), \quad L \le n \le N-1, \qquad (1.3.14)$$
where the stability conditions $\Delta t\,Re \le 4$ and $4\Delta t \le \min\{Re\,\Delta x^2, Re\,\Delta y^2\}$ are again required (see [34] or [192]).

Set $e^n = (u_m^n, v_m^n, p_m^n)^T - (u_m^{*n}, v_m^{*n}, p_m^{*n})^T$. By the first equation of (1.3.12) and (1.3.13), we obtain
$$e^n = (u^n, v^n, p^n)^T - \big(\Phi_u \Phi_u^T u^n, \Phi_v \Phi_v^T v^n, \Phi_p \Phi_p^T p^n\big)^T, \quad n = 1, 2, \cdots, L. \qquad (1.3.15)$$
From (1.3.9) and (1.3.15), we obtain
$$\|e^n\|_2 \le \sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}, \quad n = 1, 2, \cdots, L. \qquad (1.3.16)$$
By (1.3.10) and (1.3.14), we have
$$\|e^{n+1}\|_2 \le \|e^{n-1}\|_2 + M\|e^n\|_2, \quad n = L, L+1, \cdots, N-1, \qquad (1.3.17)$$
where $M = \max\{\Delta t\,Re/4,\ 4\Delta t/(Re\,\Delta x^2),\ 4\Delta t/(Re\,\Delta y^2)\} \le 1$ under the stability conditions $\Delta t\,Re \le 4$ and $4\Delta t \le \min\{Re\,\Delta x^2, Re\,\Delta y^2\}$. If $\Delta t = o(\Delta x, \Delta y, Re\,\Delta x^2, Re\,\Delta y^2)$, then $M$ is a small positive constant.

Summing (1.3.17) from $L$ to $n-1$ yields
$$\|e^n\|_2 \le \|e^{L-1}\|_2 + \|e^L\|_2 + M\sum_{i=L}^{n-1}\|e^i\|_2, \quad n = L+1, L+2, \cdots, N. \qquad (1.3.18)$$
If we write $\xi_n = M\sum_{i=L}^{n-1}\|e^i\|_2 + \|e^{L-1}\|_2 + \|e^L\|_2$ and $\delta = M + 1$, then we have $\|e^n\|_2 \le \xi_n$ and $\xi_n - \xi_{n-1} = M\|e^{n-1}\|_2$ ($n \ge L+1$). Therefore, we get
$$\xi_n \le (M+1)\xi_{n-1} = \delta\xi_{n-1} \le \delta^2\xi_{n-2} \le \cdots \le \delta^{n-L}\xi_L = \delta^{n-L}\big(\|e^{L-1}\|_2 + \|e^L\|_2\big). \qquad (1.3.19)$$
Set $C(\delta^n) = 2\delta^{n-L}$. We obtain from (1.3.16)
$$\|e^n\|_2 \le 2\delta^{n-L}\big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\big] = C(\delta^n)\big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\big], \qquad (1.3.20)$$
where $n = L+1, L+2, \ldots, N$. Synthesizing the above discussions yields the following result.
Theorem 1.3.1. If $(u_m^n, v_m^n, p_m^n)^T$ ($n = 1, 2, \cdots, N$) are the solution vectors formed by the solutions of the classical FD scheme (1.3.3)–(1.3.5) and $(u_m^{*n}, v_m^{*n}, p_m^{*n})^T$ ($n = 1, 2, \cdots, N$) are the reduced-order FD solutions of the PODROEFD scheme (1.3.12) and (1.3.13), then we have the following error estimates:
$$\big\|(u_m^n, v_m^n, p_m^n) - (u_m^{*n}, v_m^{*n}, p_m^{*n})\big\|_2 \le C(\delta^n)\big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\big], \quad n = 1, 2, \cdots, N,$$
where $C(\delta^n) = 1$ ($1 \le n \le L$) and $C(\delta^n) = 2(1+M)^{n-L}$ ($L+1 \le n \le N$), together with $M = \max\{\Delta t\,Re/4,\ 4\Delta t/(Re\,\Delta x^2),\ 4\Delta t/(Re\,\Delta y^2)\}$.

Since the absolute value of each vector component is less than its norm, combining (1.3.6) with Theorem 1.3.1 yields the following result.

Theorem 1.3.2. The exact solution of the nonstationary Stokes equation and the reduced-order FD solutions obtained from the PODROEFD scheme (1.3.12) and (1.3.13) satisfy the following error estimates: for $1 \le n \le N$,
$$|u(x_{j+1/2}, y_k, t_n) - u_{j+1/2,k}^{*n}| + |v(x_j, y_{k+1/2}, t_n) - v_{j,k+1/2}^{*n}| + |p(x_j, y_k, t_n) - p_{j,k}^{*n}| = O\Big(C(\delta^n)\big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\big],\ \Delta t^2, \Delta x^2, \Delta y^2\Big).$$

Remark 1.3.3. The error estimates of Theorem 1.3.2 give a guide for choosing the number of POD basis vectors, namely, taking $M_u$, $M_v$, and $M_p$ such that $\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}} = O(\Delta t^2, \Delta x^2, \Delta y^2)$. The factors $C(\delta^n) = 2(1+M)^{n-L}$ ($L+1 \le n \le N$) are caused by the extrapolating iteration, and they may act as a guide for renewing the POD bases, namely, when $C(\delta^n)\big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\big] > \max(\Delta t^2, \Delta x^2, \Delta y^2)$, it is time to renew the POD bases.

1.3.5 Implementation of the POD-Based Reduced-Order Extrapolating Finite Difference Scheme

The implementation of the PODROEFD scheme (1.3.12) and (1.3.13) consists of the following five steps.

Step 1.
Classical FD computation and formulation of snapshots. Solving the classical FD scheme (1.3.3)–(1.3.5) at the first few $L$ steps (empirically, say, take $L = 20$) yields the classical FD solutions $u_{j+1/2,k}^n$, $v_{j,k+1/2}^n$, and $p_{j,k}^n$ ($0 \le j \le J$, $0 \le k \le K$, $1 \le n \le L$) and further produces a set of snapshots $\{u_i^l, v_i^l, p_i^l\}_{l=1}^{L}$ ($1 \le i \le m$) with $L \times m$ elements, where $u_i^n = u_{j+1/2,k}^n$, $v_i^n = v_{j,k+1/2}^n$, and $p_i^n = p_{j,k}^n$ ($i = k(J+1) + j + 1$, $0 \le j \le J$, $0 \le k \le K$, $1 \le i \le m = (J+1)(K+1)$), respectively.
Step 2. Snapshot matrices $A_s$ and eigenvalues of $A_s^T A_s$. Let $s_m^n = (s_1^n, s_2^n, \cdots, s_m^n)^T$ ($s = u, v, p$; $n = 1, 2, \cdots, N$). Formulate the snapshot matrices $A_s = (s_i^l)_{m \times L}$ ($s = u, v, p$) and find the eigenvalues $\lambda_{s1} \ge \lambda_{s2} \ge \cdots \ge \lambda_{s\tilde M_s} > 0$ ($\tilde M_s = \operatorname{rank} A_s$) and the corresponding eigenvectors $\varphi_{sj}$ ($j = 1, 2, \cdots, \tilde M_s$; $s = u, v, p$) of $A_s^T A_s$.

Step 3. Choice of POD bases. For the desired error tolerance $\mu = O(\Delta t^2, \Delta x^2, \Delta y^2)$, decide the numbers $M_s$ ($M_s \le \tilde M_s$, $s = u, v, p$) of POD basis vectors such that $\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}} \le \mu$, and formulate the POD bases $\Phi_s = (\phi_{s1}, \phi_{s2}, \cdots, \phi_{sM_s})$, where $\phi_{sj} = A_s \varphi_{sj}/\sqrt{\lambda_{sj}}$ ($j = 1, 2, \cdots, M_s$; $s = u, v, p$).

Step 4. Solve the PODROEFD model. Solving the PODROEFD scheme (1.3.12) and (1.3.13) yields the reduced-order solution vectors $u_m^{*n} = (u_1^{*n}, u_2^{*n}, \cdots, u_m^{*n})$, $v_m^{*n} = (v_1^{*n}, v_2^{*n}, \cdots, v_m^{*n})$, and $p_m^{*n} = (p_1^{*n}, p_2^{*n}, \cdots, p_m^{*n})$; further, the process produces the components $u_{j+1/2,k}^{*n} = u_i^{*n}$, $v_{j,k+1/2}^{*n} = v_i^{*n}$, and $p_{j,k}^{*n} = p_i^{*n}$ ($0 \le j \le J$, $0 \le k \le K$, $i = k(J+1) + j + 1$, $1 \le i \le m = (K+1)(J+1)$).

Step 5. Check accuracy and renew the POD bases to continue. Set $M = \max\{0.25\Delta t\,Re,\ 4\Delta t/(Re\,\Delta x^2),\ 4\Delta t/(Re\,\Delta y^2)\}$. If
$$2(1+M)^{n-L}\big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\big] \le \mu,$$
then $u_m^{*n} = (u_1^{*n}, \cdots, u_m^{*n})$, $v_m^{*n} = (v_1^{*n}, \cdots, v_m^{*n})$, and $p_m^{*n} = (p_1^{*n}, \cdots, p_m^{*n})$ ($n = 1, 2, \cdots, N$) are exactly the solutions satisfying the desired accuracy. Otherwise, namely, if
$$2(1+M)^{n-L}\big[\sqrt{\lambda_{u(M_u+1)}} + \sqrt{\lambda_{v(M_v+1)}} + \sqrt{\lambda_{p(M_p+1)}}\big] > \mu,$$
put $(s_1^l, s_2^l, \cdots, s_m^l) = (s_1^{*(n-l)}, s_2^{*(n-l)}, \cdots, s_m^{*(n-l)})$ ($l = 1, 2, \cdots, L$; $s = u, v, p$) and return to Step 2.

Remark 1.3.4.
Step 5 could be adapted as follows: if $\|u_m^{*n-1} - u_m^{*n}\|_2 \ge \|u_m^{*n} - u_m^{*n+1}\|_2$, $\|v_m^{*n-1} - v_m^{*n}\|_2 \ge \|v_m^{*n} - v_m^{*n+1}\|_2$, and $\|p_m^{*n-1} - p_m^{*n}\|_2 \ge \|p_m^{*n} - p_m^{*n+1}\|_2$ ($n = L, L+1, \cdots, N-1$), then $(u_m^{*n}, v_m^{*n}, p_m^{*n})$ ($n = 1, 2, \cdots, N$) are the reduced-order solution vectors of the PODROEFD scheme (1.3.12) and (1.3.13) satisfying the desired accuracy. Otherwise, namely, if $\|u_m^{*n-1} - u_m^{*n}\|_2 < \|u_m^{*n} - u_m^{*n+1}\|_2$, $\|v_m^{*n-1} - v_m^{*n}\|_2 < \|v_m^{*n} - v_m^{*n+1}\|_2$, or $\|p_m^{*n-1} - p_m^{*n}\|_2 < \|p_m^{*n} - p_m^{*n+1}\|_2$ ($n = L, L+1, \cdots, N-1$), let $s_m^i = s_m^{*(n-i)}$ ($i = 1, 2, \cdots, L$; $s = u, v, p$) and return to Step 2.
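Steps 2, 3, and 5 above can be sketched for a single field as follows (a minimal illustration, not the authors' code; `mu` stands for the tolerance $\mu$ and `M_const` for the stability constant $M$ of Step 5):

```python
import numpy as np

def pod_basis(A, mu):
    """Steps 2-3 for one field: eigen-decompose A^T A (A is the m x L
    snapshot matrix), order the eigenpairs nonincreasingly, and keep the
    smallest number of modes with sqrt(lambda_{M+1}) <= mu."""
    lam, V = np.linalg.eigh(A.T @ A)            # eigh returns ascending order
    order = np.argsort(lam)[::-1]               # nonincreasing, as in Step 2
    lam, V = np.clip(lam[order], 0.0, None), V[:, order]
    num = len(lam)
    for j in range(len(lam) - 1):
        if np.sqrt(lam[j + 1]) <= mu:           # truncation criterion of Step 3
            num = j + 1
            break
    Phi = A @ V[:, :num] / np.sqrt(lam[:num])   # phi_j = A * varphi_j / sqrt(lambda_j)
    return Phi, lam, num

def needs_renewal(n, L, M_const, lam, num, mu):
    """Step 5 check: renew the basis once 2(1+M)^(n-L) * sqrt(lambda_{num+1}) > mu."""
    tail = np.sqrt(lam[num]) if num < len(lam) else 0.0
    return 2.0 * (1.0 + M_const) ** (n - L) * tail > mu
```

Because $\Phi = A V \Lambda^{-1/2}$ with $V$ the orthonormal eigenvectors of $A^T A$, the columns of `Phi` are automatically orthonormal, as required by the projection steps in (1.3.12).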
FIGURE 1.3.2 When $Re = 1000$, the top chart (A) and the bottom chart (B) are the contours of the classical FD solution and the reduced-order FD solution of the velocity $(u,v)$ at the time instant $t = 9$, respectively.

1.3.6 Some Numerical Experiments for the 2D Nonstationary Stokes Equation

In the following, we present some numerical experiments of the channel flow with local expansion, i.e., with two square protrusions at the middle top and bottom of the channel, to validate the feasibility and efficiency of the PODROEFD scheme with second-order accuracy and to show that the numerical results are consistent with the theoretical estimates.

We assume that the computational domain $\bar\Omega$ is given as in Fig. 1.3.1. Take $Re = 1000$ and $f = g = 0$. Except for the inflow on the left boundary with the periodic flow velocity $(u,v) = (0.1(y-2)(8-y)\sin 2\pi t,\ 0)$ ($2 \le y \le 8$) and the outflow on the right boundary with velocity $(u,v)$ satisfying $v = 0$ and $\partial u/\partial x = 0$, all initial and boundary value conditions are taken as 0. We divide $\bar\Omega$ into a mesh by taking the spatial step increments $\Delta x = \Delta y = 10^{-2}$ and then taking the time step increment $\Delta t = 0.001$. We obtained a numerical solution $(u_{j+1/2,k}^n, v_{j,k+1/2}^n)$ of the velocity $(u,v)$ and a numerical solution $p_{j,k}^n$ of the pressure $p$ by the classical FD scheme (1.3.3)–(1.3.5) with $n = 9000$ (i.e., $t = 9$); they are depicted graphically as the contours in Figs. 1.3.2(A) and 1.3.3(A), respectively.

We employed only the first $L = 20$ numerical solutions $(u_{j+1/2,k}^n, v_{j,k+1/2}^n, p_{j,k}^n)$ ($n = 1, 2, \cdots, 20$) from the classical FD scheme, forming 20 snapshot vectors $(u_m^n, v_m^n, p_m^n)$ ($n = 1, 2, \cdots, 20$; $m = 136 \times 10^4$).
Afterwards, by Step 2 in Section 1.3.5, we found three groups of 20 eigenvalues $\lambda_{uj}$, $\lambda_{vj}$, and $\lambda_{pj}$ ($j = 1, 2, \cdots, 20$), arranged in nonincreasing order, and three groups of 20 eigenvectors $\varphi_{uj}$, $\varphi_{vj}$, and $\varphi_{pj}$ ($j = 1, 2, \cdots, 20$) corresponding to the 20 eigenvalues. By computing, we found that
FIGURE 1.3.3 When $Re = 1000$, the top chart (A) and the bottom chart (B) are the classical FD solution and the reduced-order FD solution of the pressure field $p$ at the time instant $t = 9$, respectively.

the eigenvalues satisfy $\sqrt{\lambda_{u7}} + \sqrt{\lambda_{v7}} + \sqrt{\lambda_{p7}} \le 4 \times 10^{-4}$, which guides us to take the first six eigenvectors of each of the three groups as POD bases by Step 3 in Section 1.3.5, i.e., $\Phi_u = (\phi_{u1}, \phi_{u2}, \cdots, \phi_{u6})$ (with $\phi_{uj} = A_u \varphi_{uj}/\sqrt{\lambda_{uj}}$, $j = 1, 2, \cdots, 6$), $\Phi_v = (\phi_{v1}, \phi_{v2}, \cdots, \phi_{v6})$ (with $\phi_{vj} = A_v \varphi_{vj}/\sqrt{\lambda_{vj}}$, $j = 1, 2, \cdots, 6$), and $\Phi_p = (\phi_{p1}, \phi_{p2}, \cdots, \phi_{p6})$ (with $\phi_{pj} = A_p \varphi_{pj}/\sqrt{\lambda_{pj}}$, $j = 1, 2, \cdots, 6$). Finally, when $n = 9000$ (i.e., at $t = 9$) and $Re = 10^3$, the numerical solutions $u_{j+1/2,k}^{*n}$, $v_{j,k+1/2}^{*n}$, and $p_{j,k}^{*n}$ are obtained from the PODROEFD scheme by Step 4 in Section 1.3.5. It is necessary to renew the POD bases once, at $t = 5$. The reduced-order FD solution at $t = 9$ is obtained and depicted graphically in Figs. 1.3.2(B) and 1.3.3(B), respectively. Each pair of charts (A) and (B) in Figs. 1.3.2 and 1.3.3 exhibits close similarity, but the reduced-order FD solutions obtained from the PODROEFD scheme are computed with higher efficiency than the classical FD solutions, since the PODROEFD scheme needs far fewer degrees of freedom and thus can also significantly reduce the truncation error (TE) accumulation in the computational process.

Fig. 1.3.4 shows the mean absolute errors (MAEs) between the reduced-order FD solutions obtained from the PODROEFD scheme with second-order accuracy and different numbers of POD basis vectors and the solutions obtained from the classical FD scheme (1.3.3)–(1.3.5) when $Re = 1000$ at $t = 9$.
Comparing the classical FD scheme with the PODROEFD scheme containing six optimal POD basis vectors and implementing the numerical simulations with $Re = 1000$ at $t = 9$, we found that, for the classical FD scheme (1.3.3)–(1.3.5), containing $3 \times 136 \times 10^4$ unknowns at each time level, the required computing time was about 48 minutes on a laptop, while, for the PODROEFD scheme with
FIGURE 1.3.4 The MAEs between the reduced-order FD solutions with different numbers of POD basis vectors and the classical FD solution when $Re = 1000$ at $t = 9$.

six optimal POD basis vectors, including only $3 \times 6$ unknowns at the same time level, the corresponding time was only 16 seconds on the same laptop, so the speed ratio is 180:1, while the errors between their solutions do not exceed $4 \times 10^{-4}$. Though our experiments in a sense recompute what has already been computed by the classical FD scheme at the first few $L = 20$ steps, when we compute actual problems we may construct the snapshots and the POD basis by drawing samples from experiments and then solve the PODROEFD scheme with second-order accuracy directly, so that it is unnecessary to solve the classical FD scheme (1.3.3)–(1.3.5) at all. Thus, great savings result in the time-consuming calculations and resource demands of the computational process. It has also been shown that finding the approximate solutions of the nonstationary Stokes equation using the PODROEFD scheme with second-order accuracy is computationally effective. The numerical results are consistent with the theoretical ones, as neither the theoretical nor the numerical errors exceed $4 \times 10^{-4}$.

In addition, if one uses the reduced-order FD scheme with first-order time accuracy as in [113] to find the reduced-order FD solution at $t = 9$, it is necessary to take the time step $k = 10^{-4}$ and carry out $9 \times 10^4$ steps in order to obtain the same accuracy as here. Thus, its computing load is 10 times that of the PODROEFD scheme with second-order accuracy, its TE accumulation in the computational process increases significantly, and it repeats the computations of the classical first-order-time-accuracy FD scheme on the same time interval $[0,T]$.
Therefore, the PODROEFD scheme with second-order accuracy presented here differs from the existing reduced-order schemes and offers an improvement over the existing reduced-order methods [2,5,11,19,38,40,45,48,51,52,57,58,62,63,91,113,118,122,128,135,137,141,155,156,166,167,170,171,184–186,199].
1.4 POD-BASED REDUCED-ORDER EXTRAPOLATING FINITE DIFFERENCE SCHEME FOR 2D SHALLOW WATER EQUATION

In this section, we employ the POD method to establish a PODROEFD scheme with very few degrees of freedom for the 2D shallow water equations (SWEs) with sediment concentration. We also provide the error estimates between the exact solution and the classical FD solutions as well as those between the exact solution and the PODROEFD solutions. Moreover, we present two numerical simulations to illustrate that the PODROEFD scheme can greatly reduce the computational load. Thus, both the feasibility and efficiency of the PODROEFD scheme are validated. The main reference for the work here is [96].

1.4.1 Model Background and Survey for the 2D Shallow Water Equation

A system of SWEs can be used to describe the propagation and evolution of short waves in shallow waters; it is also referred to as the Saint-Venant system (see [36]). It has extensive applications in ocean, environmental, and hydraulic engineering. In particular, in coastal engineering, [46] discusses applications to various problems in open-channel flows in rivers and reservoirs, tidal flows in estuary and coastal water regions, bore wave propagation, and the stationary hydraulic jump in rivers. Because the SWEs are a system of nonlinear PDEs, they generally have no analytical solutions, so one has to rely on numerical solutions.
Here we mention a number of references on the study of numerical solutions of the 2D SWEs including only the continuity equation and the momentum equation, modeling the effects of the water depth and the velocity of the fluid: for example, the finite volume (FV) method on unstructured triangular meshes in Anastasiou and Chan [9], the upwind methods in Bermudez and Vazquez [17], the parallel block preconditioning techniques in Cai and Navon [22], the optimal control technique of finite element (FE) limited-area in Chen and Navon [30], the least-squares FE method in Liang and Hsu [74], the FD Lax–Wendroff weighted essentially nonoscillatory (WENO) schemes in Lu and Qiu [77], the FE simulation technique in Navon [131], the FD WENO schemes in Qiu and Shu [136], the Roe approximate Riemann solver technique in Rogers et al. [142], the essentially nonoscillatory and WENO schemes with the exact conservation property in Vukovic and Sopta [172], the explicit multiconservation FD scheme in Wang [173], the composite FV method on unstructured meshes in Wang and Liu [174], the high-order FD WENO schemes in Xing and Shu [180], the high-order well-balanced FV WENO schemes and discontinuous Galerkin (DG) methods in Xing and Shu [181], the positivity-preserving high-order well-balanced DG methods in Xing et al. [182], the dispersion-correction FD scheme in Yoon et al. [189], the nonoscillatory FV method in Yuan and Song [190], and the
surface gradient method in Zhou et al. [196], and the total variation diminishing FD scheme in Wang et al. [175]. Nevertheless, the transport and sedimentation of silt and sand are important processes in causing change to the natural environment, such as the formation and evolution of a delta, the expansion of alluvial plains, the detouring of rivers, etc. They also cause some serious problems that should be carefully addressed in hydraulic works such as irrigation systems, transportation channels, hydroelectric stations, ports, and other coastal engineering works. A model for the 2D SWEs including sediment concentration is available in [191], with some numerical methods based on an optimal control approach (see [198]) and a mixed FE technique (see [125,126]).

It is well known that the model based on the classical FD scheme in [191] is one of the simplest and most convenient methods for solving the 2D SWEs with sediment concentration. However, it also contains many degrees of freedom, i.e., unknown quantities. Therefore, its computational complexity is also high, with the previously mentioned adverse effects in generating errors. It is advantageous to build a reduced-order FD scheme with sufficiently high accuracy and very few degrees of freedom. Here, we continue with the PODROEFD scheme for the 2D SWEs with sediment concentration.

Some POD-based reduced-order models for the 2D SWEs already exist (see, e.g., [29,151,152,199]), but these POD-based reduced-order models for the 2D SWEs have not included the sediment concentration effect. They also employ the numerical solutions obtained from the classical numerical methods on the entire time span $[0,T]$ to formulate the POD basis, build the POD-based reduced-order models, and recompute the solutions on the same time span $[0,T]$; these also belong to repeated computations.
Here we thoroughly improve the existing methods: we adopt only the first few snapshots of the given classical FD numerical solutions of the 2D SWEs on a very short time span $[0, T_0]$ ($T_0 \ll T$) to formulate the POD basis and build the PODROEFD scheme, before finding the numerical solutions on the total time span $[0, T]$ by extrapolation and iteration as well as POD basis updates. Thus, an important advantage of the POD method is kept, i.e., using the given data on a very short time span $[0, T_0]$ to predict the future physics on the whole time span $[T_0, T]$. So this study is useful as a motivating example for real-world computational problems and big data.

In the following, we first devote ourselves to the formulation of the snapshots and the POD basis from the classical FD solutions of the 2D SWEs with sediment concentration and to the PODROEFD scheme, and then we provide the error estimates of the solutions and the implementation of the algorithm for the PODROEFD scheme. We will provide two numerical simulation examples to verify the reliability and effectiveness of our PODROEFD scheme.
1.4.2 The Governing Equations and the Classical FD Scheme for the 2D Shallow Water Equation Including Sediment Concentration

Let $\Omega \subset R^2$ be a bounded and connected domain. The governing equations for the 2D SWEs with sediment concentration are known as follows (see [46,191], with notational adaptation):
$$\frac{\partial Z}{\partial t} + \frac{\partial (Zu)}{\partial x} + \frac{\partial (Zv)}{\partial y} = \gamma\Big(\frac{\partial^2 Z}{\partial x^2} + \frac{\partial^2 Z}{\partial y^2}\Big), \quad (x,y,t) \in \Omega \times (0,T), \qquad (1.4.1)$$
$$\frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} - fv = A\Big(\frac{\partial^2 u}{\partial x^2} + \frac{\partial^2 u}{\partial y^2}\Big) - g\frac{\partial (Z + z_b)}{\partial x} - \frac{C_D u\sqrt{u^2 + v^2}}{Z}, \quad (x,y,t) \in \Omega \times (0,T), \qquad (1.4.2)$$
$$\frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + fu = A\Big(\frac{\partial^2 v}{\partial x^2} + \frac{\partial^2 v}{\partial y^2}\Big) - g\frac{\partial (Z + z_b)}{\partial y} - \frac{C_D v\sqrt{u^2 + v^2}}{Z}, \quad (x,y,t) \in \Omega \times (0,T), \qquad (1.4.3)$$
$$\frac{\partial S}{\partial t} + u\frac{\partial S}{\partial x} + v\frac{\partial S}{\partial y} = \varepsilon\Big(\frac{\partial^2 S}{\partial x^2} + \frac{\partial^2 S}{\partial y^2}\Big) + \frac{\alpha\omega(S - S^*)}{Z}, \quad (x,y,t) \in \Omega \times (0,T), \qquad (1.4.4)$$
$$\frac{\partial z_b}{\partial t} + g_b\Big(\frac{\partial u}{\partial x} + \frac{\partial v}{\partial y}\Big) = \frac{\alpha\omega(S - S^*)}{\rho}, \quad (x,y,t) \in \Omega \times (0,T), \qquad (1.4.5)$$
where $\gamma$ (m²/s) and $A$ (m²/s) are two viscosity coefficients, $(u,v)$ (m/s) is the velocity vector, $Z = z - z_b$ (m) the water depth, $z$ (m) the surface height, $z_b$ (m) the height of the river bed (see Fig. 1.4.1), $f$ (1/s) the Coriolis constant, $g$ (m/s²) the gravitational constant, $C_D$ (nondimensional) the coefficient of bottom drag, $\varepsilon$ (m²/s) the diffusion coefficient of sand, $\omega$ (m/s) the falling speed of suspended sediment particles, $S$ (kg/m³) the concentration of sediment in water, $\rho$ (kg/m³) the density of dry sand (taken as a constant), $\alpha$ (nondimensional) the constant of sediment variety, $S^* = K[(u^2+v^2)^{3/2}/(g\omega Z)]^l$ the capability of sediment transport in a bottom bed (a given empirical function), $g_b = (u^2+v^2)^{3/2} Z^p d^q [1 - v_c/(u^2+v^2)^{1/2}]$ also a given empirical function, $v_c$ (m/s) the velocity of sediment mass transport (a given function, too), $d$ (m) the diameter of sediment, and $K$ (kg/m³), $l$ (nondimensional), (s³/m²), $p$ (nondimensional), and $q = -p$ are all empirical constants.

The boundary conditions are assumed as follows:
$$Z(x,y,t) = Z_0(x,y,t), \quad u(x,y,t) = u_0(x,y,t), \quad v(x,y,t) = v_0(x,y,t),$$
$$S(x,y,t) = S_0(x,y,t), \quad z_b(x,y,t) = z_{b0}(x,y,t), \quad (x,y,t) \in \partial\Omega \times (0,T), \qquad (1.4.6)$$
where $Z_0(x,y,t)$, $u_0(x,y,t)$, $v_0(x,y,t)$, $S_0(x,y,t)$, and $z_{b0}(x,y,t)$ are all given functions.

FIGURE 1.4.1 Water profile.

The initial conditions are assumed as follows:
$$Z(x,y,0) = Z^0(x,y), \quad u(x,y,0) = u^0(x,y), \quad v(x,y,0) = v^0(x,y),$$
$$S(x,y,0) = S^0(x,y), \quad z_b(x,y,0) = z_b^0(x,y), \quad (x,y) \in \Omega, \qquad (1.4.7)$$
where $Z^0(x,y)$, $u^0(x,y)$, $v^0(x,y)$, $S^0(x,y)$, and $z_b^0(x,y)$ are also all given functions.

Let $\Delta t$ be the time step, let $\Delta x$ and $\Delta y$ be the spatial steps, and let $N = T/\Delta t$. By discretizing (1.4.1), (1.4.4), and (1.4.5) at a reference point $(x_j, y_k, t_n)$, (1.4.2) at a reference point $(x_{j+1/2}, y_k, t_n)$, and (1.4.3) at a reference point $(x_j, y_{k+1/2}, t_n)$, we obtain the classical FD scheme for the 2D SWEs with sediment concentration as follows:
$$Z_{j,k}^{n+1} = Z_{j,k}^n + \Delta t\,\gamma\Big(\frac{Z_{j+1,k}^n - 2Z_{j,k}^n + Z_{j-1,k}^n}{\Delta x^2} + \frac{Z_{j,k+1}^n - 2Z_{j,k}^n + Z_{j,k-1}^n}{\Delta y^2}\Big) - \Delta t\Big(\frac{u_{j+1/2,k}^n Z_{j+1/2,k}^n - u_{j-1/2,k}^n Z_{j-1/2,k}^n}{\Delta x} + \frac{v_{j,k+1/2}^n Z_{j,k+1/2}^n - v_{j,k-1/2}^n Z_{j,k-1/2}^n}{\Delta y}\Big), \qquad (1.4.8)$$
$$u_{j+1/2,k}^{n+1} = u_{j+1/2,k}^n + \Delta t\,A\Big(\frac{u_{j+3/2,k}^n - 2u_{j+1/2,k}^n + u_{j-1/2,k}^n}{\Delta x^2} + \frac{u_{j+1/2,k+1}^n - 2u_{j+1/2,k}^n + u_{j+1/2,k-1}^n}{\Delta y^2}\Big) - \Delta t\Big(\frac{u_{j+1/2,k}^n (u_{j+1,k}^n - u_{j,k}^n)}{\Delta x} + \frac{v_{j+1/2,k}^n (u_{j+1/2,k+1/2}^n - u_{j+1/2,k-1/2}^n)}{\Delta y}\Big) - g\Delta t\,\frac{Z_{j+1,k}^n + z_{b,j+1,k}^n - Z_{j,k}^n - z_{b,j,k}^n}{\Delta x} + \Delta t\,(fv)_{j+1/2,k}^n - \frac{\Delta t\,C_D u_{j+1/2,k}^n \sqrt{(u_{j+1/2,k}^n)^2 + (v_{j+1/2,k}^n)^2}}{Z_{j+1/2,k}^n}, \qquad (1.4.9)$$
$$v_{j,k+1/2}^{n+1} = v_{j,k+1/2}^n + \Delta t\,A\Big(\frac{v_{j+1,k+1/2}^n - 2v_{j,k+1/2}^n + v_{j-1,k+1/2}^n}{\Delta x^2} + \frac{v_{j,k+3/2}^n - 2v_{j,k+1/2}^n + v_{j,k-1/2}^n}{\Delta y^2}\Big) - \Delta t\Big(\frac{u_{j,k+1/2}^n (v_{j+1/2,k+1/2}^n - v_{j-1/2,k+1/2}^n)}{\Delta x} + \frac{v_{j,k+1/2}^n (v_{j,k+1}^n - v_{j,k}^n)}{\Delta y}\Big) - g\Delta t\,\frac{Z_{j,k+1}^n + z_{b,j,k+1}^n - Z_{j,k}^n - z_{b,j,k}^n}{\Delta y} - \Delta t\,(fu)_{j,k+1/2}^n - \frac{\Delta t\,C_D v_{j,k+1/2}^n \sqrt{(u_{j,k+1/2}^n)^2 + (v_{j,k+1/2}^n)^2}}{Z_{j,k+1/2}^n}, \qquad (1.4.10)$$
$$S_{j,k}^{n+1} = S_{j,k}^n + \Delta t\,\varepsilon\Big(\frac{S_{j+1,k}^n - 2S_{j,k}^n + S_{j-1,k}^n}{\Delta x^2} + \frac{S_{j,k+1}^n - 2S_{j,k}^n + S_{j,k-1}^n}{\Delta y^2}\Big) - \frac{\Delta t\,u_{j,k}^n (S_{j+1/2,k}^n - S_{j-1/2,k}^n)}{\Delta x} - \frac{\Delta t\,v_{j,k}^n (S_{j,k+1/2}^n - S_{j,k-1/2}^n)}{\Delta y} + \frac{\alpha\omega\Delta t\,(S_{j,k}^n - S_{j,k}^{*n})}{Z_{j,k}^n}, \qquad (1.4.11)$$
$$z_{b,j,k}^{n+1} = z_{b,j,k}^n - \frac{\Delta t\,g_{b,j,k}^n (u_{j+1/2,k}^n - u_{j-1/2,k}^n)}{\Delta x} - \frac{\Delta t\,g_{b,j,k}^n (v_{j,k+1/2}^n - v_{j,k-1/2}^n)}{\Delta y} + \frac{\alpha\omega\Delta t\,(S_{j,k}^n - S_{j,k}^{*n})}{\rho}, \qquad (1.4.12)$$
where $n = 1, 2, \cdots, N$, $j = 1, 2, \cdots, J$, $k = 1, 2, \cdots, K$, $J = \max\{[|x_1 - x_2|/\Delta x] : (x_1, y), (x_2, y) \in \Omega\}$, and $K = \max\{[|y_1 - y_2|/\Delta y] : (x, y_1), (x, y_2) \in \Omega\}$.

In order to prove the stability of the FD scheme (1.4.8)–(1.4.12), it is necessary to introduce the following discrete Gronwall lemma (see [80]).

Lemma 1.4.1 (The discrete Gronwall lemma). If $\{a_n\}$ and $\{b_n\}$ are two nonnegative sequences and $\{c_n\}$ is a positive monotone sequence that satisfy
$$a_n + b_n \le c_n + \bar\lambda \sum_{i=0}^{n-1} a_i \ (\bar\lambda > 0), \quad a_0 + b_0 \le c_0,$$
then
$$a_n + b_n \le c_n \exp(n\bar\lambda), \quad n = 0, 1, 2, \cdots.$$
For the classical FD scheme (1.4.8)–(1.4.12), we have the following result.
Theorem 1.4.2. Under the conditions $\Delta t \cdot (|u| + |v|) \le \min\{4\gamma, 4\varepsilon, 4A\}$ and $4\Delta t \max\{\gamma, A, \varepsilon\} \le \min\{\Delta x^2, \Delta y^2\}$, the classical FD scheme (1.4.8)–(1.4.12) is locally stable. Further, we have the following error estimates:
$$|Z(x_j, y_k, t_n) - Z_{j,k}^n| + |u(x_{j+1/2}, y_k, t_n) - u_{j+1/2,k}^n| + |v(x_j, y_{k+1/2}, t_n) - v_{j,k+1/2}^n| + |S(x_j, y_k, t_n) - S_{j,k}^n| + |z_b(x_j, y_k, t_n) - z_{b,j,k}^n| = O(\Delta t, \Delta x^2, \Delta y^2), \quad 1 \le n \le N,\ 1 \le j \le J,\ 1 \le k \le K. \qquad (1.4.13)$$

Proof. If $\gamma\Delta t/\Delta x^2 \le 1/4$, $\gamma\Delta t/\Delta y^2 \le 1/4$, and $4\Delta t(|u| + |v|) \le \min\{\gamma, \varepsilon, A\}$, implying $4\Delta t(\|u\|_\infty + \|v\|_\infty) \le \min\{\gamma, \varepsilon, A\}$, then by (1.4.8) we have
$$|Z_{j,k}^{n+1}| \le \Big(1 - \frac{2\gamma\Delta t}{\Delta x^2} - \frac{2\gamma\Delta t}{\Delta y^2}\Big)|Z_{j,k}^n| + \frac{\gamma\Delta t}{\Delta x^2}\big(|Z_{j+1,k}^n| + |Z_{j-1,k}^n|\big) + \frac{\gamma\Delta t}{\Delta y^2}\big(|Z_{j,k+1}^n| + |Z_{j,k-1}^n|\big) + \frac{\Delta t}{\Delta x}\big(|u_{j+1/2,k}^n| \cdot |Z_{j+1/2,k}^n| + |u_{j-1/2,k}^n| \cdot |Z_{j-1/2,k}^n|\big) + \frac{\Delta t}{\Delta y}\big(|v_{j,k+1/2}^n| \cdot |Z_{j,k+1/2}^n| + |v_{j,k-1/2}^n| \cdot |Z_{j,k-1/2}^n|\big) \le \Big(1 + \frac{2\Delta t}{\Delta x}\|u\|_\infty + \frac{2\Delta t}{\Delta y}\|v\|_\infty\Big)\|Z^n\|_\infty \le \Big(1 + \frac{\gamma}{2\Delta x} + \frac{\gamma}{2\Delta y}\Big)\|Z^n\|_\infty, \qquad (1.4.14)$$
where $\|\cdot\|_\infty$ is the $L^\infty(\Omega)$ norm. Thus, from (1.4.14), we obtain
$$\|Z^n\|_\infty \le \Big(1 + \frac{\gamma}{2\Delta x} + \frac{\gamma}{2\Delta y}\Big)\|Z^{n-1}\|_\infty, \quad n = 1, 2, \cdots, N. \qquad (1.4.15)$$
By summing (1.4.15) from 1 to $n$, we obtain
$$\|Z^n\|_\infty \le \|Z^0\|_\infty + \Big(\frac{\gamma}{2\Delta x} + \frac{\gamma}{2\Delta y}\Big)\sum_{j=0}^{n-1}\|Z^j\|_\infty, \quad n = 1, 2, \cdots, N. \qquad (1.4.16)$$
By applying the discrete Gronwall lemma (Lemma 1.4.1) to (1.4.16), we obtain
$$\|Z^n\|_\infty \le \|Z^0\|_\infty \exp\Big(\frac{n\gamma}{2\Delta x} + \frac{n\gamma}{2\Delta y}\Big), \quad n = 1, 2, \cdots, N, \qquad (1.4.17)$$
showing that the sequence $\{Z^{n+1}\}$ is locally stable when the time interval $[0,T]$ is finite. Further, it is convergent from the stability theories of FD schemes (see
[34] or [76]). As the water depth is positive, there are two positive constants $\beta_1$ and $\beta_2$ such that
$$\beta_1 \le \|Z^n\|_\infty \le \beta_2, \quad n = 0, 1, 2, \cdots, N. \qquad (1.4.18)$$
If $4A\Delta t \le \min\{\Delta x^2, \Delta y^2\}$ and $4\varepsilon\Delta t \le \min\{\Delta x^2, \Delta y^2\}$, by using the same technique as in proving (1.4.15), from (1.4.9)–(1.4.12) and (1.4.18) we obtain
$$\|u^{n+1}\|_\infty \le \Big(1 + \frac{A}{2\Delta x} + \frac{A}{2\Delta y}\Big)\|u^n\|_\infty + \frac{2g\Delta t}{\Delta x}\|Z^n\|_\infty + \frac{2g\Delta t}{\Delta x}\|z_b^n\|_\infty + \Delta t\|f\|_\infty\|v^n\|_\infty + \frac{C_D A}{4\beta_1}\big(\|u^n\|_\infty + \|v^n\|_\infty\big), \qquad (1.4.19)$$
$$\|v^{n+1}\|_\infty \le \Big(1 + \frac{A}{2\Delta x} + \frac{A}{2\Delta y}\Big)\|v^n\|_\infty + \frac{2g\Delta t}{\Delta y}\|Z^n\|_\infty + \frac{2g\Delta t}{\Delta y}\|z_b^n\|_\infty + \Delta t\|f\|_\infty\|u^n\|_\infty + \frac{C_D A}{4\beta_1}\big(\|u^n\|_\infty + \|v^n\|_\infty\big), \qquad (1.4.20)$$
$$\|S^{n+1}\|_\infty \le \Big(1 + \frac{\varepsilon}{2\Delta x} + \frac{\varepsilon}{2\Delta y}\Big)\|S^n\|_\infty + \frac{\alpha\omega\Delta t}{\beta_1}\big(\|S^n\|_\infty + \|S^{*n}\|_\infty\big), \qquad (1.4.21)$$
$$\|z_b^{n+1}\|_\infty \le \|z_b^n\|_\infty + 2\Delta t\|g_b^n\|_\infty\Big(\frac{\|u^n\|_\infty}{\Delta x} + \frac{\|v^n\|_\infty}{\Delta y}\Big) + \frac{\alpha\omega\Delta t}{\rho}\big(\|S^n\|_\infty + \|S^{*n}\|_\infty\big), \qquad (1.4.22)$$
where $n = 0, 1, 2, \cdots, N-1$. Note that
$$\|S^{*n}\|_\infty \le K\big[(\|u^n\|_\infty^2 + \|v^n\|_\infty^2)^{3/2}/(g\omega\beta_1)\big]^m \le K\big[(A/\Delta t)^{2m}/(g\omega\beta_1)^m\big]\big(\|u^n\|_\infty + \|v^n\|_\infty\big)$$
and
$$\|g_b^n\|_\infty \le \beta_2^p d^q\big(\|u^n\|_\infty^2 + \|v^n\|_\infty^2\big)^{3/2} \le \beta_2^p d^q (A/\Delta t)^3.$$
Set
$$\bar\lambda = \max\Big\{K\big[(A/\Delta t)^{2m}/(g\omega\beta_1)^m\big]\frac{\alpha\omega\Delta t}{\beta_1 + \rho} + \frac{A}{2\Delta x} + \frac{A}{2\Delta y} + \Delta t\|f\|_\infty + \frac{2C_D A}{4\beta_1} + \frac{2\beta_2^p d^q A^3}{\Delta x\,\Delta t^2},\ \frac{2g\Delta t}{\Delta x} + \frac{2g\Delta t}{\Delta y} + K\big[(A/\Delta t)^{2m}/(g\omega\beta_1)^m\big]\frac{\alpha\omega\Delta t}{\beta_1 + \rho},\ \frac{A}{2\Delta x} + \frac{A}{2\Delta y} + \Delta t\|f\|_\infty + \frac{2\beta_2^p d^q A^3}{\Delta y\,\Delta t^2} + \frac{2C_D A}{4\beta_1},\ \frac{\varepsilon}{2\Delta x} + \frac{\varepsilon}{2\Delta y} + \frac{\alpha\omega\Delta t}{\beta_1 + \rho}\Big\}.$$
By (1.4.19)–(1.4.22), we obtain
$$\|u^n\|_\infty + \|v^n\|_\infty + \|S^n\|_\infty + \|z_b^n\|_\infty \le (1 + \bar\lambda)\big(\|u^{n-1}\|_\infty + \|v^{n-1}\|_\infty + \|S^{n-1}\|_\infty + \|z_b^{n-1}\|_\infty\big) + \Big(\frac{2g\Delta t}{\Delta x} + \frac{2g\Delta t}{\Delta y}\Big)\|Z^0\|_\infty \exp\Big(\frac{N\gamma}{2\Delta x} + \frac{N\gamma}{2\Delta y}\Big), \quad n = 1, 2, \cdots, N, \qquad (1.4.23)$$
where $\bar\lambda$ is the constant defined by the maximum above. By summing (1.4.23) from 1 to $n$ and using the discrete Gronwall lemma (Lemma 1.4.1), we obtain
$$\|u^n\|_\infty + \|v^n\|_\infty + \|S^n\|_\infty + \|z_b^n\|_\infty \le \big(\|u^0\|_\infty + \|v^0\|_\infty + \|S^0\|_\infty + \|z_b^0\|_\infty\big)\exp(n\bar\lambda) + \Big(\frac{2gn\Delta t}{\Delta x} + \frac{2gn\Delta t}{\Delta y}\Big)\|Z^0\|_\infty \exp\Big(\frac{N\gamma}{2\Delta x} + \frac{N\gamma}{2\Delta y}\Big)\exp(n\bar\lambda), \quad n = 1, 2, \cdots, N. \qquad (1.4.24)$$
As the time interval $[0,T]$ is finite, the RHS of (1.4.24) is bounded. Thus, from the stability theories of FD schemes (see Theorem 1.1.1 or [34,76]) and (1.4.24), we conclude that the classical FD scheme (1.4.8)–(1.4.12) is locally stable, and so its FD solutions are convergent.

By Taylor's formula, expanding (1.4.8), (1.4.11), and (1.4.12) at the reference point $(x_j, y_k, t_n)$, (1.4.9) at the reference point $(x_{j+1/2}, y_k, t_n)$, and (1.4.10) at the reference point $(x_j, y_{k+1/2}, t_n)$, or from the approximations of the difference quotients to the derivatives, we obtain the error estimates (1.4.13).

Remark 1.4.1. The classical FD scheme (1.4.8)–(1.4.12) is only first-order accurate in time. If one wants to obtain higher-order time approximation accuracy, it is necessary to change the time difference quotients on the LHSs of (1.4.8)–(1.4.12) into higher-order ones (for example, central differences or second-order differences).

Remark 1.4.2.
The Coriolis constant $f$, the gravitational constant $g$, the viscosity coefficients $\gamma$ and $A$, the bottom drag coefficient $C_D$, the sand diffusion coefficient $\varepsilon$, the falling speed of suspended sediment particles $\omega$, the sediment mass transport velocity $v_c$, the sediment diameter $d$, the empirical constants $K$, $m$, $n$, $p$, and $q$, the boundary value functions $Z_0(x,y,t)$, $u_0(x,y,t)$, $v_0(x,y,t)$, $S_0(x,y,t)$, and $z_{b0}(x,y,t)$, the initial value functions $Z^0(x,y)$, $u^0(x,y)$, $v^0(x,y)$, $S^0(x,y)$, and $z_b^0(x,y)$, the time step increment $\Delta t$, and the spatial step increments $\Delta x$ and $\Delta y$ are the requisites for computing the classical FD solutions $u_{j+1/2,k}^n$, $v_{j,k+1/2}^n$, $S_{j,k}^n$, $Z_{j,k}^n$, and $z_{b,j,k}^n$ ($0 \le j \le J$, $0 \le k \le K$, $1 \le n \le N$) of the 2D SWEs with sediment concentration through the FD scheme (1.4.8)–(1.4.12).
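To make the update concrete, here is a minimal sketch (not the authors' code) of one explicit step of the continuity equation (1.4.8) alone, on the interior of the grid. The staggered values $u_{j+1/2,k}$ and $v_{j,k+1/2}$ are stored in arrays `u_half` and `v_half`, and $Z$ at half-integer indices is approximated by averaging neighbors, which is an assumption, since the text does not specify the interpolation:

```python
import numpy as np

def step_Z(Z, u_half, v_half, dt, dx, dy, gamma):
    """One explicit step of (1.4.8) at interior points.
    u_half[j, k] ~ u at (j+1/2, k); v_half[j, k] ~ v at (j, k+1/2);
    Z at half indices is taken as the average of the two neighbors."""
    Zn = Z.copy()
    J, K = Z.shape
    for j in range(1, J - 1):
        for k in range(1, K - 1):
            # diffusion term: gamma * (Z_xx + Z_yy)
            diff = gamma * ((Z[j+1, k] - 2*Z[j, k] + Z[j-1, k]) / dx**2
                            + (Z[j, k+1] - 2*Z[j, k] + Z[j, k-1]) / dy**2)
            # Z interpolated to the four half-integer faces
            Zxp = 0.5 * (Z[j, k] + Z[j+1, k])
            Zxm = 0.5 * (Z[j, k] + Z[j-1, k])
            Zyp = 0.5 * (Z[j, k] + Z[j, k+1])
            Zym = 0.5 * (Z[j, k] + Z[j, k-1])
            # flux-difference term: (Zu)_x + (Zv)_y
            adv = ((u_half[j, k]*Zxp - u_half[j-1, k]*Zxm) / dx
                   + (v_half[j, k]*Zyp - v_half[j, k-1]*Zym) / dy)
            Zn[j, k] = Z[j, k] + dt * (diff - adv)
    return Zn
```

A constant depth with uniform velocity is an exact steady state of this update, which is a quick sanity check on the discrete fluxes.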
  • 55. Exploring the Variety of Random Documents with Different Content
  • 56. Thicken cream of corn soup a little more if necessary, or, add corn to thin cream sauce, and serve on toast. Left-overs of all sorts of cream soups may be utilized for toast: celery, asparagus, string bean, oyster plant and spinach, also succotash and other stewed or creamed vegetables. Lentil and Other Legume Toasts Use any lentil gravy or thickened lentil soup, cream of peas or peas and tomato soup thickened, red kidney beans purée or thickened soup, on moistened slices of zwieback. Toast Royal 1 cup drawn butter sauce 3 eggs 1 cup minced trumese or nutmese or ½ cup chopped nuts Add meat to hot sauce and pour all over beaten salted eggs; cook as scrambled eggs. Serve immediately on moistened slices of zwieback, with baked tomatoes when convenient. The following toasts are of a different nature (though slices of zwieback may be used instead of bread), but they are good emergency dishes. French Toast Add ½ cup of milk with salt to 2 or 3 beaten eggs. Dip slices of stale bread or moistened zwieback in the mixture and brown delicately on
  • 57. both sides on moderately hot buttered griddle or in quick oven, or in frying pan covered. Serve plain or with any suitable sauce. Drain slices after dipping in egg mixture; crumb, bake, and serve with honey, maple syrup or jelly for Breaded French Toast. German Toast Add grated or fine chopped onion to egg mixture and finish the same as French toast. Spanish Cakes Batter—2 eggs, 2 tablespns. flour, 1 teaspn. of oil, milk for smooth thin batter. Nut milk may be used and oil omitted. Cut thin slices of bread into any desired shape (round with biscuit cutter), spread each one of half the pieces with jelly, jam or marmalade and press another on to it; dip in the batter, lay on oiled baking pan, stand 15 m. or longer in a cold place. Bake in a quick oven, serve with a bit of the preserve on top and half of a nut pressed into each, or, dusted with powdered sugar. Mamie’s Surprise Biscuit Inclose small cakes of nicely seasoned mashed potato in pastry crust; bake, serve with milk gravy, drawn butter or cream sauce, or with celery only. This is the original recipe which leads to the following variations: Mix finely-sliced celery with the potato. Use the mixture of black walnut and potato stuffing, or mashed lentils or mashed peas for filling.
  • 58. Serve peas biscuit with tomato or tomato cream sauce. Serve lentil biscuit with cream, cream of tomato or mushroom sauce. Lentil biscuit with fresh mushroom or Boundary Castle sauce, with or without celery, might constitute one course at a dinner. Make a filling of minced trumese, salt, oil, chopped parsley, onion and mushrooms into small cakes or balls, inclose them in universal crust, and when light, steam 25–30 m. Serve with drawn butter, flavored with onion and parsley, or as garnish for a meat dish. Make balls quite small for garnish. Yorkshire Pudding ½ cup flour salt 1⅓ cup milk 2 eggs 1 teaspn. oil Beat eggs, add milk and pour gradually into flour mixed with salt; add oil, beat well, turn into well oiled, or oiled and crumbed gem pans; bake in moderate (slow at first) oven. Serve as garnish or accompaniment to ragout, or if baked in flat cakes, with slices of broiled or à la mode meats laid on them, and gravy poured around. The pudding may be baked in a flat pan and cut into any desired shape for serving. Whites and yolks of eggs may be beaten separately. A large onion chopped may be used in the pudding.
Rice Border

Pack hot boiled rice into well oiled border mold and let stand in a warm place (over kettle of hot water) for 10 m. Turn on to serving dish carefully. Or, parboil 1 cup of rice in salted water 5 m.; drain and cook in a double boiler with 2½–3 cups of milk and salt, until the rice is tender and the milk absorbed, then pack into the mold. 1 tablespn. of butter and the yolks of 2 eggs may be added to the rice about 2 m. before it is taken from the double boiler.

Oyster Plant and Potato Omelet—without eggs

With nicely seasoned, not too moist, mashed potato, mix slices of cooked oyster plant which have been simmered in cream or butter. Spread in well oiled frying or omelet pan. When delicately browned on the bottom, fold, omelet fashion, turn on to a hot platter, garnish. Serve plain or with cream sauce or with thin drawn butter. Or, grind oyster plant, cook in a small quantity of water, add cream or butter and mix with plain potato. Finely-sliced raw celery or chopped raw onion and parsley may be used in the potato sometimes.

Baked Potatoes and Milk

Wash potatoes well, scrubbing with vegetable brush. Cut out any imperfect spots. Bake until just done. Break up, skins and all, into nice rich milk and eat like bread and milk for supper. A favorite dish of some of the early settlers in Michigan.

Bread and Milk with Sweet Fruits
Add nice ripe blueberries to bread and milk for supper, also ripe black raspberries or baked sweet apples. They are all delicious.

★ Apples in Oil

Simmer finely-sliced onion in oil 5–10 m. without browning; add salt and a little water, then apples which have been washed, quartered, cored and sliced without paring. Sprinkle lightly with salt. Cover and cook until apples are just tender, not broken. Serve for breakfast or supper, or with a meat dish instead of a vegetable, for luncheon or dinner. The onion may be omitted. Use a little sugar when apples are very sour.

Onion Apples

Simmer sliced onions in oil, with salt, in baking pan. Place apples, pared and cored, on top of the onions; sprinkle with sugar and put ¼ teaspn. in each cavity. Cover, bake; uncover and brown. Serve for luncheon, or as garnish for meat dish.
TRUE MEATS

“And God said, Behold I have given you every herb bearing seed, which is upon the face of all the earth, and every tree, in the which is the fruit of a tree yielding seed; to you it shall be for meat.” Gen. 1:29.

“The food which God gave Adam in his sinless state is the best for man’s use as he seeks to regain that sinless state.

“The intelligence displayed by many dumb animals approaches so closely to human intelligence that it is a mystery.

“The animals see and hear and love and fear and suffer.

“They manifest sympathy and tenderness toward their companions in suffering.

“They form attachments for man which are not broken without great suffering to them.

“Think of the cruelty to animals that meat eating involves and its effect on those who inflict and those who behold it. How it destroys the tenderness with which we should regard these creatures of God!”

The high price of flesh foods, the knowledge of the waste matter in the blood of even healthy animals which remains in their flesh after death, and the well authenticated reports of the increasing prevalence of most loathsome diseases among them, cause a growing desire among thinking people to take their food at first hand, before it has become a part of the body of some lower animal. So, the great food question of the day is—“What shall we use in the place of meat?”
Nuts, legumes (peas, beans, lentils and peanuts) and eggs contain, as do flesh meats, an excess of the proteid or muscle-building elements (nuts and legumes a much larger proportion than flesh), so we may combine these with fruits, vegetables and some of the cereals (rice, for instance) and have a perfect proportion of food elements. It must be borne in mind, however, that proteid foods must be used sparingly, since an excess of these foods causes some of the most serious diseases. The bulk of our foods should be made up of fruits and vegetables and some of the less hearty cereals and breads.

NUTS

As nuts occupy the highest round of the true meat ladder, we give a variety of recipes for their use, following with legumes and eggs in their order.

With nuts, as with other foods, the simplest way to use them is the best. There are greater objections to foods than that they are difficult of digestion, and in the case of nuts, that objection is overcome by thorough mastication; in fact, they are an aid to the cultivation of that important function in eating. For those who are not able to chew their food, nuts may be ground into butter.

Another aid to the digestion of nuts is the use with them of an abundance of acid fruits. Fruits and nuts seem to be each the complement of the other, the nuts as well, preventing the unpleasant effects felt by some in the free use of fruits.
“No investigations have been found on record which demonstrate any actual improvement in the digestibility of nuts due to salt.”—M. E. Jaffa, M. S., Professor of Nutrition, University of California.

Be sure that nuts are fresh. Rancid nuts are no better than rancid butter. Shelled nuts do not keep as well as those in the shell.

Almonds stand at the head of the nut family. It is better to buy them in the shell as shelled almonds are apt to have bitter ones among them. Almonds should not be partaken of largely with the brown covering on, but are better to be blanched.

To Blanch Almonds—Throw them into perfectly boiling water, let them come to the boiling point again, drain, pour cold water over them and slip the skins off with the thumb and finger. Drop the meats on to a dry towel, and when they are all done, roll them in the towel for a moment, then spread them on plates or trays to dry. They must be dried slowly as they color easily, and the sweet almond flavor is gone when a delicate color only is developed. For butter they must be very dry, really brittle.

Brazil Nuts—castanas—cream nuts, do not require blanching, as their covering does not seem to be objectionable. They are rich in oil and are most valuable nuts. Slice and dry them for grinding.

Filberts—hazelnuts—cobnuts—Barcelonas, also may be eaten without blanching, though they may be heated in the oven (without browning) or put into boiling water and much of the brown covering removed. They are at their best unground, as they do not give an especially agreeable flavor to cooked foods. They may be made into butter. Brazil nuts and filberts often agree with those who cannot use English walnuts and peanuts.
English Walnuts—The covering of the English walnut is irritating and would better be removed when practicable. This is done by the hot water method, using a knife instead of the thumb and finger. The unblanched nuts may, however, be used in moderation by nearly every one. Butternuts and black walnuts blanch more easily than the English walnut.

When whole halves of such nuts as hickory nuts, pecans or English walnuts are required, throw the nuts into boiling water for two or three minutes, or steam them for three or four minutes, or wrap them in woolen cloths wrung out of boiling water. Crack, and remove meats at once. Do not leave nuts in water long enough to soak the meats.

Pinenuts come all ready blanched. When they require washing, pour boiling water over them first, then cold water. Drain, dry in towels, then on plates in warm oven.

Peanuts—ground nuts—because of their large proportion of oil, and similarity in other respects to nuts, are classed with them, though they are truly legumes. The Spanish peanut contains more oil than the Virginia, but the flavor of the Virginia is finer and its large size makes it easier to prepare. The “Jumbos” are the cheapest.

To blanch Spanish peanuts the usual way, heat for some time, without browning, in a slow oven, stirring often. When cool rub between the hands or in a bag to remove the skins. The best way to blow the hulls away after they are removed is to turn the nuts from one pan to another in the wind.
Spanish peanuts can be obtained all ready blanched from the nut food factories. The Virginias, not being so rich in oil, must always be blanched the same as almonds. Be sure to let them boil well before draining. I prefer to blanch the Spanish ones that way, too, the results are so much more satisfactory.

When peanuts are partly dried, break them apart and remove the germ, which is disagreeable and unwholesome; then finish drying.

A FEW SUGGESTIVE COMBINATIONS
For Using Nuts in the Simplest Ways

Brazil nuts, filberts or blanched almonds with:—

Fresh apples, pears or peaches;
Dried, steamed or stewed figs, raisins, dates, prunes, apple sauce, baked apples or baked quinces;
Celery, lettuce, cabbage, tender inside leaves of spinach, grated raw carrot or turnip;
Breakfast cereals, parched or popped corn, well browned granella, crackers, gems, zwieback, Boston brown and other breads;
Stewed green peas, string beans, asparagus, corn, greens, potatoes, squash, cauliflower, all vegetables;
Pies, cakes and different desserts when used.

Nut Butter

A good nut butter mill is an excellent thing to have, but butter can be made with the food cutters found nowadays in almost every home. If the machine has a nut butter attachment, so much the better; otherwise the nuts will need to be ground repeatedly until the desired fineness is reached.

For almond butter, blanch and dry the almonds according to directions, adjust the nut butter cutter, not too tight, put two or three nuts into the mill at a time, and grind. When the almonds are thoroughly dried they will work nicely if the mill is not fed too fast. Brazil nuts and filberts need to be very dry for butter. Pine nuts are usually dry enough as they come to us. All nuts grind better when first dried.
Raw peanut butter is a valuable adjunct to cookery. To make, grind blanched dried nuts; pack in tins or jars and keep in a dry place.

For steamed butter, put raw butter without water into a double boiler or close covered tins and steam 3–5 hours. Use without further cooking in recipes calling for raw nut butter. Or, grind dried boiled nuts the same as raw nuts. For immediate use, boiled nuts may be ground without drying.

When roasted nut butter is used, it should be in small quantities only, for flavoring soups, sauces or desserts. My experience is that the best way to roast nuts for butter is to heat them, after they are blanched and dried, in a slow oven, stirring often, until of a cream or delicate straw color. By this method they are more evenly colored all through.

Do not salt the butter, as salt spoils it for use with sweet dried fruits as a confection, and many prefer it without salt on their bread.

The objection to roasted nuts is the same as for browning any oil. Raising the oil of the nuts to a temperature high enough to brown it decomposes it and develops a poisonous acid. Hardly too much can be said of the evil effects of the free use of roasted nut butter.

“There are many persons who find that roasted peanuts eaten in any quantity are indigestible in the sense of bringing on pain and distress.... Sometimes this distress seems to be due to eating peanuts which are roasted until they are very brown.”—Mary Hinman Abel, Farmers’ Bulletin, No. 121, U.S. Department of Agriculture.
Nut Meal

Nut meal is made the same as nut butter except that the nuts are ground fewer times through the finest cutter of the mill, or once only through the nut butter cutter loosely adjusted. Either cooked or raw peanuts may be used, but a cooked peanut meal is very desirable. The nuts may be cooked, dried and ground, or cooked without water, after grinding, the same as steamed nut butter.

When one has no mill, meal of many kinds of nuts may be made in the following manner: Pound a few at a time in a small strong muslin bag; sift them through a wire strainer and return the coarse pieces to the bag again with the next portion. Be sure that not the smallest particle of shell is left with the meats. A dear friend of mine used to keep jars of different nut meals prepared in this way on hand long before any manufactured ones were on the market. One writer says: “The children enjoy cracking the nuts and picking out the meats, and it is a short task to prepare a cupful.”

Cooked nuts and some raw ones may be rubbed through the colander for meal.

Nut meals are used for shortening pie crust, crackers and sticks; and all except peanut, are delightful sprinkled over stewed fruits or breakfast foods.

Nut Butter for Bread

Nut butters (except raw peanut) may be used on bread as they are ground; but are usually stirred up with water to an agreeable butter-like consistency, and salt added. Strained tomato may be used instead of water for a change. This is especially nice for sandwiches. With peanut butter made from boiled or steamed nuts it has a flavor similar to cheese.

Nut butter is more attractive for the table when pressed through a pastry tube in roses on to individual dishes. Use a cloth (not rubber) pastry bag.

While pure nut butter, if kept in a dry place, will keep almost indefinitely, it will sour as quickly as milk after water is added to it.

Nut Cream and Milk

Add water to nut butter until of the desired consistency, for cream; then still more, for milk. Almond milk makes a delightful drink and can be used by many who cannot take dairy milk. It may be heated and a trifle of salt added.

Cocoanut Milk

If you have not a cocoanut scraper, grate fresh cocoanut, one with milk in it, or grind it four or five times through the finest cutter of a mill. Pour over it an equal bulk or twice its bulk, of boiling water, according to the richness of the milk desired or the quality of the cocoanut. Stir and mix well and strain through cheese cloth or a wire strainer. Add a second quantity of hot water and strain again, wringing or pressing very dry. Throw the fibre away.

Use cocoanut milk or cream for vegetable or pudding sauces or in almost any way that dairy milk and cream are used. Stir before
using.

To break the nut in halves, take it in the left hand and strike it with a hammer in a straight line around the center. It may be sawed in two if the cups are desired for use.

Cocoanut Butter

Place milk on ice for a few hours, when the butter will rise to the top and can be skimmed off.

Ground or Grated Cocoanut

Is delightful on breakfast cereals, or eaten with bread in place of butter. The brown covering of the meat should first be taken off.

Shredded Cocoanut

Put any left-overs of prepared cocoanut on a plate and set in the sun or near the stove to dry. Keep in glass jars in a dry place. This unsweetened cocoanut can be used for shortening and in many places where sweet is not desirable.

Milk and Rich Cream of Raw Peanuts

May be prepared the same as cocoanut milk, except that cold or lukewarm water is used instead of hot.

To raw nut meal (not butter) add one half more of water than you have of meal. Mix and beat well, strain through a thin cloth, squeeze as dry as possible. Let milk stand in a cool place and a very rich cream will rise which may be used for shortening pie crust, crackers and sticks, or in place of dairy cream in other ways. The
skimmed milk will be suitable for soups, stews or gravies. It may be cooked before using if more convenient. The pulp also may be used in soups. It should be thoroughly cooked.

Nut Relish

Different nut butters and meals may be combined in varying proportions. For instance, 2 parts Brazil nuts, 1 part each pine nuts and almonds; or 1 part each Brazil nuts, almonds, pecans, and pine nuts. Dry nuts well and grind all together or combine after grinding. Press into tumblers or small tins and stand in cool place. Unmold to serve. The relish may be used in combinations suggested for whole nuts, and it is a great improvement over cheese, with apple pie.

Toasted Almonds

When blanched almonds are thoroughly dried, put them into a slow oven and let them come gradually to a delicate cream color, not brown. These may be served in place of salted almonds.

Sweetmeats of fruits and nuts will be found among confections.

COOKED NUT DISHES

Nut Croquettes

1 cup chopped nuts (not too fine), hickory, pecan, pine or butternuts, or a mixture of two with some almonds if desired; 2 cups boiled rice or hominy, 1½ tablespn. oil or melted butter, salt, sage.

Mix, shape into rolls about 1 in. in diameter and 2½ in. in length. Egg and crumb; bake in quick oven until just heated through and
delicately browned, 8 to 10 m. Serve plain or with any desired sauce or vegetable.

Nut Croquettes No. 2

1 cup chopped nuts, 1 cup cooked rice, any desired seasoning or none, salt; mix.

Sauce—
2 tablespns. oil
½ cup flour
1–1¼ cup milk
1 egg or yolk only or no egg
salt

Heat but do not brown the oil, add half the flour, then the milk, and when smooth, the salt and the remainder of the flour, and combine with mixed nuts and rice. Cool, shape, egg, crumb, bake. Crumb also before dipping in egg, the same as Trumese croquettes, if necessary. Bake only until beginning to crack. Serve at once.

Savory Nut Croquettes

1 cup stale, quite dry, bread crumbs, ½ cup (scant) milk or consommé, ¼–½ level teaspn. powdered leaf sage or winter savory, ½ cup black walnut or butternut meats, salt.

Mix, shape, egg, crumb, bake. 1 cup chopped mixed nuts may be used and celery salt or no flavoring. Hickory nut meats alone require no flavoring.
Nut and Sweet Potato Cutlets

1 cup chopped nut meats
2 cups chopped boiled sweet potato
1 tablespn. butter
1 egg
salt

Mix while warm. Pack in brick-shaped tin until cold. Unmold, slice, egg, crumb or flour. Brown in quick oven or on oiled griddle. Serve plain or with sauce 16 or 17.

★ Baked Pine Nuts

After picking out the pieces of shell, pour boiling water over 2 lbs. of pine nuts in a fine colander. Rinse in cold water and put into the bean pot, with 2 large onions sliced fine, 1–1⅓ cup strained tomato and 2–2½ teaspns. salt. Heat quite rapidly at first; boil gently for a half hour, then simmer slowly in the oven 10–12 hours or longer. Leave just juicy for serving.

Black Walnut and Potato Mound

Mix 1 qt. nicely seasoned, well beaten mashed potato, ½–1 cup chopped black walnut meats and 2 or 3 tablespns. grated onion. Pile in rocky mound on baking pan or plate. Sprinkle with crumbs or not. Bake in quick oven until delicately browned. Garnish and serve with sauce 6 or 16.

Nut and Rice Roast or Timbale
1–2 cups chopped nuts, one kind or mixed (no English walnuts unless blanched), 2 cups boiled or steamed rice, 1½–3 tablespns. oil or melted butter, salt.

Mix ingredients and put into well oiled timbale mold or individual molds or brick shaped tin. Bake covered, in pan of water, ¾–1½ hr. according to size of mold. Uncover large mold a short time at the last. Let stand a few minutes after removing from oven, unmold, and serve with creamed celery or peas or with sauce 16 (cocoanut cream if convenient) or 34. Loaf may be flavored, and served with any suitable sauce.

Loaf of Nuts

2 tablespns. raw nut butter
⅓ cup whole peanuts cooked almost tender
½ cup each chopped or ground pecans, almonds and filberts (or butternuts, hazelnuts, and hickory nuts)
2 cups stale bread crumbs pressed firmly into the cup
salt
¾–1 cup water or 1 of milk

The quantity of liquid will depend upon the crumbs and other conditions. Put into oiled mold or can, cover, steam 3 hours. Or, have peanuts cooked tender, form into oval loaf, bake on tin in oven, basting occasionally with butter and water or salted water only. Serve with sauce 9, 10, 57, 59, or 69. Loaf may be served cold in slices, or dipped in egg, and crumbed, and baked as cutlets. Other nuts may be substituted for peanuts.
One-half cup black walnuts and 1½ cup cooked peanuts, chopped, make a good combination. A delicate flavoring of sage, savory or onion is not out of place with these.

To Boil Peanuts

Put blanched, shelled peanuts into boiling water and boil continuously, for from 3–5 hrs., or until tender. (When the altitude is not great it takes Virginias 4 or 5 hours and Spanish about 3 to cook tender.) Drain, saving the liquid for soup stock, and use when boiled peanuts are called for.

Nut Soup Stock

Use the liquid, well diluted, poured off from boiled peanuts, for soups. Large quantities may be boiled down to a jelly and kept for a long time in a dry place. If paraffine is poured over the jelly, it will keep still better. Use 1 tablespn. only of this jelly for each quart of soup.

Peanuts with Green Peas

Boil 1 cup blanched peanuts 1–2 hrs., drain off the water and save for soup. Put fresh water on to the peanuts, add salt and finish cooking. Just before serving add 1 pt. of drained, canned peas. Heat well. Add more salt if necessary, and serve. Or, 1 pt. of fresh green peas may be cooked with the nuts at the last. Small new potatoes would be a suitable addition also.
★ Peanuts Baked like Beans

1 lb. (¾ qt.) blanched peanuts
¼ cup strained tomato
½–1 tablespn. browned flour
1¼–1½ teaspn. salt

Mix browned flour, tomato and salt, put into bean pot with the nuts and a large quantity of boiling water. Boil rapidly ½ hr., then bake in a slow oven 8–14 hours. Add boiling water without stirring, when necessary. When done the peanuts should be slightly juicy. Small dumplings steamed separately, may be served with baked peanuts sometimes.

Baked Peanuts—Lemon Apples

Pile peanuts in center of platter or chop tray. Surround with lemon apples, garnish with grape leaves and tendrils or with foliage plant leaves.

Peanuts with Noodles or Vermicelli

Cook peanuts in bouillon with bay leaf and onions. Just before serving, add cooked noodles or vermicelli.

Nut Chinese Stew

Use boiled peanuts instead of nutmese and raw nut butter, and rice (not too much) in place of potato, in Nut Irish Stew.

Peanut Gumbo
Simmer sliced or chopped onion in butter; add 1 pt. stewed okra; simmer 5–10 m. Add 1 pt. strained tomato, then ¾–1 qt. of baked or boiled peanuts. Turn into a double boiler and add ½ cup boiled rice. Heat 15–20 m.

Hot Pot of Peanuts

Put layers of sliced onion, sliced potatoes and boiled peanuts into baking dish with salt and a slight sprinkling of sage. Cover the top with halved potatoes. Stir a little raw nut butter with water and pour over all. Cover with a plate or close fitting cover and bake 2 hours. Remove cover and brown.

Peanut Hashes

Cooked peanuts, chopped very little if any, may be used in place of trumese with potatoes or rice for hash. Bread, cracker or zwieback crumbs may be substituted for potato or rice.

Peanut German Chowder

1 pt. cooked peanuts
1 large onion
2 tablespns. chopped parsley
½ medium sized bay leaf
⅛ level teaspn. thyme
1 small carrot
1 level tablespn. browned flour
2 level tablespns. white flour