Accuracy and
Reliability in
Scientific Computing
SOFTWARE • ENVIRONMENTS • TOOLS
The series includes handbooks and software guides as well as monographs
on practical implementation of computational methods, environments, and tools.
The focus is on making recent developments available in a practical format
to researchers and other users of these methods and tools.
Editor-in-Chief
Jack J. Dongarra
University of Tennessee and Oak Ridge National Laboratory
Editorial Board
James W. Demmel, University of California, Berkeley
Dennis Gannon, Indiana University
Eric Grosse, AT&T Bell Laboratories
Ken Kennedy, Rice University
Jorge J. More, Argonne National Laboratory
Software, Environments, and Tools
Bo Einarsson, editor, Accuracy and Reliability in Scientific Computing
Michael W. Berry and Murray Browne, Understanding Search Engines: Mathematical Modeling and Text
Retrieval, Second Edition
Craig C. Douglas, Gundolf Haase, and Ulrich Langer, A Tutorial on Elliptic PDE Solvers and Their Parallelization
Louis Komzsik, The Lanczos Method: Evolution and Application
Bard Ermentrout, Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers
and Students
V. A. Barker, L. S. Blackford, J. Dongarra, J. Du Croz, S. Hammarling, M. Marinova, J. Wasniewski, and
P. Yalamov, LAPACK95 Users' Guide
Stefan Goedecker and Adolfy Hoisie, Performance Optimization of Numerically Intensive Codes
Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst,
Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide
Lloyd N. Trefethen, Spectral Methods in MATLAB
E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz,
A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users' Guide, Third Edition
Michael W. Berry and Murray Browne, Understanding Search Engines: Mathematical Modeling and Text Retrieval
Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk A. van der Vorst, Numerical Linear Algebra for
High-Performance Computers
R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK Users' Guide: Solution of Large-Scale Eigenvalue
Problems with Implicitly Restarted Arnoldi Methods
Randolph E. Bank, PLTMG: A Software Package for Solving Elliptic Partial Differential Equations, Users' Guide 8.0
L. S. Blackford, J. Choi, A. Cleary, E. D'Azevedo, J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling,
G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley, ScaLAPACK Users' Guide
Greg Astfalk, editor, Applications on Advanced Architecture Computers
Francoise Chaitin-Chatelin and Valerie Fraysse, Lectures on Finite Precision Computations
Roger W. Hockney, The Science of Computer Benchmarking
Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June Donato, Jack Dongarra, Victor Eijkhout, Roldan
Pozo, Charles Romine, and Henk van der Vorst, Templates for the Solution of Linear Systems: Building
Blocks for Iterative Methods
E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling,
A. McKenney, S. Ostrouchov, and D. Sorensen, LAPACK Users' Guide, Second Edition
Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk van der Vorst, Solving Linear Systems on Vector
and Shared Memory Computers
J. J. Dongarra, J. R. Bunch, C. B. Moler, and G. W. Stewart, Linpack Users' Guide
Accuracy and
Reliability in
Scientific Computing
Edited by
Bo Einarsson
Linköping University
Linköping, Sweden
SIAM
Society for Industrial and Applied Mathematics
Philadelphia
Copyright © 2005 by the Society for Industrial and Applied Mathematics.
1 0 9 8 7 6 5 4 3 2 1
All rights reserved. Printed in the United States of America. No part of this book may be
reproduced, stored, or transmitted in any manner without the written permission of the
publisher. For information, write to the Society for Industrial and Applied Mathematics,
3600 University City Science Center, Philadelphia, PA 19104-2688.
MATLAB® is a registered trademark of The MathWorks, Inc. For MATLAB® product
information, please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-
2098 USA, 508-647-7000, Fax: 508-647-7101, info@mathworks.com, www.mathworks.com/
Trademarked names may be used in this book without the inclusion of a trademark symbol.
These names are used in an editorial context only; no infringement of trademark is intended.
Library of Congress Cataloging-in-Publication Data
Accuracy and reliability in scientific computing / edited by Bo Einarsson.
p. cm. — (Software, environments, tools)
Includes bibliographical references and index.
ISBN 0-89871-584-9 (pbk.)
1. Science—Data processing. 2. Reliability (Engineering)—Mathematics. 3. Computer
programs—Correctness—Scientific applications. 4. Software productivity—Scientific
applications. I. Einarsson, Bo, 1939- II. Series.
Q183.9.A28 2005
502'.85-dc22
2005047019
SIAM is a registered trademark.
Contents
List of Contributors
List of Figures
List of Tables
Preface
I PITFALLS IN NUMERICAL COMPUTATION
1 What Can Go Wrong in Scientific Computing?
Bo Einarsson
1.1 Introduction
1.2 Basic Problems in Numerical Computation
1.2.1 Rounding
1.2.2 Cancellation
1.2.3 Recursion
1.2.4 Integer overflow
1.3 Floating-point Arithmetic
1.3.1 Initial work on an arithmetic standard
1.3.2 IEEE floating-point representation
1.3.3 Future standards
1.4 What Really Went Wrong in Applied Scientific Computing!
1.4.1 Floating-point precision
1.4.2 Illegal conversion between data types
1.4.3 Illegal data
1.4.4 Inaccurate finite element analysis
1.4.5 Incomplete analysis
1.5 Conclusion
2 Assessment of Accuracy and Reliability
Ronald F. Boisvert, Ronald Cools, Bo Einarsson
2.1 Models of Scientific Computing
2.2 Verification and Validation
2.3 Errors in Software
2.4 Precision, Accuracy, and Reliability
2.5 Numerical Pitfalls Leading to Anomalous Program Behavior
2.6 Methods of Verification and Validation
2.6.1 Code verification
2.6.2 Sources of test problems for numerical software
2.6.3 Solution verification
2.6.4 Validation
3 Approximating Integrals, Estimating Errors, and Giving the Wrong Solution
for a Deceptively Easy Problem
Ronald Cools
3.1 Introduction
3.2 The Given Problem
3.3 The First Correct Answer
3.4 View Behind the Curtain
3.5 A More Convenient Solution
3.6 Estimating Errors: Phase 1
3.7 Estimating Errors: Phase 2
3.8 The More Convenient Solution Revisited
3.9 Epilogue
4 An Introduction to the Quality of Computed Solutions
Sven Hammarling
4.1 Introduction
4.2 Floating-point Numbers and IEEE Arithmetic
4.3 Why Worry About Computed Solutions?
4.4 Condition, Stability, and Error Analysis
4.4.1 Condition
4.4.2 Stability
4.4.3 Error analysis
4.5 Floating-point Error Analysis
4.6 Posing the Mathematical Problem
4.7 Error Bounds and Software
4.8 Other Approaches
4.9 Summary
5 Qualitative Computing
Françoise Chaitin-Chatelin, Elisabeth Traviesas-Cassan
5.1 Introduction
5.2 Numbers as Building Blocks for Computation
5.2.1 Thinking the unthinkable
5.2.2 Breaking the rule
5.2.3 Hypercomputation inductively defined by multiplication
5.2.4 The Newcomb-Borel paradox
5.2.5 Effective calculability
5.3 Exact Versus Inexact Computing
5.3.1 What is calculation?
5.3.2 Exact and inexact computing
5.3.3 Computer arithmetic
5.3.4 Singularities in exact and inexact computing
5.3.5 Homotopic deviation
5.3.6 The map z → ρ(F_z)
5.3.7 Graphical illustration
5.4 Numerical Software
5.4.1 Local error analysis in finite precision computations
5.4.2 Homotopic deviation versus normwise perturbation
5.4.3 Application to Krylov-type methods
5.5 The Lévy Law of Large Numbers for Computation
5.6 Summary
II DIAGNOSTIC TOOLS
6 PRECISE and the Quality of Reliable Numerical Software
Françoise Chaitin-Chatelin, Elisabeth Traviesas-Cassan
6.1 Introduction
6.2 Reliability of Algorithms
6.3 Backward Error Analysis
6.3.1 Consequence of limited accuracy of data
6.3.2 Quality of reliable software
6.4 Finite Precision Computations at a Glance
6.5 What Is PRECISE?
6.5.1 Perturbation values
6.5.2 Perturbation types
6.5.3 Data metrics
6.5.4 Data to be perturbed
6.5.5 Choosing a perturbation model
6.6 Implementation Issues
6.7 Industrial Use of PRECISE
6.8 PRECISE in Academic Research
6.9 Conclusion
7 Tools for the Verification of Approximate Solutions to Differential
Equations
Wayne H. Enright
7.1 Introduction
7.1.1 Motivation and overview
7.2 Characteristics of a PSE
7.3 Verification Tools for Use with an ODE Solver
7.4 Two Examples of Use of These Tools
7.5 Discussion and Future Extensions
III TECHNOLOGY FOR IMPROVING ACCURACY AND RELIABILITY
8 General Methods for Implementing Reliable and Correct Software
Bo Einarsson
8.1 Ada
Brian Wichmann, Kenneth W. Dritz
8.1.1 Introduction and language features
8.1.2 The libraries
8.1.3 The Numerics Annex
8.1.4 Other issues
8.1.5 Conclusions
8.2 C
Craig C. Douglas, Hans Petter Langtangen
8.2.1 Introduction
8.2.2 Language features
8.2.3 Standardized preprocessor, error handling, and debugging
8.2.4 Numerical oddities and math libraries
8.2.5 Calling Fortran libraries
8.2.6 Array layouts
8.2.7 Dynamic data and pointers
8.2.8 Data structures
8.2.9 Performance issues
8.3 C++
Craig C. Douglas, Hans Petter Langtangen
8.3.1 Introduction
8.3.2 Basic language features
8.3.3 Special features
8.3.4 Error handling and debugging
8.3.5 Math libraries
8.3.6 Array layouts
8.3.7 Dynamic data
8.3.8 User-defined data structures
8.3.9 Programming styles
8.3.10 Performance issues
8.4 Fortran
Van Snyder
8.4.1 Introduction
8.4.2 History of Fortran
8.4.3 Major features of Fortran 95
8.4.4 Features of Fortran 2003
8.4.5 Beyond Fortran 2003
8.4.6 Conclusion
8.5 Java
Ronald F. Boisvert, Roldan Pozo
8.5.1 Introduction
8.5.2 Language features
8.5.3 Portability in the Java environment
8.5.4 Performance challenges
8.5.5 Performance results
8.5.6 Other difficulties encountered in scientific programming
in Java
8.5.7 Summary
8.6 Python
Craig C. Douglas, Hans Petter Langtangen
8.6.1 Introduction
8.6.2 Basic language features
8.6.3 Special features
8.6.4 Error handling and debugging
8.6.5 Math libraries
8.6.6 Array layouts
8.6.7 Dynamic data
8.6.8 User-defined data structures
8.6.9 Programming styles
8.6.10 Performance issues
9 The Use and Implementation of Interval Data Types
G. William Walster
9.1 Introduction
9.2 Intervals and Interval Arithmetic
9.2.1 Intervals
9.2.2 Interval arithmetic
9.3 Interval Arithmetic Utility
9.3.1 Fallible measures
9.3.2 Enter interval arithmetic
9.4 The Path to Intrinsic Compiler Support
9.4.1 Interval-specific operators and intrinsic functions
9.4.2 Quality of implementation opportunities
9.5 Fortran Code Example
9.6 Fortran Standard Implications
9.6.1 The interval-specific alternative
9.6.2 The enriched module alternative
9.7 Conclusions
10 Computer-assisted Proofs and Self-validating Methods
Siegfried M. Rump
10.1 Introduction
10.2 Proofs and Computers
10.3 Arithmetical Issues
10.4 Computer Algebra Versus Self-validating Methods
10.5 Interval Arithmetic
10.6 Directed Roundings
10.7 A Common Misconception About Interval Arithmetic
10.8 Self-validating Methods and INTLAB
10.9 Implementation of Interval Arithmetic
10.10 Performance and Accuracy
10.11 Uncertain Parameters
10.12 Conclusion
11 Hardware-assisted Algorithms
Craig C. Douglas, Hans Petter Langtangen
11.1 Introduction
11.2 A Teaser
11.3 Processor and Memory Subsystem Organization
11.4 Cache Conflicts and Trashing
11.5 Prefetching
11.6 Pipelining and Loop Unrolling
11.7 Padding and Data Reorganization
11.8 Loop Fusion
11.9 Bitwise Compatibility
11.10 Useful Tools
12 Issues in Accurate and Reliable Use of Parallel Computing in
Numerical Programs
William D. Gropp
12.1 Introduction
12.2 Special Features of Parallel Computers
12.2.1 The major programming models
12.2.2 Overview
12.3 Impact on the Choice of Algorithm
12.3.1 Consequences of latency
12.3.2 Consequences of blocking
12.4 Implementation Issues
12.4.1 Races
12.4.2 Out-of-order execution
12.4.3 Message buffering
12.4.4 Nonblocking and asynchronous operations
12.4.5 Hardware errors
12.4.6 Heterogeneous parallel systems
12.5 Conclusions and Recommendations
13 Software-reliability Engineering of Numerical Systems
Mladen A. Vouk
13.1 Introduction
13.2 About SRE
13.3 Basic Terms
13.4 Metrics and Models
13.4.1 Reliability
13.4.2 Availability
13.5 General Practice
13.5.1 Verification and validation
13.5.2 Operational profile
13.5.3 Testing
13.5.4 Software process control
13.6 Numerical Software
13.6.1 Acceptance testing
13.6.2 External consistency checking
13.6.3 Automatic verification of numerical precision (error propagation control)
13.6.4 Redundancy-based techniques
13.7 Basic Fault-tolerance Techniques
13.7.1 Check-pointing and exception handling
13.7.2 Recovery through redundancy
13.7.3 Advanced techniques
13.7.4 Reliability and performance
13.8 Summary
Bibliography
Index
List of Contributors
Ronald F. Boisvert
Mathematical and Computational Sci-
ences Division, National Institute of Stan-
dards and Technology (NIST), Mail Stop
8910, Gaithersburg, MD 20899, USA,
email: boisvert@nist.gov
Françoise Chaitin-Chatelin
Université Toulouse 1 and CERFACS
(Centre Européen de Recherche et de
Formation Avancée en Calcul Scientifique),
42 av. G. Coriolis, FR-31057 Toulouse
Cedex, France,
e-mail: chatelin@cerfacs.fr
Ronald Cools
Department of Computer Science,
Katholieke Universiteit Leuven, Celestij-
nenlaan 200A, B-3001 Heverlee, Belgium,
email: Ronald.Cools@cs.
kuleuven.be
Craig C. Douglas
Center for Computational Sciences, Uni-
versity of Kentucky, Lexington, Kentucky
40506-0045, USA,
email: douglas@ccs.uky.edu
Kenneth W. Dritz
Argonne National Laboratory, 9700 South
Cass Avenue, Argonne, Illinois 60439,
USA,
email: dritz@anl.gov
Bo Einarsson
National Supercomputer Centre and the
Mathematics Department, Linköpings
universitet, SE-581 83 Linköping, Sweden,
email: boein@nsc.liu.se
Wayne H. Enright
Department of Computer Science,
University of Toronto, Toronto, Canada M5S 3G4,
email: enright@cs.utoronto.ca
William D. Gropp
Argonne National Laboratory, 9700 South
Cass Avenue, Argonne, Illinois 60439,
USA,
email: gropp@mcs.anl.gov
Sven Hammarling
The Numerical Algorithms Group Ltd,
Wilkinson House, Jordan Hill Road,
Oxford OX2 8DR, England,
email: sven@nag.co.uk
Hans Petter Langtangen
Institutt for informatikk, Universitetet i
Oslo, Box 1072 Blindern, NO-0316 Oslo,
and Simula Research Laboratory, NO-
1325 Lysaker, Norway,
email: hpl@simula.no
Roldan Pozo
Mathematical and Computational Scien-
ces Division, National Institute of Stan-
dards and Technology (NIST), Mail Stop
8910, Gaithersburg, MD 20899, USA,
email: boisvert@nist.gov
Siegfried M. Rump
Institut für Informatik III, Technische
Universität Hamburg-Harburg,
Schwarzenbergstrasse 95, DE-21071 Hamburg,
Germany,
email: rump@tu-harburg.de
Van Snyder
Jet Propulsion Laboratory, 4800 Oak
Grove Drive, Mail Stop 183-701,
Pasadena, CA 91109, USA,
email: van.snyder@jpl.nasa.gov
Elisabeth Traviesas-Cassan
CERFACS, Toulouse, France. Presently
at TRANSICIEL Technologies, FR-31025
Toulouse Cedex, France,
e-mail: ecassan@mail.transiciel.com
Mladen A. Vouk
Department of Computer Science, Box
8206, North Carolina State University,
Raleigh, NC 27695, USA,
email: vouk@csc.ncsu.edu
G. William Walster
Sun Microsystems Laboratories, 16 Net-
work Circle, MS UMPK16-160, Menlo
Park, CA 94025, USA,
email: bill.walster@sun.com
Brian Wichmann
Retired,
email: Brian.Wichmann@bcs.org.uk
List of Figures
2.1 A model of computational modeling
2.2 A model of computational validation
3.1 f(x) for several values of the parameter
3.2 Error as a function of N for parameter values 0.5, 0.75, 1, 1.2, 1.3, and 1.5
3.3 I[f], Q4[f], Q6[f], Q8[f]
3.4 Reliability of the primitive error estimator
3.5 Naive automatic integration with requested error = 0.05
4.1 Floating-point number example
4.2 Hypotenuse of a right-angled triangle
4.3 Cubic equation example
4.4 Integral example
4.5 Linear equations example
4.6 Stable ODE example
4.7 Unstable ODE example
4.8 James Hardy Wilkinson (1919-1986)
5.1 The map for the matrix One in three dimensions
5.2 Map z → ρ(E(A - zI)^(-1)), A = B - E, with B = Venice and E is rank 1
5.3 Map y : z, A = B - E, with B = Venice
7.1 Approximate solution for predator-prey problem
7.2 Approximate solution for Lorenz problem
7.3 ETOL(x) with low tolerance
7.4 ETOL(x) with medium tolerance
7.5 ETOL(x) with high tolerance
7.6 Check 1, Check 2, and Check 4 results for low tolerance
7.7 Check 1, Check 2, and Check 4 results for medium tolerance
7.8 Check 1, Check 2, and Check 4 results for high tolerance
7.9 Check 3 results for low tolerance
7.10 Check 3 results for medium tolerance
7.11 Check 3 results for high tolerance
8.1 Java Scimark performance on a 333 MHz Sun Ultra 10 system using successive releases of the JVM
8.2 Performance of Scimark component benchmarks in C and Java on a 500 MHz Pentium III system
10.1 IBM prime letterhead
10.2 University of Illinois postmark
10.3 Pocket calculators with 8 decimal digits, no exponent
10.4 Point matrix times interval vector
10.5 INTLAB program to check |I - RA| x < x and therefore the nonsingularity of A
10.6 INTLAB program for checking nonsingularity of an interval matrix
10.7 Naive interval Gaussian elimination: Growth of rad(Uii)
10.8 Plot of the function in equation (10.8) with 20 meshpoints in x- and y-directions
10.9 Plot of the function in equation (10.8) with 50 meshpoints in x- and y-directions
10.10 INTLAB program for the inclusion of the determinant of a matrix
10.11 Comparison of a naive interval algorithm and a self-validating method
10.12 INTLAB program of the function in equation (10.9) with special treatment of
10.13 INTLAB results of verifynlss for Broyden's function (10.9)
10.14 INTLAB example of quadratic convergence
10.15 Dot product
10.16 Dot product with unrolled loop
10.17 ijk-loop for matrix multiplication
10.18 Top-down approach for point matrix times interval matrix
10.19 Improved code for point matrix times interval matrix
10.20 INTLAB program to convert inf-sup to mid-rad representation
10.21 INTLAB program for point matrix times interval matrix
10.22 INTLAB program for interval matrix times interval matrix
10.23 Inclusion as computed by verifylss for the example in Table 10.12
10.24 Inner and outer inclusions and true solution set for the linear system with tolerances in Table 10.12
10.25 Monte Carlo approach to estimate the solution set of a parameterized linear system
10.26 Result of Monte Carlo approach as in the program in Figure 10.12 for 100 x 100 random linear system with tolerances, projection of 1st and 2nd component of solution
10.27 Inner and outer inclusions of symmetric solution set for the example in Table 10.12
10.28 Nonzero elements of the matrix from Harwell/Boeing BCSSTK15
10.29 Approximation of eigenvalues of the matrix in Table 10.14 computed by MATLAB
11.1 Refinement patterns
11.2 An unstructured grid
12.1 Two orderings for summing 4 values. Shown in (a) is the ordering typically used by parallel algorithms. Shown in (b) is the natural "do loop" ordering
13.1 Empirical and modeled failure intensity
13.2 Observed and modeled cumulative failures
13.3 Empirical and modeled intensity profile obtained during an early testing phase
13.4 Empirical and modeled failures obtained during an early testing phase
13.5 Field recovery and failure rates for a telecommunications product
13.6 Unavailability fit using LPET and constant repair rate with data up to "cut-off point" only
13.7 Fraction of shipped defects for two ideal testing strategies based on sampling with and without replacement, and a nonideal testing under schedule and resource constraints
13.8 Illustration of failure space for three functionally equivalent programs
13.9 Illustration of comparison events
List of Tables
2.1 Sources of test problems for mathematical software
3.1 Some numerical results
3.2 Where the maximum appears to be
3.3 Location of maximum by combination of routines
4.1 IEEE 754 arithmetic formats
4.2 Forward recurrence for y_n
4.3 Asymptotic error bounds for Ax = λx
5.1 Properties of R(t, z) as a function of t and z
6.1 Unit roundoff u for IEEE 754 standard floating-point arithmetic
6.2 General scheme for generating data for a backward error analysis
7.1 Check 1 results
7.2 Check 2 results
7.3 Check 3 results
7.4 Check 4 results
9.1 Set-theoretic interval operators
9.2 Interval-specific intrinsic functions
10.1 Exponential growth of radii of forward substitution in equation (10.7) in naive interval arithmetic
10.2 Relative performance without and with unrolled loops
10.3 Relative performance for different methods for matrix multiplication
10.4 Performance of LAPACK routines using Intel MKL
10.5 Performance of ATLAS routines
10.6 Performance of algorithms in Figures 10.18 and 10.19 for point matrix times interval matrix
10.7 Relative performance of MATLAB matrix multiplication with interpretation overhead
10.8 Performance of interval matrix times interval matrix
10.9 Measured computing time for linear system solver
10.10 MATLAB solution of a linear system derived from (10.12) for n = 70 and T = 40
10.11 INTLAB example of verifynlss for Broyden's function (10.9) with uncertain parameters
10.12 INTLAB example of verifylss for a linear system with uncertain data
10.13 Computing time without and with verification
10.14 Matrix with ill-conditioned eigenvalues
11.1 Speedups using a cache aware multigrid algorithm on adaptively refined structured grids
13.1 Hypothetical telephone switch
13.2 User breakdown of the profile
13.3 Mode decomposition
13.4 Illustration of parameter values
13.5 Pairwise test cases
13.6 Example of standard deviation computation
Preface
Much of the software available today is poorly written, inadequate in its facilities,
and altogether a number of years behind the most advanced state of the art.
—Professor Maurice V. Wilkes, September 1973.
Scientific software is central to our computerized society. It is used to design airplanes
and bridges, to operate manufacturing lines, to control power plants and refineries, to analyze
financial derivatives, to map genomes, and to provide the understanding necessary for the
diagnosis and treatment of cancer. Because of the high stakes involved, it is essential that the
software be accurate and reliable. Unfortunately, developing accurate and reliable scientific
software is notoriously difficult, and Maurice Wilkes' assessment of 1973 still rings true
today. Not only is scientific software beset with all the well-known problems affecting
software development in general, it must cope with the special challenges of numerical
computation. Approximations occur at all levels. Continuous functions are replaced by
discretized versions. Infinite processes are replaced by finite ones. Real numbers are
replaced by finite precision numbers. As a result, errors are built into the mathematical
fabric of scientific software which cannot be avoided. At best they can be judiciously
managed. The nature of these errors, and how they are propagated, must be understood
if the resulting software is to be accurate and reliable. The objective of this book is to
investigate the nature of some of these difficulties, and to provide some insight into how to
overcome them.
The book is divided into three parts.
1. Pitfalls in Numerical Computation.
We first illustrate some of the difficulties in producing robust and reliable scientific
software. Well-known cases of failure by scientific software are reviewed, and the
"what" and "why" of numerical computations are considered.
2. Diagnostic Tools.
We next describe tools that can be used to assess the accuracy and reliability of existing
scientific applications. Such tools do not necessarily improve results, but they can be
used to increase one's confidence in their validity.
3. Technology for Improving Accuracy and Reliability.
We describe a variety of techniques that can be employed to improve the accuracy and
reliability of newly developed scientific applications. In particular, we consider the
effect of the choice of programming language, underlying hardware, and the parallel
xxi
xxii Preface
computing environment. We provide a description of interval data types and their
application to validated computations.
This book has been produced by the International Federation for Information
Processing (IFIP) Working Group 2.5. An arm of the IFIP Technical Committee 2 on Software
Practice and Experience, WG 2.5 seeks to improve the quality of numerical computation by
promoting the development and availability of sound numerical software. WG 2.5 has been
fortunate to be able to assemble a set of contributions from authors with a wealth of expe-
rience in the development and assessment of numerical software. The following WG 2.5
members participated in this project: Ronald Boisvert, Francoise Chaitin-Chatelin, Ronald
Cools, Craig Douglas, Bo Einarsson, Wayne Enright, Patrick Gaffney, Ian Gladwell, William
Gropp, Jim Pool, Siegfried Rump, Brian Smith, Van Snyder, Michael Thuné, Mladen Vouk,
and Wolfgang Walter. Additional contributions were made by Kenneth W. Dritz, Sven
Hammarling, Hans Petter Langtangen, Roldan Pozo, Elisabeth Traviesas-Cassan, Bill Walster,
and Brian Wichmann. The volume was edited by Bo Einarsson.
Several of the contributions have been presented at other venues in somewhat
different forms. Chapter 1 was presented at the Workshop on Scientific Computing and the
Computational Sciences, May 28-29, 2001, in Amsterdam, The Netherlands. Chapter 5
was presented at the IFIP Working Group 2.5 Meeting, May 26-27, 2001, in Amsterdam,
The Netherlands. Four of the chapters (6, 7, 10, and 13) are based on lectures presented at
the SIAM Minisymposium on Accuracy and Reliability in Scientific Computing held July
9, 2001, in San Diego, California. Chapter 10 was also presented at the Annual Conference
of Japan SIAM at Kyushu University, Fukuoka, October 7-9, 2001.
The book has an accompanying website, http://www.nsc.liu.se/wg25/book/,
with updates, codes, links, color versions of some of the illustrations, and additional
material.
A problem with references to links on the internet is that, as Diomidis Spinellis has
shown in [422], the half-life of a referenced URL is approximately four years from its
publication date. The accompanying website will contain updated links.
A number of trademarked products are identified in this book. Java, Java HotSpot,
and SUN are trademarks of Sun Microsystems, Inc. Pentium and Itanium are trademarks
of Intel. PowerPC is a trademark of IBM. Microsoft Windows is a trademark of Microsoft.
Apple is a trademark of Apple Computer, Inc. NAG is a trademark of The Numerical
Algorithms Group, Ltd. MATLAB is a trademark of The MathWorks, Inc.
While we expect that developing accurate and reliable scientific software will remain
a challenging enterprise for some time to come, we believe that techniques and tools are
now beginning to emerge to improve the process. If this volume aids in the recognition of
the problems and helps point developers in the direction of solutions, then this volume will
have been a success.
Linköping and Gaithersburg, September 15, 2004.
Bo Einarsson, Project Leader
Ronald F. Boisvert, Chair, IFIP Working Group 2.5
Preface xxiii
Acknowledgment
As editor I wish to thank the contributors and the additional project members for sup-
porting the project through submitting and refereeing the chapters. I am also very grateful
to the anonymous reviewers who did a marvelous job, gave constructive criticism and much
appreciated comments and suggestions. The final book did benefit quite a lot from their
work!
I also thank the National Supercomputer Centre and the Mathematics Department of
Linköpings universitet for supporting the project.
Working with SIAM on the publication of this book was a pleasure. Special thanks go
to Simon Dickey, Elizabeth Greenspan, Ann Manning Allen, and Linda Thiel, for all their
assistance.
Bo Einarsson
Part I
PITFALLS IN NUMERICAL
COMPUTATION
Chapter 1
What Can Go Wrong in
Scientific Computing?
Bo Einarsson
1.1 Introduction
Numerical software is central to our computerized society. It is used, for example, to design
airplanes and bridges, to operate manufacturing lines, to control power plants and refineries,
to analyze financial derivatives, to determine genomes, and to provide the understanding
necessary for the treatment of cancer. Because of the high stakes involved, it is essential
that the software be accurate, reliable, and robust.
A report [385] written for the National Institute of Standards and Technology (NIST)
states that software bugs cost the U.S. economy about $60 billion each year, and that more
than a third of that cost, or $22 billion, could be eliminated by improved testing. Note
that these figures apply to software in general, not only to scientific software. An article
[18] stresses that computer system failures are usually rooted in human error rather than
technology. The article gives a few examples: delivery problems at a computer company,
a radio system outage for air traffic control, and delayed financial aid. Many companies are
now working on reducing the complexity of testing while still requiring it to be robust.
The objective of this book is to investigate some of the difficulties related to scientific
computing, such as accuracy requirements and rounding, and to provide insight into how to
obtain accurate and reliable results.
This chapter serves as an introduction and consists of three sections. In Section 1.2
we discuss some basic problems in numerical computation, like rounding, cancellation, and
recursion. In Section 1.3 we discuss implementation of real arithmetic on computers, and
in Section 1.4 we discuss some cases where unreliable computations have caused loss of
life or property.
1.2 Basic Problems in Numerical Computation
Some illustrative examples are given in this section, but further examples and discussion
can be found in Chapter 4.
Two problems in numerical computation are that often the input values are not known
exactly, and that some of the calculations cannot be performed exactly. The errors obtained
can compound in later calculations, causing an error growth which may be quite large.
Rounding is the cause of an error, while cancellation increases its effect and recursion
may cause a build-up of the final error.
1.2.1 Rounding
The calculations are usually performed with a certain fixed number of significant digits,
so after each operation the result usually has to be rounded, introducing a rounding error
whose modulus in the optimal case is less than or equal to half a unit in the last digit. At the
next computation this rounding error has to be taken into account, as well as a new rounding
error. The propagation of the rounding error is therefore quite complex.
Example 1.1 (Rounding) Consider the following MATLAB code for advancing from a to
b with the step h = (b - a)/n.
function step(a,b,n)
% step from a to b with n steps
h=(b-a)/n;
x=a;
disp(x)
while x < b,
x = x + h;
disp(x)
end
We get one step too many with a = 1, b = 2, and n = 3, but the correct number of
steps with b = 1.1. In the first case, because of the rounding downward of h = 1/3, after
three steps we are almost but not quite at b, and therefore the loop continues. In the second
case b is also an inexact number on a binary computer, and the inexact values of x and b
happen to compare as wanted. It is advisable to let such a loop work with an integer variable
instead of a real variable. If real variables are used it is advisable to replace while x < b
with while x < b-h/2.
The example was run in IEEE 754 double precision, discussed in Section 1.3.2. In
another precision a different result may be obtained!
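The same behavior can be reproduced outside MATLAB. The sketch below (plain Python, an illustrative translation rather than the book's code; the function name is invented) counts the points visited and also implements the suggested while x < b - h/2 guard, which cures the extra step.

```python
def step_points(a, b, n, guarded=False):
    """Return the x values visited when stepping from a to b in n steps."""
    h = (b - a) / n
    x = a
    points = [x]
    limit = b - h / 2 if guarded else b   # guarded form stops reliably
    while x < limit:
        x = x + h
        points.append(x)
    return points

# With a = 1, b = 2, n = 3 we expect the 4 points 1, 4/3, 5/3, 2, but
# rounding h = 1/3 downward makes the unguarded loop take one step too many.
```

In IEEE 754 double precision, step_points(1.0, 2.0, 3) returns 5 points while the guarded call returns the expected 4.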
1.2.2 Cancellation
Cancellation occurs from the subtraction of two almost equal quantities. Assume x1 =
1.243 ± 0.0005 and x2 = 1.234 ± 0.0005. We then obtain x1 - x2 = 0.009 ± 0.001, a result
where several significant leading digits have been lost, resulting in a large relative error!
Example 1.2 (Quadratic equation) The roots of the equation ax² + bx + c = 0 are given
by the following mathematically, but not numerically, equivalent expressions:

x1,2 = (-b ± √(b² - 4ac)) / (2a)   and   x1,2 = 2c / (-b ∓ √(b² - 4ac)).

Using IEEE 754 single precision and a = 1.0·10⁻⁵, b = 1.0·10³, and c = 1.0·10³
we get x1 = -3.0518 and x2 = -1.0000·10⁸ from the first expression, and x1 = -1.0000
and x2 = -3.2768·10⁷ from the second. We
thus get two very different sets of roots for the equation! The reason is that since b² is much
larger than 4ac the square root will get a value very close to b, and when the subtraction
of two almost equal values is performed the error in the square root evaluation will dominate.
In double precision the value of the square root of 10⁶ - 0.04 is 999.9999799999998,
which is very close to b = 1000. The two correct roots in this case are one from each set,
x2 from the first expression and x1 from the second, for which there is addition of quantities
of the same sign, and no cancellation occurs.
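A numpy-based sketch (an illustration under the stated inputs, not the book's code) mimics the single precision computation; the stable variant picks the cancellation-free expression according to the sign of b.

```python
import numpy as np

def roots_naive(a, b, c):
    # Textbook formula evaluated in IEEE single precision:
    # -b + sqrt(b*b - 4ac) cancels badly when b*b >> 4ac.
    a, b, c = np.float32(a), np.float32(b), np.float32(c)
    d = np.sqrt(b * b - np.float32(4) * a * c)
    return (-b + d) / (np.float32(2) * a), (-b - d) / (np.float32(2) * a)

def roots_stable(a, b, c):
    # Let the sign of b pick the expression that adds same-sign quantities.
    a, b, c = np.float32(a), np.float32(b), np.float32(c)
    d = np.sqrt(b * b - np.float32(4) * a * c)
    q = -(b + np.copysign(d, b)) / np.float32(2)  # no cancellation here
    return c / q, q / a                           # the two roots

# For a = 1e-5, b = 1e3, c = 1e3 the naive small root is badly wrong,
# while the stable formulas recover both roots to single precision.
```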
Example 1.3 (Exponential function) The exponential function eˣ can be evaluated using
the Maclaurin series expansion. This works reasonably well for x > 0 but not for x < -3,
where the expansion terms aₙ will alternate in sign and the modulus of the terms will increase
until n ≈ |x|. Even for moderate values of x the cancellation can be so severe that a negative
value of the function is obtained!
Using double (or multiple) precision is not the cure for cancellation, but switching to
another algorithm may help. In order to avoid cancellation in Example 1.2 we let the sign
of b decide which formula to use, and in Example 1.3 we use the relation e⁻ˣ = 1/eˣ.
1.2.3 Recursion
A common method in scientific computing is to calculate a new entity based on the previous
one, and continuing in that way, either in an iterative process (hopefully converging) or in a
recursive process calculating new values all the time. In both cases the errors can accumulate
and finally destroy the computation.

Example 1.4 (Differential equation) Let us look at the solution of a first order differential
equation y' = f(x, y). A well-known numerical method is the Euler method yₙ₊₁ =
yₙ + h·f(xₙ, yₙ). Two alternatives with smaller truncation errors are the midpoint method
yₙ₊₁ = yₙ₋₁ + 2h·f(xₙ, yₙ), which has the obvious disadvantage that it requires two
starting points, and the trapezoidal method yₙ₊₁ = yₙ + (h/2)·[f(xₙ, yₙ) + f(xₙ₊₁, yₙ₊₁)],
which has the obvious disadvantage that it is implicit.

Theoretical analysis shows that the Euler method is stable¹ for small h, and the midpoint
method is always unstable, while the trapezoidal method is always stable. Numerical
experiments on the test problem y' = -2y with the exact solution y(x) = e⁻²ˣ confirm
that the midpoint method gives a solution which oscillates wildly.²

¹ Stability can be defined such that if the analytic solution tends to zero as the independent variable tends to
infinity, then also the numeric solution should tend to zero.
² Compare with Figures 4.6 and 4.7 in Chapter 4.
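A small Python experiment (illustrative, not the book's code) on the test problem y' = -2y shows Euler decaying as the exact solution does, while the midpoint method's parasitic root makes the numerical solution oscillate with growing amplitude.

```python
def euler(f, y0, h, n):
    # y_{k+1} = y_k + h*f(x_k, y_k)
    ys = [y0]
    for k in range(n):
        ys.append(ys[-1] + h * f(k * h, ys[-1]))
    return ys

def midpoint(f, y0, h, n):
    # y_{k+1} = y_{k-1} + 2h*f(x_k, y_k); second starting value from one Euler step
    ys = [y0, y0 + h * f(0.0, y0)]
    for k in range(1, n):
        ys.append(ys[-2] + 2 * h * f(k * h, ys[-1]))
    return ys

f = lambda x, y: -2.0 * y          # exact solution y = exp(-2x)
e = euler(f, 1.0, 0.1, 100)        # decays toward zero, as it should
m = midpoint(f, 1.0, 0.1, 100)     # oscillates in sign with growing amplitude
```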
1.2.4 Integer overflow
There is also a problem with integer arithmetic: integer overflow is usually not signaled.
This can cause numerical problems, for example, in the calculation of the factorial function
n!. Writing the code in a natural way using repeated multiplication on a computer with
32 bit integer arithmetic, the factorials up to 12! are all correctly evaluated, but 13! gets
the wrong value and 17! becomes negative. The range of floating-point values is usually
larger than the range of integers, but the best solution is usually not to evaluate the factorial
function. When evaluating the Taylor formula you instead successively multiply with x/n
for each n to get the next term, thus avoiding computing the large quantity n!. The
factorial function overflows at n = 35 in IEEE single precision and at n = 171 in IEEE
double precision.

Multiple integer overflow vulnerabilities in a Microsoft Windows library (before a
patch was applied) could have allowed an unauthenticated, remote attacker to execute
arbitrary code with system privileges [460].
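The silent wrap-around can be simulated in Python (whose own integers never overflow) by masking to the low 32 bits after every multiplication; this is a sketch of the behavior described above, not code from the book.

```python
def factorial_int32(n):
    # Repeated multiplication with silent 32 bit two's-complement wrap-around.
    acc = 1
    for k in range(2, n + 1):
        acc = (acc * k) & 0xFFFFFFFF             # keep only the low 32 bits
    return acc - 2**32 if acc >= 2**31 else acc  # reinterpret as signed

# factorial_int32(12) is the correct 479001600;
# factorial_int32(13) is wrong, and factorial_int32(17) is negative.
```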
1.3 Floating-point Arithmetic
During the 1960s almost every computer manufacturer had its own hardware and its own
representation of floating-point numbers. Floating-point numbers are used for variables
with a wide range of values, so that the value is represented by its sign, its mantissa,
represented by a fixed number of leading digits, and one signed integer for the exponent,
as in the representation of the mass of the earth 5.972·10²⁴ kg or the mass of an electron
9.10938188·10⁻³¹ kg.
An introduction to floating-point arithmetic is given in Section 4.2.
The old and different floating-point representations had some flaws; on one popular
computer there existed values a > 0 such that a > 2·a. This mathematical impossibility
was obtained from the fact that a nonnormalized number (a number with an exponent that
is too small to be represented) was automatically normalized to zero at multiplication.³

Such an effect can give rise to a problem in a program which tries to avoid division
by zero by checking that a ≠ 0, but still 1/a may cause the condition "division by zero."
1.3.1 Initial work on an arithmetic standard
During the 1970s Professor William Kahan of the University of California at Berkeley
became interested in defining a floating-point arithmetic standard; see [248]. He managed
to assemble a group of scientists, including both academics and industrial representatives
(Apple, DEC, Intel, HP, Motorola), under the auspices of the IEEE.⁴ The group became
known as project 754. Its purpose was to produce the best possible definition of floating-
point arithmetic. It is now possible to say that they succeeded; all manufacturers now follow
the representation of IEEE 754. Some old systems with other representations are however
³ Consider a decimal system with two digits for the exponent, and three digits for the mantissa, normalized so
that the mantissa is not less than 1 but less than 10. Then the smallest positive normalized number is 1.00·10⁻⁹⁹
but the smallest positive nonnormalized number is 0.01·10⁻⁹⁹.
⁴ Institute for Electrical and Electronic Engineers, USA.
still available from Cray, Digital (now HP), and IBM, but all new systems also from these
manufacturers follow IEEE 754.
The resulting standard was rather similar to the DEC floating-point arithmetic on the
VAX system.⁵
1.3.2 IEEE floating-point representation
The IEEE 754 standard [226] contains single, extended single, double, and extended double
precision. It became an IEEE standard [215] in 1985 and an IEC⁶ standard in 1989. There
is an excellent discussion in the book [355].
In the following subsections the formats for the different precisions are given, but the
standard includes much more than these formats. It requires correctly rounded operations
(add, subtract, multiply, divide, remainder, and square root) as well as correctly rounded
format conversion. There are four rounding modes (round down, round up, round toward
zero, and round to nearest), with round to nearest as the default. There are also five exception
types (invalid operation, division by zero, overflow, underflow, and inexact) which must be
signaled by setting a status flag.
1.3.2.1 IEEE single precision
Single precision is based on the 32 bit word, using 1 bit for the sign s, 8 bits for the biased
exponent e, and the remaining 23 bits for the fractional part f of the mantissa. The fact that
a normalized number in binary representation must have the integer part of the mantissa
equal to 1 is used, and this bit therefore does not have to be stored, which actually increases
the accuracy.

There are five cases:

1. e = 255 and f ≠ 0 give an x which is not a number (NaN).
2. e = 255 and f = 0 give infinity with its sign, x = (-1)ˢ · ∞.
3. 1 ≤ e ≤ 254, the normal case, x = (-1)ˢ · (1.f) · 2ᵉ⁻¹²⁷.
   Note that the smallest possible exponent gives numbers of the form
   x = (-1)ˢ · (1.f) · 2⁻¹²⁶.
4. e = 0 and f ≠ 0, gradual underflow, subnormal numbers, x = (-1)ˢ · (0.f) · 2⁻¹²⁶.
5. e = 0 and f = 0, zero with its sign, x = (-1)ˢ · 0.
The largest number that can be represented is (2 - 2⁻²³)·2¹²⁷ ≈ 3.4028·10³⁸,
the smallest positive normalized number is 1·2⁻¹²⁶ ≈ 1.1755·10⁻³⁸, and the smallest
positive nonnormalized number is 2⁻²³·2⁻¹²⁶ = 2⁻¹⁴⁹ ≈ 1.4013·10⁻⁴⁵. The unit roundoff
u = 2⁻²⁴ ≈ 5.9605·10⁻⁸ corresponds to about seven decimal digits.

⁵ After 22 successful years, starting 1978 with the VAX 11/780, the VAX platform was phased out. HP will
continue to support VAX customers.
⁶ The International Electrotechnical Commission handles information about electric, electronic, and electrotechnical
international standards and compliance and conformity assessment for electronics.
The concept of gradual underflow has been rather difficult for the user community
to accept, but it is very useful in that there is no unnecessary loss of information. Without
gradual underflow, a positive number less than the smallest permitted one must either be
rounded up to the smallest permitted one or replaced with zero, in both cases causing a large
relative error.

The NaN can be used to represent (zero/zero), (infinity - infinity), and other quantities
that do not have a known value. Note that the computation does not have to stop for overflow,
since the infinity can be used until a calculation with it does not give a well-determined
value. The sign of zero is useful only in certain cases.
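The five cases can be checked directly by unpacking the bit fields; the following Python sketch (an illustration using the standard struct module, not code from the book) classifies a value from its sign, biased exponent, and fraction.

```python
import struct

def decode_single(x):
    # View x as an IEEE 754 single and split it into its three fields.
    bits, = struct.unpack(">I", struct.pack(">f", x))
    s = bits >> 31                 # sign bit
    e = (bits >> 23) & 0xFF        # 8 bit biased exponent
    f = bits & 0x7FFFFF            # 23 fraction bits
    if e == 255:
        kind = "NaN" if f != 0 else "infinity"
    elif e == 0:
        kind = "subnormal" if f != 0 else "zero"
    else:
        kind = "normal"            # x = (-1)**s * (1.f) * 2**(e - 127)
    return s, e, f, kind
```

For example, decode_single(1.0) gives (0, 127, 0, "normal"), and the smallest subnormal 1.4013·10⁻⁴⁵ comes back with e = 0.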
1.3.2.2 IEEE extended single precision
The purpose of extended precision is to make it possible to evaluate subexpressions to full
single precision. The details are implementation dependent, but the number of bits in the
fractional part f has to be at least 31, and the exponent, which may be biased, has to
have at least the range -1022 ≤ exponent ≤ 1023. IEEE double precision satisfies these
requirements!
1.3.2.3 IEEE double precision
Double precision is based on two 32 bit words (or one 64 bit word), using 1 bit for the sign
s, 11 bits for the biased exponent e, and the remaining 52 bits for the fractional part f of
the mantissa. It is very similar to single precision, with an implicit bit for the integer part
of the mantissa, a biased exponent, and five cases:

1. e = 2047 and f ≠ 0 give an x which is not a number (NaN).
2. e = 2047 and f = 0 give infinity with its sign, x = (-1)ˢ · ∞.
3. 1 ≤ e ≤ 2046, the normal case, x = (-1)ˢ · (1.f) · 2ᵉ⁻¹⁰²³.
   Note that the smallest possible exponent gives numbers of the form
   x = (-1)ˢ · (1.f) · 2⁻¹⁰²².
4. e = 0 and f ≠ 0, gradual underflow, subnormal numbers, x = (-1)ˢ · (0.f) · 2⁻¹⁰²².
5. e = 0 and f = 0, zero with its sign, x = (-1)ˢ · 0.
The largest number that can be represented is (2 - 2⁻⁵²)·2¹⁰²³ ≈ 1.7977·10³⁰⁸,
the smallest positive normalized number is 1·2⁻¹⁰²² ≈ 2.2251·10⁻³⁰⁸, and the smallest
positive nonnormalized number is 2⁻⁵²·2⁻¹⁰²² = 2⁻¹⁰⁷⁴ ≈ 4.9407·10⁻³²⁴. The unit
roundoff u = 2⁻⁵³ ≈ 1.1102·10⁻¹⁶ corresponds to about 16 decimal digits.
The fact that the exponent is wider for double precision is a useful innovation not
available, for example, on the IBM System 360.⁷ On the DEC VAX/VMS two different
double precisions D and G were available: D with the same exponent range as in single
precision, and G with a wider exponent range. The choice between the two double precisions
was done via a compiler switch at compile time. In addition there was a quadruple
precision H.
1.3.2.4 IEEE extended double precision
The purpose of extended double precision is to make it possible to evaluate subexpressions
to full double precision. The details are implementation dependent, but the number of
bits in the fractional part f has to be at least 63, and the number of bits in the exponent part
has to be at least 15.
1.3.3 Future standards
There is also an IEEE Standard for Radix-Independent Floating-Point Arithmetic, ANSI/
IEEE 854 [217]. This is however of less interest here.

Currently double precision is not always sufficient, so quadruple precision is available
from many manufacturers. There is no official standard available; most manufacturers
generalize the IEEE 754, but some (including SGI) have another convention. SGI quadruple
precision is very different from the usual quadruple precision, which has a very large range;
with SGI the range is about the same in double and quadruple precision. The reason is
that here the quadruple variables are represented as the sum or difference of two doubles,
normalized so that the smaller double is 0.5 units in the last position of the larger. Care
must therefore be taken when using quadruple precision.
Packages for multiple precision also exist.
1.4 What Really Went Wrong in Applied Scientific Computing!
We include only examples where numerical problems have occurred, not the more common
pure programming errors (bugs). More examples are given in the paper [424] and in the
Thomas Huckle web site Collection of Software Bugs [212]. Quite a different view is taken
in [4], in which how to get the mathematics and numerics correct is discussed.
1.4.1 Floating-point precision
Floating-point precision has to be sufficiently accurate to handle the task. In this section we
give some examples where this has not been the case.
⁷ IBM System 360 was announced in April 1964 and was a very successful spectrum of compatible computers
that continues in an evolutionary form to this day. Following System 360 was System 370, announced in June
1970. The line after System 370 continued under different names: 303X (announced in March 1977), 308X, 3090,
9021, 9121, and the zSeries. These machines all share a common heritage. The floating-point arithmetic is
hexadecimal.
1.4.1.1 Patriot missile
A well-known example is the Patriot missile8
failure [158, 419] with a Scud missile9
on
February 25, 1991, at Dhahran, Saudi Arabia. The Patriot missile was designed, in order
to avoid detection, to operate for only a few hours at one location. The velocity of the
incoming missile is a floating-point number but the time from the internal clock is an integer,
represented by the time in tenths of a second. Before that time is used, the integer number
is multiplied with a numerical approximation of 0.1 to 24 bits, causing an error 9.5 • 10_8
in the conversion factor. The inaccuracy in the position of the target is proportional to the
product of the target velocity and the length of time the system has been running. This
is a somewhat oversimplified discussion; a more detailed one is given in [419]. With the
system up and running for 100 hours and a velocity of the Scud missile of 1676 meters per
second, an error of 573 meters is obtained, which is more than sufficient to cause failure of
the Patriot and success for the Scud. The Scud missile killed 28 soldiers.
Modified software, which compensated for the inaccurate time calculation, arrived
the following day. The potential problem had been identifiedby the Israelis and reported to
the Patriot Project Office on February 11, 1991.
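The arithmetic behind the error estimate can be reproduced with exact rational arithmetic. The Python sketch below is an illustration, not from the book, and assumes the common reconstruction in which 0.1 is chopped to 23 binary digits after the point in the fixed-point clock register.

```python
from fractions import Fraction

# 0.1 chopped to 23 binary digits after the point (illustrative reconstruction
# of the Patriot's fixed-point clock conversion factor).
chopped = Fraction(int(Fraction(1, 10) * 2**23), 2**23)
err = float(Fraction(1, 10) - chopped)   # ~9.5e-8 per tenth-of-a-second tick

ticks = 100 * 3600 * 10                  # clock ticks in 100 hours
clock_drift = ticks * err                # ~0.34 seconds of accumulated drift
miss = 1676 * clock_drift                # a few hundred meters at Scud speed,
                                         # consistent with the text's 573 m
```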
1.4.1.2 The Vancouver Stock Exchange
The Vancouver Stock Exchange (see the references in [154]) in 1982 experienced a problem
with its index. The index (with three decimals) was updated (and truncated) after each
transaction. After 22 months it had fallen from the initial value 1000.000 to 524.881, but
the correctly evaluated index was 1098.811.
Assuming 2000 transactions a day a simple statistical analysis gives directly that the
index will lose one unit per day, since the mean truncation error is 0.0005 per transaction.
Assuming 22 working days a month the index would be 516 instead of the actual (but still
false) 524.881.
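A quick simulation (illustrative Python, with a made-up per-trade distribution) shows the truncation bias: each three-decimal truncation discards on average 0.0005, so 2000 daily updates cost about one index point per day, as the analysis above states.

```python
import random

random.seed(1)                      # fixed seed for reproducibility
true_index = shown_index = 1000.000
for _ in range(2000):               # one trading day of updates
    change = random.uniform(-0.3, 0.3)   # hypothetical per-trade move
    true_index += change
    # Truncate (not round) to three decimals after each transaction:
    shown_index = int((shown_index + change) * 1000) / 1000
drift = true_index - shown_index    # about one index point lost per day
```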
1.4.1.3 Schleswig-Holstein local elections
In the Schleswig-Holstein local elections in 1992, one party got 5.0 % in the printed results
(which were correctly rounded to one decimal), but the correct value rounded to two decimals
was 4.97 %, and the party did therefore not pass the 5 % threshold for getting into the local
parliament, which in turn caused a switch of majority [481]. Similar rules apply not only
in German elections. A special rounding algorithm is required at the threshold, truncating
all values between 4.9 and 5.0 in Germany, or all values between 3.9 and 4.0 in Sweden!
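The special rule can be sketched in a few lines of Python (an illustration of the idea, not an official electoral algorithm): plain rounding to one decimal turns 4.97 % into 5.0 %, so values just under the threshold must be truncated instead.

```python
def printed_naive(p):
    # Plain round-to-one-decimal: 4.97 becomes 5.0 -- over the threshold!
    return round(p, 1)

def printed_threshold_safe(p, threshold=5.0):
    # Never round a value below the threshold up to it: truncate in the
    # interval just under the threshold (4.9 to 5.0 in the German case).
    if threshold - 0.1 <= p < threshold:
        return threshold - 0.1
    return round(p, 1)
```

Here printed_naive(4.97) gives 5.0 while printed_threshold_safe(4.97) gives 4.9, keeping the party correctly below the threshold.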
⁸ The Patriot is a long-range, all-altitude, all-weather air defense system to counter tactical ballistic missiles,
cruise missiles, and advanced aircraft. Patriot missile systems were deployed by U.S. forces during the First Gulf
War. The systems were stationed in Kuwait and destroyed a number of hostile surface-to-surface missiles.
⁹ The Scud was first deployed by the Soviets in the mid-1960s. The missile was originally designed to carry
a 100 kiloton nuclear warhead or a 2000 pound conventional warhead, with ranges from 100 to 180 miles. Its
principal threat was its warhead's potential to hold chemical or biological agents. The Iraqis modified Scuds for
greater range, largely by reducing warhead weight, enlarging their fuel tanks, and burning all of the fuel during
the early phase of flight. It has been estimated that 86 Scuds were launched during the First Gulf War.
1.4.1.4 Criminal usages of roundoff
Criminal usages of roundoff have been reported [30], involving many tiny withdrawals
from bank accounts. The common practice of rounding to whole units of the currency (for
example, to dollars, removing the cents) implies that the cross sums do not exactly agree,
which diminishes the chance/risk of detecting the fraud.
1.4.1.5 Euro-conversion
A problem is connected with the European currency Euro, which replaced 12 national
currencies from January 1, 2002. Partly due to the strictly defined conversion rules, the
roundoff can have a significant impact [162]. A problem is that the conversion factors from
old local currencies have six significant decimal digits, thus permitting a varying relative
error, and for small amounts the final result is also to be rounded according to local customs
to at most two decimals.
1.4.2 Illegal conversion between data types
On June 4, 1996, an unmanned Ariane 5 rocket launched by the European Space Agency
exploded forty seconds after its lift-off from Kourou, French Guiana. The Report by the
Inquiry Board [292, 413] found that the failure was caused by the conversion of a 64 bit
floating-point number to a 16 bit signed integer. The floating-point number was too large
to be represented by a 16 bit signed integer (larger than 32767). In fact, this part of the
software was required in Ariane 4 but not in Ariane 5!
A somewhat similar problem is illegal mixing of different units of measurement (SI,
Imperial, and U.S.). An example is the Mars Climate Orbiter which was lost on entering orbit
around Mars on September 23, 1999. The "root cause" of the loss was that a subcontractor
failed to obey the specification that SI units should be used and instead used Imperial units
in their segment of the ground-based software; see [225]. See also pages 35-38 in the
book [440].
1.4.3 Illegal data
A crew member of the USS Yorktown mistakenly entered a zero for a data value in September
1997, which resulted in a division by zero. The error cascaded and eventually shut down
the ship's propulsion system. The ship was dead in the water for 2 hours and 45 minutes;
see [420, 202].
1.4.4 Inaccurate finite element analysis
On August 23, 1991, the Sleipner A offshore platform collapsed in Gandsfjorden near
Stavanger, Norway. The conclusion of the investigation [417] was that the loss was caused
by a failure in a cell wall, resulting in a serious crack and a leakage that the pumps could
not handle. The wall failed as a result of a combination of a serious usage error in the
finite element analysis (using the popular NASTRAN code) and insufficient anchorage of
the reinforcement in a critical zone.
The shear stress was underestimated by 47 %, leading to insufficient strength in the
design. A more careful finite element analysis after the accident predicted that failure would
occur at 62 meters depth; it did occur at 65 meters.
1.4.5 Incomplete analysis
The Millennium Bridge [17, 24, 345] over the Thames in London was closed on June 12,
directly after its public opening on June 10, 2000, since it wobbled more than expected. The
simulations performed during the design process handled the vertical force (which was all
that was required by the British Standards Institution) of a pedestrian at around 2 Hz, but
not the horizontal force at about 1 Hz. What happened was that the slight wobbling (within
tolerances) due to the wind caused the pedestrians to walk in step (synchronous walking),
which made the bridge wobble even more.

On the first day almost 100,000 persons walked the bridge. This wobbling problem
was actually noted already in 1975 on the Auckland Harbour Road Bridge in New Zealand,
when a demonstration walked that bridge, but that incident was never widely published. The
wobbling started suddenly when a certain number of persons were walking the Millennium
Bridge; in a test 166 persons were required for the problem to appear. The bridge was
reopened in 2002, after 37 viscous dampers and 54 tuned mass dampers were installed and
all the modifications were carefully tested. The modifications were completely successful.
1.5 Conclusion
The aim of this book is to diminish the risk of future occurrences of incidents like those
described in the previous section. The techniques and tools to achieve this goal are now
emerging; some of them are presented in parts II and III.
Chapter 2
Assessment of Accuracy
and Reliability
Ronald F. Boisvert, Ronald Cools, and
Bo Einarsson
One of the principal goals of scientific computing is to provide predictions of the
behavior of systems to aid in decision making. Good decisions require good predictions.
But how are we to be assured that our predictions are good? Accuracy and reliability are
two qualities of good scientific software. Assessing these qualities is one way to provide
confidence in the results of simulations. In this chapter we provide some background on
the meaning of these terms.¹⁰
2.1 Models of Scientific Computing
Scientific software is particularly difficult to analyze. One of the main reasons for this is
that it is inevitably infused with uncertainty from a wide variety of sources. Much of this
uncertainty is the result of approximations. These approximations are made in the context
of each of the physical world, the mathematical world, and the computer world.
To make these ideas more concrete, let's consider a scientific software system designed
to allow virtual experiments to be conducted on some physical system. Here, the
scientist hopes to develop a computer program which can be used as a proxy for some real
world system to facilitate understanding. The reason for conducting virtual experiments
is that developing and running a computer program can be much more cost effective than
developing and running a fully instrumented physical experiment (consider a series of crash
tests for a car, for example). In other cases performing the physical experiment can be
practically impossible—for example, a physical experiment to understand the formation of
galaxies! The process of abstracting the physical system to the level of a computer program
is illustrated in Figure 2.1. This process occurs in a sequence of steps.
• From real world to mathematical model
A length scale is selected which will allow the determination of the desired results
using a reasonable amount of resources, for example, atomic scale (nanometers) or
¹⁰ Portions of this chapter were contributed by NIST, and are not subject to copyright in the USA.
macro scale (kilometers).

Figure 2.1. A model of computational modeling.

Next, the physical quantities relevant to the study, such
as temperature and pressure, are selected (and all other effects implicitly discarded).
Then, the physical principles underlying the real world system, such as conservation
of energy and mass, lead to mathematical relations, typically partial differential
equations (PDEs), that express the mathematical model. Additional mathematical
approximations may be introduced to further simplify the model. Approximations can
include discarded effects and inadequately modeled effects (e.g., discarded terms in
equations, linearization).
• From mathematical model to computational model
The equations expressing the mathematical model are typically set in some infinite
dimensional space. In order to admit a numerical solution, the problem is transformed
to a finite dimensional space by some discretization process. Finite differences and
finite elements are examples of discretization methods for PDE models. In such
computational models one must select an order of approximation for derivatives. One also
introduces a computational grid of some type. It is desirable that the discrete model
converges to the continuous model as either the mesh width approaches zero or the
order of approximation approaches infinity. In this way, accuracy can be controlled
using these parameters of the numerical method. A specification for how to solve the
discrete equations must also be provided. If an iterative method is used (which is certainly
the case for nonlinear problems), then the solution is obtained only in the limit,
and hence a criterion for stopping the iteration must be specified. Approximations
can include discretization of domain, truncation of series, linearization, and stopping
before convergence.
• From computational model to computer implementation
The computational model and its solution procedure are implemented on a particular
computer system. The algorithm may be modified to make use of parallel processing
capabilities or to take advantage of the particular memory hierarchy of the
device. Many arithmetic operations are performed using floating-point arithmetic.
Approximations can include floating-point arithmetic and approximation of standard
mathematical or special functions (typically via calls to library routines).
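The idea that accuracy is controlled by the parameters of the numerical method can be seen in miniature with a central difference quotient (an illustrative Python sketch, not from the book): reducing the mesh width h by a factor of ten should reduce the error by roughly a factor of a hundred.

```python
import math

def dcentral(f, x, h):
    # Central difference approximation of f'(x); truncation error is O(h**2).
    return (f(x + h) - f(x - h)) / (2.0 * h)

err1 = abs(dcentral(math.sin, 1.0, 1e-2) - math.cos(1.0))
err2 = abs(dcentral(math.sin, 1.0, 1e-3) - math.cos(1.0))
# err2 is roughly 100x smaller than err1 (second-order convergence),
# until rounding error eventually takes over for very small h.
```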
2.2 Verification and Validation
If one is to use the results of a computer simulation, then one must have confidence that the
answers produced are correct. However, absolute correctness may be an elusive quantity
in computer simulation. As we have seen, there will always be uncertainty: uncertainty in
the mathematical model, in the computational model, in the computer implementation, and
in the input data. A more realistic goal is to carefully characterize this uncertainty. This is
the main goal of verification and validation.
• Code verification
This is the process of determining the extent to which the computer implementation
corresponds to the computational model. If the latter is expressed as a specification,
then code verification is the process of determining whether an implementation
corresponds to its lowest level algorithmic specification. In particular, we ask whether
the specified algorithm has been correctly implemented, not whether it is an effective
algorithm.
• Solution verification
This is the process of determining the extent to which the computer implementation
corresponds to the mathematical model. Assuming that the code has been verified
(i.e., that the algorithm has been correctly implemented), solution verification asks
whether the underlying numerical methods correctly produce solutions to the abstract
mathematical problem.
• Validation
This is the process of determining the extent to which the computer implementation
corresponds to the real world. If solution verification has already been demonstrated,
then the validation asks whether the mathematical model is effective in simulating
those aspects of the real world system under study.
Of course, neither the mathematical nor the computational model can be expected to
be valid in all regions of their own parameter spaces. The validation process must confirm
these regions of validity.
Figure 2.2 illustrates how verification and validation are used to quantify the relationship
between the various models in the computational science and engineering process. In a
rough sense, validation is the aim of the application scientist (e.g., the physicist or chemist)
who will be using the software to perform virtual experiments. Solution verification is the
aim of the numerical analyst. Finally, code verification is the aim of the programmer.

Figure 2.2. A model of computational validation.
There is now a large literature on the subject of verification and validation. Nevertheless,
the words themselves remain somewhat ambiguous, with different authors often
assigning slightly different meanings. For software in general, the IEEE adopted the
following definitions in 1984 (they were subsequently adopted by various other organizations
and communities, such as the ISO11).
• Verification: The process of evaluating the products of a software development phase
to provide assurance that they meet the requirements defined for them by the previous
phase.
• Validation: The process of testing a computer program and evaluating the results to
ensure compliance with specific requirements.
These definitions are general in that "requirements" can be given different meanings for
different application domains. For computational simulation, the U.S. Defense Modeling
and Simulation Office (DMSO) proposed the following definitions (1994), which were
subsequently adopted in the context of computational fluid dynamics by the American
Institute of Aeronautics and Astronautics [7].

11 International Standards Organization.
• Verification: The process of determining that a model implementation accurately
represents the developer's conceptual description of the model and the solution to the
model.
• Validation: The process of determining the degree to which a model is an accurate
representation of the real world from the perspective of the intended users of the
model.
The DMSO definitions can be regarded as special cases of the IEEE ones, given
appropriate interpretations of the word "requirements" in the general definitions. In the
DMSO proposal, the verification is with respect to the requirement that the implementation
should correctly realize the mathematical model. The DMSO validation is with respect to
the requirement that the results generated by the model should be sufficiently in agreement
with the real world phenomena of interest, so that they can be used for the intended purpose.
The definitions used in the present book, given in the beginning of this chapter, differ
from those of DMSO in that they make a distinction between the computational model and
the mathematical model. In our opinion, this distinction is so central in scientific computing
that it deserves to be made explicit in the verification and validation processes. The model
actually implemented in the computer program is the computational one. Consequently,
according to the IEEE definition, validation is about testing that the computational model
fulfills certain requirements, ultimately those of the DMSO definition of validation. The
full validation can then be divided into two levels. The first level of validation (solution
verification) will be to demonstrate that the computational model is a sufficiently accurate
representation of the mathematical one. The second level of validation is to determine that
the mathematical model is sufficiently effective in reproducing properties of the real world.
The book [383] and the more recent paper [349] present extensive reviews of the
literature in computational validation, arguing for the separation of the concepts of error
and uncertainty in computational simulations. A special issue of Computing in Science and
Engineering (see [452]) has been devoted to this topic.
2.3 Errors in Software
When we ask whether a program is "correct" we want to know whether it faithfully follows its
lowest-level specifications. Code verification is the process by which we establish
correctness in this sense. In order to understand the verification process, it is useful to be mindful
of the most typical types of errors encountered in scientific software. Scientific software is
prone to many of the same problems as software in other areas of application. In this section
we consider these. Problems unique to scientific software are considered in Section 2.5.
Chapter 5 of the book [440] introduces a broad classification of bugs organized by the
original source of the error, i.e., how the error gets into the program, that the authors Telles
and Hsieh call "Classes of Bugs." We summarize them in the following list.
• Requirement bugs.
The specification itself could be inadequate. For example, it could be too vague,
miss a critical requirement, or contain two conflicting requirements.
• Implementation bugs.
These are the bugs in the logic of the code itself. They include problems like not fol-
lowing the specification, not correctly handling all input cases, missing functionality,
problems with the graphical user interface, improper memory management, and coding
errors.
• Process bugs.
These are bugs in the runtime environment of the executable program, such as
improper versions of dynamically linked libraries or broken databases.
• Build bugs.
These are bugs in the procedure used to build the executable program. For example,
a product may be built for a slightly wrong environment.
• Deployment bugs.
These are problems with the automatic updating of installed software.
• Future planning bugs.
These are bugs like the year 2000 problem, where the lifetime of the product was
underestimated, or the technical development went faster than expected (e.g., the 640
KiB12 limit of MS-DOS).
• Documentation bugs.
The software documentation, which should be considered as an important part of
the software itself, might be vague, incomplete, or inaccurate. For example, when
providing information on a library routine's procedure call it is important to provide
not only the meaning of each variable but also its exact type, as well as any possible
side effects from the call.
The book [440] also provides a list of common bugs in the implementation of soft-
ware. We summarize them in the following list.
• Memory or resource leaks.
A memory leak occurs when memory is allocated but not deallocated when it is no
longer required. It can cause the memory associated with long-running programs
to grow in size to the point that they overwhelm existing memory. Memory leaks
can occur in any programming language and are sometimes caused by programming
errors.
• Logic errors.
A logic error occurs when a program is syntactically correct but does not perform
according to the specification.
• Coding errors.
Coding errors occur when an incorrect list of arguments to a procedure is used, or
a part of the intended code is simply missing. Others are more subtle. A classical
example of the latter is the Fortran statement DO 25 I = 1.10, which in Fortran 77
(and earlier, and also in the obsolescent Fortran 90/95 fixed form) assigns the variable
DO25I the value 1.10 instead of creating the intended DO loop, which requires the
period to be replaced with a comma.

12 "Ki" is the IEC standard for the factor 2^10 = 1024, corresponding to "k" for the factor 1000, and "B" stands
for "byte." See, for example, http://physics.nist.gov/cuu/Units/binary.html.
• Memory overruns.
Computing an index incorrectly can lead to the access of memory locations outside the
bounds of an array, with unpredictable results. Such errors are less likely in modern
high-level programming languages but may still occur.
• Loop errors.
Common cases are unintended infinite loops, off-by-one loops (i.e., loops executed
once too often or not enough), and loops with improper exits.
• Conditional errors.
It is quite easy to make a mistake in the logic of an if-then-else-endif construct. A
related problem arises in some languages, such as Pascal, that do not require an explicit
endif.
• Pointer errors.
Pointers may be uninitialized, deleted (but still used), or invalid (pointing to something
that has been removed).
• Allocation errors.
Allocation and deallocation of objects must be done according to proper conventions.
For example, if you wish to change the size of an allocated array in Fortran, you must
first check if it is allocated, then deallocate it (and lose its content), and finally allocate
it to its correct size. Attempting to reallocate an existing array will lead to an error
condition detected by the runtime system.
• Multithreaded errors.
Programs made up of multiple independent and simultaneously executing threads are
subject to many subtle and difficult to reproduce errors sometimes known as race
conditions. These occur when two threads try to access or modify the same memory
address simultaneously, but correct operation requires a particular order of access.
• Timing errors.
A timing error occurs when two events are designed and implemented to occur at
a certain rate or within a certain margin. They are most common in connection
with interrupt service routines and are restricted to environments where the clock is
important. One symptom is input or output hanging and not resuming.
• Distributed application errors.
Such an error is defined as an error in the interface between two applications in a
distributed system.
• Storage errors.
These errors occur when a storage device gets a soft or hard error and is unable to
proceed.
• Integration errors.
These occur when two fully tested and validated individual subsystems are combined
but do not cooperate as intended.
• Conversion errors.
Data in use by the application might be given in the wrong format (integer, floating
point, ...) or in the wrong units (m, cm, feet, inches, ...). An unanticipated result
from a type conversion can also be classified as a conversion error. A problem of this
nature occurs in many compiled programming languages by the assignment of 1/2
to a variable of floating-point type. The rules of integer arithmetic usually state that
when two integers are divided there is an integer result. Thus, if the variable A is
of a floating-point type, then the statement A = 1/2 will result in an integer zero,
converted to a floating-point zero assigned to the variable A, since 1 and 2 are integer
constants, and integer division truncates toward zero.
• Hard-coded lengths or sizes.
If objects like arrays are defined to have a fixed size, then care must be
taken that no problem instance will be permitted a larger size. It is best to avoid such
hard-coded sizes, either by using allocatable arrays that yield a correctly sized array
at runtime, or by parameterizing object sizes so that they are easily and consistently
modified.
• Version bugs.
In this case the functionality of a program unit or a data storage format is changed
between two versions, without backward compatibility. In some cases individual
version changes may be compatible, but not over several generations of changes.
• Inappropriate reuse bugs.
Program reuse is normally encouraged, both to reduce effort and to capitalize upon
the expertise of subdomain specialists. However, old routines that have been
carefully tested and validated under certain constraints may cause serious problems if
those constraints are not satisfied in a new environment. It is important that high
standards of documentation and parameter checking be set for reuse libraries to avoid
incompatibilities of this type.
• Boolean bugs.
The authors of [440] note on page 159 that "Boolean algebra has virtually nothing
to do with the equivalent English words. When we say 'and', we really mean the
Boolean 'or' and vice versa." This leads to misunderstandings among both users and
programmers. Similarly, the meaning of "true" and "false" in the code may be unclear.
Looking carefully at the examples of the Patriot missile, section 1.4.1.1, and the
Ariane 5 rocket, section 1.4.2, we observe that in addition to the obvious conversion errors
and storage errors, inappropriate reuse bugs were also involved. In each case the failing
software was taken from earlier and less advanced hardware equipment, where the software
had worked well for many years. They were both well tested, but not in the new environment.
2.4 Precision, Accuracy, and Reliability
Many of the bugs listed in the previous section lead to anomalous behavior that is easy to
recognize, i.e., the results are clearly wrong. For example, it is easy to see if the output of
a program to sort data is really sorted. In scientific computing things are rarely so clear.
Consider a program to compute the roots of a polynomial. Checking the output here seems
easy; one can evaluate the function at the computed points. However, the result will seldom
be exactly zero. How close to zero does this residual have to be to consider the answer
correct? Indeed, for ill-conditioned polynomials the relative sizes of residuals provide a
very unreliable measure of accuracy. In other cases, such as the evaluation of a definite
integral, there is no such "easy" method.
Verification and validation in scientific computing, then, is not a simple process that
gives yes or no as an answer. There are many gradations. The concepts of accuracy and
reliability are used to characterize such gradations in the verification and validation of
scientific software. In everyday language the words accuracy, reliability, and the related
concept of precision are somewhat ambiguous. When used as quality measures they should
not be ambiguous. We use the following definitions.
• Precision refers to the number of digits used for arithmetic, input, and output.
• Accuracy refers to the absolute or relative error of an approximate quantity.
• Reliability measures how "often" (as a percentage) the software fails, in the sense
that the true error is larger than what is requested.
Accuracy is a measure of the quality of the result. Since achieving a prescribed accu-
racy is rarely easy in numerical computations, the importance of this component of quality
is often underweighted.
We note that determining accuracy requires the comparison to something external (the
"exact" answer). Thus, stating the accuracy requires that one specifies what one is compar-
ing against. The "exact" answer may be different for each of the computational model, the
mathematical model, and the real world. In this book most of our attention will be on solu-
tion verification;hence we will be mostly concerned with comparison to the mathematical
model.
To determine accuracy one needs a means of measuring (or estimating) error. Absolute
error and relative error are two important such measures. Absolute error is the magnitude
of the difference between a computed quantity x and its true value x*, i.e.,

    absolute error = |x - x*|.

Relative error is the ratio of absolute error to the magnitude of the true value, i.e.,

    relative error = |x - x*| / |x*|.

Relative error provides a method of characterizing the percentage error; when the relative
error is less than one, the negative of the log10 of the relative error gives the number of
significant decimal digits in the computed solution. Relative error is not so useful a measure
as x* approaches 0; one often switches to absolute error in this case. When the computed
solution is a multicomponent quantity, such as a vector, then one replaces the absolute values
by an appropriate norm.
The terms precision and accuracy are frequently used inconsistently. Furthermore,
the misconception that high precision implies high accuracy is almost universal.
Dahlquist and Björck [108] use the term precision for the accuracy with which the
basic arithmetic operations +, −, ×, and / are performed by the underlying hardware. For
floating-point operations this is given by the unit roundoff u.13 But even on that there is no
general agreement. One should be specific about the rounding mode used.

13 The unit roundoff u can roughly be considered as the largest positive floating-point number u such that
1 + u = 1 in computer arithmetic. Because repeated rounding may occur this is not very useful as a strict
definition. The formal definition of u is given by (4.1).
22 Chapter 2. Assessment of Accuracy and Reliability
The difference between (traditional) rounding and truncation played an amusing role
in the 100 digit challenge posed in [447, 446]. Trefethen did not specify whether digits
should be rounded or truncated when he asked for "correct digits." First he announced 18
winners, but he corrected this a few days later to 20. In the interview included in [44] he
explains that two teams persuaded him that he had misjudged one of their answers by 1
digit. This was a matter of rounding instead of truncating.
Reliability in scientific software is considered in detail in Chapter 13. Robustness
is a concept related to reliability that indicates how "gracefully" the software fails (this is
nonquantitative) and also its sensitivity to small changes in the problem (this is related to
condition numbers). A robust program knows when it might have failed and reports that
fact.
2.5 Numerical Pitfalls Leading to Anomalous Program
Behavior
In this section we illustrate some common pitfalls unique to scientific computing that can
lead to the erosion of accuracy and reliability in scientific software. These "numerical bugs"
arise from the complex structure of the mathematics underlying the problems being solved
and the sometimes fragile nature of the numerical algorithms and floating-point arithmetic
necessary to solve them. Such bugs are often subtle and difficult to diagnose. Some of these
are described more completely in Chapter 4.
• Improper treatment of constants with infinite decimal expansions.
The coding of constants with infinite decimal expansions, like π, √2, or even 1/9,
can have profound effects on the accuracy of a scientific computation. One will never
achieve 10 decimal digits of accuracy in a computation in which π is encoded as
3.14159 or 1/9 is encoded as 0.1111. To obtain high accuracy and portability such
constants should, whenever possible, be declared as constants (and thus computed at
compile time) or be computed at runtime. In some languages, e.g., MATLAB, π is
stored to double precision accuracy and is available by a function call.
For the constants above we can in Fortran 95 use the working precision wp, with at
least 10 significant decimal digits:
integer, parameter :: wp = selected_real_kind(10)
real(kind=wp), parameter :: one = 1.0_wp, two = 2.0_wp, &
& four = 4.0_wp, ninth = 1.0_wp/9.0_wp
real(kind=wp) :: pi, sqrt2
pi = four*atan(one)
sqrt2 = sqrt(two)
• Testing on floating-point equality.
In scientific computations approximations and roundoff lead to quantities that are
rarely exact. So, testing whether a particular variable that is the result of a computation
is 0.0 or 1.0 is rarely correct. Instead, one must determine what interval around 0.0 or
1.0 is sufficient to satisfy the criteria at hand and then test for inclusion in that interval.
See, e.g., Example 1.1.
• Inconsistent precision.
The IEEE standard for floating-point arithmetic defined in [226] requires the avail-
ability of at least two different arithmetic precisions. If you wish to obtain a correct
result in the highest precision available, then it is usually imperative that all floating-
point variables and constants are in (at least) that precision. If variables and constants
of different precisions (i.e., different floating-point types) are mixed in an arithmetic
expression, this can result in a loss of accuracy in the result, dropping it to that of the
lowest precision involved.
• Faulty stopping criteria.
See, e.g., Example 1.1.
• Not obviously wrong code.
A code can be wrong but it can still work more or less correctly. That is, the code
may produce results that are acceptable, though they are somewhat less accurate
than expected, or are generated more slowly than expected. This can happen when
small errors in the coding of arithmetic expressions are made. For example, if one
makes a small mistake in coding a Jacobian in Newton's method for solving nonlinear
systems, the program may still converge to the correct solution, but, if it does, it
will do so more slowly than the quadratic convergence that one expects of Newton's
method.
• Not recognizing ill-conditioned problems.
Problems are ill-conditioned when their solutions are highly sensitive to perturba-
tions in the input data. In other words, small changes in the problem to be solved
(such as truncating an input quantity to machine precision) lead to a computational
problem whose exact solution is far away from the solution to the original problem.
Ill-conditioning is an intrinsic property of the problem which is independent of the
method used to solve it. Robust software will recognize ill-conditioned problems and
will alert the user. See Sections 4.3 and 4.4 for illuminating examples.
• Unstable algorithms and regions of instability.
A method is unstable when rounding errors are magnified without bound in the solution
process. Some methods are stable only for certain ranges of their input data. Robust
software will either use stable methods or notify the user when the input is outside
the region of guaranteed stability. See section 4.4.2 for examples.
2.6 Methods of Verification and Validation
In this section we summarize some of the techniques that are used in the verification and
validation of scientific software. Many of these are well known in the field of software
engineering; see [6] and [440], for example. Others are specialized to the unique needs of
scientific software; see [383] for a more complete presentation. We emphasize that none
of these techniques are foolproof. It is rare that correctness of scientific software can be
rigorously demonstrated. Instead, the verification and validation processes provide a series
of techniques, each of which serves to increase our confidence that the software is behaving
in the desired manner.
2.6.1 Code verification
In code verification we seek to determine how faithfully the software is producing the
solution to the computational model, i.e., to its lowest level of specification. In effect, we
are asking whether the code correctly implements the specified numerical procedure. Of
course, the numerical method may be ineffective in solving the target mathematical problem;
that is not the concern at this stage.
Sophisticated software engineering techniques have been developed in recent years to
improve and automate the verification process. Such techniques, known as formal methods,
rely on mathematically rigorous specifications for the expected behavior of software. Given
such a specification, one can (a) prove theorems about the projected program's behavior,
(b) automatically generate much of the code itself, and/or (c) automatically generate tests
for the resulting software system. See [93], for example. Unfortunately, such specifications
are quite difficult to write, especially for large systems. In addition, such specifications do
not cope well with the uncertainties of floating-point arithmetic. Thus, they have rarely
been employed in this context. A notable exception is the formal verification of low-level
floating-point arithmetic functions. See, for example, [197, 198]. Gunnels et al. [184]
employed formal specifications to automatically generate linear algebra kernels. In our
discussion we will concentrate on more traditional code verification techniques. The two
general approaches that we will consider are code analysis and testing.
2.6.1.1 Code analysis
Analysis of computer code is an important method of exposing bugs. Software engineers
have devised a wealth of techniques and tools for analyzing code.
One effective means of detecting errors is to have the code read and understood
by someone else. Many software development organizations use formal code reviews to
uncover misunderstandings in specifications or errors in logic. These are most effectively
done at the component level, i.e., for portions of code that are easier to assimilate by persons
other than the developer.
Static code analysis is another important tool. This refers to automated techniques for
determining properties of a program by inspection of the code itself. Static analyzers study
the flow of control in the code to look for anomalies, such as "dead code" (code that cannot
be executed) or infinite loops. They can also study the flow of data, locating variables that
are never used, variables used before they are set, variables defined multiple times before
their first use, or type mismatches. Each of these are symptoms of more serious logic errors.
Tools for static code analysis are now widely available; indeed, many compilers have options
that provide such analysis.
Dynamic code analysis refers to techniques that subject a code to fine scrutiny as it
is executed. Tools that perform such analysis must first transform the code itself, inserting
additional code to track the behavior of control flow or variables. The resulting "instru-
mented" code is then executed, causing data to be collected as it executes. Analysis of the
data can be done either interactively or as a postprocessing phase. Dynamic code analy-
sis can be used to track the number of times that program units are invoked and the time
spent in each. Changes in individual variables can be traced. Assertions about the state of
the software at any particular point can be inserted and checked at runtime. Interactive
code debuggers, as well as code profiling tools, are now widely available to perform these
tasks.
2.6.1.2 Software testing
Exercising a code by actually performing the task for which it was designed, i.e., testing, is an
indispensable component of software verification. Verification testing requires a detailed
specification of the expected behavior of the software to all of its potential inputs. Tests are
then designed to determine whether this expected behavior is achieved.
Designing test sets can be quite difficult. The tests must span all of the functionality
of the code. To the extent possible, they should also exercise all paths through the code.
Special attention should be paid to provide inputs that are boundary cases or that trigger
error conditions.
Exhaustive testing is rarely practical, however. Statistical design of experiments [45,
317] provides a collection of guiding principles and techniques that comprise a framework
for maximizing the amount of useful information resident in a resulting data set, while
attending to the practical constraints of minimizing the number of experimental runs. Such
techniques have begun to be applied to software testing. Of particular relevance to code
verification are orthogonal fractional factorial designs, as well as covering designs;
see [109].
Because of the challenges in developing good test sets, a variety of techniques has
been developed to evaluate the test sets themselves. For example, dynamic analysis tools can
be used to assess the extent of code coverage provided by tests. Mutation testing is another
valuable technique. Here, faults are randomly inserted into the code under test. Then the
test suite is run, and the ability of the suite to detect the errors is thereby assessed.
Well-designed software is typically composed of a large number of components (e.g.,
procedures) whose inputs and outputs are limited and easier to characterize than the complete
software system. Similarly, one can simplify the software testing process by first testing the
behavior of each of the components in turn. This is called component testing or unit testing.
Of course, the interfaces between components are some of the most error-prone parts of
software systems. Hence, component testing must be followed by extensive integration
testing, which verifies that the combined functionality of the components is correct.
Once a software system attains a certain level of stability, changes to the code are
inevitably made in order to add new functionality or to fix bugs. Such changes run the
risk of introducing new bugs into code that was previously working correctly. Thus, it is
important to maintain a battery of tests which extensively exercise all aspects of the system.
When changes are applied to the code, then these tests are rerun to provide confidence that
all other aspects of the code have not been adversely affected. This is termed regression
testing. In large active software projects it is common to run regression tests on the current
version of the code each night.
Elements of the computing environment itself, such as the operating system, compiler,
number of processors, and floating-point hardware, also have an effect on the behavior of
software. In some cases faulty system software can lead to erroneous behavior. In other
cases errors in the software itself may not be exposed until the environment changes. Thus,
it is important to perform exhaustive tests on software in each environment in which it will
execute. Regression tests are useful for such testing.
Another Random Document on
Scribd Without Any Related Topics
Accuracy And Reliability In Scientific Computing Bo Einarsson
Accuracy And Reliability In Scientific Computing Bo Einarsson
Accuracy And Reliability In Scientific Computing Bo Einarsson
The Project Gutenberg eBook of The Cultural
History of Marlborough, Virginia
This ebook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this ebook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.
Title: The Cultural History of Marlborough, Virginia
Author: C. Malcolm Watkins
Release date: July 16, 2012 [eBook #40255]
Most recently updated: October 23, 2024
Language: English
Credits: Produced by Pat McCoy, Chris Curnow, Joseph Cooper
and the
Online Distributed Proofreading Team at
http://www.pgdp.net
*** START OF THE PROJECT GUTENBERG EBOOK THE CULTURAL
HISTORY OF MARLBOROUGH, VIRGINIA ***
TRANSCRIBER NOTES:
The List of Illustrations on page vi has been added to this project as an aid
to the reader. It does not appear in the original book.
Additional Transcriber Notes can be found at the end of this project
SMITHSONIAN INSTITUTION
UNITED STATES NATIONAL MUSEUM
BULLETIN 253
WASHINGTON, D.C.
1968
The Cultural History
of Marlborough, Virginia
An Archeological and Historical Investigation
of the
Port Town for Stafford County and the
Plantation of John Mercer, Including Data
Supplied by Frank M. Setzler and Oscar H. Darter
C. MALCOLM WATKINS
Curator of Cultural History
Museum of History and Technology
SMITHSONIAN INSTITUTION PRESS
SMITHSONIAN INSTITUTION · WASHINGTON, D.C. · 1968
Publications of the United States National Museum
The scholarly and scientific publications of the United States National
Museum include two series, Proceedings of the United States
National Museum and United States National Museum Bulletin.
In these series, the Museum publishes original articles and
monographs dealing with the collections and work of its constituent
museums—The Museum of Natural History and the Museum of
History and Technology—setting forth newly acquired facts in the
fields of anthropology, biology, history, geology, and technology.
Copies of each publication are distributed to libraries, to cultural and
scientific organizations, and to specialists and others interested in
the different subjects.
The Proceedings, begun in 1878, are intended for the publication, in
separate form, of shorter papers from the Museum of Natural
History. These are gathered in volumes, octavo in size, with the
publication date of each paper recorded in the table of contents of
the volume.
In the Bulletin series, the first of which was issued in 1875, appear
longer, separate publications consisting of monographs (occasionally
in several parts) and volumes in which are collected works on related
subjects. Bulletins are either octavo or quarto in size, depending on
the needs of the presentation. Since 1902 papers relating to the
botanical collections of the Museum of Natural History have been
published in the Bulletin series under the heading Contributions from
the United States National Herbarium, and since 1959, in Bulletins
titled "Contributions from the Museum of History and Technology,"
have been gathered shorter papers relating to the collections and
research of that Museum.
This work forms volume 253 of the Bulletin series.
Frank A. Taylor
Director, United States National
Museum
For sale by the Superintendent of Documents, U.S. Government
Printing Office
Washington, D.C. 20402—Price $3.75
Contents
Preface
History
I. Official port towns in Virginia and origins of Marlborough
II. John Mercer’s occupation of Marlborough, 1726-1730
III. Mercer’s consolidation of Marlborough, 1730-1740
IV. Marlborough at its ascendancy, 1741-1750
V. Mercer and Marlborough, from zenith to decline, 1751-1768
VI. Dissolution of Marlborough
Archeology and Architecture
VII. The site, its problem, and preliminary tests
VIII. Archeological techniques
IX. Wall system
X. Mansion foundation (Structure B)
XI. Kitchen foundation (Structure E)
XII. Supposed smokehouse foundation (Structure F)
XIII. Pits and other structures
XIV. Stafford courthouse south of Potomac Creek
Artifacts
XV. Ceramics
XVI. Glass
XVII. Objects of personal use
XVIII. Metalwork
XIX. Conclusion
General Conclusions
XX. Summary of findings
Appendixes
A. Inventory of George Andrews, Ordinary Keeper
B. Inventory of Peter Beach
C. Charges to account of Mosley Battaley
D. “Domestick Expenses,” 1725
E. John Mercer’s reading, 1726-1732
F. Credit side of John Mercer’s account with Nathaniel Chapman
G. Overwharton Parish account
H. Colonists identified by John Mercer according to occupation
I. Materials listed in accounts with Hunter and Dick, Fredericksburg
J. George Mercer’s expenses while attending college
K. John Mercer’s library
L. Botanical record and prevailing temperatures, 1767
M. Inventory of Marlborough, 1771
Index
List of Illustrations
Figure
John Mercer's Bookplate 1
Survey plates of Marlborough 2
Portrait of John Mercer 3
The Neighborhood of John Mercer 4
King William Courthouse 5
Mother-of-pearl counters 6
John Mercer's Tobacco-cask symbols 7
Wine-bottle seal 8
French horn 9
Hornbook 10
Fireplace mantels 11
Doorways 12
Table-desk 13
Archeological survey plan 14
Portrait of Ann Roy Mercer 15
Advertisement of the services of Mercer's stallion Ranter 16
Page from Maria Sibylla Merian's Metamorphosis Insectorum Surinamensium ofte Verandering der Surinaamsche Insecten 17
Aerial Photograph of Marlborough 18
Highway 621 19
Excavation plan of Marlborough 20
Excavation plan of wall system 21
Looking north 22
Outcropping of stone wall 23
Junction of stone Wall A 24
Looking north in line with Walls A and A-II 25
Wall A-II 26
Junction of Wall A-I 27
Wall E 28
Detail of Gateway in Wall E 29
Wall B-II 30
Wall D 31
Excavation plan of Structure B 32
Site of Structure B 33
Southwest corner of Structure B 34
Southwest corner of Structure B 35
South wall of Structure B 36
Cellar of Structure B 37
Section of red-sandstone arch 38
Helically contoured red-sandstone 39
Cast-concrete block 40
Dressed red-sandstone block 41
Fossil-embedded black sedimentary stone 42
Foundation of porch at north end of Structure B 43
Plan of mansion house 44
The Villa of “the magnificent Lord Leonardo Emo” 45
Excavation plan of Structure E 46
Foundation of Structure E 47
Paved floor of Room X, Structure E 48
North wall of Structure E 49
Wrought-iron slab 50
Excavation plan of structures north of Wall D 51
Structure F 52
Virginia brick from Structure B 53
Structure D 54
Refuse found at exterior corner of Wall A-II and Wall D 55
Excavation plan of Structure H 56
Structure H 57
1743 drawing showing location of Stafford courthouse 58
Enlarged detail from figure 58 59
Excavation plan of Stafford courthouse foundation 60
Hanover courthouse 61
Plan of King William courthouse 62
Tidewater-type pottery 63
Miscellaneous common earthenware types 64
Buckley-type high-fired ware 65
Westerwald stoneware 66
Fine English stoneware 67
English Delftware 68
Delft plate 69
Delft plate 70
Whieldon-type tortoiseshell ware 71
Queensware 72
Fragment of Queensware 73
English white earthenwares 74
Polychrome Chinese porcelain 75
Blue-and-white Chinese porcelain 76
Blue-and-white Chinese porcelain 77
Wine bottle 78
Bottle seals 79
Octagonal spirits bottle 80
Snuff bottle 81
Glassware 82
Small metalwork 83
Personal miscellany 84
Cutlery 85
Metalwork 86
Ironware 87
Iron door and chest hardware 88
Tools 89
Scythe 90
Farm gear 91
Illustration
Front and back cast-concrete block 1 and 2
Iron tie bar 3
Cross section of plaster cornice molding from Structure B 4
Reconstructed wine bottle 5
Fragment of molded white salt-glazed platter 6
Iron bolt 7
Stone scraping tool 8
Indian celt 9
Milk pan 10
Milk pan 11
Ale mug 12
Cover of jar 13
Base of bowl 14
Handle of pot lid or oven door 15
Buff-earthenware cup 16
High-fired earthenware pan rim 17
High-fired earthenware jar rim 18
Rim and base profiles of high-fired earthenware jars 19
Base sherd from unglazed red-earthenware water cooler 20
Rim of an earthenware flowerpot 21
Base of gray-brown, salt-glazed-stoneware ale mug 22
Stoneware jug fragment 23
Gray-salt-glazed-stoneware jar profile 24
Drab-stoneware mug fragment 25
Wheel-turned cover of white, salt-glazed teapot 26
Body sherds of molded, white salt-glaze-ware pitcher 27
English delftware washbowl sherd 28
English delftware plate 29
English delftware plate 30
Delftware ointment pot 31
Sherds of black basaltes ware 32
Blue-and-white Chinese porcelain saucer 33
Blue-and-white Chinese porcelain plate 34
Beverage bottle 35
Beverage-bottle seal 36
Complete beverage bottle 37
Cylindrical beverage bottle 38
Cylindrical beverage bottle 39
Octagonal, pint-size beverage bottle 40
Square gin bottle 41
Square snuff bottle 42
Wineglass, reconstructed 43
Cordial glass 44
Sherds of engraved-glass wine and cordial glasses 45
Clear-glass tumbler 46
Octagonal cut-glass trencher salt 47
Brass buckle 48
Brass knee buckle 49
Brass thimble 50
Chalk bullet mold 51
Fragments of tobacco-pipe bowl 52
White-kaolin tobacco pipe 53
Slate pencil 54
Fragment of long-tined fork 55
Fragment of long-tined fork 56
Fork with two-part handle 57
Trifid-handle pewter spoon 58
Wavy-end pewter spoon 59
Pewter teapot lid 60
Steel scissors 61
Iron candle snuffers 62
Iron butt hinge 63
End of strap hinge 64
Catch for door latch 65
Wrought-iron hasp 66
Brass drop handle 67
Wrought-iron catch or striker 68
Iron slide bolt 69
Series of wrought-iron nails 70
Series of wrought-iron flooring nails and brads 71
Fragment of clouting nail 72
Hand-forged spike 73
Blacksmith's hammer 74
Iron wrench 75
Iron scraping tool 76
Bit or gouge chisel 77
Jeweler's hammer 78
Wrought-iron colter from plow 79
Hook used with wagon 80
Bolt with wingnut 81
Lashing hook from cart 82
Hilling hoe 83
Iron reinforcement strip from back of shovel handle 84
Half of sheep shears 85
Animal trap 86
Iron bridle bit 87
Fishhook 88
Brass strap handle 89
Preface
A number of people participated in the preparation of this study. The
inspiration for the archeological and historical investigations came
from Professor Oscar H. Darter, who until 1960 was chairman of the
Department of Historical and Social Sciences at Mary Washington
College, the women’s branch of the University of Virginia. The actual
excavations were made under the direction of Frank M. Setzler,
formerly the head curator of anthropology at the Smithsonian
Institution. None of the investigation would have been possible had
not the owners of the property permitted the excavations to be
made, sometimes at considerable inconvenience to themselves. I am
indebted to W. Biscoe, Ralph Whitticar, Jr., and Thomas Ashby, all of
whom owned the excavated areas at Marlborough; and T. Ben
Williams, whose cornfield includes the site of the 18th-century
Stafford County courthouse, south of Potomac Creek.
For many years Dr. Darter has been a resident of Fredericksburg
and, in the summers, of Marlborough Point on the Potomac River.
During these years, he has devoted himself to the history of the
Stafford County area which lies between these two locations in
northeastern Virginia. Marlborough Point has interested Dr. Darter
especially since it is the site of one of the Virginia colonial port towns
designated by Act of Assembly in 1691. During the town’s brief
existence, it was the location of the Stafford County courthouse and
the place where the colonial planter and lawyer John Mercer
established his home in 1726. Tangible evidence of colonial activities
at Marlborough Point—in the form of brickbats and potsherds—still
can be seen after each plowing, while John Mercer’s “Land Book,”
examined anew by Dr. Darter, has revealed the original survey plats
of the port town.
In this same period and as early as 1938, Dr. T. Dale Stewart (then
curator of physical anthropology at the Smithsonian Institution) had
commenced excavations at the Indian village site of Patawomecke, a
few hundred yards west of the Marlborough Town site. The
aboriginal backgrounds of the area including Marlborough Point
already had been investigated. As the result of his historical research
connected with this project, Dr. Stewart has contributed
fundamentally to the present undertaking by foreseeing the
excavations of Marlborough Town as a logical step beyond his own
investigation.
Motivated by this combination of interests, circumstances, and
historical clues, Dr. Darter invited the Smithsonian Institution to
participate in an archeological investigation of Marlborough.
Preliminary tests made in August 1954 were sufficiently rewarding to
justify such a project. Consequently, an application for funds was
prepared jointly and was submitted by Dr. Darter through the
University of Virginia to the American Philosophical Society. In
January 1956 grant number 159, Johnson Fund (1955), for $1500
was assigned to the program. In addition, the Smithsonian
Institution contributed the professional services necessary for field
research and directed the purchase of microfilms and photostats, the
drawing of maps and illustrations, and the preparation and
publication of this report. Dr. Darter hospitably provided the use of
his Marlborough Point cottage during the period of excavation, and
Mary Washington College administered the grant. Frank Setzler
directed the excavations during a six-week period in April and May
1956, while interpretation of cultural material and the searches of
historical data related to it were carried out by C. Malcolm Watkins.
At the commencement of archeological work it was expected that
traces of the 17th- and early 18th-century town would be found,
including, perhaps, the foundations of the courthouse. This
expectation was not realized, although what was found from the
Mercer period proved to be of greater importance. After completion,
a report was made in the 1956 Year Book of the American
Philosophical Society (pp. 304-308).
After the 1956 excavations, the question remained whether the
principal foundation (Structure B) might not have been that of the
courthouse. Therefore, in August 1957 a week-long effort was made
to find comparative evidence by digging the site of the succeeding
18th-century Stafford County courthouse at the head of Potomac
Creek. This disclosed a foundation sufficiently different from
Structure B to rule out any analogy between the two.
It should be made clear that—because of the limited size of the
grant—the archeological phase of the investigation was necessarily a
limited survey. Only the more obvious features could be examined
within the means at the project’s disposal. No final conclusions
relative to Structure B, for example, are warranted until the section
of foundation beneath the highway which crosses it can be
excavated. Further excavations need to be made south and
southeast of Structure B and elsewhere in search of outbuildings and
evidence of 17th-century occupancy.
Despite such limitations, this study is a detailed examination of a
segment of colonial Virginia’s plantation culture. It has been
prepared with the hope that it will provide Dr. Darter with essential
material for his area studies and, also, with the wider objective of
increasing the knowledge of the material culture of colonial America.
Appropriate to the function of a museum such as the Smithsonian,
this study is concerned principally with what is concrete—objects
and artifacts and the meanings that are to be derived from them. It
has relied upon the mutually dependent techniques of archeologist
and cultural historian and will serve, it is hoped, as a guide to further
investigations of this sort by historical museums and organizations.
Among the many individuals contributing to this study, I am
especially indebted to Dr. Darter; to the members of the American
Philosophical Society who made the excavations possible; to Dr.
Stewart, who reviewed the archeological sections at each step as
they were written; to Mrs. Sigrid Hull, who drew the line-and-stipple
illustrations which embellish the report; to Edward G. Schumacher of
the Bureau of American Ethnology, who made the archeological
maps and drawings; Jack Scott of the Smithsonian photographic
laboratory, who photographed the artifacts; and George Harrison
Sanford King of Fredericksburg, from whom the necessary
documentation for the 18th-century courthouse site was obtained.
I am grateful also to Dr. Anthony N. B. Garvan, professor of
American civilization at the University of Pennsylvania and former
head curator of the Smithsonian Institution’s department of civil
history, for invaluable encouragement and advice; and to Worth
Bailey formerly with the Historic American Buildings Survey, for many
ideas, suggestions, and important identifications of craftsmen listed
in Mercer’s ledgers.
I am equally indebted to Ivor Noël Hume, director of archeology at
Colonial Williamsburg and an honorary research associate of the
Smithsonian Institution, for his assistance in the identification of
artifacts; to Mrs. Mabel Niemeyer, librarian of the Bucks County
Historical Society, for her cooperation in making the Mercer ledgers
available for this report; to Donald E. Roy, librarian of the Darlington
Library, University of Pittsburgh, for providing the invaluable clue
that directed me to the ledgers; to the staffs of the Virginia State
Library and the Alexandria Library for repeated courtesies and
cooperation; and to Miss Rodris Roth, associate curator of cultural
history at the Smithsonian, for detecting Thomas Oliver’s inventory
of Marlborough in a least suspected source.
I greatly appreciate receiving generous permissions from the
University of Pittsburgh Press to quote extensively from the George
Mercer Papers Relating to the Ohio Company of Virginia, and from
Russell & Russell to copy Thomas Oliver’s inventory of Marlborough.
To all of these people and to the countless others who contributed in
one way or another to the completion of this study, I offer my
grateful thanks.
C. Malcolm Watkins
Washington, D.C.
1967
The Cultural History
of
Marlborough, Virginia
Figure 1.—John Mercer’s
bookplate.
SOFTWARE • ENVIRONMENTS • TOOLS

The series includes handbooks and software guides as well as monographs on practical implementation of computational methods, environments, and tools. The focus is on making recent developments available in a practical format to researchers and other users of these methods and tools.

Editor-in-Chief
Jack J. Dongarra, University of Tennessee and Oak Ridge National Laboratory

Editorial Board
James W. Demmel, University of California, Berkeley
Dennis Gannon, Indiana University
Eric Grosse, AT&T Bell Laboratories
Ken Kennedy, Rice University
Jorge J. Moré, Argonne National Laboratory

Software, Environments, and Tools
Bo Einarsson, editor, Accuracy and Reliability in Scientific Computing
Michael W. Berry and Murray Browne, Understanding Search Engines: Mathematical Modeling and Text Retrieval, Second Edition
Craig C. Douglas, Gundolf Haase, and Ulrich Langer, A Tutorial on Elliptic PDE Solvers and Their Parallelization
Louis Komzsik, The Lanczos Method: Evolution and Application
Bard Ermentrout, Simulating, Analyzing, and Animating Dynamical Systems: A Guide to XPPAUT for Researchers and Students
V. A. Barker, L. S. Blackford, J. Dongarra, J. Du Croz, S. Hammarling, M. Marinova, J. Wasniewski, and P. Yalamov, LAPACK95 Users' Guide
Stefan Goedecker and Adolfy Hoisie, Performance Optimization of Numerically Intensive Codes
Zhaojun Bai, James Demmel, Jack Dongarra, Axel Ruhe, and Henk van der Vorst, Templates for the Solution of Algebraic Eigenvalue Problems: A Practical Guide
Lloyd N. Trefethen, Spectral Methods in MATLAB
E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users' Guide, Third Edition
Michael W. Berry and Murray Browne, Understanding Search Engines: Mathematical Modeling and Text Retrieval
Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk A. van der Vorst, Numerical Linear Algebra for High-Performance Computers
R. B. Lehoucq, D. C. Sorensen, and C. Yang, ARPACK Users' Guide: Solution of Large-Scale Eigenvalue Problems with Implicitly Restarted Arnoldi Methods
Randolph E. Bank, PLTMG: A Software Package for Solving Elliptic Partial Differential Equations, Users' Guide 8.0
L. S. Blackford, J. Choi, A. Cleary, E. D'Azevedo, J. Demmel, I. Dhillon, J. Dongarra, S. Hammarling, G. Henry, A. Petitet, K. Stanley, D. Walker, and R. C. Whaley, ScaLAPACK Users' Guide
Greg Astfalk, editor, Applications on Advanced Architecture Computers
Françoise Chaitin-Chatelin and Valérie Frayssé, Lectures on Finite Precision Computations
Roger W. Hockney, The Science of Computer Benchmarking
Richard Barrett, Michael Berry, Tony F. Chan, James Demmel, June Donato, Jack Dongarra, Victor Eijkhout, Roldan Pozo, Charles Romine, and Henk van der Vorst, Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods
E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov, and D. Sorensen, LAPACK Users' Guide, Second Edition
Jack J. Dongarra, Iain S. Duff, Danny C. Sorensen, and Henk van der Vorst, Solving Linear Systems on Vector and Shared Memory Computers
J. J. Dongarra, J. R. Bunch, C. B. Moler, and G. W. Stewart, LINPACK Users' Guide
Accuracy and Reliability in Scientific Computing

Edited by Bo Einarsson
Linköping University
Linköping, Sweden

SIAM, Society for Industrial and Applied Mathematics
Philadelphia
Copyright © 2005 by the Society for Industrial and Applied Mathematics.

10 9 8 7 6 5 4 3 2 1

All rights reserved. Printed in the United States of America. No part of this book may be reproduced, stored, or transmitted in any manner without the written permission of the publisher. For information, write to the Society for Industrial and Applied Mathematics, 3600 University City Science Center, Philadelphia, PA 19104-2688.

MATLAB® is a registered trademark of The MathWorks, Inc. For MATLAB® product information, please contact The MathWorks, Inc., 3 Apple Hill Drive, Natick, MA 01760-2098 USA, 508-647-7000, Fax: 508-647-7101, info@mathworks.com, www.mathworks.com/

Trademarked names may be used in this book without the inclusion of a trademark symbol. These names are used in an editorial context only; no infringement of trademark is intended.

Library of Congress Cataloging-in-Publication Data

Accuracy and reliability in scientific computing / edited by Bo Einarsson.
p. cm. — (Software, environments, tools)
Includes bibliographical references and index.
ISBN 0-89871-584-9 (pbk.)
1. Science—Data processing. 2. Reliability (Engineering)—Mathematics. 3. Computer programs—Correctness—Scientific applications. 4. Software productivity—Scientific applications. I. Einarsson, Bo, 1939- II. Series.
Q183.9.A28 2005
502'.85-dc22
2005047019

SIAM is a registered trademark.
  • 10. Contents List of Contributors List of Figures List of Tables Preface I PITFALLS IN NUMERICAL COMPUTATION 1 What Can GoWrong in Scientific Computing? Bo Einarsson 1.1 Introduction 1.2 Basic Problems in Numerical Computation 1.2.1 Rounding 1.2.2 Cancellation 1.2.3 Recursion 1.2.4 Integer overflow 1.3 Floating-point Arithmetic 1.3.1 Initial work on an arithmetic standard 1.3.2 IEEE floating-point representation 1.3.3 Future standards 1.4 What Really WentWrong in Applied Scientific Computing! . . . 1.4.1 Floating-point precision 1.4.2 Illegal conversion between data types 1.4.3 Illegal data 1.4.4 Inaccurate finite element analysis 1.4.5 Incomplete analysis . .... 1.5 Conclusion 2 Assessment ofAccuracy and Reliability Ronald F.Boisvert, Ronald Cools, Bo Einarsson 2.1 Models of Scientific Computing 2.2 Verification and Validation xiii XV xix xxi 1 3 3 3 4 4 5 6 6 6 7 9 9 9 11 11 11 12 12 13 13 15 V
  • 11. vi Contents 3 2.3 Errors in Software 17 2.4 2.5 2.6 Precision, Accuracy, and Reliability Numerical Pitfalls Leading to Anomalous Program Behavior Methods of Verificationand Validation 2.6.1 Code verification 2.6.2 Sources of test problems for numerical software 2.6.3 Solution verification 2.6.4 Validation 20 22 23 24 26 28 31 Approximating Integrals, Estimating Errors, and Giving theWrong Solution for a Deceptively Easy Problem 33 Ronald Cools 4 3.1 3.2 3.3 3.4 3.5 3.6 3.7 3.8 3.9 Introduction The Given Problem The First Correct Answer View Behind the Curtain A More Convenient Solution Estimating Errors: Phase 1 Estimating Errors: Phase 2 The More Convenient Solution Revisited Epilogue An Introduction to the Quality of Computed Solutions 33 34 34 36 36 38 40 40 41 43 Sven Hammarling 5 4.1 4.2 4.3 4.4 4.5 4.6 4.7 4.8 4.9 Introduction Floating-point Numbers and IEEE Arithmetic Why WorryAbout Computed Solutions? Condition, Stability, and Error Analysis 4.4.1 Condition 4.4.2 Stability 4.4.3 Error analysis Floating-point Error Analysis Posing the Mathematical Problem Error Bounds and Software Other Approaches Summary Qualitative Computing 43 44 46 50 50 56 60 64 70 71 74 75 77 Franqoise Chaitin-Chatelin, Elisabeth Traviesas-Cassan 5.1 5.2 Introduction Numbers as Building Blocks for Computation 5.2.1 Thinking the unthinkable 5.2.2 Breaking the rule 5.2.3 Hypercomputation inductively defined by multiplication . 5.2.4 The Newcomb—Borel paradox 5.2.5 Effective calculability 77 77 78 78 78 79 80
  5.3 Exact Versus Inexact Computing
    5.3.1 What is calculation?
    5.3.2 Exact and inexact computing
    5.3.3 Computer arithmetic
    5.3.4 Singularities in exact and inexact computing
    5.3.5 Homotopic deviation
    5.3.6 The map z → ρ(Fz)
    5.3.7 Graphical illustration
  5.4 Numerical Software
    5.4.1 Local error analysis in finite precision computations
    5.4.2 Homotopic deviation versus normwise perturbation
    5.4.3 Application to Krylov-type methods
  5.5 The Levy Law of Large Numbers for Computation
  5.6 Summary

II DIAGNOSTIC TOOLS

6 PRECISE and the Quality of Reliable Numerical Software (Françoise Chaitin-Chatelin, Elisabeth Traviesas-Cassan)
  6.1 Introduction
  6.2 Reliability of Algorithms
  6.3 Backward Error Analysis
    6.3.1 Consequence of limited accuracy of data
    6.3.2 Quality of reliable software
  6.4 Finite Precision Computations at a Glance
  6.5 What Is PRECISE?
    6.5.1 Perturbation values
    6.5.2 Perturbation types
    6.5.3 Data metrics
    6.5.4 Data to be perturbed
    6.5.5 Choosing a perturbation model
  6.6 Implementation Issues
  6.7 Industrial Use of PRECISE
  6.8 PRECISE in Academic Research
  6.9 Conclusion

7 Tools for the Verification of Approximate Solutions to Differential Equations (Wayne H. Enright)
  7.1 Introduction
    7.1.1 Motivation and overview
  7.2 Characteristics of a PSE
  7.3 Verification Tools for Use with an ODE Solver
  7.4 Two Examples of Use of These Tools
  7.5 Discussion and Future Extensions
III TECHNOLOGY FOR IMPROVING ACCURACY AND RELIABILITY

8 General Methods for Implementing Reliable and Correct Software (Bo Einarsson)
  8.1 Ada (Brian Wichmann, Kenneth W. Dritz)
    8.1.1 Introduction and language features
    8.1.2 The libraries
    8.1.3 The Numerics Annex
    8.1.4 Other issues
    8.1.5 Conclusions
  8.2 C (Craig C. Douglas, Hans Petter Langtangen)
    8.2.1 Introduction
    8.2.2 Language features
    8.2.3 Standardized preprocessor, error handling, and debugging
    8.2.4 Numerical oddities and math libraries
    8.2.5 Calling Fortran libraries
    8.2.6 Array layouts
    8.2.7 Dynamic data and pointers
    8.2.8 Data structures
    8.2.9 Performance issues
  8.3 C++ (Craig C. Douglas, Hans Petter Langtangen)
    8.3.1 Introduction
    8.3.2 Basic language features
    8.3.3 Special features
    8.3.4 Error handling and debugging
    8.3.5 Math libraries
    8.3.6 Array layouts
    8.3.7 Dynamic data
    8.3.8 User-defined data structures
    8.3.9 Programming styles
    8.3.10 Performance issues
  8.4 Fortran (Van Snyder)
    8.4.1 Introduction
    8.4.2 History of Fortran
    8.4.3 Major features of Fortran 95
    8.4.4 Features of Fortran 2003
    8.4.5 Beyond Fortran 2003
    8.4.6 Conclusion
  8.5 Java (Ronald F. Boisvert, Roldan Pozo)
    8.5.1 Introduction
    8.5.2 Language features
    8.5.3 Portability in the Java environment
    8.5.4 Performance challenges
    8.5.5 Performance results
    8.5.6 Other difficulties encountered in scientific programming in Java
    8.5.7 Summary
  8.6 Python (Craig C. Douglas, Hans Petter Langtangen)
    8.6.1 Introduction
    8.6.2 Basic language features
    8.6.3 Special features
    8.6.4 Error handling and debugging
    8.6.5 Math libraries
    8.6.6 Array layouts
    8.6.7 Dynamic data
    8.6.8 User defined data structures
    8.6.9 Programming styles
    8.6.10 Performance issues

9 The Use and Implementation of Interval Data Types (G. William Walster)
  9.1 Introduction
  9.2 Intervals and Interval Arithmetic
    9.2.1 Intervals
    9.2.2 Interval arithmetic
  9.3 Interval Arithmetic Utility
    9.3.1 Fallible measures
    9.3.2 Enter interval arithmetic
  9.4 The Path to Intrinsic Compiler Support
    9.4.1 Interval-specific operators and intrinsic functions
    9.4.2 Quality of implementation opportunities
  9.5 Fortran Code Example
  9.6 Fortran Standard Implications
    9.6.1 The interval-specific alternative
    9.6.2 The enriched module alternative
  9.7 Conclusions

10 Computer-assisted Proofs and Self-validating Methods (Siegfried M. Rump)
  10.1 Introduction
  10.2 Proofs and Computers
  10.3 Arithmetical Issues
  10.4 Computer Algebra Versus Self-validating Methods
  10.5 Interval Arithmetic
  10.6 Directed Roundings
  10.7 A Common Misconception About Interval Arithmetic
  10.8 Self-validating Methods and INTLAB
  10.9 Implementation of Interval Arithmetic
  10.10 Performance and Accuracy
  10.11 Uncertain Parameters
  10.12 Conclusion

11 Hardware-assisted Algorithms (Craig C. Douglas, Hans Petter Langtangen)
  11.1 Introduction
  11.2 A Teaser
  11.3 Processor and Memory Subsystem Organization
  11.4 Cache Conflicts and Trashing
  11.5 Prefetching
  11.6 Pipelining and Loop Unrolling
  11.7 Padding and Data Reorganization
  11.8 Loop Fusion
  11.9 Bitwise Compatibility
  11.10 Useful Tools

12 Issues in Accurate and Reliable Use of Parallel Computing in Numerical Programs (William D. Gropp)
  12.1 Introduction
  12.2 Special Features of Parallel Computers
    12.2.1 The major programming models
    12.2.2 Overview
  12.3 Impact on the Choice of Algorithm
    12.3.1 Consequences of latency
    12.3.2 Consequences of blocking
  12.4 Implementation Issues
    12.4.1 Races
    12.4.2 Out-of-order execution
    12.4.3 Message buffering
    12.4.4 Nonblocking and asynchronous operations
    12.4.5 Hardware errors
    12.4.6 Heterogeneous parallel systems
  12.5 Conclusions and Recommendations

13 Software-reliability Engineering of Numerical Systems (Mladen A. Vouk)
  13.1 Introduction
  13.2 About SRE
  13.3 Basic Terms
  13.4 Metrics and Models
    13.4.1 Reliability
    13.4.2 Availability
  13.5 General Practice
    13.5.1 Verification and validation
    13.5.2 Operational profile
    13.5.3 Testing
    13.5.4 Software process control
  13.6 Numerical Software
    13.6.1 Acceptance testing
    13.6.2 External consistency checking
    13.6.3 Automatic verification of numerical precision (error propagation control)
    13.6.4 Redundancy-based techniques
  13.7 Basic Fault-tolerance Techniques
    13.7.1 Check-pointing and exception handling
    13.7.2 Recovery through redundancy
    13.7.3 Advanced techniques
    13.7.4 Reliability and performance
  13.8 Summary

Bibliography
Index
List of Contributors

Ronald F. Boisvert
Mathematical and Computational Sciences Division, National Institute of Standards and Technology (NIST), Mail Stop 8910, Gaithersburg, MD 20899, USA, email: boisvert@nist.gov

Françoise Chaitin-Chatelin
Université Toulouse 1 and CERFACS (Centre Européen de Recherche et de Formation Avancée en Calcul Scientifique), 42 av. G. Coriolis, FR-31057 Toulouse Cedex, France, email: chatelin@cerfacs.fr

Ronald Cools
Department of Computer Science, Katholieke Universiteit Leuven, Celestijnenlaan 200A, B-3001 Heverlee, Belgium, email: Ronald.Cools@cs.kuleuven.be

Craig C. Douglas
Center for Computational Sciences, University of Kentucky, Lexington, Kentucky 40506-0045, USA, email: douglas@ccs.uky.edu

Kenneth W. Dritz
Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA, email: dritz@anl.gov

Bo Einarsson
National Supercomputer Centre and the Mathematics Department, Linköpings universitet, SE-581 83 Linköping, Sweden, email: boein@nsc.liu.se

Wayne H. Enright
Department of Computer Science, University of Toronto, Toronto, Canada M5S 3G4, email: enright@cs.utoronto.ca

William D. Gropp
Argonne National Laboratory, 9700 South Cass Avenue, Argonne, Illinois 60439, USA, email: gropp@mcs.anl.gov

Sven Hammarling
The Numerical Algorithms Group Ltd, Wilkinson House, Jordan Hill Road, Oxford OX2 8DR, England, email: sven@nag.co.uk

Hans Petter Langtangen
Institutt for informatikk, Universitetet i Oslo, Box 1072 Blindern, NO-0316 Oslo, and Simula Research Laboratory, NO-1325 Lysaker, Norway, email: hpl@simula.no

Roldan Pozo
Mathematical and Computational Sciences Division, National Institute of Standards and Technology (NIST), Mail Stop 8910, Gaithersburg, MD 20899, USA, email: boisvert@nist.gov
Siegfried M. Rump
Institut für Informatik III, Technische Universität Hamburg-Harburg, Schwarzenbergstrasse 95, DE-21071 Hamburg, Germany, email: rump@tu-harburg.de

Van Snyder
Jet Propulsion Laboratory, 4800 Oak Grove Drive, Mail Stop 183-701, Pasadena, CA 91109, USA, email: van.snyder@jpl.nasa.gov

Elisabeth Traviesas-Cassan
CERFACS, Toulouse, France. Presently at TRANSICIEL Technologies, FR-31025 Toulouse Cedex, France, email: ecassan@mail.transiciel.com

Mladen A. Vouk
Department of Computer Science, Box 8206, North Carolina State University, Raleigh, NC 27695, USA, email: vouk@csc.ncsu.edu

G. William Walster
Sun Microsystems Laboratories, 16 Network Circle, MS UMPK16-160, Menlo Park, CA 94025, USA, email: bill.walster@sun.com

Brian Wichmann
Retired, email: Brian.Wichmann@bcs.org.uk
List of Figures

2.1  A model of computational modeling
2.2  A model of computational validation
3.1  f(x) for several values of the parameter
3.2  Error as a function of N for parameter values 0.5, 0.75, 1, 1.2, 1.3, and 1.5
3.3  I[f], Q4[f], Q6[f], Q8[f]
3.4  Reliability of the primitive error estimator
3.5  Naive automatic integration with requested error = 0.05
4.1  Floating-point number example
4.2  Hypotenuse of a right angled triangle
4.3  Cubic equation example
4.4  Integral example
4.5  Linear equations example
4.6  Stable ODE example
4.7  Unstable ODE example
4.8  James Hardy Wilkinson (1919-1986)
5.1  The map for the matrix One in three dimensions
5.2  Map z → ρ(E(A - zI)^(-1)), A = B - E, with B = Venice and E is rank 1
5.3  Map y : z → ..., A = B - E, with B = Venice
7.1  Approximate solution for predator-prey problem
7.2  Approximate solution for Lorenz problem
7.3  ETOL(x) with low tolerance
7.4  ETOL(x) with medium tolerance
7.5  ETOL(x) with high tolerance
7.6  Check 1, Check 2, and Check 4 results for low tolerance
7.7  Check 1, Check 2, and Check 4 results for medium tolerance
7.8  Check 1, Check 2, and Check 4 results for high tolerance
7.9  Check 3 results for low tolerance
7.10 Check 3 results for medium tolerance
7.11 Check 3 results for high tolerance
8.1  Java Scimark performance on a 333 MHz Sun Ultra 10 system using successive releases of the JVM
8.2  Performance of Scimark component benchmarks in C and Java on a 500 MHz Pentium III system
10.1  IBM prime letterhead
10.2  University of Illinois postmark
10.3  Pocket calculators with 8 decimal digits, no exponent
10.4  Point matrix times interval vector
10.5  INTLAB program to check |I - RA|·x < x and therefore the nonsingularity of A
10.6  INTLAB program for checking nonsingularity of an interval matrix
10.7  Naive interval Gaussian elimination: growth of rad(U_ii)
10.8  Plot of the function in equation (10.8) with 20 meshpoints in x- and y-directions
10.9  Plot of the function in equation (10.8) with 50 meshpoints in x- and y-directions
10.10 INTLAB program for the inclusion of the determinant of a matrix
10.11 Comparison of a naive interval algorithm and a self-validating method
10.12 INTLAB program of the function in equation (10.9) with special treatment of ...
10.13 INTLAB results of verifynlss for Broyden's function (10.9)
10.14 INTLAB example of quadratic convergence
10.15 Dot product
10.16 Dot product with unrolled loop
10.17 ijk-loop for matrix multiplication
10.18 Top-down approach for point matrix times interval matrix
10.19 Improved code for point matrix times interval matrix
10.20 INTLAB program to convert inf-sup to mid-rad representation
10.21 INTLAB program for point matrix times interval matrix
10.22 INTLAB program for interval matrix times interval matrix
10.23 Inclusion as computed by verifylss for the example in Table 10.12
10.24 Inner and outer inclusions and true solution set for the linear system with tolerances in Table 10.12
10.25 Monte Carlo approach to estimate the solution set of a parameterized linear system
10.26 Result of Monte Carlo approach as in the program in Figure 10.12 for 100 x 100 random linear system with tolerances, projection of 1st and 2nd component of solution
10.27 Inner and outer inclusions of symmetric solution set for the example in Table 10.12
10.28 Nonzero elements of the matrix from Harwell/Boeing BCSSTK15
10.29 Approximation of eigenvalues of the matrix in Table 10.14 computed by MATLAB
11.1 Refinement patterns
11.2 An unstructured grid
12.1 Two orderings for summing 4 values. Shown in (a) is the ordering typically used by parallel algorithms. Shown in (b) is the natural "do loop" ordering
13.1 Empirical and modeled failure intensity
13.2 Observed and modeled cumulative failures
13.3 Empirical and modeled intensity profile obtained during an early testing phase
13.4 Empirical and modeled failures obtained during an early testing phase
13.5 Field recovery and failure rates for a telecommunications product
13.6 Unavailability fit using LPET and constant repair rate with data up to "cut-off point" only
13.7 Fraction of shipped defects for two ideal testing strategies based on sampling with and without replacement, and a nonideal testing under schedule and resource constraints
13.8 Illustration of failure space for three functionally equivalent programs
13.9 Illustration of comparison events
List of Tables

2.1  Sources of test problems for mathematical software
3.1  Some numerical results
3.2  Where the maximum appears to be
3.3  Location of maximum by combination of routines
4.1  IEEE 754 arithmetic formats
4.2  Forward recurrence for y_n
4.3  Asymptotic error bounds for Ax = λx
5.1  Properties of R(t, z) as a function of t and z
6.1  Unit roundoff u for IEEE 754 standard floating-point arithmetic
6.2  General scheme for generating data for a backward error analysis
7.1  Check 1 results
7.2  Check 2 results
7.3  Check 3 results
7.4  Check 4 results
9.1  Set-theoretic interval operators
9.2  Interval-specific intrinsic functions
10.1  Exponential growth of radii of forward substitution in equation (10.7) in naive interval arithmetic
10.2  Relative performance without and with unrolled loops
10.3  Relative performance for different methods for matrix multiplication
10.4  Performance of LAPACK routines using Intel MKL
10.5  Performance of ATLAS routines
10.6  Performance of algorithms in Figures 10.18 and 10.19 for point matrix times interval matrix
10.7  Relative performance of MATLAB matrix multiplication with interpretation overhead
10.8  Performance of interval matrix times interval matrix
10.9  Measured computing time for linear system solver
10.10 MATLAB solution of a linear system derived from (10.12) for n = 70 and T = 40
10.11 INTLAB example of verifynlss for Broyden's function (10.9) with uncertain parameters
10.12 INTLAB example of verifylss for a linear system with uncertain data
10.13 Computing time without and with verification
10.14 Matrix with ill-conditioned eigenvalues
11.1  Speedups using a cache aware multigrid algorithm on adaptively refined structured grids
13.1  Hypothetical telephone switch
13.2  User breakdown of the profile
13.3  Mode decomposition
13.4  Illustration of parameter values
13.5  Pairwise test cases
13.6  Example of standard deviation computation
Preface

Much of the software available today is poorly written, inadequate in its facilities, and altogether a number of years behind the most advanced state of the art.
(Professor Maurice V. Wilkes, September 1973.)

Scientific software is central to our computerized society. It is used to design airplanes and bridges, to operate manufacturing lines, to control power plants and refineries, to analyze financial derivatives, to map genomes, and to provide the understanding necessary for the diagnosis and treatment of cancer. Because of the high stakes involved, it is essential that the software be accurate and reliable.

Unfortunately, developing accurate and reliable scientific software is notoriously difficult, and Maurice Wilkes' assessment of 1973 still rings true today. Not only is scientific software beset with all the well-known problems affecting software development in general, it must cope with the special challenges of numerical computation. Approximations occur at all levels. Continuous functions are replaced by discretized versions. Infinite processes are replaced by finite ones. Real numbers are replaced by finite precision numbers. As a result, errors are built into the mathematical fabric of scientific software which cannot be avoided. At best they can be judiciously managed. The nature of these errors, and how they are propagated, must be understood if the resulting software is to be accurate and reliable.

The objective of this book is to investigate the nature of some of these difficulties, and to provide some insight into how to overcome them. The book is divided into three parts.

1. Pitfalls in Numerical Computation. We first illustrate some of the difficulties in producing robust and reliable scientific software. Well-known cases of failure by scientific software are reviewed, and the "what" and "why" of numerical computations are considered.

2. Diagnostic Tools.
We next describe tools that can be used to assess the accuracy and reliability of existing scientific applications. Such tools do not necessarily improve results, but they can be used to increase one's confidence in their validity.

3. Technology for Improving Accuracy and Reliability. We describe a variety of techniques that can be employed to improve the accuracy and reliability of newly developed scientific applications. In particular, we consider the effect of the choice of programming language, underlying hardware, and the parallel
computing environment. We provide a description of interval data types and their application to validated computations.

This book has been produced by the International Federation for Information Processing (IFIP) Working Group 2.5. An arm of the IFIP Technical Committee 2 on Software Practice and Experience, WG 2.5 seeks to improve the quality of numerical computation by promoting the development and availability of sound numerical software. WG 2.5 has been fortunate to be able to assemble a set of contributions from authors with a wealth of experience in the development and assessment of numerical software. The following WG 2.5 members participated in this project: Ronald Boisvert, Françoise Chaitin-Chatelin, Ronald Cools, Craig Douglas, Bo Einarsson, Wayne Enright, Patrick Gaffney, Ian Gladwell, William Gropp, Jim Pool, Siegfried Rump, Brian Smith, Van Snyder, Michael Thune, Mladen Vouk, and Wolfgang Walter. Additional contributions were made by Kenneth W. Dritz, Sven Hammarling, Hans Petter Langtangen, Roldan Pozo, Elisabeth Traviesas-Cassan, Bill Walster, and Brian Wichmann. The volume was edited by Bo Einarsson.

Several of the contributions have been presented at other venues in somewhat different forms. Chapter 1 was presented at the Workshop on Scientific Computing and the Computational Sciences, May 28-29, 2001, in Amsterdam, The Netherlands. Chapter 5 was presented at the IFIP Working Group 2.5 Meeting, May 26-27, 2001, in Amsterdam, The Netherlands. Four of the chapters (6, 7, 10, and 13) are based on lectures presented at the SIAM Minisymposium on Accuracy and Reliability in Scientific Computing held July 9, 2001, in San Diego, California. Chapter 10 was also presented at the Annual Conference of Japan SIAM at Kyushu University, Fukuoka, October 7-9, 2001.

The book has an accompanying web site, http://www.nsc.liu.se/wg25/book/, with updates, codes, links, color versions of some of the illustrations, and additional material.
A problem with references to links on the internet is that, as Diomidis Spinellis has shown in [422], the half-life of a referenced URL is approximately four years from its publication date. The accompanying website will contain updated links.

A number of trademarked products are identified in this book. Java, Java HotSpot, and SUN are trademarks of Sun Microsystems, Inc. Pentium and Itanium are trademarks of Intel. PowerPC is a trademark of IBM. Microsoft Windows is a trademark of Microsoft. Apple is a trademark of Apple Computer, Inc. NAG is a trademark of The Numerical Algorithms Group, Ltd. MATLAB is a trademark of The MathWorks, Inc.

While we expect that developing accurate and reliable scientific software will remain a challenging enterprise for some time to come, we believe that techniques and tools are now beginning to emerge to improve the process. If this volume aids in the recognition of the problems and helps point developers in the direction of solutions, then this volume will have been a success.

Linköping and Gaithersburg, September 15, 2004.

Bo Einarsson, Project Leader
Ronald F. Boisvert, Chair, IFIP Working Group 2.5
Acknowledgment

As editor I wish to thank the contributors and the additional project members for supporting the project through submitting and refereeing the chapters. I am also very grateful to the anonymous reviewers who did a marvelous job, gave constructive criticism and much appreciated comments and suggestions. The final book did benefit quite a lot from their work!

I also thank the National Supercomputer Centre and the Mathematics Department of Linköpings universitet for supporting the project.

Working with SIAM on the publication of this book was a pleasure. Special thanks go to Simon Dickey, Elizabeth Greenspan, Ann Manning Allen, and Linda Thiel, for all their assistance.

Bo Einarsson
Part I

PITFALLS IN NUMERICAL COMPUTATION
Chapter 1

What Can Go Wrong in Scientific Computing?

Bo Einarsson

1.1 Introduction

Numerical software is central to our computerized society. It is used, for example, to design airplanes and bridges, to operate manufacturing lines, to control power plants and refineries, to analyze financial derivatives, to determine genomes, and to provide the understanding necessary for the treatment of cancer. Because of the high stakes involved, it is essential that the software be accurate, reliable, and robust.

A report [385] written for the National Institute of Standards and Technology (NIST) states that software bugs cost the U.S. economy about $60 billion each year, and that more than a third of that cost, or $22 billion, could be eliminated by improved testing. Note that these figures apply to software in general, not only to scientific software. An article [18] stresses that computer system failures are usually rooted in human error rather than technology. The article gives a few examples: delivery problems at a computer company, radio system outage for air traffic control, delayed financial aid. Many companies are now working on reducing the complexity of testing but are still requiring it to be robust.

The objective of this book is to investigate some of the difficulties related to scientific computing, such as accuracy requirements and rounding, and to provide insight into how to obtain accurate and reliable results.

This chapter serves as an introduction and consists of three sections. In Section 1.2 we discuss some basic problems in numerical computation, like rounding, cancellation, and recursion. In Section 1.3 we discuss implementation of real arithmetic on computers, and in Section 1.4 we discuss some cases where unreliable computations have caused loss of life or property.

1.2 Basic Problems in Numerical Computation

Some illustrative examples are given in this section, but further examples and discussion can be found in Chapter 4.
Two problems in numerical computation are that often the input values are not known exactly, and that some of the calculations cannot be performed exactly. The errors obtained can cooperate in later calculations, causing an error growth, which may be quite large. Rounding is the cause of an error, while cancellation increases its effect and recursion may cause a build-up of the final error.

1.2.1 Rounding

The calculations are usually performed with a certain fixed number of significant digits, so after each operation the result usually has to be rounded, introducing a rounding error whose modulus in the optimal case is less than or equal to half a unit in the last digit. At the next computation this rounding error has to be taken into account, as well as a new rounding error. The propagation of the rounding error is therefore quite complex.

Example 1.1 (Rounding) Consider the following MATLAB code for advancing from a to b with the step h = (b - a)/n.

    function step(a,b,n)
    % step from a to b with n steps
    h = (b-a)/n;
    x = a;
    disp(x)
    while x < b,
        x = x + h;
        disp(x)
    end

We get one step too many with a = 1, b = 2, and n = 3, but the correct number of steps with b = 1.1. In the first case, because of the rounding downward of h = 1/3, after three steps we are almost but not quite at b, and therefore the loop continues. In the second case b is also an inexact number on a binary computer, and the inexact values of x and b happen to compare as wanted.

It is advisable to let such a loop work with an integer variable instead of a real variable. If real variables are used it is advisable to replace while x < b with while x < b-h/2. The example was run in IEEE 754 double precision, discussed in Section 1.3.2. In another precision a different result may be obtained!

1.2.2 Cancellation

Cancellation occurs from the subtraction of two almost equal quantities. Assume x1 = 1.243 ± 0.0005 and x2 = 1.234 ± 0.0005.
We then obtain x1 - x2 = 0.009 ± 0.001, a result where several significant leading digits have been lost, resulting in a large relative error!

Example 1.2 (Quadratic equation) The roots of the equation ax^2 + bx + c = 0 are given
by the following mathematically, but not numerically, equivalent expressions:

    x1 = (-b + sqrt(b^2 - 4ac)) / (2a),    x2 = (-b - sqrt(b^2 - 4ac)) / (2a)

and

    x1' = 2c / (-b - sqrt(b^2 - 4ac)),     x2' = 2c / (-b + sqrt(b^2 - 4ac)).

Using IEEE 754 single precision and a = 1.0·10^-5, b = 1.0·10^3, and c = 1.0·10^3 we get x1 = -3.0518, x2 = -1.0000·10^8, x1' = -1.0000, and x2' = -3.2768·10^7. We thus get two very different sets of roots for the equation! The reason is that since b^2 is much larger than |4ac| the square root will get a value very close to |b|, and when the subtraction of two almost equal values is performed the error in the square root evaluation will dominate. In double precision the value of the square root of 10^6 - 0.04 is 999.9999799999998, which is very close to b = 1000. The two correct roots in this case are x2 and x1', one from each set, for which there is addition of quantities of the same sign, and no cancellation occurs.

Example 1.3 (Exponential function) The exponential function e^x can be evaluated using the MacLaurin series expansion. This works reasonably well for x > 0 but not for x < -3, where the expansion terms a_n will alternate in sign and the modulus of the terms will increase until n exceeds |x|. Even for moderate values of x the cancellation can be so severe that a negative value of the function is obtained!

Using double (or multiple) precision is not the cure for cancellation, but switching to another algorithm may help. In order to avoid cancellation in Example 1.2 we let the sign of b decide which formula to use, and in Example 1.3 we use the relation e^-x = 1/e^x.

1.2.3 Recursion

A common method in scientific computing is to calculate a new entity based on the previous one, and continuing in that way, either in an iterative process (hopefully converging) or in a recursive process calculating new values all the time. In both cases the errors can accumulate and finally destroy the computation.

Example 1.4 (Differential equation) Let us look at the solution of a first order differential equation y' = f(x, y).
A well-known numerical method is the Euler method y_{n+1} = y_n + h·f(x_n, y_n). Two alternatives with smaller truncation errors are the midpoint method y_{n+1} = y_{n-1} + 2h·f(x_n, y_n), which has the obvious disadvantage that it requires two starting points, and the trapezoidal method y_{n+1} = y_n + (h/2)·[f(x_n, y_n) + f(x_{n+1}, y_{n+1})], which has the obvious disadvantage that it is implicit.

Theoretical analysis shows that the Euler method is stable[1] for small h, and the midpoint method is always unstable, while the trapezoidal method is always stable. Numerical experiments on the test problem y' = -2y with the exact solution y(x) = e^{-2x} confirm that the midpoint method gives a solution which oscillates wildly.[2]

[1] Stability can be defined such that if the analytic solution tends to zero as the independent variable tends to infinity, then also the numeric solution should tend to zero.
[2] Compare with Figures 4.6 and 4.7 in Chapter 4.
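The instability described in Example 1.4 is easy to reproduce. The following sketch is ours, not from the book (whose experiments use MATLAB); it applies the Euler and midpoint formulas above to the same test problem y' = -2y, y(0) = 1:

```python
# Hypothetical illustration, not from the book: apply the Euler and midpoint
# methods of Example 1.4 to y' = -2y, y(0) = 1, whose exact solution
# y(x) = exp(-2x) decays to zero.
import math

def f(x, y):
    return -2.0 * y

def euler(h, steps):
    x, y = 0.0, 1.0
    for _ in range(steps):
        y = y + h * f(x, y)              # y_{n+1} = y_n + h f(x_n, y_n)
        x += h
    return y

def midpoint(h, steps):
    # The required second starting value is taken from the exact solution.
    x, y_prev, y = h, 1.0, math.exp(-2.0 * h)
    for _ in range(1, steps):
        y_prev, y = y, y_prev + 2.0 * h * f(x, y)   # y_{n+1} = y_{n-1} + 2h f(x_n, y_n)
        x += h
    return y

h, steps = 0.1, 100                      # integrate to x = 10
print(euler(h, steps))                   # tiny and positive: decays like the exact solution
print(midpoint(h, steps))                # huge in magnitude: the unstable mode dominates
```

With h = 0.1 the Euler result decays toward zero as the exact solution does, while the midpoint result is dominated by the growing parasitic solution of the two-term recurrence, in line with the stability statements quoted above.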
1.2.4 Integer overflow

There is also a problem with integer arithmetic: integer overflow is usually not signaled. This can cause numerical problems, for example, in the calculation of the factorial function "n!". Writing the code in a natural way using repeated multiplication on a computer with 32 bit integer arithmetic, the factorials up to 12! are all correctly evaluated, but 13! gets the wrong value and 17! becomes negative.

The range of floating-point values is usually larger than the range of integers, but the best solution is usually not to evaluate the factorial function. When evaluating the Taylor formula you instead successively multiply the previous term by x/n for each n to get the next term, thus avoiding computing the large quantity n!. The factorial function overflows at n = 35 in IEEE single precision and at n = 171 in IEEE double precision.

Multiple integer overflow vulnerabilities in a Microsoft Windows library (before a patch was applied) could have allowed an unauthenticated, remote attacker to execute arbitrary code with system privileges [460].

1.3 Floating-point Arithmetic

During the 1960's almost every computer manufacturer had its own hardware and its own representation of floating-point numbers. Floating-point numbers are used for variables with a wide range of values, so that the value is represented by its sign, its mantissa, represented by a fixed number of leading digits, and one signed integer for the exponent, as in the representation of the mass of the earth 5.972·10^24 kg or the mass of an electron 9.10938188·10^-31 kg. An introduction to floating-point arithmetic is given in Section 4.2.

The old and different floating-point representations had some flaws; on one popular computer there existed values a > 0 such that a > 2·a.
This mathematical impossibility was obtained from the fact that a nonnormalized number (a number with an exponent that is too small to be represented) was automatically normalized to zero at multiplication.[3] Such an effect can give rise to a problem in a program which tries to avoid division by zero by checking that a ≠ 0, but still 1/a may cause the condition "division by zero."

1.3.1 Initial work on an arithmetic standard

During the 1970's Professor William Kahan of the University of California at Berkeley became interested in defining a floating-point arithmetic standard; see [248]. He managed to assemble a group of scientists, including both academics and industrial representatives (Apple, DEC, Intel, HP, Motorola), under the auspices of the IEEE.[4] The group became known as project 754. Its purpose was to produce the best possible definition of floating-point arithmetic. It is now possible to say that they succeeded; all manufacturers now follow the representation of IEEE 754. Some old systems with other representations are however

[3] Consider a decimal system with two digits for the exponent, and three digits for the mantissa, normalized so that the mantissa is not less than 1 but less than 10. Then the smallest positive normalized number is 1.00·10^-99 but the smallest positive nonnormalized number is 0.01·10^-99.
[4] Institute for Electrical and Electronic Engineers, USA.
  • 36. 1.3. Floating-point Arithmetic 7 still available from Cray, Digital (now HP), and IBM, but all new systems also from these manufacturers follow IEEE 754. The resulting standard was rather similar to the DEC floating-point arithmetic on the VAX system.5 1.3.2 IEEE floating-point representation TheIEEE 754 [226] contains single, extended single, double, andextended double precision. Itbecame anIEEE standard [215] in 1985 and anIEC6 standard in 1989. There is an excellent discussion in the book [355]. In the following subsections the format for the different precisions are given, but the standard includes much more than these formats. It requires correctly rounded operations (add, subtract, multiply, divide, remainder, and square root) as well as correctly rounded format conversion. There are four rounding modes (round down, round up, round toward zero, andround to nearest), withroundto nearest asthe default. There are also five exception types (invalid operation, division by zero, overflow, underflow, and inexact) which must be signaled by setting a status flag. 1.3.2.1 IEEE single precision Single precision is based on the 32 bit word, using 1bit for the sign s, 8 bits for the biased exponent e, andtheremaining 23bits forthefractional part / ofthemantissa. Thefact that a normalized number in binary representation must have the integer part of the mantissa equal to 1is used, and this bit therefore does not have to be stored, which actually increases the accuracy. There are five cases: 1. e = 255 and f 0 give an x which is not a number (NaN). 2. e = 255and /= 0 give infinity with its sign, x = (-1)s . 3. 1 e 254, the normal case, x = (-1)s • (1.f)-2e -127 . Note that the smallest possible exponent gives numbers of the form x = (-1)s • (1.f)-2-126 . 4. e = 0 and f 0, gradual underflow, subnormal numbers, x = (-l)s .(0.f)-2-126 . 5. e = 0 and f= 0, zero with its sign, x = (-1)s .0. 
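The five cases can be explored directly by unpacking the bit fields of a 32-bit float; a small sketch using Python's standard struct module (the helper name is ours, for illustration only):

```python
import struct

def decode_single(x):
    """Return (s, e, f, case) for the IEEE single-precision encoding of x."""
    (bits,) = struct.unpack(">I", struct.pack(">f", x))
    s = bits >> 31                    # sign bit
    e = (bits >> 23) & 0xFF           # biased exponent, 8 bits
    f = bits & 0x7FFFFF               # fractional part, 23 bits
    if e == 255:
        case = "NaN" if f != 0 else "infinity"
    elif e == 0:
        case = "zero" if f == 0 else "subnormal"
    else:                             # x = (-1)**s * (1.f) * 2**(e - 127)
        case = "normal"
    return s, e, f, case

print(decode_single(1.0))             # (0, 127, 0, 'normal')
print(decode_single(-0.0))            # (1, 0, 0, 'zero'): zero keeps its sign
print(decode_single(2.0**-130)[3])    # 'subnormal', since 2**-130 < 2**-126
```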
The largest number that can be represented is (2 - 2^-23) · 2^127 ≈ 3.4028 × 10^38, the smallest positive normalized number is 1 · 2^-126 ≈ 1.1755 × 10^-38, and the smallest positive nonnormalized number is 2^-23 · 2^-126 = 2^-149 ≈ 1.4013 × 10^-45. The unit roundoff u = 2^-24 ≈ 5.9605 × 10^-8 corresponds to about seven decimal digits.

The concept of gradual underflow has been rather difficult for the user community to accept, but it is very useful in that there is no unnecessary loss of information. Without gradual underflow, a positive number less than the smallest permitted one must either be rounded up to the smallest permitted one or replaced with zero, in both cases causing a large relative error.

NaN can be used to represent (zero/zero), (infinity - infinity), and other quantities that do not have a known value. Note that the computation does not have to stop at overflow, since infinity can be propagated until a calculation with it fails to give a well-determined value. The sign of zero is useful only in certain cases.

(5) After 22 successful years, starting in 1978 with the VAX 11/780, the VAX platform was phased out. HP will continue to support VAX customers.
(6) The International Electrotechnical Commission handles information about electric, electronic, and electrotechnical international standards and compliance and conformity assessment for electronics.

1.3.2.2 IEEE extended single precision

The purpose of extended precision is to make it possible to evaluate subexpressions to full single precision. The details are implementation dependent, but the number of bits in the fractional part f has to be at least 31, and the exponent, which may be biased, has to cover at least the range -1022 ≤ exponent ≤ 1023. IEEE double precision satisfies these requirements!

1.3.2.3 IEEE double precision

Double precision is based on two 32-bit words (or one 64-bit word), using 1 bit for the sign s, 11 bits for the biased exponent e, and the remaining 52 bits for the fractional part f of the mantissa. It is very similar to single precision, with an implicit bit for the integer part of the mantissa, a biased exponent, and five cases:

1. e = 2047 and f ≠ 0 give an x which is not a number (NaN).
2. e = 2047 and f = 0 give infinity with its sign, x = (-1)^s · ∞.
3. 1 ≤ e ≤ 2046, the normal case, x = (-1)^s · (1.f) · 2^(e-1023). Note that the smallest possible exponent gives numbers of the form x = (-1)^s · (1.f) · 2^(-1022).
4. e = 0 and f ≠ 0 give gradual underflow, subnormal numbers, x = (-1)^s · (0.f) · 2^(-1022).
5. e = 0 and f = 0 give zero with its sign, x = (-1)^s · 0.

The largest number that can be represented is (2 - 2^-52) · 2^1023 ≈ 1.7977 × 10^308, the smallest positive normalized number is 1 · 2^-1022 ≈ 2.2251 × 10^-308, and the smallest positive nonnormalized number is 2^-52 · 2^-1022 = 2^-1074 ≈ 4.9407 × 10^-324. The unit roundoff u = 2^-53 ≈ 1.1102 × 10^-16 corresponds to about 16 decimal digits.
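The extreme values quoted above are easy to confirm on any machine with IEEE arithmetic; a sketch in Python, whose float is an IEEE double:

```python
import sys

# Largest finite double: (2 - 2**-52) * 2**1023, about 1.7977e308
assert sys.float_info.max == (2.0 - 2.0**-52) * 2.0**1023

# Smallest positive normalized double: 2**-1022, about 2.2251e-308
assert sys.float_info.min == 2.0**-1022

# Smallest positive subnormal: 2**-1074; halving it underflows to zero
tiny = 2.0**-1074
assert tiny > 0.0 and tiny / 2.0 == 0.0

# Unit roundoff u = 2**-53, about 1.1102e-16: adding u to 1.0 has no
# effect under round-to-nearest, while adding 2*u = 2**-52 does.
u = 2.0**-53
assert 1.0 + u == 1.0 and 1.0 + 2.0 * u > 1.0

print("all IEEE double-precision checks passed")
```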
The fact that the exponent is wider for double precision is a useful innovation not available, for example, on the IBM System 360.(7) On the DEC VAX/VMS two different double precisions, D and G, were available: D with the same exponent range as in single precision, and G with a wider exponent range. The choice between the two double precisions was made via a compiler switch at compile time. In addition there was a quadruple precision H.

1.3.2.4 IEEE extended double precision

The purpose of extended double precision is to make it possible to evaluate subexpressions to full double precision. The details are implementation dependent, but the number of bits in the fractional part f has to be at least 63, and the number of bits in the exponent part has to be at least 15.

1.3.3 Future standards

There is also an IEEE Standard for Radix-Independent Floating-Point Arithmetic, ANSI/IEEE 854 [217]. This is however of less interest here.

Currently double precision is not always sufficient, so quadruple precision is available from many manufacturers. There is no official standard available; most manufacturers generalize IEEE 754, but some (including SGI) have another convention. SGI quadruple precision is very different from the usual quadruple precision, which has a very large range; with SGI the range is about the same in double and quadruple precision. The reason is that here the quadruple variables are represented as the sum or difference of two doubles, normalized so that the smaller double is 0.5 units in the last place of the larger. Care must therefore be taken when using quadruple precision. Packages for multiple precision also exist.

1.4 What Really Went Wrong in Applied Scientific Computing!

We include only examples where numerical problems have occurred, not the more common pure programming errors (bugs).
More examples are given in the paper [424] and in Thomas Huckle's web site Collection of Software Bugs [212]. Quite a different view is taken in [4], which discusses how to get the mathematics and the numerics correct.

1.4.1 Floating-point precision

Floating-point precision has to be sufficiently accurate to handle the task. In this section we give some examples where this has not been the case.

(7) IBM System 360 was announced in April 1964 and was a very successful spectrum of compatible computers that continues in an evolutionary form to this day. Following System 360 was System 370, announced in June 1970. The line after System 370 continued under different names: 303X (announced in March 1977), 308X, 3090, 9021, 9121, and the zSeries. These machines all share a common heritage. The floating-point arithmetic is hexadecimal.
1.4.1.1 Patriot missile

A well-known example is the Patriot missile(8) failure [158, 419] against a Scud missile(9) on February 25, 1991, at Dhahran, Saudi Arabia. The Patriot missile was designed, in order to avoid detection, to operate for only a few hours at one location. The velocity of the incoming missile is a floating-point number, but the time from the internal clock is an integer, representing the time in tenths of a second. Before that time is used, the integer is multiplied by a numerical approximation of 0.1 to 24 bits, causing an error of 9.5 × 10^-8 in the conversion factor. The inaccuracy in the position of the target is proportional to the product of the target velocity and the length of time the system has been running. This is a somewhat oversimplified discussion; a more detailed one is given in [419].

With the system up and running for 100 hours and a Scud velocity of 1676 meters per second, an error of 573 meters is obtained, which is more than sufficient to cause failure of the Patriot and success for the Scud. The Scud missile killed 28 soldiers.

Modified software, which compensated for the inaccurate time calculation, arrived the following day. The potential problem had been identified by the Israelis and reported to the Patriot Project Office on February 11, 1991.

1.4.1.2 The Vancouver Stock Exchange

The Vancouver Stock Exchange (see the references in [154]) in 1982 experienced a problem with its index. The index (with three decimals) was updated (and truncated) after each transaction. After 22 months it had fallen from the initial value 1000.000 to 524.881, but the correctly evaluated index was 1098.811. Assuming 2000 transactions a day, a simple statistical analysis shows directly that the index will lose about one unit per day, since the mean truncation error is 0.0005 per transaction.
Assuming 22 working days a month, the index would then be 516 instead of the actual (but still false) 524.881.

1.4.1.3 Schleswig-Holstein local elections

In the Schleswig-Holstein local elections in 1992, one party got 5.0% in the printed results (which was correctly rounded to one decimal), but the correct value rounded to two decimals was 4.97%, and the party therefore did not pass the 5% threshold for getting into the local parliament, which in turn caused a switch of majority [481]. Similar rules apply not only in German elections. A special rounding algorithm is required at the threshold, truncating all values between 4.9 and 5.0 in Germany, or all values between 3.9 and 4.0 in Sweden!

(8) The Patriot is a long-range, all-altitude, all-weather air defense system to counter tactical ballistic missiles, cruise missiles, and advanced aircraft. Patriot missile systems were deployed by U.S. forces during the First Gulf War. The systems were stationed in Kuwait and destroyed a number of hostile surface-to-surface missiles.
(9) The Scud was first deployed by the Soviets in the mid-1960s. The missile was originally designed to carry a 100 kiloton nuclear warhead or a 2000 pound conventional warhead, with ranges from 100 to 180 miles. Its principal threat was its warhead's potential to hold chemical or biological agents. The Iraqis modified Scuds for greater range, largely by reducing warhead weight, enlarging their fuel tanks, and burning all of the fuel during the early phase of flight. It has been estimated that 86 Scuds were launched during the First Gulf War.
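The numbers in the Patriot example (Section 1.4.1.1) can be reproduced with exact rational arithmetic. The sketch below models the 24-bit register as 0.1 chopped to 23 binary digits after the binary point, an assumption of ours that is consistent with the 9.5 × 10^-8 conversion-factor error quoted above:

```python
from fractions import Fraction

tenth = Fraction(1, 10)
stored = Fraction(2**23 // 10, 2**23)   # 0.1 truncated to 23 fraction bits
err = tenth - stored                     # error per clock tick of 0.1 s

ticks = 100 * 3600 * 10                  # tenths of a second in 100 hours
time_error = float(ticks * err)          # accumulated clock skew in seconds
distance = time_error * 1676             # Scud velocity: 1676 m/s

print(float(err))                  # ≈ 9.54e-8, the conversion-factor error
print(round(time_error, 4), "s")   # ≈ 0.3433 s of accumulated skew
print(round(distance), "m")        # ≈ 575 m, in line with the quoted 573 m
```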
1.4.1.4 Criminal usage of roundoff

Criminal usage of roundoff has been reported [30], involving many tiny withdrawals from bank accounts. The common practice of rounding to whole units of the currency (for example, to dollars, removing the cents) implies that the cross sums do not agree exactly, which diminishes the chance/risk of detecting the fraud.

1.4.1.5 Euro conversion

A problem is connected with the European currency, the euro, which replaced 12 national currencies on January 1, 2002. Partly due to the strictly defined conversion rules, the roundoff can have a significant impact [162]. One problem is that the conversion factors from the old local currencies have six significant decimal digits, thus permitting a varying relative error, and for small amounts the final result is also to be rounded, according to local customs, to at most two decimals.

1.4.2 Illegal conversion between data types

On June 4, 1996, an unmanned Ariane 5 rocket launched by the European Space Agency exploded forty seconds after lift-off from Kourou, French Guiana. The report by the inquiry board [292, 413] found that the failure was caused by the conversion of a 64-bit floating-point number to a 16-bit signed integer. The floating-point number was too large to be represented by a 16-bit signed integer (it was larger than 32767). In fact, this part of the software was required in Ariane 4 but not in Ariane 5!

A somewhat similar problem is the illegal mixing of different units of measurement (SI, Imperial, and U.S.). An example is the Mars Climate Orbiter, which was lost on entering orbit around Mars on September 23, 1999. The "root cause" of the loss was that a subcontractor failed to obey the specification that SI units should be used and instead used Imperial units in their segment of the ground-based software; see [225]. See also pages 35-38 in the book [440].
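The Ariane failure mode, converting a too-large 64-bit float to a 16-bit signed integer, can be imitated with Python's standard struct module, whose range check plays the role of the hardware's operand-error trap (the function name and the numeric values are ours, for illustration only):

```python
import struct

def to_int16(x):
    """Convert a float to a 16-bit signed integer; struct raises an
    exception when the value is outside [-32768, 32767], much as the
    Ariane hardware raised an operand error."""
    return struct.unpack("<h", struct.pack("<h", round(x)))[0]

print(to_int16(123.4))          # 123: fits comfortably in 16 bits

try:
    to_int16(40000.0)           # an Ariane 5-sized value: out of range
except struct.error:
    print("conversion trapped: value does not fit in 16 bits")
```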
1.4.3 Illegal data

A crew member of the USS Yorktown mistakenly entered a zero for a data value in September 1997, which resulted in a division by zero. The error cascaded and eventually shut down the ship's propulsion system. The ship was dead in the water for 2 hours and 45 minutes; see [420, 202].

1.4.4 Inaccurate finite element analysis

On August 23, 1991, the Sleipner A offshore platform collapsed in Gandsfjorden near Stavanger, Norway. The conclusion of the investigation [417] was that the loss was caused by a failure in a cell wall, resulting in a serious crack and a leakage that the pumps could not handle. The wall failed as a result of a combination of a serious usage error in the finite element analysis (using the popular NASTRAN code) and insufficient anchorage of the reinforcement in a critical zone.
The shear stress was underestimated by 47%, leading to insufficient strength in the design. A more careful finite element analysis after the accident predicted that failure would occur at 62 meters depth; it did occur at 65 meters.

1.4.5 Incomplete analysis

The Millennium Bridge [17, 24, 345] over the Thames in London was closed on June 12, 2000, directly after its public opening on June 10, 2000, since it wobbled more than expected. The simulations performed during the design process handled the vertical force (which was all that was required by the British Standards Institution) of a pedestrian at around 2 Hz, but not the horizontal force at about 1 Hz. What happened was that the slight wobbling (within tolerances) due to the wind caused the pedestrians to walk in step (synchronous walking), which made the bridge wobble even more. On the first day almost 100,000 persons walked the bridge.

This wobbling problem had actually been noted already in 1975 on the Auckland Harbour Road Bridge in New Zealand, when a demonstration walked that bridge, but that incident was never widely published. The wobbling started suddenly when a certain number of persons were walking the Millennium Bridge; in a test, 166 persons were required for the problem to appear. The bridge was reopened in 2002, after 37 viscous dampers and 54 tuned mass dampers had been installed and all the modifications had been carefully tested. The modifications were completely successful.

1.5 Conclusion

The aim of this book is to diminish the risk of future occurrences of incidents like those described in the previous section. The techniques and tools to achieve this goal are now emerging; some of them are presented in Parts II and III.
Chapter 2. Assessment of Accuracy and Reliability

Ronald F. Boisvert, Ronald Cools, and Bo Einarsson

One of the principal goals of scientific computing is to provide predictions of the behavior of systems to aid in decision making. Good decisions require good predictions. But how are we to be assured that our predictions are good? Accuracy and reliability are two qualities of good scientific software. Assessing these qualities is one way to provide confidence in the results of simulations. In this chapter we provide some background on the meaning of these terms.(10)

2.1 Models of Scientific Computing

Scientific software is particularly difficult to analyze. One of the main reasons is that it is inevitably infused with uncertainty from a wide variety of sources. Much of this uncertainty is the result of approximations, which are made in the context of each of the physical world, the mathematical world, and the computer world.

To make these ideas more concrete, let's consider a scientific software system designed to allow virtual experiments to be conducted on some physical system. Here, the scientist hopes to develop a computer program which can be used as a proxy for some real-world system to facilitate understanding. The reason for conducting virtual experiments is that developing and running a computer program can be much more cost effective than developing and running a fully instrumented physical experiment (consider a series of crash tests for a car, for example). In other cases performing the physical experiment can be practically impossible, as with, for example, a physical experiment to understand the formation of galaxies!

The process of abstracting the physical system to the level of a computer program is illustrated in Figure 2.1. This process occurs in a sequence of steps.
[Figure 2.1. A model of computational modeling.]

(10) Portions of this chapter were contributed by NIST and are not subject to copyright in the USA.

• From real world to mathematical model
A length scale is selected which will allow the determination of the desired results using a reasonable amount of resources, for example, the atomic scale (nanometers) or the macro scale (kilometers). Next, the physical quantities relevant to the study, such as temperature and pressure, are selected (and all other effects implicitly discarded). Then, the physical principles underlying the real-world system, such as conservation of energy and mass, lead to mathematical relations, typically partial differential equations (PDEs), that express the mathematical model. Additional mathematical approximations may be introduced to further simplify the model. Approximations can include discarded effects and inadequately modeled effects (e.g., discarded terms in equations, linearization).

• From mathematical model to computational model
The equations expressing the mathematical model are typically set in some infinite-dimensional space. In order to admit a numerical solution, the problem is transformed to a finite-dimensional space by some discretization process. Finite differences and finite elements are examples of discretization methods for PDE models. In such computational models one must select an order of approximation for the derivatives. One also introduces a computational grid of some type. It is desirable that the discrete model converge to the continuous model as either the mesh width approaches zero or the order of approximation approaches infinity. In this way, accuracy can be controlled using these parameters of the numerical method. A specification for how to solve the discrete equations must also be provided. If an iterative method is used (which is certainly the case for nonlinear problems), then the solution is obtained only in the limit,
and hence a criterion for stopping the iteration must be specified. Approximations can include discretization of the domain, truncation of series, linearization, and stopping before convergence.

• From computational model to computer implementation
The computational model and its solution procedure are implemented on a particular computer system. The algorithm may be modified to make use of parallel processing capabilities or to take advantage of the particular memory hierarchy of the device. Many arithmetic operations are performed using floating-point arithmetic. Approximations can include floating-point arithmetic and the approximation of standard mathematical or special functions (typically via calls to library routines).

2.2 Verification and Validation

If one is to use the results of a computer simulation, then one must have confidence that the answers produced are correct. However, absolute correctness may be an elusive quantity in computer simulation. As we have seen, there will always be uncertainty: uncertainty in the mathematical model, in the computational model, in the computer implementation, and in the input data. A more realistic goal is to carefully characterize this uncertainty. This is the main goal of verification and validation.

• Code verification
This is the process of determining the extent to which the computer implementation corresponds to the computational model. If the latter is expressed as a specification, then code verification is the process of determining whether an implementation corresponds to its lowest-level algorithmic specification. In particular, we ask whether the specified algorithm has been correctly implemented, not whether it is an effective algorithm.

• Solution verification
This is the process of determining the extent to which the computer implementation corresponds to the mathematical model.
Assuming that the code has been verified (i.e., that the algorithm has been correctly implemented), solution verification asks whether the underlying numerical methods correctly produce solutions to the abstract mathematical problem.

• Validation
This is the process of determining the extent to which the computer implementation corresponds to the real world. If solution verification has already been demonstrated, then validation asks whether the mathematical model is effective in simulating those aspects of the real-world system under study. Of course, neither the mathematical nor the computational model can be expected to be valid in all regions of their own parameter spaces. The validation process must confirm these regions of validity.

Figure 2.2 illustrates how verification and validation are used to quantify the relationship between the various models in the computational science and engineering process. In a
rough sense, validation is the aim of the application scientist (e.g., the physicist or chemist) who will be using the software to perform virtual experiments. Solution verification is the aim of the numerical analyst. Finally, code verification is the aim of the programmer.

[Figure 2.2. A model of computational validation.]

There is now a large literature on the subject of verification and validation. Nevertheless, the words themselves remain somewhat ambiguous, with different authors often assigning slightly different meanings. For software in general, the IEEE adopted the following definitions in 1984 (they were subsequently adopted by various other organizations and communities, such as the ISO(11)).

(11) International Organization for Standardization.

• Verification: The process of evaluating the products of a software development phase to provide assurance that they meet the requirements defined for them by the previous phase.

• Validation: The process of testing a computer program and evaluating the results to ensure compliance with specific requirements.

These definitions are general in that "requirements" can be given different meanings in different application domains. For computational simulation, the U.S. Defense Modeling
and Simulation Office (DMSO) proposed the following definitions (1994), which were subsequently adopted in the context of computational fluid dynamics by the American Institute of Aeronautics and Astronautics [7].

• Verification: The process of determining that a model implementation accurately represents the developer's conceptual description of the model and the solution to the model.

• Validation: The process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended users of the model.

The DMSO definitions can be regarded as special cases of the IEEE ones, given appropriate interpretations of the word "requirements" in the general definitions. In the DMSO proposal, the verification is with respect to the requirement that the implementation should correctly realize the mathematical model. The DMSO validation is with respect to the requirement that the results generated by the model should be sufficiently in agreement with the real-world phenomena of interest that they can be used for the intended purpose.

The definitions used in the present book, given at the beginning of this chapter, differ from those of DMSO in that they make a distinction between the computational model and the mathematical model. In our opinion, this distinction is so central in scientific computing that it deserves to be made explicit in the verification and validation processes. The model actually implemented in the computer program is the computational one. Consequently, according to the IEEE definition, validation is about testing that the computational model fulfills certain requirements, ultimately those of the DMSO definition of validation. The full validation can then be divided into two levels. The first level of validation (solution verification) is to demonstrate that the computational model is a sufficiently accurate representation of the mathematical one.
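A minimal illustration of solution verification is a convergence study: refine the discretization and check that the computed solution approaches the mathematical one at the expected rate. The sketch below is our own toy example, not from the book; it verifies the second-order central-difference approximation of a second derivative.

```python
import math

def second_derivative(f, x, h):
    """Central-difference approximation of f''(x) with mesh width h."""
    return (f(x + h) - 2.0 * f(x) + f(x - h)) / h**2

# For f = sin the exact answer is known: f''(x) = -sin(x).  Halving h
# should reduce the error by a factor of about 4 for a second-order method.
x = 1.0
exact = -math.sin(x)
errors = [abs(second_derivative(math.sin, x, h) - exact)
          for h in (0.1, 0.05, 0.025)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]

print([round(r, 2) for r in ratios])   # both ratios close to 4.0
```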
The second level of validation is to determine that the mathematical model is sufficiently effective in reproducing properties of the real world.

The book [383] and the more recent paper [349] present extensive reviews of the literature in computational validation, arguing for the separation of the concepts of error and uncertainty in computational simulations. A special issue of Computing in Science and Engineering (see [452]) has been devoted to this topic.

2.3 Errors in Software

When we ask whether a program is "correct" we want to know whether it faithfully follows its lowest-level specifications. Code verification is the process by which we establish correctness in this sense. In order to understand the verification process, it is useful to be mindful of the most typical types of errors encountered in scientific software. Scientific software is prone to many of the same problems as software in other areas of application. In this section we consider these. Problems unique to scientific software are considered in Section 2.5.

Chapter 5 of the book [440] introduces a broad classification of bugs organized by the original source of the error, i.e., how the error gets into the program, which the authors Telles and Hsieh call "Classes of Bugs." We summarize them in the following list.
• Requirement bugs. The specification itself could be inadequate. For example, it could be too vague, be missing a critical requirement, or have two requirements in conflict.

• Implementation bugs. These are the bugs in the logic of the code itself. They include problems like not following the specification, not correctly handling all input cases, missing functionality, problems with the graphical user interface, improper memory management, and coding errors.

• Process bugs. These are bugs in the runtime environment of the executable program, such as improper versions of dynamically linked libraries or broken databases.

• Build bugs. These are bugs in the procedure used to build the executable program. For example, a product may be built for a slightly wrong environment.

• Deployment bugs. These are problems with the automatic updating of installed software.

• Future planning bugs. These are bugs like the year 2000 problem, where the lifetime of the product was underestimated, or where the technical development went faster than expected (e.g., the 640 KiB(12) limit of MS-DOS).

• Documentation bugs. The software documentation, which should be considered an important part of the software itself, might be vague, incomplete, or inaccurate. For example, when providing information on a library routine's procedure call it is important to provide not only the meaning of each variable but also its exact type, as well as any possible side effects of the call.

The book [440] also provides a list of common bugs in the implementation of software. We summarize them in the following list.

• Memory or resource leaks. A memory leak occurs when memory is allocated but not deallocated when it is no longer required. It can cause the memory use of long-running programs to grow to the point that it overwhelms the available memory.
Memory leaks can occur in any programming language and are sometimes caused by programming errors.

(12) "Ki" is the IEC prefix for the factor 2^10 = 1024, corresponding to "k" for the factor 1000; "B" stands for "byte." See, for example, http://physics.nist.gov/cuu/Units/binary.html.

• Logic errors. A logic error occurs when a program is syntactically correct but does not perform according to the specification.

• Coding errors. Coding errors occur when an incorrect list of arguments to a procedure is used, or when part of the intended code is simply missing. Others are more subtle. A classical example of the latter is the Fortran statement DO 25 I = 1.10, which in Fortran 77
(and earlier, and also in the obsolescent Fortran 90/95 fixed source form) assigns the variable DO25I the value 1.10 instead of creating the intended DO loop, which requires the period to be replaced with a comma.

• Memory overruns. Computing an index incorrectly can lead to the access of memory locations outside the bounds of an array, with unpredictable results. Such errors are less likely in modern high-level programming languages but may still occur.

• Loop errors. Common cases are unintended infinite loops, off-by-one loops (i.e., loops executed once too often or once too seldom), and loops with improper exits.

• Conditional errors. It is quite easy to make a mistake in the logic of an if-then-else-endif construct. A related problem arises in some languages, such as Pascal, that do not require an explicit endif.

• Pointer errors. Pointers may be uninitialized, deleted (but still used), or invalid (pointing to something that has been removed).

• Allocation errors. Allocation and deallocation of objects must be done according to the proper conventions. For example, if you wish to change the size of an allocated array in Fortran, you must first check whether it is allocated, then deallocate it (losing its contents), and finally allocate it with its correct size. Attempting to reallocate an existing array will lead to an error condition detected by the runtime system.

• Multithreaded errors. Programs made up of multiple independent and simultaneously executing threads are subject to many subtle and difficult-to-reproduce errors known as race conditions. These occur when two threads try to access or modify the same memory address simultaneously, but correct operation requires a particular order of access.

• Timing errors. A timing error occurs when two events are designed and implemented to occur at a certain rate or within a certain margin.
They are most common in connection with interrupt service routines and are restricted to environments where the clock is important. One symptom is input or output hanging and not resuming.

• Distributed application errors. Such an error is defined as an error in the interface between two applications in a distributed system.

• Storage errors. These errors occur when a storage device gets a soft or hard error and is unable to proceed.

• Integration errors. These occur when two fully tested and validated individual subsystems are combined but do not cooperate as intended when combined.

• Conversion errors. Data in use by the application might be given in the wrong format (integer, floating
point, ...) or in the wrong units (m, cm, feet, inches, ...). An unanticipated result from a type conversion can also be classified as a conversion error. A problem of this nature occurs in many compiled programming languages by the assignment of 1/2 to a variable of floating-point type. The rules of integer arithmetic usually state that when two integers are divided there is an integer result. Thus, if the variable A is of a floating-point type, then the statement A = 1/2 will result in an integer zero, converted to a floating-point zero assigned to the variable A, since 1 and 2 are integer constants, and integer division is rounded toward zero.

• Hard-coded lengths or sizes. If sizes of objects like arrays are defined to be of a fixed size, then care must be taken that no problem instance will be permitted a larger size. It is best to avoid such hard-coded sizes, either by using allocatable arrays that yield a correctly sized array at runtime, or by parameterizing object sizes so that they are easily and consistently modified.

• Version bugs. In this case the functionality of a program unit or a data storage format is changed between two versions, without backward compatibility. In some cases individual version changes may be compatible, but not over several generations of changes.

• Inappropriate reuse bugs. Program reuse is normally encouraged, both to reduce effort and to capitalize upon the expertise of subdomain specialists. However, old routines that have been carefully tested and validated under certain constraints may cause serious problems if those constraints are not satisfied in a new environment. It is important that high standards of documentation and parameter checking be set for reuse libraries to avoid incompatibilities of this type.

• Boolean bugs. The authors of [440] note on page 159 that "Boolean algebra has virtually nothing to do with the equivalent English words.
When we say 'and', we really mean the Boolean 'or' and vice versa." This leads to misunderstandings among both users and programmers. Similarly, the meaning of "true" and "false" in the code may be unclear.

Looking carefully at the examples of the Patriot missile, section 1.4.1.1, and the Ariane 5 rocket, section 1.4.2, we observe that in addition to the obvious conversion errors and storage errors, inappropriate reuse bugs were also involved. In each case the failing software was taken from earlier and less advanced hardware equipment, where the software had worked well for many years. They were both well tested, but not in the new environment.

2.4 Precision, Accuracy, and Reliability

Many of the bugs listed in the previous section lead to anomalous behavior that is easy to recognize, i.e., the results are clearly wrong. For example, it is easy to see if the output of a program to sort data is really sorted. In scientific computing things are rarely so clear. Consider a program to compute the roots of a polynomial. Checking the output here seems easy; one can evaluate the function at the computed points. However, the result will seldom be exactly zero. How close to zero does this residual have to be to consider the answer
correct? Indeed, for ill-conditioned polynomials the relative sizes of residuals provide a very unreliable measure of accuracy. In other cases, such as the evaluation of a definite integral, there is no such "easy" method.

Verification and validation in scientific computing, then, is not a simple process that gives yes or no as an answer. There are many gradations. The concepts of accuracy and reliability are used to characterize such gradations in the verification and validation of scientific software. In everyday language the words accuracy, reliability, and the related concept of precision are somewhat ambiguous. When used as quality measures they should not be ambiguous. We use the following definitions.

• Precision refers to the number of digits used for arithmetic, input, and output.

• Accuracy refers to the absolute or relative error of an approximate quantity.

• Reliability measures how "often" (as a percentage) the software fails, in the sense that the true error is larger than what is requested.

Accuracy is a measure of the quality of the result. Since achieving a prescribed accuracy is rarely easy in numerical computations, the importance of this component of quality is often underweighted. We note that determining accuracy requires the comparison to something external (the "exact" answer). Thus, stating the accuracy requires that one specifies what one is comparing against. The "exact" answer may be different for each of the computational model, the mathematical model, and the real world. In this book most of our attention will be on solution verification; hence we will be mostly concerned with comparison to the mathematical model.

To determine accuracy one needs a means of measuring (or estimating) error. Absolute error and relative error are two important such measures.
Absolute error is the magnitude of the difference between a computed quantity x and its true value x*, i.e., |x − x*|. Relative error is the ratio of the absolute error to the magnitude of the true value, i.e., |x − x*| / |x*|. Relative error provides a method of characterizing the percentage error; when the relative error is less than one, the negative of the log10 of the relative error gives the number of significant decimal digits in the computed solution. Relative error is not so useful a measure as x* approaches 0; one often switches to absolute error in this case. When the computed solution is a multicomponent quantity, such as a vector, then one replaces the absolute values by an appropriate norm.

The terms precision and accuracy are frequently used inconsistently. Furthermore, the misconception that high precision implies high accuracy is almost universal. Dahlquist and Björck [108] use the term precision for the accuracy with which the basic arithmetic operations +, −, ×, and / are performed by the underlying hardware. For floating-point operations this is given by the unit roundoff u.13 But even on that there is no general agreement. One should be specific about the rounding mode used.

13 The unit roundoff u can roughly be considered as the largest positive floating-point number u such that 1 + u = 1 in computer arithmetic. Because repeated rounding may occur this is not very useful as a strict definition. The formal definition of u is given by (4.1).
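The definitions above can be illustrated with a short sketch (Python is used here for brevity; the same computations apply in any language, and the names `abs_error` and `rel_error` are our own):

```python
import math

def abs_error(x, x_true):
    """Absolute error |x - x*|."""
    return abs(x - x_true)

def rel_error(x, x_true):
    """Relative error |x - x*| / |x*| (x* must be nonzero)."""
    return abs(x - x_true) / abs(x_true)

# Approximating pi by the truncation 3.14159:
x_true = math.pi
x = 3.14159
r = rel_error(x, x_true)
# -log10 of the relative error ~ number of significant decimal digits.
digits = -math.log10(r)
print(digits)   # about 6 significant digits

# A rough estimate of the unit roundoff in the spirit of footnote 13:
# halve u until 1 + u/2 is indistinguishable from 1 in floating point.
u = 1.0
while 1.0 + u / 2 != 1.0:
    u /= 2
print(u)        # 2**-52, about 2.2e-16 for IEEE double precision
```

Note that the loop gives only a rough estimate; as the footnote warns, repeated rounding makes "largest u with 1 + u = 1" unsuitable as a strict definition, and the formal definition is deferred to (4.1).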
The difference between (traditional) rounding and truncation played an amusing role in the 100-digit challenge posed in [447, 446]. Trefethen did not specify whether digits should be rounded or truncated when he asked for "correct digits." First he announced 18 winners, but he corrected this a few days later to 20. In the interview included in [44] he explains that two teams persuaded him that he had misjudged one of their answers by 1 digit. This was a matter of rounding instead of truncating.

Reliability in scientific software is considered in detail in Chapter 13. Robustness is a concept related to reliability that indicates how "gracefully" the software fails (this is nonquantitative) and also its sensitivity to small changes in the problem (this is related to condition numbers). A robust program knows when it might have failed and reports that fact.

2.5 Numerical Pitfalls Leading to Anomalous Program Behavior

In this section we illustrate some common pitfalls unique to scientific computing that can lead to the erosion of accuracy and reliability in scientific software. These "numerical bugs" arise from the complex structure of the mathematics underlying the problems being solved and the sometimes fragile nature of the numerical algorithms and floating-point arithmetic necessary to solve them. Such bugs are often subtle and difficult to diagnose. Some of these are described more completely in Chapter 4.

• Improper treatment of constants with infinite decimal expansions. The coding of constants with infinite decimal expansions, like π, √2, or even 1/9, can have profound effects on the accuracy of a scientific computation. One will never achieve 10 decimal digits of accuracy in a computation in which π is encoded as 3.14159 or 1/9 is encoded as 0.1111.
To obtain high accuracy and portability such constants should, whenever possible, be declared as constants (and thus computed at compile time) or be computed at runtime. In some languages, e.g., MATLAB, π is stored to double precision accuracy and is available by a function call. For the constants above we can in Fortran 95 use the working precision wp, with at least 10 significant decimal digits:

    integer, parameter :: wp = selected_real_kind(10)
    real(kind=wp), parameter :: one = 1.0_wp, two = 2.0_wp, &
        & four = 4.0_wp, ninth = 1.0_wp/9.0_wp
    real(kind=wp) :: pi, sqrt2
    pi = four*atan(one)
    sqrt2 = sqrt(two)

• Testing on floating-point equality. In scientific computations approximations and roundoff lead to quantities that are rarely exact. So, testing whether a particular variable that is the result of a computation is 0.0 or 1.0 is rarely correct. Instead, one must determine what interval around 0.0 or 1.0 is sufficient to satisfy the criteria at hand and then test for inclusion in that interval. See, e.g., Example 1.1.
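The same advice applies in any language. As a small Python sketch (the helper `roughly_equal` and its tolerance are illustrative choices, not prescriptions; an appropriate tolerance depends on the computation at hand):

```python
import math

# 0.1 + 0.2 is not exactly 0.3 in binary floating point:
x = 0.1 + 0.2
print(x == 0.3)   # False: x is 0.30000000000000004

# Instead, test for inclusion in a small interval around the target.
def roughly_equal(a, b, tol=1e-12):
    """True when |a - b| lies within an absolute tolerance tol."""
    return abs(a - b) <= tol

print(roughly_equal(x, 0.3))   # True

# The standard library also offers a relative-tolerance comparison:
print(math.isclose(x, 0.3, rel_tol=1e-9))   # True
```

An absolute tolerance suits comparisons near 0.0, while a relative tolerance scales with the magnitudes involved; mixing the two up is itself a common bug.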
• Inconsistent precision. The IEEE standard for floating-point arithmetic defined in [226] requires the availability of at least two different arithmetic precisions. If you wish to obtain a correct result in the highest precision available, then it is usually imperative that all floating-point variables and constants are in (at least) that precision. If variables and constants of different precisions (i.e., different floating-point types) are mixed in an arithmetic expression, this can result in a loss of accuracy in the result, dropping it to that of the lowest precision involved.

• Faulty stopping criteria. See, e.g., Example 1.1.

• Not obviously wrong code. A code can be wrong but it can still work more or less correctly. That is, the code may produce results that are acceptable, though they are somewhat less accurate than expected, or are generated more slowly than expected. This can happen when small errors in the coding of arithmetic expressions are made. For example, if one makes a small mistake in coding a Jacobian in Newton's method for solving nonlinear systems, the program may still converge to the correct solution, but, if it does, it will do so more slowly than the quadratic convergence that one expects of Newton's method.

• Not recognizing ill-conditioned problems. Problems are ill-conditioned when their solutions are highly sensitive to perturbations in the input data. In other words, small changes in the problem to be solved (such as truncating an input quantity to machine precision) lead to a computational problem whose exact solution is far away from the solution to the original problem. Ill-conditioning is an intrinsic property of the problem which is independent of the method used to solve it. Robust software will recognize ill-conditioned problems and will alert the user. See Sections 4.3 and 4.4 for illuminating examples.

• Unstable algorithms and regions of instability.
A method is unstable when rounding errors are magnified without bound in the solution process. Some methods are stable only for certain ranges of their input data. Robust software will either use stable methods or notify the user when the input is outside the region of guaranteed stability. See section 4.4.2 for examples.

2.6 Methods of Verification and Validation

In this section we summarize some of the techniques that are used in the verification and validation of scientific software. Many of these are well known in the field of software engineering; see [6] and [440], for example. Others are specialized to the unique needs of scientific software; see [383] for a more complete presentation. We emphasize that none of these techniques is foolproof. It is rare that correctness of scientific software can be rigorously demonstrated. Instead, the verification and validation processes provide a series of techniques, each of which serves to increase our confidence that the software is behaving in the desired manner.
2.6.1 Code verification

In code verification we seek to determine how faithfully the software is producing the solution to the computational model, i.e., to its lowest level of specification. In effect, we are asking whether the code correctly implements the specified numerical procedure. Of course, the numerical method may be ineffective in solving the target mathematical problem; that is not the concern at this stage.

Sophisticated software engineering techniques have been developed in recent years to improve and automate the verification process. Such techniques, known as formal methods, rely on mathematically rigorous specifications for the expected behavior of software. Given such a specification, one can (a) prove theorems about the projected program's behavior, (b) automatically generate much of the code itself, and/or (c) automatically generate tests for the resulting software system. See [93], for example. Unfortunately, such specifications are quite difficult to write, especially for large systems. In addition, such specifications do not cope well with the uncertainties of floating-point arithmetic. Thus, they have rarely been employed in this context. A notable exception is the formal verification of low-level floating-point arithmetic functions. See, for example, [197, 198]. Gunnels et al. [184] employed formal specifications to automatically generate linear algebra kernels. In our discussion we will concentrate on more traditional code verification techniques. The two general approaches that we will consider are code analysis and testing.

2.6.1.1 Code analysis

Analysis of computer code is an important method of exposing bugs. Software engineers have devised a wealth of techniques and tools for analyzing code. One effective means of detecting errors is to have the code read and understood by someone else.
Many software development organizations use formal code reviews to uncover misunderstandings in specifications or errors in logic. These are most effectively done at the component level, i.e., for portions of code that are easier to assimilate by persons other than the developer.

Static code analysis is another important tool. This refers to automated techniques for determining properties of a program by inspection of the code itself. Static analyzers study the flow of control in the code to look for anomalies, such as "dead code" (code that cannot be executed) or infinite loops. They can also study the flow of data, locating variables that are never used, variables used before they are set, variables defined multiple times before their first use, or type mismatches. Each of these is a symptom of more serious logic errors. Tools for static code analysis are now widely available; indeed, many compilers have options that provide such analysis.

Dynamic code analysis refers to techniques that subject a code to fine scrutiny as it is executed. Tools that perform such analysis must first transform the code itself, inserting additional code to track the behavior of control flow or variables. The resulting "instrumented" code is then executed, causing data to be collected as it executes. Analysis of the data can be done either interactively or as a postprocessing phase. Dynamic code analysis can be used to track the number of times that program units are invoked and the time spent in each. Changes in individual variables can be traced. Assertions about the state of the software at any particular point can be inserted and checked at runtime. Interactive
code debuggers, as well as code profiling tools, are now widely available to perform these tasks.

2.6.1.2 Software testing

Exercising a code by actually performing the task for which it was designed, i.e., testing, is an indispensable component of software verification. Verification testing requires a detailed specification of the expected behavior of the software for all of its potential inputs. Tests are then designed to determine whether this expected behavior is achieved.

Designing test sets can be quite difficult. The tests must span all of the functionality of the code. To the extent possible, they should also exercise all paths through the code. Special attention should be paid to provide inputs that are boundary cases or that trigger error conditions. Exhaustive testing is rarely practical, however. Statistical design of experiments [45, 317] provides a collection of guiding principles and techniques that comprise a framework for maximizing the amount of useful information resident in a resulting data set, while attending to the practical constraints of minimizing the number of experimental runs. Such techniques have begun to be applied to software testing. Of particular relevance to code verification are orthogonal fractional factorial designs, as well as the covering designs; see [109].

Because of the challenges in developing good test sets, a variety of techniques has been developed to evaluate the test sets themselves. For example, dynamic analysis tools can be used to assess the extent of code coverage provided by tests. Mutation testing is another valuable technique. Here, faults are randomly inserted into the code under test. Then the test suite is run, and the ability of the suite to detect the errors is thereby assessed.
Well-designed software is typically composed of a large number of components (e.g., procedures) whose inputs and outputs are limited and easier to characterize than the complete software system. Similarly, one can simplify the software testing process by first testing the behavior of each of the components in turn. This is called component testing or unit testing. Of course, the interfaces between components are some of the most error-prone parts of software systems. Hence, component testing must be followed by extensive integration testing, which verifies that the combined functionality of the components is correct.

Once a software system attains a certain level of stability, changes to the code are inevitably made in order to add new functionality or to fix bugs. Such changes run the risk of introducing new bugs into code that was previously working correctly. Thus, it is important to maintain a battery of tests which extensively exercise all aspects of the system. When changes are applied to the code, these tests are rerun to provide confidence that all other aspects of the code have not been adversely affected. This is termed regression testing. In large active software projects it is common to run regression tests on the current version of the code each night.

Elements of the computing environment itself, such as the operating system, compiler, number of processors, and floating-point hardware, also have an effect on the behavior of software. In some cases faulty system software can lead to erroneous behavior. In other cases errors in the software itself may not be exposed until the environment changes. Thus, it is important to perform exhaustive tests on software in each environment in which it will execute. Regression tests are useful for such testing.
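A minimal Python sketch of such a unit-test battery for one numerical component follows; the trapezoidal-rule routine is our own illustration, and, in the spirit of section 2.5, each expected result is checked with an explicit tolerance rather than exact floating-point equality:

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule for the integral of f over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n))
    return h * s

def run_regression_tests():
    """A small battery of unit tests for one component, rerun after
    every change to guard against regressions."""
    # The rule is exact for straight lines, so a tight tolerance is safe.
    assert abs(trapezoid(lambda x: 2 * x, 0.0, 1.0, 4) - 1.0) < 1e-12
    # Integral of sin over [0, pi] is 2; the error is O(h^2), so a
    # looser tolerance matching the method's accuracy is used.
    assert abs(trapezoid(math.sin, 0.0, math.pi, 1000) - 2.0) < 1e-5
    # Boundary case: an empty interval integrates to zero.
    assert trapezoid(math.sin, 1.0, 1.0, 10) == 0.0
    return "all tests passed"

print(run_regression_tests())
```

Note how the tolerances encode the expected behavior: a test that demands more accuracy than the method can deliver will fail spuriously, while one that demands too little may let a real regression slip through.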
  • 55. Another Random Document on Scribd Without Any Related Topics
  • 59. The Project Gutenberg eBook of The Cultural History of Marlborough, Virginia
  • 60. This ebook is for the use of anyone anywhere in the United States and most other parts of the world at no cost and with almost no restrictions whatsoever. You may copy it, give it away or re-use it under the terms of the Project Gutenberg License included with this ebook or online at www.gutenberg.org. If you are not located in the United States, you will have to check the laws of the country where you are located before using this eBook. Title: The Cultural History of Marlborough, Virginia Author: C. Malcolm Watkins Release date: July 16, 2012 [eBook #40255] Most recently updated: October 23, 2024 Language: English Credits: Produced by Pat McCoy, Chris Curnow, Joseph Cooper and the Online Distributed Proofreading Team at http://www.pgdp.net *** START OF THE PROJECT GUTENBERG EBOOK THE CULTURAL HISTORY OF MARLBOROUGH, VIRGINIA ***
  • 61. TRANSCRIBER NOTES: The List of Illustrations on page vi has been added to this project as an aid to the reader. It does not appear in the original book. Additional Transcriber Notes can be found at the end of this project SMITHSONIAN INSTITUTION UNITED STATES NATIONAL MUSEUM BULLETIN 253 WASHINGTON, D.C. 1968
  • 62. The Cultural History of Marlborough, Virginia An Archeological and Historical Investigation of the Port Town for Stafford County and the Plantation of John Mercer, Including Data Supplied by Frank M. Setzler and Oscar H. Darter C. MALCOLM WATKINS Curator of Cultural History Museum of History and Technology SMITHSONIAN INSTITUTION PRESS SMITHSONIAN INSTITUTION · WASHINGTON, D.C. · 1968 Publications of the United States National Museum The scholarly and scientific publications of the United States National Museum include two series, Proceedings of the United States
  • 63. National Museum and United States National Museum Bulletin. In these series, the Museum publishes original articles and monographs dealing with the collections and work of its constituent museums—The Museum of Natural History and the Museum of History and Technology—setting forth newly acquired facts in the fields of anthropology, biology, history, geology, and technology. Copies of each publication are distributed to libraries, to cultural and scientific organizations, and to specialists and others interested in the different subjects. The Proceedings, begun in 1878, are intended for the publication, in separate form, of shorter papers from the Museum of Natural History. These are gathered in volumes, octavo in size, with the publication date of each paper recorded in the table of contents of the volume. In the Bulletin series, the first of which was issued in 1875, appear longer, separate publications consisting of monographs (occasionally in several parts) and volumes in which are collected works on related subjects. Bulletins are either octavo or quarto in size, depending on the needs of the presentation. Since 1902 papers relating to the botanical collections of the Museum of Natural History have been published in the Bulletin series under the heading Contributions from the United States National Herbarium, and since 1959, in Bulletins titled "Contributions from the Museum of History and Technology," have been gathered shorter papers relating to the collections and research of that Museum. This work forms volume 253 of the Bulletin series. Frank A. Taylor Director, United States National Museum For sale by the Superintendent of Documents, U.S. Government Printing Office
  • 65. Contents Page Preface vii History 3 I.Official port towns in Virginia and origins of Marlborough 5 II.John Mercer’s occupation of Marlborough, 1726-1730 15 III.Mercer’s consolidation of Marlborough, 1730-1740 21 IV.Marlborough at its ascendancy, 1741-1750 27 V. Mercer and Marlborough, from zenith to decline, 1751- 1768 49 VI.Dissolution of Marlborough 61 Archeology and Architecture 65 VII.The site, its problem, and preliminary tests 67 VIII.Archeological techniques 70 IX.Wall system 71 X.Mansion foundation (Structure B) 85 XI.Kitchen foundation (Structure E) 101 XII.Supposed smokehouse foundation (Structure F) 107 XIII.Pits and other structures 111 XIV.Stafford courthouse south of Potomac Creek 115 Artifacts 123 XV.Ceramics 125 XVI.Glass 145 XVII.Objects of personal use 155 XVIII.Metalwork 159 XIX.Conclusion 173 General Conclusions 175 XX.Summary of findings 177 Appendixes 181
  • 66. A.Inventory of George Andrews, Ordinary Keeper 183 B.Inventory of Peter Beach 184 C.Charges to account of Mosley Battaley 185 D.“Domestick Expenses,” 1725 186 E.John Mercer’s reading, 1726-1732 191 F. Credit side of John Mercer’s account with Nathaniel Chapman 193 G.Overwharton Parish account 194 H. Colonists identified by John Mercer according to occupation 195 I. Materials listed in accounts with Hunter and Dick, Fredericksburg 196 J.George Mercer’s expenses while attending college 197 K.John Mercer’s library 198 L.Botanical record and prevailing temperatures, 1767 209 M.Inventory of Marlborough, 1771 211 Index 213
  • 67. List of Illustrations Figure John Mercer's Bookplate 1 Survey plates of Marlborough 2 Portrait of John Mercer 3 The Neighborhood of John Mercer 4 King William Courthouse 5 Mother-of-pearl counters 6 John Mercer's Tobacco-cask symbols 7 Wine-bottle seal 8 French horn 9 Hornbook 10 Fireplace mantels 11 Doorways 12 Table-desk 13 Archeological survey plan 14 Portrait of Ann Roy Mercer 15 Advertisement of the services of Mercer's stallion Ranter 16 Page from Maria Sibylla Merian's Metamorphosis Insectorum Surinamensium efte Veranderung Surinaamsche Insecten 17 Aerial Photograph of Marlborough 18 Highway 621 19 Excavation plan of Marlborough 20 Excavation plan of wall system 21 Looking north 22 Outcropping of stone wall 23
  • 68. Junction of stone Wall A 24 Looking north in line with Walls A and A-II 25 Wall A-II 26 Junction of Wall A-I 27 Wall E 28 Detail of Gateway in Wall E 29 Wall B-II 30 Wall D 31 Excavation plan of Structure B 32 Site of Structure B 33 Southwest corner of Structure B 34 Southwest corner of Structure B 35 South wall of Structure B 36 Cellar of Structure B 37 Section of red-sandstone arch 38 Helically contoured red-sandstone 39 Cast-concrete block 40 Dressed red-sandstone block 41 Fossil-embedded black sedimentary stone 42 Foundation of porch at north end of Structure B 43 Plan of mansion house 44 The Villa of “the magnificent Lord Leonardo Emo” 45 Excavation plan of Structure E 46 Foundation of Structure E 47 Paved floor of Room X, Structure E 48 North wall of Structure E 49 Wrought-iron slab 50 Excavation plan of structures north of Wall D 51
  • 69. Structure F 52 Virginia brick from Structure B 53 Structure D 54 Refuse found at exterior corner of Wall A-II and Wall D 55 Excavation plan of Structure H 56 Structure H 57 1743 drawing showing location of Stafford courthouse 58 Enlarged detail from figure 58 59 Excavation plan of Stafford courthouse foundation 60 Hanover courthouse 61 Plan of King William courthouse 62 Tidewater-type pottery 63 Miscellaneous common earthenware types 64 Buckley-type high-fired ware 65 Westerwald stoneware 66 Fine English stoneware 67 English Delftware 68 Delft plate 69 Delft plate 70 Whieldon-type tortoiseshell ware 71 Queensware 72 Fragment of Queensware 73 English white earthenwares 74 Polychrome Chinese porcelain 75 Blue-and-white Chinese porcelain 76 Blue-and-white Chinese porcelain 77 Wine bottle 78 Bottle seals 79 Octagonal spirits bottle 80
  • 70. Snuff bottle 81 Glassware 82 Small metalwork 83 Personal miscellany 84 Cutlery 85 Metalwork 86 Ironware 87 Iron door and chest hardware 88 Tools 89 Scythe 90 Farm gear 91 Illustration Front and back cast-concrete block 1 and 2 Iron tie bar 3 Cross section of plaster cornice molding from Structure B 4 Reconstructed wine bottle 5 Fragment of molded white salt-glazed platter 6 Iron bolt 7 Stone scraping tool 8 Indian celt 9 Milk pan 10 Milk pan 11 Ale mug 12 Cover of jar 13 Base of bowl 14 Handle of pot lid or oven door 15 Buff-earthenware cup 16 High-fired earthenware pan rim 17 High-fired earthenware jar rim 18
  • 71. Rim and base profiles of high-fired earthenware jars 19 Base sherd from unglazed red- earthenware water cooler 20 Rim of an earthenware flowerpot 21 Base of gray-brown, salt-glazed- stoneware ale mug 22 Stoneware jug fragment 23 Gray-salt-glazed-stoneware jar profile 24 Drab-stoneware mug fragment 25 Wheel-turned cover of white, salt- glazed teapot 26 Body sherds of molded, white salt- glaze-ware pitcher 27 English delftware washbowl sherd 28 English delftware plate 29 English delftware plate 30 Delftware ointment pot 31 Sherds of black basaltes ware 32 Blue-and-white Chinese porcelain saucer 33 Blue-and-white Chinese porcelain plate 34 Beverage bottle 35 Beverage-bottle seal 36 Complete beverage bottle 37 Cylindrical beverage bottle 38 Cylindrical beverage bottle 39 Octagonal, pint-size beverage bottle 40 Square gin bottle 41 Square snuff bottle 42 Wineglass, reconstructed 43 Cordial glass 44
  • 72. Sherds of engraved-glass wine and cordial glasses 45 Clear-glass tumbler 46 Octagonal cut-glass trencher salt 47 Brass buckle 48 Brass knee buckle 49 Brass thimble 50 Chalk bullet mold 51 Fragments of tobacco-pipe bowl 52 White-kaolin tobacco pipe 53 Slate pencil 54 Fragment of long-tined fork 55 Fragment of long-tined fork 56 Fork with two-part handle 57 Trifid-handle pewter spoon 58 Wavy-end pewter spoon 59 Pewter teapot lid 60 Steel scissors 61 Iron candle snuffers 62 Iron butt hinge 63 End of strap hinge 64 Catch for door latch 65 Wrought-iron hasp 66 Brass drop handle 67 Wrought-iron catch or striker 68 Iron slide bolt 69 Series of wrought-iron nails 70 Series of wrought-iron flooring nails and brads 71 Fragment of clouting nail 72 Hand-forged spike 73 Blacksmith's hammer 74 Iron wrench 75
  • 73. Iron scraping tool 76 Bit or gouge chisel 77 Jeweler's hammer 78 Wrought-iron colter from plow 79 Hook used with wagon 80 Bolt with wingnut 81 Lashing hook from cart 82 Hilling hoe 83 Iron reinforcement strip from back of shovel handle 84 Half of sheep shears 85 Animal trap 86 Iron bridle bit 87 Fishhook 88 Brass strap handle 89 Preface A number of people participated in the preparation of this study. The inspiration for the archeological and historical investigations came from Professor Oscar H. Darter, who until 1960 was chairman of the Department of Historical and Social Sciences at Mary Washington College, the women’s branch of the University of Virginia. The actual excavations were made under the direction of Frank M. Setzler, formerly the head curator of anthropology at the Smithsonian Institution. None of the investigation would have been possible had not the owners of the property permitted the excavations to be made, sometimes at considerable inconvenience to themselves. I am indebted to W. Biscoe, Ralph Whitticar, Jr., and Thomas Ashby, all of whom owned the excavated areas at Marlborough; and T. Ben
  • 74. Williams, whose cornfield includes the site of the 18th-century Stafford County courthouse, south of Potomac Creek. For many years Dr. Darter has been a resident of Fredericksburg and, in the summers, of Marlborough Point on the Potomac River. During these years, he has devoted himself to the history of the Stafford County area which lies between these two locations in northeastern Virginia. Marlborough Point has interested Dr. Darter especially since it is the site of one of the Virginia colonial port towns designated by Act of Assembly in 1691. During the town’s brief existence, it was the location of the Stafford County courthouse and the place where the colonial planter and lawyer John Mercer established his home in 1726. Tangible evidence of colonial activities at Marlborough Point—in the form of brickbats and potsherds still can be seen after each plowing, while John Mercer’s “Land Book,” examined anew by Dr. Darter, has revealed the original survey plats of the port town. In this same period and as early as 1938, Dr. T. Dale Stewart (then curator of physical anthropology at the Smithsonian Institution) had commenced excavations at the Indian village site of Patawomecke, a few hundred yards west of the Marlborough Town site. The aboriginal backgrounds of the area including Marlborough Point already had been investigated. As the result of his historical research connected with this project, Dr. Stewart has contributed fundamentally to the present undertaking by foreseeing the excavations of Marlborough Town as a logical step beyond his own investigation. Motivated by this combination of interests, circumstances, and historical clues, Dr. Darter invited the Smithsonian Institution to participate in an archeological investigation of Marlborough. Preliminary tests made in August 1954 were sufficiently rewarding to justify such a project. Consequently, an application for funds was prepared jointly and was submitted by Dr. 
Darter through the University of Virginia to the American Philosophical Society. In January 1956 grant number 159, Johnson Fund (1955), for $1500
was assigned to the program. In addition, the Smithsonian Institution contributed the professional services necessary for field research and directed the purchase of microfilms and photostats, the drawing of maps and illustrations, and the preparation and publication of this report. Dr. Darter hospitably provided the use of his Marlborough Point cottage during the period of excavation, and Mary Washington College administered the grant.

Frank Setzler directed the excavations during a six-week period in April and May 1956, while interpretation of the cultural material and the searches of historical data related to it were carried out by C. Malcolm Watkins. At the commencement of archeological work it was expected that traces of the 17th- and early 18th-century town would be found, including, perhaps, the foundations of the courthouse. This expectation was not realized, although what was found from the Mercer period proved to be of greater importance. After completion, a report was made in the 1956 Year Book of the American Philosophical Society (pp. 304-308).

After the 1956 excavations, the question remained whether the principal foundation (Structure B) might not have been that of the courthouse. Therefore, in August 1957 a week-long effort was made to find comparative evidence by digging the site of the succeeding 18th-century Stafford County courthouse at the head of Potomac Creek. This disclosed a foundation sufficiently different from Structure B to rule out any analogy between the two.

It should be made clear that, because of the limited size of the grant, the archeological phase of the investigation was necessarily a limited survey. Only the more obvious features could be examined within the means at the project’s disposal. No final conclusions relative to Structure B, for example, are warranted until the section of foundation beneath the highway which crosses it can be excavated.
Further excavations need to be made south and southeast of Structure B and elsewhere in search of outbuildings and evidence of 17th-century occupancy.
Despite such limitations, this study is a detailed examination of a segment of colonial Virginia’s plantation culture. It has been prepared with the hope that it will provide Dr. Darter with essential material for his area studies and, also, with the wider objective of increasing the knowledge of the material culture of colonial America. Appropriate to the function of a museum such as the Smithsonian, this study is concerned principally with what is concrete: objects and artifacts and the meanings that are to be derived from them. It has relied upon the mutually dependent techniques of archeologist and cultural historian and will serve, it is hoped, as a guide to further investigations of this sort by historical museums and organizations.

Among the many individuals contributing to this study, I am especially indebted to Dr. Darter; to the members of the American Philosophical Society, who made the excavations possible; to Dr. Stewart, who reviewed the archeological sections at each step as they were written; to Mrs. Sigrid Hull, who drew the line-and-stipple illustrations which embellish the report; to Edward G. Schumacher of the Bureau of American Ethnology, who made the archeological maps and drawings; to Jack Scott of the Smithsonian photographic laboratory, who photographed the artifacts; and to George Harrison Sanford King of Fredericksburg, from whom the necessary documentation for the 18th-century courthouse site was obtained. I am grateful also to Dr. Anthony N. B. Garvan, professor of American civilization at the University of Pennsylvania and former head curator of the Smithsonian Institution’s department of civil history, for invaluable encouragement and advice; and to Worth Bailey, formerly with the Historic American Buildings Survey, for many ideas, suggestions, and important identifications of craftsmen listed in Mercer’s ledgers.
I am equally indebted to Ivor Noël Hume, director of archeology at Colonial Williamsburg and an honorary research associate of the Smithsonian Institution, for his assistance in the identification of artifacts; to Mrs. Mabel Niemeyer, librarian of the Bucks County Historical Society, for her cooperation in making the Mercer ledgers
available for this report; to Donald E. Roy, librarian of the Darlington Library, University of Pittsburgh, for providing the invaluable clue that directed me to the ledgers; to the staffs of the Virginia State Library and the Alexandria Library for repeated courtesies and cooperation; and to Miss Rodris Roth, associate curator of cultural history at the Smithsonian, for detecting Thomas Oliver’s inventory of Marlborough in a least-suspected source.

I greatly appreciate receiving generous permissions from the University of Pittsburgh Press to quote extensively from the George Mercer Papers Relating to the Ohio Company of Virginia, and from Russell & Russell to copy Thomas Oliver’s inventory of Marlborough. To all of these people, and to the countless others who contributed in one way or another to the completion of this study, I offer my grateful thanks.

C. Malcolm Watkins
Washington, D.C.
1967
The Cultural History of Marlborough, Virginia

Figure 1.—John Mercer’s bookplate.