Data Analysis Methods in Physical Oceanography 2nd Edition W. J. Emery
Data Analysis Methods in Physical Oceanography, 2nd Edition
Author(s): W. J. Emery, Richard E. Thomson
ISBN: 0444507574
Edition: 2
Year: 2001
Language: English
File details: PDF, 36.04 MB
Preface
Numerous books have been written on data analysis methods in the physical sciences
over the past several decades. Most of these books lean heavily toward the theoretical
aspects of data processing and few have been updated to include more modern
techniques such as fractal analysis and rotary spectral decomposition. In writing this
book we saw a clear need for a practical reference volume for earth and ocean sciences
that brings established and modern techniques together under a single cover. The text
is intended for students and established scientists alike. For the most part, graduate
programs in oceanography have some form of methods course in which students learn
about the measurement, calibration, processing and interpretation of geophysical
data. The classes are intended to give the students needed experience in both the
logistics of data collection and the practical problems of data processing and analysis.
Because the class material generally is based on the experience of the faculty members
giving the course, each class emphasizes different aspects of data collection and
analysis. Formalism and presentation can differ widely. While it is valuable to learn
from the first-hand experiences of the class instructor, it seemed to us important to
have available a central reference text that could be used to provide some uniformity
in the material being covered within the oceanographic community.
Many of the data analysis techniques most useful to oceanographers can be found in
books and journals covering a wide variety of topics ranging from elementary statistics
to wavelet transforms. Much of the technical information on these techniques is
detailed in texts on numerical methods, time series analysis, and statistical tech-
niques. In this book, we attempt to bring together many of the key data processing
methods found in the literature, as well as add new information on data analysis
techniques not readily available in older texts. We also provide, in Chapter 1, a
description of most of the instruments used today in physical oceanography. Our hope
is that the book will provide instructional material for students in the oceanographic
sciences and serve as a general reference volume for those directly involved with
oceanographic research.
The broad scope and rapidly evolving nature of oceanographic sciences has meant
that it has not been possible for us to cover all existing or emerging data analysis
methods. However, we trust that many of the methods and procedures outlined in the
book will provide a basic understanding of the kinds of options available to the user
for interpretation of data sets. Our intention is to describe general statistical and
analytical methods that will be sufficiently fundamental to maintain a high level of
utility over the years.
Finally, we believe that the analysis procedures discussed in this book apply to a
wide readership in the geophysical sciences. As with oceanographers, this wider
community of scientists would likely benefit from a central source of information that
encompasses not only a description of the mathematical methods but also considers
some of the practical aspects of data analyses. It is this synthesis between theoretical
insight and the logistical limitations of real data measurement that is a primary goal
of this text.
William J. Emery and Richard E. Thomson
Boulder, Colorado and Sidney, BC
Acknowledgments
Many people have contributed to this book over the years that it has taken to write.
The support and encouragement we received from our colleagues while we were
drafting the manuscript is most gratefully acknowledged. Bill Emery began work on
the book during a sabbatical year at the Institut für Meereskunde (IFM) in Kiel and is
grateful for the hospitality of Drs Gerold Siedler, Wolfgang Kraus, Walter Zenk, Rolf
Käse, Juergen Willebrand, Juergen Kielman, and others. Additional progress was
made at the University of British Columbia where Mrs Hiltrud Heckel aided in typing
the handwritten text. Colleagues at the University of Colorado who helped with the
book include Dr Robert Leben, who reviewed the text, contributed advice, text and
figures, and Mr Dan Leatzow, who converted all of the initial text and figures to the
Macintosh. The author would like to thank Mr Tom Kelecy for serving as the teaching
assistant the first time that the material was presented in a course on "Engineering
Data Analysis". Bill Emery also thanks the students who helped improve the book
during the course on data analysis and the National Space and Aeronautics
Administration (NASA) whose grant funding provided the laptop computer used in
preparation of the manuscript. The author also thanks his family for tolerating the
ever growing number of files labelled "dampo" on the family computer.
Rick Thomson independently began work on the book through the frustration of too
much time spent looking for information on data analysis methods in the literature.
There was clearly a need for a reference-type book that covers the wide range of
analysis techniques commonly used by oceanographers and other geoscientists. Many
of the ideas for the book originated with the author's studies as a research scientist
within Fisheries and Oceans Canada, but work on the book was done strictly at home
during evenings and weekends. Numerous conversations with Drs Dudley Chelton
and Alexander Rabinovich helped maintain the author's enthusiasm for the project.
The author wishes to thank his wife and two daughters (Justine and Karen) for
enduring the constant tapping of the keyboard and hours of dark despair when it
looked as if the book would never come to an end, and his parents (John and Irene) for
encouraging an interest in science.
The authors would like to thank the colleagues and friends who took time from their
research to review sections of the text or provide figures. There were others, far too
numerous to mention, whose comments and words of advice added to the usefulness of
the text. We are most grateful to Dudley Chelton of Oregon State University and
Alexander Rabinovich of Moscow State University who spent considerable time
criticizing the more mathematical chapters of the book. Dudley proved to be a most
impressive reviewer and Sasha contributed several figures that significantly improved
the section on time series analysis. George Pickard, Professor Emeritus of the
University of British Columbia (UBC), and Susumu Tabata, Research Scientist
Emeritus of the Institute of Ocean Sciences (IOS), provided thorough, and much
appreciated, reviews of Chapters 1 and 2. We thank Andrew Bennett of Oregon State
University for his comments on inverse methods in Chapter 4, Brenda Burd of Ecostat
Research for reviewing the bootstrap method in Chapter 3, and Steve Mihaly for
reviewing Appendix A. Contributions to the text were provided by Libe Washburn
(University of California, Santa Barbara), Peter Schlussel (IFM), Patrick Cummins
(IOS), and Mike Woodward (IOS). Figures or data were generously provided by Mark
E. Geneau (Inter-Ocean Systems, Inc.), Gail Gabel (G.S. Gabel Associates), R. Lee
Gordon (RD Instruments, Inc.), Diane Masson, Humfrey Melling, George Chase, John
Love, and Tom Juhász (IOS), Jason Middleton and Greg Nippard (University of New
South Wales), Doug Bennett (Sea-Bird Electronics, Inc.), Dan Schaas and Marcia
Gracia (General Oceanics), Mayra Pazos (National Oceanic and Atmospheric Admini-
stration), Chris Garrett (University of Victoria), David Halpern (Jet Propulsion
Laboratory), Phillip Richardson (Woods Hole Oceanographic Institution), Daniel
Jamous (Massachusetts Institute of Technology), Paul Dragos (Battelle Ocean
Sciences), Thomas Rossby (University of Rhode Island), Lynne Talley (Scripps),
Adrian Dolling and Jane Eert (Channel Consulting), and Gary Hamilton (Intelex
Research).
The authors would also like to thank Anna Allen for continued interest in this
project from the time she took over until now. There were a great many delays and
postponements, but through it all she remained firm in her support of the project.
This continued support made it easier to work through these delays and gave us
courage to believe that the project would one day be completed.
Lastly, we would like to thank our colleagues who found and reported errors and
omissions in the first printing of the book. Although the inevitable typos and mistakes
are discouraging for the authors (and frustrating for the reader), it is better that we
know about them so that they can be corrected in future printings and revisions. Our
thanks to Brian Blanton (University of North Carolina), Mike Foreman (Institute of
Ocean Sciences), Denis Gilbert (Institut Maurice-Lamontagne), Jack Harlan (NOAA,
Boulder, Colorado), Clive Holden (Oceanographic Field Services, Pymble, New South
Wales), Frank Janssen (University of Hamburg), Masahisa Kubota (Tokai University),
Robert Leben (University of Colorado), Rolf Lueck (University of Victoria), Andrew
Slater (University of Colorado), and Roy Hourston (WeatherWorks Consulting,
Victoria).
CHAPTER 1
Data Acquisition and Recording
1.1 INTRODUCTION
Physical oceanography is an evolving science in which the instruments, types of
observations and methods of analysis have undergone considerable change over the
last few decades. With most advances in oceanographic theory, instrumentation, and
software, there have been significant advances in marine science. The advent of digital
computers has revolutionized data collection procedures and the way that data are
reduced and analyzed. No longer is the individual scientist personally familiar with
each data point and its contribution to his or her study. Instrumentation and data
collection are moving out of direct application by the scientist and into the hands of
skilled technicians who are becoming increasingly more specialized in the operation
and maintenance of equipment. New electronic instruments operate at data rates not
possible with earlier mechanical devices and produce volumes of information that can
only be handled by high-speed computers. Most modern data collection systems
transmit sensor data directly to computer-based data acquisition systems where they
are stored in digital format on some type of electronic medium such as a tape, hard-
drive, or optical disk. High-speed analog-to-digital (AD) converters and digital-signal-
processors (DSPs) are now used to convert voltage or current signals from sensors to
digital values.
With the many technological advances taking place, it is important for oceano-
graphers to be aware of both the capabilities and limitations of their sampling
equipment. This requires a basic understanding of the sensors, the recording systems
and the data-processing tools. If these are known and the experiment carefully
planned, many problems commonly encountered during the processing stage can be
avoided. We cannot overemphasize the need for thoughtful experimental planning
and proper calibration of all oceanographic sensors. If instruments are not in near-
optimal locations or the researcher is unsure of the values coming out of the machines,
then it will be difficult to believe the results gathered in the field. To be truly reliable,
instruments should be calibrated on a regular basis at intervals determined by use and
the susceptibility of the sensor to drift. More specifically, the output from some
instruments such as the piezoelectric pressure sensors and fixed pathlength trans-
missometers drift with time and need to be calibrated before and after each field
deployment. For example, the zero point for the Paroscientific Digiquartz (0-
10,000 psi) pressure sensors used in the Hawaii Ocean Time-series (HOT) at station
"Aloha" 100 km north of Honolulu drifts about 4 dbar in three years. As a
consequence, the sensors are calibrated about every six months against a Paroscientific
laboratory standard, which is recalibrated periodically at special calibration facilities
in the United States (Lukas, 1994). Our experience also shows that over-the-side field
calibrations during oceanic surveys can be highly valuable. As we discuss in the
following chapters, there are a number of fundamental requirements to be considered
when planning the collection of field records, including such basic considerations as
the sampling interval, sampling duration and sampling location.
It is the purpose of this chapter to review many of the standard instruments and
measurement techniques used in physical oceanography in order to provide the reader
with a common understanding of both the utility and limitations of the resulting
measurements. The discussion is not intended to serve as a detailed "user's manual"
nor as an "observer's handbook". Rather, our purpose is to describe the fundamentals
of the instruments in order to give some insight into the data they collect. An under-
standing of the basic observational concepts, and their limitations, is a prerequisite for
the development of methods, techniques and procedures used to analyze and interpret
the data that are collected.
Rather than treat each measurement tool individually, we have attempted to group
them into generic classes and to limit our discussion to common features of the
particular instruments and associated techniques. Specific references to particular
company products and the quotation of manufacturer's engineering specifications
have been avoided whenever possible. Instead, we refer to published material
addressing the measurement systems or the data recorded by them. Those studies
which compare measurements made by similar instruments are particularly valuable.
The emphasis of the instrument review section is to give the reader a background in
the collection of data in physical oceanography. For those readers interested in more
complete information regarding a specific instrument or measurement technique, we
refer to the references at the end of the book where we list the sources of the material
quoted. We realize that, in terms of specific measurement systems, and their review,
this text will be quickly dated as new and better systems evolve. Still, we hope that the
general outline we present for accuracy, precision and data coverage will serve as a
useful guide to the employment of newer instruments and methods.
1.2 BASIC SAMPLING REQUIREMENTS
A primary concern in most observational work is the accuracy of the measurement
device, a common performance statistic for the instrument. Absolute accuracy
requires frequent instrument calibration to detect and correct for any shifts in
behavior. The inconvenience of frequent calibration often causes the scientist to
substitute instrument precision as the measurement capability of an instrument.
Unlike absolute accuracy, precision is a relative term and simply represents the ability
of the instrument to repeat the observation without deviation. Absolute accuracy
further requires that the observation be consistent in magnitude with some absolute
reference standard. In most cases, the user must be satisfied with having good
precision and repeatability of the measurement rather than having absolute
measurement accuracy. Any instrument that fails to maintain its precision fails to
provide data that can be handled in any meaningful statistical fashion. The best
instruments are those that provide both high precision and defensible absolute
accuracy.
Digital instrument resolution is measured in bits, where a resolution of N bits
means that the full range of the sensor is partitioned into 2^N equal segments (N = 1, 2,
...). For example, eight-bit resolution means that the specified full-scale range of the
sensor, say V = 10 volts, is divided into 2^8 = 256 increments, with a bit resolution of
V/256 = 0.039 volts. Whether the instrument can actually measure to a resolution or
accuracy of V/2^N units is another matter. The sensor range can always be divided into
an increasing number of smaller increments but eventually one reaches a point where
the value of each bit is buried in the noise level of the sensor.
1.2.1 Sampling interval
Assuming the instrument selected can produce reliable and useful data, the next
highest priority sampling requirement is that the measurements be collected often
enough in space and time to resolve the phenomena of interest. For example, in the
days when oceanographers were only interested in the mean stratification of the world
ocean, water property profiles from discrete-level hydrographic (bottle) casts were
adequate to resolve the general vertical density structure. On the other hand, these
same discrete-level profiles failed to resolve the detailed structure associated with
interleaving and mixing processes that now are resolved by the rapid vertical sampling
of modern conductivity-temperature-depth (CTD) profilers. The need for higher
resolution assumes that the oceanographer has some prior knowledge of the process of
interest. Often this prior knowledge has been collected with instruments incapable of
resolving the true variability and may only be suggested by highly aliased (distorted)
data collected using earlier techniques. In addition, theoretical studies may provide
information on the scales that must be resolved by the measurement system.
For discrete digital data x(t_i) measured at times t_i, the choice of the sampling
increment Δt (or Δx in the case of spatial measurements) is the quantity of
importance. In essence, we want to sample often enough that we can pick out the highest
frequency component of interest in the time-series but not oversample so that we fill
up the data storage file, use up all the battery power, or become swamped with a lot of
unnecessary data. We might also want to sample at irregular intervals to avoid built-in
bias in our sampling scheme. If the sampling interval is too large to resolve higher
frequency components, it becomes necessary to suppress these components during
sampling using a sensor whose response is limited to frequencies equal to that of the
sampling frequency. As we discuss in our section on processing satellite-tracked
drifter data, these lessons are often learned too late--after the buoys have been cast
adrift in the sea.
The important aspect to keep in mind is that, for a given sampling interval Δt, the
highest frequency we can hope to resolve is the Nyquist (or folding) frequency, f_N,
defined as

f_N = 1/(2Δt)    (1.2.1)

We cannot resolve any higher frequencies than this. For example, if we sample every
10 h, the highest frequency we can hope to see in the data is f_N = 0.05 cph (cycles per
hour). Equation (1.2.1) states the obvious--that it takes at least two sampling intervals
(or three data points) to resolve a sinusoidal-type oscillation with period 1/f_N (Figure
1.2.1). In practice, we need to contend with noise and sampling errors so that it takes
something like three or more sampling increments (i.e. ≥ four data points) to
accurately determine the highest observable frequency. Thus, f_N is an upper limit. The
highest frequency we can resolve for a sampling of Δt = 10 h in Figure 1.2.1 is closer
to 1/(3Δt) ≈ 0.033 cph.
An important consequence of (1.2.1) is the problem of aliasing. In particular, if there
is considerable energy at frequencies f > f_N--which we obviously cannot resolve
because of the Δt we picked--this energy gets folded back into the range of frequen-
cies, f < f_N, which we are attempting to resolve. This unresolved energy doesn't
disappear but gets redistributed within the frequency range of interest. What is worse
is that the folded-back energy is disguised (or aliased) within frequency components
different from those of its origin. We cannot distinguish this folded-back energy from
that which actually belongs to the lower frequencies. Thus, we end up with erroneous
(aliased) estimates of the spectral energy variance over the resolvable range of fre-
quencies. An example of highly aliased data would be 13-h sampling of currents in a
region having strong semidiurnal tidal currents. More will be said on this topic in
Chapter 5.
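The 13-h example can be worked out numerically. The short sketch below (our own; the function name and folding logic are not from the text) folds a semidiurnal frequency into the resolvable range [0, f_N] for Δt = 13 h:

```python
# Frequency folding (aliasing) sketch: a pure M2 tidal signal
# (period 12.42 h) sampled every 13 h appears at a much lower frequency.
def alias_frequency(f, dt):
    """Fold a frequency f (cycles/hour) into [0, f_N] for sampling interval dt (hours)."""
    fs = 1.0 / dt          # sampling frequency
    f_nyq = fs / 2.0       # Nyquist frequency, equation (1.2.1)
    f_folded = f % fs      # remove whole multiples of the sampling frequency
    if f_folded > f_nyq:   # reflect about the Nyquist frequency
        f_folded = fs - f_folded
    return f_folded

f_m2 = 1.0 / 12.42                      # ~0.0805 cph
f_apparent = alias_frequency(f_m2, 13.0)
print(1.0 / f_apparent)                 # apparent period ~278 h (about 11.6 days)
```

The 12.42-h tide thus masquerades as an oscillation of roughly 11.6 days, indistinguishable in the sampled record from genuine low-frequency variability.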
As a general rule, one should plan a measurement program based on the frequencies
and wavenumbers (estimated from the corresponding periods and wavelengths) of the
parameters of interest over the study domain. This requirement then dictates the
selection of the measurement tool or technique. If the instrument cannot sample
rapidly enough to resolve the frequencies of concern it should not be used. It should be
emphasized that the Nyquist frequency concept applies to both time and space and
the Nyquist wavenumber is a valid means of determining the fundamental wavelength
that must be sampled.
1.2.2 Sampling duration
The next concern is that one samples long enough to establish a statistically
significant picture of the process being studied. For time-series measurements, this
amounts to a requirement that the data be collected over a period sufficiently long that
Figure 1.2.1. Plot of the function F(n) = sin(2πn/20 + φ), where time is given by the integer n = -1,
0, ..., 24. The period 2Δt = 1/f_N is 20 units and φ is a random phase with a small magnitude in the
range ±0.1. Open circles denote measured points and solid points the curve F(n). Noise makes it
necessary to use more than three data values to accurately define the oscillation period.
repeated cycles of the phenomenon are observed. This also applies to spatial sampling
where statistical considerations require a large enough sample to define multiple
cycles of the process being studied. Again, the requirement places basic limitations on
the instrument selected for use. If the equipment cannot continuously collect the data
needed for the length of time required to resolve repeated cycles of the process, it is
not well suited to the measurement required.
Consider the duration of the sampling at time step Δt. The longer we make the
record, the better we are able to resolve different frequency components in the data. In the
case of spatially separated data, Δx, resolution increases with increased spatial
coverage of the data. It is the total record length T = NΔt obtained for N data samples
that: (1) determines the lowest frequency (the fundamental frequency)

f_0 = 1/(NΔt) = 1/T    (1.2.2)

that can be extracted from the time-series record; (2) determines the frequency
resolution or minimum difference in frequency Δf = |f_2 - f_1| = 1/(NΔt) that can be
resolved between adjoining frequency components, f_1 and f_2 (Figure 1.2.2); and (3)
determines the amount of band averaging (averaging of adjacent frequency bands)
that can be applied to enhance the statistical significance of individual spectral esti-
mates. In Figure 1.2.2, the two separate waveforms of equal amplitude but different
frequency produce a single spectrum. The two frequencies are well resolved for
Δf = 2/(NΔt) and 3/(2NΔt), just resolved for Δf = 1/(NΔt), and not resolved for
Δf = 1/(2NΔt).
In theory, we should be able to resolve all frequency components, f, in the frequency
range f_0 ≤ f ≤ f_N, where f_N and f_0 are defined by (1.2.1) and (1.2.2), respectively. Herein
lies a classic sampling problem. In order to resolve the frequencies of interest in a
time-series, we need to sample for a long time (T large) so that f_0 covers the low end of
the frequency spectrum and Δf is small (frequency resolution is high). At the same
time, we would like to sample sufficiently rapidly (Δt small) so that f_N extends beyond
all frequency components with significant spectral energy. Unfortunately, the longer
and more rapidly we want to sample, the more data we need to collect and store, the
more time, effort and money we need to put into the sampling, and the better
resolution we require from our sensors.
Our ability to resolve frequency components follows from Rayleigh's criterion for
the resolution of adjacent spectral peaks in light shone onto a diffraction grating. It
states that two adjacent frequency components are just resolved when the peaks of the
spectra are separated by the frequency difference Δf = f_0 = 1/(NΔt) (Figure 1.2.2). For
example, to separate the spectral peak associated with the lunar-solar semidiurnal
tidal component M2 (frequency = 0.08051 cph) from that of the solar semidiurnal
tidal component S2 (0.08333 cph), for which Δf = 0.00282 cph, requires N = 355 data
points at a sampling interval Δt = 1 h or N = 71 data points at Δt = 5 h. Similarly, a
total of 328 data values at 1-h sampling are needed to separate the two main diurnal
constituents K1 and O1 (Δf = 0.00305 cph). Note that if f_N is the highest frequency we
can measure and f_0 is the limit of frequency resolution, then

f_N/f_0 = (1/(2Δt))/(1/(NΔt)) = N/2    (1.2.3)

is the maximum number of Fourier components we can hope to estimate in any
analysis.
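The Rayleigh-criterion arithmetic above is easily scripted; this minimal sketch (ours, using the M2/S2 values quoted in the text) returns the minimum number of samples needed at a given sampling interval:

```python
import math

# Record length needed to separate two spectral peaks by the Rayleigh
# criterion df = 1/(N*dt). Function name is ours; frequencies in cph,
# sampling interval dt in hours.
def samples_to_resolve(f1, f2, dt):
    """Minimum number of samples N such that 1/(N*dt) <= |f2 - f1|."""
    return math.ceil(1.0 / (abs(f2 - f1) * dt))

f_m2, f_s2 = 0.08051, 0.08333               # M2 and S2 frequencies (cph)
print(samples_to_resolve(f_m2, f_s2, 1.0))  # 355 hourly samples (~15 days)
print(samples_to_resolve(f_m2, f_s2, 5.0))  # 71 samples at 5-h intervals
```

Note that both choices give the same total record length T = NΔt of roughly 355 h; it is T, not N alone, that sets the frequency resolution.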
Figure 1.2.2. Spectral peaks of two separate waveforms of equal amplitude and frequencies f_1 and f_2
(dashed and thin lines) together with the calculated spectrum (solid line). (a) and (b) are well-resolved
spectra; (c) just resolved spectra; and (d) not resolved. The thick solid line is the total spectrum for the two
underlying signals with slightly different peak frequencies.
1.2.3 Sampling accuracy
According to the two previous sections, we need to sample long and often if we hope to
resolve the range of scales of interest in the variables we are measuring. It is
intuitively obvious that we also need to sample as accurately as possible--with the
degree of recording accuracy determined by the response characteristics of the
sensors, the number of bits per data record (or parameter value) needed to raise
measurement values above background noise, and the volume of data we can live with.
There is no use attempting to sample the high or low ends of the spectrum if the
instrument cannot respond rapidly or accurately enough to resolve changes in the
parameter being measured. In addition, there are several approaches to this aspect of
data sampling including the brute-force approach in which we measure as often as we
can at the degree of accuracy available and then improve the statistical reliability of
each data record through post-survey averaging, smoothing, and other manipulation.
1.2.4 Burst sampling versus continuous sampling
Regularly-spaced, digital time-series can be obtained in two different ways. The most
common approach is to use a continuous sampling mode, in which the data are sampled
at equally spaced intervals t_k = t_0 + kΔt from the start time t_0. Here, k is a positive
integer. Regardless of whether the equally spaced data have undergone internal
averaging or decimation using algorithms built into the machine, the output to the
data storage file is a series of individual samples at times t_k. (Here, "decimation" is
used in the loose sense of removing every nth data point, where n is any positive
integer, and not in the sense of the ancient Roman technique of putting to death one
in ten soldiers in a legion guilty of mutiny or other crime.) Alternatively, we can use a
burst sampling mode, in which rapid sampling is undertaken over a relatively short time
interval Δt_B or "burst" embedded within each regularly spaced time interval, Δt. That
is, the data are sampled at high frequency for a short duration starting (or ending) at
times t_k for which the burst duration Δt_B << Δt. The instrument "rests" between bursts.
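The two modes can be contrasted schematically. The sketch below is ours, with hypothetical numbers: a 2-min burst of 1-s samples at the start of every hour, versus one sample per hour.

```python
# Schematic sample-time generators for continuous and burst sampling.
# All times in seconds; parameter values are illustrative only.
def continuous_times(t0, dt, n):
    """n samples at equally spaced intervals t_k = t0 + k*dt."""
    return [t0 + k * dt for k in range(n)]

def burst_times(t0, dt, burst_len, dt_burst):
    """Bursts of rapid samples (spacing dt_burst) of duration burst_len,
    repeated every dt, with burst_len << dt. Three bursts for illustration."""
    times = []
    t = t0
    while t < t0 + 3 * dt:
        s = t
        while s < t + burst_len:   # rapid sampling within the burst
            times.append(s)
            s += dt_burst
        t += dt                    # instrument "rests" until the next burst
    return times

print(len(continuous_times(0.0, 3600.0, 3)))   # 3 samples in 3 hours
print(len(burst_times(0.0, 3600.0, 120.0, 1.0)))  # 3 bursts x 120 samples = 360
```

Averaging each 120-sample burst down to a single hourly value suppresses high-frequency noise while preserving the hourly time base.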
There are advantages to the burst sampling scheme, especially in noisy (high fre-
quency) environments where it may be necessary to average-out the noise to get at the
frequencies of interest. Burst sampling works especially well when there is a "spectral
gap" between fluctuations at the high and low ends of the spectrum. As an example,
there is typically a spectral gap between surface gravity waves in the open ocean
(periods of 1-20 s) and the 12-hourly motions that characterize semidiurnal tidal cur-
rents. Thus, if we wanted to measure surface tidal currents using the burst-mode option
for our current meter, we could set the sampling to a 2-min burst every hour; this option
would smooth out the high-frequency wave effects but provide sufficient numbers of
velocity measurements to resolve the tidal motions. Burst sampling enables us to filter
out the high-frequency noise and obtain an improved estimate of the variability hidden
underneath the high-frequency fluctuations. In addition, we can examine the high-
frequency variability by scrutinizing the burst sampled data. If we were to sample
rapidly enough, we could estimate the surface gravity wave energy spectrum. Many
oceanographic instruments use (or have provision for) a burst-sampling data collection
mode. The "duty cycle" often used to collect positional data from satellite-tracked
drifters is a cost-saving form of burst sampling in which all positional data within a 24-h
period (about 10 satellite fixes) are collected only every third day. Tracking costs paid to
Service Argos are reduced by a factor of three using the duty cycle. Problems arise when
the length of each burst is too short to resolve energetic motions with periods
comparable to the burst sample length. In the case of satellite-tracked drifters poleward
of tropical latitudes, these problems are associated with highly energetic inertial
motions whose periods T = 1/(2Ω sin θ) are comparable to the 24-h duration of the
burst sample (here, Ω = 0.1161 × 10⁻⁴ cycles per second is the earth's rate of rotation
and θ is the latitude). Since 1992, it has been possible to improve resolution of high-
frequency motions using a 1/3 duty cycle of 8 h "on" followed by 16 h "off". According
to Bograd et al. (1999), even better resolution of high-frequency mid-latitude motions
could be obtained using a duty cycle of 16 h "on" followed by 32 h "off".
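The aliasing concern above is easy to check directly: using the text's value of Ω, the local inertial period follows from T = 1/(2Ω sin θ). A minimal sketch (the function name is ours):

```python
import math

# Earth's rotation rate in cycles per second, as quoted in the text
OMEGA_CPS = 0.1161e-4

def inertial_period_hours(latitude_deg):
    """Inertial period T = 1/(2*Omega*sin(theta)), returned in hours."""
    theta = math.radians(latitude_deg)
    period_s = 1.0 / (2.0 * OMEGA_CPS * math.sin(theta))
    return period_s / 3600.0
```

At 30° latitude this gives roughly 24 h, which is why a 24-h burst duration poorly resolves inertial motions poleward of the tropics.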
1.2.5 Regularly versus irregularly sampled data
In certain respects, an irregular sampling in time or nonequidistant placement of
instruments can be more effective than a more esthetically appealing uniform samp-
ling. For example, unequal spacing permits a more statistically reliable resolution of
oceanic spatial variability by increasing the number of quasi-independent estimates of
the dominant wavelengths (wavenumbers). Since oceanographers are almost always
faced with having fewer instruments than they require to resolve oceanic features,
irregular spacing can also be used to increase the overall spatial coverage (funda-
mental wavenumber) while maintaining the small-scale instrument separation for
Nyquist wavenumber estimates. The main concern is the lack of redundancy should
certain key instruments fail, as often seems to happen. In this case, a quasi-regular
spacing between locations is better. Prior knowledge of the scales of variability to
expect is a definite plus in any experimental array design.
In a sense, the quasi-logarithmic vertical spacing adopted by oceanographers for
bottle cast (hydrographic) sampling of 0, 10, 20, 30, 50, 75, 100, 125, 150 m, etc.
represents a "spectral window" adaptation to the known physical-chemical structure
of the ocean. Highest resolution is required near the surface where vertical changes
are most rapid. Similarly, an uneven spatial arrangement of observations increases the
number of quasi-independent estimates of the wavenumber spectrum. Digital data are
most often sampled (or subsampled) at regularly-spaced time increments. Aside from
the usual human propensity for order, the need for regularly-spaced data derives from
the fact that most analysis methods have been developed for regular-spaced data.
However, digital data do not necessarily need to be sampled at regularly-spaced time
increments to give meaningful results, although some form of interpolation between
values may eventually be required.
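As a concrete illustration of the interpolation step mentioned above, here is a minimal linear interpolator (function name and sample values are ours) that maps irregularly timed samples onto a uniform grid:

```python
def interp_linear(t_new, t_obs, y_obs):
    """Linearly interpolate (t_obs, y_obs) onto the times t_new.

    t_obs must be strictly increasing; t_new values outside the observed
    range are clamped to the end values.
    """
    out = []
    for t in t_new:
        if t <= t_obs[0]:
            out.append(y_obs[0])
        elif t >= t_obs[-1]:
            out.append(y_obs[-1])
        else:
            # find the last observation at or before time t
            i = max(j for j in range(len(t_obs)) if t_obs[j] <= t)
            frac = (t - t_obs[i]) / (t_obs[i + 1] - t_obs[i])
            out.append(y_obs[i] + frac * (y_obs[i + 1] - y_obs[i]))
    return out

# Irregular sample times (h) and temperatures (deg C), resampled hourly
t_irr = [0.0, 0.7, 1.1, 2.4, 3.0, 4.6, 5.0]
temp = [10.2, 10.4, 10.3, 10.9, 11.0, 11.4, 11.5]
temp_hourly = interp_linear([0.0, 1.0, 2.0, 3.0, 4.0, 5.0], t_irr, temp)
```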
1.2.6 Independent realizations
As we review the different instruments and methods, the reader should keep in mind
the three basic concerns of accuracy/precision, resolution (spatial and temporal), and
statistical significance (statistical sampling theory). A fundamental consideration in
ensuring the statistical significance of a set of measurements is the need for inde-
pendent realizations. If repeat measurements of a process are strongly correlated, they
provide no new information and do not contribute to the statistical significance of the
measurements. Often a subjective decision must be made on the question of statistical
independence. While this concept has a formal definition, in practice it is often
difficult to judge. A simple guide suggested here is that any suite of measurements
that is highly correlated (in time or space) cannot be independent. At the same time, a
group of measurements that is totally uncorrelated must be independent. In the case
of no correlation, the number of "degrees of freedom" is defined by the total number
of measurements; for the case of perfect correlation, the redundancy of the data values
reduces the degrees of freedom to one for a scalar quantity and to two for a vector
quantity. The degree of correlation in the data set provides a way of roughly
estimating the number of degrees of freedom within a given suite of observations.
While more precise methods will be presented later in this text, a simple linear
relation between degrees of freedom and correlation often gives the practitioner a way
to proceed without developing complex mathematical constructs.
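The linear rule of thumb described above (the full number of degrees of freedom at zero correlation, collapsing to one for a scalar at perfect correlation) can be sketched as follows; the function name and the exact linear form are our own illustration, not a formula from the text:

```python
def effective_dof(n_samples, correlation):
    """Interpolate linearly between n_samples dof at |r| = 0 and 1 dof at |r| = 1."""
    r = min(abs(correlation), 1.0)
    return 1.0 + (n_samples - 1) * (1.0 - r)
```

More defensible estimators, based for example on the integral time scale of the record, are presented later in the text.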
As will be discussed in detail later, all of these sampling recommendations have
statistical foundations and the guiding rules of probability and estimation can be
carefully applied to determine the sampling requirements and dictate the appropriate
measurement system. At the same time, these same statistical methods can be applied
to existing data in order to better evaluate their ability to measure phenomena of
interest. These comments are made to assist the reader in evaluating the potential of a
particular instrument (or method) for the measurement of some desired variable.
1.3 TEMPERATURE
The measurement of temperature in the ocean uses conventional techniques except
for deep observations where hydrostatic pressures are high and there is a need to
protect the sensing system from ambient depth/temperature changes higher in the
water column as the sensor is returned to the ship. Temperature is the ocean property
that is easiest to measure accurately. Some of the ways in which ocean temperature
can be measured are:
(a) Expansion of a liquid or a metal.
(b) Differential expansion of two metals (bimetallic strip).
(c) Vapor pressure of a liquid.
(d) Thermocouples.
(e) Change in electrical resistance.
(f) Infrared radiation from the sea surface.
In most of these sensing techniques, the temperature effect is very small and some form
of amplification is necessary to make the temperature measurement detectable. Usually,
the response is nearly linear with temperature so that only the first-order term is needed
when converting the sensor measurement to temperature. However, in order to achieve
high precision over large temperature ranges, second, third and even fourth order terms
must sometimes be used to convert the measured variable to temperature.
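Such a conversion is just a polynomial evaluation; a sketch with hypothetical calibration coefficients:

```python
def sensor_to_temperature(x, coeffs):
    """Evaluate T = c0 + c1*x + c2*x**2 + ... via Horner's method.

    coeffs = (c0, c1, ...) are hypothetical calibration constants; a nearly
    linear sensor needs only (c0, c1), while high-precision work over a wide
    temperature range may carry third- or fourth-order terms.
    """
    t = 0.0
    for c in reversed(coeffs):
        t = t * x + c
    return t
```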
1.3.1 Mercury thermometers
Of the above methods, (a), (e), and (f) have been the most widely used in physical
oceanography. The most common type of the liquid expansion sensor is the mercury-
in-glass thermometer. In their earliest oceanographic application, simple mercury
thermometers were lowered into the ocean with hopes of measuring the temperature at
great depths in the ocean. Two effects were soon noticed. First, thermometer housings
with insufficient strength succumbed to the greater pressure in the ocean and were
crushed. Second, the process of bringing an active thermometer through the oceanic
vertical temperature gradient sufficiently altered the deeper readings that it was not
possible to accurately measure the deeper temperatures. An early solution to this
problem was the development of min-max thermometers that were capable of
retaining the minimum and maximum temperatures encountered over the descent
and ascent of the thermometer. This type of thermometer was widely used on the
Challenger expedition of 1873-1876.
The real breakthrough in thermometry was the development of reversing thermo-
meters, first introduced in London by Negretti and Zambra in 1874 (Sverdrup et al.,
1942, p. 349). The reversing thermometer contains a mechanism such that, when the
thermometer is inverted, the mercury in the thermometer stem separates from the
bulb reservoir and captures the temperature at the time of inversion. Subsequent
temperature changes experienced by the thermometer have limited effects on the
amount of mercury in the thermometer stem and can be accounted for when the
temperature is read on board the observing ship. This "break-off" mechanism is based
on the fact that more energy is required to create a gas-mercury interface (i.e. to break
the mercury) than is needed to expand an interface that already exists. Thus, within
the "pigtail" section of the reversing thermometer is a narrow region called the
"break-off point", located near appendix C in Figure 1.3.1, where the mercury will
break when the thermometer is inverted.
The accuracy of the reversing thermometer depends on the precision with which
this break occurs. In good reversing thermometers this precision is better than 0.01°C.
In standard mercury-in-glass thermometers, as well as in reversing thermometers,
there are concerns other than the break point which affect the precision of the temp-
erature measurement. These are:
(a) Linearity in the expansion coefficient of the liquid.
(b) The constancy of the bulb volume.
Figure 1.3.1. Details of a reversing mercury thermometer showing the "pigtail" appendix. Enlarged sections show the pigtail (A), lead arm (B), and break-off point (C), and the readings on the main stem (D) and auxiliary stem (E); protected and unprotected versions are shown.
(c) The uniformity of the capillary bore.
(d) The exposure of the thermometer stem to temperatures other than the bulb
temperature.
Mercury expands in a near-linear manner with temperature. As a consequence, it
has been the liquid used in most high precision, liquid-glass thermometers. Other
liquids such as alcohol and toluene are used in precision thermometers only for very
low temperature applications where the higher viscosity of mercury is a limitation.
Expansion linearity is critical in the construction of the thermometer scale which
would be difficult to engrave precisely if expansion were nonlinear.
In a mercury thermometer, the volume of the bulb is equivalent to about 6000 stem-
degrees Celsius. This is known as the "degree volume" and usually is considered to
comprise the bulb plus the portion of the stem below the mark. If the thermometer is
to retain its calibration, this volume must remain constant with a precision not
commonly realized by the casual user. For a thermometer precision within ±0.01°C,
the bulb volume must remain constant within one part in 600,000. Glass does not have
ideal mechanical properties and it is known to exhibit some plastic behavior and
deform under sustained stress. Repeated exposure to high pressures may produce
permanent deformation and a consequent shift in bulb volume. Therefore, precision
can only be maintained by frequent laboratory calibration. Such shifts in bulb volume
can be detected and corrected by the determination of the "ice point" (a slurry of
water plus ice) which should be checked frequently if high accuracy is required. The
procedure is more or less obvious but a few points should be considered. First the ice
should be made from distilled water and the water-ice mixture should also be made
from distilled water. The container should be insulated and at least 70% of the bath in
contact with the thermometer should be chopped ice. The thermometer should be
immersed for five or more minutes during which time the ice-water mixture should be
stirred continuously. The control temperature of the bath can be taken by an accurate
thermometer of known reliability. Comparison with the temperature of the reversing
thermometer, after the known calibration characteristics have been accounted for, will
give an estimate of any offsets inherent in the use of the reversing thermometer in
question.
The uniformity of the capillary bore is critical to the accuracy of the mercury
thermometer. In order to maintain the linearity of the temperature scale it is necessary
to have a uniform capillary as well as a linear response liquid element. Small
variations in the capillary can occur as a result of small differences in cooling during
its construction or to inhomogeneities in the glass. Errors resulting from the variations
in capillary bore can be corrected through calibration at known temperatures. The
resulting corrections, including any effect of the change in bulb volume, are known as
"index corrections". These remain constant relative to the ice point and, once
determined, can be corrected for a shift in the ice point by addition or subtraction of a
constant amount. With proper calibration and maintenance, most of the mechanical
defects in the thermometer can be accounted for. Reversing thermometers are then
capable of accuracies of ±0.01°C, as given earlier for the precision of the mercury
break-point. This accuracy, of course, depends on the resolution of the temperature
scale etched on the thermometer. For high accuracy in the typically weak vertical
temperature gradients of the deep ocean, thermometers are etched with scale intervals
between 0.1 and 0.2°C. Most reversing thermometers have scale intervals of 0.1°C.
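Applying the corrections described above is simple arithmetic; in this sketch the function names and sign conventions are our own:

```python
def ice_point_shift(ice_bath_reading):
    """A correctly calibrated thermometer reads 0.0 degC in the ice bath;
    any departure is treated as a constant offset to remove."""
    return 0.0 - ice_bath_reading

def corrected_temperature(reading, index_correction, ice_bath_reading=0.0):
    """Apply the constant index correction, then the current ice-point shift."""
    return reading + index_correction + ice_point_shift(ice_bath_reading)
```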
The reliability and calibrated absolute accuracy of reversing thermometers continue
to provide a standard temperature measurement against which all forms of electronic
sensors are compared and evaluated. In this role as a calibration standard, reversing
thermometers continue to be widely used. In addition, many oceanographers still
believe that standard hydrographic stations made with sample bottles and reversing
thermometers, provide the only reliable data. For these reasons, we briefly describe
some of the fundamental problems that occur when using reversing thermometers. An
understanding of these errors may also prove helpful in evaluating the accuracy of
reversing thermometer data that are archived in the historical data file. The primary
malfunction that occurs with a reversing thermometer is a failure of the mercury to
break at the correct position. This failure is caused by the presence of gas (a bubble)
somewhere within the mercury column. Normally all thermometers contain some gas
within the mercury. As long as the gas bubble has sufficient mercury compressing it,
the bubble volume is negligible, but if the bubble gets into the upper part of the
capillary tube it expands and causes the mercury to break at the bubble rather than at
the break-off point. The proper place for this resident gas is at the bulb end of the
mercury; for this reason it is recommended that reversing thermometers always be
stored and transported in the bulb-up (reservoir-down) position. Rough handling can
be the cause of bubble formation higher up in the capillary tube. Bubbles lead to
consistently offset temperatures and a record of the thermometer history can clearly
indicate when such a malfunction has occurred. Again the practice of renewing, or at
least checking, the thermometer calibration is essential to ensuring accurate tempera-
ture measurements. As with most oceanographic equipment, a thermometer with a
detailed history is much more valuable than a new one without some prior use.
There are two basic types of reversing thermometers: (1) protected thermometers
which are encased completely in a glass jacket and not exposed to the pressure of the
water column; and (2) unprotected thermometers for which the glass jacket is open at
one end so that the reservoir experiences the increase of pressure with ocean depth,
leading to an apparent increase in the measured temperature. The increase in
temperature with depth is due to the compression of the glass bulb, so that if the
compressibility of the glass is known from the manufacturer, the pressure and hence
the depth can be inferred from the temperature difference, ΔT = Tunprotected − Tprotected.
The difference in thermometer readings, collected at the same depth, can be
used to compute the depth to an accuracy of about 1% of the depth. This subject will
be treated more completely in the section on depth/pressure measurement. We note
here that the 1% accuracy for reversing thermometers exceeds the accuracy of 2-3%
one normally expects from modern depth sounders.
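The paired-thermometer depth estimate can be sketched as below; the pressure coefficient q and the mean density are illustrative values of our own, and real work would use the manufacturer's calibrated coefficient for the particular thermometer:

```python
def thermometric_depth(t_unprotected, t_protected, q=0.01, rho=1.025):
    """Depth (m) from a protected/unprotected reversing thermometer pair.

    q   -- apparent warming of the unprotected thermometer per metre of
           water (degC/m); an illustrative, manufacturer-style value
    rho -- mean density of the overlying water column (g/cm^3)
    """
    delta_t = t_unprotected - t_protected
    return delta_t / (q * rho)
```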
Unless collected for a specific observational program or taken as calibrations for
electronic measurement systems, reversing thermometer data are most commonly
found in historical data archives. In such cases, the user is often unfamiliar with the
precise history of the temperature data and thus cannot reconstruct the conditions
under which the data were collected and edited. Under these conditions one generally
assumes that the errors are of two types; either they are large offsets (such as errors in
reading the thermometer) which are readily identifiable by comparison with other
regional historical data, or they are small random errors due to a variety of sources and
difficult to identify or separate from real physical oceanic variability. Parallax errors,
which are one of the main causes of reading errors, are greatly reduced through use of
an eye-piece magnifier. Identification and editing of these errors depends on the
problem being studied and will be discussed in a later section on data processing.
1.3.2 The mechanical bathythermograph (MBT)
The MBT uses a liquid-in-metal thermometer to register temperature and a Bourdon
tube sensor to measure pressure. The temperature sensing element is a fine copper
tube nearly 17 m long filled with toluene (Figure 1.3.2). Temperature readings are
recorded by a mechanical stylus which scratches a thin line on a coated glass slide.
Although this instrument has largely been replaced by the expendable bathy-
thermograph (XBT), the historical archives contain numerous temperature profiles
collected using this device. It is, therefore, worthwhile to describe the instrument and
the data it measures. Only the temperature measurement aspect of this device will be
considered; the pressure/depth recording capability will be addressed in a later
section.
There are numerous limitations to the MBT. To begin with, it is restricted to depths
less than 300 m. While the MBT was intended to be used with the ship underway, it is
only really possible to use it successfully when the ship is traveling at no more than a
Figure 1.3.2. A bathythermograph showing its internal construction (temperature element: xylene-filled tubing, Bourdon tube, stylus, and smoked glass slide; pressure element: bellows, piston, and helical spring) and sample BT slides.
few knots. At higher speeds, it becomes impossible to retrieve the MBT without the
risk of hitting the instrument against the ship. Higher speeds also make it difficult to
properly estimate the depth of the probe from the amount of wire out. The
temperature accuracy of the MBT is restricted by the inherent lower accuracy of the
liquid-in-metal thermometer. Metal thermometers are also subject to permanent
deformation. Since metal is more subject to changes at high temperatures than is glass
it is possible to alter the performance of the MBT by continued exposure to higher
temperatures (i.e. by leaving the probe out in the sun). The metal return spring of the
temperature stylus is also a source of potential problems in that it is subject to
hysteresis and creep. Hysteresis, in which the up-trace does not coincide with the
down-trace, is especially prevalent when the temperature differences are small. Creep
occurs when the metal is subjected to a constant loading for long periods. Thus, an
MBT continuously used in the heat of the tropics may be found later to have a slight
positive temperature error.
Most of the above errors can be detected and corrected for by frequent calibration of
the MBT. Even with regular calibration it is doubtful that the stated precision of 0.1°F
(0.06°C) can be attained. Here, the value is given in °F since most of the MBTs were
produced with these temperature scales. When considering MBT data from the
historical data files, it should be realized that these data were entered into the files by
hand. The usual method was to produce an enlarged black-and-white photograph of
the temperature trace using the nonlinear calibration grid unique to each instrument.
Temperature values were then read off of these photographs and entered into the data
file at the corresponding depths. The usual procedure was to record temperatures for a
fixed depth interval (i.e. 5 or 10 m) rather than to select out inflection points that best
described the temperature profile. The primary weakness of this procedure is the ease
with which incorrect values can enter the data file through misreading the tempera-
ture trace or incorrectly entering the measured value. Usually these types of errors
result in large differences with the neighboring values and can be easily identified.
Care should be taken, however, to remove such values before applying objective
methods to search for smaller random errors. It is also possible that data entry errors
can occur in the entry of date, time and position of the temperature profile and tests
should be made to detect these errors.
1.3.3 Resistance thermometers (expendable bathythermograph: XBT)
Since the electrical resistance of metals, and other materials, changes with
temperature, these materials can be used as temperature sensors. The resistance (R)
of most metals depends on temperature (T) and can be expressed as a polynomial
R = R0(1 + aT + bT² + cT³ + ...)   (1.2.4)
where a, b, and c are constants. In practice, it is usually assumed that the response is
linear over some limited temperature range and the proportionality can be given by
the value of the coefficient a (called the temperature resistance coefficient). The most
commonly used metals are copper, platinum, and nickel which have temperature
coefficients of 0.0043, 0.0039, and 0.0066 (°C)⁻¹, respectively. Of these, copper has the
most linear response but its resistance is low so that a thermal element would require
many turns of fine wire and would consequently be expensive to produce. Nickel has a
very high resistance but deviates sharply from linearity. Platinum has a relatively high
resistance level, is very stable and has a relatively linear behavior. For these reasons,
platinum resistance thermometers have become a standard by which the international
scale of temperature is defined. Platinum thermometers are also widely used as
laboratory calibration standards and have accuracies of ±0.001°C.
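At first order, inverting the polynomial above for a platinum element reduces to the linear term; in the sketch below, r0 = 100 Ω is an assumed element resistance, while a is the platinum coefficient quoted above:

```python
def platinum_temperature(resistance, r0=100.0, a=0.0039):
    """First-order inversion of R = R0*(1 + a*T): T = (R/R0 - 1)/a (degC)."""
    return (resistance / r0 - 1.0) / a
```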
The semiconductors form another class of resistive materials used for temperature
measurements. These are mixtures of oxides of metals such as nickel, cobalt, and
manganese which are molded at high pressure followed by sintering (i.e. heating to
incipient fusion). The types of semiconductors used for oceanographic measurements
are commonly called thermistors. These thermistors have the advantages that: (1) the
temperature resistance coefficient of −0.05 (°C)⁻¹ is about ten times as great as that for
copper; and (2) the thermistors may be made with high resistance for a very small
physical size.
The temperature coefficient of thermistors is negative which means that the
resistance decreases as temperature increases. This temperature coefficient is not a
constant except over very small temperature ranges; hence the change of resistance
with temperature is not linear. Instead, the relationship between resistance and
temperature is given by
R(T) = R0 exp [β(T⁻¹ − T0⁻¹)]   (1.2.5)
where α = −β/T² is the conventional temperature coefficient of resistance, and T and
T0 are two absolute temperatures (K) with the respective resistance values of R(T) and
Ro. Thus, we have a relationship whereby temperature T can be computed from the
measurement of resistance R(T).
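Equation (1.2.5) inverts in closed form. In the sketch below, r0, t0 and beta are illustrative thermistor constants (5000 Ω at 25°C, with a beta of a few thousand kelvin), not values from the text:

```python
import math

def thermistor_temperature(resistance, r0=5000.0, t0=298.15, beta=3500.0):
    """Invert R(T) = R0*exp(beta*(1/T - 1/T0)) for T in kelvin."""
    inv_t = 1.0 / t0 + math.log(resistance / r0) / beta
    return 1.0 / inv_t

def temperature_coefficient(t_kelvin, beta=3500.0):
    """alpha = -beta/T**2: negative, so resistance falls as temperature rises."""
    return -beta / t_kelvin ** 2
```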
One of the most common uses of thermistors in oceanography is in expendable
bathythermographs (XBTs). The XBT was developed to provide an upper ocean
temperature profiling device that operated while the ship was underway. The crucial
development was the concept of depth measurement using the elapsed time for the
known fall rate of a "freely-falling" probe. To achieve "free-fall", independent of the
ship's motion, the data transfer cable is constructed from fine copper wire with feed-
spools in both the sensor probe and in the launching canister (Figure 1.3.3). The
details of the depth measurement capability of the XBT will be discussed and
evaluated in the section on depth/pressure measurements.
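The elapsed-time-to-depth conversion takes the standard quadratic fall-rate form; the coefficients below are nominal Sippican-style values often quoted in the literature and should be treated as illustrative, since fall-rate coefficients are probe-type specific:

```python
def xbt_depth(elapsed_s, a=6.472, b=0.00216):
    """Depth (m) from elapsed fall time t via z = a*t - b*t**2.

    The quadratic term accounts for the probe slowing slightly as its
    wire pays out and its mass decreases.
    """
    return a * elapsed_s - b * elapsed_s ** 2
```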
The XBT probes employ a thermistor placed in the nose of the probe as the
temperature sensing element. According to the manufacturer (Sippican Corp.;
Marion, Massachusetts, U.S.A.), the accuracy of this system is ±0.1°C. This figure
is determined from the characteristics of a batch of semiconductor material which has
known resistance-temperature (R-T) properties. To yield a given resistance at a
standard temperature, the individual thermistors are precision-ground, with the XBT
probe thermistors ground to yield 5000 Ω (Ω is the symbol for the unit of ohms) at
25°C (Georgi et al., 1980). If the major source of XBT probe-to-probe variability can be
attributed to imprecise grinding, then a single-point calibration should suffice to
reduce this variability in the resultant temperatures. Such a calibration was carried
out by Georgi et al. (1980) both at sea and in the laboratory.
To evaluate the effects of random errors on the calibration procedure, twelve probes
were calibrated repeatedly. The mean differences between the measured and bath
temperatures were ±0.045°C, with a standard deviation of 0.01°C. For the overall
calibration comparison, 18 cases of probes (12 probes per case) were examined. Six
cases of T7s (good to 800 m and up to 30 knots) and two cases of T6s (good to 500 m
and at less than 15 knots) were purchased new from Sippican while the remaining 10
cases of T4s (good to 500 m up to 30 knots) were acquired from a large pool of XBT
probes manufactured in 1970 for the U.S. Navy. The overall average standard
deviation for the probes was 0.023°C, which then reduces to 0.021°C when con-
sideration is made for the inherent variability of the calibration procedure.
A separate investigation was made of the R-T relationship by studying the response
characteristics for nine probes. The conclusion was that the R-T differences ranged
from +0.011°C to −0.014°C, which then means that the measured relationships were
within ±0.014°C of the published relationship and that the calculation of new coeffi-
cients, following Steinhart and Hart (1968), is not warranted. Moreover the final con-
clusions of Georgi et al. (1980) suggest an overall accuracy for XBT thermistors of
±0.06°C at the 95% confidence level and that the consistency between thermistors is
sufficiently high that individual probe calibration is not needed for this accuracy level.
Another method of evaluating the performance of the XBT system is to compare
XBT temperature profiles with those taken at the same time with a higher-accuracy
profiler such as a CTD system. Such comparisons are discussed by Heinmiller et al.
(1983) for data collected in both the Atlantic and the Pacific using calibrated CTD
systems. In these comparisons, it is always a problem to achieve true synopticity in the
data collection since the XBT probe falls much faster than the recommended drop rate
for a CTD probe. Most of the earlier comparisons between XBT and CTD profiles
(Flierl and Robinson, 1977; Seaver and Kuleshov, 1982) were carried out using XBT
temperature profiles collected between CTD stations separated by 30 km. For the
Figure 1.3.3. Exploded view of a Sippican Oceanographic Inc. XBT showing spool and canister.
purposes of intercomparison, it is better for the XBT and CTD profiles to be collected
as simultaneously as possible.
The primary error discussed by Heinmiller et al. (1983) is that in the measurement
of depth rather than temperature. There were, however, significant differences
between temperatures measured at depths where the vertical temperature gradient
was small and the depth error should make little or no contribution. Here, the XBT
temperatures were found to be systematically higher than those recorded by the CTD.
Sample comparisons were divided by probe type and experiment. The T4 probes (as
defined above) yielded a mean XBT-CTD difference of about 0.19°C, while the T7s
(defined above) had a lower mean temperature difference of 0.13°C. Corresponding
standard deviations of the temperature differences were 0.23°C for the T4s, and
0.11°C for the T7s. Taken together, these statistics suggest an XBT accuracy less than
the ±0.1°C given by the manufacturer and far less than the 0.06°C reported by Georgi
et al. (1980) from their calibrations.
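Comparison statistics of the kind reported above are straightforward to reproduce; the paired differences below are synthetic values chosen only to mimic the T4 numbers, not data from the study:

```python
import statistics

# Synthetic XBT-minus-CTD temperature differences (degC) taken at depths
# where the vertical temperature gradient is weak
differences = [0.21, 0.15, 0.24, 0.18, 0.09, 0.26, 0.17, 0.22]

mean_diff = statistics.mean(differences)
std_diff = statistics.stdev(differences)  # sample standard deviation
```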
From these divergent results, it is difficult to decide where the true XBT
temperature accuracy lies. Since the Heinmiller et al. (1983) comparisons were made
in situ there are many sources of error that could contribute to the larger temperature
differences. Even though most of the CTD casts were made with calibrated instru-
ments, errors in operational procedures during collection and archival could add
significant errors to the resultant data. Also, it is not easy to find segments of temp-
erature profiles with no vertical temperature gradient and therefore it is difficult to
ignore the effect of the depth measurement error on the temperature trace. It seems
fair to conclude that the laboratory calibrations represent the ideal accuracy possible
with the XBT system (i.e. better than ±0.1°C). In the field, however, one must expect
other influences that will reduce the accuracy of the XBT measurements and an
overall accuracy slightly more than ±0.1°C is perhaps realistic. Some of the sources of
these errors can be easily detected, such as an insulation failure in the copper wire
which results in single step offsets in the resulting temperature profile. Other possible
temperature error sources are interference due to shipboard radio transmission (which
shows up as high frequency noise in the vertical temperature profile) or problems with
the recording system. Hopefully, these problems are detected before the data are
archived in historical data files.
In closing this section we comment that, until recently, most XBT data were
digitized by hand. The disadvantage of this procedure is that chart paper recording
does not fully realize the potential digital accuracy of the sensing system and that the
opportunities for operator recording errors are considerable. Again, some care should
be exercised in editing out these large errors which usually result from the incorrect
hand recording of temperature, date, time or position. It is becoming increasingly
popular to use digital XBT recording systems which improve the accuracy of the
recording and eliminate the possibility of incorrectly entering the temperature trace.
Such systems are described, for example, in Stegen et al. (1975) and Emery et al.
(1986). Today, essentially all research XBT data are collected with digital systems,
while the analog systems are predominantly used by various international navies.
1.3.4 Salinity/conductivity-temperature-depth profilers
Resistance thermometers are widely used on continuous profilers designed to replace
the earlier hydrographic profiles collected using a series of sampling bottles. The new in
situ electronic instruments continuously sample the water temperature, providing much
higher resolution information on the ocean's vertical and horizontal temperature
structure. Since density also depends on salinity, electronic sensors were developed to
measure salinity in situ and were incorporated into the profiling system. As discussed by
Baker (1981), an early electronic profiling system for temperature and salinity was
described by Jacobsen (1948). The system was limited to 400 m and used separate
supporting and data transfer cables. Next, a system called the STD (salinity-temperature-depth)
profiler was developed by Hamon and Brown in the mid-1950s (Hamon,
1955; Hamon and Brown, 1958). The evolution of the conductivity measurement, used
to derive salinity, will be discussed in the section on salinity. This evolution led to the
introduction of the conductivity-temperature-depth (CTD) profiling system (Brown,
1974). This name change identified improvements not only in the conductivity sensor
but also in the temperature sensing system designed to overcome the mismatch in the
response times of the temperature and conductivity sensors. This mismatch often
resulted in erroneous salinity spikes in the earlier STD systems (Dantzler, 1974).
29. 18 Data Analysis Methods in Physical Oceanography
Most STD/CTD systems use a platinum resistance thermometer as one leg of an
impedance bridge from which the temperature is determined. An important
development was made by Hamon and Brown (1958) where the sensing elements
were all connected to oscillators that converted the measured variables to audio
frequencies which could then be sent to the surface via a single conducting element in
the profiler support cable. The outer cable sheath acted as both mechanical support
and the return conductor. This data transfer method has subsequently been used on
most electronic profiling systems. The early STDs were designed to operate to 1000 m
and had a temperature range of 0-30°C with an accuracy of ±0.15°C. Later STDs,
such as the widely used Plessey Model 9040, had accuracies of ±0.05°C with
temperature ranges of -2 to +18°C or +15 to +35°C (range was switched
automatically during a cast). Modern CTDs, such as the Sea-Bird Electronics SBE 25 and
the General Oceanics MK3C (modified after the EG&G Mark V) (Figure 1.3.4) have
accuracies of ±0.002°C over a range of -3 to +32°C and a stability of 0.001°C
(Brown and Morrison, 1978; Hendry, 1993). To avoid the problem of sensor response
mismatch the MK3C CTD combines the accuracy and stability of a platinum
resistance thermometer with the speed of a thermistor. The response time of the
platinum thermometer is typically 250 ms while the response time of the conductivity
cell (for a fall rate of 1 m/s) is typically 25 ms. The miniature thermistor probe
matches the faster response of the conductivity cell with a response time of 25 ms.
These two temperature measurements are combined to yield a rapid and highly
accurate temperature. The response of the combined system to a step input is shown
in Figure 1.3.5 taken from Brown and Morrison (1978). Later modifications have sent
the platinum resistance temperature up the cable along with the fast-response
thermistor temperature for later combination. It is also possible to separate the
thermometer from the conductivity cell so that the spatial separation acts as a time
delay as the unit falls through the water (Topham and Perkins, 1988).
1.3.5 Dynamic response of temperature sensors
Before considering more closely the problem of sensor response time for STD/CTD
systems, it is worthwhile to review the general dynamic characteristics of temperature
measuring systems. For example, no temperature sensor responds instantaneously to
changes in the environment which it is measuring. If the environment temperature is
changing, the sensor element lags in its response. A simple example is a reversing
thermometer which, lowered through the water column, would at no time read the
correct environment temperature until it had been stopped and allowed to equilibrate
for some time. The time (K) that it takes the thermometer to respond to the
temperature of a new environment is known as the response time or "time constant" of
the sensor.
The time constant K is best defined by writing the heat transfer equation for our
temperature sensor as
dT/dt = -(1/K)(T - Tw)    (1.3.5.1)
where Tw and T are the temperatures of the medium (water) and the thermometer, and
t refers to the elapsed time. If we assume that the temperature change occurs rapidly as
the sensor descends, the temperature response can be described by the integration of
equation (1.3.5.1), from which:
30. Data Acquisition and Recording 19
Figure 1.3.4. (a) Schematic of the Sea-Bird SBE 25 CTD and optional sensor modules (courtesy, Doug
Bennett, Sea-Bird Electronics).
(T - Tw)/(T0 - Tw) = ΔT/ΔT0 = e^(-t/K)    (1.3.5.2)
In this solution, T0 refers to the temperature of the sensor before the temperature change,
and K is defined so that the ratio ΔT/ΔT0 becomes e^(-1) (= 0.368) when 63% of the
temperature change, ΔT, has taken place. The time for the temperature sensor to reach
90% of the final temperature value can be calculated using e^(-t/K) = 0.1. A more complex
case is when the temperature of the environment is changing at a constant rate, i.e.
Figure 1.3.4. (b) Schematic of General Oceanics MK3C/WOCE CTD and optional sensors; (c) schematic
of electronics and sensors of General Oceanics MK3C/WOCE CTD (courtesy, Dan Schaas and Mabel
Gracia, General Oceanics).
Figure 1.3.5. Combined output and response times of the resistance thermometer of a CTD.
Tw = T1 + ct    (1.3.5.3)
where T1 and c are constants. The temperature sensor then follows the same
temperature change but lags behind so that
T - Tw = -cK    (1.3.5.4)
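This steady lag can be checked numerically by integrating equation (1.3.5.1) with Tw = T1 + ct. A minimal sketch (simple Euler stepping; the parameter values are arbitrary choices, not from the text):

```python
def simulate_lag(K=0.6, c=0.1, T1=10.0, dt=0.001, t_end=20.0):
    """Euler-integrate dT/dt = -(1/K)(T - Tw) with Tw = T1 + c*t."""
    T, t = T1, 0.0           # start in equilibrium with the water at t = 0
    while t < t_end:
        Tw = T1 + c * t
        T += dt * (-(T - Tw) / K)
        t += dt
    return T - (T1 + c * t)  # steady lag; approaches -c*K

lag = simulate_lag()         # settles near -c*K = -0.06 for c = 0.1, K = 0.6
```

After the initial transient decays, the sensor tracks the ramp with a constant offset of -cK, as in (1.3.5.4).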
The response times, as defined above, are given in Table 1.3.1 for various tempera-
ture sensing systems. Values refer to the time in seconds for the sensor to reach the
specified percentage of its final value.
The ability of the sensor to attain its response level depends strongly on the speed at
which the sensor moves through the medium. An example of the application of these
response times is an estimate for the period of time a reversing thermometer is allowed
to "soak" to register the appropriate temperature. Suppose we desired an accuracy of
±0.01°C and that our reversing thermometer is initially 10°C warmer than the water.
From equation (1.3.5.2), 0.01/10.0 = exp(-t/K), so that t = 550 s or 9.2 min. Thus,
the usually recommended soak period of 5 min (for a hydrographic cast) is set by
thermometer limitations rather than by the imperfect flushing of the water sample
bottles. Another application is the estimation of the descent rate for a STD/CTD.
Table 1.3.1. Response times (in s) for various temperature sensors
Device K63% K90% K99%
Mechanical bathythermograph 0.13 0.30 0.60
STD 0.30 0.60 1.20
Thermistor 0.04 0.08 0.16
Reversing thermometer 17.40 40.00 80.00
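For a first-order sensor the 90% and 99% columns follow from the 63% time constant, since setting e^(-t/K) equal to the residual fraction gives t = K ln(10) for 90% and t = K ln(100) for 99%. A quick check against the reversing-thermometer row (some other rows of the table appear to have been rounded differently):

```python
import math

def response_time(K, fraction):
    """Time for a first-order sensor with time constant K (the 63% value)
    to complete a given fraction of a step change: exp(-t/K) = 1 - fraction."""
    return -K * math.log(1.0 - fraction)

K = 17.40                      # reversing thermometer, K63% column of Table 1.3.1
t90 = response_time(K, 0.90)   # ~40 s, matching the K90% column
t99 = response_time(K, 0.99)   # ~80 s, matching the K99% column
```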
Assuming that the critical temperature change is in the thermocline, where the
temperature change is about 2°C, then to sense this change with an accuracy of
0.1°C the STD/CTD response requires that exp(-t/0.6) = 0.1/2.0, from which t =
1.8 s. Thus, we have the usual recommendation for a lowering rate of about 1 m/s.
Today, sensors such as those used in the SBE 25 CTD (Figure 1.3.4a), General
Oceanics MK3C CTD (Figure 1.3.4b, c) and the Guildline 8737 CTD have response
times closer to 1.0 s.
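Both worked examples come from the same rearrangement of equation (1.3.5.2), t = K ln(ΔT0/ΔT). A short sketch (the K values of roughly 80 s and 0.6 s are those implied by the text's numbers):

```python
import math

def equilibration_time(K, dT_initial, dT_tolerance):
    """Solve exp(-t/K) = dT_tolerance / dT_initial for t (equation 1.3.5.2)."""
    return K * math.log(dT_initial / dT_tolerance)

# Reversing-thermometer soak: 10 C initial offset, +/-0.01 C accuracy
t_soak = equilibration_time(80.0, 10.0, 0.01)   # ~550 s, about 9.2 min

# STD/CTD in a 2 C thermocline, 0.1 C accuracy, K = 0.6 s
t_settle = equilibration_time(0.6, 2.0, 0.1)    # ~1.8 s -> lower at ~1 m/s
```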
1.3.6 Response times of CTD systems
As with any thermometer, the temperature sensor on the CTD profiler does not
respond instantaneously to a change in temperature. Heat must first diffuse through
the viscous boundary layer set up around the probe and through the protective
coatings of the sensor (Lueck et al., 1977). In addition, the actual temperature head
must respond before the temperature is recorded. These effects lead to the finite
response time of the profiler and introduce noise into the observed temperature data
(Horne and Toole, 1980). A correction is needed to achieve accurate temperature,
salinity and density data. Fofonoff et al. (1974) discuss how the single-pole filter model
(1.3.5.2) may be used to correct the temperature data. In this lag-correction procedure,
the true temperature at a point is estimated from (1.3.5.1) by calculating the time rate-
of-change of temperature from the measured record using a least-square linear
estimation over several neighboring points.
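A minimal sketch of such a lag correction, assuming the single-pole model: the true temperature is estimated as T* = T + K dT/dt, with dT/dt obtained from a least-squares straight-line fit over a few neighbouring samples. The window length and K value here are illustrative assumptions:

```python
import numpy as np

def lag_correct(T, dt, K, half_window=2):
    """Estimate T* = T + K*dT/dt (equation 1.3.5.1 rearranged), with dT/dt
    from a local least-squares linear fit over neighbouring samples."""
    T = np.asarray(T, dtype=float)
    corrected = np.empty_like(T)
    for i in range(len(T)):
        lo, hi = max(0, i - half_window), min(len(T), i + half_window + 1)
        t = dt * np.arange(lo, hi)
        slope = np.polyfit(t, T[lo:hi], 1)[0]   # least-squares dT/dt
        corrected[i] = T[i] + K * slope
    return corrected
```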
Horne and Toole (1980) argue that data corrected with this method may still be in
error due to errors arising in the estimation of terms in the differential equation or the
approximation of the equation to the actual response of the sensor. As an alternative,
they suggest using the measured data to estimate a correction filter directly. This
procedure assumes that the observed temperature data may be written as a
convolution of the true temperature with the response function of the sensor such that
T(t) = H[T*(t)] (1.3.6.1)
where T is the observed temperature at time t, T* is the true temperature and H is the
transfer or response function of the sensor. The filter g is sought so that
g * H = δ(t)    (1.3.6.2)
where δ is the Dirac delta function. The filter g can be found by fitting, in a least-
squares sense, its Fourier transform to the known Fourier transform of the function H.
This method is fully described in the appendix to Horne and Toole (1980). The major
advantage of this filter technique is only realized in the computation of salinity from
conductivity and temperature.
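The principle can be illustrated with a simplified frequency-domain version: if the sensor response is modelled as a single pole, H(f) = 1/(1 + i2πfK), then g is approximately the regularized inverse of H. This is a sketch of the idea only, not the least-squares fitting procedure of Horne and Toole (1980); the regularization constant is an assumption:

```python
import numpy as np

def deconvolve_single_pole(T_obs, dt, K, eps=1e-3):
    """Apply a filter g with g*H ~ delta: divide the observed spectrum by
    the single-pole response H(f), regularized by eps so the division
    stays stable where H is small (high frequencies)."""
    f = np.fft.rfftfreq(len(T_obs), d=dt)
    H = 1.0 / (1.0 + 2j * np.pi * f * K)
    g = np.conj(H) / (np.abs(H) ** 2 + eps)    # regularized inverse filter
    return np.fft.irfft(np.fft.rfft(T_obs) * g, n=len(T_obs))
```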
In addition to the physical response time problem of the temperature sensor there is
the problem of the nonuniform descent of the CTD probe due to the effects of a ship's
roll or pitch (Trump, 1983). From a study of profiles collected with a Neil-Brown
CTD, the effects of ship's roll were clearly evident at the 5-s period when the data were
treated as a time-series and spectra were computed. High coherence between
temperature and conductivity effects suggests that the mechanisms leading to these
roll-induced features are not related to the sensors themselves but rather reflect an
interaction between the environment and the sensor motion. Two likely candidates
are: (a) the modification of the temperature by water carried along in the wake of the
CTD probe from another depth; and (b) the effects of a turbulent wake overtaking the
probe as it decelerates bringing with it a false temperature.
Trump (1983) concludes by saying that, while some editing procedure may yet be
discovered to remove roll-related temperature variations, none is presently available.
He, therefore, recommends that CTD data taken from a rolling ship not be used to
compute statistics on vertical fine-structure and suggests that the best way to remove
such contamination is to employ a roll-compensation support system for the CTD
probe. Trump also recommends a series of editing procedures to remove roll effects
from present CTD data and argues that, of the 30,000 raw input data points in a 300 m
cast, up to one-half will be removed by these procedures. A standard procedure is
to remove any data for which there is a negative depth change between successive
values on the way down and vice versa on the way up.
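The standard edit mentioned above can be sketched directly (a minimal downcast version; upcast editing applies the reversed test):

```python
import numpy as np

def edit_downcast(depth, temp):
    """Remove samples with a non-positive depth change between successive
    values on the way down, i.e. points where roll hauled the probe back up."""
    depth = np.asarray(depth, dtype=float)
    keep = np.concatenate(([True], np.diff(depth) > 0))
    return depth[keep], np.asarray(temp, dtype=float)[keep]
```

For example, a depth series 0, 1, 2, 1.5, 2.5, 3 m keeps 0, 1, 2, 2.5, 3 m, discarding the roll-induced reversal at 1.5 m.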
1.3.7 Temperature calibration of STD/CTD profilers
Although STD/CTD profilers were supposed to replace bottle casts, it was soon found
that, due to problems with electronic drift and other subtle instrument changes, it was
necessary to conduct in situ calibrations using one or more bottle samples. For this
reason, and also to collect water samples for chemical analyses, most CTDs are used in
conjunction with Rosette bottle samplers which can be commanded to sample at
desired depths. A Rosette sampler consists of an aluminum cage at the end of the CTD
conducting cable to which are fixed six, 12, or more water bottles that can be
individually triggered electronically from the surface. Larger cages can accommodate
larger volume bottles, typically up to 30 litres. While such in situ calibrations are more
important for conductivity measurements, it is good practice to compare temperatures
from the reversing thermometers with the CTD values. This comparison must be done
in waters with near-uniform temperature and salinity profiles so that the errors
between the CTD values and water sample are minimized. One must pick the time of
the CTD values that coincide exactly with the tripping of the bottle. As reported by
Scarlet (1975), in situ calibration usually confirms the manufacturer's laboratory
calibration of the profiling instrument. Generally, this in situ calibration consists of
comparisons between four and six temperature profiles, collected with reversing
thermometers. Taken together with the laboratory calibration data these data are used
to construct a "correction" curve for each temperature sensor as a function of
pressure. Fofonoff et al. (1974) present a laboratory calibration curve obtained over an
18-month period for an early Neil-Brown CTD (the Neil-Brown CTD was the
forerunner of the General Oceanics MK3C CTD). A comparison of 175 temperatures
measured in situ with this profiler and those measured by reversing mercury
thermometers is presented in Figure 1.3.6. In the work reported by Scarlet (1975),
these calibration curves were used in tabular, rather than functional, form and
intermediate values were derived using linear interpolation. This procedure was likely
adequate for the study region (Scarlet, 1975) but may not be generally applicable.
Other calibration procedures fit a polynomial to the reference temperature data to
define a calibration curve.
[Figure 1.3.6 histogram: y-axis, frequency of occurrence; x-axis, DT (DSRT - CTD) from -0.02 to +0.02°C]
Figure 1.3.6. Histogram of temperature differences. Values used are the differences in temperature
between deep-sea reversing mercury thermometers (DSRT) and the temperature recorded by an early
Neil-Brown CTD. (From Fofonoff et al., 1974.)
1.3.8 Sea surface temperature
Sea surface temperature (SST) was one of the first oceanographic variables to be
measured and continues to be one of the most widely made routine oceanographic
measurements. Benjamin Franklin mapped the position of the Gulf Stream by sus-
pending a simple mercury-in-glass thermometer from his ship while traveling between
the U.S. and Europe.
1.3.8.1 Ship measurements
The introduction of routine SST measurements did away with the technique of
suspending a thermometer from the ship. In its place, SST was measured in a sample
of surface water collected in a bucket. When SST measurements were fairly approxi-
mate, this method was adequate. However, as the temperature sensors improved,
problems with on-deck heating/cooling, conduction between bucket and thermometer,
spillage, and other sources of error, led to modifications of the bucket system. New
buckets were built that contained the thermometer and captured only a small volume
of near-surface water. Due to its accessibility and location at the thermal boundary
between the ocean and the atmosphere, the SST has become the most widely observed
oceanic parameter. As in the past, the measurement of SST continues to be part of the
routine marine weather observations made by ships at sea.
There are many possible sources of error with the bucket method including changes
of the water sample temperature on the ship's deck, heat conduction through contact
of the container with the thermometer, and the temperature change of the thermo-
meter while it is being read (Tabata, 1978a). In order to avoid these difficulties, special
sample buckets have been designed (Crawford, 1969) which shield both the container
and the thermometer mounted in it from the heating/cooling effects of sun and wind.
Comparisons between temperature samples collected with these special bucket
samplers and reversing thermometers near the sea surface have yielded temperature
differences of ±0.1°C (Tauber, 1969; Tabata, 1978a).
Seawater cooling for ship's engines makes it possible to measure SST from the
temperature of the engine intake cooling system sensed by some type of thermometer
imbedded in the cooling stream. Called "injection temperatures", these temperature
values are reported by Saur (1963) to be on the average 0.7°C higher than
corresponding bucket temperatures. For his study, Saur used SST data from 12
military ships transiting the North Pacific. Earlier similar studies by Roll (1951) and
by Brooks (1926) found smaller temperature differences, with the intake temperatures
being only 0.1°C higher than the bucket values. Brooks found, however, that the
engine-room crew usually recorded values that were 0.3°C too high. More recent
studies by Walden (1966), James and Fox (1972) and Collins et al. (1975) have given
temperature differences of 0.3, 0.3±1.3, and 0.3±0.7°C, respectively. Tabata (1978b)
compared three-day average SSTs from the Canadian weather ships at Station "P"
(which used a special bucket sampler) with ship injection temperatures from merchant
ships passing close by. He found an average difference of 0.2±1.5°C (Figure 1.3.7).
Again, the mean differences were all positive suggesting the heating effect of the
engine room environment on the injection temperature.
The above comparisons between ship injection and ship bucket SSTs were made
with carefully recorded values on ships that were collecting both types of measure-
ments. Most routine ship-injection temperature reports are sent via radio by the ship's
officers and have no corresponding bucket sample. As might be expected, the largest
errors in these SST values are usually caused by errors in the radio transmission or in
the incorrect reporting or receiving of ship's position and/or temperature value
(Miyakoda and Rosati, 1982). The resulting large deviations in SST can normally be
detected by using a comparison with monthly climatological means and applying
some range of allowable variation, such as 5°C.
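Such a climatology check can be sketched as follows (the 5°C allowable range follows the text; the array layout is an assumption):

```python
import numpy as np

def flag_sst_outliers(sst, monthly_climatology, max_dev=5.0):
    """Return a boolean mask of SST reports deviating from the monthly
    climatological mean by more than max_dev degrees C."""
    sst = np.asarray(sst, dtype=float)
    clim = np.asarray(monthly_climatology, dtype=float)
    return np.abs(sst - clim) > max_dev
```

A report of 28°C against a 15°C climatological mean would be flagged; a 1°C deviation would pass.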
This brings us to the problem of selecting the appropriate SST climatology--the
characteristic SST temperature structure to be used for the global ocean. Until
recently, there was little agreement as to which SST climatology was most appropriate.
In an effort to establish a guide as to which climatology was the best, Reynolds (1983)
compared the available SST climatologies with one he had produced (Reynolds, 1982).
It was this work that led to the selection of Reynolds (1982) climatology for use in the
Tropical Ocean Global Atmosphere (TOGA) research program.
[Figure 1.3.7 histogram: x-axis, standard deviation of 3.5-day sea surface temperature (°C), 0-0.55; separate bars for Oct-Mar and Apr-Sept at Station P]
Figure 1.3.7. Frequency of occurrence of standard deviations associated with the 3.5-day mean sea
surface temperature at Ocean Station "P" (50°N, 145°W). Difference between bucket temperature and
ship intake surface temperature. (Modified after Tabata, 1978a.)
1.3.8.2 Satellite-sensed sea surface temperature (radiation temperature)
In contrast to ship and buoy measurements that sample localized areas on a quasi-
synoptic basis, earth-orbiting satellites offer an opportunity to uniformly sample the
surface of the globe on a nearly synoptic basis. Infrared sensors flown on satellites
retrieve information that can be used to estimate SST, with certain limitations. Clouds
are one of the major limitations in that they prevent long-wave, sea-surface radiation
from reaching the satellite sensor. Fortunately, clouds are rarely stationary and all
areas of the earth's surface are eventually observed by the satellite. In general,
researchers wishing to produce "cloud-free" images of SST are required to construct
composites over time using repeat satellite infrared data. These composites may
require images spanning a couple of days to a whole week. Even with the need for
composite images, satellites provide almost global coverage on a fairly short time scale
compared with a collection of ship SST observations. The main problem with satellite
SST estimates is that their level of accuracy is a function of satellite sensor calibration
and specified corrections for the effects of the intervening atmosphere, even under
apparently cloud-free conditions (Figure 1.3.8).
One of the first satellites capable of viewing the whole earth with an infrared sensor
was ITOS 1 launched January 23, 1970. This satellite carried a scanning radiometer
(SR) with an infrared channel in the 10.5-12.5 µm spectral band. The scanner viewed
the earth in a swath with a nadir (sub-satellite) spatial resolution of 7.4 km as the
satellite travelled in its near-polar orbit. A method that uses these data to map global
SST is described in Rao et al. (1972). This program uses the histogram method of
Smith et al. (1970) to determine the mean SST over a number of pixels (picture
elements), including those with some cloud. A polar-stereographic grid of 2.5° of
latitude by 2.5° of longitude, encompassing about 1024 pixels per grid point per day,
was selected. In order to evaluate the calculated SST retrievals from the infrared
measurements, the calibrated temperature values were compared with SST maps
made from ship radio reports. The resulting root-mean-square (RMS) difference for
the northern hemisphere was 2.6°C for three days in September 1970. When only the
northern hemisphere ocean weather ship SST observations were used, this RMS value
dropped to 1.98°C, a reflection of the improved ship SST observations. A comparison
for all ships from the southern hemisphere, for the same three days, resulted in an
RMS difference of 2.45°C. As has been discussed earlier in this chapter, one of the
[Figure 1.3.8 scatterplot: y-axis, SST difference (°C) from -3 to +3; x-axis, number of cloud-free pixels, 0-2500]
Figure 1.3.8. Sea surface temperature differences, satellite minus ship, plotted as a function of the
number of cloud-free pixels in a 50 x 50 pixel array. (From Llewellyn-Jones et al., 1984.)
reasons for the magnitude of this difference is the uncertainty of the ship-sensed SST
measurements.
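The histogram method referred to above can be illustrated in simplified form: since clouds bias pixels cold, the warm mode of the brightness-temperature histogram over a grid box approximates the clear-sky SST. This is a sketch of the idea only, not the actual Smith et al. (1970) algorithm; the bin width and population threshold are assumptions:

```python
import numpy as np

def warm_mode_sst(pixels, bin_width=0.5, min_fraction=0.05):
    """Estimate SST as the centre of the warmest well-populated histogram
    bin; cloud-contaminated (colder) pixels fall in cooler bins."""
    pixels = np.asarray(pixels, dtype=float)
    edges = np.arange(pixels.min(), pixels.max() + 2 * bin_width, bin_width)
    counts, edges = np.histogram(pixels, bins=edges)
    populated = np.nonzero(counts >= min_fraction * counts.max())[0]
    warm = populated[-1]                     # warmest well-populated bin
    return 0.5 * (edges[warm] + edges[warm + 1])
```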
The histogram method of Smith et al. (1970) was the basis for an operational SST
product known as GOSSTCOMP (Brower et al., 1976). Barnett et al. (1979) examined
the usefulness of these data in the tropical Pacific and found that the satellite SST
values were negatively biased by 1-4°C and so were, by themselves, not very useful.
The authors concluded that the systematically cooler satellite temperatures were due
to the effects of undetected cloud and atmospheric water vapor. As reported in
Miyakoda and Rosati (1982), the satellite SST retrieval procedure was evolving at that
time and retrieval techniques had improved (Strong and Pritchard, 1980). These
changes included new methods to detect clouds and remove the effects of atmospheric
water vapor contamination.
More recently, a new satellite sensor system, the Advanced Very High Resolution
Radiometer (AVHRR) has become the standard for infrared SST retrievals. In terms
of SST the main benefits of the new sensor system are: (a) improved spatial resolution
of about 1 km at nadir; and (b) the addition of other spectral channels which improve
the detection of water vapor by computing the differences between various channel
radiometric properties. The AVHRR is a four- or five-channel radiometer with
channels in the visible (0.6-0.7 µm), near-infrared (0.7-1.1 µm), and thermal infrared
(3.5-3.9, 10.5-11.5, and 11.5-12.5 µm). The channel centered at 3.7 µm was intended
as a nighttime cloud discriminator but is more useful when combined with the 11 and
12 µm channels to correct for the variable amounts of atmospheric water vapor
(Bernstein, 1982). While there are many versions of this "two-channel" or "dual-
channel" correction procedure (also called the "split-window" method), the most
widely used was developed by McClain (1981).
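The general form of such a correction is simple: the 11 µm channel carries the surface signal, and the 11-12 µm brightness-temperature difference scales the water-vapour correction. A sketch of the generic form only; the coefficients below are illustrative placeholders, not McClain's operational values (which were derived by regression against in situ matchups):

```python
def split_window_sst(t11, t12, a0=-1.0, a1=1.0, a2=2.5):
    """Generic split-window form: SST = a0 + a1*T11 + a2*(T11 - T12),
    with T11, T12 the 11 and 12 micron brightness temperatures (deg C).
    Coefficients are illustrative placeholders, not operational values."""
    return a0 + a1 * t11 + a2 * (t11 - t12)
```

A moister atmosphere widens T11 - T12, and the a2 term boosts the retrieved SST to compensate for the extra attenuation.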
The above channel correction methods have been developed in an empirical manner
using some sets of in situ SST measurements as a reference to which the AVHRR
radiance data are adjusted to yield SST. This requires selecting a set of satellite and in
situ SST data collected over coincident intervals of time and space. Bernstein (1982)
chose intervals of several tens of kilometers and several days (Figure 1.3.9) while
McClain et al. (1983) used a period of one day and a spatial grid of 25 km. In a recent
evaluation of both of these methods, Lynn and Svejkovsky (1984) found the Bernstein
method yielded a mean bias of +0.5°C and the McClain equation a bias of -0.4°C
relative to in situ SST measurements. In each case, the difference from in situ values
was smaller than the RMS errors suggested by the authors of the two methods.
Bernstein (1982) compared mean maps made from 10 days of AVHRR retrievals with
similar maps made from routine ship reports. He found the maps to agree to within
±0.8°C (one standard deviation) and concluded that this level of agreement was
limited by the poor accuracy of the ship reports. He suggested that properly handled,
the radiometer data can be used to study climate variations with an accuracy of
0.5-1.0°C. This is consistent with the results of Lynn and Svejkovsky (1984) for a similar
type of data analysis.
Another possible source of satellite SST estimates is the Visible Infrared Spin Scan
Radiometer (VISSR) carried by the Geostationary Orbiting Earth Satellite (GOES).
Unfortunately, the VISSR has a spatial resolution of about 8 km at the sub-satellite
point for the infrared channel. Another disadvantage of the VISSR is the lack of
onboard infrared calibration similar to that available from the AVHRR (Maul and
Bravo, 1983). While the VISSR does provide a hemispherical scan every half hour its
shortcomings have discouraged its application to the general estimation of SST. In
[Figure 1.3.9 crossplot: AVHRR absolute temperature versus ship absolute temperature (°C), 5-25°C, with a perfect-agreement line]
Figure 1.3.9. Grid point by grid point crossplot of the mapped values of sea surface temperature from
ship-based and AVHRR-based maps. (Bernstein, 1982.)
some cases, VISSR data have been examined where there is a lack of suitable AVHRR
data. In one such study, Maul and Bravo (1983) found that a regression between
VISSR infrared and in situ SST data, using the radiative transfer equations, yielded
satellite SST estimates that were no better than ±0.9°C. The conclusion was that, in
general, GOES VISSR SST estimates are accurate to within ±1.3°C only. The primary
problem with improving this accuracy is the presence of sub-pixel size clouds which
contaminate the SST regression.
Efforts are underway to improve the accuracy of retrievals from AVHRR through a
better understanding of the on-board satellite calibration of the radiometer and by the
development of regional and seasonal "dual-channel" atmospheric correction proce-
dures. Evaluation of these correction procedures, compared with collections of
atmospheric radiosonde measurements, has demonstrated the robust character of the
"dual-channel" correction and improvements only require a better estimate of the
local versus global effects in deriving the appropriate algorithm (Llewellyn-Jones et
al., 1984). Thus, it appears safe to suggest that AVHRR SST estimates can be made
with accuracies of about 0.5°C, assuming that appropriate atmospheric corrections are
performed.
Three workshops were held at the Jet Propulsion Laboratory (JPL) to compare the
many different techniques of SST retrievals from existing satellite systems. The first
workshop (January 27-28, 1983) examined only the microwave data from the scanning
multichannel microwave radiometer (SMMR) while the second workshop (June 22-24,
1983) considered SMMR, HIRS (high resolution infrared sounder) and AVHRR for
two time periods, November 1979 and December, 1981. The third workshop (February
22-24, 1984) examined SST products derived from SMMR, HIRS, AVHRR, and VAS
(atmospheric sounder on the GOES satellites) for an additional two months (March
and July, 1982). A series of workshop reports is available from JPL and the results are
summarized in journal articles in the November 1985 issue of the Journal of Geo-
physical Research.
In their review of workshop-3 results, Hilland et al. (1985) reported that the overall
RMS satellite SST errors range from 0.5 to 1.0°C. In a discussion of the same
workshop results, Bernstein and Chelton (1985) were more specific, reporting RMS
differences between satellite and ship SST anomalies ranging from 0.58 to 1.37°C. Mean
differences for this same comparison ranged from -0.48 to 0.72°C and varied
substantially from month to month and season to season. They also reported that the
SMMR SSTs had the largest RMS differences and time-dependent biases. Differences
for the AVHRR and HIRS computed SSTs were smaller. When the monthly ship SST
data were smoothed spatially to represent 600 km averages, the standard deviations of
the monthly ship averages from climatology varied from 0.35 to 0.63°C. Using these
smoothed ship SST anomalies as a reference, the signal-to-noise variance ratios were
0.25 for SMMR and 1.0 for both the AVHRR and HIRS.
The workshop review by McClain et al. (1985) of the AVHRR-based multichannel
SST (MCSST) retrieval method found biases of 0.3-0.4°C (with MCSST lower than
ship), standard deviations of 0.5-0.6°C, and correlations of +0.3 to +0.7 (see also
Bates and Diaz, 1991). They also discussed a refined MCSST technique being used
with more recent NOAA-9 AVHRR data which yielded consistent biases of -0.1°C
and RMS differences (from ship SSTs) of 0.5°C. In an application of AVHRR data to
the study of warm Gulf Stream rings, Brown et al. (1985) discuss a calibration
procedure which provides SST estimates accurate to ±0.2°C. This new calibration
method is the result of thermal vacuum tests which revealed instrument specific
changes in the relative emittance between internal (to the satellite) and external (deep
space) calibration targets. By reviewing satellite prelaunch calibration data they found
that there was an instrument-specific, nonlinear departure from a two-point linear
calibration for higher temperatures. In addition, it was found that the calibration
relationship between the reference PRT (platinum resistance thermometers) and the
sensor systems changed in the thermal vacuum tests; hence a limited instrument
retest, as part of the calibration cycle, is recommended as a way to improve AVHRR
SST accuracy. Such higher accuracy absolute SST values are of importance for future
climate studies where small, long-term temperature changes are significant.
The workshop results for the geostationary sounder unit (VAS) (Bates and Smith,
1985) revealed a warm bias of 0.5°C with an RMS scatter of 0.8-1.0°C. The positive
bias was attributed to a diurnal sampling bias and a bias in the original set of
empirical VAS/buoy matchups. Use of a second set of VAS/buoy matches reduced this
warm bias making VAS SSTs more attractive due to the increased temporal coverage
(every half hour) over that of the AVHRR (one to four images per day).
All of these satellite SST intercomparisons were evaluated against either ship or
buoy measurements of the near-surface bulk temperature. As is often acknowledged in
these evaluations, the bulk temperature is not generally equal to the sea surface skin
temperature measured by the satellite. Studies directed at a comparison between skin
and bulk temperatures by Grassl (1976), as well as Paulson and Simpson (1981),
demonstrate marked (about 0.5°C) differences between the surface skin and
subsurface temperatures. In an effort to better evaluate the atmospheric attenuation
of infrared radiance, Schluessel et al. (1987) compare precision radiometric
measurements made from a ship with SST calculated using a variety of techniques
from coincident NOAA-7 AVHRR imagery. In addition, subsurface temperatures were
41. 30 Data Analysis Methods in Physical Oceanography
continuously monitored with thermistors in the upper 10 m for comparison with the
ship and satellite radiometric SST estimates.
As part of this study, Schluessel et al. (1987) examined the effects of radiometer scan
angle on the AVHRR attenuation and concluded that differences in scan angle,
resulting in different atmospheric paths, produced significant changes in the
computed SST. To correct for atmospheric water vapor attenuation, HIRS radiances
were used to correct the multichannel computation of SST from the AVHRR. The
correspondence between the HIRS radiances and atmospheric water vapor content
was found by numerical simulation of 182 different atmospheres. With the HIRS
correction, AVHRR-derived SST was found to have a bias of +0.03°C and an RMS
error of 0.42°C when compared with the ship radiometer measurements. Comparison
between ship radiometric and in situ temperatures yielded a mean offset of 0.2°C and a
range of −0.5 to +0.9°C about this value (Figure 1.3.10). According to Weinreb et al.
(1990), even if the nonlinearity corrections for the sensor remained valid after satellite
launch, the error in AVHRR data is still 0.55°C, of which 0.35°C is traceable to
calibration of the laboratory blackbody.
From all these various studies, it appears that the infrared satellite SST, when
computed using a multichannel algorithm corrected by HIRS, is capable of yielding
reliable estimates of SST in the absence of visible cloud. Microwave satellite SST
sensing shows promise for future all-weather sensing, but present systems have a poor
signal-to-noise ratio and a consequent low spatial resolution. Future microwave
systems will be designed to measure atmospheric water vapor specifically (in a
separate channel), thus making the correction of the infrared SST estimates more
straightforward. The frequent global coverage of satellite systems makes them
attractive for the global, long-term studies required for an understanding of the
world's climate.
1.3.9 The modern digital thermometer
In many oceanographic institutions, the mercury thermometer has been replaced by a
digital deep-sea reversing thermometer built by Sensoren Instrumente Systeme (SIS)
of Kiel that uses a highly stable platinum thermistor to measure temperature. The SIS
RTM 4002 digital reversing thermometer has the outer dimensions of mercury
instruments so that it fits into existing thermometer racks. Instead of the lighted
magnifying glass needed to read most mercury thermometers, the user simply touches
a small permanent magnet to the sensor "trigger" spot on the glass face of the
thermometer to obtain a bright digital readout of the temperature to three decimal
places. Since the instrument displays the actual temperature at the sample depth,
there is no need to read an auxiliary thermometer to correct the main reading, as is the
case for reversing mercury thermometers. This makes life much more pleasant on a
rolling ship in the middle of the night. Because the response time of the platinum
thermometer is rapid compared with the "soaking" time of several minutes required
for mercury thermometers, less ship time is wasted at oceanographic stations.
The RTM 4002 has a range of −2 to 40°C and a stability of 0.00025°C per month.
According to the manufacturer, the instrument has a resolution of 0.001°C and an
accuracy of 0.005°C over the temperature range −2 to 20°C. Both resolution and
accuracy are considerably lower for temperatures in the range 20-40°C. A magnet is
used to reset the instrument and to activate the light-emitting diode (LED). Sampling
can be performed in three sequential modes. The "Hold" mode displays the last
[Figure 1.3.10 (NOAA-7 orbit 17681, 26/11/84): three panels of SST differences, each
spanning −1.50 to +1.50°C, plotted against ship time (hours, 0-24) and labeled
(AVHRR) − (PRT), (AVHRR, TOVS) − (PRT), and (AVHRR, HIRS) − (PRT).]
Figure 1.3.10. Difference between uncorrected (a) and corrected (b, c) satellite-sensed sea surface
temperatures and ship radiometric in situ temperatures. HIRS - high-resolution infrared sounder; PRT
- platinum resistance thermometer.
temperature stored in memory; the "Cont" mode allows for continuous sampling for
use in the laboratory, while the "Samp" mode is used for reversing thermometer
applications. The instrument allows for a minimum of 2700 samples on two small
lithium batteries.
1.3.10 Potential temperature and density
The deeper one goes into the ocean, the greater the heating of the water caused by the
compressive effect of hydrostatic pressure. The ambient temperature for a parcel of
water at depth is significantly higher than it would be in the absence of pressure
effects. Potential temperature is the in situ temperature corrected for this internal
heating caused by adiabatic compression as the parcel is transported to depth in the
ocean. To a high degree of approximation, the potential temperature, defined as θ(p)
or Tθ, is given in terms of the measured in situ temperature T(p) as θ(p) = T(p) − F(Γ),
where F(Γ) ≈ 0.1°C km⁻¹ is a function of the adiabatic temperature gradient Γ. The
results can have important consequences for oceanographers studying water mass
characteristics in the deep ocean. The difference between the ambient temperature
and θ increases slowly from zero at the ocean surface to about 0.5°C at 5000 m depth.
At a depth of approximately 100 m and temperatures less than 5°C, the difference
between the two forms of temperature reaches the absolute resolution (±0.001°C) of
most thermistors (Figure 1.3.11). Such differences are significant in studies of deep
ocean heating from hydrothermal venting or other heat sources, where temperature
anomalies of ten millidegrees (0.010°C) are considered large. In fact, if the observed
temperatures are not converted to potential temperature, it is impossible to calculate
the anomalies correctly.
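As a rough illustration of the adiabatic correction described above, the following sketch applies the ≈0.1°C km⁻¹ figure quoted in the text. The function name and the linear form are our own simplification; an exact potential temperature requires integrating the full adiabatic lapse rate (e.g. with UNESCO or TEOS-10 routines).

```python
# Simplified potential-temperature estimate based on the text's rule of
# thumb F ~ 0.1 degC per km of depth.  Linear form is an illustration only;
# production work should use UNESCO/TEOS-10 routines.

ADIABATIC_RATE_C_PER_KM = 0.1  # approximate deep-ocean value from the text

def potential_temperature(t_insitu_c, depth_m):
    """Return approximate potential temperature (degC) referenced to the surface."""
    return t_insitu_c - ADIABATIC_RATE_C_PER_KM * (depth_m / 1000.0)

# Example: a parcel at 5000 m is roughly 0.5 degC warmer in situ than its
# potential temperature, consistent with the text.
print(round(potential_temperature(1.503, 5000.0), 3))
```

Note that this linear estimate only approximates the tabulated values in Table 1.3.2, since the true adiabatic gradient varies with temperature, salinity and pressure.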
We remark here that the use of potential temperature in the calculation of density
leads to the definition of potential density, ρθ = ρ(S, θ, 0) in kg m⁻³, as the value of ρ
for a given salinity and potential temperature at surface pressure, p = 0. The
corresponding counterpart to σt (= ρ(S, T, 0) − 1000) is then called "sigma-
theta", where σθ = ρ(S, θ, 0) − 1000. Since density surfaces (as well as isotherms)
can be displaced vertically hundreds of meters by internal oscillations in the deep
ocean, it is crucial that we compare temperatures correctly by taking into consider-
ation the thermal compression effect. Readers familiar with the oceanographic liter-
ature will also note the use of σ2, σ4, and similar sigma expressions for density surfaces
in the deep ocean. These expressions are used as reference levels for the calculation of
density at depths where the effect of hydrostatic compression on density becomes
important. For example, σ4 = ρ(S, θ, 4) − 1000 refers to density at the observed
salinity and potential temperature referred to a pressure of 4000 dbar (40,000 kPa) or
about 4000 m depth. Use of σθ in the deep Atlantic suggests a vertically unstable water
mass below 4000 m whereas the profiles of σ4 correctly increase toward the bottom
(Pickard and Emery, 1992). As indicated by Table 1.3.2, the different sigma values
differ significantly.
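The sigma definitions above translate directly into code. In the sketch below, the density function is a deliberately crude linear equation of state whose coefficients are invented for illustration; only the final conversion from ρ to a sigma value reflects the definition in the text. Real work uses the full UNESCO/TEOS-10 equation of state.

```python
# Toy illustration of the definition sigma-theta = rho(S, theta, 0) - 1000.
# The linear "equation of state" below is invented for illustration only;
# real applications use the full UNESCO/TEOS-10 equation of state.

RHO0 = 1027.0   # kg/m^3, illustrative reference density
ALPHA = 0.17    # kg/m^3 per degC, illustrative thermal term
BETA = 0.78     # kg/m^3 per psu, illustrative haline term
T0, S0 = 10.0, 35.0

def rho_linear(salinity_psu, temp_c):
    """Crude linear density at surface pressure (kg/m^3); illustrative only."""
    return RHO0 - ALPHA * (temp_c - T0) + BETA * (salinity_psu - S0)

def sigma_theta(salinity_psu, theta_c):
    """sigma-theta = rho(S, theta, 0) - 1000, evaluated at surface pressure."""
    return rho_linear(salinity_psu, theta_c) - 1000.0

# Evaluate with the 5000 m values of Table 1.3.2 (S = 34.686, theta = 1.069):
print(round(sigma_theta(34.686, 1.069), 3))
```

Because the coefficients are invented, the printed value only loosely resembles the tabulated σθ; the point is the structure of the calculation, not the number.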
[Figure 1.3.11: in situ temperature minus potential temperature (°C, 0-0.6) versus
pressure (db, 0-6000). Panel title: "Potential temperature: Marathon II May 17, 1984;
In situ temperature - Potential temperature: Stn 40 (35N, 152W)".]
Figure 1.3.11. Difference between in situ temperature (T) recorded by a CTD versus the calculated
potential temperature (θ) for a deep station in the North Pacific Ocean (35°N, 152°W). Below about
500 m, this curve is applicable to any region of the world ocean. (Data from Martin et al., 1987.)
Table 1.3.2 Comparison of different forms of sigma for the western Pacific Ocean near Japan (39°N,
147°E). (From Talley et al., 1988.) Columns 2 and 3 give the in situ and potential temperatures,
respectively. Sigma units are kg m⁻³

Depth (m)  In situ T (°C)  Pot. T θ (°C)  Salinity (psu)    σθ       σ2       σ4
0          18.909          18.909         32.574            23.192   31.706   39.852
100         1.160           1.156         33.158            26.555   35.830   44.689
500         3.338           3.305         34.108            27.145   36.286   45.020
1000        2.697           2.632         34.410            27.447   36.619   45.382
2000        1.868           1.734         34.600            27.672   36.890   45.696
3000        1.528           1.311         34.661            27.752   36.993   45.820
4000        1.456           1.138         34.679            27.778   37.029   45.865
5000        1.503           1.069         34.686            27.788   37.043   45.883
5460        1.547           1.054         34.688            27.791   37.046   45.886
1.4 SALINITY
It is the salt in the ocean that separates physical oceanography from other branches of
fluid dynamics. Most oceanographers are familiar with the term "salinity" but many
are not aware of its precise definition. Physical oceanographers often forget that
salinity is a nonobservable quantity and was traditionally defined by its relationship to
another measured parameter, "chlorinity". For the first half of this century, chlorinity
was measured by the chemical titration of a seawater sample. In 1899 the International
Council for the Exploration of the Sea (ICES) established a commission, presided over
by Professor M. Knudsen, to study the problems of determining salinity and density
from seawater samples. In its report (Forch et al., 1902), the commission recom-
mended that salinity be defined as follows: "The total amount of solid material in
grams contained in one kilogram of seawater when all the carbonate has been con-
verted to oxide, all the bromine and iodine replaced by chlorine and all the organic
material oxidized."
Using this definition, and available measurements of salinity, chlorinity and density
for a relatively small number of samples (a few hundred), the commission produced
the empirical relationship
S (‰) = 1.805 Cl (‰) + 0.03 (1.4.1)
known as Knudsen's equation, and a set of tables referred to as Knudsen's tables. The
symbol ‰ indicates "parts per thousand" (ppt), in analogy to percent (%), which is
parts per hundred. In the more modern Practical Salinity Scale, salinity is a unitless
quantity written as "psu" for practical salinity units. It is interesting to note that
Knudsen himself considered using electrical conductivity (Knudsen, 1901) to measure
salinity. However, due to the inadequacy of the apparatus available and similar
problems at the time, he decided that the chemical method was superior.
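Equation (1.4.1) is simple enough to apply directly; the short sketch below does so (the function name is ours):

```python
def knudsen_salinity(chlorinity_ppt):
    """Salinity (parts per thousand) from chlorinity via Knudsen's
    equation (1.4.1): S = 1.805 Cl + 0.03."""
    return 1.805 * chlorinity_ppt + 0.03

# A chlorinity of 19.374 ppt corresponds to a salinity of about 35 ppt:
print(round(knudsen_salinity(19.374), 2))  # 35.0
```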
There are many different titration methods used to determine salinity, but the most
widely applied is the colorimetric titration of halides with silver nitrate (AgNO3)
using the visual end-point provided by potassium chromate (K2CrO4), as described in
Strickland and Parsons (1972). With a trained operator, this method is capable of an
accuracy of 0.02‰ in salinity using the empirical Knudsen relationship. For precise
laboratory work Cox (1963) reported on more sensitive techniques for determining the
titration end-point which yield a precision of 0.002‰ in chlorinity. Cox also describes
an even more complex technique, used by the Standard Sea-water Service, which is
capable of a precision of about 0.0005‰ in chlorinity. It is fairly safe to say that these
levels of precision are not typically obtained by the traditional titration method and
that preconductivity salinities are generally no better than ±0.02‰.
1.4.1 Salinity and electrical conductivity
In the early 1950s, technical improvements in the measurement of the electrical
conductivity of seawater turned attention to using conductivity as a measure of
salinity rather than the titration of chlorinity. Seawater conductivity depends on the
ion content of the water and is therefore directly proportional to the salt content. The
primary reason for getting away from titration methods was the development of
reliable methods of making routine, accurate measurements of conductivity. As noted
earlier, the potential for using seawater conductivity as a measurement of salinity was
first recognized by Knudsen (1901). Later papers explored further the relationship
between conductivity, chlorinity and salinity. A paper by Wenner et al. (1930) sug-
gested that electrical conductivity was a more accurate measure of total salt content
than of chlorinity alone. The authors' conclusion was based on data from the first
conductivity salinometer developed for the International Ice Patrol. This instrument
used a set of six conductivity cells, controlled the sample temperature thermostatically
and was capable of measurements with a precision of better than 0.01%o. With an
experienced operator the precision may be as high as 0.003%0 (Cox, 1963). The latter is
a typical value for most modern conductive and inductive laboratory salinometers and
is an order of magnitude improvement in the precision of salinity measurements over
the older titration methods.
It is worth noting that the conductivities measured by either inductive or conductive
laboratory salinometers, such as the widely used Guildline 8410 Portable Salinometer,
are relative measurements which are standardized by comparison with "standard
seawater". As an outgrowth of the ICES commission on salinity, the reference or
standard seawater was referred to as "Copenhagen Water" due to its earliest production
by a group in Denmark. This standard water is produced by diluting a large sample of
seawater until it has a precise salinity of 35‰ (Cox, 1963). Standard UNESCO seawater
is now being produced by the "Standard Seawater Service" in Wormley, England as well
as at other locations in the U.S.A. (i.e. Woods Hole Oceanographic Institution).
Standard seawater is used as a comparison standard for each "run" of a set of
salinity samples. To conserve standard water, it is customary to prepare a "secondary
standard" with a constant salinity measured in reference to the standard seawater. A
common procedure is to check the salinometer every 10-20 samples with the
secondary standard and to use the primary standard every 50 or 100 samples. In all of
these operations, it is essential to use proper procedures in "drawing the salinity
sample" from the hydrographic water bottle into the sample bottle. Assuming that the
hydrographic bottle remains well sealed on the upcast, two effects must be avoided:
first, contamination by previous salinity samples (that have since evaporated leaving a
salt residue that will increase salinity in the sample bottle); and second, the possibility
of evaporation of the present sample. The first problem is avoided by "rinsing" the
salinity bottle and its cap two to three times with the sample water. Evaporation is
avoided by using a screw cap with a gasket seal. A leaky bottle will give sample values
that are distorted by upper ocean values. For example, if salinity increases and dis-
solved oxygen decreases with depth, deep samples drawn from a leaky bottle will have
anomalously low salinities and high oxygens.
Salinity samples are usually allowed to come to room temperature before being run
on a laboratory bench salinometer. In running the salinity samples one must be careful
to avoid air bubbles and ensure the proper flushing of the salinity sample through the
conductivity cell. Some bench salinometers correct for the marked influence of
ambient temperature on the conductivity of the sample by controlling the sample
temperature, while other salinometers merely measure the sample temperature in order
to be able to compute the salinity from the conductivity and coincident temperature.
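The idea of computing salinity from conductivity and temperature can be sketched with the special case of the PSS-78 practical salinity relation for a sample at exactly 15°C, S = Σ aᵢRt^(i/2). The coefficients below are the standard published PSS-78 values; the snippet deliberately omits the temperature and pressure correction terms of the full scale.

```python
# PSS-78 practical salinity for a sample at exactly 15 degC and atmospheric
# pressure: S = sum(a_i * Rt**(i/2)), where Rt is the conductivity ratio of
# the sample to standard seawater (S = 35) at the same temperature.
# Simplified special case; the full scale adds temperature/pressure terms.

A = [0.0080, -0.1692, 25.3851, 14.0941, -7.0261, 2.7081]

def salinity_at_15c(rt):
    """Practical salinity (psu) from conductivity ratio Rt at 15 degC."""
    return sum(a * rt ** (i / 2.0) for i, a in enumerate(A))

# Standard seawater has Rt = 1 by definition; the coefficients sum to 35:
print(round(salinity_at_15c(1.0), 4))  # 35.0
```

The fact that the coefficients sum to exactly 35 at Rt = 1 reflects the anchoring of the scale to standard seawater.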
Another reason for the shift to conductivity measurements was the potential for in
situ profiling of salinity. The development history of the salinity/conductivity-
temperature-depth (STD/CTD) profilers has been sketched out in Section 1.3.4 in
terms of the development of continuous temperature profilers. The salinity sensing
aspects of the instrument played an important role in the evolution of these profilers.
The first STD (Hamon, 1955) used an electrode-type conductivity cell in which the
resistance or conductivity of the seawater sample is measured and compared with that
of a sample of standard seawater in the same cell. Fouling of the electrodes can be a
problem with this type of sensor. Later designs (Hamon and Brown, 1958) used an
inductive cell to sense conductivity. The inductive cell salinometer consists of two
coaxial toroidal coils immersed in the seawater sample in a cell of fixed dimensions.
An alternating current is passed through the primary coil, which then induces an EMF
(electromotive force) and hence a current within the secondary coil. The EMF and
current in the secondary coil are proportional to the conductivity (salinity) of the
seawater sample. Again, the instrument is calibrated by measuring the conductivity of
standard seawater in the same cell. The advantage of this type of cell is that there are
no electrodes to become fouled. A widely used inductive-type STD was the Plessey
model 9040, which claimed an accuracy of ±0.03‰. Precision was somewhat better,
being between 0.01 and 0.02‰ depending on the resolution selected. Modern
electrode-type cells measure the difference in voltage between conductivity elements at
each end of the seawater passageway. With the conducting elements potted into the
same material, this type of salinity sensor is less prone to contamination by biological
fouling. At the same time, the response time of the conductive cell is much shorter
than that of an inductive sensor, leading to the problem of salinity spiking due to a
mismatch with the temperature response.
The mismatch between the response times of the temperature and conductivity
sensors is the primary problem with STD profilers. Spiking in the salinity record
occurs because the salinity is computed from a temperature measured at a slightly
different time than the conductivity measurement. Modern CTD systems record
conductivity directly, rather than the salinity computed by the system's hardware, and
have faster response thermal sensors. In addition, most modern CTD systems use
electrodes rather than inductive salinity sensors. As shown in Figure 1.4.1, this sensor
has a set of four parallel conductive elements that constitute a bridge circuit for the
measurement of the current passed by the connecting seawater in the glass tube
containing the conductivity elements. The voltage difference is measured between the
conducting elements in the bridge circuit of the conductivity cell. The primary
advantage of the conductive sensor is its greater accuracy and faster time response. In
their discussion of the predecessor of the modern CTD, Fofonoff et al. (1974) give an
[Figure 1.4.1 schematic: electrodes and a platinum coil mounted in a hollow glass
tube (conductivity cell model 87410), with flow arrows through the tube and a cable
leading to the top of the CTD pressure case.]
Figure 1.4.1. Guildline conductivity (salinity) sensor showing the location of four parallel conductive
elements inserted into the hollow glass tube. Conductivity is measured as the water flows through the
glass tube. Cable plugs into the top of the CTD end plate on the pressure case.
overall salinity accuracy for this instrument of ±0.003‰. This accuracy estimate was
based on comparisons with in situ reference samples whose salinities were determined
with a laboratory salinometer also accurate to this level (Figure 1.4.2). This accuracy
value is the same as the standard deviation of duplicate salinity samples run in the lab,
demonstrating the high level of accuracy of CTD profilers.
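Accuracy statements like the ±0.003‰ quoted here come from simple difference statistics between CTD and reference bottle salinities. A minimal sketch of that bookkeeping (the difference values are invented for illustration, not real data):

```python
import statistics

# Hypothetical CTD-minus-bottle salinity differences (parts per thousand);
# values invented to illustrate the procedure, not real data.
differences = [0.002, -0.001, 0.003, 0.000, -0.002, 0.001, 0.004, -0.003]

mean_diff = statistics.mean(differences)   # systematic bias estimate
std_diff = statistics.stdev(differences)   # scatter (precision) estimate

print(f"bias = {mean_diff:.4f} ppt, scatter = {std_diff:.4f} ppt")
```

The mean difference estimates any systematic offset between the two instruments, while the standard deviation plays the role of σs in Figure 1.4.2.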
1.4.1.1 A comparison of two modern CTDs
In recent sea trials in the North Atlantic, scientists at the Bedford Institute of
Oceanography (Bedford, Nova Scotia) examined in situ temperature and salinity
records from an EG&G Mark V CTD and a Sea-Bird Electronics SBE 9 CTD (Hendry,
[Figure 1.4.2 histogram: number of occurrences (0-45) versus ΔS (CTD − rosette)
in ‰, over −0.010 to +0.010, peaked near zero; mean difference ΔS = 0.0002‰.]
Figure 1.4.2. Histogram of salinity differences (in parts per thousand, ‰). Values used are the differences
in salinity between salinity recorded by an early Neil Brown CTD and deep-sea bottle samples taken
from a Rosette sampler. ΔS is the mean salinity difference and σs is the standard deviation. (Modified
after Fofonoff et al., 1974.)
1993). The standards used for the comparisons were temperatures measured by
Sensoren Instrumente Systeme (SIS) digital-reading reversing thermometers and
salinity samples drawn from 10-litre bottles on a Rosette sampler and analyzed using a
Guildline Instruments Ltd Autosal 8400A salinometer standardized with IAPSO
Standard Water.
The Mark V samples at 15.625 Hz and uses two thermometers and a standard
inductive salinity cell. The fast-response (250 ms time-constant) platinum thermo-
meter is used to record the water temperature, while the slower resistance thermo-
meter, whose response time is more closely tuned to that of the conductivity cell, is
used in the conversion of conductivity to salinity. Plots of the differences in temp-
erature and salinity between the bottle samples and the CTD are presented in Figure
1.4.3. Using only the manufacturer's calibrations for all instruments, the Mark V CTD
temperatures were lower than the reversing thermometer values by 0.0034 ± 0.0023°C,
with no obvious dependence on depth (Figure 1.4.3a). In contrast, the Mark V salinity
differences (Figure 1.4.3c) showed a significant trend with pressure, which may be
related to the instrument used or a peculiarity of the cell. With pressure in dbar,
regression of the data yields

Salinity Diff (bottle − CTD) = 0.00483 + 6.259 × 10⁻⁴ Pressure (CTD)   (1.4.2.1)

with a correlation coefficient r² = 0.84. Removal of the trend gives salinity values
accurate to about ±0.003 psu. Pressure errors of several dbars (several meters in
depth) were noted.
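A trend like the one in equation (1.4.2.1) is derived by an ordinary least-squares fit of the bottle-minus-CTD differences against pressure. The sketch below shows that procedure on invented data (the data pairs are illustrative, not Hendry's):

```python
# Least-squares fit of bottle-minus-CTD salinity differences against
# pressure, the procedure behind regression corrections like (1.4.2.1).
# Data pairs are invented for illustration.

def linear_fit(x, y):
    """Return (intercept, slope) of the ordinary least-squares line y = a + b*x."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

pressures = [0.0, 1000.0, 2000.0, 3000.0, 4000.0, 5000.0]   # dbar
ds = [0.005, 0.011, 0.017, 0.023, 0.029, 0.035]             # psu, invented

a, b = linear_fit(pressures, ds)
print(f"intercept = {a:.4f} psu, slope = {b:.3e} psu/dbar")
```

Subtracting the fitted line a + bP from each CTD salinity removes the pressure-dependent part of the error, leaving only the residual scatter.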
The SBE 9 and SBE 25 sample at 24 Hz and use a high-capacity pumping system
and TC-duct to flush the conductivity cell at a known rate (e.g. 2.5 m/s pumping
[Figure 1.4.3 panels (differences versus pressure, 0-5000 db), with regression lines:
(a) ΔT(°C) = 0.0018 + 7.712 × 10⁻⁷ P, r² = 0.15; (b) ΔT(°C) = −0.0052 + 1.055 × 10⁻⁶ P,
r² = 0.67; (c) ΔS = 0.0048 + 6.259 × 10⁻⁴ P, r² = 0.84; (d) ΔS = 0.0051 − 4.265 × 10⁻⁷ P,
r² = 0.07.]
Figure 1.4.3. CTD correction data for temperature (bottle − CTD) and salinity (bottle − CTD) based on
comparison of CTD data with in situ data from bottles attached to a Rosette sampler. (a) Temperature
difference for the EG&G Mark V CTD; (b) same as (a) but for the Sea-Bird SBE 9 CTD; (c) salinity
difference for the EG&G Mark V CTD; (d) same as (c) but for the SBE 9 CTD. Regression curves are given
for each calibration in terms of the pressure, P, in dbar. r² is the squared correlation coefficient. (Adapted
from Hendry, 1993.)
speed for a rate of 0.6-1.2 l/s). When on deck, the conductivity cell must be kept filled
with distilled water. To allow for the proper alignment of the temperature and con-
ductivity records (so that the computed salinity is related to the same parcel of water
as the temperature), the instrument allows for a time shift of the conductivity channel
relative to the temperature channel in the deck unit or in the system software
SEASOFT (module AlignCTD). In the study, the conductivity was shifted 0.072 s
earlier to align with temperature. (The deck unit was programmed to shift conductivity by
one integral scan of 0.042 s and the software the remaining 0.030 s.) Using the manu-
facturer's calibrations for all instruments, the SBE 9 CTD temperatures for the nine
samples were higher than the reversing thermometer values by 0.0002 ± 0.0024°C, with
only a moderate dependence on depth (Figure 1.4.3b). Salinity data from 30 samples
collected over a 3000 db depth range (Figure 1.4.3d) gave CTD salinities that were lower
(fresher) than Autosal salinities by 0.005 ± 0.002 psu, with no depth dependence. By
comparison, the precision of a single bottle salinity measurement is 0.0007 psu. Pressure
errors were less than 1 dbar. Due to geometry changes and the slow degradation of the
platinum black on the electrode surfaces, the thermometer calibration is expected to drift
by 2 m°C and the electronic circuitry by 3 m°C.
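The fractional-sample shift described above (0.072 s against a 24 Hz scan rate) amounts to advancing one channel by a non-integer number of samples, which can be done by linear interpolation. The sketch below is our own illustration of that idea, not the SEASOFT AlignCTD implementation:

```python
# Shift a conductivity time series earlier by lag_s seconds using linear
# interpolation, so each conductivity value pairs with the temperature
# sampled at the same instant.  Illustrative only; not the SEASOFT code.

def shift_earlier(series, sample_hz, lag_s):
    """Return series advanced by lag_s seconds; the tail repeats the last value."""
    shift = lag_s * sample_hz          # shift in (fractional) samples
    whole = int(shift)
    frac = shift - whole
    out = []
    for i in range(len(series)):
        j = min(i + whole, len(series) - 1)
        k = min(j + 1, len(series) - 1)
        out.append(series[j] * (1.0 - frac) + series[k] * frac)
    return out

cond = [3.0, 3.1, 3.2, 3.3, 3.4, 3.5]        # invented 24 Hz samples (S/m)
aligned = shift_earlier(cond, 24.0, 0.072)   # 0.072 s ~ 1.73 samples
print([round(v, 3) for v in aligned])
```

Aligning the two channels in this way suppresses the salinity spiking that arises when conductivity and temperature respond to a sharp gradient at slightly different times.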
Based on the Bedford report, modern CTDs are accurate to approximately 0.002°C
in temperature, 0.005 psu in salinity and <0.5% of full-scale pressure in depth. The
report provides some additional interesting reading on oceanic technology. To begin
with, the investigators had considerable difficulty with erroneous triggering
(misfiring) of bottles on the Rosette. Those of us who have endured this notorious
"grounding" problem appreciate the difficulty of trying to decide if the bottle did or
didn't misfire and if the misfire registered on the shipboard deck unit. If the operator
triggers the unit again after a misfire, the question arises as to whether the new pulse
fired the correct bottle or the next bottle in the sequence. Several of these misfires can
lead to confusing data, especially in well-mixed regions of the ocean. It is good policy
to keep track of the misfires for sorting out the data later.
Another interesting observation was that variations in lowering speed had a
noticeable influence on the temperature and conductivity measurements. Since most
modern CTDs dissipate 5-10 watts, the CTD slightly heats the water through which it
passes. At a 1 m/s nominal fall rate, surface swell can cause the actual fall rate to
oscillate over an approximate range of ±1 m/s, with periodic reversals in fall direction
at the swell period. (Heave compensation is needed to prevent the CTD from being
pulled up and down.) As a result, the CTD sensors are momentarily yanked up
through an approximately 1 m thick thermal wake which is shed from boundary layers
of the package as it decelerates. Hendry (1993) claims that conditional editing based
on package speed and acceleration is reasonably successful in removing these artifacts.
Since turbulent drag varies with speed squared, mechanical turbulence was found to
cause package vibration that affected the electrical connection from the platinum
thermometer to the Mark V CTD. Mixing, entrainment and thermal contamination
caused differences between down and up casts in both instruments. The correction for
thermal inertia of the conductivity cell in the SBE CTD resulted in salinity changes of
0.005 psu, with negative downcast corrections when the cell was cooling and positive
upcast corrections when the cell was warming (Lueck, 1990).
1.4.2 The practical salinity scale
In using either chlorinity titration or the measurement of conductivity to compute
salinity, one employs an empirical definition relating the observed variable to salinity.
In light of the increased use of conductivity to measure salinity, and its more direct
52. 148.
Mem., I, 5, 4.
149.
Mem., I, 5, 3, id. 6, 5; II, i, II, ecc.
150.
Mem. IV, 5, 6.
151.
Cfr. Dronke, op. cit., p. 42;. Pfander, op. cit., p. 28; e vedi Steinthal nella
Zeitschrift für Völkerpsychologie, vol. II, p. 303.
152.
Per es. Zeller, op. cit., p. 108.
153.
Sen. Symp., 2, 9.
154.
Mem., II, 2, 4.
155.
Vedi Strümpell, op. cit., p. 145, il quale a nostro parere è stato il primo,
che abbia messo in piena luce l'elemento filosofico e socratico degli
scritti di Senofonte, op. cit., pp. 482-509.
156.
Mem., I, 2, pp. 49-55.
157. Nub., dal v. 1320 in poi.
158.
Questo difetto c'è nel Köchly, op. cit, passim.
159.
Hermann, Privatalterthümer, § 33, 2ª ed.
160.
Mem., II, 2.
161.
Vedi il citato Hermann, § 29.
53. 162.
Sen. Symp, 8, 12 e seg. e cfr. Mem., I, 2, 29 e seg. 3, 8 e II, 6, 31.
163.
Questa è anche l'opinione dello Zeller, op. cit., p. 58.
164.
Mem., II, 4, 6 e seg.; id., 6, 21-29.
165.
Vedi il Jacobs, Vermischte Schriften, vol. II, p. 251: Jene Sitte enthält
eben so, wie die Liebe zum andern Geschlechte, alle Elèmente des
Edelsten und des Nichtswürdigsten, des Lasters, des Besten und des
Schlechtesten in sich.
166.
Mem., III, 5. 21 e 9, 10; e cfr. ibid., I, 2, 9; e Plat. Apol., 31, E.
167. Mem., III, 6; e IV, 2, 6 e seg.
168.
Mem., IV, 6, 6.
169.
Mem., III, 7. 9.
170.
Mem., IV, 4, 4: Plat. Apol., 34 D e seg.; e cfr. Phaed., 98 C e seg.
171.
Come vuole l'Hermann.
172.
Come vuole il Baur. Vedi su questa quistione lo Zeller, Der Platonische
Staat, in seiner Bedeutung für die Folgezeit, nei citati Vorträge ecc., pp.
62-82.
173.
Mem., III, cap. 11.
174.
Come fa lo Zeller, op. cit., p. 75, nota 2ª.
175.
Questa è l'opinione di Brandis: Entwickelungen ecc., p. 236, nota 49.
54. 176.
Vedi su questo argomento l'Hermann: Privatalterthümer, § 29, con tutte
le autorità ivi addotte, e specialmente John: The Hellenes, the history of
the manners of the ancient Greeks, Londra, 1844, vol. II, p. 42.
177. Vedi Jacobs: Vermischte Schriften, IV, p. 379 e seg.
178.
Mem., II, 6, 35 e cfr. III, 9, 8.
179.
Crit., 49 A e seg. e cfr. Rep., I, 334 B e seg.
180.
Questa è anche l'opinione dello Zeller, op. cit., p. 114.
181.
Il Mainers: Geschichte der Wissenschaften, II, P. 456 (*), pone una
distinzione arbitraria fra il male arrecato sensibilmente all'inimico, e
quella che può toccare il suo benessere interno, negando che
quest'ultimo sia incluso nel χαχῶς ποιεῖν di Senofonte. Nè meno
infondata è la supposizione del Brandis, secondo la quale Senofonte non
avrebbe espresso interamente il pensiero di Socrate. Cfr. lo Strümpell,
op. cit., p. 179, che ha tentato supplire Senofonte col Gorgia, p. 481.
182.
Vedi Döderlein: Lexicon Homericum, n. 536: e per l'etimologia il Curtius:
Griechische Etymologie, p. 317.
183.
So also among writers later than Socrates, e.g. Plato; see
Ast: Lexicon Platonicum s.v. ἀρετή, and in particular Crito, p. 117 B,
Rep. I, p. 335 B, id. p. 353 B, Gorg. p. 506 D, etc.
184.
Göttling, in the above-cited dissertation, Die delphischen Sprüche, has
sought to show that the seven famous proverbs inscribed at the
door of the Delphic temple already formed a body of ethical views,
a kind of catechism of the cardinal virtues, in short a heptalogue
of Hellenism. Cf. in general Nägelsbach: Nachhomerische Theologie,
p. 289 ff.
185.
Cf. especially Wigand: Das religiöse Element in der geschichtlichen
Darstellung des Thukydides, Berlin, 1829.
186.
Cf. the characteristic words of Plato, Rep. X, p. 600 E, and compare Prot.,
118 E, and Gorg., 520 E. As for the contrary opinion of Schneidewin,
loc. cit., we do not think it appropriate to enter into polemics on it here.
187. Plat. Men. 91 B, 95 B; Gorg. 519 C; Soph. 223 A.
188.
On the doctrine of the virtues according to the Sophists see Zeller, op.
cit., vol. I, 3rd ed., p. 910 ff.; and Schanz: Die Sophisten nach Plato,
Göttingen, 1867, pp. 118-122.
189.
Cf. Mem., IV, 1, 2 ff.
190.
Dittrich's dissertation, De Socratis sententia «Virtutem esse
Scientiam», Brunsbergae, 1858, contains a large collection of passages
from Plato and Aristotle; but the inferences of that work have little to
do with the Socrates of history.
191.
Xen. Mem., III, 9, 5: ἔφη δὲ καὶ τὴν δικαιοσύνην καὶ τὴν ἄλλην πᾶσαν
ἀρετὴν σοφίαν εἶναι ff.; ibid., II, 6, 39.
192.
Plat. Lach. 194 D: ἕκαστος... ἃ δὲ ἀμαθής, ταῦτα δὲ κακός. The
definition found in Diogenes Laertius gathers the Socratic thought in a
more concise form; see II, 31: ἔλεγε δὲ καὶ ἓν μόνον ἀγαθὸν εἶναι, τὴν
ἐπιστήμην, καὶ ἓν μόνον κακόν, τὴν ἀμαθίαν. We think it neither
opportune nor necessary to undertake a polemic against those who,
setting aside the testimony of Xenophon and of Aristotle (Eth.
ad Nicom. VI, 13, 3; id. VII, 13, 5; Eud. I, 5; cf. Nicom. III, 8, 6),
have striven to show that Socrates placed the foundation of virtue in
an instinctive and divine inspiration. See especially Mehring, who
in his article Sokrates als Philosoph, a review of the book of
Lasaulx, in the Zeitschrift für Philosophie und philosophische
Kritik, vol. 36, fasc. 1, pp. 81-119, relying on a false
interpretation of the Protagoras and on a very ambiguous passage of
the Meno, denies that Socrates held virtue to be something that can
be taught; cf. ibid., p. 100 ff.
193.
Mem., IV, 6, 6.
194.
Mem., IV, 6, 4, 6.
195.
On the concept of σωφροσύνη, see Mem., III, 9, 4; IV, 3, 1. Cf. Zeller,
op. cit., p. 99.
196.
E.g. Kühner: Proleg. ad Xenoph. Memor., 2nd ed., Gothae, 1857,
pp. 4-10; Hurndall, op. cit., p. 29 ff., etc.
197. See the above-cited Buchholtz, pp. 84-94.
198.
See Strümpell, op. cit., p. 93 ff.
199.
Mem., I, 5, 5; II, 1, 19; IV, 5, 9.
200.
Mem., IV, 6, 10-11; III, 9, 2.
201.
Mem., IV, 4, 16.
202.
Allihn: Die Grundlehren der Ethik, p. 277, claims to find the first
germs of unconditional valuation in the Socratic doctrine. We cannot
follow his opinion, which for the rest is expressed with great
sobriety.
203.
Aristot. Eth. Nicom., VI, 13, 5: Σωκράτης μὲν οὖν λόγους τὰς ἀρετὰς
ᾤετο εἶναι· ἐπιστήμας γὰρ εἶναι πάσας· ἡμεῖς δὲ μετὰ λόγου; cf. Eth. Eud.,
I, 5: διόπερ ἐζήτει τί ἐστιν ἀρετή, ἀλλ' οὐ πῶς γίνεται καὶ ἐκ τίνων.
204.
Mem., II, 6, 39: ὅσαι δ' ἐν ἀνθρώποις ἀρεταὶ λέγονται, σκοπούμενος
εὑρήσεις πάσας μαθήσει τε καὶ μελέτῃ αὐξανομένας.
205.
Mem., III, 9, 1. And in another passage religious convictions are
regarded as an incitement to virtue, I, 4, 19.
206.
See Hartenstein: De psychologiae vulgaris origine ab Aristotele
repetenda; and by the same author: Ueber den wissenschaftlichen Werth
der Ethik des Aristoteles, reprinted in his Historisch-philosophische
Abhandlungen, Leipzig, 1870, pp. 107-127, 240-305.
207. Especially Dissen in the work cited above, pp. 59-88.
208.
This opinion was indirectly favoured by Schleiermacher, op. cit.,
p. 290 ff., and then brought into such repute by Brandis that for many
years it has ruled the criticism of philosophers and philologists. From
his first study (Rhein. Museum, 1837) down to his latest history of
Greek philosophy (Die Entwickelungen etc., 1862), Brandis was never
able to free himself from a false valuation of Xenophon's testimony.
One of the most declared adversaries of the Xenophontic Socrates is
Ribbing; see Genetische Darstellung der platonischen Ideenlehre,
Leipzig, 1863, p. 40 ff., where the author strives to convict all the
other expositors of Socratism of inconsistency. We observe here that
historical criticism cannot be carried on with postulates in hand, and
that we cannot see, with Ribbing, in the Xenophontic Socrates the
systematic expression of immorality. Cf. by the same author: Om
Sokrates, Upsala, 1846, and esp. p. 43 ff., a book which we have not
been able to examine properly, as it reached us too late.
209.
Zeller has collected all the passages that bring out this concept,
op. cit., pp. 101-106, but he has then sacrificed the historical ground
of the Socratic position to his own subjective criterion, on which he
has ended by pronouncing largely unfavourable judgements.
210.
A subject to which many critics have returned from Schleiermacher
onwards.
211.
We think it needless to adduce the many passages of the Memorabilia
from which this determination results; it is enough to recall the
concept of εὐπραξία, III, 9, 14, which plainly shows how the
indeterminate representation of well-being, of living well, of success,
which was expressed by the εὖ, was determined by the constancy of the
πρᾶξις, as distinct from εὐτυχία.
212.
Mem., III, 8, 1 ff.
213.
Brandis and Dissen have admitted this confusion in the Xenophontic
Socrates, against the explicit testimony of Xen. Mem. IV, 5, 10;
ibid. 6; and 8, 11. Cf. Hermann: Geschichte etc., p. 335, note 348; and
Hurndall, op. cit., p. 37.
214.
Mem., IV, 6, 8 ff.
215.
Alberti, op. cit., p. 100 ff., speaks of the concept of moral
perfection, of the complete education of man to the absolute principle
of morality, of the substantial idea of the good, and of other such
things, all of which are foreign to the Socrates of history.
216.
See especially Mem., IV, 1, 2.
217. On this point we agree entirely with Strümpell, op. cit.,
p. 134.
218.
This is not the place to develop so complicated a concept as that of
εὐδαιμονία in the history of Greek culture; we shall confine ourselves
to observing that the judgement of Zeller seems to us wholly unfounded
when, op. cit., p. 103, note 4, he speaks of the concept of happiness
as that which, in its generality, constituted the ideal of all the
ancient philosophers. How great a difference there is between Socrates
and Plato alone! and how much greater between the latter and Aristotle!
219.
Eth. Nicom., III, 8, 6: Δοκεῖ δὲ καὶ ἡ ἐμπειρία ἡ περὶ ἕκαστα ἀνδρία τις
εἶναι· ὅθεν ὁ Σωκράτης ᾠήθη ἐπιστήμην εἶναι τὴν ἀνδρίαν. In this
passage it is clear that the word ἐμπειρία belongs to the text of
Aristotle, and not to the Socratic dictum there reported. It is
Hurndall, op. cit., p. 28, who has chiefly insisted on trying to prove
that the Socratic knowledge is empirical.
220.
Especially the above-cited Brandis, whose opinion has been flatly
rejected as arbitrary by Hermann: Geschichte etc., p. 251.
221.
On Hurndall's false interpretation of Aristotle, loc. cit., see
Strümpell, op. cit., p. 159.
222.
Mem., IV, 2, 19 ff.
223.
Mem., III, 9.
224.
In this determination we have followed the easy and natural
interpretation of Strümpell, op. cit., p. 138, which is also that of
Hermann: Geschichte etc., p. 253 and note 340, without entangling
ourselves in an abstruse question raised by Brandis: Rhein. Museum, I,
p. 136; cf. Entwickelungen etc., p. 237 ff., and Hurndall, op. cit., p. 39.
225.
Mem., I, 4, 2-19. It is noteworthy in this passage that Socrates is
driven by the unbelief of Aristodemus to attempt a proof of the
existence of the divinity (καταμαθὼν γὰρ αὐτὸν οὔτε θύοντα τοῖς θεοῖς
οὔτε μαντικῇ χρώμενον, ἀλλὰ καὶ τῶν ποιούντων ταῦτα καταγελῶντα);
which clearly shows how well founded is our opinion that he had in no
way proposed to depart from the traditional concept of positive
religiosity, for it was at the confirmation of the practices of the
cult that the demonstration aimed. Cf. Sext. Emp. adv. Math. IX, 92 ff.
All the passages of the Memorabilia concerning the theology of Socrates
have been collected by Hummel: De Theologia Socratis in Xenoph.
Comment. tradita, Gottingae, 1839; an author with whom we do not agree
as to the interpretation.
226.
The difficulty of Aristodemus is formulated in the words: εἴπερ γε μὴ
τύχῃ τινί, ἀλλ' ὑπὸ γνώμης ταῦτα (i.e. natural things) γίγνεται, I, 4, 4.
227. From the very outset the Socratic demonstration formulates its
result in the words: Πρέπει μὲν τὰ ἐπ' ὠφελείᾳ γιγνόμενα γνώμης ἔργα
εἶναι, I, 4, 5.
228.
This concept of ours appears clearly both from the variety of the
intuitive elements that form part of the demonstration in its full
development, and from the practical aim of rectifying a false
representation of sanctity and of religious worship.
229.
Mem., I, 1, 19..... ἡγεῖτο πάντα μὲν θεοὺς εἰδέναι..... I, 3, 3: οὔτε
τοῖς θεοῖς ἔφη καλῶς ἔχειν; I, 4, 11 ff., where the providence of the
divinity (οἱ θεοί) is discussed; and IV, 3, 3. In other passages θεός
is used simply, I, 4, 13, 17; IV, 3, 6; and θεοῖν I, 4, 18.
230.
Mem., I, 1, 2 ff., and cf. IV, 3, 16.
231.
Mem., I, 4, 5: ὁ ἐξ ἀρχῆς ποιῶν; id. 17: ἔοικε ταῦτα (natural things)
σοφοῦ τινος δημιουργοῦ καὶ φιλοζῴου τεχνήματι (γίγνεσθαι); id. 17: τὴν
τοῦ θεοῦ φρόνησιν..... τὸν τοῦ θεοῦ ὀφθαλμόν; IV, 3, 13: ὁ τὸν ὅλον
κόσμον συντάττων καὶ συνέχων.
232.
Mem., IV, 3, 13. As for Krische, Forschungen etc., p. 220 (*), who
considers the passage apocryphal, see Zeller, op. cit., p. 118, n. 2.
Dindorf, in the Oxford edition of 1862 and in the 3rd Leipzig edition,
Teubner, 1865, rejected the entire chapter as apocryphal; but his
reasons have been well refuted by Breitenbach in the 3rd edition of the
Memorabilia, Berlin, 1863, Introd. pp. 8-10.
233.
Zeller, op. cit., pp. 116-120.
234.
Zeller, op. cit., pp. 117 and 110.
235.
See the whole passage cited above, Mem., I, 4, 2 ff.
236.
Those who have not grasped the importance of this historical position
have been compelled to assume that Socrates followed a double manner of
exposition, adapting himself for convenience to polytheistic forms;
e.g. Hummel, op. cit., p. 10, and Denis: Histoire des idées
morales dans l'antiquité (*), I, p. 79. Cf. Zeller, op. cit., p. 120,
who rejects this false interpretation.
237. Hummel, op. cit., p. 14 ff., has carried the spirit of system too
far in piecing together the predicates of the divinity that are found
here and there in the Memorabilia.
238.
Mem., I, 4, 8, 17 ff., and IV, 3, 12 ff.
239.
Mem., I, 1, 6; 3, 4; 4, 14; 7, 6; II, 2, 14; IV, 3, 12-14, etc.
240.
See, e.g., on the psychology of Pindar and of Aeschylus the above-cited
book of Buchholtz, pp. 17-39 and 131-146.
241.
Mem., IV, 3, 14: ἡ ψυχή, ἥ, εἴπερ καὶ ἄλλο τῶν ἀνθρωπίνων, τοῦ θείου
μετέχει.
242.
Apol., p. 40 ff.
243.
Cyrop., VIII, 7, 19 ff.
244.
Unless one asserts it as apodictically as Brandis does:
Entwickelungen etc., p. 244.
245.
Gorgias, p. 523 ff. Strümpell, op. cit., p. 181, has inferred from this
passage with too much confidence that the conviction of immortality
was at that time something so new as to excite wonder.
Transcriber's Note
The original spelling and punctuation have been
retained; minor typographical errors have been corrected
without annotation.
*** END OF THE PROJECT GUTENBERG EBOOK SOCRATE ***
Updated editions will replace the previous one—the old editions will
be renamed.
Creating the works from print editions not protected by U.S.
copyright law means that no one owns a United States copyright in
these works, so the Foundation (and you!) can copy and distribute it
in the United States without permission and without paying
copyright royalties. Special rules, set forth in the General Terms of
Use part of this license, apply to copying and distributing Project
Gutenberg™ electronic works to protect the PROJECT GUTENBERG™
concept and trademark. Project Gutenberg is a registered trademark,
and may not be used if you charge for an eBook, except by following
the terms of the trademark license, including paying royalties for use
of the Project Gutenberg trademark. If you do not charge anything
for copies of this eBook, complying with the trademark license is
very easy. You may use this eBook for nearly any purpose such as
creation of derivative works, reports, performances and research.
Project Gutenberg eBooks may be modified and printed and given
away—you may do practically ANYTHING in the United States with
eBooks not protected by U.S. copyright law. Redistribution is subject
to the trademark license, especially commercial redistribution.
START: FULL LICENSE
PLEASE READ THIS BEFORE YOU DISTRIBUTE OR USE THIS WORK
To protect the Project Gutenberg™ mission of promoting the free
distribution of electronic works, by using or distributing this work (or
any other work associated in any way with the phrase “Project
Gutenberg”), you agree to comply with all the terms of the Full
Project Gutenberg™ License available with this file or online at
www.gutenberg.org/license.
Section 1. General Terms of Use and
Redistributing Project Gutenberg™
electronic works
1.A. By reading or using any part of this Project Gutenberg™
electronic work, you indicate that you have read, understand, agree
to and accept all the terms of this license and intellectual property
(trademark/copyright) agreement. If you do not agree to abide by all
the terms of this agreement, you must cease using and return or
destroy all copies of Project Gutenberg™ electronic works in your
possession. If you paid a fee for obtaining a copy of or access to a
Project Gutenberg™ electronic work and you do not agree to be
bound by the terms of this agreement, you may obtain a refund
from the person or entity to whom you paid the fee as set forth in
paragraph 1.E.8.
1.B. “Project Gutenberg” is a registered trademark. It may only be
used on or associated in any way with an electronic work by people
who agree to be bound by the terms of this agreement. There are a
few things that you can do with most Project Gutenberg™ electronic
works even without complying with the full terms of this agreement.
See paragraph 1.C below. There are a lot of things you can do with
Project Gutenberg™ electronic works if you follow the terms of this
agreement and help preserve free future access to Project
Gutenberg™ electronic works. See paragraph 1.E below.
1.C. The Project Gutenberg Literary Archive Foundation (“the
Foundation” or PGLAF), owns a compilation copyright in the
collection of Project Gutenberg™ electronic works. Nearly all the
individual works in the collection are in the public domain in the
United States. If an individual work is unprotected by copyright law
in the United States and you are located in the United States, we do
not claim a right to prevent you from copying, distributing,
performing, displaying or creating derivative works based on the
work as long as all references to Project Gutenberg are removed. Of
course, we hope that you will support the Project Gutenberg™
mission of promoting free access to electronic works by freely
sharing Project Gutenberg™ works in compliance with the terms of
this agreement for keeping the Project Gutenberg™ name associated
with the work. You can easily comply with the terms of this
agreement by keeping this work in the same format with its attached
full Project Gutenberg™ License when you share it without charge
with others.
1.D. The copyright laws of the place where you are located also
govern what you can do with this work. Copyright laws in most
countries are in a constant state of change. If you are outside the
United States, check the laws of your country in addition to the
terms of this agreement before downloading, copying, displaying,
performing, distributing or creating derivative works based on this
work or any other Project Gutenberg™ work. The Foundation makes
no representations concerning the copyright status of any work in
any country other than the United States.
1.E. Unless you have removed all references to Project Gutenberg:
1.E.1. The following sentence, with active links to, or other
immediate access to, the full Project Gutenberg™ License must
appear prominently whenever any copy of a Project Gutenberg™
work (any work on which the phrase “Project Gutenberg” appears,
or with which the phrase “Project Gutenberg” is associated) is
accessed, displayed, performed, viewed, copied or distributed:
This eBook is for the use of anyone anywhere in the United
States and most other parts of the world at no cost and with
almost no restrictions whatsoever. You may copy it, give it away
or re-use it under the terms of the Project Gutenberg License
included with this eBook or online at www.gutenberg.org. If you
are not located in the United States, you will have to check the
laws of the country where you are located before using this
eBook.
1.E.2. If an individual Project Gutenberg™ electronic work is derived
from texts not protected by U.S. copyright law (does not contain a
notice indicating that it is posted with permission of the copyright
holder), the work can be copied and distributed to anyone in the
United States without paying any fees or charges. If you are
redistributing or providing access to a work with the phrase “Project
Gutenberg” associated with or appearing on the work, you must
comply either with the requirements of paragraphs 1.E.1 through
1.E.7 or obtain permission for the use of the work and the Project
Gutenberg™ trademark as set forth in paragraphs 1.E.8 or 1.E.9.
1.E.3. If an individual Project Gutenberg™ electronic work is posted
with the permission of the copyright holder, your use and distribution
must comply with both paragraphs 1.E.1 through 1.E.7 and any
additional terms imposed by the copyright holder. Additional terms
will be linked to the Project Gutenberg™ License for all works posted
with the permission of the copyright holder found at the beginning
of this work.
1.E.4. Do not unlink or detach or remove the full Project
Gutenberg™ License terms from this work, or any files containing a
part of this work or any other work associated with Project
Gutenberg™.
1.E.5. Do not copy, display, perform, distribute or redistribute this
electronic work, or any part of this electronic work, without
prominently displaying the sentence set forth in paragraph 1.E.1
with active links or immediate access to the full terms of the Project
Gutenberg™ License.
1.E.6. You may convert to and distribute this work in any binary,
compressed, marked up, nonproprietary or proprietary form,
including any word processing or hypertext form. However, if you
provide access to or distribute copies of a Project Gutenberg™ work
in a format other than “Plain Vanilla ASCII” or other format used in
the official version posted on the official Project Gutenberg™ website
(www.gutenberg.org), you must, at no additional cost, fee or
expense to the user, provide a copy, a means of exporting a copy, or
a means of obtaining a copy upon request, of the work in its original
“Plain Vanilla ASCII” or other form. Any alternate format must
include the full Project Gutenberg™ License as specified in
paragraph 1.E.1.
1.E.7. Do not charge a fee for access to, viewing, displaying,
performing, copying or distributing any Project Gutenberg™ works
unless you comply with paragraph 1.E.8 or 1.E.9.
1.E.8. You may charge a reasonable fee for copies of or providing
access to or distributing Project Gutenberg™ electronic works
provided that:
• You pay a royalty fee of 20% of the gross profits you derive
from the use of Project Gutenberg™ works calculated using the
method you already use to calculate your applicable taxes. The
fee is owed to the owner of the Project Gutenberg™ trademark,
but he has agreed to donate royalties under this paragraph to
the Project Gutenberg Literary Archive Foundation. Royalty
payments must be paid within 60 days following each date on
which you prepare (or are legally required to prepare) your
periodic tax returns. Royalty payments should be clearly marked
as such and sent to the Project Gutenberg Literary Archive
Foundation at the address specified in Section 4, “Information
about donations to the Project Gutenberg Literary Archive
Foundation.”
• You provide a full refund of any money paid by a user who
notifies you in writing (or by e-mail) within 30 days of receipt
that s/he does not agree to the terms of the full Project
Gutenberg™ License. You must require such a user to return or
destroy all copies of the works possessed in a physical medium
and discontinue all use of and all access to other copies of
Project Gutenberg™ works.
• You provide, in accordance with paragraph 1.F.3, a full refund of
any money paid for a work or a replacement copy, if a defect in
the electronic work is discovered and reported to you within 90
days of receipt of the work.
• You comply with all other terms of this agreement for free
distribution of Project Gutenberg™ works.
1.E.9. If you wish to charge a fee or distribute a Project Gutenberg™
electronic work or group of works on different terms than are set
forth in this agreement, you must obtain permission in writing from
the Project Gutenberg Literary Archive Foundation, the manager of
the Project Gutenberg™ trademark. Contact the Foundation as set
forth in Section 3 below.
1.F.
1.F.1. Project Gutenberg volunteers and employees expend
considerable effort to identify, do copyright research on, transcribe
and proofread works not protected by U.S. copyright law in creating
the Project Gutenberg™ collection. Despite these efforts, Project
Gutenberg™ electronic works, and the medium on which they may
be stored, may contain “Defects,” such as, but not limited to,
incomplete, inaccurate or corrupt data, transcription errors, a
copyright or other intellectual property infringement, a defective or
damaged disk or other medium, a computer virus, or computer
codes that damage or cannot be read by your equipment.
1.F.2. LIMITED WARRANTY, DISCLAIMER OF DAMAGES - Except for
the “Right of Replacement or Refund” described in paragraph 1.F.3,
the Project Gutenberg Literary Archive Foundation, the owner of the
Project Gutenberg™ trademark, and any other party distributing a
Project Gutenberg™ electronic work under this agreement, disclaim
all liability to you for damages, costs and expenses, including legal
fees. YOU AGREE THAT YOU HAVE NO REMEDIES FOR
NEGLIGENCE, STRICT LIABILITY, BREACH OF WARRANTY OR
BREACH OF CONTRACT EXCEPT THOSE PROVIDED IN PARAGRAPH
1.F.3. YOU AGREE THAT THE FOUNDATION, THE TRADEMARK
OWNER, AND ANY DISTRIBUTOR UNDER THIS AGREEMENT WILL
NOT BE LIABLE TO YOU FOR ACTUAL, DIRECT, INDIRECT,
CONSEQUENTIAL, PUNITIVE OR INCIDENTAL DAMAGES EVEN IF
YOU GIVE NOTICE OF THE POSSIBILITY OF SUCH DAMAGE.
1.F.3. LIMITED RIGHT OF REPLACEMENT OR REFUND - If you
discover a defect in this electronic work within 90 days of receiving
it, you can receive a refund of the money (if any) you paid for it by
sending a written explanation to the person you received the work
from. If you received the work on a physical medium, you must
return the medium with your written explanation. The person or
entity that provided you with the defective work may elect to provide
a replacement copy in lieu of a refund. If you received the work
electronically, the person or entity providing it to you may choose to
give you a second opportunity to receive the work electronically in
lieu of a refund. If the second copy is also defective, you may
demand a refund in writing without further opportunities to fix the
problem.
1.F.4. Except for the limited right of replacement or refund set forth
in paragraph 1.F.3, this work is provided to you ‘AS-IS’, WITH NO
OTHER WARRANTIES OF ANY KIND, EXPRESS OR IMPLIED,
INCLUDING BUT NOT LIMITED TO WARRANTIES OF
MERCHANTABILITY OR FITNESS FOR ANY PURPOSE.
1.F.5. Some states do not allow disclaimers of certain implied
warranties or the exclusion or limitation of certain types of damages.
If any disclaimer or limitation set forth in this agreement violates the
law of the state applicable to this agreement, the agreement shall be
interpreted to make the maximum disclaimer or limitation permitted
by the applicable state law. The invalidity or unenforceability of any
provision of this agreement shall not void the remaining provisions.
1.F.6. INDEMNITY - You agree to indemnify and hold the Foundation,
the trademark owner, any agent or employee of the Foundation,
anyone providing copies of Project Gutenberg™ electronic works in
accordance with this agreement, and any volunteers associated with
the production, promotion and distribution of Project Gutenberg™
electronic works, harmless from all liability, costs and expenses,
including legal fees, that arise directly or indirectly from any of the
following which you do or cause to occur: (a) distribution of this or
any Project Gutenberg™ work, (b) alteration, modification, or
additions or deletions to any Project Gutenberg™ work, and (c) any
Defect you cause.
Section 2. Information about the Mission
of Project Gutenberg™
Project Gutenberg™ is synonymous with the free distribution of
electronic works in formats readable by the widest variety of
computers including obsolete, old, middle-aged and new computers.
It exists because of the efforts of hundreds of volunteers and
donations from people in all walks of life.
Volunteers and financial support to provide volunteers with the
assistance they need are critical to reaching Project Gutenberg™’s
goals and ensuring that the Project Gutenberg™ collection will
remain freely available for generations to come. In 2001, the Project
Gutenberg Literary Archive Foundation was created to provide a
secure and permanent future for Project Gutenberg™ and future
generations. To learn more about the Project Gutenberg Literary
Archive Foundation and how your efforts and donations can help,
see Sections 3 and 4 and the Foundation information page at
www.gutenberg.org.
Section 3. Information about the Project
Gutenberg Literary Archive Foundation
The Project Gutenberg Literary Archive Foundation is a non-profit
501(c)(3) educational corporation organized under the laws of the
state of Mississippi and granted tax exempt status by the Internal
Revenue Service. The Foundation’s EIN or federal tax identification
number is 64-6221541. Contributions to the Project Gutenberg
Literary Archive Foundation are tax deductible to the full extent
permitted by U.S. federal laws and your state’s laws.
The Foundation’s business office is located at 809 North 1500 West,
Salt Lake City, UT 84116, (801) 596-1887. Email contact links and up
to date contact information can be found at the Foundation’s website
and official page at www.gutenberg.org/contact
Section 4. Information about Donations to
the Project Gutenberg Literary Archive
Foundation
Project Gutenberg™ depends upon and cannot survive without
widespread public support and donations to carry out its mission of
increasing the number of public domain and licensed works that can
be freely distributed in machine-readable form accessible by the
widest array of equipment including outdated equipment. Many
small donations ($1 to $5,000) are particularly important to
maintaining tax exempt status with the IRS.
The Foundation is committed to complying with the laws regulating
charities and charitable donations in all 50 states of the United
States. Compliance requirements are not uniform and it takes a
considerable effort, much paperwork and many fees to meet and
keep up with these requirements. We do not solicit donations in
locations where we have not received written confirmation of
compliance. To SEND DONATIONS or determine the status of
compliance for any particular state visit www.gutenberg.org/donate.
While we cannot and do not solicit contributions from states where
we have not met the solicitation requirements, we know of no
prohibition against accepting unsolicited donations from donors in
such states who approach us with offers to donate.
International donations are gratefully accepted, but we cannot make
any statements concerning tax treatment of donations received from
outside the United States. U.S. laws alone swamp our small staff.
Please check the Project Gutenberg web pages for current donation
methods and addresses. Donations are accepted in a number of
other ways including checks, online payments and credit card
donations. To donate, please visit: www.gutenberg.org/donate.
Section 5. General Information About
Project Gutenberg™ electronic works
Professor Michael S. Hart was the originator of the Project
Gutenberg™ concept of a library of electronic works that could be
freely shared with anyone. For forty years, he produced and
distributed Project Gutenberg™ eBooks with only a loose network of
volunteer support.
Project Gutenberg™ eBooks are often created from several printed
editions, all of which are confirmed as not protected by copyright in
the U.S. unless a copyright notice is included. Thus, we do not
necessarily keep eBooks in compliance with any particular paper
edition.
Most people start at our website which has the main PG search
facility: www.gutenberg.org.
This website includes information about Project Gutenberg™,
including how to make donations to the Project Gutenberg Literary
Archive Foundation, how to help produce our new eBooks, and how
to subscribe to our email newsletter to hear about new eBooks.