Mechanical Engineers’ Handbook
Third Edition
Instrumentation, Systems,
Controls, and MEMS
Edited by
Myer Kutz
JOHN WILEY & SONS, INC.
This book is printed on acid-free paper.
Copyright © 2006 by John Wiley & Sons, Inc. All rights reserved.
Published by John Wiley & Sons, Inc., Hoboken, New Jersey.
Published simultaneously in Canada.
No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form
or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as
permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior
written permission of the Publisher, or authorization through payment of the appropriate per-copy fee
to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400,
fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission
should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street,
Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at
http://www.wiley.com/go/permission.
Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts
in preparing this book, they make no representations or warranties with respect to the accuracy or
completeness of the contents of this book and specifically disclaim any implied warranties of
merchantability or fitness for a particular purpose. No warranty may be created or extended by sales
representatives or written sales materials. The advice and strategies contained herein may not be
suitable for your situation. The publisher is not engaged in rendering professional services, and you
should consult a professional where appropriate. Neither the publisher nor author shall be liable for
any loss of profit or any other commercial damages, including but not limited to special, incidental,
consequential, or other damages.
For general information on our other products and services, please contact our Customer Care
Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993
or fax (317) 572-4002.
Wiley also publishes its books in a variety of electronic formats. Some content that appears in print
may not be available in electronic books. For more information about Wiley products, visit our web
site at www.wiley.com.
Library of Congress Cataloging-in-Publication Data:
Mechanical engineers’ handbook/edited by Myer Kutz.—3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN-13 978-0-471-44990-4
ISBN-10 0-471-44990-3 (cloth)
1. Mechanical engineering—Handbooks, manuals, etc. I. Kutz, Myer.
TJ151.M395 2005
621—dc22
2005008603
Printed in the United States of America.
10 9 8 7 6 5 4 3 2 1
To Bill and Judy, always there
Contents
Preface ix
Vision Statement xi
Contributors xiii
PART 1 INSTRUMENTATION 1
1. Instrument Statics 3
Jerry Lee Hall, Sriram Sundararajan, and Mahmood Naim
2. Input and Output Characteristics 32
Adam C. Bell
3. Bridge Transducers 69
Patrick L. Walter
4. Measurements 116
E. L. Hixson and E. A. Ripperger
5. Temperature and Flow Transducers 131
Robert J. Moffat
6. Signal Processing 189
John Turnbull
7. Data Acquisition and Display Systems 209
Philip C. Milliman
8. Digital Integrated Circuits: A Practical Application 239
Todd Rhoad and Keith Folken
PART 2 SYSTEMS, CONTROLS, AND MEMS 255
9. Systems Engineering: Analysis, Design, and Information Processing for
Analysis and Design 257
Andrew P. Sage
10. Mathematical Models of Dynamic Physical Systems 300
K. Preston White, Jr.
11. Basic Control Systems Design 383
William J. Palm III
12. Closed-Loop Control System Analysis 443
Suhada Jayasuriya
13. Control System Performance Modification 503
Suhada Jayasuriya
14. Servoactuators for Closed-Loop Control 542
Karl N. Reid and Syed Hamid
15. Controller Design 620
Thomas Peter Neal
16. General-Purpose Control Devices 678
James H. Christensen, Robert J. Kretschmann, Sujeet Chand,
and Kazuhiko Yokoyama
17. State-Space Methods for Dynamic Systems Analysis 717
Krishnaswamy Srinivasan
18. Control System Design Using State-Space Methods 757
Krishnaswamy Srinivasan
19. Neural Networks in Feedback Control Systems 791
F. L. Lewis and Shuzhi Sam Ge
20. Mechatronics 826
Shane Farritor
21. Introduction to Microelectromechanical Systems (MEMS):
Design and Application 863
M. E. Zaghloul
Index 877
Preface
The second volume of the third edition of the Mechanical Engineers’ Handbook (‘‘ME3’’)
comprises two major parts: Part 1, Instrumentation, with eight chapters, and Part 2,
Systems, Controls, and MEMS, with 13 chapters. The two parts are linked in the sense that
most feedback control systems require measurement transducers. Most of the chapters in this
volume originated not only in earlier editions of the Mechanical Engineers’ Handbook but
also in a book called Instrumentation and Control, which was edited by Chester L. Nachtigal
and published by Wiley in 1990. Some of these chapters have been either updated or exten-
sively revised. Some have been replaced. Others, which present timeless, fundamental con-
cepts, have been included without change.1
In addition, there are chapters that are entirely
new, including Digital Integrated Circuits: A Practical Application (Chapter 8), Neural
Networks in Feedback Control Systems (Chapter 19), Mechatronics (Chapter 20), and Introduction to
Microelectromechanical Systems (MEMS): Design and Application (Chapter 21).
The instrumentation chapters basically are arranged, as they were in the Nachtigal vol-
ume, in the order of the flow of information in real measurement systems. These chapters
start with fundamentals of transducer design, present transducers used by mechanical engi-
neers, including strain gages, temperature transducers such as thermocouples and thermistors,
and flowmeters, and then discuss issues involved in processing signals from transducers and
in acquiring and displaying data. A general chapter on measurement fundamentals, updated
from the second edition of Mechanical Engineers’ Handbook (‘‘ME2’’), as well as the chapter
on digital integrated circuits have been added to the half-dozen Instrumentation and Control
chapters in this first part.
The systems and control chapters in the second part of this volume start with three
chapters from ME2, two of which have been updated, and move on to seven chapters from
Nachtigal, only two of which required updating. These ten chapters present a general dis-
cussion of systems engineering; fundamentals of control system design, analysis, and per-
formance modification; and detailed information about the design of servoactuators,
controllers, and general-purpose control devices. This second part of Vol. II concludes with
the chapters, all of them new to the handbook, on what are termed ‘‘new departures’’—
neural networks, mechatronics, and MEMS. These topics have become increasingly impor-
tant to mechanical engineers in recent years.
1 A new edition of Instrumentation and Control has been sought after but has never appeared. Because
several chapters had numerous contributors, it proved impossible to update or revise them or even to
find anyone to write new chapters on the same topics on the schedule that other contributors could meet.
Because the material in these chapters was outdated, they have been dropped from this edition, but may
be revised for future editions.
Vision for the Third Edition
Basic engineering disciplines are not static, no matter how old and well established they are.
The field of mechanical engineering is no exception. Movement within this broadly based
discipline is multidimensional. Even the classic subjects on which the discipline was founded,
such as mechanics of materials and heat transfer, continue to evolve. Mechanical engineers
continue to be heavily involved with disciplines allied to mechanical engineering, such as
industrial and manufacturing engineering, which are also constantly evolving. Advances in
other major disciplines, such as electrical and electronics engineering, have significant impact
on the work of mechanical engineers. New subject areas, such as neural networks, suddenly
become all the rage.
In response to this exciting, dynamic atmosphere, the Mechanical Engineers’ Handbook
is expanding dramatically, from one volume to four volumes. The third edition not only
incorporates updates and revisions to chapters in the second edition, which was published
in 1998, but also adds 24 chapters on entirely new subjects, along with updates and
revisions to chapters in the Handbook of Materials Selection, which was published in
2002, and in Instrumentation and Control, edited by Chester Nachtigal and published
in 1990.
The four volumes of the third edition are arranged as follows:
Volume I: Materials and Mechanical Design—36 chapters
Part 1. Materials—14 chapters
Part 2. Mechanical Design—22 chapters
Volume II: Instrumentation, Systems, Controls, and MEMS—21 chapters
Part 1. Instrumentation—8 chapters
Part 2. Systems, Controls, and MEMS—13 chapters
Volume III: Manufacturing and Management—24 chapters
Part 1. Manufacturing—12 chapters
Part 2. Management, Finance, Quality, Law, and Research—12 chapters
Volume IV: Energy and Power—31 chapters
Part 1: Energy—15 chapters
Part 2: Power—16 chapters
The mechanical engineering literature is extensive and has been so for a considerable
period of time. Many textbooks, reference works, and manuals as well as a substantial
number of journals exist. Numerous commercial publishers and professional societies, par-
ticularly in the United States and Europe, distribute these materials. The literature grows
continuously, as applied mechanical engineering research finds new ways of designing, con-
trolling, measuring, making and maintaining things, and monitoring and evaluating technol-
ogies, infrastructures, and systems.
Most professional-level mechanical engineering publications tend to be specialized, di-
rected to the specific needs of particular groups of practitioners. Overall, however, the me-
chanical engineering audience is broad and multidisciplinary. Practitioners work in a variety
of organizations, including institutions of higher learning, design, manufacturing, and
consulting firms as well as federal, state, and local government agencies. A rationale for an
expanded general mechanical engineering handbook is that no single practitioner, researcher,
or bureaucrat can be an expert on every topic, especially in so broad and multidisciplinary
a field, and each may need an authoritative professional summary of a subject with which
he or she is not intimately familiar.
Starting with the first edition, which was published in 1986, our intention has always
been that the Mechanical Engineers’ Handbook stand at the intersection of textbooks, re-
search papers, and design manuals. For example, we want the handbook to help young
engineers move from the college classroom to the professional office and laboratory where
they may have to deal with issues and problems in areas they have not studied extensively
in school.
With this expanded third edition, we have produced a practical reference for the me-
chanical engineer who is seeking to answer a question, solve a problem, reduce a cost, or
improve a system or facility. The handbook is not a research monograph. The chapters offer
design techniques, illustrate successful applications, or provide guidelines to improving the
performance, the life expectancy, the effectiveness, or the usefulness of parts, assemblies,
and systems. The purpose is to show readers what options are available in a particular
situation and which option they might choose to solve problems at hand.
The aim of this expanded handbook is to serve as a source of practical advice to readers.
We hope that the handbook will be the first information resource a practicing engineer
consults when faced with a new problem or opportunity—even before turning to other print
sources, even officially sanctioned ones, or to sites on the Internet. (The second edition has
been available online on knovel.com.) In each chapter, the reader should feel that he or she
is in the hands of an experienced consultant who is providing sensible advice that can lead
to beneficial action and results.
Can a single handbook, even spread out over four volumes, cover this broad, interdis-
ciplinary field? We have designed the third edition of the Mechanical Engineers’ Handbook
as if it were serving as a core for an Internet-based information source. Many chapters in
the handbook point readers to information sources on the Web dealing with the subjects
addressed. Furthermore, where appropriate, enough analytical techniques and data are pro-
vided to allow the reader to employ a preliminary approach to solving problems.
The contributors have written, to the extent their backgrounds and capabilities make
possible, in a style that reflects practical discussion informed by real-world experience. We
would like readers to feel that they are in the presence of experienced teachers and con-
sultants who know about the multiplicity of technical issues that impinge on any topic within
mechanical engineering. At the same time, the level is such that students and recent graduates
can find the handbook as accessible as experienced engineers.
Contributors
Adam C. Bell
Dartmouth, Nova Scotia, Canada
Sujeet Chand
Rockwell Automation
Milwaukee, Wisconsin
James H. Christensen
Holobloc, Inc.
Cleveland Heights, Ohio
Shane Farritor
University of Nebraska–Lincoln
Lincoln, Nebraska
Keith Folken
Peoria, Illinois
Shuzhi Sam Ge
National University of Singapore
Singapore
Jerry Lee Hall
Hall-Wade Engineering Services
and
Iowa State University
Ames, Iowa
Syed Hamid
Halliburton Services
Duncan, Oklahoma
E. L. Hixson
University of Texas
Austin, Texas
Suhada Jayasuriya
Texas A&M University
College Station, Texas
Robert J. Kretschmann
Rockwell Automation
Mayfield Heights, Ohio
F. L. Lewis
University of Texas at Arlington
Fort Worth, Texas
Philip C. Milliman
Weyerhaeuser Company
Federal Way, Washington
Robert J. Moffat
Stanford University
Stanford, California
Mahmood Naim
Union Carbide Corporation
Indianapolis, Indiana
Thomas Peter Neal
Lake View, New York
William J. Palm III
University of Rhode Island
Kingston, Rhode Island
Karl N. Reid
Oklahoma State University
Stillwater, Oklahoma
Todd Rhoad
Austin, Texas
E. A. Ripperger
University of Texas
Austin, Texas
Andrew P. Sage
George Mason University
Fairfax, Virginia
Krishnaswamy Srinivasan
The Ohio State University
Columbus, Ohio
Sriram Sundararajan
Iowa State University
Ames, Iowa
John Turnbull
Case Western Reserve University
Cleveland, Ohio
Patrick L. Walter
Texas Christian University
Fort Worth, Texas
K. Preston White, Jr.
University of Virginia
Charlottesville, Virginia
Kazuhiko Yokoyama
Yaskawa Electric Corporation
Tokyo, Japan
M. E. Zaghloul
The George Washington University
Washington, D.C.
PART 1
INSTRUMENTATION
CHAPTER 1
INSTRUMENT STATICS
Jerry Lee Hall
Department of Mechanical Engineering
Iowa State University
Ames, Iowa
Sriram Sundararajan
Department of Mechanical Engineering
Iowa State University
Ames, Iowa
Mahmood Naim
Union Carbide Corporation
Indianapolis, Indiana
1 TERMINOLOGY 3
1.1 Transducer Characteristics 3
1.2 Definitions 4
2 STATIC CALIBRATION 6
2.1 Calibration Process 6
2.2 Fitting Equations to Calibration
Data 6
3 STATISTICS IN THE
MEASUREMENT PROCESS 9
3.1 Unbiased Estimates 9
3.2 Sampling 9
3.3 Types of Errors 10
3.4 Propagation of Error or
Uncertainty 10
3.5 Uncertainty Interval 12
3.6 Amount of Data to Take 14
3.7 Goodness of Fit 15
3.8 Probability Density Functions 16
3.9 Determination of Confidence
Limits on μ 17
3.10 Confidence Limits on
Regression Lines 18
3.11 Inference and Comparison 22
REFERENCES 31
1 TERMINOLOGY
1.1 Transducer Characteristics
A measurement system extracts information about a measurable quantity from some medium
of interest and communicates this measured data to the observer. The measurement of any
variable is accomplished by an instrumentation system composed of transducers. Each trans-
ducer is an energy conversion device and requires energy transfer into the device before the
variable of interest can be detected.
The Instrument Society of America (ISA) defines a transducer as ‘‘a device that provides
usable output in response to a specified measurand.’’ The measurand is ‘‘a physical quantity,
property or condition which is measured.’’ The output is ‘‘the electrical quantity, produced
by a transducer, which is a function of the applied measurand.’’1
It should be made very clear that the act of measurement involves transfer of energy
between the measured medium and the measuring system and hence the measured quantity
is disturbed to some extent, making a perfect measurement unrealistic. Therefore, pressure
cannot be measured without an accompanying change in volume, force cannot be measured
without an accompanying change in length, and voltage cannot be measured without an
accompanying flow of charge. Instead, measures must be taken to minimize the energy
transfer from the source to be measured if the measurement is to be accurate.
There are several categorical characteristics for a transducer (or measurement system).
When the measurand maintains a steady value or varies very slowly with time, transducer
performance can be described in terms of static characteristics. Instruments associated with
rapidly varying measurands require additional qualifications termed dynamic characteristics.
Other performance descriptors include environmental characteristics (for situations involving
varying environmental operating conditions), reliability characteristics (related to the life
expectancy of the instrument under various operating conditions), theoretical characteristics
(describing the ideal behavior of the instrument in terms of mathematical or graphical rela-
tionships), and noise characteristics (external factors that can contribute to the measurement
process such as electromagnetic surroundings, humidity, acoustic and thermal vibrations,
etc.). In this chapter, we will describe the considerations associated with evaluating numerical
values for the static characteristics of an instrument.
1.2 Definitions
The description of a transducer and its role in a measuring system relies on the
definitions that follow. Further details of these definitions can be found in other works.2–4
Static calibration is the process of measuring the static characteristics of an instrument.
This involves applying a range of known values of static input to the instrument and
recording the corresponding outputs. The data obtained are presented in a tabular or
graphical form.
Range is defined by the upper and lower limits of the measured values that an instrument
can measure. Instruments are designed to provide predictable performance and, often,
enhanced linearity over the range specified.
Sensitivity is defined as the change in the output signal relative to the change in the
input signal at an operating point. Sensitivity may be constant over the range of the
input signal to a transducer or it can vary. Instruments that have a constant sensitivity
are called ‘‘linear.’’
Resolution is defined as the smallest change in the input signal that will yield a readable
change in the output of the measuring system at its operating point.
Threshold of an instrument is the minimum input for which there will be an output.
Below this minimum input the instrument will read zero.
Zero of an instrument refers to a selected datum. The output of an instrument is adjusted
to read zero at a predefined point in the measured range. For example, the output of
a Celsius thermometer is zero at the freezing point of water; the output of a pressure
gage may be zero at atmospheric pressure.
Zero drift is the change in output from its set zero value over a specified period of time.
Zero drift occurs due to changes in ambient conditions, changes in electrical condi-
tions, aging of components, or mechanical damage. The error introduced may be
significant when a transducer is used for long-term measurement.
Creep is a change in output occurring over a specific time period while the measurand
is held constant at a value other than zero and all environmental conditions are held
constant.
Figure 1 Schematics illustrating concepts of (a) accuracy and precision, (b) hysteresis, (c) a static error
band, and (d) fitting a curve to the calibration data.
Accuracy is the maximum difference between a measured variable and its
true value. It is usually expressed as a percentage of full-scale output. In the strictest
sense, accuracy is never known because the true value is never really known.
Precision is the difference between a measured variable and the best estimate (as ob-
tained from the measured variable) of the true value of the measured variable. It is a
measure of repeatability. Precise measurements have small dispersion but may have
poor accuracy if they are not close to the true value. Figure 1a shows the differences
between accuracy and precision.
Linearity describes the maximum deviation of the output of an instrument from a best-
fitting straight line through the calibration data. Most instruments are designed so that
the output is a linear function of the input. Linearity is based on the type of straight
line fitted to the calibration data. For example, least-squares linearity is referenced to
that straight line for which the sum of the squares of the residuals is minimized. The
term ‘‘residual’’ refers to the deviations of output readings from their corresponding
values on the straight line fitted through the data.
Hysteresis is the maximum difference in output, at any measured value within the spec-
ified range, when the value is approached first with increasing and then with decreas-
ing measurand. Hysteresis is typically caused by a lag in the action of the sensing
element of the transducer. Loading the instrument through a cycle of first increasing
values, then decreasing values, of the measurand provides a hysteresis loop, as shown
in Fig. 1b. Hysteresis is usually expressed in percent of full-scale output.
Error band is the band of maximum deviation of output values from a specified refer-
ence line or curve. A static error band (see Fig. 1c) is obtained by static calibration.
It is determined on the basis of maximum deviations observed over at least two
consecutive calibration cycles so as to include repeatability. Error band accounts for
deviations that may be due to nonlinearity, nonrepeatability, hysteresis, zero shift,
sensitivity shift, and so forth. It is a convenient way to specify transducer behavior
when individual types of deviations need not be specified nor determined.5
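A short numeric sketch can make the hysteresis definition concrete. The helper and the gage readings below are hypothetical (they do not come from this chapter); the function reports the worst-case difference between upscale and downscale outputs, expressed in percent of full-scale output:

```python
# Hysteresis from a hypothetical up/down calibration cycle.
# Inputs are applied at the same levels going up and coming down;
# hysteresis is the worst-case output difference in percent of full-scale output (FSO).

def hysteresis_pct_fso(up_outputs, down_outputs, full_scale):
    """Maximum |up - down| output difference as a percentage of full-scale output."""
    worst = max(abs(u - d) for u, d in zip(up_outputs, down_outputs))
    return 100.0 * worst / full_scale

# Hypothetical pressure-gage outputs at the same input levels,
# first with increasing and then with decreasing measurand
up   = [0.00, 1.02, 2.05, 3.10, 4.08, 5.00]
down = [0.10, 1.15, 2.20, 3.22, 4.15, 5.00]

print(hysteresis_pct_fso(up, down, full_scale=5.0))  # ~3% of full scale
```

A static error band could be obtained in the same spirit, by taking the maximum deviation from a chosen reference line over at least two consecutive calibration cycles.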
2 STATIC CALIBRATION
2.1 Calibration Process
Calibration is the process of comparison of the output of a measuring system to the values
of a range of known inputs. For example, a pressure gage is calibrated by a device called a
‘‘dead-weight’’ tester, where known pressures are applied to the gage and the output of the
gage is recorded over its complete range of operation.
The calibration signal should, as closely as possible, be the same as the type of input
signal to be measured. Most calibrations are performed by means of static or level calibration
signals since they are usually easy to produce and maintain accurately. However, a measuring
system calibrated with static signals may not read correctly when subjected to the dynamic
input signals since the natural dynamic characteristics and the response characteristics of the
measurement system to the input forcing function would not be accounted for with a static
calibration. A measurement system used for dynamic signals should be calibrated using
known dynamic inputs.
A static calibration should include both increasing and decreasing values of the known
input signal and a repetition of the input signal.6
This allows one to determine hysteresis as
well as the repeatability of the measuring system, as shown in Fig. 1c. The sensitivity of
the measuring system is obtained from the slope of a suitable line or curve plotted through
the calibration points at any level of the input signal.
2.2 Fitting Equations to Calibration Data
Though linear in most cases, the calibration plot of a specific measurement system may
require a choice of a nonlinear functional form for the relationship that best describes the
calibration data, as shown in Fig. 1d. This functional form (or curve fit) may be a standard
polynomial type or may be one of a transcendental function type. Statistics are used to fit a
desired function to the calibration data. A detailed description of the mathematical basis of
the selection process used to determine the appropriate function to fit the data can be found
elsewhere.7
Most of today’s graphing software allows the user to select the type of fit required.
A very common method used to describe the quality of ‘‘fit’’ of a chosen functional form is
the ‘‘least-squares fit.’’ The principle used in making this type of curve fit is to minimize
the sum of the squares of the deviations of the data from the assumed curve. These deviations
from the assumed curve may be due to errors in one or more variables. If the error is in one
variable, the technique is called linear regression and is the common case encountered in
engineering measurements. If several variables are involved, it is called multiple regression.
Two assumptions are often used with the least-squares method: (i) the x variable (usually
the input to the calibration process) has relatively little error as compared to the y (measured)
variable and (ii) the magnitude of the uncertainty in y is not dependent on the magnitude of
the x variable. The methodology for evaluating calibration curves in systems where the
magnitude of the uncertainty in the measured value varies with the value of the input variable
can be found elsewhere.8
Although almost all graphing software packages include the least-squares fit analysis,
thus enabling the user to identify the best-fit curve with minimum effort, a brief description
of the mathematical process is given here. To illustrate the least-squares technique, assume
that an equation of the following polynomial form will fit a given set of data:

y = a + bx + cx² + ··· + mxᵏ (1)

If the data points are denoted by (xᵢ, yᵢ), where i ranges from 1 to n, then the sum of the
squared residuals is

Σᵢ₌₁ⁿ (yᵢ − y)² = R (2)
The least-squares method requires that R be minimized. The parameters used for the
minimization are the unknown coefficients a, b, c, . . . , m in the assumed equation. The
following differentiation yields k + 1 equations called ‘‘normal equations’’ to determine the
k + 1 coefficients in the assumed relation. The coefficients a, b, c, . . . , m are found by
solving the normal equations simultaneously:

∂R/∂a = ∂R/∂b = ∂R/∂c = ··· = ∂R/∂m = 0 (3)
For example, if k = 1, then the polynomial is of first degree (a straight line) and the normal
equations become

Σyᵢ = an + bΣxᵢ    Σxᵢyᵢ = aΣxᵢ + bΣxᵢ² (4)

and the coefficients a and b are

a = (Σx²Σy − ΣxΣxy)/[nΣx² − (Σx)²]    b = (nΣxy − ΣxΣy)/[nΣx² − (Σx)²] (5)
The resulting curve (y = a + bx) is called the regression curve of y on x. It can be shown
that a regression curve fit by the least-squares method passes through the centroid (x̄, ȳ) of
the data.9 If two new variables X and Y are defined as

X = x − x̄   and   Y = y − ȳ (6)

then

ΣX = 0 = ΣY (7)
Substitution of these new variables in the normal equations for a straight line yields the
following result for a and b:

a = 0    b = ΣXY/ΣX² (8)

The regression line becomes

Y = bX (9)

Table 1 Statistical Analysis for Example 1

                     Regression Coefficient
Assumed Equation     a          b            Residual    Maximum     Correlation
                                             Error, R    Deviation   Coefficientᵃ
y = bx               —          1.254        56.767      10.245      0.700
y = a + bx           9.956      0.907        20.249      7.178       0.893
y = ae^(bx)          10.863     0.040        70.274      18.612      0.581
y = 1/(a + bx)       0.098      −0.002       14257.327   341.451     −74.302
y = a + b/x          40.615     −133.324     32.275      9.326       0.830
y = a + b log x      −14.188    15.612       1.542       2.791       0.992
y = ax^b             3.143      0.752        20.524      8.767       0.892
y = x/(a + bx)       0.496      0.005        48.553      14.600      0.744

ᵃ Defined in Section 4.3.7.
The technique described above will yield a curve based on an assumed form that will fit a
set of data. This curve may not be the best one that could be found, but it will be the best
based on the assumed form. Therefore, the ‘‘goodness of fit’’ must be determined to check
that the fitted curve follows the physical data as closely as possible.
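Equations (4) and (5) can be implemented directly. The sketch below uses illustrative data (not from the chapter) and also confirms the property stated above, that the least-squares line passes through the centroid (x̄, ȳ) of the data:

```python
# Straight-line least squares via the closed-form normal equations, Eq. (5).

def linear_least_squares(x, y):
    """Return (a, b) for y = a + b*x minimizing the sum of squared residuals."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    denom = n * sxx - sx * sx
    a = (sxx * sy - sx * sxy) / denom
    b = (n * sxy - sx * sy) / denom
    return a, b

# Illustrative calibration-style data, roughly y = 2x
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]
a, b = linear_least_squares(x, y)

# The regression line passes through the centroid of the data
xbar, ybar = sum(x) / len(x), sum(y) / len(y)
print(a, b, a + b * xbar - ybar)  # the last value is essentially zero
```

The slope b is the (constant) sensitivity of a linear instrument, obtained exactly as described in Section 2.1, from the slope of the line fitted through the calibration points.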
Example 1 Choice of Functional Form. Find a suitable equation to represent the follow-
ing calibration data:
x = [3, 4, 5, 7, 9, 12, 13, 14, 17, 20, 23, 25, 34, 38, 42, 45]
y = [5.5, 7.75, 10.6, 13.4, 18.5, 23.6, 26.2, 27.8, 30.5, 33.5, 35, 35.4, 41, 42.1, 44.6, 46.2]
Solution: A computer program can be written or graphing software (e.g., Microsoft Excel)
used to fit the data to several assumed forms, as given in Table 1. The data can be plotted
and the best-fitting curve selected on the basis of minimum residual error, maximum cor-
relation coefficient, or smallest maximum absolute deviation, as shown in Table 1.
The analysis shows that the assumed equation y = a + b·log x represents the best fit
through the data, as it has the smallest maximum deviation and the highest correlation
coefficient. Also note that the equation y = 1/(a + bx) is not appropriate for these data
because it has a negative correlation coefficient.
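The comparison of assumed forms in Example 1 can be sketched with the standard library alone: each candidate form is reduced to a straight line in a transformed variable, and the correlation coefficients are compared. (The printed values may differ from Table 1 in the last digits because of rounding; only two of the eight forms are shown here.)

```python
import math

# Calibration data from Example 1.
x = [3, 4, 5, 7, 9, 12, 13, 14, 17, 20, 23, 25, 34, 38, 42, 45]
y = [5.5, 7.75, 10.6, 13.4, 18.5, 23.6, 26.2, 27.8, 30.5, 33.5,
     35, 35.4, 41, 42.1, 44.6, 46.2]

def corr(u, v):
    """Correlation coefficient r between two sequences."""
    n = len(u)
    ub, vb = sum(u) / n, sum(v) / n
    suv = sum((a - ub) * (b - vb) for a, b in zip(u, v))
    suu = sum((a - ub) ** 2 for a in u)
    svv = sum((b - vb) ** 2 for b in v)
    return suv / math.sqrt(suu * svv)

# Straight line in x versus straight line in log x (the form y = a + b log x).
r_lin = corr(x, y)
r_log = corr([math.log(xi) for xi in x], y)
print(r_lin, r_log)  # the logarithmic form should correlate best, as Table 1 shows
```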
Example 2 Nonlinear Regression. Find the regression coefficients a, b, and c if the
assumed behavior of the (x, y) data is y = a + bx + cx²:
x = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
y = [0.26, 0.38, 0.55, 0.70, 1.05, 1.36, 1.75, 2.20, 2.70, 3.20, 3.75, 4.40, 5.00, 6.00]
From Eqs. (1)–(3), the normal equations are

Σy_i = a·n + b·Σx_i + c·Σx_i²
Σx_i y_i = a·Σx_i + b·Σx_i² + c·Σx_i³    (10)
Σx_i² y_i = a·Σx_i² + b·Σx_i³ + c·Σx_i⁴

A simultaneous solution of the above equations provides the desired regression coefficients:

a = 0.1959    b = −0.0205    c = 0.0266    (11)
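The normal equations of Eq. (10) can be solved directly; the following pure-Python sketch reproduces the coefficients of Eq. (11) by Gaussian elimination:

```python
# Quadratic least squares y = a + b*x + c*x**2 by solving the three
# normal equations of Eq. (10) directly.
x = list(range(2, 16))
y = [0.26, 0.38, 0.55, 0.70, 1.05, 1.36, 1.75, 2.20,
     2.70, 3.20, 3.75, 4.40, 5.00, 6.00]
n = len(x)
S = lambda p: sum(xi ** p for xi in x)                    # sums of powers of x
T = lambda p: sum((xi ** p) * yi for xi, yi in zip(x, y)) # sums of x^p * y

# Augmented matrix of the normal equations [a, b, c | rhs].
A = [[n,    S(1), S(2), T(0)],
     [S(1), S(2), S(3), T(1)],
     [S(2), S(3), S(4), T(2)]]

# Gauss-Jordan elimination: normalize each pivot row, clear its column.
for i in range(3):
    p = A[i][i]
    A[i] = [v / p for v in A[i]]
    for j in range(3):
        if j != i:
            A[j] = [vj - A[j][i] * vi for vj, vi in zip(A[j], A[i])]

a, b, c = A[0][3], A[1][3], A[2][3]
print(a, b, c)  # approximately 0.1959, -0.0205, 0.0266, matching Eq. (11)
```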
As was mentioned previously, a measurement process can only give you the best estimate
of the measurand. In addition, engineering measurements taken repeatedly under seemingly
identical conditions normally show variations in measured values. A statistical treatment of
measurement data is therefore a necessity.
3 STATISTICS IN THE MEASUREMENT PROCESS
3.1 Unbiased Estimates
Data sets typically have two very important characteristics: central tendency (or most rep-
resentative value) and dispersion (or scatter). Other characteristics such as skewness and
kurtosis (or peakedness) may also be of importance but will not be considered here.10
A basic problem in every quantitative experiment is that of obtaining an unbiased es-
timate of the true value of a quantity as well as an unbiased measure of the dispersion or
uncertainty in the measured variable. Philosophically, in any measurement process a
deterministic event is observed through a "foggy" window. If this fog could be removed,
ultimate refinement of the measuring system would result in every measured value being the
true value μ. Because errors occur in all measurements, one can never exactly measure the
true value of any quantity. Continued refinement of the methods used in any measurement
will yield closer and closer approximations, but there is always a limit beyond which
refinements cannot be made. To determine the relation that a measured value has with the
true value, we must specify the unbiased estimate x̄ of the true value μ of a measurement
and its uncertainty (or precision) interval Wx based on a desired confidence level (or
probability of occurrence).
An unbiased estimator9 exists if the mean of its distribution is the same as the quantity
being estimated. Thus, for the sample mean x̄ to be an unbiased estimator of the population
mean μ, the mean of the distribution of sample means must be equal to the population mean.
3.2 Sampling
Unbiased estimates for determining population mean, population variance, and variance of
the sample mean depend on the type of sampling procedure used.
Sampling with Replacement (Random Sampling)

μ̂ = x̄    (12)

where x̄ is the sample mean and μ̂ is the unbiased estimate of the population mean μ;

σ̂² = S²·[n/(n − 1)]

where S² is the sample variance

S² = Σ(x_i − x̄)²/n    (13)

and

σ̂_x̄² = σ̂²/n    (14)

where σ̂_x̄² is the variance of the mean.

Sampling without Replacement (Usual Case)

μ̂ = x̄    (15)

σ̂² = S²·[n/(n − 1)]·[(N − 1)/N]    (16)

where N is the population size and n the sample size, and

σ̂_x̄² = (σ̂²/n)·[(N − n)/(N − 1)]    (17)
Note that sampling without replacement from an extremely large population is equivalent to
random sampling.
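A short sketch of these estimators, Eqs. (12)–(17) (the sample values are invented, and N is a hypothetical finite population size):

```python
# Unbiased estimates from a sample, Eqs. (12)-(17).
sample = [9.8, 10.2, 10.1, 9.9, 10.0, 10.3]
n = len(sample)
xbar = sum(sample) / n                            # sample mean, Eqs. (12)/(15)
S2 = sum((v - xbar) ** 2 for v in sample) / n     # sample variance, Eq. (13)

# Sampling with replacement (random sampling):
var_hat = S2 * n / (n - 1)                        # unbiased population variance
var_mean = var_hat / n                            # variance of the mean, Eq. (14)

# Sampling without replacement from a finite population of size N:
N = 100
var_hat_wo = S2 * (n / (n - 1)) * ((N - 1) / N)         # Eq. (16)
var_mean_wo = (var_hat_wo / n) * ((N - n) / (N - 1))    # Eq. (17)

# As N grows, the without-replacement results approach the random-sampling ones.
print(var_hat, var_hat_wo)
```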
3.3 Types of Errors
There are at least three types of errors that one must consider in making measurements.
They are systematic (or fixed) errors, illegitimate errors (or mistakes), and random errors:
Systematic errors are of consistent form. They result from conditions or procedures that
are correctable. This type of error may generally be eliminated by calibration.
Illegitimate errors are mistakes and should not exist. They may be eliminated by using
care in the experiment, proper measurement procedures, and repetition of the mea-
surement.
Random errors are accidental errors that occur in all measurements. They are charac-
terized by their inconsistent nature, and their origin cannot be determined in the
measurement process. These errors are estimated by statistical analysis.
If the illegitimate errors can be eliminated by care and proper measurement procedures and
the systematic errors can be eliminated by calibrating the measurement system, then the
random errors remain to be determined by statistical analysis to yield the precision of the
measurement.
3.4 Propagation of Error or Uncertainty
In many cases the desired quantity cannot be measured directly but must be calculated from
the most representative value (e.g., the mean) of two or more measured quantities. It is
desirable to know the uncertainty or precision of such calculated quantities.
Precision is specified by quantities called precision indexes (denoted by Wx) that are
calculated from the random errors of a set of measurements. A ±Wx should be specified for
every measured variable. The confidence limits or probability for obtaining the range ±Wx
is generally specified directly or is implied by the particular type of precision index being
used.
The precision index of a calculated quantity depends on the precision indexes of the
measured quantities required for the calculations.9 If the measured quantities are determined
independently and if their distribution about a measure of central tendency is approximately
symmetrical, the following "propagation-of-error" equation is valid11:

W_R² = Σ (∂R/∂x_i)²·W_xi²    (18)

In this equation, R represents the calculated quantity and x1, x2, . . . , xn represent the
measured independent variables so that mathematically we have R = ƒ(x1, x2, . . . , xn). The
precision index is a measure of dispersion about the central tendency and is denoted by W
in Eq. (18). The standard deviation is often used for W; however, any precision index will
do as long as the same type of precision index is used in each term of the equation.
A simplified form of this propagation-of-error equation results if the function R has the
form

R = k·x1^a·x2^b·x3^c ··· xn^m    (19)

where the exponents a, b, . . . , m may be positive or negative, integer or noninteger. The
simplified result for the precision W_R in R is

(W_R/R)² = a²·(W_x1/x1)² + b²·(W_x2/x2)² + ··· + m²·(W_xn/xn)²    (20)
The propagation-of-error equation is also used in planning experiments. If a certain precision
is desired on the calculated result R, the precision of the measured variables can be deter-
mined from this equation. Then, the cost of a proposed measurement system can be deter-
mined as it is directly related to precision.
Example 3 Propagation of Uncertainty. Determine the resistivity and its uncertainty for
a conducting wire of circular cross section from the measurements of resistance, length, and
diameter. Given

R = ρL/A = ρ·4L/(πD²)  or  ρ = πD²R/(4L)    (21)

R = 0.0959 ± 0.0001 Ω    L = 250 ± 2.5 cm    D = 0.100 ± 0.001 cm

where R = wire resistance, Ω
      L = wire length, cm
      A = cross-sectional area = πD²/4, cm²
      ρ = wire resistivity, Ω·cm
Solution: Thus the resistivity is

ρ = (π)(0.100)²(0.0959)/[4(250)] = 3.01 × 10⁻⁶ Ω·cm

The propagation-of-variance (or precision index) equation for ρ reduces to the simplified
form, that is,

(W_ρ/ρ)² = 4(W_D/D)² + (W_R/R)² + (W_L/L)²
         = 4(0.001/0.10)² + (0.0001/0.0959)² + (2.5/250)²
         = 4.00 × 10⁻⁴ + 1.09 × 10⁻⁶ + 1.00 × 10⁻⁴
         = 5.01 × 10⁻⁴

The resulting resistivity ρ and its precision W_ρ are

W_ρ = ρ·√(5.01 × 10⁻⁴) = ±6.74 × 10⁻⁸ Ω·cm
ρ = (3.01 ± 0.07) × 10⁻⁶ Ω·cm
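Example 3 can be reproduced in a few lines; this sketch applies the simplified form of Eq. (20) with the exponents 2 for D, 1 for R, and −1 for L:

```python
import math

# Propagation of the precision indexes through rho = pi*D**2*R/(4*L), Eq. (20).
R, W_R = 0.0959, 0.0001     # resistance, ohm
L, W_L = 250.0, 2.5         # length, cm
D, W_D = 0.100, 0.001       # diameter, cm

rho = math.pi * D ** 2 * R / (4 * L)

# rho ~ D^2 * R^1 * L^-1, so Eq. (20) gives the relative variance:
rel2 = (2 ** 2) * (W_D / D) ** 2 + (W_R / R) ** 2 + (W_L / L) ** 2
W_rho = rho * math.sqrt(rel2)
print(rho, W_rho)  # about 3.01e-6 and 6.7e-8 ohm*cm, as in the text
```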
3.5 Uncertainty Interval
When several measurements of a variable have been obtained to form a data set (multisample
data), the best estimates of the most representative value (mean) and dispersion (standard
deviation) are obtained from the formulas in Section 3.2. When a single measurement exists
(or when the data are taken so that they are equivalent to a single measurement), the standard
deviation cannot be determined and the data are said to be ‘‘single-sample’’ data. Under
these conditions the only estimate of the true value is the single measurement, and the
uncertainty interval must be estimated by the observer.12
It is recommended that the precision
index be estimated as the maximum reasonable error. This corresponds approximately to the
99% confidence level associated with multisample data.
Uncertainty Interval Considering Random Error
Once the unbiased estimates of mean and variance are determined from the data sample, the
uncertainty interval for μ is

μ = μ̂ ± W = μ̂ ± k(ν, γ)·σ̂    (22)

where μ̂ represents the most representative value of μ from the measured data and W is the
uncertainty interval or precision index associated with the estimate of μ. The magnitude of
the precision index or uncertainty interval depends on the confidence level γ (or probability
chosen), the amount of data n, and the type of probability distribution governing the
distribution of measured items.
The uncertainty interval Ŵ can be replaced by k·σ̂, where σ̂ is the standard deviation
(measure of dispersion) of the population as estimated from the sample and k is a constant
that depends on the probability distribution function, the confidence level γ, and the amount
of data n. For example, with a Gaussian distribution the 95% confidence limits are Ŵ =
1.96σ, where k = 1.96 and is independent of n. For a t-distribution, k = 2.78, 2.06, and
1.96 with a sample size of 5, 25, and ∞, respectively, at the 95% level of confidence
probability. Note that ν = n − 1 for the t-distribution. The t-distribution approaches the
Gaussian distribution as n → ∞.
Uncertainty Interval Considering Random Error with Resolution, Truncation, and
Significant Digits
The uncertainty interval W in Eq. (22) assumes a set of measured values with only random
error present. Furthermore, the set of measured values is assumed to have unbounded
significant digits and to have been obtained with a measuring system having infinite resolution.
When finite resolution exists and truncation of digits occurs, the uncertainty interval may be
larger than that predicted by consideration of the random error only. The uncertainty interval
can never be less than the resolution limits or truncation limits of the measured values.
Resolution and Truncation
Let {s_n} be the theoretically possible set of measurements of unbound significant digits from
a measuring system of infinite resolution and let {x_n} be the actual set of measurements
expressed to m significant digits from a measuring system of finite resolution. Then the
quantity s_i − x_i = ±e_i is the resolution or truncation deficiency caused by the measurement
process. The unbiased estimates of mean and variance are

μ̂ = Σs_i/n = s̄  and  σ̂² = Σ(s_i − s̄)²/(n − 1)    (23)

Noting that the set {x_n} is available rather than {s_n}, the required mean and variance are

μ̂ = Σx_i/n ± Σe_i/n = x̄ ± Σe_i/n  and  σ̂² = Σ(x_i − x̄)²/(n − 1)    (24)

The truncation or resolution has no effect on the estimate of variance but does affect the
estimate of the mean. The truncation error e_i is not necessarily distributed randomly and may
all be of the same sign. Thus x̄ can be biased as much as Σe_i/n = ē high or low from the
unbiased estimate of the value of μ, so that μ̂ = x̄ ± ē.
If e_i is a random variable observed through a "cloudy window" with a measuring system
of finite resolution, the value of e_i may be plus or minus but its upper bound is R (the
resolution of the measurement). Thus the resolution error is no larger than R and μ̂ = x̄ ±
Rn/n = x̄ ± R.
If the truncation is never more than that dictated by the resolution limits (R) of the
measurement system, the uncertainty in x̄ as a measure of the most representative value of
μ is never larger than R plus the uncertainty due to the random error. Thus μ̂ = x̄ ±
(Ŵ + R). It should be emphasized that the uncertainty interval can never be less than the
resolution bounds of the measurement. The resolution bounds cannot be reduced without
changing the measurement system.
Significant Digits
When x_i is observed to m significant digits, the uncertainty (except for random error) is never
more than ±5/10^m and the bounds on s_i are equal to x_i ± 5/10^m, so that

x_i − 5/10^m < s_i < x_i + 5/10^m    (25)

The relation for μ̂ for m significant digits is then, from Eq. (24),

μ̂ = x̄ ± Σe_i/n = x̄ ± Σ(5/10^m)/n = x̄ ± 5/10^m    (26)

The estimated value of variance is not affected by the constant magnitude of 5/10^m. When
the uncertainty due to significant digits is combined with the resolution limits and random
error, the uncertainty interval on μ̂ becomes

μ̂ = x̄ ± (Ŵ + R + 5/10^m)    (27)

This illustrates that the number of significant digits of a measurement should be carefully
chosen in relation to the resolution limits of the measuring system so that 5/10^m has about
the same magnitude as R. Additional significant digits would imply more accuracy to the
measurement than would actually exist based on the resolving ability of the measuring
system.
3.6 Amount of Data to Take
Exactly what data to take and how much data to take are two important questions to be
answered in any experiment. Assuming that the correct variables have been measured, the
amount of data to be obtained can be determined by using the relation

μ = x̿ ± (W + R + 5/10^m)    (28)

where it is presumed that several sample sets may exist for estimation of μ and that the
mean of means of the sample sets is denoted by x̿. This equation can be rewritten using Eqs.
(13) and (14) (assuming random sampling):

μ = x̿ ± [k(ν,γ)·σ̂_x̄ + R + 5/10^m]
  = x̿ ± [k(ν,γ)·σ̂/√n + R + 5/10^m]    (29)

The value of n to achieve the difference μ − x̿ within a stated percent of μ can be
determined from

n = {k(ν,γ)·σ̂ / [(%/100)·μ̂ − R − 5/10^m]}²    (30)

This equation can only yield valid values of n once valid estimates of μ̂, σ̂, k, R, and m are
available. This means that the most correct values of n can only be obtained once the
measurement system and data-taking procedure have been specified so that R and m are
known. Furthermore, either a preliminary experiment or a portion of the actual experiment
should be performed to obtain good estimates of μ̂ and σ̂. Because k depends not only on
the type of distribution the data follow but also on the sample size n, the solution is iterative.
Thus, the most valid estimates of the amount of data to take can only be obtained after the
experiment has begun. However, the equation can be quite useful for prediction purposes if
one wishes to estimate values of μ̂, σ̂, k, R, and m. This is especially important in experiments
for which the cost of a single run may be relatively high.
Example 4 Amount of Data to Take. The life for a certain type of automotive tire is to
be established. The mean and standard deviation of the life estimated for these tires are
84,000 and ±7,230 km, respectively, from a sample of nine tires. On the basis of the sample,
how much data are required to establish the life of this type of tire to within ±10% with
90% confidence and a resolution of 5 km?
Solution: Confidence limits are as follows:

μ = μ̂ ± (t·σ̂_x̄ + R + 5/10^m)

μ − μ̂ = (0.10)x̿ = t·σ̂_x̄ + R + 5/10^m = t·σ̂/√n + R + 5/10^m

t/√n = [(0.10)x̿ − R − 5/10^m]/σ̂ = [(0.10)(84,000) − 5]/7230 = 1.6

n    ν    t(ν, 0.10)^a    t/√n
2    1    6.31            3.65
3    2    2.92            1.46
5    4    2.13            0.87

^a From a t-statistic table.9

Thus a sample of three tires is sufficient to establish the tire life within ±10% at a 90%
level of confidence.
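The iterative search implied by Eq. (30) can be sketched with a small built-in t table (one-tail area 0.05 for 90% two-sided confidence). Note that the text's hand calculation rounds t/√n; carrying the arithmetic at full precision gives a somewhat more conservative sample size than the rounded tabulation above:

```python
import math

# Iterative sample-size search of Eq. (30) for the data of Example 4.
# Small table of one-tail 5% t critical values, keyed by nu = n - 1.
t_05 = {1: 6.314, 2: 2.920, 3: 2.353, 4: 2.132, 5: 2.015,
        6: 1.943, 7: 1.895, 8: 1.860}

mu_hat, sigma_hat, R, frac = 84000.0, 7230.0, 5.0, 0.10
target = (frac * mu_hat - R) / sigma_hat   # allowable value of t/sqrt(n)

n = 2
while t_05[n - 1] / math.sqrt(n) > target:
    n += 1
print(n)
```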
3.7 Goodness of Fit
Statistical methods can be used to fit a curve to a given data set. In general, the least-squares
principle is used to minimize the sum of the squares of deviations away from the curve to
be fitted. The deviations from an assumed curve y = ƒ(x) are due to errors in y, in x, or in
both y and x. In most cases the errors in the independent variable x are much smaller than
those in the dependent variable y. Therefore, only the errors in y are considered for the
least-squares curve. The goodness of fit of an assumed curve is defined by the correlation
coefficient r, where

r = ±√[Σ(ŷ_i − ȳ)² / Σ(y_i − ȳ)²] = ±√[1 − Σ(y_i − ŷ_i)² / Σ(y_i − ȳ)²]
  = ±√(1 − σ̂_y,x²/σ̂_y²)    (31)

where Σ(y_i − ȳ)² = total variation (variation about mean)
      Σ(y_i − ŷ_i)² = unexplained variation (variation about regression)
      Σ(ŷ_i − ȳ)² = explained variation (variation based on assumed regression equation)
      σ̂_y = estimated population standard deviation of y variable
      σ̂_y,x = standard error of estimate of y on x

When the correlation coefficient r is zero, the data cannot be explained by the assumed
curve. However, when r is close to ±1, the variation of y with respect to x can be explained
by the assumed curve and a good correlation is indicated between the variables x and y.
The probabilistic statement for the goodness-of-fit test is given by

P[r_calc > r] = α = 1 − γ    (32)

where r_calc is calculated from Eq. (31) and the null and alternate hypotheses are as follows:
H0: No correlation of assumed regression equation with data.
H1: Correlation of regression equation with data.
The goodness of fit for a straight line is determined by comparing r_calc with the value of r
obtained at n − 2 degrees of freedom at a selected confidence level γ from tables. If r_calc >
r, the null hypothesis is rejected and a significant fit of the data within the confidence level
specified is inferred. However, if r_calc < r, the null hypothesis cannot be rejected and no
correlation of the curve fit with the data is inferred at the chosen confidence level.
Example 5 Goodness-of-Fit Test. The given x–y data were fitted to a curve y = a + bx
by the method of least-squares linear regression. Determine the goodness of fit at a 5%
significance level (95% confidence level):
Figure 2 Histogram of a data set and the corresponding frequency polygon.
x = [56, 58, 60, 70, 72, 75, 77, 77, 82, 87, 92, 104, 125]
y = [51, 60, 60, 52, 70, 65, 49, 60, 63, 61, 64, 84, 75]
At α = 0.05, the table value of r is 0.55. Thus P[r_calc > 0.55] = 0.05. The least-squares
regression equation is calculated to be y = 39.32 + 0.30x, and the correlation coefficient is
calculated as r_calc = 0.61. Therefore, a satisfactory fit of the regression line to the data is
inferred at the 5% significance level (95% confidence level).
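The computation in Example 5 can be checked with a short sketch. Because the text rounds intermediate values, the computed r_calc and slope may differ from the quoted figures in the last digits, but the comparison against the table value 0.55, and hence the conclusion, is unchanged:

```python
import math

# Goodness-of-fit check for Example 5: compute r_calc for the
# least-squares line and compare with the 5%-level table value r = 0.55.
x = [56, 58, 60, 70, 72, 75, 77, 77, 82, 87, 92, 104, 125]
y = [51, 60, 60, 52, 70, 65, 49, 60, 63, 61, 64, 84, 75]
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
Sxy = sum((u - xb) * (v - yb) for u, v in zip(x, y))
Sxx = sum((u - xb) ** 2 for u in x)
Syy = sum((v - yb) ** 2 for v in y)

b = Sxy / Sxx                       # slope
a = yb - b * xb                     # intercept
r_calc = Sxy / math.sqrt(Sxx * Syy)

r_table = 0.55                      # r(0.05, n - 2) from tables
assert r_calc > r_table             # reject H0: the fit is significant
print(a, b, r_calc)
```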
3.8 Probability Density Functions
Consider the measurement of a quantity x. Let

x_i = ith measurement of quantity
x̄ = most representative value of measured quantity
d_i = deviation of ith measurement from x̄, = x_i − x̄
n = total number of measurements
Δx = smallest measurable change of x, also known as "least count"
m_j = number of measurements in x_j size group

A histogram and the corresponding frequency polygon obtained from the data are shown in
Fig. 2, where the x_j size group is taken to be inclusive at the lower limit and exclusive at
the upper limit. Thus, each x_j size group corresponds to the following range of values:

x_j − ½Δx_j ≤ x < x_j + ½Δx_j    (33)
The height of any rectangle of the histogram shown is denoted as the relative number m_j/n
and is equal to the statistical probability F_j(x) that a measured value will have the size x_j ±
(Δx_j/2). The area of each rectangle can be made equal to the relative number by transforming
the ordinate of the histogram in the following way: Area = relative number = F_j(x) = m_j/n
= p_j(x)·Δx. Thus

p_j(x) = m_j/(n·Δx)    (34)

The shape of the histogram is preserved in this transformation since the ordinate is merely
changed by a constant scale factor (1/Δx). The resulting diagram is called the probability
density diagram. The sum of the areas underneath all rectangles is then equal to 1 [i.e.,
Σp_j(x)·Δx_j = 1]. In the limit, as the number of data approaches infinity and the least count
becomes very small, a smooth curve called the probability density function is obtained. For
this smooth curve we note that

∫_{−∞}^{+∞} p(x) dx = 1    (35)

and that the probability of any measurement x having values between x_a and x_b is found
from

P(x_a ≤ x ≤ x_b) = ∫_{x_a}^{x_b} p(x) dx    (36)

To integrate this expression, the exact probability density function p(x) is required. Based
on the assumptions made, several forms of frequency distribution laws have been obtained.
The distribution of a proposed set of measurements is usually unknown in advance. However,
the Gaussian (or normal) distribution fits observed data distributions in a large number of
cases. The Gaussian probability density function is given by the expression

p(x) = [1/(σ√(2π))]·e^(−(x − x̄)²/(2σ²))    (37)

where σ is the standard deviation. The standard deviation is a measure of dispersion and is
defined by the relation

σ = √[∫_{−∞}^{+∞} (x − μ)²·p(x) dx / ∫_{−∞}^{+∞} p(x) dx] = √[Σ(x_i − μ)²/n]    (38)
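The transformation of Eq. (34) can be illustrated with a small sketch (the data are invented; the bin width plays the role of the least count Δx):

```python
from collections import Counter

# Turning a histogram into a probability density diagram, Eq. (34).
data = [9.7, 9.8, 9.8, 9.9, 9.9, 9.9, 10.0, 10.0, 10.0, 10.0,
        10.1, 10.1, 10.1, 10.2, 10.2, 10.3]
n = len(data)
dx = 0.1                 # least count (bin width)
x0 = min(data)

# m[j] counts the measurements falling in the x_j size group.
m = Counter(round((v - x0) / dx) for v in data)

# p_j(x) = m_j / (n * dx): same shape as the histogram, ordinate scaled by 1/dx.
p = {j: mj / (n * dx) for j, mj in m.items()}

# The areas p_j(x) * dx must sum to 1, as for a probability density.
total_area = sum(pj * dx for pj in p.values())
print(total_area)
```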
3.9 Determination of Confidence Limits on μ
If a set of measurements is given by a random variable x, then the central limit theorem13
states that the distribution of means, x̄, of samples of size n is Gaussian (normal) with
mean μ and variance σ²/n, that is, x̄ ~ G(μ, σ²/n). [Also, the random variable
z = (x̄ − μ)/(σ/√n) is Gaussian with a mean of zero and a variance of unity, that is,
z ~ G(0,1).] The random variable z is used to determine the confidence limits on μ due to
random error of the measurements when σ is known.
The confidence limit is determined from the following probabilistic statement and the
Gaussian distribution for a desired confidence level γ:

P[−z < (x̄ − μ)/(σ/√n) < z] = γ    (39)

It shows a γ probability or "confidence level" that the experimental value of z will be
between ±z obtained from the Gaussian distribution table. For a 95% confidence level,
z = ±1.96 from the Gaussian table9 and P[−1.96 < z < 1.96] = γ, where z =
(x̄ − μ)/(σ/√n). Therefore, the expression for the 95% confidence limits on μ is

μ = x̄ ± 1.96·σ/√n    (40)

In general,

μ = x̄ ± k·σ/√n    (41)

where k is found from the Gaussian table for the specified value of confidence, γ.
If the population variance is not known and must be estimated from a sample, the
statistic (x̄ − μ)/(σ̂/√n) is not distributed normally but follows the t-distribution. When n
is very large, the t-distribution is the same as the Gaussian distribution. When n is finite, the
value of k is the "t" value obtained from the t-distribution table.9 The probabilistic statement
then becomes

P[−t < (x̄ − μ)/(σ̂/√n) < +t] = γ    (42)

and the inequality yields the expression for the confidence limits on μ:

μ = x̄ ± t·σ̂/√n    (43)

If the effects of resolution and significant digits are included, the expression becomes as
previously indicated in Eq. (29):

μ = x̄ ± (t·σ̂/√n + R + 5/10^m)    (44)
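Equations (39)–(41) translate directly into code for the known-σ case (a sketch; the values of x̄, σ, and n are invented):

```python
import math
from statistics import NormalDist

# Confidence limits on mu with sigma known, Eqs. (39)-(41).
xbar, sigma, n = 10.05, 0.20, 25      # invented sample summary
gamma = 0.95                          # desired confidence level

# Two-sided Gaussian factor k: P[-k < z < k] = gamma.
k = NormalDist().inv_cdf((1 + gamma) / 2)    # about 1.96 for gamma = 0.95

half_width = k * sigma / math.sqrt(n)
lo, hi = xbar - half_width, xbar + half_width
print(lo, hi)
```

With σ estimated from the sample instead, k would come from a t table at ν = n − 1, as in Eq. (43).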
3.10 Confidence Limits on Regression Lines
The least-squares method is used to fit a straight line to data that are either linear or trans-
formed to a linear relation.9,14
The following method assumes that the uncertainty in the
variable x is negligible compared to the uncertainty in the variable y and that the uncertainty
in the variable y is independent of the magnitude of the variable x. Figure 3 and the defi-
nitions that follow are used to obtain confidence levels relative to regression lines fitted to
experimental data:
a = intercept of regression line
b = slope of regression line, = ΣXY/ΣX²
y_i = value of y from data at x = x_i
ŷ_i = value of y from regression line at x = x̂_i; note that ŷ_i = a + bx̂_i for a straight line
ȳ_i = mean estimated value of y_i at x = x_i; mean of distribution of y values at x = x_i;
      if there is only one measurement of y at x = x_i, then that value of y (i.e., y_i) is
      the best estimate of ȳ_i
ν = degrees of freedom in fitting regression line to data (ν = n − 2 for straight line)
σ̂_y,x² = Σ(y_i − ŷ_i)²/ν = unexplained variance (for regression line), where σ̂_y,x is the
      standard deviation of estimate
σ̂_ȳ,x² = σ̂_y,x²/n from central limit theorem
σ̂_b² = σ̂_y,x²/ΣX² = estimate of variance on slope

Figure 3 Schematic of a regression line.
Slope-Centroid Approximation
This method assumes that the placement uncertainty of the regression line is due to
uncertainties in the centroid (x̄, ȳ) of the data and the slope b of the regression line passing
through this centroid. These uncertainties are determined from the following relations and
are shown in Fig. 4:

Centroid:  μ_ȳ = μ̂_y ± t(ν,γ)·σ̂_ȳ,x = ȳ ± t·σ̂_y,x/√n    (45)

Slope:  μ_b = b̂ ± t(ν,γ)·σ̂_b = b̂ ± t·σ̂_y,x/√(ΣX²)    (46)
Point-by-Point Approximation
This is a better approximation than the slope-centroid technique. It gives confidence limits
of the points ȳ_i, where

μ_ȳi = ŷ_i ± t(ν,γ)·σ̂_ȳi    (47)

and

σ̂_ȳi² = σ̂_y,x²·(1/n + X_i²/ΣX_i²)    (48)

At the centroid, where X_i = 0, Eq. (47) reduces to the result for μ_ȳ given in Eq. (45).

Figure 4 Schematic illustrating confidence limits on a regression line (the regression line
ŷ = â + b̂x with slope limits b̂ ± t·σ̂_b).
Line as a Whole
More uncertainty is involved when all points for the line are taken collectively. The "price"
paid for the additional uncertainty is given by replacing t(ν,γ) in the confidence interval
relation by √(2F), where F = F(2, ν, γ) is obtained from an F-table statistical distribution.
Thus the uncertainty interval for placement of the "line as a whole" confidence limit at x
= x_i is found by the formula

μ_Line = ŷ_i ± √(2F)·σ̂_y,x·√(1/n + X_i²/ΣX_i²)    (49)
Future Estimate of a Single Point at x = x_i
This gives the expected confidence limits on a future estimated value of y at x = x_i in
relation to a prior regression estimate. This confidence limit can be found from the relation

μ_yi = ŷ_i ± t(ν,γ)·σ̂_yi    (50)

where ŷ_i is the best estimate of y from the regression line and

σ̂_yi² = σ̂_y,x²·(1 + 1/n + X_i²/ΣX_i²)    (51)

If one uses γ = 0.99, nearly all (99%) of the observed data points should be within the
uncertainty limits calculated by Eq. (51).
Example 6 Confidence Limits on a Regression Line. Calibration data for a copper–
constantan thermocouple are given:

Temperature: x = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100] °C
Voltage: y = [−0.89, −0.53, −0.15, 0.20, 0.61, 1.03, 1.45, 1.88, 2.31, 2.78, 3.22] mV
If the variation of y is expected to be linear over the range of x and the uncertainty in
temperature is much less than the uncertainty in voltage, then:
1. Determine the linear relation between y and x.
2. Test the goodness of fit at the 95% confidence level.
3. Determine the 90% confidence limits of points on the line by the slope-centroid
technique.
4. Determine the 90% confidence limits on the intercept of the regression line.
5. Determine the 90% confidence limits on a future estimated point of temperature at
120°C.
6. Determine the 90% confidence limits on the whole line.
7. How much data would be required to determine the centroid of the voltage values
within 1% at the 90% confidence level?
The calculations are as follows:

x̄ = 50.0    ȳ = 1.0827    y = a + bx
Σx_i = 550    Σy_i = 11.91    Σx_i y_i = 1049.21
Σx_i² = 38,500.0    Σy_i² = 31.6433
Σ(x_i − x̄)² = ΣX_i² = 11,000.00    Σ(y_i − ȳ)² = ΣY_i² = 18.7420
Σ(x_i − x̄)(y_i − ȳ) = ΣX_i Y_i = 453.70
Σ(y_i − ŷ_i)² = 0.0299

1. b = ΣXY/ΣX² = 453.7/11,000 = 0.0412
   a = ȳ − bx̄ = 1.0827 − (0.0412)(50.00) = 1.0827 − 2.0600 = −0.9773

2. r_exp = √[Σ(ŷ_i − ȳ)²/Σ(y_i − ȳ)²] = √[1 − Σ(y_i − ŷ_i)²/Σ(y_i − ȳ)²]
         = ΣXY/√(ΣX²·ΣY²) = 0.998
   r_Table = r(α, ν) = r(0.05, 9) = 0.602
   P[r_exp > r_Table] = α with H0 of no correlation; therefore reject H0 and infer
   significant correlation.

3. t = t(γ, ν) = t(0.90, 9) = 1.833 (see Ref. 6)
   σ̂_y,x² = Σ(y_i − ŷ_i)²/ν = 0.0299/9 = 0.00333
   σ̂_y,x = ±0.0577    σ̂_ȳ,x = ±0.0173
   σ̂_b = σ̂_y,x/√(ΣX²) = ±0.000550
   μ_ȳ = μ̂_y ± t·σ̂_ȳ,x = ȳ ± t·σ̂_ȳ,x = 1.0827 ± (1.833)(0.01737) = 1.0827 ± 0.0318
   μ_b = b̂ ± t·σ̂_b = 0.0412 ± (1.833)(0.00055) = 0.0412 ± 0.00101

4. μ_ȳi = μ̂_ȳi ± t·σ̂_ȳi = ŷ_i ± t·σ̂_y,x·√(1/n + X_i²/ΣX²)
        = −0.9773 ± (1.833)(0.0325) = −0.9773 ± 0.0596  (at the intercept, x = 0)

5. μ_yi = μ̂_yi ± t·σ̂_yi = ŷ_i ± t·σ̂_y,x·√(1 + 1/n + X_i²/ΣX²)
        = (−0.9773 + 4.9500) ± (1.833)(0.0715) = 3.9727 ± 0.1310

6. μ_ȳi,Line = μ̂_ȳi ± √(2F(2, ν))·σ̂_ȳi; for any given point, compare with item 4 above
             = ŷ_i ± √((2)(3.01))·σ̂_ȳi = ŷ_i ± (2.46)·σ̂_ȳi
             = −0.9773 ± 0.0799 for the point of item 4 above

7. μ_ȳ = μ̂_y ± t·σ̂_ȳ,x = ȳ ± t·σ̂_y,x/√n
   σ̂_y,x = ±0.0577
   μ_ȳ − ȳ = (1%)(μ_ȳ) ≈ (1%)(ȳ) = 0.010827
   t/√n = (1%)ȳ/σ̂_y,x = 0.010827/0.0577 = 0.188

From the t table9 at t/√n = 0.188 with ν = n − 2 for a straight line (two constraints):

ν     t(0.10, ν)    t/√n
60    1.671         0.213
90    1.663         0.174
75    1.674         0.188

Therefore n = ν + 2 = 77 is the amount of data to obtain to assure the precision and
confidence level desired.
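Calculations 1–3 of Example 6 can be reproduced with a short sketch (the text rounds its values, so agreement is to a few digits):

```python
import math

# Regression and confidence limits for the thermocouple data of Example 6.
x = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
y = [-0.89, -0.53, -0.15, 0.20, 0.61, 1.03, 1.45, 1.88, 2.31, 2.78, 3.22]
n = len(x)
xb, yb = sum(x) / n, sum(y) / n
Sxx = sum((u - xb) ** 2 for u in x)
Sxy = sum((u - xb) * (v - yb) for u, v in zip(x, y))

b = Sxy / Sxx                        # slope, about 0.0412 mV/degC
a = yb - b * xb                      # intercept, about -0.98 mV
resid2 = sum((v - (a + b * u)) ** 2 for u, v in zip(x, y))
s_yx = math.sqrt(resid2 / (n - 2))   # standard error of estimate

t = 1.833                            # t(9, 0.90) from tables
ci_centroid = t * s_yx / math.sqrt(n)    # half-width on the centroid, Eq. (45)
ci_slope = t * s_yx / math.sqrt(Sxx)     # half-width on the slope, Eq. (46)
print(b, a, ci_centroid, ci_slope)
```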
3.11 Inference and Comparison
Events are not only deterministic but also probabilistic. Under certain specified conditions some
events will always happen. Some other events, however, may or may not happen. Under the
same specified conditions, the latter ones depend on chance and, therefore, the probability
of occurrence of such events is of concern. For example, it is quite certain that a tossed
unbiased die will fall down. However, it is not at all certain which face will appear on top
when the die comes to rest. The probabilistic nature of some events is apparent when ques-
tions of the following type are asked. Does medicine A cure a disease better than medicine
B? What is the ultimate strength of 1020 steel? What total mileage will brand X tire yield?
Which heat treatment process is better for a given part?
Answering such questions involves designing experiments, performing measurements,
analyzing the data, and interpreting the results. In this endeavor two common phenomena
are observed: (1) repeated measurements of the same attribute differ due to measurement
error and resolving capability of the measurement system and (2) corresponding attributes
of identical entities differ due to material differences, manufacturing tolerances, tool wear,
and so on.
Conclusions based on experiments are statistical inferences and can only be made with
some element of doubt. Experiments are performed to make statistical inferences with min-
imum doubt. Therefore, experiments are designed specifying the data required, amount of
data needed, and the confidence limits desired in drawing conclusions. In this process an
instrumentation system is specified, a data-taking procedure is outlined, and a statistical
method is used to make conclusions at preselected confidence levels.
In statistical analysis of experimental data, the descriptive and inference tasks are con-
sidered. The descriptive task is to present a comprehensible set of observations. The inference
task determines the truth of the whole by examination of a sample. The inference task
requires sampling, comparison, and a variety of statistical tests to obtain unbiased estimates
and confidence limits to make decisions.
Statistical Testing
A statistical hypothesis is an assertion relative to the distribution of a random variable. The
test of a statistical hypothesis is a procedure to accept or reject the hypothesis. A hypothesis
is stated such that the experiment attempts to nullify the hypothesis. Therefore, the hypothesis
under test is called the null hypothesis and symbolized by H0. All alternatives to the null
hypothesis are termed the alternate hypothesis and are symbolized by H1.15
If the results of the experiment cannot reject H0, the experiment cannot detect the dif-
ferences in measurements at the chosen probability level.
Statistical testing determines if a process or item is better than another with some stated
degree of confidence. The concept can be used with a certain statistical distribution to de-
termine the confidence limits.
The following procedure is used in statistical testing:
1. Define H0 and H1.
2. Choose the confidence level γ of the test.
3. Form an appropriate probabilistic statement.
4. Using the appropriate statistical distribution, perform the indicated calculation.
5. Make a decision concerning the hypothesis and/or determine confidence limits.
Two types of error are possible in statistical testing. A Type I error is that of rejecting
a true null hypothesis (rejecting truth). A Type II error is that of accepting a false null
hypothesis (embracing fiction). The confidence levels and sample size are chosen to minimize
the probability of making a Type I or Type II error. Figure 5 illustrates the Type I (α) and
Type II (β) errors, where

H0 = sample with n = 1 comes from N(10,16)
H1 = sample with n = 1 comes from N(17,16)
24 Instrument Statics
Figure 5 Probability of Type I and Type II errors.
α = probability of Type I error [concluding data came from N(17,16) when data actually
came from N(10,16)]
β = probability of a Type II error [accepting that data come from N(10,16) when data
actually come from N(17,16)]
When α = 0.05 and n = 1, the critical value of x̄ is 16.58 and β = 0.458. If the value
of x̄ obtained from the measurement is less than 16.58, accept H0. If x̄ is larger than 16.58,
reject H0 and infer H1. For β = 0.458 there is a large chance for a Type II error. To minimize
this error, increase α or the sample size. For example, when α = 0.05 and n = 4, the critical
value of x̄ is 13.29 and β = 0.032 as shown in Fig. 5. Thus, the chance of a Type II error
is significantly decreased by increasing the sample size. Various statistical tests are sum-
marized in the flow chart shown in Fig. 6.
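The trade-off between α and β in this example can be checked numerically. The sketch below, in Python using only the standard library, reproduces the critical values and Type II error probabilities quoted above for H0 ~ N(10,16) and H1 ~ N(17,16); the function names are illustrative, not from the text.

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution N(mu, sigma^2)."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def norm_ppf(p, mu, sigma):
    """Inverse CDF by bisection (adequate for illustration)."""
    lo, hi = mu - 10.0 * sigma, mu + 10.0 * sigma
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if norm_cdf(mid, mu, sigma) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def error_probabilities(mu0, mu1, sigma, n, alpha):
    """Critical value and Type II error for a one-sided test of
    H0: mean = mu0 against H1: mean = mu1 (> mu0), sample size n."""
    se = sigma / math.sqrt(n)                  # standard error of the sample mean
    x_crit = norm_ppf(1.0 - alpha, mu0, se)    # reject H0 if x-bar exceeds this
    beta = norm_cdf(x_crit, mu1, se)           # accept H0 although H1 is true
    return x_crit, beta

# The example in the text: N(10,16) vs. N(17,16), so sigma = 4
x1, b1 = error_probabilities(10.0, 17.0, 4.0, 1, 0.05)   # x_crit ≈ 16.58, beta ≈ 0.458
x4, b4 = error_probabilities(10.0, 17.0, 4.0, 4, 0.05)   # x_crit ≈ 13.29, beta ≈ 0.032
```

Raising the sample size from 1 to 4 halves the standard error, which is what collapses β from 0.458 to 0.032 at the same α.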
Comparison of Variability
To test whether two samples are from the same population, their variability or dispersion
characteristics are first compared using F-statistics.9,15
If x and y are random variables from two different samples, the parameters U = Σ(xi − x̄)²/σx²
and V = Σ(yi − ȳ)²/σy² are also random variables and have chi-square distributions
(see Fig. 7a) with ν1 = n1 − 1 and ν2 = n2 − 1 degrees of freedom, respectively. The
random variable W formed by the ratio (U/ν1)/(V/ν2) has an F-distribution with ν1 and ν2
degrees of freedom [i.e., W ~ F(γ, ν1, ν2)]. The quotient W is symmetric; therefore, 1/W
also has an F-distribution with ν2 and ν1 degrees of freedom [i.e., 1/W ~ F(α, ν2, ν1)].
Figure 7b shows the F-distribution and its probabilistic statement:

P[F_L < W < F_R] = γ     (52)

where W = (U/ν1)/(V/ν2) = σ̂1²/σ̂2² with

H0 as σ1² = σ2² and H1 as σ1² ≠ σ2²     (53)

Here, W is calculated from values of σ̂1² and σ̂2² obtained from the samples.
Example 7 Testing for Homogeneous Variances. Test the hypothesis σ1² = σ2² at the 90% level of
confidence when samples of n1 = 16 and n2 = 12 yielded σ̂1² = 5.0 and σ̂2² = 2.5.
Here H0 is σ1² = σ2² and H1 is σ1² ≠ σ2².
Figure 6 User’s flow chart for statistical tests of data.
Figure 6 (Continued)
3 Statistics in the Measurement Process 27
Figure 7 (a) Chi-square distribution and (b) F-distribution. Also shown are probabilistic statements of
the distributions such as that shown in Eq. (52).
P[F_L(ν1,ν2) < σ̂1²/σ̂2² < F_R(ν1,ν2)] = γ

P[F_0.95(15,11) < σ̂1²/σ̂2² < F_0.05(15,11)] = 0.90

F_0.95(15,11) = 1/F_0.05(11,15) = 1/2.51 = 0.398

P[0.398 < σ̂1²/σ̂2² < 2.72] = 0.90

Since there is a probability of 90% of σ̂1²/σ̂2² ranging between 0.398 and 2.72 and
the actual value is σ̂1²/σ̂2² = 2.0, H0 cannot be rejected at the 90% confidence level.
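The 90% interval derived above can also be checked by simulation. The following Python sketch (illustrative, not part of the handbook) draws repeated pairs of normal samples of sizes n1 = 16 and n2 = 12 from the same population and counts how often the sample-variance ratio lands inside [0.398, 2.72]; under H0 the fraction should be close to 0.90.

```python
import random
import statistics

def variance_ratio_coverage(n1, n2, lo, hi, trials=10000, seed=1):
    """Fraction of sample-variance ratios s1^2/s2^2 falling in [lo, hi]
    when both samples come from the same normal population (H0 true)."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(trials):
        x = [rng.gauss(0.0, 1.0) for _ in range(n1)]
        y = [rng.gauss(0.0, 1.0) for _ in range(n2)]
        # statistics.variance uses the n - 1 divisor, matching the F-test
        if lo < statistics.variance(x) / statistics.variance(y) < hi:
            inside += 1
    return inside / trials

# Example 7: n1 = 16, n2 = 12, and the 90% interval [0.398, 2.72]
coverage = variance_ratio_coverage(16, 12, 0.398, 2.72)
```

The simulated coverage approximates γ = 0.90, confirming the tabulated F limits.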
Comparison of Means
Industrial experimentation often compares two treatments of a set of parts to determine if a
part characteristic such as strength, hardness, or lifetime has been improved. If it is assumed
that the treatment does not change the variability of items tested (H0), then the t-distribution
determines if the treatment had a significant effect on the part characteristics (H1). The t-
statistic is
t = (d̄ − μd)/σd     (54)

where d̄ = x̄1 − x̄2 and μd = μ1 − μ2. From the propagation of variance, σd² becomes

σd² = σx̄1² + σx̄2² = σ1²/n1 + σ2²/n2     (55)
where σ1² and σ2² are each estimates of the population variance σ². A better estimate of σ²
is the combined variance σc², and it replaces both σ1² and σ2² in Eq. (55). The combined
variance is determined by weighting the individual estimates of variance based on their
degrees of freedom according to the relation

σc² = (σ̂1²ν1 + σ̂2²ν2)/(ν1 + ν2)     (56)

Then

σd² = σc²/n1 + σc²/n2 = σc²(1/n1 + 1/n2)     (57)

Under the hypothesis H0 that μ1 = μ2 (no effect due to treatment), the resulting probabilistic
statement is

P[−tσc√(1/n1 + 1/n2) < x̄1 − x̄2 < tσc√(1/n1 + 1/n2)] = γ     (58)
If the variances of the items being compared are not equal (nonhomogeneous), a modified t- (or
d-) statistic is used,9,15 where d depends on the confidence level γ, the degrees of freedom ν, and a
parameter θ that depends on the ratio of standard deviations according to

tan θ = (σ̂1/√n1)/(σ̂2/√n2)     (59)
The procedure for using the d-statistic is the same as described for the t-statistic.
Example 8 Testing for Homogeneous Means. A part manufacturer has the following
data:
Sample Number    Number of Parts    Mean Lifetime (h)    Variance (h²)
1                15                 2530                 490
2                11                 2850                 360
Determine if the lifetime of the parts is due to chance at a 10% significance level (90%
confidence level).
Check variance first (H0—homogeneous variances and H1—nonhomogeneous variances):

σ̂1² = (15/14)(490) = 525

σ̂2² = (11/10)(360) = 396

σc² = [(14)(525) + (10)(396)]/24 = 471
P[F_L < F_exp < F_R] = γ     F_exp = σ̂1²/σ̂2² = 525/396 = 1.33

F_R = F_0.05(ν1, ν2) = F_0.05(14,10) = 2.8

F_L = F_0.95(ν1, ν2) = 1/F_0.05(10,14) = 1/2.60 = 0.385

Therefore, accept H0 (variances are homogeneous).
Check means next (H0 is μ1 = μ2 and H1 is μ1 ≠ μ2):

P[−t < t_exp < t] = γ

t = t(ν1 + ν2, γ) = t(24, 0.90) = 1.711

Therefore, P[−1.711 < t_exp < 1.711] = 0.90:

t_exp = |x̄1 − x̄2| / √(σ1²/n1 + σ2²/n2) = |x̄1 − x̄2| / √(σc²(1/n1 + 1/n2))
      = |2530 − 2850| / √((471)(0.1576))
      = 320/√74.2 = 320/8.61
      = 37.2

Therefore, reject H0 and accept H1 (μ1 ≠ μ2) and infer that the differences between the samples are due not
to chance alone but to some real cause.
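The arithmetic of this example can be reproduced with a short Python sketch (the function name and structure are illustrative); it forms the unbiased variance estimates, pools them per Eq. (56), and computes the t-statistic of Eqs. (54) and (57).

```python
import math

def pooled_t(n1, mean1, var1, n2, mean2, var2):
    """Two-sample t statistic with pooled (combined) variance, Eqs. (54)-(57).
    The tabulated variances are converted to unbiased (n - 1) estimates
    first, as done in the worked example."""
    v1, v2 = n1 - 1, n2 - 1
    s1 = var1 * n1 / v1                          # sigma-hat_1^2 = (15/14)(490)
    s2 = var2 * n2 / v2                          # sigma-hat_2^2 = (11/10)(360)
    sc2 = (s1 * v1 + s2 * v2) / (v1 + v2)        # combined variance, Eq. (56)
    se = math.sqrt(sc2 * (1.0 / n1 + 1.0 / n2))  # std. error of the difference
    return abs(mean1 - mean2) / se, sc2

t_exp, sc2 = pooled_t(15, 2530, 490, 11, 2850, 360)
# sc2 ≈ 471; t_exp ≈ 37, far beyond t(24, 0.90) = 1.711, so H0 is rejected
```

Because t_exp vastly exceeds the tabulated 1.711, the conclusion is insensitive to the rounding of the intermediate values.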
Comparing Distributions
A chi-square distribution is also used for testing whether or not an observed phenomenon
fits an expected or theoretical behavior.15
The chi-square statistic is defined as

χ² = Σ[(Oj − Ej)²/Ej]
where Oj is the observed frequency of occurrence in the jth class interval and Ej is the
expected frequency of occurrence in the jth class interval. The expected class frequency is
based on an assumed distribution or a hypothesis H0. This statistical test is used to compare
the performance of machines or other items. For example, lifetimes of certain manufactured
parts, locations of hole center lines on rectangular plates, and locations of misses from targets
in artillery and bombing missions follow chi-square distributions.
The probabilistic statements depend on whether a one-sided test or a two-sided test is
being performed.16
The typical probabilistic statements are

P[χ²_exp > χ²(α, ν)] = α     (60)

for the one-sided test and

P[χ²_L < χ²_exp < χ²_R] = γ     (61)

for the two-sided test, where

χ²_exp = Σ(Oj − Ej)²/Ej     (62)
When using Eq. (62) at least five items per class interval are normally used, and a continuity
correction must be made if less than three class intervals are used.
Example 9 Use of the Chi-Square Distribution. In a test laboratory the following record
shows the number of failures of a certain type of manufactured part.
Number of Failures    Number of Parts Observed    Number of Parts Expectedᵃ
0                     364                         325
1                     376                         396
2                     218                         243
3                     89                          97
4                     33                          30
5                     13                          7
6                     4                           1
7                     3                           0
8                     2                           0
9                     1                           0
                      Σ = 1103

ᵃ From an assumed statistical model that the failures are random.
At a 95% confidence level, determine whether the failures are attributable to chance
alone or to some real cause. For H0, assume failures are random and not related to a cause
so that the expected distribution of failures follows the Poisson distribution. For H1, assume
failures are not random and are cause related. The Poisson probability is Pr = e^(−μ)μ^r/r!,
where μ = Σ xi Pi:

μ = (0)(364/1103) + (1)(376/1103) + (2)(218/1103) + (3)(89/1103) + (4)(33/1103)
    + (5)(13/1103) + 24/1103 + 21/1103 + 16/1103 + 9/1103
  = 1346/1103 = 1.22
Then P0 ⫽ 0.295 and E0 ⫽ P0n ⫽ 325. Similarly P1 ⫽ 0.360, P2 ⫽ 0.220, P3 ⫽ 0.088, . . .
and, correspondingly, E1 ⫽ 396, E2 ⫽ 243, E3 ⫽ 97, . . . are tabulated above for the expected
number of parts to have the number of failures listed.
Goodness-of-fit test—H0 represents random failures and H1 represents real cause:

P[χ²_exp > χ²_Table] = α = 0.05

χ²_Table = χ²(ν, α) = χ²(5, 0.05) = 11.070

χ²_exp = Σ(O − E)²/E
O      E      O − E     (O − E)²     (O − E)²/E
364    325    39        1521         4.680
376    396    −20       400          1.010
218    243    −25       625          2.572
89     97     −8        64           0.660
33     30     3         9            0.300
23     8      15        225          28.125

χ²_exp = 37.3
Since there is only a 5% chance of obtaining a chi-square statistic as large as 11.07 under
the hypothesis chosen, and the calculated value of 37.3 far exceeds it, the null hypothesis is
rejected and it is inferred that some real (rather than random) cause exists for the failures
at a 95% probability or confidence level.
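A Python sketch of this goodness-of-fit test follows (the function name is illustrative; because it keeps the unrounded expected counts, its statistic comes out near 31 rather than the 37.3 obtained with the rounded table values, but the conclusion is unchanged).

```python
import math

def poisson_chi_square(observed):
    """Chi-square statistic for a Poisson goodness-of-fit test, grouping
    the classes with 5 or more failures into one tail class, as in the
    worked example."""
    n = sum(observed)
    # Sample mean number of failures per part: mu = sum(r * P_r)
    mu = sum(r * c for r, c in enumerate(observed)) / n
    expected = [n * math.exp(-mu) * mu**r / math.factorial(r)
                for r in range(len(observed))]
    obs = observed[:5] + [sum(observed[5:])]      # group the sparse tail
    exp = expected[:5] + [n - sum(expected[:5])]
    chi2 = sum((o - e) ** 2 / e for o, e in zip(obs, exp))
    return mu, chi2

failures = [364, 376, 218, 89, 33, 13, 4, 3, 2, 1]   # parts with r failures
mu, chi2 = poisson_chi_square(failures)
# mu ≈ 1.22; chi2 ≈ 31, well above chi2(5, 0.05) = 11.07, so H0 is rejected
```

Most of the statistic comes from the tail class, where far more parts failed five or more times than the Poisson model predicts.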
REFERENCES
1. E. J. Minnar (ed.), ISA Transducer Compendium, Instrument Society of America/Plenum, New York,
1963.
2. ‘‘Electrical Transducer Nomenclature and Terminology,’’ ANSI Standard MC 6.1-1975 (ISA S37.1),
Instrument Society of America, Research Triangle Park, NC, 1975.
3. Anonymous (ed.), Standards and Practices for Instrumentation, Instrument Society of America,
Research Triangle Park, NC, 1988.
4. E. O. Doebelin, Measurement Systems: Application and Design, 4th ed., McGraw-Hill, New York,
1990.
5. J. G. Webster, The Measurement, Instrumentation, and Sensors Handbook, CRC Press in cooperation
with IEEE, Boca Raton, FL, 1999.
6. R. S. Figliola and D. E. Beasley, Theory and Design for Mechanical Measurements, 3rd ed., Wiley,
New York, 2000.
7. C. L. Nachtigal (ed.), Instrumentation and Control: Fundamentals and Applications, Wiley, New
York, 1990.
8. I. B. Gertsbakh, Measurement Theory for Engineers, Springer, New York, 2003.
9. J. B. Kennedy and A. M. Neville, Basic Statistical Methods for Engineers and Scientists, 3rd ed.,
Harper & Row, New York, 1986.
10. A. G. Worthing and J. Geffner, Treatment of Experimental Data, Wiley, New York, 1943.
11. C. R. Mischke, Mathematical Model Building (an Introduction to Engineering), 2nd ed., Iowa State
University Press, Ames, IA, 1980.
12. C. Lipson and N. J. Sheth, Statistical Design and Analysis of Engineering Experiments, McGraw-
Hill, New York, 1973.
13. B. Ostle and R. W. Mensing, Statistics in Research, 3rd ed., Iowa State University Press, Ames, IA,
1975.
14. D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, Wiley, New
York, 2002.
15. S. B. Vardeman, Statistics for Engineering Problem Solving, PWS Publishing, Boston, 1994.
16. R. E. Walpole, S. L. Myers, R. H. Myers, and K. Ye, Probability and Statistics for Engineers and
Scientists, 7th ed., Prentice-Hall, Englewood Cliffs, NJ, 2002.
CHAPTER 2
INPUT AND OUTPUT CHARACTERISTICS
Adam C. Bell
Dartmouth, Nova Scotia
1 INTRODUCTION 32
2 FAMILIAR EXAMPLES OF
INPUT–OUTPUT INTERACTIONS 34
2.1 Power Exchange 34
2.2 Energy Exchange 35
2.3 A Human Example 36
3 ENERGY, POWER, IMPEDANCE 37
3.1 Definitions and Analogies 37
3.2 Impedance and Admittance 38
3.3 Combining Impedances
and/or Admittances 39
3.4 Computing Impedance or
Admittance at an Input or
Output 40
3.5 Transforming or Gyrating
Impedances 41
3.6 Source Equivalents: Thévenin
and Norton 44
4 OPERATING POINT OF
STATIC SYSTEMS 45
4.1 Exchange of Real Power 45
4.2 Operating Points in an
Exchange of Power or Energy 46
4.3 Input and Output Impedance
at the Operating Point 48
4.4 Operating Point and Load for
Maximum Transfer of Power 48
4.5 An Unstable Energy
Exchange: Tension-Testing
Machine 50
4.6 Fatigue in Bolted Assemblies 52
4.7 Operating Point for Nonlinear
Characteristics 52
4.8 Graphical Determination of
Output Impedance for
Nonlinear Systems 54
5 TRANSFORMING THE
OPERATING POINT 57
5.1 Transducer-Matched
Impedances 57
5.2 Impedance Requirements for
Mixed Systems 58
6 MEASUREMENT SYSTEMS 60
6.1 Interaction in Instrument
Systems 61
6.2 Dynamic Interactions in
Instrument Systems 63
6.3 Null Instruments 65
7 DISTRIBUTED SYSTEMS IN
BRIEF 66
7.1 Impedance of a
Distributed System 67
8 CONCLUDING REMARKS 67
REFERENCES 68
1 INTRODUCTION
Everyone is familiar with the interaction of devices connected to form a system, although
they may not think of their observations in those terms. Familiar examples include the
following:
Reprinted from Instrumentation and Control, Wiley, New York, 1990, by permission of the publisher.
Mechanical Engineers’ Handbook: Instrumentation, Systems, Controls, and MEMS, Volume 2, Third Edition.
Edited by Myer Kutz
Copyright © 2006 by John Wiley & Sons, Inc.
1. Dimming of the headlights while starting a car
2. Slowdown of an electric mixer lowered into heavy batter
3. Freezing a showerer by starting the dishwasher
4. Speedup of a vacuum cleaner when the hose plugs
5. Two-minute wait for a fever thermometer to rise
6. Special connectors required for TV antennas
7. Speedup of a fan in the window with the wind against it
8. Shifting of an automatic transmission on a hill
These effects happen because one part of a system loads another. Most mechanical
engineers would guess that weighing an automobile by placing a bathroom-type scale under
its wheels one at a time and summing the four measurements will yield a higher result than
would be obtained if the scale was flush with the floor. Most electrical engineers understand
that loading a potentiometer’s wiper with too low a resistance makes its dial nonlinear for
voltage division. Instrumentation engineers know that a heavy accelerometer mounted on a
thin panel will not measure the true natural frequencies of the panel. Audiophiles are aware
that loudspeaker impedances must be matched to amplifier impedance. We have all seen the
75- and 300-Ω markings under the antenna connections on TV sets, and most cable subscribers
have seen balun transformers for connecting a coaxial cable to the flat-lead terminals
of an older TV.
Every one of these examples involves a desired or undesirable interaction between a
source and a receiver of energy. In every case, there are properties of the source part and
the load part of the system that determine the efficiency of the interaction. This chapter deals
exclusively with interactions between static and dynamic subsystems intended to function
together in a task and with how best to configure and characterize those subsystems.
Consider the analysis of dynamic systems. To create mathematical models of these
systems requires that we idealize our view of the physical world. First, the system must be
identified and separated from its environment. The environment of a system is the universe
outside the free body, control volume, or isolated circuit. The combination of these, which
is the system under study and the external sources, provides or removes energy from the
system in a known way. Next, in the system itself, we must arrange a restricted set of ideal
elements connected in a way that will correctly represent the energy storages and dissipations
of the physical system while, at the same time, we need the mathematical handles that
explore the system’s behavior in its environment. The external environment of the system
being modeled must then itself be modeled and connected and is usually represented by
special ideal elements called sources.
We expect, as a result of these sources, that the system under study will not alter the
important variables in its environment. The water rushing from a kitchen faucet will not
normally alter the atmospheric pressure; our electric circuit will not measurably slow the
turbines in the local power plant; the penstock will not draw down the level of the reservoir
(in a time frame consistent with a study of penstock dynamics, anyway); the cantilever beam
will not distort the wall it is built into; and so on. In this last instance, for example, the wall
is a special source of zero displacement and zero rotation no matter what forces and moments
are applied.
In this chapter, we consider, instead of the behavior of a single system in a known
environment, the interaction between pairs of connected dynamic systems at their interface,
often called the driving point. The fundamental currency is, as always, the energy or power
exchanged through the interface. In an instrumentation or control system, the objective of
the energy exchange might be information transmission, but this is not considered here (we
would like information exchanges to take place at the lowest possible energy costs, but the
second law of thermodynamics rules out a free transmission).
As always, energy factors into two variables, such as voltage and current in electrical
systems, and we are concerned with the behavior of these in the energetic interaction. The
major difference in this perspective is that the system supplying energy cannot do so at a
fixed value. Neither the source nor the system receiving energy can fix its values for a
changing demand without a change in the value of a supply variable. The two subsystems
are in an equilibrium with each other and are forced by their connection to have the same
value of both of the appropriate energy variables. We concern ourselves with determining
and controlling the value of these energy variables at the interface where, obviously, only
one is determined by each of the interacting systems.
2 FAMILIAR EXAMPLES* OF INPUT–OUTPUT INTERACTIONS
2.1 Power Exchange
In the real world, pure sources and sinks are difficult to find. They are idealized, convenient
constructs or approximations that give our system analyses independent forcing functions.
We commonly think of an automobile storage battery as a source of 12.6 V independent of
the needed current, and yet we have all observed dimming headlights while starting an
engine. Clearly, the voltage of this battery is a function of the current demanded by its load.
Similarly, we cannot charge the battery unless our alternator provides more than 12.6 V, and
the charging rate depends on the overvoltage supplied. Thus, when the current demanded or
supplied to a battery approaches its limits, we must consider that the battery really looks
like an ideal 12.6-V source in series with a small resistance. The voltage at the battery
terminals is a function of the current demanded and is not independent of the system loading
or charging it in the interaction. This small internal resistance is termed the output impedance
(or input impedance or driving-point impedance) of the battery.
If we measure the voltage on this battery with a voltmeter, we should draw so little
current that the voltage we see is truly the source voltage without any loss in the internal
resistance. The power delivered from the battery to the voltmeter is negligible (but not zero)
because the current is so small. Alternatively, if we do a short-circuit test of the battery, its
terminal voltage should fall to zero while we measure the very large current that results.
Again, the power delivered to the ammeter is negligible because, although the current is very
large, the voltage is vanishingly small.
At these two extremes the power delivered is essentially zero, so clearly at some inter-
mediate load the power delivered will be a maximum. We will show later that this occurs
when the load resistance is equal to the internal resistance of the battery (a point at which
batteries are usually not designed to operate). The discussion above illustrates a simple
concept: Impedances should be matched to maximize power or energy transfer but should
be maximally mismatched for making a measurement without loading the system in which
the measurement is to be made. We will return to the details of this statement later.
*Many of the examples in this chapter are drawn from Chapter 6 of a manuscript of unpublished notes,
‘‘Dynamic Systems and Measurements,’’ by C. L. Nachtigal, used in the School of Mechanical Engi-
neering, Purdue University, 1978.
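The maximum-power-transfer claim above is easy to verify numerically. The following Python sketch (the 12.6-V, 0.05-Ω figures are hypothetical values for illustration, not from the text) sweeps the load resistance for the series battery model and locates the power peak at r_load = r_internal.

```python
def delivered_power(v_source, r_internal, r_load):
    """Power dissipated in a resistive load fed from an ideal source with
    a small internal resistance in series (the battery model above)."""
    i = v_source / (r_internal + r_load)   # single loop current
    return i * i * r_load

v, r_int = 12.6, 0.05                      # hypothetical battery parameters
loads = [r_int * k / 100.0 for k in range(1, 401)]   # 0.0005 to 0.2 ohm
best = max(loads, key=lambda r: delivered_power(v, r_int, r))
# The peak lies at r_load = r_internal; a voltmeter (very large r_load)
# and a short circuit (r_load near zero) both draw almost no power.
```

This is the quantitative form of the two extremes described above: matched impedances maximize power transfer, while maximal mismatch minimizes loading.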
2.2 Energy Exchange
Interactions between systems are not restricted to resistive behavior, nor is the concept of
impedance matching restricted to real, as opposed to reactive, impedances. Consider a pair
of billiard balls on a frictionless table (to avoid the complexities of spin), and consider that
their impact is governed by a coefficient of restitution, ε. Before impact, only one ball is
moving, but afterward both may be. The initial and final energies are as follows:

Initial energy = ½M1v1i²

Final energy = ½M1v1f² + ½M2v2f²     (1)
where the subscript 1 refers to the striker and 2 to the struck ball, M is mass, v is velocity,
and the subscripts i and ƒ refer to initial and final conditions, respectively.
Since no external forces act on this system of two balls during their interaction, the
total momentum of the system is conserved:

M1v1i + M2v2i = M1v1f + M2v2f

or

v1i + mv2i = v1f + mv2f     (2)

where m = M2/M1. The second equation, required to solve for the final velocities, derives
from impulse and momentum considerations for the balls considered one at a time. Since
no external forces act on either ball during their interaction except those exerted by the other
ball, the impulses,* or integrals of the force acting over the time of interaction on the two,
are equal. (See impact in virtually any dynamics text.) From this it can be shown that the
initial and final velocities must be related:

ε(v1i − v2i) = (v1f − v2f)     (3)

where v2i = 0 in this case and the coefficient of restitution ε is a number between 0 and 1.
A 0 corresponds to a plastic impact while a 1 corresponds to a perfectly elastic impact.
Equations (2) and (3) can be solved for the final velocities of the two balls:

v1f = [(1 − mε)/(1 + m)]v1i   and   v2f = [(1 + ε)/(1 + m)]v1i     (4)
Now assume that one ball strikes the other squarely† and that the coefficient of restitution
ε is unity (perfectly elastic impact). Consider three cases:
1. The two balls have equal mass, so m = 1, and ε = 1. Then the striking ball, M1,
will stop, and the struck ball, M2, will move away from the impact with exactly the
initial velocity of the striking ball. All the initial energy is transferred.
2. The struck ball is more massive than the striking ball, m > 1, ε = 1. Then the striker
will rebound along its initial path, and the struck ball will move away with less than
the initial velocity of the striker. The initial energy is shared between the balls.
*Impulse = ∫₀ᵗ Force dt, where Force is the vector sum of all the forces acting over the period of
interaction, t.
†Referred to in dynamics as direct central impact.
3. The striker is the more massive of the two, m < 1, ε = 1. Then the striker, M1, will
follow at reduced velocity behind the struck ball after their impact, and the struck
ball will move away faster than the initial velocity of the striker (because it has less
mass). Again, the initial energy is shared between the balls.
Thus, the initial energy is conserved in all of these transactions. But the energy can be
transferred completely from one ball to the other if and only if the two balls have the same
mass.
If these balls were made of clay so that the impact was perfectly plastic (no rebound
whatsoever), then ε = 0, so the striker and struck balls would move off together at the same
velocity after impact no matter what the masses of the two balls. They would be effectively
stuck together. The final momentum of the pair would equal the initial momentum of the
striker because, on a frictionless surface, there are no external forces acting, but energy could
not be conserved because of the losses in plastic deformation during the impact. The final
velocities for the same three cases are
vf = vi/(1 + m)     (5)
Since the task at hand, however, is to transfer kinetic energy (KE) from the first ball to the
second, we are interested in maximizing the energy in the second ball after impact with
respect to the energy in the first ball before impact:
KE(M2,after)/KE(M1,before) = [½M2v2f²]/[½M1v1i²] = M2[1/(1 + m)]²v1i²/(M1v1i²) = m/(1 + m)²     (6)

This takes on a maximum value of 1/4 when m = 1 and falls off rapidly as m departs
from 1.
Thus, after the impact of two clay balls of equal mass, one-fourth of the initial energy
remains in the striker, one-fourth is transferred to the struck ball, and one-half of the initial
energy of the striker is lost in the impact. If the struck ball is either larger or smaller than
the striker, however, then a greater fraction of the initial energy is dissipated in the impact
and a smaller fraction is transferred to the second ball. The reader should reflect on how
this influences the severity of automobile accidents between vehicles of different sizes.
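Equations (4)–(6) can be exercised directly. The Python sketch below (function names are illustrative) reproduces the elastic equal-mass case and the 1/4 maximum energy transfer of the plastic impact.

```python
def impact(v1i, m, eps):
    """Final velocities after direct central impact, Eq. (4):
    m = M2/M1, eps = coefficient of restitution, struck ball at rest."""
    v1f = (1.0 - m * eps) / (1.0 + m) * v1i
    v2f = (1.0 + eps) / (1.0 + m) * v1i
    return v1f, v2f

def energy_transfer_fraction(m, eps):
    """Fraction of the striker's initial kinetic energy that ends up in
    the struck ball: M2 v2f^2 / (M1 v1i^2)."""
    _, v2f = impact(1.0, m, eps)
    return m * v2f ** 2

# Elastic impact of equal masses: the striker stops, all energy transfers.
v1f_eq, v2f_eq = impact(1.0, 1.0, 1.0)          # (0.0, 1.0)
# Plastic impact (eps = 0): the transfer peaks at 1/4 when m = 1, Eq. (6).
peak = energy_transfer_fraction(1.0, 0.0)       # 0.25
```

For any elastic case the sum v1f² + m·v2f² stays equal to the initial (normalized) energy, which makes the routine easy to sanity-check.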
2.3 A Human Example
Those in good health can try the following experiment. Run up a long flight of stairs one at
a time and record the elapsed time. After a rest, try again, but run the stairs two at a time.
Still later, try a third time, but run three steps at a time. Most runners will find that their
best time is recorded for two steps at a time.
In the first test, the runner’s legs are velocity limited: Too much work is expended
simply moving legs and feet, and the forces required are too low to use the full power of
the legs effectively. In the third test, although the runner’s legs do not have to move very
quickly, they are on the upper edge of their force capabilities for continued three-step jumps;
the forces required are too high and the runner could, at lower forces, move his or her legs
much faster. In the intermediate case there is a match between the task and the force–velocity
characteristics of the runner’s legs.
Bicycle riders assure this match with a variable-speed transmission that they adjust so
they can crank the pedals at approximately 60 RPM. We will later look at other means of
ensuring the match between source capabilities and load requirements when neither of them
Figure 1 H. M. Paynter’s tetrahedron of state.
is changeable, but the answer is always a transformer or gyrator of some type (a gear ratio
in this case).
3 ENERGY, POWER, IMPEDANCE
3.1 Definitions and Analogies
Energy is the fundamental currency in the interactions between elements of a physical system
no matter how the elements are defined. In engineering systems, it is convenient to describe
these transactions in terms of a complementary pair of variables whose product is the power
or flow rate of the energy in the transaction. These product pairs are familiar to all engineers:
voltage × current = power, force × displacement = energy, torque × angular velocity =
power, pressure × flow = power, and pressure × time rate of change of volume exchanged
= power. Some are less familiar: flux linkage × current = energy, charge × voltage =
energy, and absolute temperature × entropy flux = thermal power. Henry M. Paynter's1
tetrahedron of state shows how these are related (Fig. 1). Typically, one of these factors is
extensive, a flux or flow, such as current, velocity, volume flow rate, or angular velocity. The
other is intensive, a potential or effort,* such as voltage, force, pressure, or torque. Thus
P = extensive × intensive for any of these domains of physical activity.
This factoring is quite independent of the analogies between the factors of power in
different domains, for which any arbitrary selection is acceptable. In essence, velocity is not
like voltage or force like current, just as velocity is not like current or force like voltage. It
is convenient, however, before defining impedance and working with it to choose an analogy
so that generalizations can be made across the domains of engineering activity. There are
two standard ways to do this: the Firestone analogy2
and the mobility analogy. Electrical
engineers are most familiar with the Firestone analogy, while mechanical engineers are prob-
ably more comfortable with the mobility analogy. The results derived in this chapter are
independent of the analogy chosen. To avoid confusion, both will be introduced, but only
the mobility analogy will be used in this chapter.
The Firestone analogy gives circuitlike properties to mechanical systems: All systems
consist of nodes like a circuit and only of lumped elements considered to be two-terminal
or four-terminal devices. For masses and tanks of liquid, one of the terminals must be
understood to be ground, the inertial reference frame, or atmosphere. Then one of the energy
*This is Paynter’s terminology, used with reference to his ‘‘Bond Graphs.’’
Table 1 Impedances of Lumped Linear Elements
Domain Kinetic Storage Dissipation Potential Storage
Translational Mass: Ms Damping: b Spring: k/s
Rotational Inertia: Js Damping: B Torsion spring: kf/s
Electrical Inductance: Ls Resistance: R Capacitance: 1/Cs
Fluid Inertance: Is Fluid resistance: R Fluid capacitance: 1/Cs
variables is measured across the terminals of the element and the other passes through the
element. In a circuit, voltage is across and current passes through. For a spring, however,
velocity difference is across and the force passes through. Thus this analogy linked voltage
to velocity, angular velocity, and pressure as across variables and linked current to force,
torque, and flow rate as through variables. Clearly, across ⫻ through ⫽ power.
The mobility analogy, in contrast, considers the complementary power variables to con-
sist of a potential and a flux, an intrinsic and extrinsic variable. The potentials, or efforts,
are voltage, force, torque, and pressure, while the fluxes, or flows, are current, velocity,
angular velocity, and fluid flow rate.
3.2 Impedance and Admittance
Impedance, in the most general sense, is the relationship between the factors of power.
Because only the constitutive relationships for the dissipative elements are expressed directly
in terms of the power variables, ΔVR = RiR for example, while the equations for the energy
storage elements are expressed in terms of the derivative of one of the power variables* with
respect to the other, iC = C(dVC/dt) for example, these are most conveniently expressed in
Laplace transform terms. Impedances are really self-transfer functions at a point in a system.
Since the concept was probably defined first for electrical systems, that definition is most
standardized: Electrical impedance Zelectrical is defined as the rate of change of voltage with
current:

Z_electrical = d(voltage)/d(current) = d(effort)/d(flow)     (7)
By analogy, impedance can be similarly defined for the other engineering domains:

Z_translation = d(force)/d(velocity)     (8)

Z_rotation = d(torque)/d(angular velocity)     (9)

Z_fluid = d(pressure)/d(flow rate)     (10)
Table 1 is an impedance table using these definitions of the fundamental lumped linear
elements. Note that these are derived from the Laplace transforms of the constitutive
equations for these elements; they are the transfer functions of the elements and are
expressed in terms of the Laplace operator s. The familiar F = Ma, for example, becomes, in
power-variable terms, F = M(dv/dt); it transforms as F(s) = Ms v(s), so

    (Z_translation)_mass = dF_mass/dv_mass = Ms    (11)

*See Fig. 1 again. Capacitance is a relationship between the integral of the flow and the effort, which
is the same as saying that capacitance relates the flow to the derivative of the effort.
Because these involve the Laplace operator s, they can be manipulated algebraically to
derive combined impedances. The reciprocal of the impedance, the admittance, is also useful.
Formally, admittance is defined as

    Admittance: Y = 1/Z = d(flow)/d(effort)    (12)
3.3 Combining Impedances and/or Admittances
Elements in series are those for which the flow variable is common to both elements and
the efforts sum. Elements in parallel are those for which the effort variable is common to
both elements and the flows sum. By analogy to electrical resistors, we can deduce that the
impedance sum for series elements and the admittance sum for parallel elements form the
combined impedance or admittance of the elements:
Impedances in series (common flow):

    Z_1 + Z_2 = Z_total        1/Y_1 + 1/Y_2 = 1/Y_total    (13)

Impedances in parallel (common effort):

    Y_1 + Y_2 = Y_total        1/Z_1 + 1/Z_2 = 1/Z_total    (14)
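As an illustration (not from the handbook), the combination rules of Eqs. (13) and (14) can be checked numerically by evaluating element impedances at a point s = jω on the imaginary axis. The element values below are arbitrary, and the helper functions `series` and `parallel` are our own names for the two rules:

```python
# Illustrative sketch: combining impedances per Eqs. (13) and (14).
# The element values (R, L, C) and function names are invented for this example.

def series(*Z):
    # Common flow: impedances add (Eq. 13).
    return sum(Z)

def parallel(*Z):
    # Common effort: admittances (reciprocal impedances) add (Eq. 14).
    return 1 / sum(1 / z for z in Z)

omega = 10.0            # rad/s, arbitrary test frequency
s = 1j * omega          # evaluate the transfer functions at s = j*omega

R, L, C = 100.0, 0.5, 1e-3
Z_R, Z_L, Z_C = R, L * s, 1 / (C * s)   # per Table 1: R, Ls, 1/(Cs)

Z_series = series(Z_R, Z_L, Z_C)        # a series RLC branch (common current)
Z_parallel = parallel(Z_R, Z_L, Z_C)    # same elements across a common voltage

print(Z_series)      # R + j*(omega*L - 1/(omega*C))
print(Z_parallel)
```

Because the impedances are complex numbers once s is fixed, the algebraic manipulation promised by the text reduces to ordinary complex arithmetic.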
When applying these relationships to electrical or fluid elements, there is rarely any
confusion about what constitutes series and parallel. In the mobility analogy, however, a pair
of springs connected end to end are in parallel because they experience a common force,
regardless of the topological appearance, whereas springs connected side by side are in series
because they experience a common velocity difference.* For a pair of springs end to end,
the total admittance is
    s/k_total = s/k_1 + s/k_2

so the impedance is

    k_total/s = k_1 k_2 / [s(k_1 + k_2)]
For the same springs side by side, the total impedance is
*For many, the appeal of the Firestone analogy is that springs are equivalent to inductors, and there can
be no ambiguity about series and parallel connections. End to end is series.
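The end-to-end result above, and the side-by-side case the excerpt breaks off before completing, can both be checked numerically. By Eq. (13), side-by-side springs with a common velocity difference should combine as Z = (k_1 + k_2)/s, i.e., stiffnesses add; that expectation, like the stiffness values below, is our own illustration rather than the handbook's text:

```python
# Sketch: recover equivalent stiffnesses from combined spring impedances Z = k/s.

def spring_Z(k, s):
    # Mobility-analogy impedance of a spring (Table 1): Z(s) = k/s.
    return k / s

k1, k2 = 2000.0, 3000.0   # N/m, arbitrary illustrative stiffnesses
s = 1j * 50.0             # arbitrary evaluation point on the j*omega axis

# End to end: common force -> parallel -> admittances add (Eq. 14).
Z_end = 1 / (1 / spring_Z(k1, s) + 1 / spring_Z(k2, s))
k_end = (Z_end * s).real           # since Z = k/s, the stiffness is Z*s

# Side by side: common velocity difference -> series -> impedances add (Eq. 13).
Z_side = spring_Z(k1, s) + spring_Z(k2, s)
k_side = (Z_side * s).real

print(k_end)     # k1*k2/(k1 + k2)
print(k_side)    # k1 + k2
```

The recovered stiffnesses match the familiar mechanical results for springs in "series" and "parallel" in the everyday sense, even though the mobility analogy swaps the labels.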
Mechanical Engineers' Handbook, Third Edition
Instrumentation, Systems, Controls, and MEMS

Edited by Myer Kutz

JOHN WILEY & SONS, INC.
This book is printed on acid-free paper.

Copyright © 2006 by John Wiley & Sons, Inc. All rights reserved.

Published by John Wiley & Sons, Inc., Hoboken, New Jersey. Published simultaneously in Canada.

No part of this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, electronic, mechanical, photocopying, recording, scanning, or otherwise, except as permitted under Section 107 or 108 of the 1976 United States Copyright Act, without either the prior written permission of the Publisher, or authorization through payment of the appropriate per-copy fee to the Copyright Clearance Center, Inc., 222 Rosewood Drive, Danvers, MA 01923, (978) 750-8400, fax (978) 750-4470, or on the web at www.copyright.com. Requests to the Publisher for permission should be addressed to the Permissions Department, John Wiley & Sons, Inc., 111 River Street, Hoboken, NJ 07030, (201) 748-6011, fax (201) 748-6008, or online at http://www.wiley.com/go/permission.

Limit of Liability/Disclaimer of Warranty: While the publisher and author have used their best efforts in preparing this book, they make no representations or warranties with respect to the accuracy or completeness of the contents of this book and specifically disclaim any implied warranties of merchantability or fitness for a particular purpose. No warranty may be created or extended by sales representatives or written sales materials. The advice and strategies contained herein may not be suitable for your situation. The publisher is not engaged in rendering professional services, and you should consult a professional where appropriate. Neither the publisher nor author shall be liable for any loss of profit or any other commercial damages, including but not limited to special, incidental, consequential, or other damages.

For general information on our other products and services, please contact our Customer Care Department within the United States at (800) 762-2974, outside the United States at (317) 572-3993 or fax (317) 572-4002. Wiley also publishes its books in a variety of electronic formats. Some content that appears in print may not be available in electronic books. For more information about Wiley products, visit our web site at www.wiley.com.

Library of Congress Cataloging-in-Publication Data:

Mechanical engineers' handbook/edited by Myer Kutz.—3rd ed.
p. cm.
Includes bibliographical references and index.
ISBN-13 978-0-471-44990-4
ISBN-10 0-471-44990-3 (cloth)
1. Mechanical engineering—Handbooks, manuals, etc. I. Kutz, Myer.
TJ151.M395 2005
621—dc22
2005008603

Printed in the United States of America.

10 9 8 7 6 5 4 3 2 1
To Bill and Judy, always there
Contents

Preface ix
Vision Statement xi
Contributors xiii

PART 1  INSTRUMENTATION 1

1. Instrument Statics 3
   Jerry Lee Hall, Sriram Sundararajan, and Mahmood Naim
2. Input and Output Characteristics 32
   Adam C. Bell
3. Bridge Transducers 69
   Patrick L. Walter
4. Measurements 116
   E. L. Hixson and E. A. Ripperger
5. Temperature and Flow Transducers 131
   Robert J. Moffat
6. Signal Processing 189
   John Turnbull
7. Data Acquisition and Display Systems 209
   Philip C. Milliman
8. Digital Integrated Circuits: A Practical Application 239
   Todd Rhoad and Keith Folken

PART 2  SYSTEMS, CONTROLS, AND MEMS 255

9. Systems Engineering: Analysis, Design, and Information Processing for Analysis and Design 257
   Andrew P. Sage
10. Mathematical Models of Dynamic Physical Systems 300
    K. Preston White, Jr.
11. Basic Control Systems Design 383
    William J. Palm III
12. Closed-Loop Control System Analysis 443
    Suhada Jayasuriya
13. Control System Performance Modification 503
    Suhada Jayasuriya
14. Servoactuators for Closed-Loop Control 542
    Karl N. Reid and Syed Hamid
15. Controller Design 620
    Thomas Peter Neal
16. General-Purpose Control Devices 678
    James H. Christensen, Robert J. Kretschmann, Sujeet Chand, and Kazuhiko Yokoyama
17. State-Space Methods for Dynamic Systems Analysis 717
    Krishnaswamy Srinivasan
18. Control System Design Using State-Space Methods 757
    Krishnaswamy Srinivasan
19. Neural Networks in Feedback Control Systems 791
    F. L. Lewis and Shuzhi Sam Ge
20. Mechatronics 826
    Shane Farritor
21. Introduction to Microelectromechanical Systems (MEMS): Design and Application 863
    M. E. Zaghloul

Index 877
Preface

The second volume of the third edition of the Mechanical Engineers' Handbook ("ME3") comprises two major parts: Part 1, Instrumentation, with eight chapters, and Part 2, Systems, Controls, and MEMS, with 13 chapters. The two parts are linked in the sense that most feedback control systems require measurement transducers. Most of the chapters in this volume originated not only in earlier editions of the Mechanical Engineers' Handbook but also in a book called Instrumentation and Control, which was edited by Chester L. Nachtigal and published by Wiley in 1990. Some of these chapters have been either updated or extensively revised. Some have been replaced. Others, which present timeless, fundamental concepts, have been included without change.1 In addition, there are chapters that are entirely new, including Digital Integrated Circuits: A Practical Application (Chapter 8), Neural Networks in Control Systems (Chapter 19), Mechatronics (Chapter 20), and Introduction to Microelectromechanical Systems (MEMS): Design and Application (Chapter 21).

The instrumentation chapters basically are arranged, as they were in the Nachtigal volume, in the order of the flow of information in real measurement systems. These chapters start with fundamentals of transducer design, present transducers used by mechanical engineers, including strain gages, temperature transducers such as thermocouples and thermistors, and flowmeters, and then discuss issues involved in processing signals from transducers and in acquiring and displaying data. A general chapter on measurement fundamentals, updated from the second edition of the Mechanical Engineers' Handbook ("ME2"), as well as the chapter on digital integrated circuits, have been added to the half-dozen Instrumentation and Control chapters in this first part.

The systems and control chapters in the second part of this volume start with three chapters from ME2, two of which have been updated, and move on to seven chapters from Nachtigal, only two of which required updating. These ten chapters present a general discussion of systems engineering; fundamentals of control system design, analysis, and performance modification; and detailed information about the design of servoactuators, controllers, and general-purpose control devices. This second part of Vol. II concludes with the chapters, all of them new to the handbook, on what are termed "new departures": neural networks, mechatronics, and MEMS. These topics have become increasingly important to mechanical engineers in recent years.

1 A new edition of Instrumentation and Control has been sought after but has never appeared. Because several chapters had numerous contributors, it proved impossible to update or revise them or even to find anyone to write new chapters on the same topics on the schedule that other contributors could meet. Because the material in these chapters was outdated, they have been dropped from this edition, but may be revised for future editions.
Vision for the Third Edition

Basic engineering disciplines are not static, no matter how old and well established they are. The field of mechanical engineering is no exception. Movement within this broadly based discipline is multidimensional. Even the classic subjects on which the discipline was founded, such as mechanics of materials and heat transfer, continue to evolve. Mechanical engineers continue to be heavily involved with disciplines allied to mechanical engineering, such as industrial and manufacturing engineering, which are also constantly evolving. Advances in other major disciplines, such as electrical and electronics engineering, have significant impact on the work of mechanical engineers. New subject areas, such as neural networks, suddenly become all the rage.

In response to this exciting, dynamic atmosphere, the Mechanical Engineers' Handbook is expanding dramatically, from one volume to four volumes. The third edition not only incorporates updates and revisions to chapters in the second edition, which was published in 1998, but also adds 24 chapters on entirely new subjects, along with updates and revisions to chapters in the Handbook of Materials Selection, which was published in 2002, and to chapters in Instrumentation and Control, edited by Chester Nachtigal and published in 1990. The four volumes of the third edition are arranged as follows:

Volume I: Materials and Mechanical Design—36 chapters
  Part 1. Materials—14 chapters
  Part 2. Mechanical Design—22 chapters

Volume II: Instrumentation, Systems, Controls, and MEMS—21 chapters
  Part 1. Instrumentation—8 chapters
  Part 2. Systems, Controls, and MEMS—13 chapters

Volume III: Manufacturing and Management—24 chapters
  Part 1. Manufacturing—12 chapters
  Part 2. Management, Finance, Quality, Law, and Research—12 chapters

Volume IV: Energy and Power—31 chapters
  Part 1. Energy—15 chapters
  Part 2. Power—16 chapters

The mechanical engineering literature is extensive and has been so for a considerable period of time. Many textbooks, reference works, and manuals as well as a substantial number of journals exist. Numerous commercial publishers and professional societies, particularly in the United States and Europe, distribute these materials. The literature grows continuously, as applied mechanical engineering research finds new ways of designing, controlling, measuring, making and maintaining things, and monitoring and evaluating technologies, infrastructures, and systems.

Most professional-level mechanical engineering publications tend to be specialized, directed to the specific needs of particular groups of practitioners. Overall, however, the mechanical engineering audience is broad and multidisciplinary. Practitioners work in a variety of organizations, including institutions of higher learning, design, manufacturing, and consulting firms as well as federal, state, and local government agencies. A rationale for an expanded general mechanical engineering handbook is that every practitioner, researcher, and bureaucrat cannot be an expert on every topic, especially in so broad and multidisciplinary a field, and may need an authoritative professional summary of a subject with which he or she is not intimately familiar.

Starting with the first edition, which was published in 1986, our intention has always been that the Mechanical Engineers' Handbook stand at the intersection of textbooks, research papers, and design manuals. For example, we want the handbook to help young engineers move from the college classroom to the professional office and laboratory where they may have to deal with issues and problems in areas they have not studied extensively in school.

With this expanded third edition, we have produced a practical reference for the mechanical engineer who is seeking to answer a question, solve a problem, reduce a cost, or improve a system or facility. The handbook is not a research monograph. The chapters offer design techniques, illustrate successful applications, or provide guidelines to improving the performance, the life expectancy, the effectiveness, or the usefulness of parts, assemblies, and systems. The purpose is to show readers what options are available in a particular situation and which option they might choose to solve problems at hand.

The aim of this expanded handbook is to serve as a source of practical advice to readers. We hope that the handbook will be the first information resource a practicing engineer consults when faced with a new problem or opportunity—even before turning to other print sources, even officially sanctioned ones, or to sites on the Internet. (The second edition has been available online on knovel.com.) In each chapter, the reader should feel that he or she is in the hands of an experienced consultant who is providing sensible advice that can lead to beneficial action and results.

Can a single handbook, even spread out over four volumes, cover this broad, interdisciplinary field? We have designed the third edition of the Mechanical Engineers' Handbook as if it were serving as a core for an Internet-based information source. Many chapters in the handbook point readers to information sources on the Web dealing with the subjects addressed. Furthermore, where appropriate, enough analytical techniques and data are provided to allow the reader to employ a preliminary approach to solving problems.

The contributors have written, to the extent their backgrounds and capabilities make possible, in a style that reflects practical discussion informed by real-world experience. We would like readers to feel that they are in the presence of experienced teachers and consultants who know about the multiplicity of technical issues that impinge on any topic within mechanical engineering. At the same time, the level is such that students and recent graduates can find the handbook as accessible as experienced engineers.
Contributors

Adam C. Bell, Dartmouth, Nova Scotia, Canada
Sujeet Chand, Rockwell Automation, Milwaukee, Wisconsin
James H. Christensen, Holobloc, Inc., Cleveland Heights, Ohio
Shane Farritor, University of Nebraska–Lincoln, Lincoln, Nebraska
Keith Folken, Peoria, Illinois
Shuzhi Sam Ge, National University of Singapore, Singapore
Jerry Lee Hall, Hall-Wade Engineering Services and Iowa State University, Ames, Iowa
Syed Hamid, Halliburton Services, Duncan, Oklahoma
E. L. Hixson, University of Texas, Austin, Texas
Suhada Jayasuriya, Texas A&M University, College Station, Texas
Robert J. Kretschmann, Rockwell Automation, Mayfield Heights, Ohio
F. L. Lewis, University of Texas at Arlington, Fort Worth, Texas
Philip C. Milliman, Weyerhaeuser Company, Federal Way, Washington
Robert J. Moffat, Stanford University, Stanford, California
Mahmood Naim, Union Carbide Corporation, Indianapolis, Indiana
Thomas Peter Neal, Lake View, New York
William J. Palm III, University of Rhode Island, Kingston, Rhode Island
Karl N. Reid, Oklahoma State University, Stillwater, Oklahoma
Todd Rhoad, Austin, Texas
E. A. Ripperger, University of Texas, Austin, Texas
Andrew P. Sage, George Mason University, Fairfax, Virginia
Krishnaswamy Srinivasan, The Ohio State University, Columbus, Ohio
Sriram Sundararajan, Iowa State University, Ames, Iowa
John Turnbull, Case Western Reserve University, Cleveland, Ohio
Patrick L. Walter, Texas Christian University, Fort Worth, Texas
K. Preston White, Jr., University of Virginia, Charlottesville, Virginia
Kazuhiko Yokoyama, Yaskawa Electric Corporation, Tokyo, Japan
M. E. Zaghloul, The George Washington University, Washington, D.C.
CHAPTER 1
INSTRUMENT STATICS

Jerry Lee Hall, Department of Mechanical Engineering, Iowa State University, Ames, Iowa
Sriram Sundararajan, Department of Mechanical Engineering, Iowa State University, Ames, Iowa
Mahmood Naim, Union Carbide Corporation, Indianapolis, Indiana

Mechanical Engineers' Handbook: Instrumentation, Systems, Controls, and MEMS, Volume 2, Third Edition. Edited by Myer Kutz. Copyright © 2006 by John Wiley & Sons, Inc.

1 TERMINOLOGY 3
  1.1 Transducer Characteristics 3
  1.2 Definitions 4
2 STATIC CALIBRATION 6
  2.1 Calibration Process 6
  2.2 Fitting Equations to Calibration Data 6
3 STATISTICS IN THE MEASUREMENT PROCESS 9
  3.1 Unbiased Estimates 9
  3.2 Sampling 9
  3.3 Types of Errors 10
  3.4 Propagation of Error or Uncertainty 10
  3.5 Uncertainty Interval 12
  3.6 Amount of Data to Take 14
  3.7 Goodness of Fit 15
  3.8 Probability Density Functions 16
  3.9 Determination of Confidence Limits on μ 17
  3.10 Confidence Limits on Regression Lines 18
  3.11 Inference and Comparison 22
REFERENCES 31

1 TERMINOLOGY

1.1 Transducer Characteristics

A measurement system extracts information about a measurable quantity from some medium of interest and communicates this measured data to the observer. The measurement of any variable is accomplished by an instrumentation system composed of transducers. Each transducer is an energy conversion device and requires energy transfer into the device before the variable of interest can be detected. The Instrument Society of America (ISA) defines transducer as "a device that provides usable output in response to a specified measurand." The measurand is "a physical quantity, property or condition which is measured." The output is "the electrical quantity, produced by a transducer, which is a function of the applied measurand."1

It should be made very clear that the act of measurement involves transfer of energy between the measured medium and the measuring system and hence the measured quantity
  • 19. 4 Instrument Statics is disturbed to some extent, making a perfect measurement unrealistic. Therefore, pressure cannot be measured without an accompanying change in volume, force cannot be measured without an accompanying change in length, and voltage cannot be measured without an accompanying flow of charge. Instead measures must be taken to minimize the energy trans- fer from the source to be measured if the measurement is to be accurate. There are several categorical characteristics for a transducer (or measurement system). When the measurand maintains a steady value or varies very slowly with time, transducer performance can be described in terms of static characteristics. Instruments associated with rapidly varying measurands require additional qualifications termed dynamic characteristics. Other performance descriptors include environmental characteristics (for situations involving varying environmental operating conditions), reliability characteristics (related to the life expectancy of the instrument under various operating conditions), theoretical characteristics (describing the ideal behavior of the instrument in terms of mathematical or graphical rela- tionships), and noise characteristics (external factors that can contribute to the measurement process such as electromagnetic surroundings, humidity, acoustic and thermal vibrations, etc.). In this chapter, we will describe the considerations associated with evaluating numerical values for the static characteristics of an instrument. 1.2 Definitions The description of a transducer and its role in a measuring system is based on most of the definitions that follow. Further details of these definitions can be found in other works.2–4 Static calibration is the process of measuring the static characteristics of an instrument. This involves applying a range of known values of static input to the instrument and recording the corresponding outputs. 
The data obtained are presented in tabular or graphical form.

Range is defined by the upper and lower limits of the values that an instrument can measure. Instruments are designed to provide predictable performance and, often, enhanced linearity over the specified range.

Sensitivity is defined as the change in the output signal relative to the change in the input signal at an operating point. Sensitivity may be constant over the range of the input signal to a transducer or it may vary. Instruments that have a constant sensitivity are called "linear."

Resolution is defined as the smallest change in the input signal that will yield a readable change in the output of the measuring system at its operating point.

Threshold of an instrument is the minimum input for which there will be an output. Below this minimum input the instrument will read zero.

Zero of an instrument refers to a selected datum. The output of an instrument is adjusted to read zero at a predefined point in the measured range. For example, the output of a Celsius thermometer is zero at the freezing point of water; the output of a pressure gage may be zero at atmospheric pressure.

Zero drift is the change in output from its set zero value over a specified period of time. Zero drift occurs due to changes in ambient conditions, changes in electrical conditions, aging of components, or mechanical damage. The error introduced may be significant when a transducer is used for long-term measurement.

Creep is a change in output occurring over a specific time period while the measurand is held constant at a value other than zero and all environmental conditions are held constant.
Figure 1 Schematics illustrating concepts of (a) accuracy and precision, (b) hysteresis, (c) a static error band, and (d) fitting a curve to the calibration data.

Accuracy is the maximum amount of difference between a measured variable and its true value. It is usually expressed as a percentage of full-scale output. In the strictest sense, accuracy is never known because the true value is never really known.

Precision is the difference between a measured variable and the best estimate (as obtained from the measured variable) of the true value of the measured variable. It is a measure of repeatability. Precise measurements have small dispersion but may have poor accuracy if they are not close to the true value. Figure 1a shows the differences between accuracy and precision.

Linearity describes the maximum deviation of the output of an instrument from a best-fitting straight line through the calibration data. Most instruments are designed so that the output is a linear function of the input. Linearity is based on the type of straight line fitted to the calibration data. For example, least-squares linearity is referenced to that straight line for which the sum of the squares of the residuals is minimized. The
term "residual" refers to the deviations of output readings from their corresponding values on the straight line fitted through the data.

Hysteresis is the maximum difference in output, at any measured value within the specified range, when the value is approached first with increasing and then with decreasing measurand. Hysteresis is typically caused by a lag in the action of the sensing element of the transducer. Loading the instrument through a cycle of first increasing, then decreasing, values of the measurand provides a hysteresis loop, as shown in Fig. 1b. Hysteresis is usually expressed in percent of full-scale output.

Error band is the band of maximum deviation of output values from a specified reference line or curve. A static error band (see Fig. 1c) is obtained by static calibration. It is determined on the basis of maximum deviations observed over at least two consecutive calibration cycles so as to include repeatability. Error band accounts for deviations that may be due to nonlinearity, nonrepeatability, hysteresis, zero shift, sensitivity shift, and so forth. It is a convenient way to specify transducer behavior when individual types of deviations need not be specified or determined.5

2 STATIC CALIBRATION

2.1 Calibration Process

Calibration is the process of comparing the output of a measuring system to the values of a range of known inputs. For example, a pressure gage is calibrated by a device called a "dead-weight" tester, where known pressures are applied to the gage and the output of the gage is recorded over its complete range of operation. The calibration signal should, as closely as possible, be the same as the type of input signal to be measured. Most calibrations are performed by means of static or level calibration signals, since they are usually easy to produce and maintain accurately.
However, a measuring system calibrated with static signals may not read correctly when subjected to dynamic input signals, since the natural dynamic characteristics and the response characteristics of the measurement system to the input forcing function are not accounted for by a static calibration. A measurement system used for dynamic signals should be calibrated using known dynamic inputs.

A static calibration should include both increasing and decreasing values of the known input signal and a repetition of the input signal.6 This allows one to determine hysteresis as well as the repeatability of the measuring system, as shown in Fig. 1c. The sensitivity of the measuring system is obtained from the slope of a suitable line or curve plotted through the calibration points at any level of the input signal.

2.2 Fitting Equations to Calibration Data

Though linear in most cases, the calibration plot of a specific measurement system may require a choice of a nonlinear functional form for the relationship that best describes the calibration data, as shown in Fig. 1d. This functional form (or curve fit) may be a standard polynomial type or may be of a transcendental function type. Statistics are used to fit a desired function to the calibration data. A detailed description of the mathematical basis of the selection process used to determine the appropriate function to fit the data can be found elsewhere.7 Most of today's graphing software allows the user to select the type of fit required. A very common method used to describe the quality of "fit" of a chosen functional form is
the "least-squares fit." The principle used in making this type of curve fit is to minimize the sum of the squares of the deviations of the data from the assumed curve. These deviations from the assumed curve may be due to errors in one or more variables. If the error is in one variable, the technique is called linear regression and is the common case encountered in engineering measurements. If several variables are involved, it is called multiple regression. Two assumptions are often used with the least-squares method: (i) the x variable (usually the input to the calibration process) has relatively little error compared to the y (measured) variable and (ii) the magnitude of the uncertainty in y is not dependent on the magnitude of the x variable. The methodology for evaluating calibration curves in systems where the magnitude of the uncertainty in the measured value varies with the value of the input variable can be found elsewhere.8

Although almost all graphing software packages include the least-squares fit analysis, thus enabling the user to identify the best-fit curve with minimum effort, a brief description of the mathematical process is given here. To illustrate the least-squares technique, assume that an equation of the following polynomial form will fit a given set of data:

$$y = a + bx + cx^2 + \cdots + mx^k \qquad (1)$$

If the data points are denoted by $(x_i, y_i)$, where $i$ ranges from 1 to $n$, then the sum of the squared residuals is

$$R = \sum_{i=1}^{n} (y_i - y)^2 \qquad (2)$$

The least-squares method requires that $R$ be minimized. The parameters used for the minimization are the unknown coefficients $a, b, c, \ldots, m$ in the assumed equation. The following differentiation yields $k + 1$ equations, called "normal equations," to determine the $k + 1$ coefficients in the assumed relation. The coefficients $a, b, c, \ldots$
$\ldots, m$ are found by solving the normal equations simultaneously:

$$\frac{\partial R}{\partial a} = \frac{\partial R}{\partial b} = \frac{\partial R}{\partial c} = \cdots = \frac{\partial R}{\partial m} = 0 \qquad (3)$$

For example, if $k = 1$, the polynomial is of first degree (a straight line) and the normal equations become

$$\sum y_i = an + b\sum x_i \qquad \sum x_i y_i = a\sum x_i + b\sum x_i^2 \qquad (4)$$

and the coefficients $a$ and $b$ are

$$a = \frac{\sum x^2 \sum y - \sum x \sum xy}{n\sum x^2 - \left(\sum x\right)^2} \qquad b = \frac{n\sum xy - \sum x \sum y}{n\sum x^2 - \left(\sum x\right)^2} \qquad (5)$$

The resulting curve ($y = a + bx$) is called the regression curve of $y$ on $x$. It can be shown that a regression curve fit by the least-squares method passes through the centroid $(\bar{x}, \bar{y})$ of the data.9 If two new variables $X$ and $Y$ are defined as

$$X = x - \bar{x} \qquad Y = y - \bar{y} \qquad (6)$$

then

$$\sum X = 0 = \sum Y \qquad (7)$$

Substitution of these new variables in the normal equations for a straight line yields the following result for $a$ and $b$:
Table 1 Statistical Analysis for Example 1

Assumed Equation   | a       | b        | Residual Error, R | Maximum Deviation | Correlation Coefficient*
y = bx             | —       | 1.254    | 56.767            | 10.245            | 0.700
y = a + bx         | 9.956   | 0.907    | 20.249            | 7.178             | 0.893
y = a e^(bx)       | 10.863  | 0.040    | 70.274            | 18.612            | 0.581
y = 1/(a + bx)     | 0.098   | −0.002   | 14257.327         | 341.451           | −74.302
y = a + b/x        | 40.615  | −133.324 | 32.275            | 9.326             | 0.830
y = a + b log x    | −14.188 | 15.612   | 1.542             | 2.791             | 0.992
y = ax^b           | 3.143   | 0.752    | 20.524            | 8.767             | 0.892
y = x/(a + bx)     | 0.496   | 0.005    | 48.553            | 14.600            | 0.744

*Defined in Section 3.7.

$$a = 0 \qquad b = \frac{\sum XY}{\sum X^2} \qquad (8)$$

The regression line becomes

$$Y = bX \qquad (9)$$

The technique described above will yield a curve based on an assumed form that will fit a set of data. This curve may not be the best one that could be found, but it will be the best based on the assumed form. Therefore, the "goodness of fit" must be determined to check that the fitted curve follows the physical data as closely as possible.

Example 1 Choice of Functional Form. Find a suitable equation to represent the following calibration data:

x = [3, 4, 5, 7, 9, 12, 13, 14, 17, 20, 23, 25, 34, 38, 42, 45]
y = [5.5, 7.75, 10.6, 13.4, 18.5, 23.6, 26.2, 27.8, 30.5, 33.5, 35, 35.4, 41, 42.1, 44.6, 46.2]

Solution: A computer program can be written, or graphing software (e.g., Microsoft Excel) used, to fit the data to several assumed forms, as given in Table 1. The data can be plotted and the best-fitting curve selected on the basis of minimum residual error, maximum correlation coefficient, or smallest maximum absolute deviation, as shown in Table 1. The analysis shows that the assumed equation y = a + b log x represents the best fit through the data, as it has the smallest maximum deviation and the highest correlation coefficient. Also note that the equation y = 1/(a + bx) is not appropriate for these data because it has a negative correlation coefficient.

Example 2 Nonlinear Regression.
Find the regression coefficients $a$, $b$, and $c$ if the assumed behavior of the $(x, y)$ data is $y = a + bx + cx^2$:

x = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
y = [0.26, 0.38, 0.55, 0.70, 1.05, 1.36, 1.75, 2.20, 2.70, 3.20, 3.75, 4.40, 5.00, 6.00]
From Eqs. (1)–(3) the normal equations are

$$\sum y = an + b\sum x + c\sum x^2$$
$$\sum xy = a\sum x + b\sum x^2 + c\sum x^3 \qquad (10)$$
$$\sum x^2 y = a\sum x^2 + b\sum x^3 + c\sum x^4$$

A simultaneous solution of these equations provides the desired regression coefficients:

$$a = 0.1959 \qquad b = -0.0205 \qquad c = 0.0266 \qquad (11)$$

As was mentioned previously, a measurement process can only give the best estimate of the measurand. In addition, engineering measurements taken repeatedly under seemingly identical conditions normally show variations in measured values. A statistical treatment of measurement data is therefore a necessity.

3 STATISTICS IN THE MEASUREMENT PROCESS

3.1 Unbiased Estimates

Data sets typically have two very important characteristics: central tendency (or most representative value) and dispersion (or scatter). Other characteristics such as skewness and kurtosis (or peakedness) may also be of importance but will not be considered here.10

A basic problem in every quantitative experiment is that of obtaining an unbiased estimate of the true value of a quantity as well as an unbiased measure of the dispersion or uncertainty in the measured variable. Philosophically, in any measurement process a deterministic event is observed through a "foggy" window. If so, ultimate refinement of the measuring system would cause all measured values to equal the true value μ. Because errors occur in all measurements, one can never exactly measure the true value of any quantity. Continued refinement of the methods used in any measurement will yield closer and closer approximations, but there is always a limit beyond which refinements cannot be made. To determine the relation a measured value has with the true value, we must specify the unbiased estimate of the true value μ of a measurement and its uncertainty (or precision) interval $W_x$ based on a desired confidence level (or probability of occurrence).
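The simultaneous solution of the normal equations in Example 2 can be checked numerically. Below is a minimal sketch in plain Python; the small Gaussian-elimination helper is an illustrative addition, not part of the text:

```python
# Least-squares quadratic fit y = a + b*x + c*x**2 via the normal equations, Eq. (10).
# Data are those of Example 2.
x = [2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15]
y = [0.26, 0.38, 0.55, 0.70, 1.05, 1.36, 1.75, 2.20,
     2.70, 3.20, 3.75, 4.40, 5.00, 6.00]

n = len(x)
S = lambda p, q=0: sum(xi**p * yi**q for xi, yi in zip(x, y))  # sum of x^p * y^q

# Normal equations:  [n    Sx   Sx2 ] [a]   [Sy  ]
#                    [Sx   Sx2  Sx3 ] [b] = [Sxy ]
#                    [Sx2  Sx3  Sx4 ] [c]   [Sx2y]
A   = [[n,    S(1), S(2)],
       [S(1), S(2), S(3)],
       [S(2), S(3), S(4)]]
rhs = [S(0, 1), S(1, 1), S(2, 1)]

def solve3(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    m = [row[:] + [bi] for row, bi in zip(A, b)]
    k = len(m)
    for i in range(k):
        p = max(range(i, k), key=lambda r: abs(m[r][i]))
        m[i], m[p] = m[p], m[i]
        for r in range(i + 1, k):
            f = m[r][i] / m[i][i]
            m[r] = [mr - f * mi for mr, mi in zip(m[r], m[i])]
    sol = [0.0] * k
    for i in reversed(range(k)):
        sol[i] = (m[i][k] - sum(m[i][j] * sol[j] for j in range(i + 1, k))) / m[i][i]
    return sol

a, b, c = solve3(A, rhs)
print(a, b, c)  # regression coefficients, cf. Eq. (11)
```

Solving the system reproduces the coefficients quoted in Eq. (11) to the precision printed in the text.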
An unbiased estimator9 exists if the mean of its distribution is the same as the quantity being estimated. Thus, for the sample mean $\bar{x}$ to be an unbiased estimator of the population mean μ, the mean of the distribution of sample means must be equal to the population mean.

3.2 Sampling

Unbiased estimates for determining population mean, population variance, and variance of the sample mean depend on the type of sampling procedure used.

Sampling with Replacement (Random Sampling)

$$\hat{\mu} = \bar{x} \qquad (12)$$

where $\bar{x}$ is the sample mean and $\hat{\mu}$ is the unbiased estimate of the population mean μ;
$$\hat{\sigma}^2 = \left(\frac{n}{n-1}\right) S^2$$

where $S^2$ is the sample variance

$$S^2 = \frac{\sum (x_i - \bar{x})^2}{n} \qquad (13)$$

and

$$\hat{\sigma}_{\bar{x}}^2 = \frac{\hat{\sigma}^2}{n} \qquad (14)$$

where $\hat{\sigma}_{\bar{x}}^2$ is the variance of the mean.

Sampling without Replacement (Usual Case)

$$\hat{\mu} = \bar{x} \qquad (15)$$

$$\hat{\sigma}^2 = \left(\frac{n}{n-1}\right)\left(\frac{N-1}{N}\right) S^2 \qquad (16)$$

where $N$ is the population size and $n$ the sample size, and

$$\hat{\sigma}_{\bar{x}}^2 = \frac{\hat{\sigma}^2}{n}\left(\frac{N-n}{N-1}\right) \qquad (17)$$

Note that sampling without replacement from an extremely large population is equivalent to random sampling.

3.3 Types of Errors

There are at least three types of errors that one must consider in making measurements: systematic (or fixed) errors, illegitimate errors (or mistakes), and random errors.

Systematic errors are of consistent form. They result from conditions or procedures that are correctable. This type of error may generally be eliminated by calibration.

Illegitimate errors are mistakes and should not exist. They may be eliminated by using care in the experiment, proper measurement procedures, and repetition of the measurement.

Random errors are accidental errors that occur in all measurements. They are characterized by their inconsistent nature, and their origin cannot be determined in the measurement process. These errors are estimated by statistical analysis.

If the illegitimate errors can be eliminated by care and proper measurement procedures and the systematic errors can be eliminated by calibrating the measurement system, then the random errors remain to be determined by statistical analysis to yield the precision of the measurement.

3.4 Propagation of Error or Uncertainty

In many cases the desired quantity cannot be measured directly but must be calculated from the most representative value (e.g., the mean) of two or more measured quantities. It is desirable to know the uncertainty or precision of such calculated quantities. Precision is specified by quantities called precision indexes (denoted by $W_x$) that are calculated from the random errors of a set of measurements.
A $\pm W_x$ should be specified for
every measured variable. The confidence limits or probability for obtaining the range $\pm W_x$ is generally specified directly or is implied by the particular type of precision index being used.

The precision index of a calculated quantity depends on the precision indexes of the measured quantities required for the calculation.9 If the measured quantities are determined independently and if their distribution about a measure of central tendency is approximately symmetrical, the following "propagation-of-error" equation is valid11:

$$W_R^2 = \sum \left(\frac{\partial R}{\partial x_i}\right)^2 W_{x_i}^2 \qquad (18)$$

In this equation, $R$ represents the calculated quantity and $x_1, x_2, \ldots, x_n$ represent the measured independent variables, so that mathematically $R = f(x_1, x_2, \ldots, x_n)$. The precision index is a measure of dispersion about the central tendency and is denoted by $W$ in Eq. (18). The standard deviation is often used for $W$; however, any precision index will do as long as the same type of precision index is used in each term of the equation.

A simplified form of this propagation-of-error equation results if the function $R$ has the form

$$R = k\,x_1^a x_2^b x_3^c \cdots x_n^m \qquad (19)$$

where the exponents $a, b, \ldots, m$ may be positive or negative, integer or noninteger. The simplified result for the precision $W_R$ in $R$ is

$$\left(\frac{W_R}{R}\right)^2 = a^2\left(\frac{W_{x_1}}{x_1}\right)^2 + b^2\left(\frac{W_{x_2}}{x_2}\right)^2 + \cdots + m^2\left(\frac{W_{x_n}}{x_n}\right)^2 \qquad (20)$$

The propagation-of-error equation is also used in planning experiments. If a certain precision is desired on the calculated result $R$, the precision of the measured variables can be determined from this equation. The cost of a proposed measurement system can then be determined, as it is directly related to precision.

Example 3 Propagation of Uncertainty. Determine the resistivity and its uncertainty for a conducting wire of circular cross section from measurements of resistance, length, and diameter.
Given

$$R = \rho\,\frac{L}{A} = \rho\,\frac{4L}{\pi D^2} \quad\text{or}\quad \rho = \frac{\pi D^2 R}{4L} \qquad (21)$$

$$R = 0.0959 \pm 0.0001\ \Omega \qquad L = 250 \pm 2.5\ \text{cm} \qquad D = 0.100 \pm 0.001\ \text{cm}$$

where R = wire resistance, Ω; L = wire length, cm; A = cross-sectional area = πD²/4, cm²; ρ = wire resistivity, Ω·cm.

Solution: The resistivity is

$$\rho = \frac{(\pi)(0.100)^2(0.0959)}{4(250)} = 3.01 \times 10^{-6}\ \Omega\cdot\text{cm}$$

The propagation-of-variance (or precision index) equation for ρ reduces to the simplified form, that is,
$$\left(\frac{W_\rho}{\rho}\right)^2 = 4\left(\frac{W_D}{D}\right)^2 + \left(\frac{W_R}{R}\right)^2 + \left(\frac{W_L}{L}\right)^2 = 4\left(\frac{0.001}{0.10}\right)^2 + \left(\frac{0.0001}{0.0959}\right)^2 + \left(\frac{2.5}{250}\right)^2$$
$$= 4.00 \times 10^{-4} + 1.09 \times 10^{-6} + 1.00 \times 10^{-4} = 5.01 \times 10^{-4}$$

The resulting resistivity ρ and its precision $W_\rho$ are

$$W_\rho = \rho\sqrt{5.01 \times 10^{-4}} = \pm 6.74 \times 10^{-8} \qquad \rho = (3.01 \pm 0.07) \times 10^{-6}\ \Omega\cdot\text{cm}$$

3.5 Uncertainty Interval

When several measurements of a variable have been obtained to form a data set (multisample data), the best estimates of the most representative value (mean) and dispersion (standard deviation) are obtained from the formulas in Section 3.2. When a single measurement exists (or when the data are taken so that they are equivalent to a single measurement), the standard deviation cannot be determined and the data are said to be "single-sample" data. Under these conditions the only estimate of the true value is the single measurement, and the uncertainty interval must be estimated by the observer.12 It is recommended that the precision index be estimated as the maximum reasonable error. This corresponds approximately to the 99% confidence level associated with multisample data.

Uncertainty Interval Considering Random Error

Once the unbiased estimates of mean and variance are determined from the data sample, the uncertainty interval for μ is

$$\mu = \hat{\mu} \pm \hat{W} = \hat{\mu} \pm k(\nu, \gamma)\,\hat{\sigma} \qquad (22)$$

where $\hat{\mu}$ represents the most representative value of μ from the measured data and $\hat{W}$ is the uncertainty interval or precision index associated with the estimate of μ. The magnitude of the precision index or uncertainty interval depends on the confidence level γ (or probability chosen), the amount of data n, and the type of probability distribution governing the distribution of measured items.
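The arithmetic of Example 3 is easy to check with a few lines of code; a minimal sketch of Eq. (20) applied to the resistivity measurement:

```python
import math

# Propagation of uncertainty for rho = pi * D**2 * R / (4 * L), Eq. (20):
# the exponents of D, R, and L are 2, 1, and -1, so their squares are 4, 1, 1.
R, W_R = 0.0959, 0.0001   # resistance, ohm
L, W_L = 250.0, 2.5       # length, cm
D, W_D = 0.100, 0.001     # diameter, cm

rho = math.pi * D**2 * R / (4 * L)                       # Eq. (21)
rel2 = 4 * (W_D / D)**2 + (W_R / R)**2 + (W_L / L)**2    # (W_rho / rho)**2
W_rho = rho * math.sqrt(rel2)

print(f"rho = ({rho:.2e} +/- {W_rho:.2e}) ohm-cm")
```

The script reproduces the values in the worked solution: rho ≈ 3.01 × 10⁻⁶ Ω·cm with W_rho ≈ 6.74 × 10⁻⁸ Ω·cm.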
The uncertainty interval $\hat{W}$ can be replaced by $k\hat{\sigma}$, where $\hat{\sigma}$ is the standard deviation (measure of dispersion) of the population as estimated from the sample and $k$ is a constant that depends on the probability distribution function, the confidence level γ, and the amount of data n. For example, with a Gaussian distribution the 95% confidence limits are $\hat{W} = 1.96\sigma$, where k = 1.96 and is independent of n. For a t-distribution, k = 2.78, 2.06, and 1.96 with a sample size of 5, 25, and ∞, respectively, at the 95% level of confidence probability. Note that ν = n − 1 for the t-distribution. The t-distribution is the same as the Gaussian distribution as n → ∞.

Uncertainty Interval Considering Random Error with Resolution, Truncation, and Significant Digits

The uncertainty interval $\hat{W}$ in Eq. (22) assumes a set of measured values with only random error present. Furthermore, the set of measured values is assumed to have unbounded significant digits and to have been obtained with a measuring system having infinite resolution. When finite resolution exists and truncation of digits occurs, the uncertainty interval may be larger than that predicted by consideration of the random error only. The uncertainty interval can never be less than the resolution limits or truncation limits of the measured values.
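The random-error interval of Eq. (22) can be sketched in a few lines. The five readings below are hypothetical; the t-value k = 2.78 is the one the text quotes for a sample of 5 at 95% confidence (ν = 4):

```python
import math

# Uncertainty interval for mu, Eq. (22): mu = mu_hat +/- k(nu, gamma) * sigma_hat.
data = [12.1, 11.8, 12.4, 12.0, 11.9]   # hypothetical repeated readings
n = len(data)
k = 2.78                                 # t-value, nu = 4, 95% confidence

mu_hat = sum(data) / n                               # Eq. (12)
S2 = sum((x - mu_hat)**2 for x in data) / n          # sample variance, Eq. (13)
sigma_hat = math.sqrt(S2 * n / (n - 1))              # unbiased estimate of sigma
W = k * sigma_hat                                    # precision index
print(f"mu = {mu_hat:.3f} +/- {W:.3f}")
```

For tighter limits on the mean itself, $\hat{\sigma}$ would be replaced by $\hat{\sigma}/\sqrt{n}$, as in Eq. (29) later in the chapter.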
Resolution and Truncation

Let $\{s_n\}$ be the theoretically possible set of measurements of unbounded significant digits from a measuring system of infinite resolution, and let $\{x_n\}$ be the actual set of measurements expressed to m significant digits from a measuring system of finite resolution. Then the quantity $s_i - x_i = \pm e_i$ is the resolution or truncation deficiency caused by the measurement process. The unbiased estimates of mean and variance are

$$\hat{\mu} = \frac{\sum s_i}{n} = \bar{s} \qquad \hat{\sigma}^2 = \frac{\sum (s_i - \bar{s})^2}{n - 1} \qquad (23)$$

Noting that the set $\{x_n\}$ is available rather than $\{s_n\}$, the required mean and variance are

$$\hat{\mu} = \frac{\sum x_i}{n} \pm \frac{\sum e_i}{n} = \bar{x} \pm \frac{\sum e_i}{n} \qquad \hat{\sigma}^2 = \frac{\sum (x_i - \bar{x})^2}{n - 1} \qquad (24)$$

The truncation or resolution has no effect on the estimate of variance but does affect the estimate of the mean. The truncation error $e_i$ is not necessarily distributed randomly and may all be of the same sign. Thus $\bar{x}$ can be biased as much as $\sum e_i / n = \bar{e}$ high or low from the unbiased estimate of the value of μ, so that $\hat{\mu} = \bar{x} \pm \bar{e}$.

If $e_i$ is a random variable observed through a "cloudy window" with a measuring system of finite resolution, the value of $e_i$ may be plus or minus, but its upper bound is R (the resolution of the measurement). Thus the resolution error is no larger than R, and $\hat{\mu} = \bar{x} \pm Rn/n = \bar{x} \pm R$.

If the truncation is never more than that dictated by the resolution limits (R) of the measurement system, the uncertainty in $\bar{x}$ as a measure of the most representative value of μ is never larger than R plus the uncertainty due to the random error. Thus $\hat{\mu} = \bar{x} \pm (\hat{W} + R)$. It should be emphasized that the uncertainty interval can never be less than the resolution bounds of the measurement. The resolution bounds cannot be reduced without changing the measurement system.
Significant Digits

When $x_i$ is observed to m significant digits, the uncertainty (except for random error) is never more than $\pm 5/10^m$, and the bounds on $s_i$ are equal to $x_i \pm 5/10^m$, so that

$$x_i - \frac{5}{10^m} < s_i < x_i + \frac{5}{10^m} \qquad (25)$$

The relation for $\hat{\mu}$ for m significant digits is then, from Eq. (24),

$$\hat{\mu} = \bar{x} \pm \frac{\sum e_i}{n} = \bar{x} \pm \frac{\sum (5/10^m)}{n} = \bar{x} \pm \frac{5}{10^m} \qquad (26)$$

The estimated value of variance is not affected by the constant magnitude of $5/10^m$. When the uncertainty due to significant digits is combined with the resolution limits and random error, the uncertainty interval on $\hat{\mu}$ becomes

$$\hat{\mu} = \bar{x} \pm \left(\hat{W} + R + \frac{5}{10^m}\right) \qquad (27)$$

This illustrates that the number of significant digits of a measurement should be carefully chosen in relation to the resolution limits of the measuring system so that $5/10^m$ has about the same magnitude as R. Additional significant digits would imply more accuracy to the measurement than would actually exist based on the resolving ability of the measuring system.
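Equation (27) simply stacks the three contributions. A small illustration with hypothetical numbers (the values of W, R, and m below are assumed for the sketch, not taken from the text):

```python
# Combined uncertainty interval, Eq. (27): mu_hat = xbar +/- (W + R + 5/10**m).
xbar = 9.81   # mean of the readings (hypothetical)
W = 0.020     # random-error precision index, k * sigma_hat, per Eq. (22)
R = 0.05      # resolution limit of the measuring system (assumed)
m = 2         # significant digits retained, giving 5/10**m = 0.05

half_width = W + R + 5 / 10**m    # Eq. (27)
print(f"mu = {xbar} +/- {half_width:.3f}")
```

Note that here 5/10**m = 0.05 matches R, which is exactly the balance between recorded digits and instrument resolution that the paragraph above recommends.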
3.6 Amount of Data to Take

Exactly what data to take and how much data to take are two important questions to be answered in any experiment. Assuming that the correct variables have been measured, the amount of data to be obtained can be determined by using the relation

$$\mu = \bar{\bar{x}} \pm \left(W_{\bar{x}} + R + \frac{5}{10^m}\right) \qquad (28)$$

where it is presumed that several sample sets may exist for estimation of μ and that the mean of means of the sample sets is denoted by $\bar{\bar{x}}$. This equation can be rewritten using Eqs. (13) and (14) (assuming random sampling):

$$\mu = \bar{\bar{x}} \pm \left(k(\nu,\gamma)\,\hat{\sigma}_{\bar{x}} + R + \frac{5}{10^m}\right) = \bar{\bar{x}} \pm \left(k(\nu,\gamma)\,\frac{\hat{\sigma}}{\sqrt{n}} + R + \frac{5}{10^m}\right) \qquad (29)$$

The value of n needed to bring $\mu - \bar{\bar{x}}$ within a stated percent of μ can be determined from

$$n = \left[\frac{k(\nu,\gamma)\,\hat{\sigma}}{(\%/100)\,\hat{\mu} - R - 5/10^m}\right]^2 \qquad (30)$$

This equation can only yield valid values of n once valid estimates of $\hat{\mu}$, $\hat{\sigma}$, k, R, and m are available. This means that the most correct values of n can only be obtained once the measurement system and data-taking procedure have been specified so that R and m are known. Furthermore, either a preliminary experiment or a portion of the actual experiment should be performed to obtain good estimates of $\hat{\mu}$ and $\hat{\sigma}$. Because k depends not only on the type of distribution the data follow but also on the sample size n, the solution is iterative. Thus, the most valid estimates of the amount of data to take can only be obtained after the experiment has begun. However, the equation can be quite useful for prediction purposes if one wishes to estimate values of $\hat{\mu}$, $\hat{\sigma}$, k, R, and m. This is especially important in experiments for which the cost of a single run may be relatively high.

Example 4 Amount of Data to Take. The life for a certain type of automotive tire is to be established. The mean and standard deviation of the life estimated for these tires are 84,000 and ±7,230 km, respectively, from a sample of nine tires.
On the basis of the sample, how much data are required to establish the life of this type of tire to within ±10% with 90% confidence and a resolution of 5 km?

Solution: The confidence limits are

$$\mu = \hat{\mu} \pm \left(t\,\hat{\sigma}_{\bar{x}} + R + \frac{5}{10^m}\right)$$

$$\mu - \hat{\mu} = (0.10)\bar{x} = t\,\hat{\sigma}_{\bar{x}} + R + \frac{5}{10^m} = t\,\frac{\hat{\sigma}}{\sqrt{n}} + R + \frac{5}{10^m}$$

$$\frac{t}{\sqrt{n}} = \frac{(0.10)\bar{x} - R - 5/10^m}{\hat{\sigma}} = \frac{(0.10)(84{,}000) - 5}{7230} = 1.6$$
n | ν | t(ν, 0.10)* | t/√n
2 | 1 | 6.31 | 3.65
3 | 2 | 2.92 | 1.46
5 | 4 | 2.13 | 0.87

*From a t-statistic table.9

Thus a sample of three tires is sufficient to establish the tire life within ±10% at a 90% level of confidence.

3.7 Goodness of Fit

Statistical methods can be used to fit a curve to a given data set. In general, the least-squares principle is used to minimize the sum of the squares of the deviations away from the curve to be fitted. The deviations from an assumed curve y = ƒ(x) are due to errors in y, in x, or in both y and x. In most cases the errors in the independent variable x are much smaller than those in the dependent variable y. Therefore, only the errors in y are considered for the least-squares curve.

The goodness of fit of an assumed curve is defined by the correlation coefficient r, where

$$r = \pm\sqrt{\frac{\sum(\hat{y} - \bar{y})^2}{\sum(y_i - \bar{y})^2}} = \pm\sqrt{1 - \frac{\sum(y_i - \hat{y})^2}{\sum(y_i - \bar{y})^2}} = \pm\sqrt{1 - \frac{\hat{\sigma}_{y,x}^2}{\hat{\sigma}_y^2}} \qquad (31)$$

where

Σ(y_i − ȳ)² = total variation (variation about mean)
Σ(y_i − ŷ)² = unexplained variation (variation about regression)
Σ(ŷ − ȳ)² = explained variation (variation based on assumed regression equation)
σ̂_y = estimated population standard deviation of y variable
σ̂_{y,x} = standard error of estimate of y on x

When the correlation coefficient r is zero, the data cannot be explained by the assumed curve. However, when r is close to ±1, the variation of y with respect to x can be explained by the assumed curve and a good correlation is indicated between the variables x and y.

The probabilistic statement for the goodness-of-fit test is given by

$$P[r > r_{\text{calc}}] = \alpha = 1 - \gamma \qquad (32)$$

where $r_{\text{calc}}$ is calculated from Eq. (31) and the null and alternate hypotheses are as follows:

H0: No correlation of assumed regression equation with data.
H1: Correlation of regression equation with data.

The goodness of fit for a straight line is determined by comparing $r_{\text{calc}}$ with the value of r obtained at n − 2 degrees of freedom at a selected confidence level γ from tables.
If $r_{\text{calc}} > r$, the null hypothesis is rejected and a significant fit of the data within the confidence level specified is inferred. However, if $r_{\text{calc}} < r$, the null hypothesis cannot be rejected and no correlation of the curve fit with the data is inferred at the chosen confidence level.

Example 5 Goodness-of-Fit Test. The given x–y data were fitted to a curve y = a + bx by the method of least-squares linear regression. Determine the goodness of fit at a 5% significance level (95% confidence level):
Figure 2 Histogram of a data set and the corresponding frequency polygon.

x = [56, 58, 60, 70, 72, 75, 77, 77, 82, 87, 92, 104, 125]
y = [51, 60, 60, 52, 70, 65, 49, 60, 63, 61, 64, 84, 75]

At α = 0.05, the value of r is 0.55. Thus P[r_calc > 0.55] = 0.05. The least-squares regression equation is calculated to be y = 39.32 + 0.30x, and the correlation coefficient is calculated as r_calc = 0.61. Therefore, a satisfactory fit of the regression line to the data is inferred at the 5% significance level (95% confidence level).

3.8 Probability Density Functions

Consider the measurement of a quantity x. Let

x_i = ith measurement of quantity
x̄ = most representative value of measured quantity
d_i = deviation of ith measurement from x̄, = x_i − x̄
n = total number of measurements
Δx = smallest measurable change of x, also known as "least count"
m_j = number of measurements in x_j size group

A histogram and the corresponding frequency polygon obtained from the data are shown in Fig. 2, where the x_j size group is taken to be inclusive at the lower limit and exclusive at the upper limit. Thus, each x_j size group corresponds to the following range of values:

$$x_j - \tfrac{1}{2}\Delta x_j \le x < x_j + \tfrac{1}{2}\Delta x_j \qquad (33)$$
The height of any rectangle of the histogram is denoted as the relative number $m_j/n$ and is equal to the statistical probability $F_j(x)$ that a measured value will have the size $x_j \pm \Delta x_j/2$. The area of each rectangle can be made equal to the relative number by transforming the ordinate of the histogram in the following way: area = relative number = $F_j(x) = m_j/n = p_j(x)\,\Delta x$. Thus

$$p_j(x) = \frac{m_j}{n\,\Delta x} \qquad (34)$$

The shape of the histogram is preserved in this transformation, since the ordinate is merely changed by a constant scale factor (1/Δx). The resulting diagram is called the probability density diagram. The sum of the areas underneath all rectangles is then equal to 1 [i.e., $\sum p_j(x)\,\Delta x_j = 1$].

In the limit, as the number of data approaches infinity and the least count becomes very small, a smooth curve called the probability density function is obtained. For this smooth curve we note that

$$\int_{-\infty}^{+\infty} p(x)\,dx = 1 \qquad (35)$$

and that the probability of any measurement x having values between $x_a$ and $x_b$ is found from

$$P(x_a \le x \le x_b) = \int_{x_a}^{x_b} p(x)\,dx \qquad (36)$$

To integrate this expression, the exact probability density function p(x) is required. Based on the assumptions made, several forms of frequency distribution laws have been obtained. The distribution of a proposed set of measurements is usually unknown in advance. However, the Gaussian (or normal) distribution fits observed data distributions in a large number of cases. The Gaussian probability density function is given by the expression

$$p(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-(x-\bar{x})^2/2\sigma^2} \qquad (37)$$

where σ is the standard deviation.
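The histogram-to-density transformation of Eq. (34) can be sketched directly. The bin counts and least count below are hypothetical, chosen only to show that the rectangle areas recover the relative numbers and sum to 1:

```python
# Probability density diagram, Eq. (34): p_j = m_j / (n * dx).
dx = 0.5                                   # least count (bin width), assumed
bin_centers = [9.0, 9.5, 10.0, 10.5, 11.0]
m = [2, 7, 12, 6, 3]                       # measurements per size group (hypothetical)

n = sum(m)
p = [mj / (n * dx) for mj in m]            # density ordinates, Eq. (34)

# The areas p_j * dx recover the relative numbers m_j / n and sum to 1, cf. Eq. (35).
total_area = sum(pj * dx for pj in p)
print(total_area)                          # numerically 1
```

Because the ordinate is scaled by the constant 1/Δx, the diagram keeps the histogram's shape while normalizing its total area, which is what lets it pass to the continuous density of Eq. (35) in the limit.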
The standard deviation is a measure of dispersion and is defined by the relation

$$\sigma = \sqrt{\frac{\int_{-\infty}^{+\infty} x^2 p(x)\,dx}{\int_{-\infty}^{+\infty} p(x)\,dx}} = \sqrt{\frac{\sum (x_i - \mu)^2}{n}} \qquad (38)$$

3.9 Determination of Confidence Limits on μ

If a set of measurements is given by a random variable x, then the central limit theorem13 states that the distribution of means $\bar{x}$ of samples of size n is Gaussian (normal) with mean μ and variance $\sigma^2/n$, that is, $\bar{x} \sim G(\mu, \sigma^2/n)$. [Also, the random variable $z = (\bar{x} - \mu)/(\sigma/\sqrt{n})$ is Gaussian with a mean of zero and a variance of unity, that is, $z \sim G(0, 1)$.]

The random variable z is used to determine the confidence limits on μ due to random error of the measurements when σ is known. The confidence limit is determined from the following probabilistic statement and the Gaussian distribution for a desired confidence level γ:
    P[−z < (x̄ − μ)/(σ/√n) < z] = γ    (39)

It shows a γ probability or "confidence level" that the experimental value of z will lie between ±z obtained from the Gaussian distribution table. For a 95% confidence level, z = ±1.96 from the Gaussian table9 and P[−1.96 < z < 1.96] = γ, where z = (x̄ − μ)/(σ/√n). Therefore, the expression for the 95% confidence limits on μ is

    μ = x̄ ± 1.96 σ/√n    (40)

In general,

    μ = x̄ ± k σ/√n    (41)

where k is found from the Gaussian table for the specified value of confidence γ. If the population variance is not known and must be estimated from a sample, the statistic (x̄ − μ)/(σ̂/√n) is not distributed normally but follows the t-distribution. When n is very large, the t-distribution is the same as the Gaussian distribution. When n is finite, the value of k is the "t" value obtained from the t-distribution table.9 The probabilistic statement then becomes

    P[−t < (x̄ − μ)/(σ̂/√n) < +t] = γ    (42)

and the inequality yields the expression for the confidence limits on μ:

    μ = x̄ ± t σ̂/√n    (43)

If the effects of resolution and significant digits are included, the expression becomes, as previously indicated in Eq. (29),

    μ = x̄ ± (t σ̂/√n + R + 5/10^m)    (44)

3.10 Confidence Limits on Regression Lines

The least-squares method is used to fit a straight line to data that are either linear or transformed to a linear relation.9,14 The following method assumes that the uncertainty in the variable x is negligible compared to the uncertainty in the variable y and that the uncertainty in the variable y is independent of the magnitude of the variable x. Figure 3 and the definitions that follow are used to obtain confidence levels relative to regression lines fitted to experimental data:

a = intercept of regression line
b = slope of regression line = ΣXY/ΣX²
y_i = value of y from data at x = x_i
ŷ_i = value of y from regression line at x = x̂_i; ŷ_i = a + bx̂_i (note that this holds for a straight line)
Figure 3  Schematic of a regression line.

ȳ_i = mean estimated value of y_i at x = x_i; mean of distribution of y values at x = x_i; if there is only one measurement of y at x = x_i, then that value of y (i.e., y_i) is the best estimate of ȳ_i
ν = degrees of freedom in fitting regression line to data (ν = n − 2 for a straight line)
σ̂²_{y,x} = Σ(y_i − ŷ_i)²/ν = unexplained variance (for regression line), where σ̂_{y,x} is the standard deviation of estimate
σ̂²_{ȳ,x} = σ̂²_{y,x}/n, from the central limit theorem
σ̂²_b = σ̂²_{y,x}/ΣX² = estimate of variance on slope

Slope-Centroid Approximation

This method assumes that the placement uncertainty of the regression line is due to uncertainties in the centroid (x̄, ȳ) of the data and the slope b of the regression line passing through this centroid. These uncertainties are determined from the following relations and are shown in Fig. 4:

    Centroid:  μ_ȳ = μ̂_ȳ ± t(ν,γ) σ̂_ȳ = ȳ ± t σ̂_{y,x}/√n    (45)

    Slope:  μ_b = b̂ ± t(ν,γ) σ̂_b = b̂ ± t σ̂_{y,x}/√(ΣX²)    (46)

Point-by-Point Approximation

This is a better approximation than the slope-centroid technique. It gives confidence limits of the points ȳ_i, where
Figure 4  Schematic illustrating confidence limits on a regression line ŷ = â + b̂x, bounded by lines of slope b̂ − t σ̂_b and b̂ + t σ̂_b.

    μ_ȳi = ŷ_i ± t(ν,γ) σ̂_ȳi    (47)

and

    σ̂²_ȳi = σ̂²_{y,x} (1/n + X_i²/ΣX_i²)    (48)

At the centroid, where X_i = 0, Eq. (47) reduces to the result for μ_ȳ given in Eq. (45).

Line as a Whole

More uncertainty is involved when all points for the line are taken collectively. The "price" paid for the additional uncertainty is given by replacing t(ν,γ) in the confidence interval relation by √(2F), where F = F(2, ν, γ) is obtained from an F-table statistical distribution. Thus the uncertainty interval for placement of the "line as a whole" confidence limit at x = x_i is found from

    μ_Line = ŷ_i ± √(2F) σ̂_{y,x} √(1/n + X_i²/ΣX_i²)    (49)

Future Estimate of a Single Point at x = x_i

This gives the expected confidence limits on a future estimated value of y at x = x_i in relation to a prior regression estimate. This confidence limit can be found from the relation

    μ_yi = ŷ_i ± t(ν,γ) σ̂_yi    (50)

where ŷ_i is the best estimate of y from the regression line and

    σ̂²_yi = σ̂²_{y,x} (1 + 1/n + X_i²/ΣX_i²)    (51)

If one uses γ = 0.99, nearly all (99%) of the observed data points should be within the uncertainty limits calculated by Eq. (51).
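As a sketch (not from the handbook; the variable names and illustrative data are invented here), Eqs. (45)–(51) reduce to a least-squares fit plus a half-width that grows with the distance X_i of a point from the centroid. The t value must still be supplied from a table for the chosen ν and γ:

```python
import math

def fit_line(x, y):
    """Least-squares slope b = sum(XY)/sum(X^2) and intercept a = ybar - b*xbar."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = ybar - b * xbar
    # Unexplained variance about the line, nu = n - 2 degrees of freedom
    s2_yx = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y)) / (n - 2)
    return a, b, s2_yx, xbar, sxx

def point_halfwidth(t, s2_yx, n, Xi, sxx, future=False):
    """Half-width from Eq. (48), or Eq. (51) when future=True; Xi = x_i - xbar."""
    extra = 1.0 if future else 0.0
    return t * math.sqrt(s2_yx * (extra + 1.0 / n + Xi ** 2 / sxx))

x = [0, 10, 20, 30, 40]
y = [1.0, 3.0, 5.2, 6.8, 9.1]
a, b, s2_yx, xbar, sxx = fit_line(x, y)
t_90 = 2.353   # t(0.90, nu=3), an assumed table value for this illustration
# Eq. (48) band at the centroid (Xi = 0) vs. the wider Eq. (51) band for a future point
print(b, a, point_halfwidth(t_90, s2_yx, len(x), 0.0, sxx),
      point_halfwidth(t_90, s2_yx, len(x), 0.0, sxx, future=True))
```

Note the design of Eq. (51) showing through: the `future=True` band always exceeds the Eq. (48) band by the extra "1" inside the square root, reflecting the scatter of a single new observation about the line.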
Example 6  Confidence Limits on a Regression Line. Calibration data for a copper–constantan thermocouple are given:

    Temperature:  x = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100] °C
    Voltage:  y = [−0.89, −0.53, −0.15, 0.20, 0.61, 1.03, 1.45, 1.88, 2.31, 2.78, 3.22] mV

If the variation of y is expected to be linear over the range of x and the uncertainty in temperature is much less than the uncertainty in voltage, then:

1. Determine the linear relation between y and x.
2. Test the goodness of fit at the 95% confidence level.
3. Determine the 90% confidence limits of points on the line by the slope-centroid technique.
4. Determine the 90% confidence limits on the intercept of the regression line.
5. Determine the 90% confidence limits on a future estimated point of temperature at 120°C.
6. Determine the 90% confidence limits on the whole line.
7. How much data would be required to determine the centroid of the voltage values within 1% at the 90% confidence level?

The calculations are as follows:

    x̄ = 50.0    ȳ = 1.0827    y = a + bx
    Σx_i = 550    Σy_i = 11.91    Σx_i y_i = 1049.21
    Σ(x_i)² = 38,500.0    Σ(y_i)² = 31.6433
    Σ(x_i − x̄)² = ΣX_i² = 11,000.00    Σ(y_i − ȳ)² = ΣY_i² = 18.7420
    Σ(x_i − x̄)(y_i − ȳ) = ΣX_iY_i = 453.70
    Σ(y_i − ŷ_i)² = 0.0299

1. b = ΣXY/ΣX² = 453.7/11,000 = 0.0412
   a = ȳ − bx̄ = 1.0827 − (0.0412)(50.00) = 1.0827 − 2.0600 = −0.9773

2. r_exp = √[Σ(ŷ_i − ȳ)²/Σ(y_i − ȳ)²] = √[1 − Σ(y_i − ŷ_i)²/Σ(y_i − ȳ)²] = ΣXY/√(ΣX² ΣY²) = 0.998
   r_Table = r(α, ν) = r(0.05, 9) = 0.602
   P[r > r_Table] = α with H0 of no correlation; therefore reject H0 and infer significant correlation.

3. t = t(γ, ν) = t(0.90, 9) = 1.833 (see Ref. 6)
   σ̂²_{y,x} = Σ(y_i − ŷ_i)²/ν = 0.0299/9 = 0.00333
   σ̂_{y,x} = ±0.0577,  σ̂_{ȳ,x} = σ̂_{y,x}/√n = ±0.0174
   σ̂_b = σ̂_{y,x}/√(ΣX²) = ±0.00055
   μ_ȳ = μ̂_ȳ ± t σ̂_{ȳ,x} = ȳ ± t σ̂_{ȳ,x} = 1.0827 ± (1.833)(0.0174) = 1.0827 ± 0.0318
   μ_b = b̂ ± t σ̂_b = 0.0412 ± (1.833)(0.00055) = 0.0412 ± 0.00101

4. μ_ȳi = μ̂_ȳi ± t σ̂_ȳi = ȳ_i ± t σ̂_{y,x} √(1/n + X_i²/ΣX²)
        = −0.9773 ± (1.833)(0.0325) = −0.9773 ± 0.0596

5. μ_yi = μ̂_yi ± t σ̂_yi = ŷ_i ± t σ̂_{y,x} √(1 + 1/n + X_i²/ΣX²)
        = (−0.9773 + 4.9500) ± (1.833)(0.0715) = 3.9727 ± 0.1310

6. μ_Line = μ̂_ȳi ± √(2F(2, ν)) σ̂_ȳi; for any given point, compare with 4 above
          = ŷ_i ± √((2)(3.01)) σ̂_ȳi = ŷ_i ± (2.46) σ̂_ȳi
          = −0.9773 ± 0.0799 for the point of 4 above

7. μ_ȳ = μ̂_ȳ ± t σ̂_{ȳ,x} = ȳ ± t σ̂_{y,x}/√n,  with σ̂_{ȳ,x} = σ̂_{y,x}/√n and σ̂_{y,x} = ±0.0577
   μ_ȳ − ȳ = (1%)(μ_ȳ) ≈ (1%)(ȳ) = 0.010827
   t/√n = (1%)ȳ/σ̂_{y,x} = 0.010827/0.0577 = 0.188

From the t-table,9 with ν = n − 2 for a straight line (two constraints):

    ν      t(0.10, ν)    t/√n
    60     1.671         0.213
    90     1.663         0.174
    75     1.674         0.188

Therefore n = ν + 2 = 77 is the amount of data to obtain to assure the precision and confidence level desired.

3.11 Inference and Comparison

Events may be either deterministic or probabilistic. Under certain specified conditions some events will always happen. Some other events, however, may or may not happen. Under the same specified conditions, the latter depend on chance, and therefore the probability of occurrence of such events is of concern. For example, it is quite certain that a tossed unbiased die will fall down. However, it is not at all certain which face will appear on top
when the die comes to rest. The probabilistic nature of some events is apparent when questions of the following type are asked. Does medicine A cure a disease better than medicine B? What is the ultimate strength of 1020 steel? What total mileage will brand X tire yield? Which heat treatment process is better for a given part? Answering such questions involves designing experiments, performing measurements, analyzing the data, and interpreting the results. In this endeavor two common phenomena are observed: (1) repeated measurements of the same attribute differ due to measurement error and resolving capability of the measurement system, and (2) corresponding attributes of identical entities differ due to material differences, manufacturing tolerances, tool wear, and so on.

Conclusions based on experiments are statistical inferences and can only be made with some element of doubt. Experiments are performed to make statistical inferences with minimum doubt. Therefore, experiments are designed specifying the data required, the amount of data needed, and the confidence limits desired in drawing conclusions. In this process an instrumentation system is specified, a data-taking procedure is outlined, and a statistical method is used to make conclusions at preselected confidence levels.

In statistical analysis of experimental data, the descriptive and inference tasks are considered. The descriptive task is to present a comprehensible set of observations. The inference task determines the truth of the whole by examination of a sample. The inference task requires sampling, comparison, and a variety of statistical tests to obtain unbiased estimates and confidence limits to make decisions.

Statistical Testing

A statistical hypothesis is an assertion relative to the distribution of a random variable. The test of a statistical hypothesis is a procedure to accept or reject the hypothesis.
A hypothesis is stated such that the experiment attempts to nullify the hypothesis. Therefore, the hypothesis under test is called the null hypothesis and symbolized by H0. All alternatives to the null hypothesis are termed the alternate hypothesis and are symbolized by H1.15 If the results of the experiment cannot reject H0, the experiment cannot detect the differences in measurements at the chosen probability level.

Statistical testing determines if a process or item is better than another with some stated degree of confidence. The concept can be used with a certain statistical distribution to determine the confidence limits. The following procedure is used in statistical testing:

1. Define H0 and H1.
2. Choose the confidence level γ of the test.
3. Form an appropriate probabilistic statement.
4. Using the appropriate statistical distribution, perform the indicated calculation.
5. Make a decision concerning the hypothesis and/or determine confidence limits.

Two types of error are possible in statistical testing. A Type I error is that of rejecting a true null hypothesis (rejecting truth). A Type II error is that of accepting a false null hypothesis (embracing fiction). The confidence levels and sample size are chosen to minimize the probability of making a Type I or Type II error. Figure 5 illustrates the Type I (α) and Type II (β) errors, where

    H0 = sample with n = 1 comes from N(10,16)
    H1 = sample with n = 1 comes from N(17,16)
Figure 5  Probability of Type I and Type II errors.

α = probability of a Type I error [concluding data came from N(17,16) when data actually came from N(10,16)]
β = probability of a Type II error [accepting that data come from N(10,16) when data actually come from N(17,16)]

When α = 0.05 and n = 1, the critical value of x is 16.58 and β = 0.458. If the value of x obtained from the measurement is less than 16.58, accept H0. If x is larger than 16.58, reject H0 and infer H1. For β = 0.458 there is a large chance of a Type II error. To minimize this error, increase α or the sample size. For example, when α = 0.05 and n = 4, the critical value of x̄ is 13.29 and β = 0.032, as shown in Fig. 5. Thus, the chance of a Type II error is significantly decreased by increasing the sample size. Various statistical tests are summarized in the flowchart shown in Fig. 6.

Comparison of Variability

To test whether two samples are from the same population, their variability or dispersion characteristics are first compared using F-statistics.9,15 If x and y are random variables from two different samples, the parameters U = Σ(x_i − x̄)²/σ_x² and V = Σ(y_i − ȳ)²/σ_y² are also random variables and have chi-square distributions (see Fig. 7a) with ν1 = n1 − 1 and ν2 = n2 − 1 degrees of freedom, respectively.

The random variable W formed by the ratio (U/ν1)/(V/ν2) has an F-distribution with ν1 and ν2 degrees of freedom [i.e., W ∼ F(γ, ν1, ν2)]. The quotient W is symmetric; therefore, 1/W also has an F-distribution with ν2 and ν1 degrees of freedom [i.e., 1/W ∼ F(α, ν2, ν1)]. Figure 7b shows the F-distribution and its probabilistic statement:

    P[F_L < W < F_R] = γ    (52)

where

    W = (U/ν1)/(V/ν2) = σ̂1²/σ̂2²  with  H0: σ1² = σ2²  and  H1: σ1² ≠ σ2²    (53)

Here, W is calculated from the values of σ̂1² and σ̂2² obtained from the samples.
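A minimal sketch of the two-sided test in Eqs. (52) and (53) follows. The helper name is mine, and the limits F_L and F_R must be supplied from an F table for the chosen γ, ν1, ν2 (the numbers shown are the table values used in Example 7 below):

```python
def f_ratio_test(var1, var2, f_lower, f_upper):
    """Two-sided F test of H0: sigma1^2 = sigma2^2, per Eqs. (52)-(53).
    f_lower and f_upper are table values F_L and F_R for the chosen
    confidence level and degrees of freedom (nu1 = n1 - 1, nu2 = n2 - 1)."""
    w = var1 / var2          # the experimental F ratio, Eq. (53)
    return f_lower < w < f_upper, w

# Sample variances 5.0 and 2.5; table limits 0.398 and 2.72 at gamma = 0.90
cannot_reject, w = f_ratio_test(5.0, 2.5, 0.398, 2.72)
print(cannot_reject, w)   # True 2.0 -> H0 cannot be rejected
```

Because W falls inside the table interval, the variances are judged homogeneous; a ratio outside the interval would reject H0 at that confidence level.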
Test the hypothesis σ1² = σ2² at the 90% level of confidence when samples of n1 = 16 and n2 = 12 yielded σ̂1² = 5.0 and σ̂2² = 2.5. Here H0 is σ1² = σ2² and H1 is σ1² ≠ σ2².
Figure 7  (a) Chi-square distribution and (b) F-distribution. Also shown are probabilistic statements of the distributions, such as that shown in Eq. (52).

    P[F_L(ν1, ν2) < σ̂1²/σ̂2² < F_R(ν1, ν2)] = γ
    P[F_0.95(15,11) < σ̂1²/σ̂2² < F_0.05(15,11)] = 0.90
    F_0.95(15,11) = 1/F_0.05(11,15) = 1/2.51 = 0.398
    P[0.398 < σ̂1²/σ̂2² < 2.72] = 0.90

Since there is a probability of 90% of σ̂1²/σ̂2² ranging between 0.398 and 2.72, and the actual value is σ̂1²/σ̂2² = 2.0, H0 cannot be rejected at the 90% confidence level.

Comparison of Means

Industrial experimentation often compares two treatments of a set of parts to determine if a part characteristic such as strength, hardness, or lifetime has been improved. If it is assumed that the treatment does not change the variability of the items tested (H0), then the t-distribution determines if the treatment had a significant effect on the part characteristics (H1). The t-statistic is

    t = (d̄ − μ_d)/σ_d̄    (54)

where d̄ = x̄1 − x̄2 and μ_d = μ1 − μ2. From the propagation of variance, σ_d̄² becomes

    σ_d̄² = σ_x̄1² + σ_x̄2² = σ1²/n1 + σ2²/n2    (55)
where σ1² and σ2² are each estimates of the population variance σ². A better estimate of σ² is the combined variance σ_c², and it replaces both σ1² and σ2² in Eq. (55). The combined variance is determined by weighting the individual estimates of variance based on their degrees of freedom according to the relation

    σ_c² = (σ̂1² ν1 + σ̂2² ν2)/(ν1 + ν2)    (56)

Then

    σ_d̄² = σ_c²/n1 + σ_c²/n2 = σ_c² (1/n1 + 1/n2)    (57)

Under the hypothesis H0 that μ1 = μ2 (no effect due to treatment), the resulting probabilistic statement is

    P[−t σ_c √(1/n1 + 1/n2) < x̄1 − x̄2 < t σ_c √(1/n1 + 1/n2)] = γ    (58)

If the variances of the items being compared are not equal (homogeneous), a modified t- (or d-) statistic is used,9,15 where d depends on the confidence level γ, the degrees of freedom ν, and a parameter θ that depends on the ratio of standard deviations according to

    tan θ = (σ̂1/√n1)/(σ̂2/√n2)    (59)

The procedure for using the d-statistic is the same as described for the t-statistic.

Example 8  Testing for Homogeneous Means. A part manufacturer has the following data:

    Sample Number    Number of Parts    Mean Lifetime (h)    Variance (h²)
    1                15                 2530                 490
    2                11                 2850                 360

Determine if the difference in the lifetimes of the parts is due to chance at a 10% significance level (90% confidence level).

Check variance first (H0: homogeneous variances; H1: nonhomogeneous variances):

    σ̂1² = 490(15/14) = 525
    σ̂2² = 360(11/10) = 396
    σ_c² = [(14)(525) + (10)(396)]/24 = 471
    P[F_L < F_exp < F_R] = γ    F_exp = σ̂1²/σ̂2² = 525/396 = 1.33
    F_R = F_0.05(ν1, ν2) = F_0.05(14,10) = 2.8
    F_L = F_0.95(ν1, ν2) = 1/F_0.05(10,14) = 1/2.60 = 0.385

Therefore, accept H0 (variances are homogeneous).

Check means next (H0 is μ1 = μ2 and H1 is μ1 ≠ μ2):

    P[−t < t_calc < t] = γ
    t = t(ν1 + ν2, γ) = t(24, 0.90) = 1.711

Therefore, P[−1.71 < t_exp < 1.71] = 0.90:

    t_exp = |x̄1 − x̄2|/√(σ1²/n1 + σ2²/n2) = |x̄1 − x̄2|/√(σ_c²(1/n1 + 1/n2))
          = |2530 − 2850|/√((471)(0.1576))
          = 320/√74.2 = 320/8.61 = 37.2

Therefore, reject H0, accept H1 of μ1 ≠ μ2, and infer that the differences in the samples are due not to chance alone but to some real cause.

Comparing Distributions

A chi-square distribution is also used for testing whether or not an observed phenomenon fits an expected or theoretical behavior.15 The chi-square statistic is defined as

    Σ [(O_j − E_j)²/E_j]

where O_j is the observed frequency of occurrence in the jth class interval and E_j is the expected frequency of occurrence in the jth class interval. The expected class frequency is based on an assumed distribution or a hypothesis H0. This statistical test is used to compare the performance of machines or other items. For example, lifetimes of certain manufactured parts, locations of hole centerlines on rectangular plates, and locations of misses from targets in artillery and bombing missions follow chi-square distributions. The probabilistic statements depend on whether a one-sided test or a two-sided test is being performed.16 The typical probabilistic statements are

    P[χ²_exp > χ²(α, ν)] = α    (60)

for the one-sided test and

    P[χ²_L < χ²_exp < χ²_R] = γ    (61)

for the two-sided test, where
    χ²_exp = Σ (O_j − E_j)²/E_j    (62)

When using Eq. (62), at least five items per class interval are normally used, and a continuity correction must be made if fewer than three class intervals are used.

Example 9  Use of the Chi-Square Distribution. In a test laboratory the following record shows the number of failures of a certain type of manufactured part.

    Number of Failures    Number of Parts Observed    Expected^a
    0                     364                         325
    1                     376                         396
    2                     218                         243
    3                     89                          97
    4                     33                          30
    5                     13                          7
    6                     4                           1
    7                     3                           0
    8                     2                           0
    9                     1                           0
                          Σ = 1103

^a From an assumed statistical model that the failures are random.

At a 95% confidence level, determine whether the failures are attributable to chance alone or to some real cause. For H0, assume failures are random and not related to a cause, so that the expected distribution of failures follows the Poisson distribution. For H1, assume failures are not random and are cause related. The Poisson probability is P_r = e^{−μ} μ^r/r!, where μ is the mean number of failures per part estimated from the observed frequencies:

    μ = (0)(364/1103) + (1)(376/1103) + (2)(218/1103) + (3)(89/1103) + (4)(33/1103) + (5)(13/1103)
        + 24/1103 + 21/1103 + 16/1103 + 9/1103
      = 1346/1103 = 1.22

Then P_0 = 0.295 and E_0 = P_0 n = 325. Similarly P_1 = 0.360, P_2 = 0.220, P_3 = 0.088, . . . and, correspondingly, E_1 = 396, E_2 = 243, E_3 = 97, . . . are tabulated above for the expected number of parts to have the number of failures listed.

Goodness-of-fit test — H0 represents random failures and H1 represents a real cause:

    P[χ²_exp > χ²_Table] = α = 0.05
    χ²_Table = χ²(ν, α) = χ²(5, 0.05) = 11.070
    χ²_exp = Σ (O − E)²/E
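The Poisson fit and pooled chi-square statistic of this example can be reproduced with a short script. This is a sketch only: because it keeps full precision in the expected counts rather than the rounded table values, its chi-square differs somewhat from the hand computation that follows, but the conclusion is the same.

```python
import math

def poisson_pmf(r, mu):
    """Poisson probability P_r = exp(-mu) * mu**r / r!"""
    return math.exp(-mu) * mu ** r / math.factorial(r)

observed = [364, 376, 218, 89, 33, 13, 4, 3, 2, 1]    # parts with 0..9 failures
n = sum(observed)                                      # 1103
mu = sum(r * o for r, o in enumerate(observed)) / n    # 1346/1103, about 1.22
expected = [n * poisson_pmf(r, mu) for r in range(len(observed))]

# Pool the sparse tail (5 or more failures) as in the text's table
obs_pooled = observed[:5] + [sum(observed[5:])]
exp_pooled = expected[:5] + [sum(expected[5:])]
chi2_exp = sum((o - e) ** 2 / e for o, e in zip(obs_pooled, exp_pooled))
print(round(mu, 2), chi2_exp > 11.070)   # 1.22 True -> reject H0
```

The pooling step enforces the rule of thumb stated above: class intervals with fewer than about five expected items are merged before Eq. (62) is applied.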
    O      E      O − E    (O − E)²    (O − E)²/E
    364    325    39       1521        4.680
    376    396    −20      400         1.010
    218    243    −25      625         2.570
    89     97     −8       64          0.660
    33     30     3        9           0.300
    23     8      15       225         28.100
                                       χ²_exp = 37.32

Since there is only a 5% chance of obtaining a calculated chi-square statistic as large as 11.07 under the hypothesis chosen, and χ²_exp = 37.32 far exceeds that value, the null hypothesis is rejected and it is inferred that some real (rather than random) cause exists for the failures, with a 95% probability or confidence level.

REFERENCES

1. E. J. Minnar (ed.), ISA Transducer Compendium, Instrument Society of America/Plenum, New York, 1963.
2. "Electrical Transducer Nomenclature and Terminology," ANSI Standard MC 6.1-1975 (ISA S37.1), Instrument Society of America, Research Triangle Park, NC, 1975.
3. Anonymous (ed.), Standards and Practices for Instrumentation, Instrument Society of America, Research Triangle Park, NC, 1988.
4. E. O. Doebelin, Measurement Systems: Application and Design, 4th ed., McGraw-Hill, New York, 1990.
5. J. G. Webster, The Measurement, Instrumentation, and Sensors Handbook, CRC Press in cooperation with IEEE, Boca Raton, FL, 1999.
6. R. S. Figliola and D. E. Beasley, Theory and Design for Mechanical Measurements, 3rd ed., Wiley, New York, 2000.
7. C. L. Nachtigal (ed.), Instrumentation and Control: Fundamentals and Applications, Wiley, New York, 1990.
8. I. B. Gertsbakh, Measurement Theory for Engineers, Springer, New York, 2003.
9. J. B. Kennedy and A. M. Neville, Basic Statistical Methods for Engineers and Scientists, 3rd ed., Harper & Row, New York, 1986.
10. A. G. Worthing and J. Geffner, Treatment of Experimental Data, Wiley, New York, 1943.
11. C. R. Mischke, Mathematical Model Building (an Introduction to Engineering), 2nd ed., Iowa State University Press, Ames, IA, 1980.
12. C. Lipson and N. J. Sheth, Statistical Design and Analysis of Engineering Experiments, McGraw-Hill, New York, 1973.
13. B. Ostle and R. W.
Mensing, Statistics in Research, 3rd ed., Iowa State University Press, Ames, IA, 1975.
14. D. C. Montgomery and G. C. Runger, Applied Statistics and Probability for Engineers, Wiley, New York, 2002.
15. S. B. Vardeman, Statistics for Engineering Problem Solving, PWS Publishing, Boston, 1994.
16. R. E. Walpole, S. L. Myers, R. H. Myers, and K. Ye, Probability and Statistics for Engineers and Scientists, 7th ed., Prentice-Hall, Englewood Cliffs, NJ, 2002.
CHAPTER 2  INPUT AND OUTPUT CHARACTERISTICS

Adam C. Bell
Dartmouth, Nova Scotia

1 INTRODUCTION 32
2 FAMILIAR EXAMPLES OF INPUT–OUTPUT INTERACTIONS 34
  2.1 Power Exchange 34
  2.2 Energy Exchange 35
  2.3 A Human Example 36
3 ENERGY, POWER, IMPEDANCE 37
  3.1 Definitions and Analogies 37
  3.2 Impedance and Admittance 38
  3.3 Combining Impedances and/or Admittances 39
  3.4 Computing Impedance or Admittance at an Input or Output 40
  3.5 Transforming or Gyrating Impedances 41
  3.6 Source Equivalents: Thévenin and Norton 44
4 OPERATING POINT OF STATIC SYSTEMS 45
  4.1 Exchange of Real Power 45
  4.2 Operating Points in an Exchange of Power or Energy 46
  4.3 Input and Output Impedance at the Operating Point 48
  4.4 Operating Point and Load for Maximum Transfer of Power 48
  4.5 An Unstable Energy Exchange: Tension-Testing Machine 50
  4.6 Fatigue in Bolted Assemblies 52
  4.7 Operating Point for Nonlinear Characteristics 52
  4.8 Graphical Determination of Output Impedance for Nonlinear Systems 54
5 TRANSFORMING THE OPERATING POINT 57
  5.1 Transducer-Matched Impedances 57
  5.2 Impedance Requirements for Mixed Systems 58
6 MEASUREMENT SYSTEMS 60
  6.1 Interaction in Instrument Systems 61
  6.2 Dynamic Interactions in Instrument Systems 63
  6.3 Null Instruments 65
7 DISTRIBUTED SYSTEMS IN BRIEF 66
  7.1 Impedance of a Distributed System 67
8 CONCLUDING REMARKS 67
REFERENCES 68

Reprinted from Instrumentation and Control, Wiley, New York, 1990, by permission of the publisher. Mechanical Engineers' Handbook: Instrumentation, Systems, Controls, and MEMS, Volume 2, Third Edition. Edited by Myer Kutz. Copyright © 2006 by John Wiley & Sons, Inc.

1 INTRODUCTION

Everyone is familiar with the interaction of devices connected to form a system, although they may not think of their observations in those terms. Familiar examples include the following:
1. Dimming of the headlights while starting a car
2. Slowdown of an electric mixer lowered into heavy batter
3. Freezing a showerer by starting the dishwasher
4. Speedup of a vacuum cleaner when the hose plugs
5. Two-minute wait for a fever thermometer to rise
6. Special connectors required for TV antennas
7. Speedup of a fan in the window with the wind against it
8. Shifting of an automatic transmission on a hill

These effects happen because one part of a system loads another. Most mechanical engineers would guess that weighing an automobile by placing a bathroom-type scale under its wheels one at a time and summing the four measurements will yield a higher result than would be obtained if the scale were flush with the floor. Most electrical engineers understand that loading a potentiometer's wiper with too low a resistance makes its dial nonlinear for voltage division. Instrumentation engineers know that a heavy accelerometer mounted on a thin panel will not measure the true natural frequencies of the panel. Audiophiles are aware that loudspeaker impedances must be matched to amplifier impedance. We have all seen the 75- and 300-Ω markings under the antenna connections on TV sets, and most cable subscribers have seen balun transformers for connecting a coaxial cable to the flat-lead terminals of an older TV.

Every one of these examples involves a desired or undesirable interaction between a source and a receiver of energy. In every case, there are properties of the source part and the load part of the system that determine the efficiency of the interaction. This chapter deals exclusively with interactions between static and dynamic subsystems intended to function together in a task and with how best to configure and characterize those subsystems.

Consider the analysis of dynamic systems. To create mathematical models of these systems requires that we idealize our view of the physical world.
First, the system must be identified and separated from its environment. The environment of a system is the universe outside the free body, control volume, or isolated circuit. This combination (the system under study together with its external sources) provides or removes energy from the system in a known way. Next, in the system itself, we must arrange a restricted set of ideal elements connected in a way that will correctly represent the energy storages and dissipations of the physical system while, at the same time, providing the mathematical handles needed to explore the system's behavior in its environment. The external environment of the system being modeled must then itself be modeled and connected, and it is usually represented by special ideal elements called sources.

We expect, as a result of these sources, that the system under study will not alter the important variables in its environment. The water rushing from a kitchen faucet will not normally alter the atmospheric pressure; our electric circuit will not measurably slow the turbines in the local power plant; the penstock will not draw down the level of the reservoir (in a time frame consistent with a study of penstock dynamics, anyway); the cantilever beam will not distort the wall it is built into; and so on. In this last instance, for example, the wall is a special source of zero displacement and zero rotation no matter what forces and moments are applied.

In this chapter, we consider, instead of the behavior of a single system in a known environment, the interaction between pairs of connected dynamic systems at their interface, often called the driving point. The fundamental currency is, as always, the energy or power exchanged through the interface. In an instrumentation or control system, the objective of
the energy exchange might be information transmission, but this is not considered here (we would like information exchanges to take place at the lowest possible energy cost, but the second law of thermodynamics rules out a free transmission). As always, energy factors into two variables, such as voltage and current in electrical systems, and we are concerned with the behavior of these in the energetic interaction. The major difference in this perspective is that the system supplying energy cannot do so at a fixed value. Neither the source nor the system receiving energy can fix its values for a changing demand without a change in the value of a supply variable. The two subsystems are in equilibrium with each other and are forced by their connection to have the same value of both of the appropriate energy variables. We concern ourselves with determining and controlling the value of these energy variables at the interface where, obviously, only one is determined by each of the interacting systems.

2 FAMILIAR EXAMPLES* OF INPUT–OUTPUT INTERACTIONS

2.1 Power Exchange

In the real world, pure sources and sinks are difficult to find. They are idealized, convenient constructs or approximations that give our system analyses independent forcing functions. We commonly think of an automobile storage battery as a source of 12.6 V independent of the needed current, and yet we have all observed dimming headlights while starting an engine. Clearly, the voltage of this battery is a function of the current demanded by its load. Similarly, we cannot charge the battery unless our alternator provides more than 12.6 V, and the charging rate depends on the overvoltage supplied. Thus, when the current demanded of or supplied to a battery approaches its limits, we must consider that the battery really looks like an ideal 12.6-V source in series with a small resistance.

*Many of the examples in this chapter are drawn from Chapter 6 of a manuscript of unpublished notes, "Dynamic Systems and Measurements," by C. L. Nachtigal, used in the School of Mechanical Engineering, Purdue University, 1978.
The voltage at the battery terminals is a function of the current demanded and is not independent of the system loading or charging it in the interaction. This small internal resistance is termed the output impedance (or input impedance or driving-point impedance) of the battery.

If we measure the voltage on this battery with a voltmeter, we should draw so little current that the voltage we see is truly the source voltage without any loss in the internal resistance. The power delivered from the battery to the voltmeter is negligible (but not zero) because the current is so small. Alternatively, if we do a short-circuit test of the battery, its terminal voltage should fall to zero while we measure the very large current that results. Again, the power delivered to the ammeter is negligible because, although the current is very large, the voltage is vanishingly small.

At these two extremes the power delivered is essentially zero, so clearly at some intermediate load the power delivered will be a maximum. We will show later that this occurs when the load resistance is equal to the internal resistance of the battery (a point at which batteries are usually not designed to operate). The discussion above illustrates a simple concept: Impedances should be matched to maximize power or energy transfer but should be maximally mismatched for making a measurement without loading the system in which the measurement is to be made. We will return to the details of this statement later.
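This loading behavior is easy to check numerically. The sketch below uses illustrative values only: the 0.05-Ω internal resistance is an assumption for the example, not a datum from the text.

```python
def terminal_voltage(emf, r_internal, r_load):
    """Terminal voltage of a source modeled as an ideal EMF in series
    with a small internal resistance, under a resistive load."""
    current = emf / (r_internal + r_load)
    return emf - current * r_internal

def load_power(emf, r_internal, r_load):
    """Power dissipated in the load resistance."""
    current = emf / (r_internal + r_load)
    return current ** 2 * r_load

EMF, R_INT = 12.6, 0.05                      # assumed internal resistance, ohms
loads = [0.005 * k for k in range(1, 101)]   # sweep 0.005 .. 0.5 ohm
best = max(loads, key=lambda r: load_power(EMF, R_INT, r))
print(best)   # 0.05 -> maximum power transfer at r_load = r_internal
# A good voltmeter draws almost nothing, so it sees nearly the full EMF:
print(terminal_voltage(EMF, R_INT, 1.0e6))
```

The sweep lands on r_load = r_internal, the matched-impedance point claimed above, while the high-resistance "voltmeter" case recovers essentially the open-circuit source voltage.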
2.2 Energy Exchange

Interactions between systems are not restricted to resistive behavior, nor is the concept of impedance matching restricted to real, as opposed to reactive, impedances. Consider a pair of billiard balls on a frictionless table (to avoid the complexities of spin), and consider that their impact is governed by a coefficient of restitution ε. Before impact, only one ball is moving, but afterward both may be. The initial and final energies are as follows:

    Initial energy = ½ M1 v²_1i
    Final energy = ½ M1 v²_1f + ½ M2 v²_2f    (1)

where the subscript 1 refers to the striker and 2 to the struck ball, M is mass, v is velocity, and the subscripts i and f refer to initial and final conditions, respectively. Since no external forces act on this system of two balls during their interaction, the total momentum of the system is conserved:

    M1 v_1i + M2 v_2i = M1 v_1f + M2 v_2f

or

    v_1i + m v_2i = v_1f + m v_2f    (2)

where m = M2/M1. The second equation, required to solve for the final velocities, derives from impulse and momentum considerations for the balls considered one at a time. Since no external forces act on either ball during their interaction except those exerted by the other ball, the impulses,* or integrals of the force acting over the time of interaction, on the two are equal. (See impact in virtually any dynamics text.) From this it can be shown that the initial and final velocities must be related:

    ε(v_1i − v_2i) = v_2f − v_1f    (3)

where v_2i = 0 in this case and the coefficient of restitution ε is a number between 0 and 1. A 0 corresponds to a plastic impact, while a 1 corresponds to a perfectly elastic impact.
Equations (2) and (3) can be solved for the final velocities of the two balls:

v_1f = ((1 − mε)/(1 + m)) v_1i   and   v_2f = ((1 + ε)/(1 + m)) v_1i   (4)

Now assume that one ball strikes the other squarely† and that the coefficient of restitution ε is unity (perfectly elastic impact). Consider three cases:

1. The two balls have equal mass, so m = 1, and ε = 1. Then the striking ball, M_1, will stop, and the struck ball, M_2, will move away from the impact with exactly the initial velocity of the striking ball. All the initial energy is transferred.

2. The struck ball is more massive than the striking ball, m > 1, ε = 1. Then the striker will rebound along its initial path, and the struck ball will move away with less than the initial velocity of the striker. The initial energy is shared between the balls.

*Impulse = ∫_0^t Force dt, where Force is the vector sum of all the forces acting over the period of interaction, t.
†Referred to in dynamics as direct central impact.
3. The striker is the more massive of the two, m < 1, ε = 1. Then the striker, M_1, will follow at reduced velocity behind the struck ball after their impact, and the struck ball will move away faster than the initial velocity of the striker (because it has less mass). Again, the initial energy is shared between the balls.

Thus, the initial energy is conserved in all of these transactions. But the energy can be transferred completely from one ball to the other if and only if the two balls have the same mass.

If these balls were made of clay so that the impact was perfectly plastic (no rebound whatsoever), then ε = 0, so the striker and struck balls would move off together at the same velocity after impact no matter what the masses of the two balls. They would be effectively stuck together. The final momentum of the pair would equal the initial momentum of the striker because, on a frictionless surface, there are no external forces acting, but energy could not be conserved because of the losses in plastic deformation during the impact. The final velocities for the same three cases are

v_f = (1/(1 + m)) v_i   (5)

Since the task at hand, however, is to transfer kinetic energy (KE) from the first ball to the second, we are interested in maximizing the energy in the second ball after impact with respect to the energy in the first ball before impact:

KE_(M2, after) / KE_(M1, before) = ((1/2) M_2 v_2f^2) / ((1/2) M_1 v_1i^2) = (M_2 (1/(1 + m))^2 v_1i^2) / (M_1 v_1i^2) = m/(1 + m)^2   (6)

This takes on a maximum value of 1/4 when m = 1 and falls off rapidly as m departs from 1. Thus, after the impact of two clay balls of equal mass, one-fourth of the initial energy remains in the striker, one-fourth is transferred to the struck ball, and one-half of the initial energy of the striker is lost in the impact.
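The elastic cases of Eq. (4) and the plastic energy-transfer ratio of Eq. (6) are easy to check numerically. The sketch below is an illustration (function names are my own, not from the handbook):

```python
# Final velocities after direct central impact, Eq. (4):
#   v1f = (1 - m*eps)/(1 + m) * v1i,   v2f = (1 + eps)/(1 + m) * v1i
# where m = M2/M1 and eps is the coefficient of restitution.
def final_velocities(v1i, m, eps):
    v1f = (1.0 - m * eps) / (1.0 + m) * v1i
    v2f = (1.0 + eps) / (1.0 + m) * v1i
    return v1f, v2f

# Elastic impact (eps = 1), equal masses: the striker stops and the
# struck ball carries away all of the striker's initial velocity.
v1f, v2f = final_velocities(1.0, 1.0, 1.0)
print(v1f, v2f)  # 0.0 1.0

# Plastic impact (eps = 0): fraction of the initial KE transferred to the
# struck ball, Eq. (6): m / (1 + m)^2.
def transferred_fraction(m):
    return m / (1.0 + m) ** 2

print(transferred_fraction(1.0))  # 0.25 -- the maximum over all mass ratios
```

Evaluating `transferred_fraction` for m above and below 1 shows the ratio falling off on both sides of the equal-mass maximum, as the text states.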
If the struck ball is either larger or smaller than the striker, however, then a greater fraction of the initial energy is dissipated in the impact and a smaller fraction is transferred to the second ball. The reader should reflect on how this influences the severity of automobile accidents between vehicles of different sizes.

2.3 A Human Example

Those in good health can try the following experiment. Run up a long flight of stairs one step at a time and record the elapsed time. After a rest, try again, but run the stairs two at a time. Still later, try a third time, but run three steps at a time. Most runners will find that their best time is recorded for two steps at a time. In the first test, the runner's legs are velocity limited: Too much work is expended simply moving legs and feet, and the forces required are too low to use the full power of the legs effectively. In the third test, although the runner's legs do not have to move very quickly, they are on the upper edge of their force capabilities for continued three-step jumps; the forces required are too high, and the runner could, at lower forces, move his or her legs much faster. In the intermediate case there is a match between the task and the force–velocity characteristics of the runner's legs. Bicycle riders ensure this match with a variable-speed transmission that they adjust so they can crank the pedals at approximately 60 RPM. We will later look at other means of ensuring the match between source capabilities and load requirements when neither of them
is changeable, but the answer is always a transformer or gyrator of some type (a gear ratio in this case).

3 ENERGY, POWER, IMPEDANCE

3.1 Definitions and Analogies

Energy is the fundamental currency in the interactions between elements of a physical system no matter how the elements are defined. In engineering systems, it is convenient to describe these transactions in terms of a complementary pair of variables whose product is the power or flow rate of the energy in the transaction. These product pairs are familiar to all engineers: voltage × current = power, force × displacement = energy, torque × angular velocity = power, pressure × flow = power, and pressure × time rate of change of volume exchanged = power. Some are less familiar: flux linkage × current = energy, charge × voltage = energy, and absolute temperature × entropy flux = thermal power. Henry M. Paynter's1 tetrahedron of state shows how these are related (Fig. 1).

Figure 1 H. M. Paynter's tetrahedron of state.

Typically, one of these factors is extensive, a flux or flow, such as current, velocity, volume flow rate, or angular velocity. The other is intensive, a potential or effort,* such as voltage, force, pressure, or torque. Thus P = extensive × intensive for any of these domains of physical activity. This factoring is quite independent of the analogies between the factors of power in different domains, for which any arbitrary selection is acceptable. In essence, velocity is not like voltage or force like current, just as velocity is not like current or force like voltage. It is convenient, however, before defining impedance and working with it, to choose an analogy so that generalizations can be made across the domains of engineering activity. There are two standard ways to do this: the Firestone analogy2 and the mobility analogy.

*This is Paynter's terminology, used with reference to his "Bond Graphs."
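The claim that each effort–flow product yields power can be made concrete with a few sample values. The numbers below are made up for illustration; only the pairing of variables comes from the text:

```python
# Power as the product of a complementary (effort, flow) pair in each domain.
# All values are illustrative SI quantities; every product comes out in watts.
pairs = {
    "electrical":  (12.0, 2.0),     # voltage (V)   x current (A)
    "translation": (50.0, 0.4),     # force (N)     x velocity (m/s)
    "rotation":    (10.0, 6.28),    # torque (N*m)  x angular velocity (rad/s)
    "fluid":       (2.0e5, 1.0e-4), # pressure (Pa) x flow rate (m^3/s)
}

for domain, (effort, flow) in pairs.items():
    print(domain, effort * flow)
```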
Electrical engineers are most familiar with the Firestone analogy, while mechanical engineers are probably more comfortable with the mobility analogy. The results derived in this chapter are independent of the analogy chosen. To avoid confusion, both will be introduced, but only the mobility analogy will be used in this chapter.

The Firestone analogy gives circuitlike properties to mechanical systems: All systems consist of nodes like a circuit and only of lumped elements considered to be two-terminal or four-terminal devices. For masses and tanks of liquid, one of the terminals must be understood to be ground, the inertial reference frame, or atmosphere. Then one of the energy
variables is measured across the terminals of the element and the other passes through the element. In a circuit, voltage is across and current passes through. For a spring, however, the velocity difference is across and the force passes through. Thus this analogy links voltage to velocity, angular velocity, and pressure as across variables and links current to force, torque, and flow rate as through variables. Clearly, across × through = power.

The mobility analogy, in contrast, considers the complementary power variables to consist of a potential and a flux, an intensive and an extensive variable. The potentials, or efforts, are voltage, force, torque, and pressure, while the fluxes, or flows, are current, velocity, angular velocity, and fluid flow rate.

Table 1 Impedances of Lumped Linear Elements

Domain        | Kinetic Storage | Dissipation         | Potential Storage
Translational | Mass: Ms        | Damping: b          | Spring: k/s
Rotational    | Inertia: Js     | Damping: B          | Torsion spring: k_f/s
Electrical    | Inductance: Ls  | Resistance: R       | Capacitance: 1/Cs
Fluid         | Inertance: Is   | Fluid resistance: R | Fluid capacitance: 1/Cs

3.2 Impedance and Admittance

Impedance, in the most general sense, is the relationship between the factors of power. Because only the constitutive relationships for the dissipative elements are expressed directly in terms of the power variables (ΔV_R = R i_R, for example), while the equations for the energy storage elements are expressed in terms of the derivative of one of the power variables* with respect to the other (i_C = C dV_C/dt, for example), these are most conveniently expressed in Laplace transform terms. Impedances are really self-transfer functions at a point in a system.
Since the concept was probably defined first for electrical systems, that definition is most standardized: Electrical impedance Z_electrical is defined as the rate of change of voltage with current:

Z_electrical = d(voltage)/d(current) = d(effort)/d(flow)   (7)

By analogy, impedance can be similarly defined for the other engineering domains:

Z_translation = d(force)/d(velocity)   (8)

Z_rotation = d(torque)/d(angular velocity)   (9)

Z_fluid = d(pressure)/d(flow rate)   (10)

Table 1 is an impedance table using these definitions of the fundamental lumped linear elements. Note that these are derived from the Laplace transforms of the constitutive equations for these elements; they are the transfer functions of the elements and are expressed in terms of the Laplace operator s. The familiar F = Ma, for example, becomes, in power-variable terms, F = M(dv/dt); it transforms as F(s) = Ms v(s), so

(Z_translation)_mass = dF_mass/dv_mass = Ms   (11)

Because these involve the Laplace operator s, they can be manipulated algebraically to derive combined impedances. The reciprocal of the impedance, the admittance, is also useful. Formally, admittance is defined as

Admittance: Y = 1/Z = d(flow)/d(effort)   (12)

*See Fig. 1 again. Capacitance is a relationship between the integral of the flow and the effort, which is the same as saying that capacitance relates the flow to the derivative of the effort.

3.3 Combining Impedances and/or Admittances

Elements in series are those for which the flow variable is common to both elements and the efforts sum. Elements in parallel are those for which the effort variable is common to both elements and the flows sum. By analogy to electrical resistors, we can deduce that the impedances sum for series elements and the admittances sum for parallel elements to form the combined impedance or admittance of the elements:

Impedances in series (common flow): Z_1 + Z_2 = Z_total, or 1/Y_1 + 1/Y_2 = 1/Y_total   (13)

Impedances in parallel (common effort): Y_1 + Y_2 = Y_total, or 1/Z_1 + 1/Z_2 = 1/Z_total   (14)

When applying these relationships to electrical or fluid elements, there is rarely any confusion about what constitutes series and parallel. In the mobility analogy, however, a pair of springs connected end to end are in parallel because they experience a common force, regardless of the topological appearance, whereas springs connected side by side are in series because they experience a common velocity difference.* For a pair of springs end to end, the total admittance is

s/k_total = s/k_1 + s/k_2

so the impedance is

k_total/s = k_1 k_2 / (s(k_1 + k_2))

For the same springs side by side, the total impedance is

k_total/s = k_1/s + k_2/s = (k_1 + k_2)/s

*For many, the appeal of the Firestone analogy is that springs are equivalent to inductors, and there can be no ambiguity about series and parallel connections. End to end is series.
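The spring-combination results of Eqs. (13) and (14) can be verified with a short sketch. This is illustrative only (the stiffness values are assumed); exact rational arithmetic is used so the symbolic results are reproduced:

```python
from fractions import Fraction

# In the mobility analogy a spring has impedance Z = k/s and admittance Y = s/k.
# The factor of s cancels in the combinations, so we work with stiffness alone:
# springs end to end share a common force, so admittances add (Eq. 14);
# springs side by side share a common velocity, so impedances add (Eq. 13).
def end_to_end(k1, k2):
    # Y_total = s/k1 + s/k2  ->  k_total = k1*k2 / (k1 + k2)
    return Fraction(k1 * k2, k1 + k2)

def side_by_side(k1, k2):
    # Z_total = (k1 + k2)/s  ->  k_total = k1 + k2
    return k1 + k2

print(end_to_end(6, 3))    # 2 -- softer than either spring alone
print(side_by_side(6, 3))  # 9 -- stiffer than either spring alone
```

The end-to-end result is the familiar "product over sum" rule, the same form as parallel resistors, which is exactly what Eq. (14) predicts for elements sharing a common effort.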