The Practice of Health Program Evaluation
Second Edition
To my family
The Practice of Health Program Evaluation
Second Edition
David Grembowski
University of Washington
FOR INFORMATION:
SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
E-mail: order@sagepub.com
SAGE Publications Ltd.
1 Oliver’s Yard
55 City Road
London EC1Y 1SP
United Kingdom
SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road, New Delhi 110 044
India
SAGE Publications Asia-Pacific Pte. Ltd.
3 Church Street
#10-04 Samsung Hub
Singapore 049483
Copyright © 2016 by SAGE Publications, Inc.
All rights reserved. No part of this book may be reproduced or utilized in any form or by any means,
electronic or mechanical, including photocopying, recording, or by any information storage and retrieval
system, without permission in writing from the publisher.
Printed in the United States of America
Library of Congress Cataloging-in-Publication Data
Grembowski, David, author.
The practice of health program evaluation / David Grembowski. — Second edition.
p.; cm.
Includes bibliographical references and index.
ISBN 978-1-4833-7637-0 (paperback: alk. paper)
I. Title. [DNLM: 1. Program Evaluation—methods. 2. Cost-Benefit Analysis. 3. Evaluation Studies as Topic. 4. Health Care Surveys—
methods. W 84.41]
RA399.A1
362.1—dc23 2015020744
This book is printed on acid-free paper.
Acquisitions Editor: Helen Salmon
Editorial Assistant: Anna Villarruel
Production Editor: Kelly DeRosa
Copy Editor: Amy Marks
Typesetter: C&M Digitals (P) Ltd.
Proofreader: Sarah J. Duffy
Indexer: Teddy Diggs
Cover Designer: Michael Dubowe
Marketing Manager: Nicole Elliott
Detailed Contents
Acknowledgments
About the Author
Preface
Prologue
1. Health Program Evaluation: Is It Worth It?
Growth of Health Program Evaluation
Types of Health Program Evaluation
Evaluation of Health Programs
Evaluation of Health Systems
Summary
List of Terms
Study Questions
2. The Evaluation Process as a Three-Act Play
Evaluation as a Three-Act Play
Act I: Asking the Questions
Act II: Answering the Questions
Act III: Using the Answers in Decision Making
Roles of the Evaluator
Evaluation in a Cultural Context
Ethical Issues
Evaluation Standards
Summary
List of Terms
Study Questions
Act I: Asking the Questions
3. Developing Evaluation Questions
Step 1: Specify Program Theory
Conceptual Models: Theory of Cause and Effect
Conceptual Models: Theory of Implementation
Step 2: Specify Program Objectives
Step 3: Translate Program Theory and Objectives Into Evaluation Questions
Step 4: Select Key Questions
Age of the Program
Budget
Logistics
Knowledge and Values
Consensus
Result Scenarios
Funding
Evaluation Theory and Practice
Assessment of Fit
Summary
List of Terms
Study Questions
Act II: Answering the Questions
Scene 1: Developing the Evaluation Design to Answer the Questions
4. Evaluation of Program Impacts
Quasi-Experimental Study Designs
One-Group Posttest-Only Design
One-Group Pretest-Posttest Design
Posttest-Only Comparison Group Design
Recurrent Institutional Cycle (“Patched-Up”) Design
Pretest-Posttest Nonequivalent Comparison Group Design
Single Time-Series Design
Repeated Treatment Design
Multiple Time-Series Design
Regression Discontinuity Design
Summary: Quasi-Experimental Study Designs and Internal Validity
Counterfactuals and Experimental Study Designs
Counterfactuals and Causal Inference
Pretest-Posttest Control Group Design
Posttest-Only Control Group Design
Solomon Four-Group Design
Randomized Study Designs for Population-Based Interventions
When to Randomize
Closing Remarks
Statistical Threats to Validity
Generalizability of Impact Evaluation Results
Construct Validity
External Validity
Closing Remarks
Evaluation of Impact Designs and Meta-Analysis
Summary
List of Terms
Study Questions
5. Cost-Effectiveness Analysis
Cost-Effectiveness Analysis: An Aid to Decision Making
Comparing Program Costs and Effects: The Cost-Effectiveness Ratio
Types of Cost-Effectiveness Analysis
Cost-Effectiveness Studies of Health Programs
Cost-Effectiveness Evaluations of Health Services
Steps in Conducting a Cost-Effectiveness Analysis
Steps 1–4: Organizing the CEA
Step 5: Identifying, Measuring, and Valuing Costs
Step 6: Identifying and Measuring Effectiveness
Step 7: Discounting Future Costs and Effectiveness
Step 8: Conducting a Sensitivity Analysis
Step 9: Addressing Equity Issues
Steps 10 and 11: Using CEA Results in Decision Making
Evaluation of Program Effects and Costs
Summary
List of Terms
Study Questions
6. Evaluation of Program Implementation
Types of Evaluation Designs for Answering Implementation Questions
Quantitative and Qualitative Methods
Multiple and Mixed Methods Designs
Types of Implementation Questions and Designs for Answering Them
Monitoring Program Implementation
Explaining Program Outcomes
Summary
List of Terms
Study Questions
Act II: Answering the Questions
Scenes 2 and 3: Developing the Methods to Carry Out the Design and Conducting the
Evaluation
7. Population and Sampling
Step 1: Identify the Target Populations of the Evaluation
Step 2: Identify the Eligible Members of Each Target Population
Step 3: Decide Whether Probability or Nonprobability Sampling Is Necessary
Step 4: Choose a Nonprobability Sampling Design for Answering an Evaluation Question
Step 5: Choose a Probability Sampling Design for Answering an Evaluation Question
Simple and Systematic Random Sampling
Proportionate Stratified Sampling
Disproportionate Stratified Sampling
Post-Stratification Sampling
Cluster Sampling
Step 6: Determine Minimum Sample Size Requirements
Sample Size in Qualitative Evaluations
Types of Sample Size Calculations
Sample Size Calculations for Descriptive Questions
Sample Size Calculations for Comparative Questions
Step 7: Select the Sample
Summary
List of Terms
Study Questions
8. Measurement and Data Collection
Measurement and Data Collection in Quantitative Evaluations
The Basics of Measurement and Classification
Step 1: Decide What Concepts to Measure
Step 2: Identify Measures of the Concepts
Step 3: Assess the Reliability, Validity, and Responsiveness of the Measures
Step 4: Identify and Assess the Data Source of Each Measure
Step 5: Choose the Measures
Step 6: Organize the Measures for Data Collection and Analysis
Step 7: Collect the Measures
Data Collection in Qualitative Evaluations
Reliability and Validity in Qualitative Evaluations
Management of Data Collection
Summary
Resources
List of Terms
Study Questions
9. Data Analysis
Getting Started: What’s the Question?
Qualitative Data Analysis
Quantitative Data Analysis
What Are the Variables for Answering Each Question?
How Should the Variables Be Analyzed?
Summary
List of Terms
Study Questions
Act III: Using the Answers in Decision Making
10. Disseminating the Answers to Evaluation Questions
Scene 1: Translating Evaluation Answers Back Into Policy Language
Translating the Answers
Building Knowledge
Developing Recommendations
Scene 2: Developing a Dissemination Plan for Evaluation Answers
Target Audience and Type of Information
Format of Information
Timing of the Information
Setting
Scene 3: Using the Answers in Decision Making and the Policy Cycle
How Answers Are Used by Decision Makers
Increasing the Use of Answers in the Evaluation Process
Ethical Issues
Summary
List of Terms
Study Questions
11. Epilogue
Compendium
References
Index
Acknowledgments
In many ways, this book is a synthesis of what I have learned about evaluation from my evaluation teachers
and colleagues, and I am very grateful for what they have given me. I wish to acknowledge and thank my
teachers—Marilyn Bergner and Stephen Shortell—who provided the bedrock for my professional career in
health program evaluation when I was in graduate school. In those days, their program evaluation class was
structured around a new, unpublished book, Health Program Evaluation, by Stephen Shortell and William
Richardson, which has since become a classic in our field and provided an early model for this work.
I also benefited greatly from the support and help of other teachers of health program evaluation. I wish to
thank Ronald Andersen, who taught health program evaluation at the University of Chicago (now at UCLA)
for many years. He was an early role model and shared many insights about how to package course material in
ways that could be grasped readily by graduate students. His evaluation course divided the evaluation process
into three distinct phases, and I discovered early that his model also worked very well in my own evaluation
courses. I also wish to thank Diane Martin and Rita Altamore, who taught this course at the University of
Washington and shared their approaches in teaching this subject with me.
I also benefited greatly from “lessons learned” about evaluation through my work with other faculty—Melissa
Anderson, Ronald Andersen, Betty Bekemeier, Shirley Beresford, Michael Chapko, Meei-shia Chen, Karen
Cook, Douglas Conrad, Richard Deyo, Paula Diehr, Don Dillman, Mary Durham, Ruth Engelberg, Louis
Fiset, Paul Fishman, Harold Goldberg, David Grossman, Wayne Katon, Eric Larson, Diane Martin, Peter
Milgrom, Donald Patrick, James Pfeiffer, James Ralston, Robert Reid, Sharyne Shiu-Thornton, Charles
Spiekerman, John Tarnai, Beti Thompson, and Thomas Wickizer and others—on various studies over the
years. I also am grateful for the faculty in the Departments of Biostatistics and Epidemiology in the School of
Public Health at the University of Washington, who continue to advance my ongoing education about
research methods. I also wish to thank the Agency for Healthcare Research and Quality for inviting me to
become a member of a standing study section, and the opportunity to review grant applications for 4 years. I
learned much about research methods and evaluation from my study section colleagues and from performing
the reviews, which has informed the content of this second edition.
Several people played important roles in the production of this book. I especially want to thank the students in
my health program evaluation class, who have provided me with insights about how to write a book that
provides guidance to those who have never performed an evaluation. Many thanks also are extended to the
anonymous SAGE reviewers. Their thoughtful comments significantly improved the quality of this textbook.
Last but by no means least, I wish to thank my family for their support throughout both editions of this book.
About the Author
David Grembowski, PhD, MA, is a professor in the Department of Health Services in the School of Public Health and the
Department of Oral Health Sciences in the School of Dentistry, and adjunct professor in the
Department of Sociology, at the University of Washington. He has taught health program evaluation to
graduate students for more than 20 years. His evaluation interests are prevention, the performance of
health programs and health care systems, survey research methods, and the social determinants of
population health. His other work has examined efforts to improve quality by increasing access to care in
integrated delivery systems; pharmacy outreach to provide statins preventively to patients with diabetes;
managed care and physician referrals; managed care and patient-physician relationships and physician
job satisfaction; cost-effectiveness of preventive services for older adults; cost-sharing and seeing out-of-
network physicians; social gradients in oral health; local health department spending and racial/ethnic
disparities in mortality rates; fluoridation effects on oral health and dental demand; financial incentives
and dentist adoption of preventive technologies; effects of dental insurance on dental demand; and the
link between mother and child access to dental care.
Preface
Since The Practice of Health Program Evaluation was published over a decade ago, much has changed in
evaluation. The methods for conducting evaluations of health programs and systems have advanced
considerably, and the mastery of evaluation has become more challenging. In this second edition, my intent is
to create a state-of-the-art resource for graduate students, researchers, health policymakers, clinicians,
administrators, and other groups to use to evaluate the performance of public health programs and health
systems.
In particular, two areas receive much more attention in the second edition: program theory and causal
inference. Program theory refers to the chain of logic, or mechanisms, through which a health program is
expected to cause change that leads to desired beneficial effects and avoid unintended consequences. Theory is
important in the evaluation process because of growing evidence that health programs based on theory are
more likely to be effective than programs lacking a theoretical base. Consequently, the second edition focuses
in more depth on the creation and application of conceptual models to evaluate health programs. For program
theory, a conceptual model is a diagram that illustrates the mechanisms through which a program is expected
to cause intended outcomes. Because implementation also influences program performance, conceptual
models for program implementation also are covered, as well as their relationships to program theory.
In the first edition, causal inference and study designs for impact evaluation were covered in the Campbell
tradition. Over the past 15 years, William Shadish and colleagues have refined the Campbell model and its
distinctions among internal, external, construct, and statistical conclusion validity. At the same time, Donald
Rubin and Judea Pearl have advanced their own models of causal inference in experiments and observational
studies. The second edition addresses all three models but retains the Campbell model as its core for causal
inference.
The second edition also contains new content in many other areas, including the following:
The role of stakeholders in the evaluation process
Ethical issues in evaluation and evaluation standards
The conduct of evaluation in a cultural context
Study designs for impact evaluation of population-based interventions
The application of epidemiology and biostatistics in evaluation methods
Mixed methods that combine quantitative and qualitative approaches for evaluating health programs
Over the past decade, I also have changed how I teach and conduct evaluations, and this professional
evolution is captured throughout the book. My professional growth has not been a solo journey but has been
influenced continually by the students in my classes, the faculty with whom I work, service on study sections
reviewing federal grant applications, and my own evaluation experiences. In particular, Gerald van Belle,
professor of biostatistics at the University of Washington, published a book, Statistical Rules of Thumb, which
offers simple, practical, and well-informed guidance on how to apply statistical concepts in public health
studies. Inspired by van Belle’s work, as well as Gerin and Kapelewski’s practical “hints” in their book Writing
the NIH Grant Proposal, I have sprinkled “Rules of Thumb” in several chapters, offering guidance on the
practice of evaluation.
The second edition retains its focus on applied research methods, with the assumption that teaching and
learning can improve through a customized textbook about the evaluation of health programs and systems that
presents information in a clear manner. However, designing and conducting a health program evaluation is
much more than an exercise in applied research methods. All evaluations are conducted in a political context,
and the ability to complete an evaluation successfully depends greatly on the evaluator’s ability to navigate the
political terrain. In addition, evaluation itself is a process with interconnected steps designed to produce
information for decision makers and other groups. Understanding the steps and their interconnections is just
as fundamental to evaluation as is knowledge of quantitative and qualitative research methods. To convey
these principles, I use the metaphor of evaluation as a three-act play with a variety of actors and interest
groups, each having a role and each entering and exiting the stage at different points in the evaluation process.
Evaluators are one of several actors in this play, and it is critical for them to understand their role if they are to
be successful.
Applying this principle, the book has three major sections, or “Acts,” that cover basic steps of the evaluation
process. Act I, “Asking the Questions,” occurs in the political realm, where evaluators work with decision
makers, stakeholders, and other groups to identify the questions that the evaluation will answer about a
program. Chapter 3 presents material to help students and health professionals develop evaluation questions,
specify program theory, and draw conceptual models.
In Act II, “Answering the Questions,” evaluation methods are applied to answer the questions about the
program. After the relevant interest groups and the evaluator agree on the key questions about a program, the
next step is to choose one or more evaluation designs that will answer those questions. Chapter 4 presents
experimental and quasi-experimental impact evaluation designs, and Chapter 5 reviews cost-effectiveness
analysis, which has become more prevalent and important in health care over the past two decades. Chapter 6
presents methods for designing evaluations of program implementation, or process evaluation, including
mixed methods.
Once a design is chosen, quantitative and qualitative methods for conducting the evaluation must be
developed and implemented. Chapter 7 presents methods for choosing the populations for the evaluation and
sampling members from them. Chapter 8 reviews measurement and data collection issues frequently
encountered in quantitative and qualitative evaluations. Finally, Chapter 9 describes data analyses for different
impact and implementation designs.
In Act III, “Using the Answers in Decision Making,” the evaluation returns to the political realm, where
findings are disseminated to decision makers, interest groups, and other constituents. A central assumption is
that evaluations are useful only when their results are used to formulate new policy or improve program
performance. Chapter 10 presents methods for developing formal dissemination plans and reviews factors that
influence whether evaluation findings are used or not. Chapter 11 presents some closing thoughts about
evaluation in public health and health systems.
In summary, by integrating the evaluation literature about health programs and services from a variety of
sources, this book is designed to be an educational resource for teachers and students, as well as a reference for
health professionals engaged in program evaluation.
Prologue
1 Health Program Evaluation: Is It Worth It?
Growth of Health Program Evaluation
Types of Health Program Evaluation
Evaluation of Health Programs
Evaluation of Health Systems
Summary
List of Terms
Study Questions
Evaluation is a part of everyday life. Does Ford make a better truck than Chevrolet? What kind of reviews did
a new movie get? What are the top 10 football teams in the country? Who will be the recipient of this year’s
outstanding teacher award? All these questions entail judgments of merit, or “worth,” reached by weighing
information against some explicit or implicit yardstick (Weiss, 1972, 1998a). When judgments result in
decisions, evaluation is being performed at some level (Shortell & Richardson, 1978).
This book is about the evaluation of health programs and the role it plays in program management and
decision making. All societies face sundry health problems. Accidents, cancer, diabetes, heart disease, HIV
infection, suicide, inequitable health and access to health care across social groups, and many others are
mentioned commonly in the health literature (U.S. Department of Health and Human Services, 2015a). A
health program is an organized response, or “intervention,” to reduce or eliminate one or more problems by
achieving one or more objectives, with the ultimate goal of improving the health of society or reducing health
inequalities across social groups (Shortell & Richardson, 1978). Interventions are defined broadly and include
intentional changes in health systems or other societal institutions to improve individual and population
health and reduce health inequalities by socioeconomic status, race/ethnicity, religion, age, gender, sexual
orientation or gender identity, mental health, disability, geographic location, or other characteristics linked
historically to discrimination or social exclusion (U.S. Department of Health and Human Services, 2015a).
Evaluation is the systematic assessment of a program’s implementation and consequences to produce
information about the program’s performance in achieving its objectives (Weiss, 1998a). In general, most
evaluations are conducted to answer two fundamental questions: Is the program working as intended? Why is
this the case? Research methods are applied to answer these questions, to increase the accuracy and objectivity
of judgments about the program’s success in reaching its objectives, and to search for evidence of unintended
and unwanted consequences. The evaluation process fulfills this purpose by defining clear and explicit criteria
for success, collecting representative evidence of program performance, and comparing this evidence to the
criteria established at the outset. Evaluations help program managers understand the reasons for program
performance, which may lead to improvement or refinement of the program. Evaluations also help program
funders to make informed judgments about a program’s worth, which may result in decisions to extend it to
other sites or to cut back or abolish a program so that resources may be allocated elsewhere. In essence,
evaluation is a management or decision-making tool for program administrators, planners, policymakers, and
other health officials.
From a societal perspective, evaluation also may be viewed as a deliberate means of promoting social change
for the betterment of society (Shortell & Richardson, 1978; Weiss, 1972). Just as personal growth and
development are fundamental to a person’s quality of life, so do organizations and institutions mature by
learning more about their own behavior (Shortell & Richardson, 1978). The value of evaluation comes from
the insights that its findings can generate, which can speed up the learning process to produce benefits on a
societal scale (Cronbach, 1982).
Evaluation, however, can be a double-edged sword. The desire to learn more is often accompanied by the fear
of what may be found (Donaldson et al., 2002). Favorable results typically are greeted with a sigh of relief by
those who want the program to succeed. By contrast, unfavorable results may be as welcome as the plague.
When an evaluation finds that a program has not achieved its objectives, the program’s very worth is often
brought into question. Program managers and staff may feel threatened by poor evaluation results because
they often are held accountable for them by funders, who may decide the program has little worth. In this
case, funders or other decision makers often have the power and authority to change program implementation,
replace personnel, or even terminate the program and allocate funds elsewhere.
For popular health programs, such as prenatal care for low-income women, program advocates may view
unfavorable results as a threat to the very life of the program. To a great degree, the worth of prenatal care
programs is grounded on the argument that public spending now will prevent future costs and medical
complications associated with low birth weight (Huntington & Connell, 1994). Previous evaluations reported
“good” news: Prenatal care pays for itself (for every $1.00 spent, up to $3.38 will be saved). The “bad” news is
that the evaluations have serious methodologic flaws that may have resulted in overestimates of the cost
savings from prenatal care (Huntington & Connell, 1994). Today, the evidence is insufficient to conclude that
universal prenatal care prevents adverse birth outcomes and is cost saving (Grosse et al., 2006; Krans & Davis,
2012). These findings have attracted national attention because they challenge the very worth of prenatal care
programs if the objective of those programs is to save more than they cost (Kolata, 1994).
In all evaluations, a program’s worth depends on both its performance and the desirability of its objectives,
which is always a question of values (Kane et al., 1974; Palumbo, 1987; Weiss, 1983). For prenatal care and
other prevention programs, the real question may not be, How much does this save? but more simply, How
much is this program worth? (Huntington & Connell, 1994). For health and all types of social programs, the
answer to this fundamental question can have far-reaching consequences for large numbers of people.
Greenhalgh and Russell (2009) capture the essence of the values quandary in evaluation:
Should we spend limited public funds on providing state-of-the-art neonatal intensive-care facilities for
very premature infants? Or providing “Sure Start” programs for the children of teenage single mothers?
Or funding in vitro fertilization for lesbian couples? Or introducing a “traffic light” system of food
labeling, so even those with low health literacy can spot when a product contains too much fat and not
enough fiber? Or ensuring that any limited English speaker is provided with a professional interpreter for
health-care encounters? Of course, all these questions require “evidence”—but an answer to the question
“What should we do?” will never be plucked cleanly from massed files of scientific evidence. Whose likely
benefit is worth whose potential loss? These are questions about society’s values, not about science’s
undiscovered secrets. (p. 310)
Growth of Health Program Evaluation
Evaluation is a relatively new discipline. Prior to the 1960s, formal, systematic evaluations of social and health
programs were conducted rarely, and few professionals performed evaluations as a full-time career (Shadish et
al., 1991). With the election of Lyndon Johnson to the presidency in 1964, the United States entered into an
era of unprecedented growth in social and health programs for the disadvantaged. Medicare (public health
insurance for adults aged 65 and over), Medicaid (public health insurance for low-income individuals), and
other health care programs were launched, and Congress often mandated and funded evaluation of their
performance (O. W. Anderson, 1985). As public and private funding for evaluation grew, so did the number
of professionals and agencies conducting evaluations. Today, evaluation is a well-known, international
profession. In many countries, evaluators have established professional associations that hold annual
conferences (e.g., American Evaluation Association, Canadian Evaluation Society, African Evaluation
Association, and International Organization for Cooperation in Evaluation). Although an association for
health program evaluation does not exist in the United States, the American Public Health Association,
AcademyHealth (the professional association for health services research), and other groups often serve as
national forums for health evaluators to collaborate and disseminate their findings.
Other forces also have contributed to the growth of health program evaluation since the 1960s. Two
important factors are scarce resources and accountability. All societies have limited resources to address
pressing health problems. In particular, low-income and middle-income countries face the twin problems of
severe resource constraints and many competing priorities (Oxman et al., 2010). When resources are scarce,
competition for funds can intensify, and decision makers may allocate resources only to programs that can
demonstrate good performance at the lowest cost. In such environments, evaluations can provide useful
information for managing programs and, if performance is sound, for defending a program’s worth and
justifying continued funding. If, however, an evaluation is launched solely to collect information to defend a
program’s worth in political battles over resource allocation, and if the evaluation is conducted in an impartial
manner, there is no guarantee that an evaluation will produce results favoring the program.
The trillions of dollars invested globally in health programs have increased the importance of accountability
(Oxman et al., 2010). For financial and legal reasons, public and private funding agencies are concerned with
holding programs accountable for funds and their disbursement, with an eye toward avoiding inappropriate
payments. High-income countries have the greatest expenditures in health programs and, therefore, the
greatest potential for waste (Oxman et al., 2010). For performance reasons, funding agencies also want to
know if their investments produced expected benefits while avoiding harmful side effects. Similarly,
government, employers, and other purchasers of health services are concerned with clinical and fiscal
accountability, or evidence that health care systems and providers deliver services of demonstrated
effectiveness and quality in an efficient manner (Addicott & Shortell, 2014; Relman, 1988; Rittenhouse et al.,
2009; Shortell & Casalino, 2008). Employers also want to know whether their investments in health care
improve their employees’ productivity, for example, by collecting information about how quickly workers are
back on the job after an episode of care (Moskowitz, 1998).
Another factor stimulating interest in health program evaluation is more emphasis on prevention. Many
people and health professionals believe that preventing disease is better than curing it. Because the evidence
indicates that much disease is preventable (U.S. Department of Health and Human Services, 2015a), a variety
of preventive programs and technologies have emerged to maintain or improve the nation’s health. For
example, immunizations to prevent disease, water fluoridation to reduce caries, mammography screening to
detect breast cancer, and campaigns promoting the use of bicycle helmets to prevent injuries are common in
our society. Healthy People 2000 and its successors, Healthy People 2010 and Healthy People 2020, specify health
promotion and disease prevention objectives for the nation and provide a framework for the development and
implementation of federal, state, and local programs to meet these objectives (U.S. Department of Health and
Human Services, 2015a). As the number of programs has proliferated, so has interest in evaluating their
performance in achieving their objectives. However, although prevention and the diagnosis and treatment of
illness in its early stages are often advocated because they can save health dollars, preventing illness may either
save money or add to health care costs, depending on the intervention and the target population (J. T. Cohen
et al., 2008).
Since the 1960s, health program evaluation also has been promoted by public agencies, foundations, and other
groups sponsoring a variety of demonstration projects to improve population health and the performance of
the health care system or to achieve other goals. Consistent with our nation’s belief in incrementalism in
political decision making (Lindblom, 1959, 1979; Marmor, 1998; Shortell & Richardson, 1978), decision
makers often desire information about whether a proposed change will work before authorizing changes on a
broad scale. To supply this information, decision makers may approve demonstration projects or large-scale
social experiments to test the viability of promising solutions to pressing health problems. A prominent
example is the Rand Health Insurance Study, a large-scale experiment in which households were randomly
assigned to health insurance plans with different cost-sharing arrangements to determine their impacts on
health care utilization and expenditures, health outcomes, and satisfaction with medical and dental care
(Aron-Dine et al., 2013; Newhouse & the Insurance Experiment Group, 1993). Because large-scale
evaluations are relatively expensive to conduct, controversy may exist about whether their findings are worth
the resources invested in them. Nevertheless, decision makers are likely to continue authorizing such projects
because they often address critical issues in health policy, and because they give decision makers the flexibility
to be responsive to a problem while avoiding long-term commitments of resources. Evaluation is an important
element of demonstration projects because it provides the evidence for judging their worth.
Another factor contributing to the growth of evaluation is increasing government intervention to fix complex
problems in the U.S. health care system. Although the United States expends more per capita for health care
than any other country in the world, in 2011 U.S. life expectancy (78.7 years) ranked 26th out of 34 developed
countries (Organisation for Economic Co-operation and Development, 2008). Social and physical
environments, health behaviors, and genetics account in part for these health patterns, but major problems in
the U.S. health care system also contribute to the health deficits. Payment for health care remains largely fee-
for-service, which increases overuse and costs, undermining health outcomes and leaving fewer resources for
other health-producing societal investments (Evans & Stoddart, 1990). Before 2010, about 18% of Americans
were uninsured, and social groups with the worst health were more likely to be uninsured, to have greater
unmet needs for preventive and therapeutic care, and were less likely to have doctor visits (Hadley, 2003;
National Center for Health Statistics, 2012). For those receiving health care, persistent health care disparities
exist across social groups (Agency for Healthcare Research and Quality, 2012), and quality of care is low, with
only 55% of Americans receiving recommended care (McGlynn et al., 2003). Patient dissatisfaction with the
health care system is high, and health care is often fragmented and provider oriented rather than patient
centered (Institute of Medicine, 2001).
To address these and other problems, Congress, with much political rancor, passed the Patient Protection
and Affordable Care Act (ACA), the most significant government intervention in the U.S. health care system
since the passage of Medicare and Medicaid in the 1960s. Although a key reason for government intervention
was to reduce the percentage of uninsured individuals in the United States, another important reason was the
escalating costs of health care for federal, state, and local governments, which crowd out resources for other
public investments. Between 1989 and 2010, the nation’s health care spending grew from $604 billion to $2.6
trillion, with the public share increasing from 45% in 2010 to a projected 49% in 2022 (Cuckler et al., 2013;
Levit et al., 1991). Government intervention in the U.S. health care system likely will be greater in the 21st
century than in the 20th, which may stimulate future evaluations of system performance.
Another trend that is increasing health program evaluation is the movement toward evidence-based practice
in medicine and related fields. One reason for the low quality of U.S. health care is the lack of evidence about
comparative effectiveness, or what services work best, for whom, under what circumstances (Institute of
Medicine, 2008; McGlynn et al., 2003). When evidence exists, clinicians may not provide evidence-based
care, or the evidence for single conditions is less relevant for patients with multiple chronic conditions, who
are the main challenge facing health care systems worldwide (Barnett, 2012; Institute of Medicine, 2001,
2008). To increase medical evidence, Congress created, as part of the ACA, the Patient-Centered Outcomes
Research Institute (2015), which has funded over 365 studies, totaling more than $700 million, on the comparative
effectiveness of alternative clinical services to prevent, diagnose, or treat common medical conditions as well as
to improve health care delivery and outcomes. The National Institutes of Health, the Agency for Healthcare
Research and Quality, and other federal agencies also fund studies that contribute to the medical evidence
base.
Similar calls to grow the evidence base are voiced in public health practice, management, and other fields
(Brownson et al., 2003; Brownson et al., 2009; Walshe & Rundall, 2001). The Centers for Disease Control
and Prevention has supported The Community Guide, a website that presents evidence-based
recommendations on preventive services, programs, and policies that work or do not work, based on
systematic reviews conducted by the Community Preventive Services Task Force. Of equal importance, The
Community Guide also indicates gaps in the evidence base in 20 areas where there is insufficient evidence to
determine whether an intervention works and where more funding and evaluation are needed to close the gaps
(Community Guide, 2015).
A related trend, dissemination and implementation science, also is generating growth in evaluation research.
This trend suggests that the supply of evidence is not the problem; rather, the critical issue is that only a
fraction of the evidence—about 14%—is translated into clinical practice and that promising results in new
publications take 17 years to be implemented widely (Balas & Boren, 2000; L. W. Green et al., 2009;
Institute of Medicine, 2013). Implementation science is a relatively new field that identifies the factors,
processes, and methods that increase the likelihood of evidence-based (i.e., effective) interventions being
adopted and used in medical practice, public health departments, and other settings to sustain improvements
in population health (Eccles & Mittman, 2006; Lobb & Colditz, 2013). Implementation science is part of the
larger process of translation research, which involves studying and understanding the movement of scientific
discovery from “bench-to-bedside-to-population”—or the linear progression from basic science discoveries to
efficacy and effectiveness studies, followed by large-scale demonstrations, and, finally, dissemination into
practice (Glasgow et al., 2012; Lobb & Colditz, 2013). As dissemination and implementation science
continues to grow, so will methods of process and impact evaluation for identifying strategies that can cause
greater and quicker uptake of effective interventions into routine use in health organizations.
In the health care system, growth in organizational consolidation and information technology is creating new
infrastructures for the evaluation of organizational and health system performance (Cutler & Morton, 2013;
Moses et al., 2013). All sectors of the health care system (insurers, physician offices, hospitals,
pharmaceuticals, and biotechnology) are consolidating, primarily horizontally (such as the merger of two or
more hospitals) but also vertically (such as the merger of a hospital and a physician group), to lower costs
through economies of scale and to gain market power over competitors and other sectors of the U.S. health
care economy. The ACA has increased the pace of consolidation and integration by authorizing the
development of accountable care organizations, or voluntary groups of integrated delivery systems, hospitals,
and other providers that assume responsibility for defined populations of Medicare beneficiaries (Berwick,
2011; Cutler & Morton, 2013; Dafny, 2014).
As the size of health care organizations has increased, so have investments in information technology to lower
their costs, increase coordination, and improve the quality and safety of clinical care. Although the evidence is
unclear regarding whether information technology has produced these expected benefits, consolidation and
information technology have converged to produce massive databases, or “Big Data,” creating new
opportunities for advancing evaluation methods, assessing health system performance, and building the
evidence base in the foreseeable future (Schneeweiss, 2014). Because the trend toward larger, more complex
health care organizations creates management challenges, leaders and administrators are major consumers of
evaluation information from Big Data, which can be used to make informed decisions to deliver efficient,
effective, equitable care on a population level. Growth in Big Data will require evaluators who are adept in
both quantitative and qualitative methods (e.g., the analysis of free-form text in the electronic medical charts
of thousands of patients) and who are aware of the strengths and limitations of Big Data (Khoury &
Ioannidis, 2014).
As a whole, the forces contributing to the growth of health program evaluation are interrelated and will likely
continue well into the 21st century. From these trends, two broad types of health program evaluation have
emerged, which are reviewed in the next section.
Types of Health Program Evaluation
Health programs usually are implemented to achieve specific outcomes by performing some type of
intervention or service. In general, two basic types of evaluation are conducted in the health field:
The evaluation of health programs
The evaluation of health systems
Evaluation of Health Programs
Evaluation of health programs includes programs created to reduce or eliminate a health problem or achieve a
specific objective. Healthy People 2020 has established 42 “topic areas” for improving the nation’s health and
reducing inequalities, ranging from adolescent health and arthritis to tobacco use and vision health (U.S.
Department of Health and Human Services, 2015b).
In essence, the topic areas are a comprehensive inventory of the categories of health programs that can be
implemented to achieve specific health objectives. For example, water fluoridation programs are implemented
to improve oral health, and exercise programs are created to increase physical activity. This type of evaluation
assesses the performance of programs developed to achieve health objectives in these and other areas. Some
topic areas, such as access to health services and public health infrastructure, overlap with the evaluation of
health systems.
Evaluation of Health Systems
Aday and colleagues (1998) present a framework for evaluating the performance of health care systems based
partly on Donabedian’s (1973) earlier work (see Figure 1.1). A health care system has a structure defined by
federal, state, and local laws and regulations; the availability of personnel, facilities, and other resources; and
the organization and financing of care. The structure component also includes the characteristics of the
population that the system serves, as well as the physical, social, and economic environments where they live.
As a whole, the structure of the system influences the process or delivery of health services, which in turn
produces outcomes, health and well-being. Three criteria are proposed for gauging the worth—or value—of
system performance. The three “E’s” define what improvements in health and satisfaction (effectiveness) were
produced by health services at what cost (efficiency) and for what population groups (equity). A fourth “E”
(ethics) is essential for judging whether a fair or equitable distribution of the costs and outcomes of health
services exists among those who need care and those who pay for it. Based on ethical principles of distributive
justice, inequitable access to care exists when those who need care the most do not get it. Table 1.1 presents
definitions of criteria for assessing effectiveness, efficiency, and equity at the clinical and population levels.
Evaluations of the performance of health care systems typically examine the influence of the structural
component on the process of care, or the influence of the structure and process components on the outcomes
of care (Begley et al., 2013; Clancy & Eisenberg, 1998; Kane, 1997). For example, an evaluation of the
association between the structure and process components of the system was performed by Baicker and
associates (2013; see also Finkelstein et al., 2012), who examined Oregon’s 2008 expansion of its Medicaid
program for low-income adults through a lottery drawing of about 30,000 individuals from a waiting list of
almost 90,000 persons. As expected, for persons who met eligibility requirements and enrolled in the program,
Medicaid coverage increased the use of health services, but the findings were mixed for quality of care and
health outcomes. Medicaid coverage improved rates of diabetes detection and management, improved self-
reported health and measures of mental health but not physical health, and reduced financial strain.
The evaluation of health systems also includes the local public health system. Hajat and colleagues (2009)
present a framework for evaluating the performance of local public health departments in improving
population health and reducing health inequalities (see Figure 1.2). Local health departments (LHDs) are the
government entities that are expected to improve population health and reduce health inequalities, particularly
in vulnerable social groups, by creating conditions in communities that support good health (Scutchfield &
Howard, 2011). LHDs typically have partnerships with other public, private, and voluntary entities, forming a
loosely connected, local public health system that coordinates activities to achieve common goals. LHDs
operate in a larger environment, or context, that informs their philosophy of public health practice, their
mission, and long- and short-term objectives (or “purpose”). Within an organizational structure, LHDs have
an infrastructure, or “inputs,” such as personnel, fiscal resources, information, and other resources, which are
converted into processes performing the core functions of public health (assessment, policy development, and
assurance) and the 10 essential public health services (see Table 1.2), which ultimately drive LHD
performance (Hyde & Shortell, 2012). Expected outcomes are improved population health, reduced health
inequalities, and a strengthened local public health system.
Figure 1.1 Framework for Evaluating the Performance of Health Care Systems
Source: Figure 1.4 in Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity, 3rd ed.,
by Lu Ann Aday et al., 2004. Chicago: Health Administration Press.
Source: Table 1.1 in Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity, 3rd ed., by Lu Ann Aday et al., 2004. Chicago:
Health Administration Press.
Public health services and systems research is the name of the relatively new field that applies the methods of
health services research—which includes evaluation—to investigate the performance of public health systems
(Scutchfield et al., 2009; Scutchfield & Shapiro, 2011). For example, recent studies have examined whether
LHDs with greater expenditures per capita have lower mortality and reduced health inequalities. Using
national data, Mays and Smith (2011) report that county-level mortality rates declined 1.1% to 6.9% for each
10% increase in LHD spending. Similarly, Grembowski et al. (2010) examined whether 1990–1997 changes
in LHD expenditures per capita were associated inversely with 1990–1997 changes in all-cause mortality rates
for Black and White racial groups in U.S. local jurisdictions. Although changes in LHD expenditures were
not related to reductions in Black/White inequalities in mortality rates in the total population, inverse
associations were detected for adults aged 15–44 and for males. Bekemeier et al. (2014) also report that LHD
expenditures for maternal and child health (MCH) had the expected, inverse relationship with county-level
low birth weights, particularly for counties with high concentrations of poverty and for categories of MCH
spending based on need.
Figure 1.2 Framework for Evaluating the Performance of Local Health Departments
Source: “What Predicts Local Public Health Agency Performance Improvement? A Pilot Study in North
Carolina,” by A. Hajat et al., 2009, Journal of Public Health Management and Practice, 15(2), p. E23.
Note: LPHA, local public health agency; MCC, maternity care coordination; TB, tuberculosis; WIC,
women, infants, and children.
Source: “The Public Health System and the 10 Essential Public Health Services,” by Centers for Disease Control and Prevention, 2014.
Retrieved from http://www.cdc.gov/nphpsp/essentialservices.html.
Historically, the health care system and the public health system have operated largely in isolation in the
United States, perhaps because 97% of national health expenditures go to health care services (Institute of
Medicine, 2013), and in prior years the health care system had few incentives to build linkages with the public
health system. However, by adopting a population perspective—long the hallmark of public health systems—
the ACA is creating opportunities for the health care system and the public health system to work together
through several mechanisms (Institute of Medicine, 2013). For example, accountable care organizations
(ACOs) are responsible for the quality of care in a defined population (Lewis et al., 2013). Fisher et al. (2012)
present a framework for evaluating the performance of ACOs, including their impacts on health outcomes,
which creates incentives for ACO providers, LHDs, and community partners to deploy population-level
strategies to protect the health of the ACO patient population (Institute of Medicine, 2013, 2014). The
ACA’s health insurance exchanges and health information exchanges also are a population approach to health
insurance expansion and access to health care (Mays & Scutchfield, 2012; Scutchfield et al., 2012).
Economic evaluation, such as cost-effectiveness analysis and cost-benefit analysis, is an important element of
assessing the performance of health programs and health systems. The focus is measurement of the benefits,
or outcomes, of a health program or medical technology relative to the costs of producing those benefits. In
the face of scarce resources, interventions that produce relatively large benefits at a low cost have greater worth
than interventions that offer few benefits and high costs. Specific standards for conducting cost-effectiveness
studies have emerged to ensure quality and adherence to fundamental elements of the methodology (Gold et
al., 1996). Spurred by the trends toward cost containment and evidence-based medicine and public health
practice, the number of cost-effectiveness evaluations in the published literature has skyrocketed since the
1980s. The Tufts-New England Medical Center (2013) Cost-Effectiveness Analysis Registry contains over
10,000 cost-effectiveness ratios for a variety of diseases and treatments. Because new technologies are always
being created, cost-effectiveness studies will be a major area of evaluation for many years to come.
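To make the comparison concrete, cost-effectiveness analyses report a ratio of incremental costs to incremental effects. A minimal sketch of the standard incremental cost-effectiveness ratio (ICER), where a program (subscript 1) is compared with an alternative such as usual care (subscript 0), and C and E denote cost and effectiveness in a common unit:

\[ \text{ICER} = \frac{C_1 - C_0}{E_1 - E_0} \]

With purely hypothetical figures, a screening program that costs $500,000 more than usual care and yields 25 additional quality-adjusted life years (QALYs) would have an ICER of $20,000 per QALY gained. Chapter 5 treats the full method, including discounting and sensitivity analysis.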
This textbook is designed to provide a practical foundation for conducting evaluations in these arenas. The
concepts and methods are similar to those found in the evaluation of social programs but have been
customized for public health and medical care. Evaluation itself is a process conducted in a political context
and composed of interconnected steps; another goal of this book is to help evaluators navigate these steps in
health settings. A customized, reality-based treatment of health program evaluation should improve learning
and ultimately may produce evaluations that are both practical and useful.
While a key goal of evaluation is discovering whether a health program works, many decision makers
ultimately want to know about the generalizability of an evaluation’s findings—that is, do the findings apply
to different social groups and settings, variations in the intervention itself, and also for different ways of
measuring outcomes (Shadish et al., 2002)? The evaluation methods for examining whether a single health
program works cannot answer questions about the generalizability of a program. However, if a sufficient
number of evaluations of a program are conducted that address a common evaluation question, and that
contain different kinds of social groups and settings with variations in program and outcome, a meta-analysis
of their findings may be performed. Meta-analysis is a quantitative technique for synthesizing, or combining,
the results from different evaluations on the same topic, which may yield information about whether the
findings are robust over variations in persons, settings, programs, and outcomes (Shadish et al., 2002).
Although this textbook addresses in depth the generalizability of findings from a single evaluation, the methods of meta-analysis are not covered here but can be found in other references
(M. Borenstein et al., 2009; Cooper et al., 2009).
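Although the methods themselves are left to those references, the core logic of meta-analysis can be shown in a few lines. The sketch below pools five hypothetical effect estimates with fixed-effect inverse-variance weights; all numbers are invented for illustration, and real meta-analyses add checks (for example, heterogeneity tests) omitted here.

```python
import math

# Effect estimates (e.g., risk differences) and standard errors from
# five hypothetical evaluations of the same program. Invented numbers.
effects = [0.10, 0.15, 0.08, 0.12, 0.05]
ses = [0.04, 0.06, 0.05, 0.03, 0.07]

# Fixed-effect inverse-variance weighting: more precise studies count more.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))
print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```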
Summary
At its core, evaluation entails making informed judgments about a program’s worth, ultimately to promote
social change for the betterment of society. Unprecedented growth in health programs and the health care
system since the 1960s is largely responsible for the development of health program evaluation. Other forces
contributing to the growth of evaluation include the increasing importance of accountability and scarce
resources, a greater emphasis on prevention, more attention being given to evidence-based practice and
implementation science, escalating health care costs and government intervention in health systems,
organizational consolidation and information technology, and a reliance on demonstration projects. Because
many of these trends will continue in the remainder of this century, so will interest in the evaluation of health
programs and health systems. To perform either of these two types of program evaluations, an evaluator
completes a process composed of well-defined steps. Chapter 2 reviews the elements of the evaluation process.
List of Terms
Accountability
Demonstration projects
Dissemination
Economic evaluation
Evaluation
Evidence-based practice
Generalizability
Government intervention
Health program
Health systems
Implementation science
Information technology
Meta-analysis
Organizational consolidation
Patient Protection and Affordable Care Act (ACA)
Prevention
Public health services and systems research
Scarce resources
Translation research
Study Questions
1. What is the purpose of health program evaluation?
2. What are the two fundamental questions of program evaluation?
3. How is the worth of a health program determined?
4. What are three factors that have contributed to the growth of health program evaluation since the 1960s? Why are they
important?
5. What are the major types of health program evaluation, and what are their relationships (if any) to each other?
6. What kinds of evaluations of the health reforms in the ACA might be conducted?
2 The Evaluation Process as a Three-Act Play
Evaluation as a Three-Act Play
Act I: Asking the Questions
Act II: Answering the Questions
Act III: Using the Answers in Decision Making
Role of the Evaluator
Evaluation in a Cultural Context
Ethical Issues
Evaluation Standards
Summary
List of Terms
Study Questions
Performing a health program evaluation involves more than just the application of research methods. The
evaluation process is composed of specific steps designed to produce information about a program’s
performance that is relevant and useful for decision makers, managers, program advocates, health
professionals, and other groups. Understanding the steps and their interconnections is just as fundamental to
evaluation as knowledge of the quantitative and qualitative research methods for assessing program
performance.
There are two basic perspectives on the evaluation process. In the first perspective—the rational-decision-
making model—evaluation is a technical activity, in which research methods from the social sciences are
applied in an objective manner to produce information about program performance for use by decision makers
(Faludi, 1973; Veney & Kaluzny, 1998). Table 2.1 lists the elements of the rational-decision-making model,
which are derived from systems analysis and general systems theory (Quade & Boucher, 1968; von
Bertalanffy, 1967). The model is a linear sequence of steps to help decision makers solve problems by learning
about the causes of the problems, analyzing and comparing alternative solutions in light of their potential
consequences, making a rational decision based on that information, and evaluating the actual consequences.
Today, the systematic comparison of alternative approaches in Step 3 is often referred to as health policy
analysis, which compares the potential future advantages and disadvantages of proposed alternative policy
options for reducing or solving a health care issue or population health problem (Aday et al., 2004; Begley et
al., 2013).
In practice, however, the evaluation of health programs rarely conforms to the rational-decision-making
model. Because politics is how we attach values to facts in our society, politics and values are inseparable from
the evaluation of health programs (Palumbo, 1987; Weiss, 1972). For instance, the public health value of
“health for everyone” conflicts with the differences in infant mortality rates across racial/ethnic groups, and
politics is the use of values to define whether these differences are a problem and what, if anything, should be
done about it. Consequently, in the second perspective, evaluations are conducted in a political context in
which a variety of interest groups compete for decisions in their favor. Completing an evaluation successfully
depends greatly on the evaluator’s ability to navigate this political terrain.
This chapter introduces the political nature of the evaluation process, using the metaphor of the evaluation
process as a three-act play. The remaining chapters of the book are organized around each act of the play. The
last two sections of this chapter address the importance of ethics and cultural context in conducting
evaluations and the role of the evaluator in the evaluation process.
Evaluation as a Three-Act Play
Drawing from Chelimsky’s earlier work (1987), I use the metaphor of a “three-act play” to describe the
political nature of the evaluation process. The play has a variety of actors and interest groups, each having a
role, and each entering and exiting the political “stage” at different points in the evaluation process. Evaluators
are one of several actors in the play, and it is critical for them to understand their role if they are to be
successful. The evaluation process itself generates the plot of the play, which varies from program to program
and often has moments of conflict, tension, suspense, quiet reflection, and even laughter as the evaluation
unfolds.
Table 2.2 presents the three acts of the play, which correspond to the basic steps of the evaluation process
(Andersen, 1988; Bensing et al., 2004). The play begins in the political realm with Act I, in which evaluators
work with decision makers to define the questions that the evaluation will answer about a program. This is the
most important act of the play, for if the questions do not address what decision makers truly want to know
about the program, the evaluation and its findings are more likely to have little value and use in decision
making. In Act II, the research methods are applied to answer the questions raised in Act I. Finally, in Act
III, the answers to the evaluation questions are disseminated in a political context, providing insights that may
influence decision making and policy about the program.
Act I: Asking the Questions
In Act I, the evaluation process begins when decision makers, a funding organization, or another group
authorize the evaluation of a program. In general, decision makers, funders, program managers, and other
groups may want to evaluate a program for overt or covert reasons (Rossi et al., 2004; Weiss, 1972, 1998a).
Overt reasons are explanations that conform to the rational-decision-making model and are generally accepted
by the public (Weiss, 1972, 1998a). In this context, evaluations are conducted to make decisions about
whether to
Continue or discontinue a program
Improve program implementation
Test the merits of a new program idea
Compare the performance of different versions of a program
Add or drop specific program strategies or procedures
Implement similar programs elsewhere
Allocate resources among competing programs
Because Act I occurs in the political arena, covert reasons for conducting evaluations also exist (Weiss, 1972,
1998a). Decision makers may launch an evaluation to
Delay a decision about the program
Escape the political pressures from opposing interest groups, each wanting a decision about the program
favoring its own position
Provide legitimacy to a decision that already has been made
Promote political support for a program by evaluating only the good parts of the program and avoiding
or covering up evidence of program failure
Whether a program is evaluated for overt or covert reasons may depend on the values and interests of the
different actors and groups in the play (Rossi et al., 2004; Shortell & Richardson, 1978).
A stakeholder analysis is an approach for identifying and prioritizing the interest groups in the evaluation
process and defining each group’s values and interests about the health program, policy, or health system
reform and the evaluation (Brugha & Varvasovszky, 2000; Page, 2002; Rossi et al., 2004; Sears & Hogg-
Johnson, 2009; Varvasovszky & Brugha, 2000; Weiss, 1998a). The term stakeholder was created by companies
to describe non-stockholder interest groups that might influence a company’s performance or survival (Brugha
& Varvasovszky, 2000; Patton, 2008). In evaluation, a stakeholder is an individual or a group with a stake—or
vested interest—in the health program and the evaluation findings (Patton, 2008). Based on definitions in the
literature, a stakeholder is an individual, a group, or an organization that can affect or is affected by the
achievement of the health program’s objectives, or the evaluation process or its findings (Bryson et al., 2011;
Page, 2002). Stakeholders tend to have two broad types of stakes (Page, 2002). The first type is an
investment of something of value in the health program, such as financial or human resources. For example,
funders of the health program and evaluation have stakes in both. The second type is a stake in the activity of
the health program; in other words, a stakeholder might be placed at risk or experience harm if the activity
were withheld. For instance, providers who receive revenue from delivering a medical treatment to patients
have stakes in evaluations of the effectiveness of the treatment. If the evaluations show that the treatment has
few health benefits, the treatment may be delivered less often and ultimately lead to a loss in revenue.
A stakeholder analysis provides essential information for planning the evaluation in a political context,
including a better understanding of the program’s and the evaluation’s political context, the identification of
common goals and contentious issues among the stakeholders, and the creation of an evaluation plan that
addresses stakeholder interests as much as possible (Sears & Hogg-Johnson, 2009). Rossi et al. (2004) suggest
the following best practices for stakeholder analysis:
Identify stakeholders at the outset and prioritize those with high vested interests in the health program and
evaluation.
Involve stakeholders early because their perspectives may influence how the evaluation is carried out.
Involve stakeholders continuously and actively through regular meetings, providing assistance with
identifying the evaluation questions and addressing study design issues, and requesting comments on
draft reports.
Establish a structure by developing a conceptual framework for the evaluation to build a common
understanding of the health program and evaluation, promote focused discussion of evaluation issues,
and keep everyone in the evaluation process “on the same page.” (Chapter 3 addresses conceptual
frameworks in detail.)
Most, if not all, evaluations of health programs have multiple stakeholders with different interests, which
makes identifying and prioritizing them essential. Page (2002) suggests prioritizing stakeholders based on (a) their power
to influence the health program or evaluation; (b) whether a stakeholder’s actions and perspectives are
perceived to be legitimate and, therefore, should be taken into account in the evaluation; and (c) urgency—
that is, whether a stakeholder’s interests call for immediate attention in the evaluation. Stakeholders with all
three attributes tend to have the highest priority in the stakeholder analysis.
Figure 2.1 presents a power-interest grid, which is a tool for identifying the stakeholders and rating roughly
their relative power and interest in the evaluation (Bryson et al., 2011). The grid is a two-by-two matrix,
where power, ranging from low to high, is shown in the columns, and interest, also ranging from low to high,
is shown in the rows. Power is defined as the ability of stakeholders to pursue their interests (or who has the
most or least control over the program or direction of the evaluation), whereas interest refers to having a
political stake in the program and evaluation (or who has the most to gain or lose from the evaluation). The
goal is to sort each stakeholder into one of the four mutually exclusive cells in the matrix: players, subjects,
context setters, and crowd. Players are key stakeholders and potential users of evaluation results, assuming that
the questions posed in Act I address at least some or all of those interests. Subjects may become more engaged
in the evaluation by adopting a participatory or empowerment approach to advance their interests, as
explained later in this chapter. Context setters’ interests may change, depending on the results of the
evaluation, and obtaining their buy-in may become essential as the evaluation process unfolds. The spread of
stakeholders across the four cells may reveal commonalities among them, which may be used to build
stakeholder buy-in and collaboration in the evaluation process.
Figure 2.1 Stakeholder Power Versus Interest Grid
Source: “Working With Evaluation Stakeholders: A Rationale, Step-Wise Approach and Toolkit,” by J.
M. Bryson, M. Q. Patton, & R. A. Bowman, 2011, Evaluation and Program Planning, 34(1), p. 5.
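The sorting logic of the grid is simple enough to express directly. The sketch below is a minimal illustration (not a tool from Bryson et al.): stakeholders are rated 0 (low) to 1 (high) on power and interest, and a 0.5 cutoff assigns each to a cell. The stakeholder names and ratings are hypothetical.

```python
def grid_cell(power: float, interest: float, cutoff: float = 0.5) -> str:
    """Assign a stakeholder to one of the four power-interest cells."""
    if power >= cutoff and interest >= cutoff:
        return "player"          # high power, high interest
    if power >= cutoff:
        return "context setter"  # high power, low interest
    if interest >= cutoff:
        return "subject"         # low power, high interest
    return "crowd"               # low power, low interest

# Hypothetical ratings: (power, interest)
stakeholders = {
    "funding agency": (0.9, 0.8),
    "program staff": (0.3, 0.9),
    "state legislature": (0.8, 0.2),
    "general public": (0.2, 0.3),
}
for name, (p, i) in stakeholders.items():
    print(f"{name}: {grid_cell(p, i)}")
```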
Evaluations of health programs tend to have similar stakeholders (Rossi et al., 2004; Shortell & Richardson,
1978). Policymakers and decision makers often authorize evaluations to supply clear-cut answers to the policy
problems they are facing, such as whether to continue, discontinue, expand, or curtail the program (Bensing et
al., 2003). The funding agency may want to evaluate a program to determine its cost-effectiveness and discover
whether the program has any unintended, harmful effects. Rossi et al. (2004) suggest that the policymakers
and evaluation funders are the top stakeholders in the evaluation process. The organization that runs the
program may be interested in an evaluation to demonstrate to interest groups that the program works, to
justify past or future expenditures, to gain support for expanding the program, or simply to satisfy reporting
requirements imposed by the funding agency.
Program administrators may support an evaluation because it can bring favorable attention to a program that
they believe is successful, which may help them earn a promotion later on. Administrators also may use an
evaluation as a mechanism for increasing their control over the program, or to gather evidence to justify
expanding the program, or to defend the program against attacks from interest groups that want to reduce or
abolish it.
Alternatively, contextual stakeholders, that is, organizations or groups in the immediate environment of the
program that either support or oppose it, may advocate for an evaluation, with the hope of using
“favorable” results to promote their point of view in Act III of the evaluation process. The public and its
various interest groups may endorse evaluations for accountability or to ensure that tax dollars are being spent
on programs that work. The public also may support evaluations because their findings can be a source of
information—in the mass media, on the Internet, in journals, and elsewhere—about the merits of a health
program or health system reform, such as the Patient Protection and Affordable Care Act (ACA).
Program evaluators may want to conduct an evaluation for personal reasons, such as to earn an income or to
advance their careers. Alternatively, evaluators may sympathize with a program’s objectives and see the
evaluation as a means toward promoting those objectives. Other evaluators are motivated to evaluate because
they want to contribute to the discipline’s knowledge by publishing their findings or presenting them at
conferences. In addition, the larger evaluation and research community, composed mainly of evaluation
professionals, may have interests in the methods and findings of the evaluation.
After the stakeholders are identified and their relative power and interests are defined, a grid is constructed, as
shown in Figure 2.2, displaying each stakeholder’s initial support versus opposition to the program and the
proposed evaluation. The power-position grid provides information for planning the evaluation, such as
developing a strategy for engaging stakeholders in the evaluation (Preskill & Jones, 2009) and taking steps to
address explicitly the concerns of supporters and opponents in the evaluation process. Once the evaluation’s
findings and recommendations are known, the grid offers information for planning the communication
strategy to disseminate evaluation results to stakeholders and the public.
Table 2.3 presents a brief case study of a stakeholder analysis that Sears and Hogg-Johnson (2009) performed
for an evaluation of a pilot program in Washington state’s workers’ compensation system, which provides
health insurance coverage for workers who are injured on the job. The pilot program was authorized by the
Washington state legislature in a contentious political context. Key findings of the stakeholder analysis were
the identification of key stakeholders, their values, whether they supported or opposed the pilot program at
the outset, and what evaluation questions the stakeholders wanted the evaluation to address.
Act I, “Asking the Questions,” has two parts, or scenes. In Scene 1, evaluators work with decision makers and
other groups to develop one or more policy questions about the program, based on findings from the
stakeholder analysis (see Chapter 3). A policy question is a general statement indicating what decision makers
want to know about the program. Decision makers can include the funding agency, the director of the
organization that runs the program, the program’s manager and staff, outside interest groups, and the
program’s clients. Together, they constitute the play’s “audience,” and the objective of the evaluation is to
produce results that will be used by at least some members of a program’s audience.
Figure 2.2 Stakeholder Power Versus Support or Opposition Grid
Source: “Working With Evaluation Stakeholders: A Rationale, Step-Wise Approach and Toolkit,” by J.
M. Bryson, M. Q. Patton, & R. A. Bowman, 2011, Evaluation and Program Planning, 34(1), p. 9.
Source: Adapted from “Enhancing the Policy Impact of Evaluation Research: A Case Study of Nurse Practitioner Role Expansion in a
State Workers’ Compensation System,” by J. M. Sears & S. Hogg-Johnson, 2009, Nursing Outlook, 57(2), pp. 99–106.
Although decision makers may authorize an evaluation of a health program for a variety of reasons, many
evaluations are performed to answer two fundamental questions: “Did the program succeed in achieving its
objectives?” and “Why is this the case?” For some programs, however, decision makers may want to know
more about the program’s implementation than about its success in achieving its objectives. For example,
questions about achieving objectives may be premature for new programs that are just finding their legs, or
when decision makers want to avoid information about the program’s successes and failures, which may
generate controversy downstream in Act III of the evaluation process. In these and other cases, the basic
policy question becomes, “Was the program implemented as intended?” In general, as the number and
diversity of decision makers from different interest groups increase, the number of policy questions about the
program may increase greatly, which may decrease the likelihood of finding common ground and reaching
consensus on the evaluation’s purpose and key questions.
When a program addresses a controversial, political issue, heated debates may arise among decision makers
and interest groups about what questions should and should not be asked about the program. Evaluators can
play an important role when they facilitate communication among the decision makers and interest groups to
help them form a consensus about what policy questions to ask about the program. In addition to moderating
the discussions, evaluators also can be active participants and pose their own policy questions for decision
makers to consider. If the program is already up and running, evaluators can support the discussions by
providing descriptive information about program activities that may help decision makers formulate questions
or choose among alternative questions.
If the play is to continue, Scene 1 ends with one or more well-defined policy questions endorsed by decision
makers and, in some contexts, by at least some interest groups. The play may end in Scene 1, however, if no
questions about the program are proposed or if decision makers cannot agree on a policy question or what the
program is trying to accomplish (Rossi et al., 2004; Weiss, 1998a). For covert reasons, decision makers may
place stringent limits on what questions can and cannot be asked when they want to avoid important issues, or
possibly to cover up suspected areas of program failure. Under these conditions, evaluation findings may have
little influence on people’s views of the program, and consequently, there is little value in conducting the
evaluation (Weiss, 1998a).
Once one or more policy questions are developed, Scene 2 begins and the policy questions are translated into
feasible evaluation questions. In Scene 2, the evaluator is responsible for translating a general policy question,
such as “Does the program work?”, into a more specific evaluation question, such as “Did the smoking
prevention program reduce cigarette smoking behavior among adolescents between the ages of 13 and 15?” To
ensure that the evaluation will produce information that decision makers want in Act III of the play, decision
makers should review the evaluation questions and formally approve them before advancing to the next act of
the play.
In Scene 2, the evaluator also is responsible for conducting a feasibility assessment (Bowen et al., 2009;
Centers for Disease Control and Prevention, 1999; Melnyk & Morrison-Beedy, 2012; Rossi et al., 2004;
Weiss, 1998a). Before going ahead with the evaluation, the evaluator should confirm that adequate resources,
including time and qualified staff (or consultants), exist to conduct the evaluation for the budgeted amount of
money. The evaluator should verify that data required for answering the questions are available or can be
collected with minimal disruption and that a sufficient number of observations will exist for subsequent
quantitative or qualitative data analyses (see Chapter 7). For quantitative evaluations, a key issue is whether
the projected number of cases will have adequate statistical power. If the evaluation will engage staff in
multiple sites, the evaluator should request a letter from each site indicating its agreement to participate in the
evaluation. The feasibility assessment also should confirm whether the program has matured and has
established, stable routines. Stable programs are preferred because the reasons for program success or failure
can be identified more readily than in unstable programs. With stable programs, evaluation findings based on
data collected a year ago have a better chance of still being relevant today. In contrast, when a program is
changing continually, the findings obtained at the end of the evaluation process may apply to a program that
no longer exists. If no insurmountable obstacles are encountered and the evaluation appears to be feasible, the
evaluator and the other actors in the play have a “green light” to proceed to the next act of the evaluation
process.
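To give a rough sense of what adequate statistical power implies for the projected number of cases, the sketch below applies the standard two-proportion sample-size formula to a hypothetical version of the smoking-prevention example above; the prevalence, alpha, and power values are assumptions for illustration only.

```python
from math import ceil
from statistics import NormalDist

# Hypothetical planning values: adolescent smoking prevalence of 20% under
# usual conditions versus 15% under the program; two-sided alpha, 80% power.
p1, p2 = 0.20, 0.15
alpha, power = 0.05, 0.80

z = NormalDist().inv_cdf
z_alpha, z_beta = z(1 - alpha / 2), z(power)

# Standard sample-size formula for comparing two independent proportions.
n_per_group = ((z_alpha + z_beta) ** 2
               * (p1 * (1 - p1) + p2 * (1 - p2))
               / (p1 - p2) ** 2)
print(f"Approximate cases needed per group: {ceil(n_per_group)}")  # ~903
```

If the sites can realistically enroll far fewer cases than the formula suggests, the evaluation question or design needs rethinking before Act II begins.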
Act II: Answering the Questions
In Act II, the evaluation is conducted. Evaluators apply research methods to produce qualitative and
quantitative information that answers the evaluation questions raised in Act I.
Evaluations may be prospective or retrospective. A prospective evaluation is designed before the
program is implemented, as shown in Table 2.4. Prospective evaluations are ideal because a greater number of
approaches can be considered for evaluating a program. With greater choice, evaluators have more flexibility
to choose an evaluation approach with the greatest strengths and fewest weaknesses. In addition, evaluators
have more freedom to specify the information they want to collect about the program and, once the program
is implemented, to ensure that the information is actually gathered.
In contrast, retrospective evaluations are designed and conducted after a program has ended, and a smaller
number of alternative approaches usually exist for evaluating such programs. Historical information about the
program may exist in records and computer files, but the information may not be useful for answering key
questions about the program. In retrospective evaluations, choice of design and availability of information are
almost always compromised, which may limit what can be learned about the program.
Causation is an intrinsic feature of prospective and retrospective evaluations. All health programs rest on an
inherent assumption that the program will cause change to achieve its objectives. Program theory
refers to the chain of causation, or the pathways or mechanisms, through which a health program is expected
to cause change that leads to desired beneficial effects and avoids unintended consequences (Alkin & Christie,
2005; Donaldson, 2007; Weiss, 1995). Program theories are always probabilistic rather than deterministic;
few, if any, interventions invariably produce the intended effects (Cook & Campbell, 1979; Shadish et al.,
2002).
A health program’s causal assumptions may or may not be based on formal theory. Drawing from Merton
(1968), Chen (1990), Donaldson (2007), and Krieger (2011), a program’s causal assumptions may be
grounded in discipline theory, or what Merton refers to as “grand” theories that are intended to explain a
wide range of human behaviors by positing “what causes what.” For example, a health insurance plan may
Exploring the Variety of Random
Documents with Different Content
LP20118.
San Fernando and the Kaaka Maami
Island. © 8Dec61; LP23032.
San Fernandocal. © 2Jan61 (in notice:
1960); LP19593.
Sing along with Cauliflower.
© 19Oct61; LP23026.
This goon for hire. © 24May62;
LP23666.
Will success spoil Clem Kadiddlehopper?
© 23Sep61; LP23022.
RED SKY OVER BISMARCK. See
THE DAKOTAS.
THE RED SWAMP FOX. See
THE RED SWAMP POX.
THE RED SWAMP POX. Terrytoons. Released
by Twentieth Century-Fox
Film Corp. 1 reel, sd., color,
35 mm. (A Terrytoon cartoon).
Appl. ti.: The red swamp fox.
© Terrytoons, division of CBS Films,
Inc.; 1Jan66; LP36741.
RED TAPE. See
THE DANNY THOMAS SHOW. No. 17-E (170).
RED TAPE ROMANCE. See
MY THREE SONS.
RED THORN IN CUBA. See
EYEWITNESS.
RED TOMAHAWK. A. C. Lyles Productions.
Released by Paramount Pictures.
82 min., sd., color, 35 mm. © Paramount
Pictures Corp. & A. C. Lyles
Productions, Inc.; 22Dec66; LP33643.
THE RED TRACTOR. Terrytoons.
Released by 20th Century-Fox Film
Corp. 7 min., sd., color, 35 mm.
Color by DeLuxe. (Duckwood)
© Terrytoons, a division of CBS Films,
Inc.; 3Jun64; LP28882.
RED WATER NORTH. See
CHEYENNE.
THE RED, WHITE, AND BLUE. See
PROJECT 20.
REDCOAT STRATEGY. See
WALT DISNEY PRESENTS. Show no. 16.
THE REDCOATS ARE COMING. See
THE DICK VAN DYKE SHOW.
REDECORATING. See
I LOVE LUCY.
REDECORATING THE MERTZ APARTMENT. See
I LOVE LUCY.
THE REDHEAD MEETS THE MOUSTACHE. See
THE LUCILLE BALL-DESI ARNAZ SHOW. 5102-203.
THE REDHEADS. See
OFFICIAL DETECTIVE.
REDIGO. Wilrich Productions. Released
by Screen Gems. Approx. 30 min.
each, sd., b&w, 16 mm. © Wilrich
Productions, Inc.
No.
1. Lady Warbonnet. © 24Sep63; LP35557.
2. The blooded bull. © 1Oct63;
LP35558.
3. Boy from Rio Bravo. © 8Oct63;
LP35559.
4. Prince among men, © 15Oct63;
LP35560.
5. The crooked circle. © 22Oct63;
LP35561.
6. Little Angel Blue Eyes.
© 29Oct63; LP35562.
7. Man in a blackout. © 5Nov63;
LP35563.
8. Papa-San. © 12Nov63; LP35564.
9. Horns of hate. © 19Nov63; LP35565.
10. Shadow of the cougar. © 26Nov63;
LP35566.
11. The thin line. © 3Dec63; LP35567.
12. Hostage hero hiding. © 10Dec63;
LP35568.
13. Privilege of a man. © 17Dec63;
LP35569.
14. The black rainbow. © 24Dec63;
LP35570.
15. The hunters. © 31Dec63; LP35571.
THE REDISCOVERY OF CHARLOTTE HYDE. See
RUN FOR YOUR LIFE.
REDS HURL RECORD BARRAGE AT QUEMOY, AUG. 23, 1958. See
ALMANAC NEWSREEL. Aug. 23, 1960.
RED'S RELATIVES. See
THE RED SKELTON HOUR.
REDS WIN IN CHINA. See
GREATEST HEADLINES OF THE CENTURY.
REDS WIN WORLD SERIES. See
SPORTFOLIO.
REDUCING THE VARIANCE. See
DESIGN OF EXPERIMENTS. Course program 12.
THE REDWOODS. Sierra Club. 20 min.,
sd., color, 16 mm. A King Screen
production. © Sierra Club; 19Jun68;
MP19665.
REDWOODS—SAVED? Sierra Club. Made by
Larry Dawson Productions. 4 min.,
sd., color, 16 mm. © Sierra Club;
5May69; MP19666.
REEL PINK. Mirisch-Geoffrey D-F.
Released by United Artists Corp.
7 min., sd., color, 35 mm. © Mirisch-Geoffrey
D-F; 16Nov65; LP35584.
REFACING A VALVE. McGraw-Hill Book Co.
3 min., si., color, 8 mm. (Automotive
mechanics series: Engine rebuilding,
set 2) Presented by McGraw-Hill
Text-Films. © McGraw-Hill, Inc.;
29Dec67; MP19114.
REFERENDUM ON MURDER. See
CHECKMATE.
REFERRED FOR UNDERACHIEVEMENT. President
& Fellows of Harvard College.
35 min., sd., b&w, 16 mm. Appl.
author: Edward A. Mason. © President
& Fellows of Harvard College;
18May66; MP16122.
REFINING PRECIOUS METALS FROM THE SUDBURY
NICKEL ORES. International
Nickel Co. 33 min., Eastman color,
16 mm. © International Nickel Co.,
Inc.; 28Oct60; MU6935.
REFLECTION. See
LIGHT: REFLECTION.
REFLECTION LAND. See
OUT OF THE INKWELL.
REFLECTIONS IN A GOLDEN EYE. Warner
Bros.-Seven Arts International.
Released by Warner Bros. Pictures.
109 min., sd., color, 35 mm. Panavision.
Based on the novel by
Carson McCullers. © Warner Bros.-Seven
Arts International, Ltd.; 28Oct67; LP35799.
REFLECTIONS OF A SOVIET SCIENTIST. See
CBS REPORTS.
THE REFORMATION. See
OF MEN AND FREEDOM.
REFORMATION AT BIG NOSE BUTTE. See
THE DAKOTAS.
REFORMATION OF DOC HOLLIDAY. See
THE LIFE AND LEGEND OF WYATT EARP. D-16 (127)
REFORMATION OF WILLIE. See
GOING MY WAY.
THE REFORMATION OF WILLIE HENRATTY. See
GOING MY WAY. Reformation of Willie.
REFRACTION. See
LIGHT: REFRACTION.
REFRESHMENT THROUGH THE YEARS. Coca-Cola
Co. Made by Jam Handy Organization.
22 min., sd., color, 35 mm.
Eastman color. © Coca-Cola Co.;
15May63; LU3280.
THE REFRIGER-RAIDER. See
OUT OF THE INKWELL.
REFRIGERATED AMMONIA SPILL TESTS.
Phillips Petroleum Co. 22 min., sd.,
color, 16 mm. © Phillips Petroleum
Co.; 18Oct65; MP15910.
REFRIGERATION AND AIR CONDITIONING.
Coronet Instructional Films. 10 min.,
sd., b&w, 16 mm. © Coronet Instructional
Films, a division of Esquire,
Inc.; 1May64; MP14532.
REFUGE IN RIO. See
EYEWITNESS.
THE REFUGEE. See
M-SQUAD.
THE REGAL ROAD TO RICHES. Video Films.
9 min., sd., color, 16 mm. © Video
Films, Inc.; 27Mar63; MU7274.
REGENERATION IN FLATWORMS. Thorne
Films. 2 min., sd., color, 16 mm.
(Biology demonstration series) Produced
in cooperation with Dept. of
Biology, University of Colorado.
© Thorne Films, Inc.; 25Mar65 (in
notice: 1963); MP15460.
THE REGIONAL CAMPUSES OF INDIANA UNIVERSITY.
Indiana University. Audio
Visual Center. 15 min., sd., color,
16 mm. Eastman color. © Indiana
University; 3Dec63; MP14076.
REGIONAL DANCES OF SPAIN. See
DANZAS REGIONALES ESPANOLAS.
REGISTERED MAIL. See
JOHNNY MIDNIGHT.
THE REGULAR. See
THE AMERICANS.
REGULAR CIGARETTE. See
OLD GOLD SPIN FILTERS COMMERCIAL. No. OGF-7-60.
REGULAR TRANSATLANTIC AIR SERVICE BEGINS, JUNE 28,
1939. See
ALMANAC NEWSREEL. June 28, 1960.
REGULATION AND CONTROL. See
BIOLOGY SERIES I. Film no. 10.
REGULATION OF GROWTH. See
BIOLOGY SERIES III. Film no. 9.
REGULATION OF PLANT DEVELOPMENT: COLEOPTILE
RESPONSE IN ZEA. Iowa State
University of Science & Technology.
5 min., sd., color, 16 mm. (4-35)
© Iowa State University of Science &
Technology (in notice: Iowa State
University); 21Dec64; MP14954.
REHABILITATION. MB Productions.
30 min., sd., color, 16 mm. (Athletic
injuries, the knee) © MB Productions,
Inc.; 28Jul67; MP17160.
REHABILITATION CENTER. B.P.O.E.
Idaho State Elks Assn. Made by Film
Originals. 14 min., sd., color,
16 mm. © George Oliver Smith & Helen
Stanfield Smith d.b.a. Film Originals;
7Mar60; MP9972.
REHABILITATION OF THE BELOW KNEE AMPUTEE.
Stanley L. Adelsberg. 8 min., color,
16 mm. © Stanley L. Adelsberg;
10May66; MU7701.
REHABILITATION: THE SCIENCE AND THE ART. See
WHAT FINER PURPOSE?
REINFORCEMENT. General Learning Corp.
Made by the David Coffing Co. 8 min.,
sd., color, 16 mm. (Teaching skills
for elementary school teachers)
© General Learning Corp.; 31Dec68;
MP19767.
REINFORCEMENT. General Learning Corp.
Made by the David Coffing Co. 8 min.,
sd., color, 16 mm. (Teaching skills
for secondary school teachers) © General
Learning Corp.; 31Dec68; MP19752.
REINFORCEMENT THERAPY. Smith, Kline &
French Laboratories. Made by Robert
Aller Productions. 45 min., sd.,
b&w, 16 mm. © Smith, Kline & French
Laboratories; 12May66; MP17878.
REJECTION. See
IT'S LIGHT TIME.
REJOICE. Andrew Burke. 13 min.,
b&w, 16 mm. © Andrew Burke;
27Dec67; LU3555.
REJOICE IN LABOR. See
IT'S LIGHT TIME.
RELATING SETS TO NUMBERS. General
Electric Co. 11 min., sd., color,
16 mm. (Pathways to modern mathematics
series) Eastman color.
© General Electric Co.; 1Oct64;
MP14673.
RELATIVE VALUE. See
ALFRED HITCHCOCK PRESENTS.
RELAXATION. West End Brewing Co. of
Utica, N.Y. Made by Doyle Dane
Bernbach. 60 sec., sd., b&w.
© West End Brewing Co. of Utica,
N.Y.; 21Nov62; MU7228.
RELAY RACE. See
KID'S STUFF. Episode no. 30.
RELAY STARTS. McGraw-Hill Book Co.
Made by California Academy of Sciences.
2 min., sd., color, 8 mm. (Swimming
series) © McGraw-Hill, Inc.; 21Nov66
(in notice: 1946); MP16620.
RELAY STATION. See
TALES OF WELLS FARGO.
THE RELEASE. See
DICK POWELL'S ZANE GREY THEATRE.
RELIC OF FORT TEJON. See
MAVERICK.
RELIEF OF AIRWAY OBSTRUCTION. University
Extension, University of Wisconsin.
7 min., sd., color, 16 mm.
Appl. author: University Extension,
University of Wisconsin, employer for
hire of B. J. Bamforth. © Regents of
the University of Wisconsin; 2Dec68;
MP19148.
RELIGIOUS READINESS FILMS FOR RETARDED
CHILDREN. Sister M. Coletta Dunn.
12 reels, si., b&w, 16 mm.
© Sister M. Coletta Dunn; 27May69;
MU8056.
THE RELUCTANT ASTRONAUT. Universal
Pictures. 102 min., sd., color,
35 mm. © Universal Pictures;
4Mar67; LP35395.
THE RELUCTANT BRIDE. See
PONY EXPRESS. 7129.
THE RELUCTANT BRIDEGROOM. See
COMEDY CAPERS.
THE TALL MAN.
THE TEXAN. 49
THE RELUCTANT GUN. See
DEATH VALLEY DAYS. 7216.
THE RELUCTANT HANDICAPPER. See
OUR MAN HIGGINS.
RELUCTANT HERO. See
SUGARFOOT.
THE RELUCTANT LOVER. See
THE DOUBLE LIFE OF HENRY PHYFE.
THE RELUCTANT REBEL. See
BONANZA.
THE RELUCTANT SPY. See
77 SUNSET STRIP.
THE REMARKABLE MRS. HAWK. See
THRILLER.
THE REMARKABLE SCHOOLHOUSE. See
THE 21ST CENTURY.
REMEMBER EDDIE SIMPSON? Robert J.
Fudge. 13 min., b&w. © Robert J.
Fudge; 4Oct68; MU7959.
REMEMBER LAKE SERENE? See
PLEASE DON'T EAT THE DAISIES.
REMEMBER ME NOT. See
MAN FROM BLACKHAWK.
REMEMBER PEARL HARBOR. See
HENNESEY.
REMEMBER PEARL HARBOR; AMERICA AT WAR, 1941-1945. See
THE SCREEN NEWS DIGEST. V. 9, no. 5.
REMEMBER ST. PETERSBURG. See
CAR 54, WHERE ARE YOU?
REMEMBER THE ALAMO. See
CORONADO 9.
REMEMBER THE ALIMONY. See
THE DICK VAN DYKE SHOW.
REMEMBER THE MAINE. See
THE ALASKANS.
REMEMBER THE YAZOO. See
TALES OF WELLS FARGO.
REMEMBRANCE OF CRIMES PAST See
CHECKMATE.
REMINDER TO DAIRYMEN. Babson Bros. Co.
9 min., sd., color, 16 mm. Appl.
author: Sumner J. Lyon. © Babson
Bros. Co.; 1Jan61; MP13172.
[REMINGTON ELECTRIC SHAVER TELEVISION
COMMERCIALS] Sperry Rand Corp.
Remington Electric Shaver Division.
Approx. 30 sec. each, sd., color,
16 mm. © Sperry Rand Corp., Remington
Electric Shaver Division.
Father. RM-2-20-68R. © 14Nov68;
MP19547.
Sneaky shaves. RM-8-30-69.
© 26May69; MP19545.
Spend less time ugly. RM-5-30-68.
© 29Nov68; MP19718.
REMINGTON SHAVER; DERMATOLOGIST REPORT.
Sperry Rand Corp. 120 sec., sd.,
b&w, 16 mm. Appl. author: Young &
Rubicam, Inc. © Sperry Rand Corp.;
17May61; MP11563.
REMITTANCE MAN. See
THE LIFE AND LEGEND OF WYATT EARP. D-8 (119)
TALES OF WELLS FARGO.
REMOTE CONTROL. National Telepix.
14 min., sd., b&w, 16 mm. (Comedy
capers) NM: editing, animation &
additions. © National Telepix, Inc.;
1Oct61; LP20485.
THE REMOUNTS. See
STAGECOACH—WEST.
REMOVAL & INSTALLATION OF BRAKE SHOES
ON SELF ADJUSTING BENDIX BRAKES.
Raybar Technical Films. 4 min., si.,
color, 16 mm. (Light automobile
service and maintenance) © Raybar
Technical Films, Inc.; 3Jan66;
MP16046.
REMOVING A CYLINDER RIDGE. McGraw-Hill
Book Co. 4 min., si., color, Super
8 mm. (Automotive mechanics series:
Engine rebuilding I) Presented by
McGraw-Hill Text-Films. © McGraw-Hill,
Inc.; 29Dec67; MP19632.
REMOVING AND REPLACING CHUCKS. See
MACHINE SHOP SERIES: ELEMENTARY ENGINE LATHE 1.
REMOVING FROG PITUITARY. Ealing Corp.
3 min., si., color, 8 mm. © Ealing
Corp.; 1Apr67; MP17569.
REMOVING GLAZE FROM CYLINDER WALL.
McGraw-Hill Book Co. 4 min., si.,
color, Super 8 mm. (Automotive
mechanics series: Engine rebuilding I)
Presented by McGraw-Hill Text-Films.
© McGraw-Hill, Inc.; 29Dec67; MP19636.
REMOVING THE PISTON FROM THE CYLINDER.
McGraw-Hill Book Co. 4 min., si.,
color, Super 8 mm. (Automotive
mechanics series: Engine rebuilding I)
Presented by McGraw-Hill Text-Films.
© McGraw-Hill, Inc.; 29Dec67; MP19633.
REMOVING VALVES FROM A CYLINDER HEAD.
McGraw-Hill Book Co. 3 min., si.,
color, 8 mm. (Automotive mechanics
series: Engine rebuilding, set 2)
Presented by McGraw-Hill Text-Films.
© McGraw-Hill, Inc.; 29Dec67; MP19109.
THE RENAISSANCE AND THE REFORMATION. See
I AM WITH YOU. Part III.
THE RENAISSANCE OF GUSSIE HILL. See
CHECKMATE.
RENAL HYPERTENSION, BILATERAL NEPHRECTOMY,
KIDNEY TRANSPLANTATION. Upjohn
Co. 23 min., sd., color, 16 mm.
© Upjohn Co.; 9Sep65; MU7646.
RENDEZVOUS. Columbia Broadcasting
System. 30 min. each, sd., b&w,
16 mm. © Columbia Broadcasting
System, Inc.
Alone. © 6Dec58; LP15914.
The funmaster. © 29Dec58; LP15864.
In an early winter. © 12Dec58;
LP15863.
The sound of gunfire. © 6Dec58;
LP15915.
A very fine deal. © 6Dec58; LP15916.
The white circle. © 6Dec58; LP15917.
RENDEZVOUS. Robert Grand Productions.
24 min., sd., b&w, 35 mm. © Robert
Grand Productions, Inc.; 17Sep65;
LP33751.
RENDEZVOUS. See
ALCOA PRESENTS ONE STEP BEYOND.
TWO FACES WEST.
RENDEZVOUS AT ARILLO. See
LAREDO.
RENDEZVOUS AT RED ROCK. See
CHEYENNE.
RENDEZVOUS AT SUNDOWN. See
ZORRO. 7205.
RENDEZVOUS FOR TWO. See
THE FARMER'S DAUGHTER.
RENDEZVOUS IN SPACE. Martin-Marietta
Corp. 19 min., sd. Appl. author:
Frank Capra. © Martin-Marietta Corp.;
29Jul65; LU3387.
RENDEZVOUS IN TOKYO. See
RUN FOR YOUR LIFE.
RENDEZVOUS IN WASHINGTON. See
CHECKMATE.
RENDEZVOUS WITH A MIRACLE. See
CHEYENNE.
RENDEZVOUS WITH LOVE. See
GRAND JURY.
THE RENEGADE. See
LASSIE. 6005.
PONY EXPRESS. 7112.
THE RENEGADE BRAND. See
LARAMIE.
RENEGADE WHITE. See
GUNSMOKE.
RENEGADES. See
CHEYENNE.
GUNSMOKE.
STAGECOACH—WEST.
RENEWING THE CENTRAL BUSINESS DISTRICT. See
DECISION FOR A CITY, RENEWING THE
CENTRAL BUSINESS DISTRICT.
THE RENO BROTHERS. See
JOHNNY RINGO. 2343.
REPAIR OF ROBESPIERRE. See
BRINGING UP BUDDY.
REPASSAGE. (Ironing) Lucie Eber,
France. 3 min., si., b&w, 16 mm.
© Lucie Eber; 12Jan65; LP29765.
REPEAT. See
LEHN & FINK CONSUMER PRODUCTS DIVISION
TELEVISION COMMERCIALS.
REPEAT PERFORMANCE. See
LIFE OF RILEY.
THE REPENTANT OUTLAW. See
TALES OF WELLS FARGO.
THE REPLACEMENT. See
LARAMIE.
PONY EXPRESS. 7102.
REPLACEMENT FOR PHOEBE. See
HAZEL.
REPLACING A LOWER MAIN BEARING INSERT.
McGraw-Hill Book Co. 4 min., si.,
color, Super 8 mm. (Automotive
mechanics series: Engine rebuilding I)
Presented by McGraw-Hill Text-Films.
© McGraw-Hill, Inc.; 29Dec67; MP19639.
REPLACING AN UPPER MAIN BEARING
INSERT. McGraw-Hill Book Co. 4 min.,
si., color, Super 8 mm. (Automotive
mechanics series: Engine rebuilding I)
Presented by McGraw-Hill Text-Films.
© McGraw-Hill, Inc.; 29Dec67; MP19638.
REPLACING GENERATOR BRUSHES. Raybar
Technical Films. 4 min., si.,
color, 16 mm. © Raybar Technical
Films, Inc.; 30Jun66; MP16274.
REPLACING IGNITION POINTS. Raybar
Technical Films. 4 min., si.,
color, 16 mm. © Raybar Technical
Films, Inc.; 30Jun66; MP16276.
REPLACING THE THERMOSTAT. Raybar
Technical Films. 4 min., si.,
color, 16 mm. © Raybar Technical
Films, Inc.; 30Jun66; MP16275.
THE REPORT CARD. See
THE DANNY THOMAS SHOW.
THE DONNA REED SHOW.
REPORT FILM ON THE SEATTLE CONFERENCE
ON NEW INSTRUCTIONAL MATERIALS.
University of Washington. 17 min.,
sd., b&w. © University of Washington;
19Sep66; MU7729.
A REPORT FROM BUDD. Marathon TV Newsreel.
20 min., sd., b&w, 16 mm.
© Marathon TV Newsreel; 1Apr58;
MP10593.
A REPORT FROM SAN JUAN. Delta Films
International. Released by Warner
Bros. Pictures. 17 min., sd., color,
35 mm. (World wide adventure special)
© Warner Bros. Pictures, Inc.;
8Aug64; LP29442.
REPORT FROM THIOKOL. See
POLYSULFIDE BASE INDUSTRIAL SEALANTS.
POLYSULFIDES FOR INDUSTRY.
REPORT FROM VIETNAM BY WALTER CRONKITE. See
WHO? WHAT? WHEN? WHERE? WHY?
THE REPORT GENERATOR. International
Business Machines Corp. 10 min.,
sd., color, 16 mm. © International
Business Machines Corp.; 1Dec61;
MU7109.
A REPORT ON FINDINGS OF SEVEN-YEAR
RESEARCH AND DEVELOPMENT PROGRAM ON
BALLET AND DANCE TRAINING. Imperial
Studios of Ballet. 113 min., sd.,
b&w, 16 mm. Appl. authors: Edward S.
Kneeland, Jr. & Jo Anna Kneeland.
© Imperial Studios of Ballet, Inc.;
29Oct63; MU7353.
A REPORT ON FOUNTAIN DESIGNS. Kim
Lighting & Manufacturing Co.
24 min., color, 16 mm. © Kim Manufacturing
Co., Inc.; 5Dec62; MU7241.
REPORT ON HONG KONG. See
CBS REPORTS.
REPORT THAT ACCIDENT! National Association
of Automotive Mutual Insurance
Companies. Released by Dallas Jones
Productions. 11 min., sd., color,
16 mm. Eastman color. © National
Association of Automotive Mutual
Insurance Companies; 28Dec62; MP12990.
REPORT TO ARMCO PEOPLE: A NEWSREEL OF
EVENTS AROUND THE WORLD OF ARMCO.
Armco Steel Corp. Made by Jam Handy
Organization. 16 min., sd., b&w,
16 mm. © Armco Steel Corp.; 15May63;
MU7322.
A REPORT TO YOU. Great Lakes Steel
Corp. Made by Jam Handy Organization.
24 min., sd., Ektachrome, 16 mm.
© Jam Handy Organization, Inc.;
1Oct62; MU7218.
REPRESENTATION AND DESIGN. Rensselaer
Polytechnic Institute. 14 min., sd.,
color, 16 mm. © Rensselaer Polytechnic
Institute; 12Feb63; MP13143.
REPRIEVE. Kaufman-Lubin Productions.
Released by Allied Artists Pictures
Corp. 106 min., sd., b&w, 35 mm.
Based on Reprieve, the autobiography
of John Resko. © Allied Artists
Pictures Corp. & Kaufman-Lubin Productions,
Inc.; 4Apr62; LP21654.
REPRIEVE. See
CHEYENNE.
REPRISAL. See
GUNSMOKE.
REPRODUCTION. See
BIOLOGY SERIES II. Film no. 7.
REPRODUCTION AND BIRTH. Ealing Corp.
Made by Milner-Fenwick. 25 min.,
sd., b&w, 16 mm. (Starting tomorrow)
© Ealing Corp.; 1May69; MP19606.
REPRODUCTION AND URINARY SYSTEMS OF THE FEMALE. See
THE FROG: REPRODUCTION AND URINARY
SYSTEMS OF THE FEMALE.
REPRODUCTION IN THE SEA URCHIN. Coronet
Instructional Films. 14 min.,
sd., b&w, 16 mm. © Coronet Instructional
Films, a division of Esquire,
Inc.; 8Oct65; MP15587.
REPRODUCTIVE AND URINARY SYSTEMS OF THE MALE. See
THE FROG: REPRODUCTIVE AND URINARY
SYSTEMS OF THE MALE.
REPRODUCTIVE AND URINARY SYSTEMS OF THE
MALE AND FEMALE COMPARED. See
THE FROG: REPRODUCTIVE AND URINARY
SYSTEMS OF THE MALE AND FEMALE
COMPARED.
THE REPTILE. Seven Arts-Hammer Film
Productions. Released by Twentieth
Century-Fox Film Corp. 90 min.,
sd., color, 35 mm. Color by DeLuxe.
© Hammer Film Productions, Ltd.;
6Apr66; LP32663.
REPTILES AND AMPHIBIANS. National
Geographic Society. 51 min., sd.,
color, 16 mm. Produced in association
with Metromedia Producers
Corp. © National Geographic Society;
25Nov68; MP18961.
REPTILES ARE INTERESTING. Film Associates
of California. 11 min., sd.,
color, 16 mm. Eastman color.
© Film Associates of California;
15Jan55 (in notice: 1954); MP14008.
REPTILICUS. Cinemagic. Released by
American International Pictures.
81 min., sd., color, 35 mm. An Alta
Vista Production. Pathecolor. Based
on story by Sid Pink. © Cinemagic,
Inc.; 21Nov62; LP23589.
THE REPUBLIC OF SOUTH AFRICA, ITS LAND
AND ITS PEOPLE. (Revised edition,
Union of South Africa) Encyclopaedia
Britannica Films. 17 min., sd.,
color, 16 mm. Eastman color.
© Encyclopaedia Britannica Films,
Inc.; 10Jul63; MP13445.
THE REPUBLICANS. See
CAMPAIGN '66.
REPUTATION FOR MURDER. See
JOHNNY RINGO. 2375.
REQUIEM AT DANCER'S HILL. See
THE DAKOTAS.
REQUIEM AT MISSION SPRINGS. See
THE RIFLEMAN.
REQUIEM FOR A BULL. See
MISTER MAGOO.
REQUIEM FOR A COUNTRY DOCTOR. See
THE VIRGINIAN.
REQUIEM FOR A HEAVYWEIGHT. Columbia
Pictures Corp. 85 min., sd., b&w,
35 mm. David Susskind's production.
© Columbia Pictures Corp.; 1Sep62;
LP22965.
REQUIEM FOR A PRESIDENT: FUNERAL OF
JOHN F. KENNEDY. Norwood Studios.
10 min., si., color, 35 mm. Eastman
color. Appl. author: Philip Martin.
© Norwood Studios, Inc.; 19May64;
MP14172.
REQUIEM FOR A SUCKER. See
MICKEY SPILLANE'S MIKE HAMMER.
REQUIEM FOR A SUNDAY AFTERNOON. See
NAKED CITY.
REQUIEM FOR AN UNDERWEIGHT HEAVYWEIGHT. See
THE MANY LOVES OF DOBIE GILLIS.
REQUIEM FOR OLD MAN CLANTON. See
THE LIFE AND LEGEND OF WYATT EARP.
REQUIEM FOR THE POPE OF UNITY. See
EYEWITNESS.
REQUIEM TO MASSACRE. See
CHEYENNE. Gold, glory and Custer—prelude.
THE RESCUE. See
BONANZA.
COMBAT!
LASSIE. 6002.
RESCUE 8. Wilbert Productions. 1 reel
each, sd., b&w, 16 mm. © Wilbert
Productions, Inc.
Add a pinch of death. © 30Dec59;
LP25062.
The ammonia trap. © 14Oct58; LP22283.
Backfire. © 24Dec59; LP25060.
The bells of fear. © 2Dec58; LP22288.
The birdman. © 26Nov59; LP25048.
Breakdown. © 7Jan60; LP25070.
The cage. © 7Oct58; LP22281.
Calamity coach. © 4Jan59; LP22291.
The cave-in. © 8Dec58; LP22287.
The chasm. © 11Nov58; LP22284.
Children of the sun. © 2Mar59;
LP22301.
The cliff. © 20Oct58; LP22279.
The collision. © 3Dec59; LP25057.
Comeback. © 3Mar60; LP25068.
The crackup. © 21Oct58; LP22282.
Danger in paradise. © 23Apr59;
LP22308.
Danger, 20,000 volts. © 22Dec58;
LP22289.
Dangerous Salvage. © 7Oct59; LP25052.
Death for hire. © 8Jun59; LP22315.
Deep Danger. © 14Apr60; LP25040.
The Devil's Cavern. © 30Mar60;
LP25037.
Disaster town. © 9Feb59; LP22298.
The ferris wheel. © 23Sep58; LP22277.
Find that bomb. © 18Nov58; LP22286.
Flash flood. © 26Jan59; LP22296.
Fool's gold. © 24Sep59; LP25055.
Forced landing. © 7Oct59; LP25044.
Forty five fathoms, dead or alive.
© 23Feb59; LP22300.
A handful of vengeance. © 16Feb59;
LP22299.
Heat wave. © 2Jul59; LP25049.
High explosive. © 30Dec59; LP25061.
High hazard. © 12Jan59; LP22294.
High lonely. © 17Feb60; LP25067.
High pressure. © 11May59; LP22311.
Hour of rage. © 27Apr59; LP22309.
I don't remember. © 21Apr60; LP25042.
If the bough breaks. © 4May59;
LP22310.
Initiation to danger. © 2Feb59;
LP22297.
International Incident. © 9Mar59;
LP22302.
Leap for life. © 10Dec59; LP25058.
Left hook to hades. © 25May59;
LP22313.
Lifeline. © 10Feb60; LP25066.
Nine minutes to live. © 16Mar59;
LP22303.
No trespassing. © 23Mar59; LP22304.
Not for glory. © 2Oct59; LP25053.
One more step. © 18May59; LP22312.
102 to Bakersfield. © 30Sep58;
LP22278.
Paid in full. © 30Sep59; LP25054.
Pitfall. © 19Nov59; LP25045.
Quicksand. © 17Mar60; LP25069.
The rock prison. © 24Sep59; LP25051.
Rubber gold. © 19Jan59; LP22295.
Runaway. © 3Oct59; LP25047.
School for violence. © 24Mar60;
LP25039.
The scrap iron jungle. © 9Dec58;
LP22290.
Second Team. © 28Apr60; LP25041.
Secret of the mission. © 5Jan59;
LP22292.
Smashout. © 3Oct59; LP25043.
Square triangle. © 17Feb60; LP25064.
The Squatters. © 10Dec59; LP25059.
The steel mountain. © 10Nov58;
LP22285.
The subterranean city. © 13Oct58;
LP22280.
Suitcase fireman. © 25Sep59; LP25056.
Ten minutes to doomsday. © 14Jan60;
LP25063.
The third strike. © 5Nov59; LP25046.
13 stories up. © 14Apr60; LP25038.
Three men in a vault. © 30Mar59;
LP22305.
3 mile bomb. © 14Oct59; LP25050.
Ti-Ling. © 18Feb60; LP25065.
Tower of hate. © 13Apr59; LP22307.
The trap. © 1Jun59; LP22314.
Trial by fire. © 7Jan59; LP22293.
The walking death. © 6Apr59; LP22306.
THE RESCUE OF RUFUS. See
BACHELOR FATHER.
RESCUE RIDGE. See
LASSIE.
RESCUE WITH YUL BRYNNER. See
CBS REPORTS.
RESEARCH BY ROCKETS. U.S. National
Academy of Sciences. 27 min., sd.,
color, 16 mm. © U.S. National
Academy of Sciences; 1Nov60; MP10976.
A RESEARCH PROBLEM: INERT (?) GAS COMPOUNDS.
Regents of the University
of California. Made by Chemical
Education Study. 19 min., sd., color,
16 mm. (CHEM study film) Eastman
color. © Regents of the University
of California; 14Nov63; MP13731.
RESENTMENTS. See
IT'S LIGHT TIME.
THE RESERVATION. See
UNITED STATES MARSHAL.
Welcome to our website – the perfect destination for book lovers and
knowledge seekers. We believe that every book holds a new world,
offering opportunities for learning, discovery, and personal growth.
That’s why we are dedicated to bringing you a diverse collection of
books, ranging from classic literature and specialized publications to
self-development guides and children's books.
More than just a book-buying platform, we strive to be a bridge
connecting you with timeless cultural and intellectual values. With an
elegant, user-friendly interface and a smart search system, you can
quickly find the books that best suit your interests. Additionally,
our special promotions and home delivery services help you save time
and fully enjoy the joy of reading.
Join us on a journey of knowledge exploration, passion nurturing, and
personal growth every day!
ebookbell.com

More Related Content

PDF
Evaluation Fundamentals Insights into Program Effectiveness Quality and Value...
PDF
Evaluation Fundamentals Insights into Program Effectiveness Quality and Value...
PDF
Program Evaluation 3rd Edition John M. Owen
PDF
Designing and Managing Programs: An Effectiveness-Based Approach (SAGE
PDF
Program Evalutaion Forms And Approaches 3rd Edition John M Owen
PDF
Formatted_EvaluationPlanTemplate
PDF
Program Evaluation Methods And Case Studies Emil J Posavac Kenneth J Linfield
PPTX
Research methodology
Evaluation Fundamentals Insights into Program Effectiveness Quality and Value...
Evaluation Fundamentals Insights into Program Effectiveness Quality and Value...
Program Evaluation 3rd Edition John M. Owen
Designing and Managing Programs: An Effectiveness-Based Approach (SAGE
Program Evalutaion Forms And Approaches 3rd Edition John M Owen
Formatted_EvaluationPlanTemplate
Program Evaluation Methods And Case Studies Emil J Posavac Kenneth J Linfield
Research methodology

Similar to The Practice Of Health Program Evaluation Second Edition David E Grembowski (20)

PDF
Full download Program Evaluation 3rd Edition John M. Owen pdf docx
DOCX
Ch 6 only 1. Distinguish between a purpose statement, research p
DOCX
Ch 6 only 1. distinguish between a purpose statement, research p
PDF
Program Evaluation 3rd Edition John M. Owen
PPT
Program Evaluation 1
DOCX
Practicum Project Methodology and Evaluation.docx
DOCX
Practicum Project Methodology and Evaluation.docx
DOCX
Process Improvement Plan For this, your penultimate assignment .docx
PDF
17- Program Evaluation a beginner’s guide.pdf
DOCX
Research Topics in Health AdministrationAssignment 1 Final Proj.docx
PDF
Program Evaluation For Social Workers Foundations Of Evidencebased Programs P...
PPTX
Ot5101 005 week 5
PDF
Getting started with a systematic review: developing your review question
PDF
Program Evaluation For Social Workers Foundations Of Evidencebased Programs P...
DOCX
Title of PaperYour nameHCA375– Continuous Quality Monito.docx
DOCX
programme evaluation by priyadarshinee pradhan
PPT
DOCX
© 2013 Laureate Education, Inc. 1 NURS 6231 Healthcar.docx
PPTX
COMMUNITY EVALUATION 2023.pptx
Full download Program Evaluation 3rd Edition John M. Owen pdf docx
Ch 6 only 1. Distinguish between a purpose statement, research p
Ch 6 only 1. distinguish between a purpose statement, research p
Program Evaluation 3rd Edition John M. Owen
Program Evaluation 1
Practicum Project Methodology and Evaluation.docx
Practicum Project Methodology and Evaluation.docx
Process Improvement Plan For this, your penultimate assignment .docx
17- Program Evaluation a beginner’s guide.pdf
Research Topics in Health AdministrationAssignment 1 Final Proj.docx
Program Evaluation For Social Workers Foundations Of Evidencebased Programs P...
Ot5101 005 week 5
Getting started with a systematic review: developing your review question
Program Evaluation For Social Workers Foundations Of Evidencebased Programs P...
Title of PaperYour nameHCA375– Continuous Quality Monito.docx
programme evaluation by priyadarshinee pradhan
© 2013 Laureate Education, Inc. 1 NURS 6231 Healthcar.docx
COMMUNITY EVALUATION 2023.pptx
Ad

Recently uploaded (20)

PDF
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
PDF
Hazard Identification & Risk Assessment .pdf
PDF
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
PDF
Practical Manual AGRO-233 Principles and Practices of Natural Farming
PPTX
Virtual and Augmented Reality in Current Scenario
PDF
Paper A Mock Exam 9_ Attempt review.pdf.
PDF
Vision Prelims GS PYQ Analysis 2011-2022 www.upscpdf.com.pdf
PDF
Trump Administration's workforce development strategy
PPTX
20th Century Theater, Methods, History.pptx
PPTX
202450812 BayCHI UCSC-SV 20250812 v17.pptx
PPTX
Unit 4 Computer Architecture Multicore Processor.pptx
PPTX
ELIAS-SEZIURE AND EPilepsy semmioan session.pptx
PDF
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
PDF
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
PPTX
Introduction to pro and eukaryotes and differences.pptx
PDF
medical_surgical_nursing_10th_edition_ignatavicius_TEST_BANK_pdf.pdf
PPTX
TNA_Presentation-1-Final(SAVE)) (1).pptx
PPTX
B.Sc. DS Unit 2 Software Engineering.pptx
PDF
Chinmaya Tiranga quiz Grand Finale.pdf
PDF
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
ChatGPT for Dummies - Pam Baker Ccesa007.pdf
Hazard Identification & Risk Assessment .pdf
Τίμαιος είναι φιλοσοφικός διάλογος του Πλάτωνα
Practical Manual AGRO-233 Principles and Practices of Natural Farming
Virtual and Augmented Reality in Current Scenario
Paper A Mock Exam 9_ Attempt review.pdf.
Vision Prelims GS PYQ Analysis 2011-2022 www.upscpdf.com.pdf
Trump Administration's workforce development strategy
20th Century Theater, Methods, History.pptx
202450812 BayCHI UCSC-SV 20250812 v17.pptx
Unit 4 Computer Architecture Multicore Processor.pptx
ELIAS-SEZIURE AND EPilepsy semmioan session.pptx
MBA _Common_ 2nd year Syllabus _2021-22_.pdf
A GUIDE TO GENETICS FOR UNDERGRADUATE MEDICAL STUDENTS
Introduction to pro and eukaryotes and differences.pptx
medical_surgical_nursing_10th_edition_ignatavicius_TEST_BANK_pdf.pdf
TNA_Presentation-1-Final(SAVE)) (1).pptx
B.Sc. DS Unit 2 Software Engineering.pptx
Chinmaya Tiranga quiz Grand Finale.pdf
1.3 FINAL REVISED K-10 PE and Health CG 2023 Grades 4-10 (1).pdf
Ad

The Practice Of Health Program Evaluation Second Edition David E Grembowski

The Practice of Health Program Evaluation
Second Edition

David Grembowski
University of Washington
FOR INFORMATION:

SAGE Publications, Inc.
2455 Teller Road
Thousand Oaks, California 91320
E-mail: order@sagepub.com

SAGE Publications Ltd.
1 Oliver's Yard
55 City Road
London EC1Y 1SP
United Kingdom

SAGE Publications India Pvt. Ltd.
B 1/I 1 Mohan Cooperative Industrial Area
Mathura Road, New Delhi 110 044
India

SAGE Publications Asia-Pacific Pte. Ltd.
3 Church Street
#10-04 Samsung Hub
Singapore 049483

Copyright © 2016 by SAGE Publications, Inc.

All rights reserved. No part of this book may be reproduced or utilized in any form or by any means, electronic or mechanical, including photocopying, recording, or by any information storage and retrieval system, without permission in writing from the publisher.

Printed in the United States of America

Library of Congress Cataloging-in-Publication Data

Grembowski, David, author.
The practice of health program evaluation / David Grembowski. — Second edition.
p.; cm.
Includes bibliographical references and index.
ISBN 978-1-4833-7637-0 (paperback: alk. paper)
I. Title.
[DNLM: 1. Program Evaluation—methods. 2. Cost-Benefit Analysis. 3. Evaluation Studies as Topic. 4. Health Care Surveys—methods. W 84.41]
RA399.A1
362.1—dc23 2015020744

This book is printed on acid-free paper.

Acquisitions Editor: Helen Salmon
Editorial Assistant: Anna Villarruel
Production Editor: Kelly DeRosa
Copy Editor: Amy Marks
Typesetter: C&M Digitals (P) Ltd.
Proofreader: Sarah J. Duffy
Indexer: Teddy Diggs
Cover Designer: Michael Dubowe
Marketing Manager: Nicole Elliott
Detailed Contents

Acknowledgments
About the Author
Preface
Prologue

1. Health Program Evaluation: Is It Worth It?
    Growth of Health Program Evaluation
    Types of Health Program Evaluation
        Evaluation of Health Programs
        Evaluation of Health Systems
    Summary
    List of Terms
    Study Questions

2. The Evaluation Process as a Three-Act Play
    Evaluation as a Three-Act Play
    Act I: Asking the Questions
    Act II: Answering the Questions
    Act III: Using the Answers in Decision Making
    Roles of the Evaluator
    Evaluation in a Cultural Context
    Ethical Issues
    Evaluation Standards
    Summary
    List of Terms
    Study Questions

Act I: Asking the Questions

3. Developing Evaluation Questions
    Step 1: Specify Program Theory
        Conceptual Models: Theory of Cause and Effect
        Conceptual Models: Theory of Implementation
    Step 2: Specify Program Objectives
    Step 3: Translate Program Theory and Objectives Into Evaluation Questions
    Step 4: Select Key Questions
        Age of the Program
        Budget
        Logistics
        Knowledge and Values
        Consensus
        Result Scenarios
        Funding
    Evaluation Theory and Practice
    Assessment of Fit
    Summary
    List of Terms
    Study Questions

Act II: Answering the Questions
Scene 1: Developing the Evaluation Design to Answer the Questions

4. Evaluation of Program Impacts
    Quasi-Experimental Study Designs
        One-Group Posttest-Only Design
        One-Group Pretest-Posttest Design
        Posttest-Only Comparison Group Design
        Recurrent Institutional Cycle ("Patched-Up") Design
        Pretest-Posttest Nonequivalent Comparison Group Design
        Single Time-Series Design
        Repeated Treatment Design
        Multiple Time-Series Design
        Regression Discontinuity Design
        Summary: Quasi-Experimental Study Designs and Internal Validity
    Counterfactuals and Experimental Study Designs
        Counterfactuals and Causal Inference
        Pretest-Posttest Control Group Design
        Posttest-Only Control Group Design
        Solomon Four-Group Design
        Randomized Study Designs for Population-Based Interventions
        When to Randomize
        Closing Remarks
    Statistical Threats to Validity
    Generalizability of Impact Evaluation Results
        Construct Validity
        External Validity
        Closing Remarks
    Evaluation of Impact Designs and Meta-Analysis
    Summary
    List of Terms
    Study Questions

5. Cost-Effectiveness Analysis
    Cost-Effectiveness Analysis: An Aid to Decision Making
    Comparing Program Costs and Effects: The Cost-Effectiveness Ratio
    Types of Cost-Effectiveness Analysis
        Cost-Effectiveness Studies of Health Programs
        Cost-Effectiveness Evaluations of Health Services
    Steps in Conducting a Cost-Effectiveness Analysis
        Steps 1–4: Organizing the CEA
        Step 5: Identifying, Measuring, and Valuing Costs
        Step 6: Identifying and Measuring Effectiveness
        Step 7: Discounting Future Costs and Effectiveness
        Step 8: Conducting a Sensitivity Analysis
        Step 9: Addressing Equity Issues
        Steps 10 and 11: Using CEA Results in Decision Making
    Evaluation of Program Effects and Costs
    Summary
    List of Terms
    Study Questions

6. Evaluation of Program Implementation
    Types of Evaluation Designs for Answering Implementation Questions
        Quantitative and Qualitative Methods
        Multiple and Mixed Methods Designs
    Types of Implementation Questions and Designs for Answering Them
        Monitoring Program Implementation
        Explaining Program Outcomes
    Summary
    List of Terms
    Study Questions

Act II: Answering the Questions
Scenes 2 and 3: Developing the Methods to Carry Out the Design and Conducting the Evaluation

7. Population and Sampling
    Step 1: Identify the Target Populations of the Evaluation
    Step 2: Identify the Eligible Members of Each Target Population
    Step 3: Decide Whether Probability or Nonprobability Sampling Is Necessary
    Step 4: Choose a Nonprobability Sampling Design for Answering an Evaluation Question
    Step 5: Choose a Probability Sampling Design for Answering an Evaluation Question
        Simple and Systematic Random Sampling
        Proportionate Stratified Sampling
        Disproportionate Stratified Sampling
        Post-Stratification Sampling
        Cluster Sampling
    Step 6: Determine Minimum Sample Size Requirements
        Sample Size in Qualitative Evaluations
        Types of Sample Size Calculations
        Sample Size Calculations for Descriptive Questions
        Sample Size Calculations for Comparative Questions
    Step 7: Select the Sample
    Summary
    List of Terms
    Study Questions

8. Measurement and Data Collection
    Measurement and Data Collection in Quantitative Evaluations
        The Basics of Measurement and Classification
        Step 1: Decide What Concepts to Measure
        Step 2: Identify Measures of the Concepts
        Step 3: Assess the Reliability, Validity, and Responsiveness of the Measures
        Step 4: Identify and Assess the Data Source of Each Measure
        Step 5: Choose the Measures
        Step 6: Organize the Measures for Data Collection and Analysis
        Step 7: Collect the Measures
    Data Collection in Qualitative Evaluations
    Reliability and Validity in Qualitative Evaluations
    Management of Data Collection
    Summary
    Resources
    List of Terms
    Study Questions

9. Data Analysis
    Getting Started: What's the Question?
    Qualitative Data Analysis
    Quantitative Data Analysis
        What Are the Variables for Answering Each Question?
        How Should the Variables Be Analyzed?
    Summary
    List of Terms
    Study Questions

Act III: Using the Answers in Decision Making

10. Disseminating the Answers to Evaluation Questions
    Scene 1: Translating Evaluation Answers Back Into Policy Language
        Translating the Answers
        Building Knowledge
        Developing Recommendations
    Scene 2: Developing a Dissemination Plan for Evaluation Answers
        Target Audience and Type of Information
        Format of Information
        Timing of the Information
        Setting
    Scene 3: Using the Answers in Decision Making and the Policy Cycle
        How Answers Are Used by Decision Makers
        Increasing the Use of Answers in the Evaluation Process
    Ethical Issues
    Summary
    List of Terms
    Study Questions

11. Epilogue

Compendium
References
Index
Acknowledgments

In many ways, this book is a synthesis of what I have learned about evaluation from my evaluation teachers and colleagues, and I am very grateful for what they have given me. I wish to acknowledge and thank my teachers—Marilyn Bergner and Stephen Shortell—who provided the bedrock for my professional career in health program evaluation when I was in graduate school. In those days, their program evaluation class was structured around a new, unpublished book, Health Program Evaluation, by Stephen Shortell and William Richardson, which has since become a classic in our field and provided an early model for this work.

I also benefited greatly from the support and help of other teachers of health program evaluation. I wish to thank Ronald Andersen, who taught health program evaluation at the University of Chicago (now at UCLA) for many years. He was an early role model and shared many insights about how to package course material in ways that graduate students could grasp readily. His evaluation course divided the evaluation process into three distinct phases, and I discovered early that his model also worked very well in my own evaluation courses. I also wish to thank Diane Martin and Rita Altamore, who taught this course at the University of Washington and shared their approaches to teaching this subject with me.

I learned many other "lessons" about evaluation through my work with other faculty—Melissa Anderson, Ronald Andersen, Betty Bekemeier, Shirley Beresford, Michael Chapko, Meei-shia Chen, Karen Cook, Douglas Conrad, Richard Deyo, Paula Diehr, Don Dillman, Mary Durham, Ruth Engelberg, Louis Fiset, Paul Fishman, Harold Goldberg, David Grossman, Wayne Katon, Eric Larson, Diane Martin, Peter Milgrom, Donald Patrick, James Pfeiffer, James Ralston, Robert Reid, Sharyne Shiu-Thornton, Charles Spiekerman, John Tarnai, Beti Thompson, Thomas Wickizer, and others—on various studies over the years. I also am grateful to the faculty in the Departments of Biostatistics and Epidemiology in the School of Public Health at the University of Washington, who continue to advance my ongoing education about research methods. Finally, I wish to thank the Agency for Healthcare Research and Quality for inviting me to become a member of a standing study section and for the opportunity to review grant applications for 4 years. I learned much about research methods and evaluation from my study section colleagues and from performing the reviews, which has informed the content of this second edition.

Several people played important roles in the production of this book. I especially want to thank the students in my health program evaluation class, who have provided me with insights about how to write a book that offers guidance to those who have never performed an evaluation. Many thanks also are extended to the anonymous SAGE reviewers; their thoughtful comments significantly improved the quality of this textbook. Last but by no means least, I wish to thank my family for their support throughout both editions of this book.
About the Author

David Grembowski, PhD, MA, is a professor in the Department of Health Services in the School of Public Health and the Department of Oral Health Sciences in the School of Dentistry, and adjunct professor in the Department of Sociology, at the University of Washington. He has taught health program evaluation to graduate students for more than 20 years. His evaluation interests are prevention, the performance of health programs and health care systems, survey research methods, and the social determinants of population health. His other work has examined efforts to improve quality by increasing access to care in integrated delivery systems; pharmacy outreach to provide statins preventively to patients with diabetes; managed care and physician referrals; managed care, patient-physician relationships, and physician job satisfaction; the cost-effectiveness of preventive services for older adults; cost-sharing and seeing out-of-network physicians; social gradients in oral health; local health department spending and racial/ethnic disparities in mortality rates; fluoridation effects on oral health and dental demand; financial incentives and dentist adoption of preventive technologies; effects of dental insurance on dental demand; and the link between mother and child access to dental care.
Preface

Since The Practice of Health Program Evaluation was published over a decade ago, much has changed in evaluation. The methods for conducting evaluations of health programs and systems have advanced considerably, and the mastery of evaluation has become more challenging. In this second edition, my intent is to create a state-of-the-art resource for graduate students, researchers, health policymakers, clinicians, administrators, and other groups to use in evaluating the performance of public health programs and health systems. In particular, two areas receive much more attention in the second edition: program theory and causal inference.

Program theory refers to the chain of logic, or mechanisms, through which a health program is expected to cause change that leads to desired beneficial effects and avoids unintended consequences. Theory is important in the evaluation process because of growing evidence that health programs based on theory are more likely to be effective than programs lacking a theoretical base. Consequently, the second edition focuses in more depth on the creation and application of conceptual models to evaluate health programs. For program theory, a conceptual model is a diagram that illustrates the mechanisms through which a program is expected to cause intended outcomes. Because implementation also influences program performance, conceptual models for program implementation also are covered, as well as their relationships to program theory.

In the first edition, causal inference and study designs for impact evaluation were covered in the Campbell tradition. Over the past 15 years, William Shadish and colleagues have refined the Campbell model and its distinctions among internal, external, construct, and statistical conclusion validity. At the same time, Donald Rubin and Judea Pearl have advanced their own models of causal inference in experiments and observational studies. The second edition addresses all three models but retains the Campbell model as its core for causal inference.

The second edition also contains new content in many other areas, including the following:

    The role of stakeholders in the evaluation process
    Ethical issues in evaluation and evaluation standards
    The conduct of evaluation in a cultural context
    Study designs for impact evaluation of population-based interventions
    The application of epidemiology and biostatistics in evaluation methods
    Mixed methods that combine quantitative and qualitative approaches for evaluating health programs

Over the past decade, I also have changed how I teach and conduct evaluations, and this professional evolution is captured throughout the book. My professional growth has not been a solo journey but has been influenced continually by the students in my classes, the faculty with whom I work, service on study sections reviewing federal grant applications, and my own evaluation experiences. In particular, Gerald van Belle, professor of biostatistics at the University of Washington, published a book, Statistical Rules of Thumb, which offers simple, practical, and well-informed guidance on how to apply statistical concepts in public health studies. Inspired by van Belle's work, as well as Gerin and Kapelewski's practical "hints" in their book Writing the NIH Grant Proposal, I have sprinkled "Rules of Thumb" throughout several chapters, offering guidance on the practice of evaluation.

The second edition retains its focus on applied research methods, with the assumption that teaching and learning can improve through a customized textbook about the evaluation of health programs and systems that presents information in a clear manner. However, designing and conducting a health program evaluation is much more than an exercise in applied research methods. All evaluations are conducted in a political context, and the ability to complete an evaluation successfully depends greatly on the evaluator's ability to navigate the political terrain. In addition, evaluation itself is a process with interconnected steps designed to produce information for decision makers and other groups. Understanding the steps and their interconnections is just as fundamental to evaluation as is knowledge of quantitative and qualitative research methods. To convey these principles, I use the metaphor of evaluation as a three-act play with a variety of actors and interest groups, each having a role and each entering and exiting the stage at different points in the evaluation process. Evaluators are one of several actors in this play, and it is critical for them to understand their role if they are to be successful.

Applying this principle, the book has three major sections, or "Acts," that cover the basic steps of the evaluation process. Act I, "Asking the Questions," occurs in the political realm, where evaluators work with decision makers, stakeholders, and other groups to identify the questions that the evaluation will answer about a program. Chapter 3 presents material to help students and health professionals develop evaluation questions, specify program theory, and draw conceptual models.

In Act II, "Answering the Questions," evaluation methods are applied to answer the questions about the program. After the relevant interest groups and the evaluator agree on the key questions about a program, the next step is to choose one or more evaluation designs that will answer those questions. Chapter 4 presents experimental and quasi-experimental impact evaluation designs, and Chapter 5 reviews cost-effectiveness analysis, which has become more prevalent and important in health care over the past two decades. Chapter 6 presents methods for designing evaluations of program implementation, or process evaluation, including mixed methods. Once a design is chosen, quantitative and qualitative methods for conducting the evaluation must be developed and implemented. Chapter 7 presents methods for choosing the populations for the evaluation and sampling members from them. Chapter 8 reviews measurement and data collection issues frequently encountered in quantitative and qualitative evaluations. Finally, Chapter 9 describes data analyses for different impact and implementation designs.

In Act III, "Using the Answers in Decision Making," the evaluation returns to the political realm, where findings are disseminated to decision makers, interest groups, and other constituents. A central assumption is that evaluations are useful only when their results are used to formulate new policy or improve program performance. Chapter 10 presents methods for developing formal dissemination plans and reviews factors that influence whether evaluation findings are used or not. Chapter 11 presents some closing thoughts about evaluation in public health and health systems.

In summary, by integrating the evaluation literature about health programs and services from a variety of sources, this book is designed to be an educational resource for teachers and students, as well as a reference for health professionals engaged in program evaluation.
1 Health Program Evaluation: Is It Worth It?

    Growth of Health Program Evaluation
    Types of Health Program Evaluation
        Evaluation of Health Programs
        Evaluation of Health Systems
    Summary
    List of Terms
    Study Questions

Evaluation is a part of everyday life. Does Ford make a better truck than Chevrolet? What kind of reviews did a new movie get? What are the top 10 football teams in the country? Who will be the recipient of this year's outstanding teacher award? All these questions entail judgments of merit, or "worth," reached by weighing information against some explicit or implicit yardstick (Weiss, 1972, 1998a). When judgments result in decisions, evaluation is being performed at some level (Shortell & Richardson, 1978). This book is about the evaluation of health programs and the role it plays in program management and decision making.

All societies face sundry health problems. Accidents, cancer, diabetes, heart disease, HIV infection, suicide, inequitable health and access to health care across social groups, and many others are mentioned commonly in the health literature (U.S. Department of Health and Human Services, 2015a). A health program is an organized response, or "intervention," to reduce or eliminate one or more problems by achieving one or more objectives, with the ultimate goal of improving the health of society or reducing health inequalities across social groups (Shortell & Richardson, 1978). Interventions are defined broadly and include intentional changes in health systems or other societal institutions to improve individual and population health and reduce health inequalities by socioeconomic status, race/ethnicity, religion, age, gender, sexual orientation or gender identity, mental health, disability, geographic location, or other characteristics linked historically to discrimination or social exclusion (U.S. Department of Health and Human Services, 2015a).

Evaluation is the systematic assessment of a program's implementation and consequences to produce information about the program's performance in achieving its objectives (Weiss, 1998a). In general, most evaluations are conducted to answer two fundamental questions: Is the program working as intended? Why is this the case? Research methods are applied to answer these questions, to increase the accuracy and objectivity of judgments about the program's success in reaching its objectives, and to search for evidence of unintended and unwanted consequences. The evaluation process fulfills this purpose by defining clear and explicit criteria for success, collecting representative evidence of program performance, and comparing this evidence to the criteria established at the outset. Evaluations help program managers understand the reasons for program performance, which may lead to improvement or refinement of the program. Evaluations also help program funders make informed judgments about a program's worth, which may result in decisions to extend it to other sites or to cut back or abolish it so that resources may be allocated elsewhere. In essence, evaluation is a management or decision-making tool for program administrators, planners, policymakers, and other health officials.

From a societal perspective, evaluation also may be viewed as a deliberate means of promoting social change for the betterment of society (Shortell & Richardson, 1978; Weiss, 1972). Just as personal growth and development are fundamental to a person's quality of life, so do organizations and institutions mature by learning more about their own behavior (Shortell & Richardson, 1978). The value of evaluation comes from the insights that its findings can generate, which can speed up the learning process to produce benefits on a societal scale (Cronbach, 1982).

Evaluation, however, can be a double-edged sword. The desire to learn more is often accompanied by the fear of what may be found (Donaldson et al., 2002). Favorable results typically are greeted with a sigh of relief by those who want the program to succeed. By contrast, unfavorable results may be as welcome as the plague. When an evaluation finds that a program has not achieved its objectives, the program's very worth is often brought into question. Program managers and staff may feel threatened by poor evaluation results because they often are held accountable for them by funders, who may decide the program has little worth. In this case, funders or other decision makers often have the power and authority to change program implementation, replace personnel, or even terminate the program and allocate funds elsewhere.

For popular health programs, such as prenatal care for low-income women, program advocates may view unfavorable results as a threat to the very life of the program. To a great degree, the worth of prenatal care programs is grounded in the argument that public spending now will prevent future costs and medical complications associated with low birth weight (Huntington & Connell, 1994). Previous evaluations reported "good" news: Prenatal care pays for itself (for every $1.00 spent, up to $3.38 will be saved). The "bad" news is that the evaluations have serious methodologic flaws that may have resulted in overestimates of the cost savings from prenatal care (Huntington & Connell, 1994). Today, the evidence is insufficient to conclude that universal prenatal care prevents adverse birth outcomes and is cost saving (Grosse et al., 2006; Krans & Davis, 2012). These findings have attracted national attention because they challenge the very worth of prenatal care programs if the objective of those programs is to save more than they cost (Kolata, 1994).

In all evaluations, a program's worth depends on both its performance and the desirability of its objectives, which is always a question of values (Kane et al., 1974; Palumbo, 1987; Weiss, 1983). For prenatal care and other prevention programs, the real question may not be, How much does this save? but more simply, How much is this program worth? (Huntington & Connell, 1994). For health and all types of social programs, the answer to this fundamental question can have far-reaching consequences for large numbers of people. Greenhalgh and Russell (2009) capture the essence of the values quandary in evaluation:

    Should we spend limited public funds on providing state-of-the-art neonatal intensive-care facilities for very premature infants? Or providing "Sure Start" programs for the children of teenage single mothers? Or funding in vitro fertilization for lesbian couples? Or introducing a "traffic light" system of food labeling, so even those with low health literacy can spot when a product contains too much fat and not enough fiber? Or ensuring that any limited English speaker is provided with a professional interpreter for health-care encounters? Of course, all these questions require "evidence"—but an answer to the question "What should we do?" will never be plucked cleanly from massed files of scientific evidence. Whose likely benefit is worth whose potential loss? These are questions about society's values, not about science's undiscovered secrets. (p. 310)
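Returning to the prenatal care figures above, the claim "for every $1.00 spent, up to $3.38 will be saved" is a benefit-cost ratio. A minimal worked version follows; the $3.38 figure is the estimate reported by the evaluations cited above, and the formula is the standard benefit-cost ratio rather than a calculation taken from this book:

\[
\text{benefit-cost ratio} = \frac{\text{averted future costs}}{\text{program costs}} = \frac{\$3.38}{\$1.00} = 3.38
\]

A ratio greater than 1 is what supports the claim that a program "pays for itself"; the methodologic flaws noted above suggest the numerator, and therefore the ratio, was overestimated.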
Growth of Health Program Evaluation

Evaluation is a relatively new discipline. Prior to the 1960s, formal, systematic evaluations of social and health programs were rarely conducted, and few professionals performed evaluations as a full-time career (Shadish et al., 1991). With the election of Lyndon Johnson to the presidency in 1964, the United States entered an era of unprecedented growth in social and health programs for the disadvantaged. Medicare (public health insurance for adults aged 65 and over), Medicaid (public health insurance for low-income individuals), and other health care programs were launched, and Congress often mandated and funded evaluation of their performance (O. W. Anderson, 1985). As public and private funding for evaluation grew, so did the number of professionals and agencies conducting evaluations. Today, evaluation is a well-known, international profession. In many countries, evaluators have established professional associations that hold annual conferences (e.g., the American Evaluation Association, Canadian Evaluation Society, African Evaluation Association, and International Organization for Cooperation in Evaluation). Although an association for health program evaluation does not exist in the United States, the American Public Health Association, AcademyHealth (the professional association for health services research), and other groups often serve as national forums for health evaluators to collaborate and disseminate their findings.

Other forces also have contributed to the growth of health program evaluation since the 1960s. Two important factors are scarce resources and accountability. All societies have limited resources to address pressing health problems. In particular, low-income and middle-income countries face the twin problems of severe resource constraints and many competing priorities (Oxman et al., 2010). When resources are scarce, competition for funds can intensify, and decision makers may allocate resources only to programs that can demonstrate good performance at the lowest cost. In such environments, evaluations can provide useful information for managing programs and, if performance is sound, for defending a program's worth and justifying continued funding. Even when an evaluation is launched solely to collect information for defending a program's worth in political battles over resource allocation, as long as the evaluation is conducted in an impartial manner, there is no guarantee that it will produce results favoring the program.

The trillions of dollars invested globally in health programs have increased the importance of accountability (Oxman et al., 2010). For financial and legal reasons, public and private funding agencies are concerned with holding programs accountable for funds and their disbursement, with an eye toward avoiding inappropriate payments. High-income countries have the greatest expenditures in health programs and, therefore, the greatest potential for waste (Oxman et al., 2010). For performance reasons, funding agencies also want to know whether their investments produced expected benefits while avoiding harmful side effects. Similarly, government, employers, and other purchasers of health services are concerned with clinical and fiscal accountability, or evidence that health care systems and providers deliver services of demonstrated effectiveness and quality in an efficient manner (Addicott & Shortell, 2014; Relman, 1988; Rittenhouse et al., 2009; Shortell & Casalino, 2008). Employers also want to know whether their investments in health care improve their employees' productivity, for example, by collecting information about how quickly workers are back on the job after an episode of care (Moskowitz, 1998).

Another factor stimulating interest in health program evaluation is a greater emphasis on prevention. Many people and health professionals believe that preventing disease is better than curing it. Because the evidence indicates that much disease is preventable (U.S. Department of Health and Human Services, 2015a), a variety of preventive programs and technologies have emerged to maintain or improve the nation's health. For example, immunizations to prevent disease, water fluoridation to reduce caries, mammography screening to detect breast cancer, and campaigns promoting the use of bicycle helmets to prevent injuries are common in our society. Healthy People 2000 and its successors, Healthy People 2010 and Healthy People 2020, specify health promotion and disease prevention objectives for the nation and provide a framework for the development and implementation of federal, state, and local programs to meet these objectives (U.S. Department of Health and Human Services, 2015a). As the number of programs has proliferated, so has interest in evaluating their performance in achieving their objectives. However, although prevention and the diagnosis and treatment of illness in its early stages are often advocated because they can save health dollars, preventing illness may either save money or add to health care costs, depending on the intervention and the target population (J. T. Cohen et al., 2008).

Since the 1960s, health program evaluation also has been promoted by public agencies, foundations, and other groups sponsoring a variety of demonstration projects to improve population health and the performance of the health care system or to achieve other goals. Consistent with our nation's belief in incrementalism in political decision making (Lindblom, 1959, 1979; Marmor, 1998; Shortell & Richardson, 1978), decision makers often desire information about whether a proposed change will work before authorizing changes on a broad scale. To supply this information, decision makers may approve demonstration projects or large-scale social experiments to test the viability of promising solutions to pressing health problems. A prominent example is the Rand Health Insurance Experiment, a large-scale experiment in which households were randomly assigned to health insurance plans with different cost-sharing arrangements to determine their impacts on health care utilization and expenditures, health outcomes, and satisfaction with medical and dental care (Aron-Dine et al., 2013; Newhouse & the Insurance Experiment Group, 1993). Because large-scale evaluations are relatively expensive to conduct, controversy may exist about whether their findings are worth the resources invested in them. Nevertheless, decision makers are likely to continue authorizing such projects because they often address critical issues in health policy and because they give decision makers the flexibility to be responsive to a problem while avoiding long-term commitments of resources. Evaluation is an important element of demonstration projects because it provides the evidence for judging their worth.

Another factor contributing to the growth of evaluation is increasing government intervention to fix complex problems in the U.S. health care system. Although the United States spends more per capita on health care than any other country in the world, in 2011 U.S. life expectancy (78.7 years) ranked 26th out of 34 developed countries (Organisation for Economic Co-operation and Development, 2008). Social and physical environments, health behaviors, and genetics account in part for these health patterns, but major problems in the U.S. health care system also contribute to the health deficits. Payment for health care remains largely fee-for-service, which increases overuse and costs, undermining health outcomes and leaving fewer resources for other health-producing societal investments (Evans & Stoddart, 1990). Before 2010, about 18% of Americans were uninsured, and social groups with the worst health were more likely to be uninsured, to have greater unmet needs for preventive and therapeutic care, and to go without doctor visits (Hadley, 2003; National Center for Health Statistics, 2012). For those receiving health care, persistent health care disparities exist across social groups (Agency for Healthcare Research and Quality, 2012), and quality of care is low, with only 55% of Americans receiving recommended care (McGlynn et al., 2003). Patient dissatisfaction with the health care system is high, and health care is often fragmented and provider oriented rather than patient centered (Institute of Medicine, 2001).

To address these and other problems, Congress, with much political rancor, passed the Patient Protection and Affordable Care Act (ACA), the most significant government intervention in the U.S. health care system since the passage of Medicare and Medicaid in the 1960s. Although a key reason for government intervention was to reduce the percentage of uninsured individuals in the United States, another important reason was the escalating cost of health care for federal, state, and local governments, which crowds out resources for other public investments. Between 1989 and 2010, the nation's health care spending grew from $604 billion to $2.6 trillion, with the public share increasing from 45% in 2010 to a projected 49% in 2022 (Cuckler et al., 2013; Levit et al., 1991). Government intervention in the U.S. health care system likely will be greater in the 21st century than in the 20th, which may stimulate future evaluations of system performance.

Another trend increasing health program evaluation is the movement toward evidence-based practice in medicine and related fields. One reason for the low quality of U.S. health care is the lack of evidence about comparative effectiveness, or what services work best, for whom, under what circumstances (Institute of Medicine, 2008; McGlynn et al., 2003). Where evidence exists, clinicians may not provide evidence-based care, or the evidence for single conditions may be less relevant for patients with multiple chronic conditions, who are the main challenge facing health care systems worldwide (Barnett, 2012; Institute of Medicine, 2001, 2008). To increase medical evidence, Congress created, as part of the ACA, the Patient-Centered Outcomes Research Institute (2015), which has funded over 365 studies, totaling more than $700 million, on the comparative effectiveness of alternative clinical services to prevent, diagnose, or treat common medical conditions, as well as to improve health care delivery and outcomes. The National Institutes of Health, the Agency for Healthcare Research and Quality, and other federal agencies also fund studies that contribute to the medical evidence base. Similar calls to grow the evidence base are voiced in public health practice, management, and other fields (Brownson et al., 2003; Brownson et al., 2009; Walshe & Rundall, 2001). The Centers for Disease Control and Prevention has supported The Community Guide, a website that presents evidence-based recommendations on preventive services, programs, and policies that work or do not work, based on systematic reviews conducted by the Community Preventive Services Task Force. Of equal importance, The Community Guide also indicates gaps in the evidence base in 20 areas where there is insufficient evidence to determine whether an intervention works and where more funding and evaluation are needed to close the gaps (Community Guide, 2015).

A related trend, dissemination and implementation science, also is generating growth in evaluation research. This trend suggests that the supply of evidence is not the problem; rather, the critical issue is that only a fraction of the evidence—about 14%—is translated into clinical practice, and promising results in new publications take 17 years to be implemented widely (Balas & Boren, 2000; L. W. Green et al., 2009; Institute of Medicine, 2013). Implementation science is a relatively new field that identifies the factors, processes, and methods that increase the likelihood that evidence-based (i.e., effective) interventions will be adopted and used in medical practice, public health departments, and other settings to sustain improvements in population health (Eccles & Mittman, 2006; Lobb & Colditz, 2013). Implementation science is part of the larger process of translation research, which involves studying and understanding the movement of scientific discovery from "bench to bedside to population"—the linear progression from basic science discoveries to efficacy and effectiveness studies, followed by large-scale demonstrations and, finally, dissemination into practice (Glasgow et al., 2012; Lobb & Colditz, 2013). As dissemination and implementation science continues to grow, so will methods of process and impact evaluation for identifying strategies that can cause greater and quicker uptake of effective interventions into routine use in health organizations.

In the health care system, growth in organizational consolidation and information technology is creating new infrastructures for the evaluation of organizational and health system performance (Cutler & Morton, 2013; Moses et al., 2013). All sectors of the health care system (insurers, physician offices, hospitals, pharmaceuticals, and biotechnology) are consolidating, primarily horizontally (such as the merger of two or more hospitals) but also vertically (such as the merger of a hospital and a physician group), to lower costs through economies of scale and to gain market power over competitors and other sectors of the U.S. health care economy. The ACA has increased the pace of consolidation and integration by authorizing the development of accountable care organizations, or voluntary groups of integrated delivery systems, hospitals, and other providers that assume responsibility for defined populations of Medicare beneficiaries (Berwick, 2011; Cutler & Morton, 2013; Dafny, 2014). As the size of health care organizations has increased, so have investments in information technology to lower costs, increase coordination, and improve the quality and safety of clinical care. Although the evidence is unclear regarding whether information technology has produced these expected benefits, consolidation and information technology have converged to produce massive databases, or "Big Data," creating new opportunities for advancing evaluation methods, assessing health system performance, and building the evidence base in the foreseeable future (Schneeweiss, 2014). Because the trend toward larger, more complex health care organizations creates management challenges, leaders and administrators are major consumers of evaluation information from Big Data, which can be used to make informed decisions to deliver efficient, effective, equitable care on a population level. Growth in Big Data will require evaluators who are adept in both quantitative and qualitative methods (e.g., the analysis of free-form text in the electronic medical charts of thousands of patients) and who are aware of the strengths and limitations of Big Data (Khoury & Ioannidis, 2014).

As a whole, the forces contributing to the growth of health program evaluation are interrelated and will likely continue well into the 21st century. From these trends, two broad types of health program evaluation have emerged, which are reviewed in the next section.
Types of Health Program Evaluation

Health programs usually are implemented to achieve specific outcomes by performing some type of intervention or service. In general, two basic types of evaluation are conducted in the health field:

    The evaluation of health programs
    The evaluation of health systems
Evaluation of Health Programs

Evaluation of health programs covers programs created to reduce or eliminate a health problem or to achieve a specific objective. Healthy People 2020 has established 42 "topic areas" for improving the nation's health and reducing inequalities, ranging from adolescent health and arthritis to tobacco use and vision health (U.S. Department of Health and Human Services, 2015b). In essence, the topic areas are a comprehensive inventory of the categories of health programs that can be implemented to achieve specific health objectives. For example, water fluoridation programs are implemented to improve oral health, and exercise programs are created to increase physical activity. This type of evaluation assesses the performance of programs developed to achieve health objectives in these and other areas. Some topic areas, such as access to health services and public health infrastructure, overlap with the evaluation of health systems.
Evaluation of Health Systems

Aday and colleagues (1998) present a framework for evaluating the performance of health care systems, based partly on Donabedian's (1973) earlier work (see Figure 1.1). A health care system has a structure defined by federal, state, and local laws and regulations; the availability of personnel, facilities, and other resources; and the organization and financing of care. The structure component also includes the characteristics of the population that the system serves, as well as the physical, social, and economic environments where people live. As a whole, the structure of the system influences the process, or delivery, of health services, which in turn produces outcomes: health and well-being.

Three criteria are proposed for gauging the worth—or value—of system performance. The three "E's" define what improvements in health and satisfaction (effectiveness) were produced by health services, at what cost (efficiency), and for what population groups (equity). A fourth "E" (ethics) is essential for judging whether a fair or equitable distribution of the costs and outcomes of health services exists among those who need care and those who pay for it. Based on ethical principles of distributive justice, inequitable access to care exists when those who need care the most do not get it. Table 1.1 presents definitions of criteria for assessing effectiveness, efficiency, and equity at the clinical and population levels.

Evaluations of the performance of health care systems typically examine the influence of the structural component on the process of care, or the influence of the structure and process components on the outcomes of care (Begley et al., 2013; Clancy & Eisenberg, 1998; Kane, 1997). For example, an evaluation of the association between the structure and process components of the system was performed by Baicker and associates (2013; see also Finkelstein et al., 2012), who examined Oregon's 2008 expansion of its Medicaid program for low-income adults through a lottery drawing of about 30,000 individuals from a waiting list of almost 90,000 persons. As expected, for persons who met eligibility requirements and enrolled in the program, Medicaid coverage increased the use of health services, but the findings were mixed for quality of care and health outcomes. Medicaid coverage improved rates of diabetes detection and management, improved self-reported health and measures of mental health but not physical health, and reduced financial strain.

The evaluation of health systems also includes the local public health system. Hajat and colleagues (2009) present a framework for evaluating the performance of local public health departments in improving population health and reducing health inequalities (see Figure 1.2). Local health departments (LHDs) are the government entities that are expected to improve population health and reduce health inequalities, particularly in vulnerable social groups, by creating conditions in communities that support good health (Scutchfield & Howard, 2011). LHDs typically have partnerships with other public, private, and voluntary entities, forming a loosely connected local public health system that coordinates activities to achieve common goals. LHDs operate in a larger environment, or context, that informs their philosophy of public health practice, their mission, and their long- and short-term objectives (or "purpose"). Within an organizational structure, LHDs have an infrastructure, or "inputs," such as personnel, fiscal resources, information, and other resources, which are converted into processes performing the core functions of public health (assessment, policy development, and assurance) and the 10 essential public health services (see Table 1.2), which ultimately drive LHD performance (Hyde & Shortell, 2012). Expected outcomes are improved population health, reduced health inequalities, and a strengthened local public health system.

[Figure 1.1 Framework for Evaluating the Performance of Health Care Systems. Source: Figure 1.4 in Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity, 3rd ed., by Lu Ann Aday et al., 2004. Chicago: Health Administration Press.]
[Table 1.1 Criteria for Assessing Effectiveness, Efficiency, and Equity at the Clinical and Population Levels. Source: Table 1.1 in Evaluating the Healthcare System: Effectiveness, Efficiency, and Equity, 3rd ed., by Lu Ann Aday et al., 2004. Chicago: Health Administration Press.]

Public health services and systems research is the name of the relatively new field that applies the methods of health services research, which include evaluation, to investigate the performance of public health systems (Scutchfield et al., 2009; Scutchfield & Shapiro, 2011). For example, recent studies have examined whether LHDs with greater expenditures per capita have lower mortality and reduced health inequalities. Using national data, Mays and Smith (2011) report that county-level mortality rates declined 1.1% to 6.9% for each 10% increase in LHD spending. Similarly, Grembowski et al. (2010) examined whether 1990–1997 changes in LHD expenditures per capita were associated inversely with 1990–1997 changes in all-cause mortality rates for Black and White racial groups in U.S. local jurisdictions. Although changes in LHD expenditures were not related to reductions in Black/White inequalities in mortality rates in the total population, inverse associations were detected for adults aged 15–44 and for males. Bekemeier et al. (2014) also report that LHD expenditures for maternal and child health (MCH) had the expected inverse relationship with county-level low birth weights, particularly for counties with high concentrations of poverty and for categories of MCH spending based on need.

[Figure 1.2 Framework for Evaluating the Performance of Local Health Departments. Source: "What Predicts Local Public Health Agency Performance Improvement? A Pilot Study in North Carolina," by A. Hajat et al., 2009, Journal of Public Health Management and Practice, 15(2), p. E23. Note: LPHA, local public health agency; MCC, maternity care coordination; TB, tuberculosis; WIC, women, infants, and children.]
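The spending-mortality associations reported above are elasticities: the percentage change in a mortality rate associated with a percentage change in LHD spending, typically estimated by regressing log mortality on log spending. The sketch below is illustrative only, using synthetic county-level data rather than the data from Mays and Smith (2011) or the other studies cited; it shows how such an elasticity is estimated and read.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic county-level data (illustrative only, not the published analyses).
rng = np.random.default_rng(0)
n = 500
spending = rng.lognormal(mean=3.5, sigma=0.5, size=n)  # LHD dollars per capita
# Assume a true elasticity of -0.3: a 10% spending increase -> ~3% lower mortality.
mortality = np.exp(6.0 - 0.3 * np.log(spending) + rng.normal(0, 0.1, n))
counties = pd.DataFrame({"spending": spending, "mortality": mortality})

# Log-log OLS: the slope is the elasticity of mortality with respect to spending.
model = smf.ols("np.log(mortality) ~ np.log(spending)", data=counties).fit()
elasticity = model.params["np.log(spending)"]
print(f"Estimated elasticity: {elasticity:.2f}")
print(f"Implied change for a 10% spending increase: {elasticity * 10:.1f}%")
```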
[Table 1.2 The 10 Essential Public Health Services. Source: "The Public Health System and the 10 Essential Public Health Services," by Centers for Disease Control and Prevention, 2014. Retrieved from http://www.cdc.gov/nphpsp/essentialservices.html.]

Historically, the health care system and the public health system have operated largely in isolation in the United States, perhaps because 97% of national health expenditures go to health care services (Institute of Medicine, 2013), and in prior years the health care system had few incentives to build linkages with the public health system. However, by adopting a population perspective—long the hallmark of public health systems—the ACA is creating opportunities for the health care system and the public health system to work together through several mechanisms (Institute of Medicine, 2013). For example, accountable care organizations (ACOs) are responsible for the quality of care in a defined population (Lewis et al., 2013). Fisher et al. (2012) present a framework for evaluating the performance of ACOs, including their impacts on health outcomes, which creates incentives for ACO providers, LHDs, and community partners to deploy population-level strategies to protect the health of the ACO patient population (Institute of Medicine, 2013, 2014). The ACA's health insurance exchanges and health information exchanges also are a population approach to health insurance expansion and access to health care (Mays & Scutchfield, 2012; Scutchfield et al., 2012).

Economic evaluation, such as cost-effectiveness analysis and cost-benefit analysis, is an important element of assessing the performance of health programs and health systems. The focus is measurement of the benefits, or outcomes, of a health program or medical technology relative to the costs of producing those benefits. In the face of scarce resources, interventions that produce relatively large benefits at a low cost have greater worth than interventions that offer few benefits at high cost. Specific standards for conducting cost-effectiveness studies have emerged to ensure quality and adherence to fundamental elements of the methodology (Gold et al., 1996). Spurred by the trends toward cost containment and evidence-based medicine and public health practice, the number of cost-effectiveness evaluations in the published literature has skyrocketed since the 1980s. The Tufts-New England Medical Center (2013) Cost-Effectiveness Analysis Registry contains over 10,000 cost-effectiveness ratios for a variety of diseases and treatments. Because new technologies are always being created, cost-effectiveness studies will be a major area of evaluation for many years to come.

This textbook is designed to provide a practical foundation for conducting evaluations in these arenas. The concepts and methods are similar to those found in the evaluation of social programs but have been customized for public health and medical care. Evaluation itself is a process conducted in a political context and composed of interconnected steps; another goal of this book is to help evaluators navigate these steps in health settings. A customized, reality-based treatment of health program evaluation should improve learning and ultimately may produce evaluations that are both practical and useful.

While a key goal of evaluation is discovering whether a health program works, many decision makers ultimately want to know about the generalizability of an evaluation's findings—that is, do the findings apply to different social groups and settings, to variations in the intervention itself, and to different ways of measuring outcomes (Shadish et al., 2002)? The evaluation methods for examining whether a single health program works cannot answer questions about the generalizability of a program. However, if a sufficient number of evaluations of a program are conducted that address a common evaluation question, and that contain different kinds of social groups and settings with variations in program and outcome, a meta-analysis of their findings may be performed. Meta-analysis is a quantitative technique for synthesizing, or combining, the results from different evaluations on the same topic, which may yield information about whether the findings are robust over variations in persons, settings, programs, and outcomes (Shadish et al., 2002). Although this textbook addresses in depth the generalizability of findings from a single evaluation, the methods of meta-analysis are not covered here but can be found in other references (M. Borenstein et al., 2009; Cooper et al., 2009).
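To make the cost-effectiveness comparison concrete, here is a minimal sketch of the incremental cost-effectiveness ratio (ICER), the standard way of comparing a program's costs and effects against an alternative; the costs, effects, and program labels below are hypothetical, not entries from the registry.

```python
def icer(cost_new, effect_new, cost_old, effect_old):
    """Incremental cost per incremental unit of effectiveness."""
    return (cost_new - cost_old) / (effect_new - effect_old)

# Hypothetical screening program versus usual care: costs in dollars per
# person, effects in quality-adjusted life years (QALYs) per person.
ratio = icer(cost_new=1200.0, effect_new=8.35, cost_old=900.0, effect_old=8.30)
print(f"ICER: ${ratio:,.0f} per QALY gained")  # -> $6,000 per QALY gained
```

Similarly, the core of a meta-analysis is a weighted average of study-level effect estimates. A minimal fixed-effect, inverse-variance sketch follows, with synthetic inputs; the references cited above cover the full methodology:

```python
import numpy as np

# Hypothetical effect estimates (e.g., risk differences) and standard errors
# from five evaluations of the same program.
effects = np.array([0.12, 0.08, 0.15, 0.05, 0.10])
se = np.array([0.05, 0.04, 0.07, 0.03, 0.06])

weights = 1.0 / se**2  # inverse-variance weights: precise studies count more
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1.0 / np.sum(weights))
print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
```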
Summary

At its core, evaluation entails making informed judgments about a program's worth, ultimately to promote social change for the betterment of society. Unprecedented growth in health programs and the health care system since the 1960s is largely responsible for the development of health program evaluation. Other forces contributing to the growth of evaluation include the increasing importance of accountability and scarce resources, a greater emphasis on prevention, more attention being given to evidence-based practice and implementation science, escalating health care costs and government intervention in health systems, organizational consolidation and information technology, and a reliance on demonstration projects. Because many of these trends will continue through the remainder of this century, so will interest in the evaluation of health programs and health systems. To perform either of these two types of program evaluation, an evaluator completes a process composed of well-defined steps. Chapter 2 reviews the elements of the evaluation process.
List of Terms

Accountability
Demonstration projects
Dissemination
Economic evaluation
Evaluation
Evidence-based practice
Generalizability
Government intervention
Health program
Health systems
Implementation science
Information technology
Meta-analysis
Organizational consolidation
Patient Protection and Affordable Care Act (ACA)
Prevention
Public health services and systems research
Scarce resources
Translation research
Study Questions

1. What is the purpose of health program evaluation?
2. What are the two fundamental questions of program evaluation?
3. How is the worth of a health program determined?
4. What are three factors that have contributed to the growth of health program evaluation since the 1960s? Why are they important?
5. What are the major types of health program evaluation, and what are their relationships (if any) to each other?
6. What kinds of evaluations of the health reforms in the ACA might be conducted?
  • 43. 2 The Evaluation Process as a Three-Act Play Evaluation as a Three-Act Play Act I: Asking the Questions Act II: Answering the Questions Act III: Using the Answers in Decision Making Role of the Evaluator Evaluation in a Cultural Context Ethical Issues Evaluation Standards Summary List of Terms Study Questions Performing a health program evaluation involves more than just the application of research methods. The evaluation process is composed of specific steps designed to produce information about a program’s performance that is relevant and useful for decision makers, managers, program advocates, health professionals, and other groups. Understanding the steps and their interconnections is just as fundamental to evaluation as knowledge of the quantitative and qualitative research methods for assessing program performance. There are two basic perspectives on the evaluation process. In the first perspective—the rational-decision- making model—evaluation is a technical activity, in which research methods from the social sciences are applied in an objective manner to produce information about program performance for use by decision makers (Faludi, 1973; Veney & Kaluzny, 1998). Table 2.1 lists the elements of the rational-decision-making model, which are derived from systems analysis and general systems theory (Quade & Boucher, 1968; von Bertalanffy, 1967). The model is a linear sequence of steps to help decision makers solve problems by learning about the causes of the problems, analyzing and comparing alternative solutions in light of their potential consequences, making a rational decision based on that information, and evaluating the actual consequences. Today, the systematic comparison of alternative approaches in Step 3 is often referred to as health policy analysis, which compares the potential, future advantages and disadvantages of proposed, alternative policy options to reduce or solve a health care issue or population health problem (Aday et al., 2004; Begley et al., 2013). In practice, however, the evaluation of health programs rarely conforms to the rational-decision-making model. Because politics is how we attach values to facts in our society, politics and values are inseparable from the evaluation of health programs (Palumbo, 1987; Weiss, 1972). For instance, the public health value of “health for everyone” conflicts with the differences in infant mortality rates across racial/ethnic groups, and politics is the use of values to define whether this difference is a problem and what, if anything, should be done about it. Consequently, in the second perspective, evaluations are conducted in a political context in which a variety of interest groups compete for decisions in their favor. Completing an evaluation successfully depends greatly on the evaluator’s ability to navigate this political terrain. 39
This chapter introduces the political nature of the evaluation process, using the metaphor of the evaluation process as a three-act play. The remaining chapters of the book are organized around each act of the play. The last two sections of this chapter address the importance of ethics and cultural context in conducting evaluations and the role of the evaluator in the evaluation process.
Evaluation as a Three-Act Play

Drawing from Chelimsky’s earlier work (1987), I use the metaphor of a “three-act play” to describe the political nature of the evaluation process. The play has a variety of actors and interest groups, each having a role, and each entering and exiting the political “stage” at different points in the evaluation process. Evaluators are one of several actors in the play, and it is critical for them to understand their role if they are to be successful. The evaluation process itself generates the plot of the play, which varies from program to program and often has moments of conflict, tension, suspense, quiet reflection, and even laughter as the evaluation unfolds.

Table 2.2 presents the three acts of the play, which correspond to the basic steps of the evaluation process (Andersen, 1988; Bensing et al., 2004). The play begins in the political realm with Act I, in which evaluators work with decision makers to define the questions that the evaluation will answer about a program. This is the most important act of the play, for if the questions do not address what decision makers truly want to know about the program, the evaluation and its findings are more likely to have little value and use in decision making. In Act II, the research methods are applied to answer the questions raised in Act I. Finally, in Act III, the answers to the evaluation questions are disseminated in a political context, providing insights that may influence decision making and policy about the program.
Act I: Asking the Questions

In Act I, the evaluation process begins when decision makers, a funding organization, or another group authorizes the evaluation of a program. In general, decision makers, funders, program managers, and other groups may want to evaluate a program for overt or covert reasons (Rossi et al., 2004; Weiss, 1972, 1998a). Overt reasons are explanations that conform to the rational-decision-making model and are generally accepted by the public (Weiss, 1972, 1998a). In this context, evaluations are conducted to make decisions about whether to:

- Continue or discontinue a program
- Improve program implementation
- Test the merits of a new program idea
- Compare the performance of different versions of a program
- Add or drop specific program strategies or procedures
- Implement similar programs elsewhere
- Allocate resources among competing programs

Because Act I occurs in the political arena, covert reasons for conducting evaluations also exist (Weiss, 1972, 1998a). Decision makers may launch an evaluation to:

- Delay a decision about the program
- Escape the political pressures from opposing interest groups, each wanting a decision about the program favoring its own position
- Provide legitimacy to a decision that already has been made
- Promote political support for a program by evaluating only the good parts of the program and avoiding or covering up evidence of program failure

Whether a program is evaluated for overt or covert reasons may depend on the values and interests of the different actors and groups in the play (Rossi et al., 2004; Shortell & Richardson, 1978).
A stakeholder analysis is an approach for identifying and prioritizing the interest groups in the evaluation process and defining each group’s values and interests about the health program, policy, or health system reform and the evaluation (Brugha & Varvasovszky, 2000; Page, 2002; Rossi et al., 2004; Sears & Hogg-Johnson, 2009; Varvasovszky & Brugha, 2000; Weiss, 1998a). The term stakeholder was created by companies to describe non-stockholder interest groups that might influence a company’s performance or survival (Brugha & Varvasovszky, 2000; Patton, 2008). In evaluation, a stakeholder is an individual or a group with a stake—or vested interest—in the health program and the evaluation findings (Patton, 2008). Based on definitions in the literature, a stakeholder is an individual, a group, or an organization that can affect or is affected by the achievement of the health program’s objectives, or the evaluation process or its findings (Bryson et al., 2011; Page, 2002).

Stakeholders tend to have two broad types of stakes (Page, 2002). The first type of stake is an investment of something of value in the health program, such as financial or human resources. For example, funders of the health program and evaluation have stakes in both. The second type is a stake in the activity of the health program; in other words, a stakeholder might be placed at risk or experience harm if the activity were withheld. For instance, providers who receive revenue from delivering a medical treatment to patients have stakes in evaluations of the effectiveness of the treatment. If the evaluations show that the treatment has few health benefits, the treatment may be delivered less often, ultimately leading to a loss of revenue.

A stakeholder analysis provides essential information for planning the evaluation in a political context, including a better understanding of the program’s and the evaluation’s political context, the identification of common goals and contentious issues among the stakeholders, and the creation of an evaluation plan that addresses stakeholder interests as much as possible (Sears & Hogg-Johnson, 2009). Rossi et al. (2004) suggest the following best practices for stakeholder analysis:

- Identify stakeholders at the outset and prioritize those with high vested interests in the health program and evaluation.
- Involve stakeholders early, because their perspectives may influence how the evaluation is carried out.
- Involve stakeholders continuously and actively through regular meetings, assistance with identifying the evaluation questions, discussion of study design issues, and requests for comments on draft reports.
- Establish a structure by developing a conceptual framework for the evaluation to build a common understanding of the health program and evaluation, promote focused discussion of evaluation issues, and keep everyone in the evaluation process “on the same page.” (Chapter 3 addresses conceptual frameworks in detail.)

Most, if not all, evaluations of health programs have multiple stakeholders with different interests, so stakeholders must be identified and prioritized. Page (2002) suggests prioritizing stakeholders based on (a) their power to influence the health program or evaluation; (b) whether a stakeholder’s actions and perspectives are perceived to be legitimate and, therefore, should be taken into account in the evaluation; and (c) urgency—that is, whether a stakeholder’s interests call for immediate attention in the evaluation.
Stakeholders with all three attributes tend to have the highest priority in the stakeholder analysis.
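As an illustration only, the short sketch below turns the three prioritization attributes from Page (2002) into a crude salience score. The Stakeholder fields, the example groups, and the simple count-based scoring rule are hypothetical conveniences for exposition, not a method prescribed in the text.

```python
from dataclasses import dataclass

@dataclass
class Stakeholder:
    name: str
    power: bool       # can the group influence the program or the evaluation?
    legitimacy: bool  # are its actions and perspectives seen as legitimate?
    urgency: bool     # do its interests call for immediate attention?

def salience(s: Stakeholder) -> int:
    """Count how many of the three attributes the stakeholder holds (0-3)."""
    return sum([s.power, s.legitimacy, s.urgency])

# Hypothetical groups for a health program evaluation:
stakeholders = [
    Stakeholder("State legislature", power=True, legitimacy=True, urgency=True),
    Stakeholder("Program managers", power=True, legitimacy=True, urgency=False),
    Stakeholder("General public", power=False, legitimacy=True, urgency=False),
]

# Groups holding all three attributes sort to the top of the priority list.
for s in sorted(stakeholders, key=salience, reverse=True):
    print(f"{s.name}: salience {salience(s)}/3")
```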
Figure 2.1 presents a power-interest grid, a tool for identifying stakeholders and roughly rating their relative power and interest in the evaluation (Bryson et al., 2011). The grid is a two-by-two matrix: power, ranging from low to high, is shown in the columns, and interest, also ranging from low to high, is shown in the rows. Power is defined as the ability of stakeholders to pursue their interests (who has the most or least control over the program or the direction of the evaluation), whereas interest refers to having a political stake in the program and evaluation (who has the most to gain or lose from the evaluation). The goal is to sort each stakeholder into one of the four mutually exclusive cells of the matrix: players, subjects, context setters, and crowd.

Players are key stakeholders and potential users of evaluation results, assuming that the questions posed in Act I address some or all of their interests. Subjects may become more engaged in the evaluation by adopting a participatory or empowerment approach to advance their interests, as explained later in this chapter. Context setters’ interests may change depending on the results of the evaluation, and obtaining their buy-in may become essential as the evaluation process unfolds. The spread of stakeholders across the four cells may reveal commonalities among them, which may be used to build stakeholder buy-in and collaboration in the evaluation process.

Figure 2.1 Stakeholder Power Versus Interest Grid
Source: “Working With Evaluation Stakeholders: A Rationale, Step-Wise Approach and Toolkit,” by J. M. Bryson, M. Q. Patton, & R. A. Bowman, 2011, Evaluation and Program Planning, 34(1), p. 5.

Evaluations of health programs tend to have similar stakeholders (Rossi et al., 2004; Shortell & Richardson, 1978). Policymakers and decision makers often authorize evaluations to supply clear-cut answers to the policy problems they are facing, such as whether to continue, discontinue, expand, or curtail the program (Bensing et al., 2003). The funding agency may want to evaluate a program to determine its cost-effectiveness and to discover whether the program has any unintended, harmful effects. Rossi et al. (2004) suggest that policymakers and evaluation funders are the top stakeholders in the evaluation process. The organization that runs the program may be interested in an evaluation to demonstrate to interest groups that the program works, to justify past or future expenditures, to gain support for expanding the program, or simply to satisfy reporting requirements imposed by the funding agency.
Program administrators may support an evaluation because it can bring favorable attention to a program that they believe is successful, which may help them earn a promotion later on. Administrators also may use an evaluation as a mechanism for increasing their control over the program, to gather evidence to justify expanding the program, or to defend the program against attacks from interest groups that want to reduce or abolish it. Alternatively, contextual stakeholders (organizations or groups in the program’s immediate environment that either support or oppose it) may advocate for an evaluation, with the hope of using “favorable” results to promote their point of view in Act III of the evaluation process.

The public and its various interest groups may endorse evaluations for accountability or to ensure that tax dollars are being spent on programs that work. The public also may support evaluations because their findings can be a source of information—in the mass media, on the Internet, in journals, and elsewhere—about the merits of a health program or health system reform, such as the Patient Protection and Affordable Care Act (ACA).

Program evaluators may want to conduct an evaluation for personal reasons, such as to earn an income or to advance their careers. Alternatively, evaluators may sympathize with a program’s objectives and see the evaluation as a means toward promoting those objectives. Other evaluators are motivated to evaluate because they want to contribute to the discipline’s knowledge by publishing their findings or presenting them at conferences. In addition, the larger evaluation and research community, composed mainly of evaluation professionals, may have interests in the methods and findings of the evaluation.

After the stakeholders are identified and their relative power and interests are defined, a second grid is constructed, as shown in Figure 2.2, displaying each stakeholder’s initial support of or opposition to the program and the proposed evaluation. The power-position grid provides information for planning the evaluation, such as developing a strategy for engaging stakeholders in the evaluation (Preskill & Jones, 2009) and taking steps to address explicitly the concerns of supporters and opponents in the evaluation process. Once the evaluation’s findings and recommendations are known, the grid offers information for planning the communication strategy to disseminate evaluation results to stakeholders and the public.

Table 2.3 presents a brief case study of a stakeholder analysis that Sears and Hogg-Johnson (2009) performed for an evaluation of a pilot program in Washington State’s workers’ compensation system, which provides health insurance coverage for workers who are injured on the job. The pilot program was authorized by the Washington State legislature in a contentious political context. Key findings of the stakeholder analysis were the identification of key stakeholders, their values, whether they supported or opposed the pilot program at the outset, and what evaluation questions the stakeholders wanted the evaluation to address.

Act I, “Asking the Questions,” has two parts, or scenes. In Scene 1, evaluators work with decision makers and other groups to develop one or more policy questions about the program, based on findings from the stakeholder analysis (see Chapter 3). A policy question is a general statement indicating what decision makers want to know about the program.
Decision makers can include the funding agency, the director of the organization that runs the program, the program’s manager and staff, outside interest groups, and the program’s clients. Together, they constitute the play’s “audience,” and the objective of the evaluation is to
produce results that will be used by at least some members of a program’s audience.

Figure 2.2 Stakeholder Power Versus Support or Opposition Grid
Source: “Working With Evaluation Stakeholders: A Rationale, Step-Wise Approach and Toolkit,” by J. M. Bryson, M. Q. Patton, & R. A. Bowman, 2011, Evaluation and Program Planning, 34(1), p. 9.
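To make the two-by-two sorting behind Figure 2.1 concrete before turning to the Table 2.3 case study, here is a minimal sketch that assigns a stakeholder to one of the four cells from rough 0-to-1 ratings. The 0.5 cutoff separating “low” from “high,” the group names, and the ratings are illustrative assumptions; only the axis definitions and cell labels come from the grid described above.

```python
def grid_cell(power: float, interest: float, cutoff: float = 0.5) -> str:
    """Assign a stakeholder to one of the four power-interest cells.

    power and interest are rough ratings on a 0-1 scale; the cutoff
    splitting 'low' from 'high' is an illustrative assumption.
    """
    high_power = power >= cutoff
    high_interest = interest >= cutoff
    if high_power and high_interest:
        return "player"          # key stakeholder, likely user of results
    if high_interest:
        return "subject"         # high interest but little power
    if high_power:
        return "context setter"  # high power but little current interest
    return "crowd"               # low power and low interest

# Hypothetical ratings for a pilot-program evaluation:
for name, p, i in [("Legislature", 0.9, 0.8), ("Injured workers", 0.2, 0.9),
                   ("State budget office", 0.8, 0.3), ("General public", 0.1, 0.2)]:
    print(f"{name}: {grid_cell(p, i)}")
```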
Source: Adapted from “Enhancing the Policy Impact of Evaluation Research: A Case Study of Nurse Practitioner Role Expansion in a State Workers’ Compensation System,” by J. M. Sears & S. Hogg-Johnson, 2009, Nursing Outlook, 57(2), pp. 99–106.

Although decision makers may authorize an evaluation of a health program for a variety of reasons, many evaluations are performed to answer two fundamental questions: “Did the program succeed in achieving its objectives?” and “Why is this the case?” For some programs, however, decision makers may want to know more about the program’s implementation than about its success in achieving its objectives. For example, questions about achieving objectives may be premature for new programs that are just finding their legs, or decision makers may want to avoid information about the program’s successes and failures that could generate controversy downstream in Act III of the evaluation process. In these and other cases, the basic policy question becomes, “Was the program implemented as intended?”

In general, as the number and diversity of decision makers from different interest groups increase, the number of policy questions about the program may increase greatly, which may decrease the likelihood of finding common ground and reaching consensus on the evaluation’s purpose and key questions. When a program addresses a controversial political issue, heated debates may arise among decision makers and interest groups about what questions should and should not be asked about the program. Evaluators can play an important role when they facilitate communication among the decision makers and interest groups to
help them form a consensus about what policy questions to ask about the program. In addition to moderating the discussions, evaluators can be active participants and pose their own policy questions for decision makers to consider. If the program is already up and running, evaluators can support the discussions by providing descriptive information about program activities that may help decision makers formulate questions or choose among alternative questions.

If the play is to continue, Scene 1 ends with one or more well-defined policy questions endorsed by decision makers and, in some contexts, by at least some interest groups. The play may end in Scene 1, however, if no questions about the program are proposed or if decision makers cannot agree on a policy question or on what the program is trying to accomplish (Rossi et al., 2004; Weiss, 1998a). For covert reasons, decision makers may place stringent limits on what questions can and cannot be asked when they want to avoid important issues or possibly to cover up suspected areas of program failure. Under these conditions, evaluation findings may have little influence on people’s views of the program, and consequently there is little value in conducting the evaluation (Weiss, 1998a).

Once one or more policy questions are developed, Scene 2 begins and the policy questions are translated into feasible evaluation questions. In Scene 2, the evaluator is responsible for translating a general policy question, such as “Does the program work?,” into a more specific evaluation question, such as “Did the smoking prevention program reduce cigarette smoking behavior among adolescents between the ages of 13 and 15?” To ensure that the evaluation will produce information that decision makers want in Act III of the play, decision makers should review the evaluation questions and formally approve them before advancing to the next act of the play.

In Scene 2, the evaluator also is responsible for conducting a feasibility assessment (Bowen et al., 2009; Centers for Disease Control and Prevention, 1999; Melnyk & Morrison-Beedy, 2012; Rossi et al., 2004; Weiss, 1998a). Before going ahead with the evaluation, the evaluator should confirm that adequate resources, including time and qualified staff (or consultants), exist to conduct the evaluation for the budgeted amount of money. The evaluator should verify that the data required for answering the questions are available or can be collected with minimal disruption and that a sufficient number of observations will exist for subsequent quantitative or qualitative data analyses (see Chapter 7). For quantitative evaluations, a key issue is whether the projected number of cases will have adequate statistical power (a rough power calculation is sketched below). If the evaluation will engage staff in multiple sites, the evaluator should request a letter indicating each site’s agreement to participate in the evaluation.

The feasibility assessment also should confirm whether the program has matured and established stable routines. Stable programs are preferred because the reasons for program success or failure can be identified more readily than in unstable programs. With stable programs, evaluation findings based on data collected a year ago have a better chance of still being relevant today. In contrast, when a program is changing continually, the findings obtained at the end of the evaluation process may apply to a program that no longer exists.
If no insurmountable obstacles are encountered and the evaluation appears to be feasible, the evaluator and the other actors in the play have a “green light” to proceed to the next act of the evaluation process.
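As one way to carry out the statistical power check noted in the feasibility assessment above, the sketch below applies the standard normal-approximation formula for a two-sided comparison of two group means. The effect size, alpha, and power values are illustrative assumptions, not recommendations from the text.

```python
import math
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per group for a two-sided, two-sample
    comparison of means, using the normal approximation:
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)**2
    where d is the standardized effect size (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for the two-sided test
    z_power = norm.ppf(power)          # quantile corresponding to desired power
    n = 2 * ((z_alpha + z_power) / effect_size) ** 2
    return math.ceil(n)

# Detecting a "medium" effect (d = 0.5) with 80% power at alpha = .05
# requires roughly 63 participants per group.
print(n_per_group(0.5))
```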
Act II: Answering the Questions

In Act II, the evaluation is conducted. Evaluators apply research methods to produce qualitative and quantitative information that answers the evaluation questions raised in Act I. Evaluations may be prospective or retrospective. In a prospective evaluation, the evaluation is designed before the program is implemented, as shown in Table 2.4. Prospective evaluations are ideal because a greater number of approaches can be considered for evaluating a program. With greater choice, evaluators have more flexibility to choose an evaluation approach with the greatest strengths and fewest weaknesses. In addition, evaluators have more freedom to specify the information they want to collect about the program and, once the program is implemented, to ensure that the information is actually gathered.

In contrast, retrospective evaluations are designed and conducted after a program has ended, and a smaller number of alternative approaches usually exist for evaluating such programs. Historical information about the program may exist in records and computer files, but the information may not be useful for answering key questions about the program. In retrospective evaluations, choice of design and availability of information are almost always compromised, which may limit what can be learned about the program.

Causation is an intrinsic feature of both prospective and retrospective evaluations. All health programs carry an inherent assumption that the program will cause change that achieves its objectives. Program theory refers to this chain of causation, or the pathways or mechanisms, through which a health program is expected to cause change that leads to desired beneficial effects and avoids unintended consequences (Alkin & Christie, 2005; Donaldson, 2007; Weiss, 1995). Program theories are always probabilistic rather than deterministic; few, if any, interventions invariably produce the intended effects (Cook & Campbell, 1979; Shadish et al., 2002).

A health program’s causal assumptions may or may not be based on formal theory. Drawing from Merton (1968), Chen (1990), Donaldson (2007), and Krieger (2011), a program’s causal assumptions may be grounded in discipline theory, or what Merton refers to as “grand” theories that are intended to explain a wide range of human behaviors by positing “what causes what.” For example, a health insurance plan may