BUS370.W3A1.04.2021
Description:
Total Possible Score: 6.00
Identifies the Type of Change Occurring
Total: 1.00
Distinguished - Clearly and accurately identifies the type of
change occurring.
Proficient - Identifies the type of change occurring. Minor
details are slightly unclear or inaccurate.
Basic - Vaguely identifies the type of change occurring.
Relevant details are unclear and/or inaccurate.
Below Expectations - Attempts to identify the type of change
occurring; however, significant details are unclear and
inaccurate.
Non-Performance - The identification of the type of change
occurring is either nonexistent or lacks the components
described in the assignment instructions.
Develops the Action Research Process
Total: 1.00
Distinguished - Comprehensively develops the action research
process.
Proficient - Develops the action research process. Minor details
are missing.
Basic - Partially develops the action research process. Relevant
details are missing.
Below Expectations - Attempts to develop the action research
process; however, significant details are missing.
Non-Performance - The development of the action research
process is either nonexistent or lacks the components described
in the assignment instructions.
Selects the Diagnostic Process for the Planning Phase and
Defends the Reasoning for the Choice
Total: 1.00
Distinguished - Selects the appropriate diagnostic process for
the planning phase and clearly defends the reasoning for the
choice.
Proficient - Selects the diagnostic process for the planning
phase and defends the reasoning for the choice. Minor details
are missing or slightly unclear.
Basic - Selects a somewhat appropriate diagnostic process for
the planning phase and partially defends the reasoning for the
choice. Minor details are missing and/or slightly unclear.
Below Expectations - Attempts to select a diagnostic process for
the planning phase and defend the reasoning for the choice;
however, significant details are missing and unclear.
Non-Performance - The selection of a diagnostic process for the
planning phase and defense of the reasoning for the choice are
either nonexistent or lack the components described in the
assignment instructions.
Classifies the Intervention Type
Total: 0.50
Distinguished - Accurately classifies the intervention type.
Proficient - Classifies the intervention type. Minor details are
inaccurate.
Basic - Vaguely classifies the intervention type. Relevant
details are inaccurate.
Below Expectations - Attempts to classify the intervention type;
however, significant details are inaccurate.
Non-Performance - The classification of the intervention type is
either nonexistent or lacks the components described in the
assignment instructions.
Defines How to Implement the Intervention Type
Total: 0.50
Distinguished - Thoroughly defines how to implement the
intervention type.
Proficient - Defines how to implement the intervention type.
Minor details are missing.
Basic - Minimally defines how to implement the intervention
type. Relevant details are missing.
Below Expectations - Attempts to define how to implement the
intervention type; however, significant details are missing.
Non-Performance - The definition of how to implement the
intervention type is either nonexistent or lacks the components
described in the assignment instructions.
Utilizes the Action Research Checking Phase to Validate the
Change Outcome
Total: 0.50
Distinguished - Completely utilizes the Action Research
Checking phase to validate the change outcome.
Proficient - N/A
Basic - Partially utilizes the Action Research Checking phase to
validate the change outcome. Relevant details are incomplete.
Below Expectations - N/A
Non-Performance - The utilization of the Action Research
Checking phase to validate the change outcome is either
nonexistent or lacks the components described in the assignment
instructions.
Written Communication: Control of Syntax and Mechanics
Total: 0.25
Distinguished - Displays meticulous comprehension and
organization of syntax and mechanics, such as spelling and
grammar. Written work contains no errors and is very easy to
understand.
Proficient - Displays comprehension and organization of syntax
and mechanics, such as spelling and grammar. Written work
contains only a few minor errors and is mostly easy to
understand.
Basic - Displays basic comprehension of syntax and mechanics,
such as spelling and grammar. Written work contains a few
errors which may slightly distract the reader.
Below Expectations - Fails to display basic comprehension of
syntax or mechanics, such as spelling and grammar. Written
work contains major errors which distract the reader.
Non-Performance - The assignment is either nonexistent or
lacks the components described in the instructions.
Written Communication: APA Formatting
Total: 0.25
Distinguished - Accurately uses APA formatting consistently
throughout the paper, title page, and reference page.
Proficient - Exhibits APA formatting throughout the paper.
However, layout contains a few minor errors.
Basic - Exhibits limited knowledge of APA formatting
throughout the paper. However, layout does not meet all APA
requirements.
Below Expectations - Fails to exhibit basic knowledge of APA
formatting. There are frequent errors, making the layout
difficult to distinguish as APA.
Non-Performance - The assignment is either nonexistent or
lacks the components described in the instructions.
Intro, Thesis, & Conclusion
Total: 0.25
Distinguished - The paper is logically organized with a well-
written introduction, thesis statement, and conclusion.
Proficient - The paper is logically organized with an
introduction, thesis statement, and conclusion. One of these
requires improvement.
Basic - The paper is organized with an introduction, thesis
statement, and conclusion. The introduction, thesis statement,
and/or conclusion require improvement.
Below Expectations - The paper is loosely organized with an
introduction, thesis statement, and conclusion. The introduction,
thesis statement, and/or conclusion require much improvement.
Non-Performance - The introduction, thesis statement, and
conclusion are either nonexistent or lack the components
described in the assignment instructions.
Written Communication: Page Requirement
Total: 0.25
Distinguished - The length of the paper is equivalent to the
required number of correctly formatted pages.
Proficient - The length of the paper is nearly equivalent to the
required number of correctly formatted pages.
Basic - The length of the paper is equivalent to at least three
quarters of the required number of correctly formatted pages.
Below Expectations - The length of the paper is equivalent to at
least one half of the required number of correctly formatted
pages.
Non-Performance - The assignment is either nonexistent or
lacks the components described in the instructions.
Written Communication: Resource Requirement
Total: 0.50
Distinguished - Uses more than the required number of
scholarly sources, providing compelling evidence to support
ideas. All sources on the reference page are used and cited
correctly within the body of the assignment.
Proficient - Uses the required number of scholarly sources to
support ideas. All sources on the reference page are used and
cited correctly within the body of the assignment.
Basic - Uses less than the required number of sources to support
ideas. Some sources may not be scholarly. Most sources on the
reference page are used within the body of the assignment.
Citations may not be formatted correctly.
Below Expectations - Uses an inadequate number of sources
that provide little or no support for ideas. Sources used may not
be scholarly. Most sources on the reference page are not used
within the body of the assignment. Citations are not formatted
correctly.
Non-Performance - The assignment is either nonexistent or
lacks the components described in the instructions.
Action Research:
The Checking Phase
6
DragonImages/iStock/Thinkstock
Learning Outcomes
After reading this chapter, you should be able to:
• Describe evaluation according to how it is defined and what
steps encompass it.
• Identify the types and categories of evaluation.
• Examine different frameworks of evaluation.
• Determine how to plan and perform an evaluation.
• Explore strategies for concluding the action research process,
including terminating the
consultant–client relationship or recycling the intervention.
In Chapter 5, we learned about the Public Health Leadership
Academy, which was founded by a
major university using funds from a federal grant to promote
leadership development among
public health employees in a southern state. The project
involved developing a Leadership Acad-
emy for mid-level managers who exhibited potential to advance
to higher levels of public health
leadership in the state. The intervention was in response to a
long-term need based on previous
analyses of the state’s public health agency, including
succession planning. This need had existed
for many years because there were not enough public funds
available to provide a comprehensive
program. The grant finally created the opportunity to deliver
this much-needed program. James
(the client) worked with Leah (the external consultant) to plan
and implement the program.
James and Leah engaged in action research to collect and
analyze data about the needs of the
target population (mid-level public health managers) using
interviews and surveys to deter-
mine the content of the courses that would be offered in the
Leadership Academy. The project
had a 2-year implementation timeline, with year 1 focused on
planning and year 2 devoted to
implementation. Evaluation would be ongoing and continue past
year 2 with a new cohort start-
ing in year 3, staffed by internal consultants.
During the year 1 planning phase, James and Leah were very
involved in collecting data to
inform the content and process of the Leadership Academy.
They continually stopped to reflect
on their decisions, plans, and processes and made adjustments
to each as the project unfolded.
They also piloted the first session among a small group of
advisors to the Leadership Academy to
make sure their design would resonate with the participants.
They made more changes follow-
ing the pilot to improve the program.
During year 2, 25 managers chosen for the
academy participated in monthly leader-
ship development experiences and semi-
nars. The Leadership Academy began in
September with these 25 managers, who
had been competitively selected from across
the state. The participants convened at a
resort, and the program was kicked off by
high-level state public health officials. The
first session lasted 3 days, during which
time the participants received the results of
a leadership styles inventory, listened to
innovative lectures and panels on leader-
ship, planned an individual leadership proj-
ect in their districts, and engaged with each
other to develop working relationships. The academy continued
meeting monthly for a year and
focused on a range of topics related to leadership that were
prioritized based on prior data col-
lection. The grant provided for an evaluator, so data was
collected at each meeting.
The first 2 years of the project involved ongoing assessment of
the academy’s plans and imple-
mentation, followed by appropriate adjustments. James and
Leah included cycles of assessment
and adjustment as a regular part of their agenda and
conversation.
The evaluator observed all of the sessions and sent out formal
evaluations after each monthly
session. During the sessions, facilitators regularly asked
participants to provide feedback. For
example, they were asked to respond to questions like, “How
did that exercise work for you?”
PeopleImages/E+/Getty Images Plus
The Leadership Academy is off to a lively start.
“How are you looking at this now?” and “How could we do this
better?” The evaluation data con-
tributed to changes to the planned curriculum and program
activities. For example, the partici-
pants took an inventory to assess leadership style and wanted to
spend more time on the topic,
so the next month’s agenda was adjusted to accommodate the
request. Participants complained
that the sequencing of topics was not logical, so the agenda for
the second cohort to follow in
year 3 of the project was adjusted.
The first cohort graduated at its final session, during which the
cohort welcomed the mem-
bers of the new cohort. Leah had worked with an internal team
of consultants throughout the
implementation, and the team was ready to take over the
facilitation with the second cohort.
Following the event, Leah met with James and the new team to
tie up loose ends and make the
transition. She met periodically with James during the third year
to ensure that the Leadership
Academy was running smoothly.
As this vignette illustrates, although checking is the third phase
of the action research process,
it takes place during the planning and doing phases as well.
This chapter focuses on checking,
which is a data-based evaluation to assess whether an
intervention had the intended result.
6.1 Defining Evaluation in Action Research
The model of action research used in this book has three phases:
planning, doing, and check-
ing. See Table 5.1 in Chapter 5 for a review of each phase.
The final phase of action research, checking, involves three
steps. First, the consultant and
client gather data about the key changes and learning that have
occurred. This step is known
as assessing changes. Next, the consultant uses this data to
assess if the intended change
occurred. Was the change implementation effective? Were the
proposed outcomes met? As
a result of this assessment, the consultant adjusts the
intervention accordingly. This step is
known as adjusting processes. The third step is to terminate the
OD process or repeat it
to correct or expand the intervention (known as recycling).
Assessment, adjustment, and
terminating or recycling are collectively known as evaluation of
the action research process.
Purposes of Evaluation
The overall purpose of conducting an eval-
uation is to make data-based decisions
about the quality, appropriateness, and
effectiveness of OD interventions. Evalua-
tion helps us determine whether an inter-
vention’s intended outcomes were real-
ized and assess and adjust the intervention
as needed. Evaluation helps ensure
accountability and knowledge generation
from the intervention.
zimmytws/iStock/Thinkstock
Evaluation makes judgments about the effectiveness and impact of OD interventions
through the analysis of data such as “employee satisfaction” surveys.
An evaluation creates criteria of merit, constructs standards, measures performance,
compares performance to standards, and
synthesizes and integrates data into a judgment of merit or
worth (Fournier, 1995). Evalua-
tion findings help render judgments, facilitate improvements, or
generate knowledge (Patton,
1997). Evaluations used to render judgments focus on
accountability for outcomes such as
holding management responsible for making changes in
leadership. Improvements concen-
trate on developmental processes such as creating new learning
and growth. Knowledge gen-
eration emphasizes academic contributions such as new insights
that may change a process.
Establishing a Benchmark
To illustrate how evaluation helps OD consultants assess and
adjust an intervention, let us
consider an organization that has conducted survey research to
assess employee satisfaction.
The first year creates a benchmark (when an organization
compares its business processes,
practices, and performance standards to those of other
organizations that are considered best
in class) that can be used in future evaluations. Further, let us
imagine that employee satisfac-
tion is at a moderately satisfied level the first time it is
measured. When the survey research
instrument on employee satisfaction is replicated in future
years, the level of satisfaction will
be compared with the original baseline to evaluate whether the
organization is doing worse,
the same, or better than it had originally. The evaluation can
help the organization identify
key changes and learning that occurred as a result of the
intervention. Then the organization
can adjust practices accordingly.
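To make the baseline comparison concrete, here is a minimal Python sketch of how a consultant might compare follow-up survey means with the first-year benchmark. The 1–5 satisfaction scale, the scores, and the 0.2-point threshold are hypothetical illustrations, not figures from the example above.

# Minimal sketch: comparing annual employee satisfaction survey means
# against a year-1 baseline. Scale, scores, and threshold are hypothetical.
BASELINE_YEAR = 2021
BASELINE_MEAN = 3.4          # "moderately satisfied" on a 1-5 scale
MEANINGFUL_CHANGE = 0.2      # smallest difference treated as a real shift

yearly_means = {2022: 3.5, 2023: 3.8}  # hypothetical follow-up surveys

for year, mean in sorted(yearly_means.items()):
    delta = mean - BASELINE_MEAN
    if delta >= MEANINGFUL_CHANGE:
        verdict = "better than baseline"
    elif delta <= -MEANINGFUL_CHANGE:
        verdict = "worse than baseline"
    else:
        verdict = "about the same as baseline"
    print(f"{year}: mean {mean:.1f} ({delta:+.1f} vs. {BASELINE_YEAR}) -> {verdict}")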
The American Productivity and Quality Center developed a
benchmarking definition repre-
senting consensus among 100 U.S. companies:
Benchmarking is a systematic and continuous measurement
process; a pro-
cess of continuously measuring and comparing an
organization’s business
process against business process leaders anywhere in the world
to gain infor-
mation, which will help the organization take action to improve
its perfor-
mance. (as cited in Simpson, Kondouli, & Wai, 1999, p. 718)
See Who Invented That? Benchmarking to read about the origins
of benchmarking.
Benchmarking is a specific type of action research, but the
process can also be applied during
OD intervention evaluations. There are several types of
benchmarking (Ellis, 2006):
Who Invented That? Benchmarking
The exact derivation of the term benchmarking is unknown. It is
thought to have possibly origi-
nated from using the surface of a workbench in ancient Egypt to
mark dimensional measure-
ments on an object. Alternatively, surveyors may have used the
term to refer to the process of
marking cuts into stone walls to measure the altitude of land
tracts, and cobblers may have
used it to describe measuring feet for shoes (Levy & Ronco,
2012).
Benchmarking in U.S. business emerged in the late 1970s.
Xerox is generally considered the
first corporation to apply benchmarking. Robert Camp (1989), a
former Xerox employee,
wrote one of the earliest books on benchmarking. Camp
described how U.S. businesses took
their market superiority for granted and were thus unprepared
when higher-quality Japanese
goods disrupted U.S. markets.
• Competitive: Uses performance metrics to assess how well or
poorly an organization
is performing against direct competitors, such as measuring
quality defects between
the companies’ products
• Comparative: Focuses on how similar processes are handled
by different organiza-
tions, such as two organizations’ recruitment and retention
activities
• Collaborative: Involves sharing knowledge about a particular activity between
companies, with the goal of learning
Bogan and English (2014) described benchmarking types a bit
differently, noting the activity
might focus on processes (e.g., order fulfillment or billing
processes, similar to comparative),
performance (e.g., comparing competitive positions, similar to
competitive), and strategy
(e.g., identifying winning tactics across an industry, perhaps
similar to collaborative). Almost
any issue of interest can be benchmarked, including processes,
financial results, investor per-
spectives, performance, products, strategy, structure, best
practices, operations, and manage-
ment practices. Benchmarking could be part of the data
collection process in OD, an interven-
tion, or the basis of an evaluation. Table 6.1 shows typical
benchmarking steps.
Table 6.1: Typical benchmarking process
1. Identify process, practice, method, or product to benchmark.
Example: Identifying best practices for recruiting and retaining a diverse workforce
2. Identify the industries with similar processes.
Example: Finding the companies that are best at retaining a diverse workforce, even those in a different industry
3. Identify organization leaders in a target area.
Example: Selecting the organizations against which to benchmark
4. Survey the selected organizations for their measures and practices.
Example: Sending a survey to the target companies asking for information on issues such as turnover and hire rates, formal retention programs (e.g., orientation, development), management training, and rewards
5. Identify best practices.
Example: Analyzing data to identify best practices to implement. Analysis depends on the type of data collected—that is, whether it is statistical (quantitative data), such as from a survey of employees on attitudes about diversity, or interpretive (qualitative data), such as from interviews with employees who quit.
6. Implement new and improved practices.
Example: Implementing best practices, such as new recruitment and retention strategies, affinity groups, or rewards for managers who develop a diverse staff
Other Purposes of Evaluation
Caffarella (1994) and Caffarella and Daffron (2013) identified
12 specific purposes of evalu-
ation data. Evaluation helps to
1. adjust the intervention as it is being made in terms of design,
delivery, management,
and evaluation;
2. keep employees focused on the intervention’s goals and
objectives;
3. provide information to inform the continuation of the
intervention;
4. identify improvements needed to design and deliver the
intervention;
5. assess the intervention’s cost-effectiveness;
6. justify resource allocations;
7. increase application of participants’ learning by building in
strategies that help them
transfer learning back to the organization;
8. provide data on the results of the intervention;
9. identify ways to improve future interventions;
10. cancel or change an intervention that is poorly designed or
headed for failure;
11. explore why an intervention fails; and
12. provide intervention accountability.
Moreover, during the planning phase, evaluation can help
consultants assess needs and make
decisions about how best to intervene. The Leadership
Academy’s goal was to improve lead-
ership, but James and Leah had to assess the content that would
be most appropriate for lead-
ership in public health. Then, when the participants were
selected, they had to make further
assessments to ensure the program was relevant to the
participants’ particular needs.
Evaluation may also help test different theories and models of
addressing the problem. In the
case of the Leadership Academy, James and Leah based their
interventions on theories and
models of leadership. They threw out what did not resonate with
the participants or work
well during sessions and revised the program for the second
cohort.
Evaluation also helps monitor how the intervention is going
during implementation so it can
be adjusted accordingly. Such adjustments occurred throughout
the Leadership Academy
implementation over the course of a year.
Finally, evaluation helps determine whether the intervention
goals were met and what impact
the change had on individuals and the organization. Measuring
this type of impact may require
more longitudinal study than other types of evaluation. The
evaluation of impact helps con-
sultants decide whether to extend the intervention, change it, or
abandon it altogether. The
Leadership Academy will be continually reevaluated as new
cohorts participate each year.
Clearly, evaluations have the potential to accomplish a variety
of goals. Throughout the OD
process, it is critical to stay focused on an evaluation’s purpose.
Have you experienced any of
the evaluation activities discussed here?
Steps in Evaluation
Just as there are many action research models, there are many approaches to undertaking
evaluation; that is, there are different ways to model the steps in the process. Two are
discussed here.
Evaluation Hierarchy
Rossi, Lipsey, and Freeman (2004) offered an evaluation
hierarchy that recognizes the impor-
tance of engaging in evaluation from the beginning of the action
research process. That is,
evaluation should occur during the initial client contacts, be
built into the plan for interven-
tion, and be ongoing throughout the implementation, prior to the
formal assessment of the
intervention’s impact, cost, and efficiency. Doing evaluation is
a matter of conducting a mini-
action research project.
Caffarella’s Systematic Program Evaluation
Caffarella (1994) outlined the steps generally taken during an
evaluation. Her steps have been
modified to address key OD issues in the following points.
Caffarella’s steps are intended to be
sequential under ideal conditions, although reality may be quite
different. Note that Caffarella's model includes many steps; it elaborates on the process
in more detail than some other models do but still follows an action research process.
1. Secure support for the evaluation from stakeholders such as
the client and key
management. This step should be a provision of the contract, as
discussed in Chap-
ter 3. It is the process of getting management to commit to the
time and resources
needed to evaluate the process, as well as being willing to pay
attention to the
findings.
2. Identify individuals who can be involved in planning and
overseeing the evaluation,
such as the participants, management, client, and others affected
by the interven-
tion. This is usually led by the consultant and client and would
involve employees
who are engaged in the implementation. It could also involve
those affected by the
change who did not necessarily participate in it, such as
customers or suppliers.
3. Define the evaluation’s purpose and how the results will be
used. This step is
elaborated on in a later section of this chapter. The evaluation’s
focus should be
determined and then built accordingly. For example, is it aimed
at improving a
process or judging an outcome? Does it pertain to planning the
intervention or the
intervention itself? Is it aimed at assessing adherence to budget
or performance
outcomes?
4. Specify what will be judged and formulate the evaluation
questions. This step is
driven by the evaluation’s purpose. If you decide to evaluate
how satisfied employ-
ees are with a new performance appraisal process, questions
should relate to that
change and be used to judge whether it was effective and should
continue.
5. Determine who will supply the evidence, such as participants,
customers, manage-
ment, employees, or others affected by the intervention.
6. Specify the evaluation strategy in terms of purpose,
outcomes, timeline, budget,
methods, and so forth.
7. Identify the data collection methods and timeline. Data
collection was discussed
extensively in Chapter 5. The selected methods should match
the evaluation’s
purpose.
8. Specify the data analysis procedures and logistics (covered in
Chapter 5). However,
the analysis should be focused on making decisions and changes
to the interven-
tion, not on diagnosing the problem.
9. Define the criteria for judging the intervention. This can be
somewhat subjec-
tive unless the metrics are defined in advance. For example, if
the intervention
were aimed at improving employee retention, would a
consultant measure simply
whether it improved or look for a certain benchmark (such as
10%) to deem it
successful?
10. Complete the evaluation, formulate recommendations, and
prepare and present
the evaluation report. These steps mirror the data analysis steps
presented in
Chapter 5 and the feedback meeting strategies in Chapter 4.
11. Respond to recommendations for changes as appropriate.
Adapted from Planning Programs for Adult Learners: A
Practical Guide for Educators, Trainers, and Staff Developers
(pp. 255–
256), by R. S. Caffarella, 1994, San Francisco, CA: Jossey-
Bass. © John Wiley & Sons.
Table 6.2 compares the action research model used in this book
to Rossi, Lipsey, and Free-
man’s and Caffarella and Daffron’s evaluation steps. These
models vary in terms of detail and
number of steps, but they essentially follow the three phases of
action research: planning,
doing, and checking. Evaluation is essentially conducting
research within an action research
process, as shown by these three examples.
Table 6.2: Comparing the action research model to evaluation
models
Planning phase
Rossi, Lipsey, and Freeman evaluation model:
1. Assess intervention cost and efficiency.
2. Assess intervention outcome or impact.
Caffarella and Daffron evaluation model:
1. Secure support for the evaluation from stakeholders.
2. Identify individuals who can be involved in planning and overseeing the evaluation.
3. Define the evaluation's purpose and how the results will be used.
4. Specify what will be judged and formulate the evaluation questions.
5. Determine who will supply the evidence.
6. Specify the evaluation strategy in terms of purpose, outcomes, timeline, budget, methods, and so forth.
7. Identify the data collection methods and timeline.
8. Specify the data analysis procedures and logistics.
9. Determine the specific timeline and the budget needed to conduct the evaluation.

Doing phase
Rossi, Lipsey, and Freeman evaluation model:
3. Assess intervention implementation.
Caffarella and Daffron evaluation model:
10. Complete the evaluation, formulate recommendations, and prepare and present the evaluation report.

Checking phase
Rossi, Lipsey, and Freeman evaluation model:
4. Assess intervention design and theory.
5. Assess need for the intervention.
Caffarella and Daffron evaluation model:
11. Respond to recommendations for changes as appropriate.
Caffarella and Daffron’s steps are comprehensive, covering the
key tasks that must be com-
pleted during an intervention’s evaluation. However, it may not
always be possible to follow
these clearly articulated steps; evaluation can be unpredictable
and may present challenges
that are often unanticipated. For example, if an implementation
has been challenging, a cli-
ent may balk at the evaluation out of fear of receiving negative
feedback; on the other hand,
employees may be reluctant to participate if trust levels are low.
Thus, it helps to pay atten-
tion to relevant dynamics and expect the unexpected.
Evaluation provides critical information about an intervention’s
impact both during and after
its implementation. Thus, no matter what model is followed for
performing an evaluation, it
is essential to begin planning it before the intervention is well
underway. A consultant’s job is
to ensure that evaluation is integrated into the OD process from
start to finish. Unfortunately,
evaluation is often overlooked in favor of wanting simply to
take action on the problem, and
too many consultants consider their work finished once the
intervention has occurred. In
other cases, consultants go about evaluation haphazardly. If
they cannot demonstrate that
their action was effective, however, they risk undermining their client's confidence in the
OD effort, failing to solve the problem permanently, and repeating similar mistakes on
future assignments.
6.2 Types and Categories of Evaluation
Theorists have proposed different types and categories for
evaluation. This section identifies
some of these different approaches.
Types of Evaluation
Evaluation can be either formative or summative, depending on
the intervention’s goal
(Scriven, 1967, 1991a, 1991b). Scriven is considered a leader in
evaluation; you can view one
of his lectures by visiting the media links provided at the end of
this chapter.
Formative Evaluation
Evaluating an implementation while it is still in progress, with the aim of informing
changes to it, is called formative evaluation. Formative evaluation is concerned with
improving and enhancing the OD
process rather than judging its merit. The following types of
questions might be asked when
conducting a formative evaluation:
• What are the intervention’s strengths and weaknesses?
• How well are employees progressing toward desired
outcomes?
• Which employee groups are doing well/not so well?
• What characterizes the implementation problems being
experienced?
• What are the intervention’s unintended consequences?
• How are employees responding? What are their likes, dislikes,
and desired changes?
• How are the changes being perceived culturally?
• How well is the implementation conforming to budget?
• What new learning or ideas are emerging?
For example, consider an intervention focused on changing
reporting relationships as part of
a work redesign in a manufacturing plant. A consultant might
discover that some of the new
arrangements do not make sense once implemented. These might
therefore be modified as
the work redesign progresses. Asking questions pertaining to the problems, the employees'
perspectives, their likes and dislikes, and so forth yields information that helps tweak and
improve the process. Formative evaluation is generally ongoing throughout the
implementation.
Consider This
Think of an evaluation in which you have participated. How well did it follow the
plan–do–check steps?
Summative Evaluation
Undertaking evaluation at the end of the OD
implementation, with the goal of judging
whether the change had the intended out-
comes and impact, is called summative
evaluation. Summative evaluation is also
known as outcome or impact evaluation
because it allows the intervention’s overall
effectiveness to be ascertained. A consul-
tant can then decide whether to continue or
terminate it (Patton, 1997). The following
types of questions might be asked by the
consultant, management, or an external
evaluator when conducting a summative
evaluation:
• Did the intervention work?
• Did the intervention satisfactorily
address the performance gap?
• Should the intervention be continued or expanded?
• How well did the intervention stick to the budget?
Summative evaluations should follow four steps:
1. Select the criteria of merit—what are the sought metrics?
2. Set standards of performance—what level of resolution is
sought?
3. Measure performance—conduct the evaluation.
4. Synthesize results into a judgment of value. (Shadish, Cook,
& Leviton, 1991,
pp. 85–94)
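As a rough illustration of these four steps, the short Python sketch below encodes hypothetical criteria of merit and standards of performance, records measured results, and synthesizes them into a simple judgment. The metrics and targets are invented for illustration; in practice they would come from the client's goals and data.

# Hypothetical sketch of the four summative steps: criteria, standards,
# measured performance, and a synthesized judgment.

# Steps 1-2: criteria of merit and their standards of performance
standards = {
    "employee_retention_improvement_pct": 10.0,
    "avg_satisfaction_score": 4.0,   # 1-5 scale
}

# Step 3: measured performance after the intervention
measured = {
    "employee_retention_improvement_pct": 12.5,
    "avg_satisfaction_score": 3.7,
}

# Step 4: synthesize results into a judgment of value
met = {name: measured[name] >= target for name, target in standards.items()}
for name, ok in met.items():
    print(f"{name}: measured {measured[name]} vs. standard {standards[name]} -> "
          f"{'met' if ok else 'not met'}")

print("Overall judgment:",
      "outcomes achieved" if all(met.values()) else "adjust or recycle the intervention")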
Adequate levels of both formative and summative evaluation
must be incorporated into the
OD process. Failure to conduct formative evaluation leads to
missed opportunities to adjust
and improve on the implementation as it is in progress.
Omitting the summative evaluation
means never learning the intervention’s outcomes and impact or
lacking adequate data on
which to base future decisions.
Consider This
What types of formative evaluation have you participated in or
observed?
Vgajic/E+/Getty Images Plus
An OD consultant must always assess
interventions to learn the outcomes and
impact, which serve as a foundation for future
decisions.
Cervero’s Evaluation Categories
Cervero (1985) identified seven categories of evaluation for
planners of educational pro-
grams that have relevance for OD. His list has been adapted here for OD interventions:
1. Intervention design and implementation. This could be either
formative or summa-
tive, because the design and intervention are assessed for fit and
impact. Imagine
implementing a new performance appraisal process. Formative
evaluation might
involve piloting the evaluation and evaluating how well it
worked for both employees
and supervisors. The performance appraisal would then be
modified and imple-
mented. Summative evaluation in this case might examine
whether the new perfor-
mance appraisal process improved performance, satisfaction,
and learning.
2. Employee participation. This type of evaluation assesses
employees’ level of involve-
ment in the intervention. This could also be formative or
summative. In the case of
performance evaluation, a consultant might examine the level of
involvement and
seek feedback from employees. A summative evaluation might
evaluate whether
the level of employee participation was adequate and whether it
yielded positive
outcomes.
3. Employee satisfaction. This type of evaluation assesses
employees’ level of satis-
faction in the intervention. This could also be formative or
summative. In the case
of performance evaluation, a consultant might examine the level
of satisfaction
with the new performance appraisal or its implementation
process. A summative
evaluation might evaluate how satisfied employees are once the
new performance
appraisal system is in place.
4. Acquisition of new knowledge, skills, and attitudes. This type
of evaluation measures
learning during and after the intervention and could also be
formative or summa-
tive. In the case of performance evaluation, a consulta nt might
examine the level of
involvement and seek feedback from employees. A formative
evaluation during the
pilot phase might determine that supervisors lack the skills to
effectively implement
the new process and give the level of feedback desired. It would
allow the consul-
tant and client to revise the process and provide adequate
training to supervisors. A
summative evaluation would assess the level of learning from
the new performance
appraisal system. This could take the form of employees
improving their perfor-
mance and supervisors showing demonstrated improvement in
their ability to give
feedback.
5. Application of learning after the intervention. This category
is similar to the previous
one, but it is summative; it judges how learning was applied
after the intervention.
A consultant might look for evidence of how supervisors
applied what they learned
about giving effective performance feedback to other
interactions with employees
throughout the year.
6. The impact of the intervention on individuals and the
organization. This category
could be formative or summative. The formative evaluation
might look at how the
intervention affects organization life via communication,
understanding, participa-
tion, and satisfaction. A summative evaluation might look at the
overall impact on
job satisfaction, financial performance, and retention.
7. Intervention characteristics associated with outcomes. This
type of evaluation attempts
to link aspects of the intervention to outcomes and is
summative. This can be more
difficult to measure if the intervention was complex or had
several interventions built
into it. A consultant might evaluate how a participative process
affected the imple-
mentation’s overall success or employee satisfaction.
6.3 Frameworks of Evaluation
This section profiles some common evaluation frameworks.
There is no “best” framework.
Rather, you should find what you are comfortable working with
and what effectively fits the
situation. Further, you should determine the purpose of the
evaluation and use a framework
that best facilitates it.
Kirkpatrick’s Four-Level Framework
Kirkpatrick’s (2006) four-level evaluation framework can be
formative or summative and
is one of the most widely known evaluation typologies. It
became popular in the 1990s,
although Kirkpatrick developed it as a doctoral dissertation at
the University of Wisconsin
in the 1950s. It was originally created to evaluate training
programs, and OD consultants use
it to conduct evaluation at a range of points over time. The
framework classifies an interven-
tion’s outcomes into one of four categories—reaction, learning,
behavior, or results (see Table
6.3). An outcome is assigned a category based on how difficult
it is to evaluate. For example,
the simplest type of outcome to evaluate is participant reaction
to the intervention. Thus, this
is assigned level 1.
Table 6.3: Kirkpatrick’s four-level evaluation framework
Level 1 (Reaction): Did participants like the intervention?
Level 2 (Learning): What skills and knowledge did participants gain?
Level 3 (Behavior): How are participants performing differently?
Level 4 (Results): How was the bottom line affected?
Consider This
If you have had the opportunity to evaluate organization change
efforts, have you experi-
enced any evaluative measures on Cervero’s list? Which ones
would be most relevant for
the Leadership Academy vignette? Can you think of other
categories that might be added to
Cervero’s list?
Level 1: Evaluation
Level 1 measures participant reaction to the intervention. This
type of evaluation is sometimes
referred to as a “smile sheet” (see Figure 6.1 and Tips and
Wisdom: Smile Sheet) because it
measures only what participants thought and felt during or
immediately after an intervention,
and in very simple terms—whether they were satisfied, neutral,
or dissatisfied. As an example,
consider the Leadership Academy vignette. At this level of
evaluation, the consultants might
ask the academy participants questions such as “How well did
you like the session?” “Was the
learning environment comfortable?” “Were the facilitators
capable and credible?” and “Did you
feel it was time well spent?” This type of evaluation may make
facilitators feel good about intro-
ducing an intervention, but it does not effectively measure
change. Unfortunately, it is the most
common form of evaluation employed in organizations, because
it is the easiest to measure.
Figure 6.1: Reaction evaluation using a smile sheet
The reaction evaluation sheet shown here is just one way to
solicit feedback from an audience.
How are we doing?
[Smiley-face rating columns, ranging from satisfied to neutral to dissatisfied, in which participants mark their reactions]
Tips and Wisdom: Smile Sheet
If you are facilitating a meeting, workshop, or seminar and want
to gauge the participants’
reactions to the event, create an opportunity for them to share
feedback. One way to do this is
to put a flip chart page with the smiley face symbols shown in
Figure 6.1 in the hall outside the
session room. Give participants a dot apiece and ask them to
place it in the column that best
represents how they feel about the session so far. It is important
to place the chart out of your
eyeshot, so people feel comfortable sharing honest feedback.
This offers a snapshot of partici-
pants’ reactions and allows you to make adjustments in the
moment. You can also use Twitter
or other social media to solicit this data.
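Tallying such a reaction check is straightforward; the brief Python sketch below counts hypothetical dot placements in each column and reports the share of each rating.

# Hypothetical tally of a smile-sheet ("How are we doing?") reaction check.
from collections import Counter

# Each entry is one participant's dot placement.
dots = ["satisfied", "satisfied", "neutral", "satisfied",
        "dissatisfied", "satisfied", "neutral"]

counts = Counter(dots)
total = len(dots)
for rating in ("satisfied", "neutral", "dissatisfied"):
    n = counts.get(rating, 0)
    print(f"{rating}: {n} ({n / total:.0%})")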
Level 2: Evaluation
Level 2 measures participant learning from an intervention. This
level of evaluation assesses
whether the intervention helped participants improve or
increased their knowledge or skills.
At this level, James and Leah might ask the Leadership
Academy participants, “What was the
key thing you learned from this session?” or “What new skills
have you acquired as a result
of this experience?” This type of evaluation works best after
participants have had a chance
to return to their workplace and apply the principles and
behaviors they learned (and thus
is summative). Participants might also be interviewed or
surveyed about learning during the
course of the intervention (which would be formative).
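One common way to quantify level 2 learning is the gain between a pre-test and a post-test. The short Python sketch below computes per-participant and average gains; the participants and scores are hypothetical.

# Hypothetical level 2 (learning) check: gain from pre-test to post-test.
pre_scores = {"participant_1": 62, "participant_2": 70, "participant_3": 55}
post_scores = {"participant_1": 81, "participant_2": 78, "participant_3": 74}

gains = {p: post_scores[p] - pre_scores[p] for p in pre_scores}
avg_gain = sum(gains.values()) / len(gains)

for p, g in gains.items():
    print(f"{p}: +{g} points")
print(f"Average learning gain: {avg_gain:.1f} points")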
Level 3: Evaluation
Level 3 measures changes in behavior. This level of summative
evaluation assesses whether
participants are using their new knowledge or skills in their job.
At this level, James and Leah
might ask Leadership Academy participants, their supervisors,
or subordinates, “To what
extent has the leader’s behavior changed and improved as a
result of the Leadership Acad-
emy?” or “What is the person doing differently now?” Similar
to level 2, this type of evaluation
is best done postintervention. It can be accomplished by
interviewing, observing, or survey-
ing participants and stakeholders affected by the intervention.
Level 4: Evaluation
Level 4 measures results for the organization. This level of
summative evaluation measures
how the intervention affected business performance or
contributed to the achievement of
organization goals. At this level, James and Leah might ask
Leadership Academy participants,
their supervisors, or subordinates, “How has the organization
benefited from the Leader-
ship Academy?” “To what degree has employee satisfaction,
productivity, or performance
improved?” “To what degree has recruitment and retention of
employees improved as a result
of improved leadership?” “How many promotions have occurred
as a result of participating
in the academy?” or “How much money has the organization
saved due to better leadership
decisions?” As these questions indicate, it might be difficult to
actually measure and attribute
changed leadership to organization results and outcomes.
Kirkpatrick continued to evolve his model and even questioned
whether it was a true model
or just a guideline; his family members have continued
developing it (Kirkpatrick & Kirkpat-
rick, 2016). He also expanded his focus to consider an
intervention’s cost–benefit ratio and
whether it demonstrated a return on investment. Measuring
these variables can also present
challenges to organizations and OD consultants.
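Return on investment is commonly expressed as net benefits divided by costs. The brief Python sketch below applies that general formula to hypothetical intervention figures; in practice, isolating which benefits can actually be attributed to the intervention is the difficult part.

# Standard ROI formula applied to hypothetical intervention figures.
program_costs = 150_000          # design, delivery, participant time
monetized_benefits = 210_000     # estimated benefits attributed to the intervention

net_benefits = monetized_benefits - program_costs
benefit_cost_ratio = monetized_benefits / program_costs
roi_pct = (net_benefits / program_costs) * 100

print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
print(f"ROI: {roi_pct:.0f}%")   # 40% means $0.40 returned per $1 invested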
Lawson’s Application of Kirkpatrick’s Framework
Building on Kirkpatrick’s framework, Lawson (2016)
categorized variables relating to the
what, who, when, how, and why of the framework’s use. Her
approach has been adapted for
OD and is depicted in Table 6.4.
Table 6.4: Applying Kirkpatrick’s framework to OD
interventions
Level 1
What: Reaction: Did they like it?
Who: Participants
When: During or after the intervention
How: Smile sheet
Why: Determine level of participant satisfaction and need to revise intervention if duplicated

Level 2
What: Learning: What knowledge or skills were retained?
Who: Participants and consultants
When: During, before, and/or after the intervention
How: Pre- and post-tests, skills applications, role plays, case studies, and exercises
Why: Determine whether consultant has been effective in implementing intervention purpose and objectives

Level 3
What: Behavior: How are participants performing differently?
Who: Participants, supervisors, subordinates, and peers
When: 3 to 6 months after intervention
How: Surveys, interviews, observation, performance, and appraisal
Why: Determine extent to which participants transferred their learning from the intervention to the workplace

Level 4
What: Results: What is the impact on the bottom line?
Who: Participants and control group
When: After completion of level 3 assessment
How: Cost–benefit analysis and tracking operational data
Why: Determine whether the benefits outweigh costs and how the intervention contributed to organization goals and strategy
Source: Adapted from The Trainer’s Handbook (4th ed., p. 234),
by K. Lawson, 2016, Hoboken, NJ: John Wiley & Sons.
Critiques of Kirkpatrick’s Framework
Kirkpatrick’s (1994, 2006) four levels of criteria have been
dominant for decades among
evaluators. With popularity, however, comes criticism. First,
the model has been critiqued
for being primarily focused on postintervention realities, that is,
for evaluating what happens
after the intervention versus incorporating more formative
evaluation into the process. The
four-level framework also does not help evaluators link causal
relationships between out-
comes and the levels of evaluation. Finally, the framework does
not help evaluators determine
what changes equate to the different levels of evaluation or how
best to measure each level.
Some authors have suggested expanding the reaction level to
include assessing participants’
reaction to the intervention techniques and efficiency (Kaufman
& Keller, 1994). One might
also try splitting the reaction level to include measuring
participants’ perceptions of enjoy-
ment, usefulness, and the difficulty of the program (Warr &
Bunce, 1995). Kaufman and Keller
(1994) recommended adding a fifth level to address the societal
contribution and outcomes
created by the intervention, which is becoming more popular
with the increased emphasis
on corporate social responsibility and sustainability. Phillips
(1996) advocated adding a fifth
level that specifically addresses return on investment.
Other Frameworks of Evaluation
Although the Kirkpatrick model is one of the dominant
evaluation models, it is not necessarily
the best or most appropriate for every situation. In fact, six
decades of evaluation research
have not yielded a universal evaluation model (Reio, Rocco,
Smith, & Chang, 2017). This sec-
tion briefly profiles some lesser-known evaluation models.
Hamblin’s Five-Level Model
Similar to Kirkpatrick’s model, this model measures reactions,
learning, job behavior, and
organizational impacts, as well as a fifth level—the economic
outcomes of training. The hier-
archy of Hamblin’s (1974) model is more specific than
Kirkpatrick’s in that reactions lead
to learning, learning leads to behavior changes, and behavior
changes have organizational
impact. Because of this assertion, Hamblin believed that
evaluation at a given level is not
meaningful unless the evaluation at the previous level has been
performed. It is worth noting
that a major criticism of the Kirkpatrick model is his
assumption that the four levels are caus-
ally linked (Reio et al., 2017). That linkage has never been demonstrated, although
Hamblin's model attempts to make it explicit.
Preskill and Torres’s Evaluative Inquiry Model
Preskill and Torres (1999) contributed a model of inquiry to the
literature that uses the evalu-
ation process as a learning and development opportunity:
Evaluative inquiry is an ongoing process for investigating and
understanding
critical organization issues. It is an approach to learning that is
fully integrated
with an organization’s work practices, and as such, it engenders
(a) organiza-
tion members’ interest and ability in exploring critical issues
using evaluation
logic, (b) organization members’ involvement in evaluative
processes, and (c)
the personal and professional growth of individuals within the
organization.
(pp. 1–2)
Evaluative inquiry is the fostering of relationships among
organization mem-
bers and the diffusion of their learning throughout the
organization; it serves
as a transfer-of-knowledge process. To that end, evaluative
inquiry provides
an avenue for individuals’ as well as the organization’s ongoing
growth and
development. (p. 18)
Their definition emphasizes that evaluation is more than simply
reporting survey findings.
Rather than being event driven, such as sending a survey to
participants after the interven-
tion is over, evaluation should be an ongoing part of everyone’s
job, that is, a shared learning
process. Evaluative inquiry should be focused on
• intervention and organizational processes as well as outcomes;
• shared individual, team, and organizational learning;
• educating and training organizational practitioners in inquiry
skills (action
learning);
• collaboration, cooperation, and participation;
• establishing linkages between learning and performance;
• searching for ways to create greater understanding of the
variables that affect orga-
nizational success and failure; and
• using a diversity of perspectives to develop understanding
about organizational
issues (Preskill & Catsambas, 2006).
Preskill and Torres (1999) identified four learning processes —
dialogue, reflection, question-
ing, and identifying and clarifying values, beliefs, assumptions,
and knowledge—that facilitate
three phases of evaluative inquiry: focusing the evaluative
inquiry, carrying out the inquiry,
and applying learning. The phases are depicted in Table 6.5.
Table 6.5: Preskill and Torres’s evaluative inquiry phases
At each stage of inquiry, the following skills are used: (1) dialogue, (2) reflection, (3) asking questions, and (4) identifying and clarifying values, beliefs, assumptions, and knowledge.

Phase 1: Focusing the evaluative inquiry
Description: Determine issues and concerns for evaluation; identify stakeholders; identify guiding questions for the evaluation
Strategies: Focused dialogues; group model building; open space technology; critical incidents; assumption testing through questioning

Phase 2: Carrying out the inquiry
Description: Design and implement the evaluation (collect, analyze, interpret data); address evaluative questions; develop a database for organization learning
Strategies: Literature-based discussions; working session to interpret survey results (or other data collected); framing findings as lessons learned

Phase 3: Applying learning
Description: Identify and select action alternatives; develop and implement action plans; monitor progress
Strategies: Capturing concerns, issues, and action alternatives; using technology to facilitate brainstorming; developing an action plan; solving implementation issues
Brinkerhoff’s Six-Stage Model
This model defines evaluation as the collection of information
to facilitate decision making
(Brinkerhoff, 1989). Brinkerhoff advocated systemic evaluation
that considers the change
process holistically. It requires that consultants articulate how
and why each training or devel-
opment activity is supposed to work, without which
comprehensive evaluation is impossible.
This model helps to assess whether and how programs benefit
the organization; analysis can
help trace any failures to one or more of the six stages of
Brinkerhoff’s model:
1. Goal setting (What is the need?): A need, problem, or
opportunity worth addressing
that could be favorably influenced if someone learned
something is identified.
2. Program design (What will work?): A program that teaches
the needed topic is cre-
ated or, if one already exists, it is located.
3. Program implementation (Is it working?): The organization
successfully implements
the designed program.
4. Immediate outcomes (Did they learn it?): The participants
exit the program after
successfully acquiring the intended skills, knowledge, or
attitudes.
5. Intermediate or usage outcomes (Are they keeping and/or using
it?): The participants
retain and use what they learned.
6. Impacts and worth (Did it make a worthwhile difference?):
The organization benefits
when participants retain and use what they learned.
Brinkerhoff’s model provides a platform to think about desired
impact at the beginning of the
change process and takes a more holistic look at the activities
and impacts and how they will
be communicated.
Input, Process, Output, and Outcomes Evaluation
This model evaluates training programs at four levels (input,
process, output, and outcomes)
in terms of their potential contribution to the overall
effectiveness of a training program
(Bushnell, 1990). It is similar to the systems model introduced
in Chapter 2 that considers
inputs, throughputs, and outputs in organization systems:
1. Inputs: trainee qualifications, instructor abilities,
instructional material, facilities,
and budget
2. Process: value-adding activities such as planning, designing,
developing, and deliver-
ing the training
3. Output: trainee reactions, knowledge and skills gained, and
improved job
performance
4. Outcomes: profits, customer satisfaction, and productivity
This model has the following benefits:
1. It can help determine whether training programs are
achieving the right purpose.
2. It can help identify the types of changes that could improve
course design, content,
and delivery.
3. It can help determine whether students have actually acquired
knowledge and skills.
Steps in the evaluation process:
1. Identify evaluation goals: Determine the overall structure of
the evaluation effort
and establish the parameters that influence later stages.
2. Develop an evaluation design and strategy: Select appropriate
measures, develop
a data collection strategy, match data types with experimental
designs, allocate the
data collection resources, and identify appropriate data sources.
3. Select and construct measurement tools: Select or construct
tools that best fit the
data requirements and meet criteria for reliability and validity.
Examples include
questionnaires, performance assessments, tests, observation
checklists, problem
simulations, structured interviews, and performance records.
4. Analyze the data: Tie the results of the data-gathering effort
to the evaluation’s
original goals.
5. Make conclusions and recommendations and present the
findings.
Evaluation: A Simpler Way?
Organizations are accountable for showing results, and OD is no
exception. Yet, a comprehen-
sive, widespread evaluation model does not exist. Paine (2014)
asked if evaluation could be
simplified and advocated Brinkerhoff and Mooney’s Business
Case Success Method (2008),
whereby instead of putting a lot of time and resources into
conducting evaluation, the ques-
tion becomes, “Did this intervention add value?” Paine
recommended asking questions to this
end such as the following:
1. What results did we expect?
2. What evidence is available to support or refute the results we
obtained?
3. What should we explore more in depth?
4. What additional compelling evidence is available?
5. How can we communicate the results in an understandable
way to the organization?
The Brinkerhoff and Mooney model is based on taking a
systemic approach that attempts to
understand how the change was applied on the job, what the
results were of the learning or
change, and how the change contributed to organization goals.
Paine added a final variable to
the model: How did the benefits compare with the costs?
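To illustrate Paine's final question with purely hypothetical figures: suppose an intervention cost $40,000 to design and deliver, and the organization can reasonably attribute roughly $100,000 in reduced turnover and rework costs to it. The benefits are about 2.5 times the costs, so the intervention clearly added value; if the estimated benefits had been only $20,000, the same simple comparison would prompt a very different conversation with the client.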
6.4 Planning and Performing the Evaluation
Just as with other aspects of the action research process,
evaluation requires deliberate plan-
ning and buy-in from the client. To plan the evaluation, Cervero
(1985) suggested identifying
five key factors:
1. What is the purpose of the evaluation?
2. Who needs what information?
3. Are there any practical and ethical constraints?
4. What resources are available?
5. What are the evaluator’s values?
Consider This
After reading these evaluation models, you might think they
look familiar. Indeed, they follow
a mini action research cycle of planning, doing, and checking. If
you were planning an evalua-
tion, which approach would you use and why?
The answers to these five questions give an evaluation a strong foundation, so it is worth
reflecting on them with the client. Once clear on these
issues, you can get more specific
about evaluation purposes, measurements, information sources,
data collection, data analy-
sis, feedback, further action, and how to anticipate and manage
resistance. These evaluation
steps may look familiar, because conducting an evaluation is
similar to doing a small-scale
action research project. These steps, illustrated in Figure 6.2,
will be explored in the next
sections.
Figure 6.2: Steps in the checking phase
This figure outlines the steps of a typical evaluation process. Notice that it looks very similar to the action research process of plan, do, check. The steps shown in the figure are: determine the evaluation purpose, develop the evaluation plan, identify appropriate evaluation measures, choose and employ data collection methods, analyze data and provide feedback, and conduct summative evaluation.
Consider This
Apply Cervero’s five questions to an evaluation you have
completed or anticipate doing in the
future. How have the questions helped you reflect on a past
evaluation or plan for a future one?
Determine the Purpose of the Evaluation
Determining the evaluation’s purpose(s) offers clear focus
moving forward. Referring back to
Cervero’s categories of evaluation may help pinpoint what is
being evaluated. For effective
formative evaluation, a consultant should work with the client
to determine what needs to be
evaluated throughout the action research process, particularly
regarding process improve-
ment. For example, in the case of the Leadership Academy,
formative evaluation consisted of
assessing the curriculum for relevance and cost. The
examination of a performance appraisal
process change earlier in the chapter showed how one might
assess employees’ satisfaction
with their level of participation in the process or with a pilot
phase of the appraisal.
Consultants should also plan the purpose of summative
evaluation. Returning to the example
of the Leadership Academy, it was essential to find out whether
the participants learned and
applied the new behaviors and skills, and if so, how their
actions affected their organiza-
tions. In the case of the performance appraisal process change,
the organization wanted to
know if it changed supervisor behavior, improved retention,
affected learning, and increased
performance.
Once an evaluation’s purpose has been decided, a consultant can
begin to identify what ques-
tions to ask the participants. Table 6.6 offers examples of
appropriate questions for different
evaluation goals. Questions revolve around needs assessment,
intervention conceptualiza-
tion, intervention delivery, outcomes, and costs.
Table 6.6: Typical evaluation questions

Questions about the need for the intervention (planning and needs assessment)
• What is the nature and magnitude of the problem to be addressed?
• What are the characteristics of the individuals/team members/organization?
• What are the needs of the individuals/team members/organization?
• What consulting services are needed?
• What is the time frame?
• What contracting is necessary?

Questions about the intervention's conceptualization or design (planning)
• Who is the client? Target population?
• What consulting services should be provided?
• What is the best intervention?
• What are the best delivery systems for the intervention?
• How can the intervention identify, recruit, and sustain the intended participants?
• How should the intervention be organized?
• What resources are necessary and appropriate for the intervention?

Questions about intervention logistics, delivery, and reach (planning and intervention)
• Are intervention goals being met?
• Are the intended interventions being delivered to the intended individuals/teams/organization?
• Has the intervention missed participants who need it?
• Are sufficient numbers of participants engaged in the intervention?
• Are the participants satisfied with the intervention?
• Are administrative, organizational, and personnel functions handled well?

Questions about intervention outcomes (evaluation)
• Are the outcome goals and objectives being achieved?
• Does the intervention have beneficial effects on the recipients?
• Does the intervention have adverse side effects on the participants?
• Are some participants affected more by the intervention than others?
• Did the intervention improve the issue/problem?

Questions about intervention cost and efficiency (evaluation)
• Are resources used efficiently?
• Is the cost reasonable in relation to the magnitude of benefits?
• Would alternative approaches yield equivalent benefits at less cost?

Adapted from Evaluation: A Systematic Approach (7th ed., p. 77), by P. H. Rossi, M. W. Lipsey, and H. E. Freeman, 2004, Thousand Oaks, CA: Sage.
See Case Study: Piloting and Evaluating a New Performance
Appraisal Process.
Case Study: Piloting and Evaluating a New Performance
Appraisal Process
A paper products manufacturing company begins working with a
consultant to improve
employee retention, because the company has a significantly
higher attrition rate compared
with other companies it benchmarked. One of the interventions
selected during the action
research process is to overhaul the way supervisors share
feedback with employees; the per-
formance appraisal process will become more developmental
than punitive, ongoing rather
than once a year, with no surprises in the feedback delivered.
Making this change requires developing a new process. A small
design team is formed to advise
on the new performance appraisal process. It includes the
consultant, client, supervisors, and
employees. The team designs an intervention that is based on
supervisors providing feedback
in the moment—that is, when they notice something and want to
coach the employee through
it. The model also incorporates periodic opportunities for the
employee and supervisor to
meet, focus on the employee’s developmental plan, and make
adjustments as needed. Once the
process is developed, the team decides to pilot it with a small
department.
Before the pilot begins, the participating employees are briefed
on the intervention and con-
firm their participation. Both the employees and supervisors are
trained in the new method,
and the supervisors receive additional training on how to
provide coaching and give develop-
mental feedback. The design team begins to study the pilot
group’s response by questioning
whether it is meeting the original need of improving retention
and whether the design is best
for meeting the needs. To this end, the design team holds
informal conversations with the
employees and supervisors and asks them questions such as
these:
• What do you like about this new process?
• What don’t you like about this new process?
• How do you perceive these changes?
• Will this work if we expand it further, or would you suggest
changes?
• What have you learned in the process?
The informal conversations yield important data that design
team members share during a
meeting. There is general support for the idea, but the
supervisors do not always feel com-
petent in using the new process correctly; they also feel stressed
about the time it takes. The
employees are unsure of what the purpose of the periodic
meetings is. Some employees and
supervisors are resistant, feeling either distrust toward the
process or resignation that things
will not change.
The design team decides to adjust the process. It provides more
support and training to the
supervisors on how to coach and share feedback, with the
expectation that it will take less
time as they become more comfortable with the process. It also
assembles the department and
models an ideal periodic meeting to touch base on development.
The pilot group continues to
work with the process for several more weeks, with ups and
downs.
Prior to rolling out the process to the wider organi-
zation, the design team meets with the pilot depart-
ment to see if additional adjustments are needed.
The process works better for most but still has logis-
tical problems that require further change.
Finally, the company is ready to roll out the changes
organization-wide. It starts with a communication
plan and provides training to all employees. The
design team continues to monitor the process over
the next year.
After the plan has been in place for a year, the design
team comes together and decides to plan and per-
form an evaluation to assess whether the new per-
formance appraisal process met its intended goals.
The evaluation follows these steps:
1. Describe the purpose of the evaluation. The design team
determines that the evalua-
tion’s purpose is to assess whether the new performance
appraisal process increased
employee retention.
2. Identify appropriate evaluation measures. The design team
decides to look at three
measures:
a. Retention comparing the year before the intervention with the
year after it
b. Employee attitudes
c. Supervisor attitudes
3. Choose and employ data collection methods. This depends
on the type of data desired
and the question the organization wishes to answer. The design
team chooses three
methods:
a. Attrition records to measure the year-to-year comparison of
retention rates
b. Survey of employees
c. Interviews of supervisors
4. Analyze data and provide feedback. Once the data is
collected, the consultant and client
make the first analysis and then involve the design team. Once
the findings are refined,
they share them during an open meeting with employees. They
also share them with
supervisors in a separate meeting to get their feedback and
input.
(continued below)
Identify Appropriate Evaluation Measures
Once an evaluation's purpose has been determined, the measures used to assess it should be identified.
This book has covered a range of evaluation techniques. The
formative measures that the
Leadership Academy team members identified allowed them to
recruit a small group of top
leaders to critique the curriculum. They followed this with a
small pilot session to trouble-
shoot and revise the curriculum with an actual audience.
Participants evaluated the academy
throughout the implementation, and adjustments were made
accordingly.
Summative measures could have included any of the examples
listed in the Kirkpatrick dis-
cussion, such as promotions, employee satisfaction, or customer
satisfaction. In the case of
the Leadership Academy, measures included improved
performance, promotions, a leader-
ship project, and team satisfaction. Because a main goal was to
cultivate leaders from within
the organization, measuring the percentage of participants who
were promoted from middle
management to executive positions was a key metric for
evaluating the intervention’s success.
Choose and Employ Data Collection Methods
With the purpose and measures determined, the consultant
should identify appropriate
sources of information and methods for gathering the
information. For example, if you want to
measure the results of a customer-service training, you could
measure the number of com-
plaints, review written complaints, or contact customers. The
methods you might use to do
this include surveys, documents, or interviews. Table 6.7 offers
an overview of data collection
methods appropriate for evaluation. The more commonly used
methods to collect evalua-
tion data include archival data, observations, surveys and
questionnaires, assessments, inter-
views, and focus groups. Chapter 4 reviewed methods used to
conduct analysis or planning—
many of them are similar.
Case Study: Piloting and Evaluating a New Performance
Appraisal Process (continued)
5. Anticipate and manage resistance to the evaluation.
Although the team reviewed worst-
case scenarios for how the organization or employees might
resist, the problems are
minimal. For example, complaints are similar to those heard
throughout the pilot pro-
cess from employees who were skeptical that the new
performance appraisal process
would work.
Once the analysis is complete, the design team presents its
findings to top management. The
organization now needs to determine how effective the
intervention was and whether it would
be wise to invest further in it.
Critical Thinking Questions
1. Given your knowledge of evaluation, what are some steps the
design team followed in
implementing its evaluation?
2. What steps did the design team miss?
Table 6.7: Evaluation data collection methods

Interview: A conversation with one or more individuals to assess their opinions, observations, and beliefs. Questions are usually determined in advance, and the conversation is recorded.
Questionnaire: A standardized set of questions intended to assess opinions, observations, and beliefs that can be administered in paper form or electronically.
Direct observation: Viewing a task or set of tasks as they are performed and recording what is seen.
Tests and simulations: Structured situations to assess an individual's knowledge or proficiency to perform some task or behavior.
Archival performance data: Use of existing information, such as files, reports, quality records, or performance appraisals.
Product reviews: Internal or external evaluations of products or services.
Performance reviews: Written assessments of individual performance against established criteria.
Records and documents: Written materials developed by organizations and communities (performance appraisals, production schedules, financial reports, attendance records, annual reports, company and board minutes, training data, etc.).
Portfolio: A purposeful collection of a learner's work assembled over time that documents events, activities, products, and/or achievements.
Cost–benefit analysis: A method for assessing the relationship between the outcomes of an educational program and the costs required to produce them.
Demonstration: Exhibiting a specific skill or procedure to show competency.
Pre- and posttests: Instruments used to measure knowledge and skills prior to and after the intervention to see if there were changes.
Focus groups: Group interviews of approximately five to 12 participants to assess opinions, beliefs, and observations. Focus groups require a trained facilitator.

Source: Adapted from Planning Programs for Adult Learners: A Practical Guide (3rd ed.), by R. S. Caffarella and S. R. Daffron, 2013, San Francisco, CA: Jossey-Bass.
Archival Data
Evaluating the degree of change an intervention produced
requires establishing a baseline of
existing information from employment records, production
figures, or quarterly reports. This
information is referred to as archival data (or documents and
records). You are not seeking
new data but are using existing data to assess the intervention’s
effectiveness. Archival data is
easily accessible, typically available for no or minimal cost, and
useful for providing historical
context or a chronology of events, such as employee satisfaction
over time. The Leadership
Academy team relied on archival data from performance
reviews and employee satisfaction
surveys to evaluate impact.
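As a hypothetical illustration of how archival data establishes a baseline, suppose employment records show that 45 of 300 employees left in the year before a retention-focused intervention (a 15% attrition rate) and that 27 of 300 left in the year after (9%). Comparing the post-intervention rate against that baseline gives the team a concrete, if rough, estimate of the change, which can then be probed further with surveys or interviews to understand why the numbers moved.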
Observation Data
Watching the organization engage in its everyday operations
involves observation. Obser-
vation is a type of evaluation based on detailed descriptions of
day-to-day behaviors that
cannot be explored by viewing existing archival records.
Examples of observation data might
include checklists of meeting-leader behaviors completed by
one of the team members, call
monitoring forms, listening skills, and body language.
Observation did not play an official role
in the Leadership Academy evaluation process; however,
participants’ supervisors observed
the changes they made in their approach to their work and
documented these in their per-
formance reviews. Data collection by observation can range
from routine counting of certain
occurrences to writing narrative descriptions of what is being
observed.
Surveys and Questionnaires
Surveys and questionnaires are helpful evaluation data collection methods for measuring the intervention's effects. They should be completed by respondents
with some experience related to
the intervention. In the Leadership Academy vignette, the
consultants used surveys to gather
participants’ input on their individual leadership styles during
the program. Other examples
include end-of-course reaction forms or surveys of stakeholders
such as customers, employ-
ees, or management. Surveys and questionnaires might also be
appropriate when evaluators
desire new data from multiple individuals who may be dispersed
throughout the organiza-
tion. Surveys and questionnaires are relatively inexpensive and
easy to administer, particu-
larly with the use of technology. It is important that these
instruments be well constructed;
their wording must be unambiguous, and they must be easy to
complete.
Paper, Pencil, or Computer-Based Tests
Consultants can administer a variety of commercially produced
tests to assess the knowledge
or skills imparted by an intervention, or they can develop an
original test unique to the inter-
vention. No matter the type of test employed, the evaluation
result is based on the test scores.
This type of evaluation works well when trying to determine the
quantity and quality of the
participants’ education. OD consultants might administer a
pretest before an intervention
and a posttest afterward, or they might require participants to
pass a test to attain a certifi-
cate of completion.
Tests should be cautiously designed and prudently administered.
First, questions must be
written in a way that consistently and accurately measures what
was taught. Second, partici-
pants may perceive test taking as threatening, especially if the
results will be used to make
performance appraisal decisions. Therefore, efforts to defuse
test apprehension should be
built into the process.
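A brief hypothetical example: if participants average 58% on a pretest and 84% on an identical posttest, the 26-point gain is reasonable evidence of learning, provided the items truly measure what the intervention was meant to teach and the improvement is not simply the result of retaking familiar questions.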
This very book uses some of these tools. Teaching you about
OD is the intervention, which
is executed via concepts presented in book form. Additional
interventions take the form of
assignments and opportunities to engage with other learners.
Pre- and posttests check your
prior knowledge on the topic and gauge how well you learned
the concepts after you engaged
with them.
Individual and Focus Group Interviews
Chapter 4 discussed interviews and focus groups as effective
ways of understanding targeted
individuals’ or groups’ views, beliefs, or experiences with the
issue under investigation. Both
approaches depend on developing well-crafted questions that
yield useful information. Inter-
views and focus groups should be run by an experienced
facilitator. These methods yield rich,
qualitative information that includes insights about the
intervention, critiques, or success
stories. Not all participants react well to these data collection
methods, however, and some
may not trust the interviewers or the process; they may not feel
comfortable enough to be
honest. Participants may also say what they think the facilitator
wants to hear.
Analyze Data and Provide Feedback
Once data has been collected from an appropriate source and via
an appropriate method, it
needs to be analyzed. Refer to Chapter 5 for a full discussion of
how to analyze data. In the
Leadership Academy vignette, performance reviews and
employee satisfaction data from sur-
vey research were analyzed. The team also monitored
participants’ leadership projects and
promotional advances.
Next, the data analysis should be presented as feedback to key
decision makers such as affected
employees and management. How to share feedback with a
client is covered extensively in
Chapter 4, and the same rules apply when sharing evaluation
feedback. It is a consultant’s
job to determine the feedback meeting’s key purpose and
desired outcomes. Does the client
need help determining whether to continue the intervention?
Modify it? Measure learning or
performance? Address unintended consequences of the
intervention? Sharing feedback with
the client involves determining the focus of the feedback
meeting, developing the agenda for
feedback, recognizing different types of feedback, presenting
feedback effectively, managing
the consulting presence during the meeting, addressing
confidentiality concerns, and antici-
pating defensiveness and resistance.
At any point in the evaluation process, data collection and
analysis can prompt the team to
decide to change future action. For example, the team might
decide to adjust the ongoing pro-
cess, continue the process with new interventions, or close the
project if the problem is per-
manently solved. This is the third step of the evaluation
process, defined earlier in the chapter
as termination or recycling. It is discussed in detail in the next
section.
Anticipate and Manage Resistance to the Evaluation
Sometimes, evaluation is resisted by the client, organization, or
other stakeholders. Resis-
tance and strategies for curbing it were discussed at length in
Chapter 5. Resistors may not
want to spend more money to learn the results of the
intervention. Or the organization may
be unwilling to spend the time required to conduct an evaluation
and instead want to move on
to the next issue. There may be fear about what the evaluation
will reveal (perhaps manage-
ment failed to implement the changes, or perhaps employee
views remain negative). Organi-
zation members can also suffer from change fatigue and worry
that the evaluation will bring
even more change. Of course, such resistance patterns are likely
what created problems in the
first place, so observing them warrants timely intervention with
the client.
Moreover, a consultant should anticipate political issues the
evaluation might create. Results
of the evaluation can also influence future resource allocations,
which could cause trepidation
and conflict among organization members. Remaining vigilant
as a consultant and working to
be authentic and influential is key to navigating the politics of
evaluation. See Assessment:
Testing Your Change Management Skills.
Some clients may resist doing the evaluation because they are
more interested in moving on
to the next challenge or opportunity. Or they may not want to
subject themselves to poten-
tially negative feedback. In this case, the consultant should lay
out the benefits of measuring
results and learning from both positive and negative feedback.
Doing so shows good steward-
ship of the time and resources committed to the intervention and
provides data to support
future initiatives. One way to minimize resistance to evaluation
is to make sure it has been
addressed during contracting, as outlined in Chapter 3.
Even when there is cooperation and investment, evaluation is
not easy. Demonstrating impact
and results can be challenging for certain interventions such as
improving leadership. Link-
ing results to intervention events can also be tricky. Devising
appropriate evaluation criteria
can be problematic, especially if intervention outcomes were
vague from the initial planning.
Finally, the client may balk at making judgments about the
intervention.
6.5 Concluding the Action Research Process
All consulting jobs end. Indeed, your goal as a successful
consultant is to become redundant
and work yourself out of a job. In our Leadership Academy
vignette, Leah terminated her role
with the project after 2 years, the first of which focused on
planning and the second on imple-
mentation. During year 2, she worked with internal consultants
who would take over her role
leading and facilitating the Leadership Academy in its third
year. In this way, she fostered a
repetition—called a recycling—of the intervention.
Disengagement or Termination
When the client has successfully implemented a change, the OD
consultant is no longer needed.
At this juncture, the client has become self-reliant and can
effectively disengage or terminate
the consulting relationship. Working oneself out of a consulting
job may at first seem like a
bad idea. On the contrary, smoothly disengaging from a client is
how to help clients build
capacity and also the way to get repeat business as a consultant.
Effectively navigating this
Assessment: Testing Your Change Management Skills
Visit the following link to take an assessment that provides a
good review of change and some
insight into resistance. The website offers several resources for
learning more about manage-
ment skills.
http://www.mindtools.com/pages/article/newPPM_56.htm
stage depends on setting the expectation
during contracting, recognizing the appro-
priate timing, processing any interpersonal
issues between the consultant and the cli-
ent, ensuring that the learning is ongoing,
verifying that the client is satisfied, and
planning for post-consulting contact.
Contracting About Termination
A consultant should start setting expecta-
tions about disengagement right from the
beginning of the consultancy, during con-
tracting, as discussed in Chapter 3. There
are several things that help disengagement
go smoothly. First, the consultant should
work with the client to train others in the
organization to take over the role played by
the consultant, as Leah did in the Leadership
Academy vignette. A consultant’s disengagement may be abrupt
or more gradual, depending
on client needs and resources. If the relationship is expected to
be terminated gradually, make
sure the client builds the ongoing consulting into the budget.
Ensuring Learning Capacity
The action research process focuses on promoting learning and
change that helps the client
diagnose issues, act on them, and evaluate the results. As
emphasized in Chapter 5, change
and learning go hand in hand. The action research process helps
the client build capacity to
solve future problems. When the client has the capacity to
follow the action research process
and continue learning, the client is ready to tackle future
challenges without your help.
Recognizing Appropriate Timing
It is the consultant’s job to monitor both the client and the
change implementation to assess
when the organization has the capacity to continue without help.
Clients may resist termina-
tion because they have become over-reliant on the consultant.
You can avoid this dependency
by striking a collaborative relationship from the beginning.
When it is time to terminate, it
makes sense to make a grand gesture to signal the relationship
has ended. You might want
to plan an event with the client, such as presenting a final
report, celebrating the key stake-
holders, or publishing some type of document that tells the
organization’s story. The Leader-
ship Academy consultancy culminated in the graduating cohort
and the new cohort coming
together to celebrate and Leah turning over the management
reins to the internal consultants.
Verifying Client Satisfaction
We have discussed the importance of being authentic with your
client and completing the
business of each phase of the action research project, as Block
(2011) recommended in his
classic consulting text. Those key roles remain relevant right up
until the end. That is, a consul-
tant should continue to ask the client questions such as “Are
you getting what you need from
me?” “Is this outcome what you expected?” “What are you
concerned about in the future?”
“Can you maintain this change on your own?”
When you have verified that the client is happy with the OD
effort, you can move toward ter-
mination. If the client is unhappy, however, work remains.
Planning for Post-Consult Contact
Although the consulting relationship will end at some point, it
is advisable to have a plan for
consulting after the intervention has been deemed a success.
Clients may run into trouble in
the future or need their questions answered. It is thus wise to
develop a follow-up and mainte-
nance plan with the client that involves periodic checking to
make sure the change is on track.
Agree on a minimal support maintenance plan such as periodic
meetings or reports. Leah, for
example, continued to periodically touch base with the
Leadership Academy to ensure things
were functioning smoothly after her departure.
Although it would be considered a failure if a consultant had to
return to solve the problem he
or she was initially contracted for, it is likely that the client will
face new challenges and seek
out help. Ensuring that there is an open communication channel
and guidelines for future
engagement can put both parties at ease.
Recycling
There are times in the consultancy when termination or
disengagement is not a good option
for the client. This is true of interventions that are designed to
repeat over time. In the Lead-
ership Academy vignette, for example, the project was designed
to repeat annually. Although
Leah terminated her involvement with the project, she trained
internal consultants to carry
on her role, effectively repeating or recycling the action
research process.
Recycling can also be an option when the client seeks additional
changes beyond the change
that has already been effectively implemented. For example,
consider a company that started
providing executive coaching for its emerging leaders. The
program was so successful that
the company decided to offer training that brought some of the
coaching principles to a wider
audience.
Recycling can also occur when the intervention was only
moderately successful or even failed.
An evaluation can usually expose an intervention’s
shortcomings and help the organization
identify adjustments or new interventions. One example would
be an organization that did
not follow the action research process and implemented a
random intervention that was not
clearly linked to the problem, such as requiring employees to
attend training unrelated to the
organization’s needs. Or the organization might have
implemented something similar to the
Leadership Academy but failed to prepare upper management to
deal with highly enthusias-
tic emerging leaders clamoring to make changes that challenge
the status quo. In this case,
a recycled intervention would target upper management
members and help them become
more equipped to mentor up-and-coming employees.
Regardless of whether the OD intervention and action research
process has been terminated
or recycled, when your client has been successful at changing
and has learned new ways
of thinking and behaving, you have completed successful OD.
Ultimately, OD seeks to build
capacity in individuals and organizations so they can problem
solve without your help. That
is the mark of an effective action research process.
Summary and Resources
Chapter Summary
• Evaluation is a process of assessing, adjusting, and
terminating or recycling the
intervention based on data and subsequent decisions.
• The purpose of evaluation is to make data-based decisions
about an intervention’s
quality, appropriateness, and effectiveness.
• Evaluation can be formative or summative. Formative
evaluation is concerned with
improving and enhancing an OD process as it is underway,
rather than judging
its merit. Summative evaluation occurs after the implementation
is complete and
ascertains whether the change accomplished the desired
outcomes and impact.
Both provide valuable ways to assess the intervention before,
during, and after it has
occurred.
• Cervero’s categories of evaluation show the different
approaches to evaluation. It
is important to be clear on an evaluation’s purpose at the
planning stage. Typical
evaluation categories include intervention design and
implementation; employee
participation and satisfaction; acquisition of new knowledge,
skills, and attitudes;
application of learning after the intervention; impact of the
intervention; and inter-
vention characteristics associated with outcomes.
• Multiple frameworks for conducting evaluation exist. The best
known is Kirkpat-
rick’s four-level evaluation framework. This model measures
reaction, learning,
behavior, and results.
• Other models of evaluation include Hamblin’s five-level
model; Preskill and Torres’s
evaluative inquiry; Brinkerhoff's six-stage model; and the
input, process, output, and
outcomes model.
• Planning and performing the evaluation involves several steps,
the first of which is
determining the evaluation’s purpose. Articulating a clear
purpose gives the evalua-
tion focus and helps identify appropriate participants, measures,
and methods.
• Identifying appropriate evaluation measures is driven by the
evaluation’s purpose. If
a consultant aims to measure employee satisfaction after a
change in leadership, he
or she would likely survey employees to assess their satisfaction
with the change.
• Once the evaluation purpose and measures have been chosen,
the data collection
methods should be determined and carried out. Typical methods
include surveys,
interviews, focus groups, observations, and documents.
• Once the data is collected, it can be analyzed and fed back to
the client and other
interested stakeholders. This information is important for
making decisions about
the continuance of the intervention and future funding.
• It is advisable to anticipate client resistance both to
conducting the evaluation and to
hearing the results. Consultants can write evaluation protocols
into their initial con-
tract. When you notice resistance to evaluation, act quickly to
defuse the resistance,
address the concerns, and help the client use information most
effectively.
• The action research process concludes by being terminated or
recycled. The process
is terminated when the change is successfully implemented.
There is no longer a
need for a consultant. The client has built capacity to use the
action research process
on future problems.
• The action research process is recycled when termination is
not a good option for
the client. For example, there may be a desire to expand or
improve the implementa-
tion. There may also be a need to continue working with the
consultant if the inter-
vention repeats over time. In some cases, however, the
intervention has failed, and it
is time to consider a new approach.
Think About It! Reflective Exercises to Enhance Your Learning
1. The chapter began with a vignette about a leadership
academy for a state public
health agency, which featured both formative and summative
evaluation. Have you
been in a situation where an evaluation occurred? If so, can you
recall the different
types of evaluation? If you have not experienced a formal
evaluation, how might you
go about evaluating a change you experienced?
2. Think about a change you have implemented. It could be
personal, like changing a
habit or starting something new, or professional, like taking on
a new responsibility
or position or meeting a challenge. Conduct a formative
evaluation (focusing on what
you did or could have improved on) and a summative evaluation
(in which you judge
the effectiveness and impact) on the change.
3. Which evaluation framework presented in this chapter was
the most appealing to
you? Why?
4. Reflect on how you might go about evaluating a recent
change in your organization
using one of the data collection methods outlined in the chapter.
5. Recall a time you have resisted change, especially
organization change. How could a
consultant or the organization have helped you become more
accepting?
Apply Your Learning: Activities and Experiences to Bring OD
to Life
1. Imagine an organization hires you as an external consultant.
It needs you to imple-
ment a new recruitment and retention process aimed at hiring a
more diverse work
force. How would you go about evaluating whether the change
was successful?
a. What is the evaluation’s purpose?
b. What steps will you follow to conduct the evaluation?
c. What level(s) do you hope to evaluate, as per the Kirkpatrick
framework?
d. What data collection method will you use?
2. Identify a process, practice, or performance standard you
would like to improve and
plot how you would benchmark it.
3. Evaluation may be an afterthought in many interventions.
How would you ensure
evaluation is integrated into a change effort you are involved
with or leading? How
might you curb resistance?
4. Identify an intervention in which you have participated at
work and evaluate it
according to Kirkpatrick’s four-level framework:
a. reaction
b. learning
c. behavior
d. results
5. Plan an evaluation according to its
a. purpose
b. measures
c. information sources and methods
d. analysis and feedback
e. future action
f. political issues
6. If you have ever participated in an OD intervention led by a
consultant, identify what
types of evaluation were conducted. How well did the
consultant do, based on the
principles presented in this chapter?
7. Have you experienced a failed OD intervention that had to be
recycled? If so, use the
information presented in this chapter to diagnose what went
wrong.
Additional Resources
Media
• Michael Quinn Patton Evaluation Videos
https://www.youtube.com/results?search_query=michael+patton+quin+evaulation
• Kirkpatrick Model: Should I Always Conduct a Level 1
Evaluation?
https://www.youtube.com/watch?v=dVnBE2W7qAI&list=PL3D286DBB9370267D
• Kirkpatrick Model: Monitoring Level 3 to Maximize Results
https://www.youtube.com/watch?v=r-qF4kJrTiI&list=PL3D286DBB9370267D
Web Links
• The American Evaluation Association (AEA), an international
professional associa-
tion of evaluators devoted to the application and exploration of
program evaluation,
personnel evaluation, technology, and many other forms of
evaluation:
http://www.eval.org
• The Online Evaluation Resource Library, a useful site that
collects and makes avail-
able evaluation plans, instruments, and reports that can be used
as examples by
principal investigators, project evaluators, and others:
http://oerl.sri.com
• The Centers for Disease Control and Prevention Program
Evaluation Resources site,
which offers a plethora of useful content on conducting
evaluations:
http://www.cdc.gov/EVAL/resources/index.htm
Key Terms

adjusting processes: The process of changing the OD intervention once the assessment data has been collected and analyzed.
archival data: Existing records such as employment records, production figures, or quarterly reports. Used as data sources when collecting evaluation data.
assessing changes: The process of gathering and analyzing data related to the learning and change associated with an OD intervention.
behavior: Kirkpatrick's level 3 evaluation, which measures how participants perform differently as a result of the intervention.
benchmark: When an organization compares its business practices, processes, and performance standards with other organizations that are best in class.
checking: A data-based evaluation to assess whether an intervention had the intended result.
evaluation: A data-based checking process (assessment, adjustment, terminating, or recycling) to assess whether an intervention had the intended result.
formative evaluation: Assessments of an intervention before or during its implementation geared toward improving the process.
learning: Kirkpatrick's level 2 evaluation, which measures what skills and knowledge participants gained from the intervention.
observation: Type of evaluation based on detailed descriptions of day-to-day operations.
reaction: Kirkpatrick's level 1 evaluation, which measures how well participants liked the intervention.
recycling: The process of repeating or revising the action research process when further interventions are desired or the initial intervention has failed.
results: Kirkpatrick's level 4 evaluation, which measures how the bottom line was affected by the intervention.
summative evaluation: Assessment that is done once the intervention is completed to judge whether it attained its goals and addressed the problem and to make future decisions about funding and continuance.
surveys and questionnaires: Evaluation data collection method that uses instruments that participants complete to provide feedback on the intervention.
terminate: To disengage from a consulting relationship with an organization at the end of the action research process.
Chapter 5
Action Research: The Doing Phase
Learning Objectives
After reading this chapter, you should be able to:
• Describe factors that influence a client and organization’s
readiness for change and promote
acceptance of interventions.
• Define an OD intervention, including the different ways to
classify interventions and the crite-
ria for choosing an appropriate intervention.
• Explain the consultant’s role in implementing OD
interventions and how to promote learning
to sustain them.
• Discuss common issues related to monitoring and sustaining
change, including the reasons
that interventions fail, the ethics of the implementation stage,
client resistance, and strategies
to sustain change.
A major land-grant university received federal funding to
promote education among public
health employees in a southern state. As soon as the monies
were awarded, several educational
initiatives began to serve multiple stakeholders across the state.
One of the projects that James,
the grant’s principal investigator, wanted to initiate was a
leadership academy for mid-level
managers with potential to advance to higher levels of public
health leadership in the state.
Previous analyses of the organization, including succession
planning, had revealed a long-term
need to provide leadership development. This need lingered for
many years because public fund-
ing was not available to provide a comprehensive program. The
grant finally created the oppor-
tunity to deliver this much-needed program. James contacted an
external consultant, Leah, to
help plan the program.
Leah was a good choice for a consultant; she was an expert in
leadership and program develop-
ment. The contracting meeting was set up, at which Leah and
James determined the scope of the
project: a 1-year leadership development academy for the
state’s top 25 leaders. The project had
two objectives:
1. Pilot a program that will become a permanent leadership
development academy
available to high-potential leaders on an annual basis.
2. Strengthen the leadership competencies and culture within
the state public health
work force.
Although Leah would be the lead consultant and facilitator for
the project’s planning and imple-
mentation over the first 2 years, the goal was to build capacity
within the state so that internal
facilitators could sustain the program over the long term.
The project required an action research approach to collect and
analyze initial data about
the target population’s needs, so the decision was made to
conduct interviews and surveys to
determine the content of the leadership development academy.
Based on Leah’s expertise in
leadership development, her role was defined as part expert,
part collaborative partner with
James and his university. The project had a 2-year
implementation timeline, with the first year
focused on planning and the second year devoted to
implementation. Evaluation would be ongo-
ing and continue past the second year as a new cohort was
started in year 3, staffed by internal
consultants.
James and Leah met regularly to plan the pro-
gram. This involved undertaking the planning
or discovery phase of action research: diag-
nosing the issue, gathering data on the issue,
sharing feedback, presenting the data analy-
sis, and planning to act.
The “doing” phase of the action research pro-
cess is the phase in which the intervention is
implemented. For the Public Health Leader-
ship Academy, this phase began in September
with 25 participants who had been competi-
tively selected from across the state. The par-
ticipants convened at a resort, and the program was kicked off
by high-level state public health
officials. The first session lasted 3 days. During this time, the
participants received results of a
leadership styles inventory, listened to innovative lectures and
panels on leadership, planned an
individual leadership project in their districts, and engaged with
each other to develop working
relationships. The academy met monthly for a year and focused
on a range of topics related to
leadership that were prioritized based on prior data collection.
The grant provided for an evalu-
ator, so formative data was collected at each meeting. The
kickoff of the Leadership Academy
set the stage for the entire year. The beginning set the tone and
expectations for what the par-
ticipants could expect.
Pythagoras is credited with saying, “The beginning is half the
whole” (as cited in Infoplease,
n.d., para. 1), which inspired the modern idiom “Well begun is
half done.” This philosophy is
well applied to creating OD interventions; that is, effective
planning is key to successful change
implementation. Chapter 4 introduced the first phase of the
action research model, planning
or discovery. This chapter focuses on the second phase, doing
or action. Action research takes a
data-based approach to diagnosing organization problems so
that interventions can be imple-
mented to permanently solve problems. We will return to the
Public Health Leadership Academy
vignette throughout the chapter to illustrate the action phase.
See Table 5.1 to review the action
research model we are following in this book.
Table 5.1: Action research model
Phase Action
Planning (the discovery
phase)
1. Diagnosing the issue
2. Gathering data on the issue
3. Sharing feedback (data analysis) with the client
4. Planning action to address the issue
Doing (the action phase)
1. Learning related to the issue
2. Changing related to the issue
Checking (the evaluative
phase)
1. Assessing changes
2. Adjusting processes
3. Ending or recycling (back to the planning stage) the action
research
process
This chapter will pick up at Step 4 of the planning phase,
planning action to address the issue.
Planning action involves choosing and initiating interventions.
Interventions represent the
action taken to resolve the problem, so they link the action
research model’s planning and
doing phases. A key activity in Step 4, however, is to assess the
organization’s readiness for
change.
5.1 Readiness for Change
Once you have worked with the client to plan for how the
organization will address the prob-
lem, you move into the implementation phase. Ultimately, the
measure of effective OD is
whether a change was made and if it stuck. Implementing
change is easier than sustaining it.
Most people have successfully dieted and lost weight; the hard
part is maintaining the weight
loss and sustaining new behaviors over the long term. Similarly,
organizations may success-
fully implement a new leadership development program but
have difficulty in sustaining the
necessary behavioral and cultural changes that ensure improved
leadership. Making change
is not the same as sustaining change. The latter is much more
difficult. That is why OD consul-
tants must help the client develop strategies to ensure people
are accountable to maintain the
change and create measures to help the organization sustain the
change.
Effectively initiating change depends on the organization’s
perception that the change is nec-
essary and achievable and that employees are willing to support
change efforts (McKay, Kunts,
& Näswall, 2013). These variables signal readiness for change.
Our understanding of change
readiness emerged from the fields of health psychology and
medical studies (Block & Keller,
1998) and was later applied to organizations. See Who Invented
That? The Transtheoretical
Model of Health Behavior Change.
Who Invented That? The Transtheoretical Model of Health
Behavior Change
Models of change readiness originated in health care. The
transtheoretical model is consid-
ered the most influential and was proposed by Prochaska and
DiClemente in 1983 based on
their research on smoking cessation. A description of the
model’s six stages follows.
1. Precontemplation (not ready). A state in which people are
unaware their behavior is
problematic; thus, there is no intention to take action. For
example, suppose Jacob is
a manager with an ineffective leadership style. Jacob is doing
what he has observed
other managers doing and does not give his performance any
thought.
2. Contemplation (getting ready). A state in which people begin
to notice their behavior
is problematic and begin to weigh the pros and cons of their
continued actions. (Lewin
would refer to this as “unfreezing,” or readiness for change.)
For example, Jacob may
start to notice that he is not getting the results he would like in
his role as a manager.
He can see that people do not listen to him, and he starts to
ponder whether he should
change his behavior.
3. Preparation (ready). A state in which people set intentions to
take action on the prob-
lem behavior in the immediate future. For example, Jacob may
decide to start explor-
ing different leadership approaches and resources for
improving, such as reading, tak-
ing a class, or seeking mentoring from managers whose
behavior he wants to emulate.
4. Action (doing). A state in which people are engaged in
making visible modifications to
their behavior, usually by replacing the problematic behavior
with a new, more pro-
ductive behavior. (This would be known as “moving” in
Lewin’s terms.) For example,
Jacob may decide to seek a mentor, read some books, and take a
leadership class. He
may also begin to implement some new behaviors with his staff.
5. Maintenance (maintaining). A state of preservation in which
people have been able to
sustain the change for a time and are working to prevent relapse
to the former prob-
lematic behavior. For example, Jacob may work to avoid
slipping back to less effective
management behaviors, such as failing to consult employees on
important decisions.
He may also seek feedback and support from his mentor.
6. Termination (ending). A state in which the new behavior has become permanent (this would be known as “refreezing” in Lewin’s terms) and people are not tempted to revert to their old problematic behaviors. By this point, Jacob has integrated the more participative leadership style into his repertoire and does not even think about it anymore—it has become a natural part of his being. He may now be ready to help others make similar changes (Prochaska & DiClemente, 1983).
Reflect on an organization change you recently experienced.
Perhaps there was a transforma-
tion in financial systems requiring employees to adopt new
procedures for reporting travel or
making expenditures. Other changes might include adjusting to
a new CEO or president,
learning new features in products or services provided to
customers, or abiding by additional
expectations for completing work. Most often, people are
neither pleased about nor ready for
changes they are asked to make. Changes are often met with
skepticism, resistance, and even
anger. There are several dimensions to preparing an
organization for change, and readiness
for change is important for practitioners of OD to consider at
individual, group, and organiza-
tion levels.
Dimensions of Change Readiness
When a change is sprung on an individual or organization, a
range of reactions occurs. Per-
haps there is a sense of surprise, dismay, anger, excitement,
fear, or dread. How people and
organizations respond to a change can be measured by how
ready they are to make a change,
whether planned or unplanned. Dimensions of change readiness
involve gauging readiness
and understanding the dynamics of change. When you have been
faced with change, what
was your reaction?
Gauging Change Readiness
Five dimensions influence the level of readiness to make
changes (Hord & Roussin, 2013).
The first is whether data exists that justifies the change in a way
that is relevant and compel-
ling to the organization. That the data exists is not enough: It
must be communicated clearly
and compellingly by management. Next, employees must be
engaged in ways that promote
their input and ownership of the change. The third dimension is
to ensure that the scope
and impact of the change is appropriate for the organization’s
culture and strategy. Next,
the structure of the change should be clearly defined in terms of
new roles, procedures, and
resources. Finally, the organization needs to prepare to let go of
past practices and find a rea-
sonable timeline and process for incorporating the change.
Table 5.2 offers a checklist of these dimensions, with examples
of each category. It can be used
to gauge an organization’s change readiness.
Table 5.2: Readiness for change

Relevance and meaning: Make a compelling case for the change or identify the benefits of the interventions.
• There is ample data to justify the need for this change.
• Employees have had plenty of opportunity to discuss the whys for this change.
• This change is not being driven by a crisis mindset.
• There is anecdotal evidence from employees expressing why this change is important.
• There is evidence that a culture of trust exists with employees about this change.

Consensus and ownership: Engage employees so there is ownership of the desired change.
• Employees express ownership for this change.
• Employees say they are willing to commit energy and time toward this change.
• This change was not driven by a top-down mandate and one-way communication.
• Employees think this change will make a significant difference and bring results.
• Stakeholders are strong supporters of the change.
• There is shared responsibility and collective trust for this change.

Scope and culture: Define the scope of the change and the impact it will have on the organization’s culture, current mindsets, and behaviors.
• Advocacy for the change has been sensitive to organization culture.
• Employees mentally, emotionally, and physically embrace the change.
• Change leaders have been respectful and sensitive in helping employees make sense of the change over time.
• The change aligns well with other recently implemented interventions.
• The change will not overwhelm employees’ current workload.
• The change leaders serve as role models of the desired change.

Structure and coherence: Determine change leadership roles, structure, decision making, and how the change will interface with organization operations.
• The right stakeholders have participated in the action research process and decision making for this change.
• Leadership has identified key roles to support the change moving forward.
• Employees understand how future decisions will be made around the change.
• Appropriate resources have been dedicated to implement the change (e.g., finances, time).
• The change is feasible and the right resources are in place to sustain it.
• Frequent and adequate communication with feedback has guided the change.

Focus, attention, and letting go: Assess where to focus attention based on data and determine what can be let go in order to create room for change.
• Change leaders have determined what past initiatives/practices can be let go in order to make room for this change.
• There is a reasonable timeline established for this change to support its full implementation.
• There is clear understanding by employees of what the change is going to entail.
• Employees understand the demand and expectations for the change.
• There are indicators established for this change to identify early successes.
• The appropriate technology tools are available to support this change.

Source: Adapted from Implementing Change Through Learning: Concerns-Based Concepts, Tools, and Strategies for Guiding Change (p. 38), by S. M. Hord and J. L. Roussin, 2013, Thousand Oaks, CA: Corwin.
Cheung-Judge and Holbeche (2015) offered tools to map change
readiness of various stake-
holder groups by charting readiness according to “level of
commitment to change” and “abil-
ity to make it happen.” Table 5.3 presents their process.
Consider a change you experienced in
the past or are currently facing. Using Table 5.3, plot your
commitment to change and ability
to make change happen. Next, plot your view of other key
stakeholders or groups involved in
the change. What issues in change readiness do you see in
yourself or others, and how might
you address them?
Table 5.3: Mapping readiness for change

For each key stakeholder or group, rate the commitment to change (low, medium, or high) and the ability to make change happen (low, medium, or high).
Dynamics of Change
Readiness to change usually indicates a willingness to entertain
new ways of thinking and
doing. Hord and Roussin (2013) outlined the change dynamics
in the following manner:
1. “All change is based on learning, and improvement is based
on change” (Hord & Rous-
sin, 2013, p. 2). Most change depends on learning. Hord and
Roussin (2013) valued
learning for the way in which it enables people to abandon
nonproductive behaviors
and replace them with behaviors more supportive of the
intervention. They empha-
sized, “At the center of all successful implementation of a
change is the opportunity
for adults to come together and learn” (Hord & Roussin, 2013,
p. 2).
2. “Implementing change has greater success when it is guided
through social interac-
tion” (Hord & Roussin, 2013, p. 3). OD’s collaborative,
collective ethic lends itself to
building communities of change that band together to implement
new programs and
solutions.
3. Individuals have to change before organizations can change.
If a group or team is to
successfully pull off major changes, individuals need to possess
the skills and capaci-
ties to execute the necessary behaviors. Key to facilitating
individual change is giving
individuals choice and opportunities to influence the process
and their environment.
The stages of concern model (Hall & Hord, 1984, 2011)
discussed in Chapter 2 pro-
vides a framework for helping individuals address concerns
related to change.
4. “Change has an effect on the emotional and behavioral
dimensions of humans” (Hord
& Roussin, 2013, p. 3). Change is stressful. When we fail to
respect and tend to the
emotional reactions to change, the change will likely fail.
People need opportunities
to air their hopes and fears about a change; this helps them feel
safe during and after
the process.
5. Employees will more readily accept change when they
understand how the interven-
tion will enhance their work. This belief ties in to adults’ need
for learning to be
timely, relevant, and linked to their experience; it also relates to
the power of con-
necting individual and organization goals.
6. The client and/or leader’s role is to engage employees in
dialogue about the changes as
a way of promoting communication and ownership of the
change. The more the change
is talked about and explained, the easier it will be for
employees to embrace.
Factors Influencing Readiness to Change
The client and the consultant can take steps to prepare the
organization for change. The first
is to clearly communicate the discrepancy between the status
quo and the desired state. In
Chapter 4, this discrepancy was defined as a performance gap.
Employees will be prepared
to change when they understand why the change matters
(Madsen, Miller, & John, 2005).
The second step is to bolster employees’ confidence that they
possess the knowledge, skills,
and abilities to deal with the performance gap and make the
changes necessary to close it.
Employees will accept change when they perceive a match
between their skills and abilities
and those needed to diminish the performance gap (Chreim,
2006).
Perceived Appropriateness of the Change
Readiness to change depends on several additional variables being in place if the change is to succeed. When employees view the change as appropriate to the
organization, they will gener-
ally support and readily embrace it (Holt, Armenakis, Feild, &
Harris, 2007). For example,
several years ago, most organizations did not recycle; the idea
of sustainability was unfamil-
iar to both companies and communities. As global awareness of
pollution and environmen-
talism has increased, so has the willingness to change our
behavior. Today, it is common to
have recycling bins throughout an organization—you may even
have one in your office and
at home. Recycling is now embraced because we view it as
appropriate and necessary. Of
course, even the most appropriate change must be
communicated well and visibly supported
by management.
Creating a Shared Vision of the Change
When management engages employees in planning for the
future, they are working to cre-
ate a shared vision (Hord & Roussin, 2013). A shared vision is
the creation and articulation
of the organization’s desired future state that is mutually agreed
upon with employees. As
discussed in Chapter 4, a shared vision may be attained by
completing a gap analysis that
identifies the discrepancy between the current state and the
desired state. For example, when
the University of Georgia decided to change its platform for
online learning, it involved sev-
eral stakeholders, including students and faculty, in evaluating
new platforms and providing
input in the final decision. By creating a shared vision for what
the university desired in terms
of technology, image, and learning experience, the OD
intervention significantly increased
buy-in when the new platform was implemented. The more
management can involve affected
employees in the change’s planning and implementation, the
more the entire organization
will support the change.
Level of Managerial Support of the Change
When management visibly advocates for and adopts a change, it
sends a message to the orga-
nization about the change’s necessity and importance (Holt et
al., 2007). Management serves
as a model to employees, who watch to see whether managers
actually commit to the change.
For example, suppose an organization attempts to create a more
diverse and inclusive culture.
Managerial support might include articulating the
organization’s commitment to diversity
and inclusion at every opportunity, promoting a diverse range of
employees to key positions,
hiring for diversity, and rewarding behaviors that support
diversity and inclusion.
Providing the Necessary Resources, Support, and Assistance
Displaying managerial support goes beyond setting an example.
It also involves making sure
the necessary resources are provided to make the change (Hord
& Roussin, 2013). Changing
usually takes time, costs money, and diverts energy from other
activities. Creating a realis-
tic budget and providing resources up front helps ease the
transition. Employees may need
moral support or training, the organization may need additional
resources, or the community
may need to be informed of the changes. It benefits the
organization to provide sustained
assistance as needed during implementation. For example, the
university that changed its
online learning platform had to develop a strategy for
communicating to faculty, staff, and
students; train for the implementation; obtain the ongoing
support of faculty and students
working within the platform; and hire staff to support the
logistics of working with the new
technology.
Thakur and Srivastava (2018) surveyed 276 middle managers in
India about change readi-
ness and found that it influences resistance to change.
Readiness increases and resistance
decreases when levels of trust, perceived organization support,
and emotional attachment
are high. They also found the human touch to be important, that
is, fostering communica-
tion, trust, and security for employees experiencing change.
Other researchers found that
perceived organization support affected individual change
readiness among 154 employees
of a chain restaurant that introduced new leadership and
restructuring and that providing
support prior to the introduction of change improved trust and
readiness (Gigliotti, Varda-
man, Marshall, & Gonzalez, 2019).
Level of Organization Members’ Self-Efficacy for Adopting the
Change
The perception that employees are skilled and competent
enough to successfully implement
a change bolsters readiness for it (Holt et al., 2007). Helping
employees become comfort-
able with both the content and the process of the change is
important. Change content is the
focus of the change—for example, adopting a new electronic
medical record (EMR) program.
Change process is the way the change is implemented —for
example, piloting the EMR in a
small department and seeking user feedback before rolling it out
organization-wide. Researchers investigating individual and group openness to change among primary health care employees in Sweden, who were working to improve their use of information and communication technologies, found that openness to both the change content and the change process positively predicted competence in adopting the change (Augustsson, Richter, Hasson, & von Thiele Schwarz, 2017). It is often up to the OD consultant and management to
show employees that they have
the self-efficacy to adopt the change. For example, if an
organization were implementing a new
technology, it would be helpful to provide opportunities for
employees to experiment with it.
Doing so would allow them to discover that they have the skills
to implement it. Investing in
professional development and professional learning is a key way
to build self-efficacy (Hord
& Roussin, 2013). Implementing and sustaining change requires
acquiring new knowledge,
skills, and abilities. Such learning may boost employees’
confidence that they can adopt the
new changes, as well as enhance their understanding and
acceptance of the change.
Level of Organization Members’ Personal Attachment to the
Change
Change is more likely to be accepted if management can show
that adopting it will positively
affect individual employees (Holt et al., 2007). Helping
employees connect their personal
goals to company goals creates a winning combination. Researchers surveyed 1,833 nurses in 23 acute care hospitals across Switzerland and concluded that quality of care and supportive leadership were positively associated with readiness (Sharma et al., 2018). Connecting
employees’ personal attachment to change requires
communication and support of employee
interests. For example, in a quest to become a learning
organization that readily captures
and shares information and knowledge, an organization might
bolster support of individual
learning efforts by funding them, providing in-house learning
opportunities, or sponsoring
degree attainment and continuing education.
Including a System for Checking and Assessing Progress
The change implementation should be evaluated throughout the
action research process
(Hord & Roussin, 2013). As Chapter 6 will discuss, to ensure
the intended outcomes are being
achieved, it is important to assess progress and results during
and after implementation. For
example, in the case of the university that implemented a new
online learning platform, a
small pilot group of faculty users was designated “early
adopters.” The group received train-
ing and used the new platform the semester before it was
officially implemented. This small,
contained implementation offered the opportunity to
troubleshoot and eliminate bugs prior
to the large-scale implementation.
Promoting Acceptance of Interventions
There are several ways the client and consultant can prepare the
organization for change and
bolster acceptance of the interventions. Acceptance is
encouraged via effective and ongoing
communication with employees about the change and by
creating opportunities to partici-
pate in its planning and implementation.
Developing a Change Communication Strategy
Management communication about the change is key during
both the planning and the imple-
mentation phases. Communication not only informs and engages
employees about the change
but also helps diminish resistance to it. Effective
communication comes in many forms; man-
agement should take advantage of as many as possible by using
media, meetings, and face-to-
face interactions.
Communicating about the change has several benefits. First,
communication can serve an
educational function; it helps employees learn about the
performance gap and understand
how they can help reduce it. Employees also need to learn about
the change’s purpose and
value to understand how they can contribute to achieving it.
Additionally, communication
helps alleviate fears about how the change might negatively
affect the organization, certain
jobs, or individuals. Effective communication will bolster
employees’ confidence that they can
cope with the change and the new demands it will bring (Mayer,
Davis, & Schoorman, 1995;
McKay et al., 2013; Walinga, 2008). More broadly, a
comprehensive communication plan gives
employees the opportunity to understand the scope and strategy
behind the change and to
raise issues of concern (McKay et al., 2013).
Promoting Employee Participation in the Change
Participative management has been advocated as an effective
strategy throughout this book.
Engaging employees in planning and implementing change will
likely result in higher levels
of acceptance and understanding (Holt et al., 2007). Involving
employees may also benefit the
change itself because employees may have insight and
information that can inform change-
related decision making and problem solving (Courpasson,
Dany, & Clegg, 2012).
Promoting active participation might involve educating
employees about the change and
inviting their critical analysis of its purposes and procedures. It
is also wise to engage employ-
ees in learning and development activities that will build their
competence and ability to
cope with the change and whatever new tasks and
responsibilities it requires. Engaged par-
ticipation gives employees a sense of ownership and
responsibility toward the change. When
employees participate in the change process, they are less likely
to resist the change.
5.2 What Are OD Interventions?
During the planning or discovery phase of action research, the
OD consultant works with
the client to collect, analyze, review, and present data to help
diagnose the problem. Once
those steps are completed, an informed action to address the
problem, also known as an
intervention, can be selected. “OD interventions are sets of
structured activities in which
selected organizational units (target groups or individuals)
engage in a task or a sequence
of tasks with the goals of organizational improvement and
individual development” (French
& Bell, 1999, p. 145). Burke and Noumair (2015) defined an
intervention as “some specified
activity, some event or planned sequence of events that occurs
as a result of diagnosis and
feedback” (p. 185). Cheung-Judge and Holbeche (2015) offered
a composite definition of
intervention drawing on multiple definitions. They concluded
that interventions have the
following elements: entrance into an existing system; use of a
structured and planned activ-
ity; focus on a person, group, intergroups, or an entire
organization; seeking to disturb the
status quo and shift the system toward a new state; and aiming
to improve and develop the
organization (p. 92).
OD interventions might consist of one-time events that happen
within a short time frame
such as announcing a new leader, or long-term events that may
have several shorter inter-
ventions such as mergers and acquisitions that bring new team
members, additional prod-
ucts and services, layoffs, cultural clashes, and so forth.
Interventions are fluid and can occur
throughout the OD process.
Defining an Intervention
Interventions signify the point at which action is taken on an
issue. Schein (2011), writing about
the consulting role, asserted, “Everything you say or do is an
intervention that determines the
future of the relationship” (p. 151). Schein’s view underscores
that interventions can hap-
pen throughout the action research cycle. Schein (2013) later
explained, “Once we have made
some kind of judgment, we act” (p. 94). As noted in Chapter 3,
even the mere presence of a
consultant is considered to be an intervention. According to
French and Bell (1999),
The OD practitioner, a professional versed in the theory and
practice of OD,
brings four sets of attributes to the organizational setting: a set
of values; a
set of assumptions about people, organizations, and
interpersonal relation-
ships; a set of goals for the practitioner and the organization
and its members;
and a set of structured activities that are the means for
achieving the values,
assumptions, and goals. These activities are what we mean by
the word inter-
ventions. (pp. 145–146)
Interventions have certain characteristics of creating disruption,
interacting with the existing
organization system, and requiring careful planning. Each of
these features is discussed in the
following sections.
Interventions Disrupt the Status Quo
Coghlan and Brannick (2010) viewed intervening as the point at
which the actual change
process begins. This is where the organization may be caught in
transition from a present
defective state to a future desired state (managing its
performance gap). In a similar vein,
French and Bell (1999) suggested that interventions “purposely
disrupt the status quo” (p.
143) toward a more effective state of affairs. This process is
similar to identifying and resolv-
ing a performance gap; that is, the current state is incongruent
with the desired state, and
thus actions are required to resolve the gap. Coghlan and
Brannick recognized two manage-
rial marks of being in this disruptive transition:
1. possessing a plan that describes the goals, activities, projects,
and experiments that
will move the organization to the desired state and
2. securing the organization’s commitment to the change.
In the case of the Leadership Academy vignette, Leah and
James defined a plan for the acad-
emy to develop leaders’ capacity on the individual, group, and
organization levels. They also
secured the organization’s commitment to the intervention.
Interventions Happen in Complex Systems
Interventions do not happen in a vacuum. “To intervene is to
enter into an ongoing system of
relationship, to come between or among persons, groups or
objects for the purpose of helping
them” (Argyris, 2000, p. 117). Anderson (2016) deconstructed
this definition to emphasize
three points:
1. The system is “ongoing,” which suggests that any
intervention represents an inter-
ruption or disruption to the flow of organization life. The
intervention happens in
the midst of organization complexities such as politics,
priorities, interpersonal
relationships, constraints, history, and other factors.
2. Interventions “come between or among persons, groups or
objects,” which alludes to
how they disrupt usual ways of doing business. As a result,
members may resist them.
3. “For the purpose of helping” indicates that the goal of
intervening is to positively
influence organization functioning, effectiveness, and
performance.
If we consider the Leadership Academy intervention in
Argyris’s (2000) terms, it disrupted
the flow of organization life by identifying people with
leadership potential. James and Leah
also had to navigate politics that evolved within the state public
health agency related to the
program and its control in the initial planning stages. The
academy also came between and
among persons, groups, or objects by changing the way
participants behaved, problem solved,
and coached their employees. In some cases, participants
completely overhauled their leader-
ship style and began developing their own teams in new ways.
Some participants experienced
receptivity to their new skills, whereas others encountered
resistance. On the third point, the
individuals, their teams, and the statewide public health agency
were positively affected due
to the multiple changes that occurred as a result of this
program.
Interventions Require Careful Planning and Sequencing
Interventions entail planning the action or intervention, as well as executing it. An inter-
vention strategy is a series of events that take place over several
weeks or months to address
the problem. French and Bell (1999) referred to an intervention
strategy as a sequence of
tasks. In contrast, a single event, meeting, or workshop is
known as an intervention activity
(Anderson, 2016) or task (French & Bell, 1999). The Leadership
Academy vignette was an
example of an intervention strategy, given its yearlong
implementation and multiple associ-
ated activities.
Interventions share common attributes in that they
• signify a shift away from the status quo toward a new future,
• initiate action on the problem,
• rely on evidence,
• are grounded in relationship,
• occur in complex social systems, and
• require planning and sequencing.
The Leadership Academy vignette met these attributes. It
shifted the culture toward leader-
ship and gave participants the tools and support to try their new
knowledge within their
work environments. The program was based on data collected
from participants and the
organization, as well as the best evidence available on effective
leadership. Each participant’s
organization had to provide a letter of support to show
commitment to the academy partici-
pant, and there were multiple relationships developed during the
program that were critical
to its success.
Classifying Interventions
Interventions are generally decided during the discovery or
planning that occurs in the first
phase of the action research model. They are implemented in
Phase 2, doing or action, and
assessed in Phase 3, checking or evaluating. OD interventions
are the point of OD. They repre-
sent the actions taken on the problem or issue. Intervention is
the peak of the OD process—it
is what OD intends to do from the start.
That said, interventions may occur at any point in the action
research cycle and are best
understood as fluid. For example, a consultant’s presence
represents several opportunities to
acknowledge a problem, make an impending change, or alter
conditions. A consultant might
make an intervention during initial or feedback meetings by
offering an observation, such as
“Everyone here is talking over one another,” or asking, “What
is preventing you from deal-
ing with this problem?” or stating, “You are talking over
everyone during meetings and not
listening to what people are trying to tell you. Are you aware of
that?” Of course, in addition
to making what might be considered microinterventions, the
consultant’s job throughout the
process is to guide the client toward making a
macrointervention that addresses the root
cause of the problem.
Classification as Diagnostic or Confrontive
Interventions vary in terms of their level, scope, duration, and
strategy. The Leadership Acad-
emy vignette focused on individual leaders, but it affected the
leaders’ direct reports, groups,
organizations, and the wider state public health agency. Schein
(1988) classified interventions
as either diagnostic or confrontive in terms of their timing and
level of difficulty. Interventions
that occur at any time during the contracting process, initial
meetings, and data collection are
diagnostic interventions because they occur as the organization
grapples with the correct
diagnosis and intervention strategy. As has been pointed out,
the consultant is an intervention.
“Every decision to observe something, to ask a question, or to
meet with someone constitutes
an intervention into the ongoing organizational process”
(Schein, 1988, p. 141).
Interventions made based on data collection and analysis are
confrontive interventions
(Schein, 1988). There are four types of confrontive
interventions that range in their level of
difficulty:
1. Agenda-managed interventions. As the name indicates,
agenda-managed interven-
tions are concerned with helping the group or organization
prioritize what to focus
on; these interventions also examine its internal processes.
Groups or organiza-
tions may get stuck determining what is important; they may do
nothing or vacil-
late instead of making a decision. Working with a stalled group
or organization can
be quite frustrating. Helping a group focus on how it functions
can be pivotal in
improving actions and outcomes. Schein (1988) argued that
something as simple as
evaluating meetings significantly affects group members’
awareness of interpersonal
dynamics, emotional reactions, communications, and decision-
making processes.
Schein recommended starting with low-risk issues like the
agenda. Higher-risk
issues regarding the group’s relational and interpersonal
patterns should be tackled
once people are emotionally prepared to deal with the
vulnerability and feelings that
will surface when the group begins to critique how it functions.
2. Confronting through feedback of observations or other data.
Chapter 4 discussed the
process of sharing feedback with the client during the discovery
phase of the action
research model. Schein (1988) recommended sharing feedback
when the group or
organization commits to examining interpersonal workings.
Confronting through
feedback requires that the client be ready to hear and act on it.
Consultants play
a key role in helping the client absorb and act on feedback. The
ability to observe
reactions, listen, ask great questions, facilitate learning, defuse
defensiveness, and
deliver difficult messages with tact and diplomacy are key to
these types of interven-
tions. Consultants should also model how to hear and accept
feedback for the client.
This involves asking for feedback on one’s own performance
and graciously accept-
ing and acting on it as appropriate.
3. Coaching. When individuals and groups receive feedback,
there is a natural inclina-
tion to seek help in modifying behavior to facilitate the change
process. Coaching
has become a significant management trend in recent years
(Feldman & Lankau,
2005). It involves helping people overcome behaviors that limit
their ability to work
effectively with others (Corbett, Ho, & Colemon, 2009) and
understanding what
drives behavior (Corbett & Chloupek, 2019). Coaching is a
higher-risk confrontive
intervention than agenda-managed interventions or confronting
through feedback
of observations or other data, making it one that Schein (1988)
recommended using
with caution. Coaching interventions can have life-altering
effects and should there-
fore be used with care and sensitivity. The most effective
coaching programs follow
a defined process (Corbett & Colemon, 2005) and support
people in shifting thought
and behavior in ways that affect business positively (Corbett &
Ho, 2012).
4. Structural interventions. The confrontive interventions in this section are arranged by difficulty; that is, they are presented from the easiest to implement (1) to the most difficult to execute (4). Structural interventions pertain to
allocation of work,
modification of communications, group membership, assignment
of responsibility,
and lines of authority. These types of changes, which are often
called “reorganiza-
tion,” are greeted with trepidation by most organization
members. They are also the
most difficult to implement and sustain and must be undertaken
for the right rea-
sons. Before resorting to restructuring, consultants should
consider interventions 1
through 3 to see if they sufficiently solve the problem.
Interventions are diverse and can address multiple levels of the
organization. Selecting the
best possible intervention based on your time frame, budget,
culture, and goals is a key task
of the action research process.
Classification by Level and Process
Other theorists use a range of schemes for classifying OD
interventions. McLean (2006), for
example, classified by levels of analysis (individual, group,
team, organization, etc.). Cummings
and Worley (2018) organized interventions according to the
underlying process (human pro-
cess interventions, technostructural interventions, human
resource management interven-
tions, and strategic change interventions). Table 5.4 goes into
more detail on different ways
to classify OD interventions.
Table 5.4: OD intervention classifications

McLean (2006)
1. Individual
2. Team and interteam
3. Process
4. Global
5. Organizational
6. Community and national

Cummings and Worley (2018)
1. Human process
2. Technostructural
3. Human resource management
4. Strategic

French and Bell (1999)
1. Team interventions
2. Intergroup and third-party peacemaking interventions
3. Comprehensive interventions (e.g., large scale or strategic)
4. Structural interventions
Although classification helps the OD consultant and client
understand an intervention’s scope
and focus, no one classification is necessarily “right,” because
both levels and processes are
fluid. Leadership interventions, for example, commonly fall
under more than one classifica-
tion. A leadership development program like the one described
in the Leadership Academy
vignette crosses the classifications of individual, team,
organization, human process, human
resource, and strategic, because potential leaders receive
individual development that affects
their interactions with groups and influences the overall
organization and future strategies.
Another example of an intervention that crosses all levels would
be the implementation of a
performance management system. Individual behavior is usually
affected when performance
is appraised, and this in turn influences groups and the
organization itself.
See Tips and Wisdom: French and Bell’s OD Interventions.
This Book’s OD Intervention Classification
Regardless of classification, each phase of action research
builds toward making one or more
interventions. This book uses three levels of intervention
classification: individual, group or
team, and organization. Each of the categories listed previously
can be accounted for under
one or more of these three categories. Table 5.5 lists typical
interventions at these levels. We
will further describe these interventions in Chapters 7 and 8.
Tips and Wisdom: French and Bell’s OD Interventions
As Table 5.4 showed, there are many types of interventions to
choose from, particularly based
on level. French and Bell (1999) classified OD interventions
into 14 types that center more on
the activities:
1. diagnostic activities, such as data collection and feedback to determine causes of problems
2. team-building activities, such as determining ground rules or assessing individual interaction styles
3. intergroup activities, such as interoffice collaboration or conflict resolution
4. survey feedback, such as climate assessment
5. education and training, such as a leadership workshop
6. technostructural or structural activities, such as technology implementation or reorganization
7. process consultation, such as group dynamics analysis
8. grid-organization development, such as determining management style based on levels of concern for people and concern for production (based on Blake & Mouton, 1964)
9. third-party peacemaking activities, such as mediation
10. coaching and counseling activities, such as executive
coaching
11. life- and career-planning activities, such as career
development or life coaching
12. planning and goal-setting activities, such as departmental
goal setting
13. strategic management activities, such as strategic planning
14. organizational transformation activities, such as
restructuring and new leadership
The Leadership Academy vignette interventions could be
classified according to French and
Bell’s (1999) list as 1, 2, 5, 10, 11, 12, and likely others.
Table 5.5: Levels of OD interventions

Individual-level interventions
• Learning and development
• Leadership and management development
• Career development
• Assessment
• Job development

Group-level interventions
• Group or team process and development
• Diversity and inclusion
• Conflict management
• Problem solving and decision making

Organization-level interventions
• Vision and mission development
• Strategic planning
• Organization design
• Culture
• Talent management
• Large-scale interactive events
Criteria for Choosing an Intervention
Argyris (1970, 2000) recommended that three primary
intervention tasks occur before mak-
ing any type of intervention:
• First, recommended interventions must be based on valid
information. This means
thoroughly engaging Phase 1 of the action research model by
collecting and analyz-
ing data on the problem before proceeding.
• Second, the client’s discretion and autonomy must be
respected; engagement in the
intervention must be based on the client’s free, informed choice.
• Third, the client must be committed to learning and change.
All three of these prerequisites had been met in the QuickCo
vignette featured in chapters 3
and 4 and were in place in the case of the Leadership Academy
vignette, which positioned the
organizations for intervention success.
There is usually more than one appropriate intervention for
every problem. Interventions
vary in terms of their implementation time frame, cost, scale,
level, and complexity. For exam-
ple, in the QuickCo vignette, the intervention involved a
relatively short time frame in which
a facilitated intervention was made with the shipping
department and some coaching was
provided to the supervisor. The Leadership Academy vignette
presents a much more costly,
long-term, complex implementation that will last for a year and
continue into the future.
Cummings and Worley (2018) defined an effective intervention
as one that fits an organiza-
tion’s needs, targets outcomes that will address the problem’s
root cause, and transfers com-
petence to the organization to manage future changes. A key
feature of the Leadership Acad-
emy was to build internal capacity for the state public health
agency to run future programs.
French and Bell (1999) advocated a strategic approach to
interventions that incorporates
goals, activities, and timing. The strategy also needs to
anticipate the organization’s readiness
to change, potential barriers, and sources of support and
leadership. French and Bell’s tips for
making effective interventions follow:
1. Include the relevant stakeholders.
2. Base the intervention on the data generated.
3. Involve the stakeholders in the action research process.
4. Keep the intervention focus on the key goal.
5. Set manageable, attainable goals.
6. Design the intervention so that key learning can be attained
and shared.
7. Emphasize collaborative learning throughout the process.
8. Use the opportunity for the client group to enhance learning
about the interpersonal
workings of the group.
Anderson (2016) outlined useful considerations for making
good intervention choices. First,
the intervention should be congruent with the data and diagnosis
from the discovery phase of
the action research model. Incongruence will result in solving
the wrong problem.
Second, the client readiness for change should be assessed.
Without a client that is willing
and able to change, the intervention will fail. Striking a
collaborative consulting relationship
is foundational to promoting readiness throughout the process.
Anderson’s (2016) third consideration is determining where to
intervene. Do you start
with top management or line management? Do you work on
relationships before issues
or vice versa? Would it be wise to pilot the intervention on a
small scale before rolling it
out to the whole organization? Do you start with the easiest or
most difficult aspect of the
implementation?
Anderson’s (2016) fourth consideration is the depth of the
intervention. Less deep interven-
tions are observable, whereas very deep interventions are more
abstract. The following are
Anderson’s depths, listed in order with some potential
examples:
• work content (tasks, skills, knowledge);
• overt group issues (communication, decision making, or
conflict);
• hidden group issues (coalitions and power);
• values and beliefs (quality, cooperation, stability); and
• unconscious issues (assumptions about how we do business,
culture).
Finally, Anderson’s (2016) fifth consideration is to sequence
activities to ensure optimal out-
comes. Consultants need to make the best use of data; be highly
effective, efficient, and quick;
and use relevant activities that minimize stress on individuals
and the organization.
5.3 Implementing OD Interventions
Now that the data has been analyzed and shared with the client
and an intervention has been
agreed upon, it is time to implement a solution. This is the
action phase of OD. “Implementa-
tion is . . . the point of the consultation” (Block, 1999, p. 247).
Consider This
Use French and Bell’s list to evaluate an intervention in which
you participated. How did the
intervention stack up against this list?
The moment of implementation or action is also the moment for
the client to visibly take
ownership of and lead the process. That is, the client will be
accountable for maintaining the
intervention in the future. If the client is not hands-on with
implementation, the entire proj-
ect will be at risk. If the consultant has managed the client
relationship well and insisted on
a joint process, the client should have little issue with taking
charge of the intervention. The
client will likely need ongoing coaching and support to help see
the implementation through
and build confidence in the process.
Determining the Consulting Role
Consultants have a range of options for how to conduct
themselves during the implementa-
tion. A consultant may elect to stay out of the way, take a
hands-on approach, or serve as
facilitator. As discussed in Chapter 3, the collaborative role is
the most effective and gener-
ally preferred in OD. Table 5.6 takes Cockman, Evans, and
Reynolds’s (1996) list of roles to
collaboratively facilitate intervention implementation and offers
some strategies for using
these roles.
Table 5.6: Consulting roles and strategies during implementation

Provide support and encouragement.
• Acknowledge the implementation effort in ways that give the client and employees recognition and appreciation.
• Offer praise and words of encouragement to those engaged in the implementation.

Observe and share feedback.
• Prepare clear and direct feedback and share it with the client.
• Develop observation checklists so the client can also participate in making observations and checking on progress.

Listen and offer counsel when things go wrong.
• Serve as a sounding board.
• Ask good questions to help the client find the answer instead of giving the answer.
• Mediate conflict as necessary.

Help the client modify and fine-tune the plan.
• Engage in ongoing evaluation of the implementation.
• Devise adjustments to the change as needed.

Identify process problems that impede implementation.
• Conduct ongoing evaluation and take quick action to make needed corrections.
• Create a process for identifying and resolving problems.

Bring together parts of the client system to address process issues (e.g., conflict, communication).
• Create an implementation task force that conducts regular audits of the implementation and has the authority to intervene as needed.
• Ensure that communication is ongoing with everyone involved in the implementation.

Bring people together from different disciplines or different parts of the organization to work on implementation.
• Create an implementation task force.
• Employ task force members to conduct communication, training, and evaluation related to the intervention.
Organize necessary training and education.
• Create ongoing training sessions that will help prepare the employees for the change.
• Develop in-house trainers to help with the training effort.

Work with managers to help them support the change process.
• Develop a means of communicating with managers so information can be shared and problems solved easily.
• Consider regular meetings, technology, or a mix.

Confront inconsistencies between the plan and how it transpires.
• Check the implementation progress to plan regularly and make adjustments.
• Decide on a protocol for making changes and stick to it.
Refer to Chapter 3 for more information about the consulting
relationship and how to interact
with clients throughout the action research process.
Promoting Learning Related to the Intervention
The intervention process moves the client from the current state
through a transitional phase
and into the new, desired state. Another way to think of it is in
terms of Lewin’s (1946/1997)
unfreezing, moving, refreezing change model introduced in
Chapter 2. People are engaged in
the unfreezing stage when they become aware of the need to
change, build the desire to create
change, and undergo a process of unlearning.
Imagine you decide to go on a diet. The unlearning is the
process of recognizing that your cur-
rent eating habits are unhealthy and searching for an alternative.
Moving is making the
changes, and this requires new learning. For example, you might
consider several diet plans or
review the basics of nutrition. Refreezing occurs when the new
behavior becomes part of your
lifestyle. Another way to think about this is to imagine a
company decides to cut costs. The
unlearning process involves recognizing areas where spending
is unnecessary or too high.
Moving or changing might involve reviewing expenditures and
cutting extraneous purchases
from the budget. Sometimes, travel budgets get cut in these
circumstances or the company
switches to cheaper raw materials to make its products. A more
extreme cost savings effort
might be to lay off workers. Refreezing occurs when new
spending behaviors and policies
have been adopted and the company is able
to sustain a lower cost threshold to operate.
“Human beings have always engaged in
learning—learning to survive, learning to
live in a social group, learning to understand
the meaning of our experiences” (Merriam
& Bierema, 2014, p. 44). Learning in the
workplace is no exception and is a common
focus of OD. Most change requires learning.
People often are not aware of a problem
until there is a crisis or they reflect on the
results they are achieving. At that point,
people begin to ask questions such as “Why
are we doing it this way?” “Is there another
way to think about this?” “What mistakes have we made and
what have we learned from
them?” or “What could be improved?” Asking such questions is
what is known as reflective
practice.
Crises or reflection can jolt people into a learning mode as they
build knowledge and under-
standing. This learning is often the catalyst for change. People
may learn the competition is
gaining an edge, their quality is declining, their relationships
are dysfunctional, or their man-
agement is lacking vision. These insights make them want to
act, and OD provides a process
for addressing these challenges through action research.
Learning happens at every phase of
the process, from discovery of the problem, to planning an
intervention, to maintai ning the
change, to evaluating its effectiveness. This section considers
the role of learning in the action
research process by appreciating the relationship between
learning and change and explor-
ing ways of facilitating client learning.
The Relationship Between Learning and Change
Most OD involves change, and most change involves learning.
Think about a change you made
in your life, such as switching jobs, starting a relationship,
moving to a new city, or pursuing
a goal. Chances are these shifts created new learning. Changes
often require new action or
new thinking that depends on new learning. For example, when
you switch jobs, you have to
learn how to navigate relationships and how to interact with
your new colleagues. This might
involve learning how your new boss likes to receive information
and make decisions and then
changing your behavior to accommodate the preferences of your
boss. Working with a new
team might involve learning how to be more assertive than you
have been in the past if you
are working with strong personalities or the team is looking to
you for the expertise you bring
to the organization.
Similarly, changes made in OD—such as heightened awareness
of interpersonal relations,
understanding through feedback, or attempting to change your
leadership style—also involve
learning. Certain conditions promote learning. For example,
adults are motivated to learn
when education is relevant to their current situation, work
challenges, or life needs.
Facilitating Client Learning
Knowles (1980) and Knowles and Associates (1984) developed
key principles related to
adult learning that are relevant to implementing change. These
principles are considered the
art and practice of teaching adults, also known as andragogy.
Principles of andragogy as they
relate to implementing change include the following:
1. As people mature, their self-concept moves from that of a
dependent personality
toward one of a self-directed human being. This means that
people desire to have say
and control in their learning. Building ways for affected
employees to have input
into the change and control over aspects of it will enhance buy-
in and adoption. For
example, if you implement a new procedure, engage the people
handling the process
in devising best practices.
2. People accumulate a growing reservoir of experience, which
is a resource for learning.
The people affected by a change have spent a great deal of time
in the organization
and have a repertoire of know-how related to the problem or
issue being addressed
by OD. Failure to tap their experience and knowledge will breed
resentment and
resistance. Find ways for the involved parties to contribute their
insights to the pro-
cess to enhance buy-in and minimize resistance.
3. People become ready to learn when the tasks and challenges
of life demand new
knowledge. My organization, for example, is converting to a
much-needed data man-
agement system. Although it is a major change, the employees
who have been wres-
tling with outdated, unresponsive, clunky databases have
eagerly attended training
and are excited about the implementation of the new
technology. When change is
communicated well and addresses a true need in the
organization, there is a better
chance that employees will be enthusiastic about learning to
adopt it.
4. People tend to be life or problem centered in their learning,
rather than subject cen-
tered. It is likely that many people did not have any interest in
birthing classes until
they were expecting a baby. That is because the learning was
timely and relevant to
their life. Similarly, someone would likely be more motivated to
take a wine-tasting
class (life centered) than an organic chemistry class (subject
centered). Changes
that are relevant to employees become learning opportunities.
Part of a consultant’s
role is to effectively communicate the relevance of planned
change and help those
affected see the linkage.
5. People are driven by intrinsic motivation. People are more
inclined to seek learning
that meets an internal need for knowledge or mastery rather
than an external need
for recognition or money.
6. People need to know the reason to learn something. People
will be resistant to learning
new software or changing their behavior if they are not
provided with a rationale. A
consultant’s job may well be to sell the OD effort and connect it
with the necessary
learning. When my organization announced the shift to a new
database, the rationale
was for ease of generating reports, combination of databases,
and a user-friendly
format. These were reasons that made sense to the users and
motivated their accep-
tance of and learning related to the change.
Facilitating Transformation Through Learning
Scharmer and Senge (2016) proposed “Theory U,” a theory of
learning and management
that helps leaders change unproductive patterns of behavior that
hurt relationships and
decision making. Recently, Scharmer (2018) published a
distillation of this model that uses
action research to address learning and leadership challenges in
organizations. Scharmer and
Kaufer (2013) distinguished two types of learning in the Theory
U model: learning from the
past and learning from the emerging future. They described this new model of learning and leadership extensively as "a framework for learning, leading, innovating and profound systemic renewal" (p. 18). The model is
called Theory U because of its U
shape (Figure 5.1).
Consider This
Think of the times you have been motivated to learn. How do
the principles of learning relate
to your own life? What about to changes you have experienced
at work? How can you craft the
change in a way that gives affected employees an opportunity
for input and control over the
learning?
Figure 5.1: The Theory U model
The Theory U model moves the client through the process of
letting go of the old (left side of the U)
and embracing the new (right side of the U).
From The Essentials of Theory U: Core Principles and
Applications, by C. O. Scharmer, 2018, San Francisco, CA:
Berrett-Koehler.
Scharmer and Kaufer (2013) proposed that “energy follows
attention” (p. 21) and that we
should therefore focus on what we want to create versus what
we want to avoid. That means
consultants need to keep clients focused on the outcomes they
seek, rather than the problems
they want to avoid. To understand this model of learning and
change, start at the top left of the
U. Moving down the left side of the U involves opening minds,
hearts, and wills. You can help
clients do this by observing them closely for ideas or practices
that are holding them back,
then feeding this information back to them. The bottom of the U
is a place of deep reflection
and shifting away from the problem toward the desired future.
Your role is to create activities
that help clients reflect on their problem. This might include examining key assumptions held by individuals and the organization, or raising new questions that have not yet been asked. Going up the right
side of the U involves acting—much like the doing phase of
action research. In these steps, you
develop a vision of the intended future and devise and
implement appropriate interventions.
Navigating change in the Theory U model requires what
Scharmer and Kaufer (2013) refer to
as “transform[ing] the three enemies” (p. 23). These are
• the voice of doubt and judgment (shutting down the open
mind),
• the voice of cynicism (shutting down the open heart), and
• the voice of fear (shutting down the open will).
[Figure 5.1 labels: Downloading (past patterns); suspending; Seeing (with fresh eyes); redirecting; Sensing (from the field); letting go; Presencing (connecting to source); letting come; Crystallizing (vision and intention); enacting; Prototyping (by linking head, heart, hand); embodying; Performing (by operating from the whole). The movement down and up the U opens the mind, the heart, and the will.]
Scharmer and Kaufer (2013) suggested beginning by focusing
on the future and paying par-
ticular attention to where the past seems to end. This “place” is
similar to what Bridges (1980)
called the neutral zone (see Chapter 2).
Theory U is an innovative, future-oriented change model worth
knowing. Resources for fur-
ther study of this model are listed at the end of the chapter.
5.4 Monitoring and Sustaining Change
A vast amount of planning and work goes into making an
intervention. As previously noted,
making change is easier than sustaining change. A consultant’s
job is to keep the client on track
to successful change implementation. This requires knowing the
warning signs of a faltering
intervention and how to get the client back on track to a
successful, sustained intervention.
See Case Study: Reorganization Resistance for an example of an
organizational change that
may not have been planned effectively.
Case Study: Reorganization Resistance
The CEO of a publishing company instructs Brenda Frank, the
president of one of its divi-
sions, to reorganize its management structure. The division was
recently purchased and is
not aligned with the other divisions. The CEO thinks that the
division has too many vice presi-
dents and management layers and that its administrative
structure is too expensive. Brenda is
unconvinced that restructuring is the best answer, due to the
niche of the publishing division,
but she also understands her marching orders. She contacts a
consultant, George Reed, with
whom she has worked before on leadership development issues.
“George,” she says, “I’ve got to find a way to reorganize that
makes my CEO happy. According
to corporate, we have too many layers and too many VPs. I’m
going to need your help to figure
this out. Can you help?”
George pauses for a moment before answering. He is an expert
at leadership but has limited
experience with the kind of restructuring Brenda is asking for.
“I’m not sure that falls within
my expertise, Brenda, but I am willing to hear more about the
matter. Let’s meet.”
A meeting is set, and George and Brenda discuss the change.
George is hesitant and tells Brenda
she might be better off with someone else. “Nonsense!” she
exclaims. “You are an expert at
leadership. How hard can this be? Let’s get to work.”
Brenda and George set about planning the change. The next
thing Brenda and George do is call
a meeting of the vice presidents to notify them of the change.
Brenda opens the meeting. “We
are going to have to reorganize,” she says. “According to
corporate, we have too many layers
and too many VPs. I’m asking for your help in this process.”
Brenda explains that over the com-
ing weeks, George will be meeting with them to discuss their
functional areas and collect data
to help inform the change.
(case study continues below)
Reasons Interventions Fail
Perhaps your organization has experienced interventions such as
training, survey feedback,
or restructuring. Can you think of interventions that failed? This
section examines reasons
interventions fail and the implications of such failures.
Interventions fail for several reasons.
First, the organization may not be ready for change. In addition,
certain flaws inherent in the inter-
vention design itself can contribute to failure. Anderson (2016)
identified 10 reasons inter-
ventions fail. They are listed in Table 5.7, along with tips for
fixing intervention failures.
Case Study: Reorganization Resistance (continued)
After the meeting, the groans and complaints from the VPs are
largely uniform. The comments
in the hallway range from anger to disbelief to denial:
“I’ll tell you what, we are not at all valued. This is a signal
we’d better all be dusting off
our résumés.”
“Well, that’s the dumbest idea I’ve heard out of corporate since
we were acquired. They
have no idea what it takes to run our business and have given no
rationale for the
change other than they think it will save money. What about the
money it could lose?”
“This plan will never work. Let’s just keep our noses to the
grindstone and ride it out.”
George and Brenda have a good working relationship, so they
forge ahead and try to make the
best of a difficult situation. George begins studying the
organization chart and interviewing
the VPs. Together, they come up with a new structure that
merges 10 departments into six,
displacing four VPs. The rollout of the change involves holding
individual meetings with the
VPs to unveil the new structure. Brenda works hard to find new
roles within the company for
the displaced VPs, but she is not entirely successful and winds
up laying off two of them. Once
the personnel changes have been made at the individual VP
level, Brenda crafts an email to
all employees with a new organization chart and informs them
that the changes are effective
immediately.
The reorganization announcement throws the organization into a
frenzy. It catches the
employees by surprise; they see no reason for the changes.
Immediate reactions are anger,
fear, and suspicion. Employees are nervous about their job
security and the integrity of their
work units. The remaining VPs are unclear about how to
implement the changes or how to
manage the new staff units of the merged departments.
Productivity and morale plummet.
Several employees at multiple levels begin to look for other
jobs. Customers begin to complain
about a lack of support or clarity about whom to contact to meet
their needs.
Clearly, Brenda and George have a disaster on their hands. They
thought they were doing
things right, but obviously they were not.
Critical Thinking Questions
1. What did Brenda and George do wrong?
2. What would you do differently?
Table 5.7: Intervention failures and fixes

Failure 1: The intervention attempted to solve the wrong problem.
• Ensure that Phase 1 of the action research process arrives at the correct diagnosis.
• Involve multiple stakeholders to analyze the problem and provide inputs.

Failure 2: The wrong intervention was selected.
• Ensure that Phase 1 of the action research process plans an appropriate intervention.
• Identify a backup intervention if it becomes clear that the selected one fails to meet the need.

Failure 3: Goals were ambiguous, unclear, or too lofty.
• Work with the client to establish a clear purpose and goals for the intervention.
• If there is no clarity of purpose, intended outcomes, and process to achieve them, an intervention is not ready to be implemented.

Failure 4: The intervention was undertaken as an event rather than as a program of activities with multiple targets for change (strategy missing).
• Develop a long-term implementation strategy using a Gantt chart (see Chapter 3).
• Distinguish interventions from intervention strategy.

Failure 5: Not enough time was devoted to change.
• Estimate how long it will take to make the intervention and then add at least 10% more time.
• Build time into the workday to implement the change. This is part of the resource allocation the organization has to make if it is committed to change.

Failure 6: The intervention was poorly designed to reach the specified goals.
• Ensure that Phase 1 of the action research process arrives at the correct diagnosis and appropriate intervention.
• Engage employees in intervention design—they will be the best debuggers and critics and help get it right the first time.

Failure 7: The consultant was not skilled at implementing the intervention.
• Hire the right consultant.
• Part ways with the consultant if you are not getting what you need.

Failure 8: Responsibility for change was not transferred to the client.
• Establish client accountability for monitoring and sustaining change during the contracting phase (see Chapter 3).
• Provide the necessary learning and development to managers and leaders to assume accountability for the change.

Failure 9: Organizational members resisted or were not committed to the intervention.
• Follow the recommendations for promoting change readiness.
• Watch for evidence of resistance and follow the strategies in this chapter to respond to it.

Failure 10: The organization was not ready for change.
• Prepare management for the change first so it can provide support to employees.
• Prepare employees for the change prior to implementation.

Source: Adapted from Organization Development: The Process of Leading Organizational Change (pp. 206–208), by D. L. Anderson, 2016, San Francisco, CA: Sage.
Failed interventions have serious implications for the consultant
and the organization (Ander-
son, 2016). They can damage a consultant’s reputation, causing
the consultant to lose clients
and future referrals. Failed interventions can also be detrimental
to a consultant’s sense of
self-efficacy and trust in his or her intuition. This same level of
self-doubt can plague orga-
nizations with failed OD efforts and may cause organization
members to distrust their own
intuition about organization problems or ability to implement
lasting change. In fact, failure
can become a repetitive cycle for consultants and organizations
if confidence in the process
is not quickly restored.
Argyris (1970) noted that other implica-
tions for failed interventions on the organi-
zation level include increased defensiveness
against any change; diminished ability to
cope through conflict resolution and pro-
ductive communications; waning energy to
work on solving the problem; increased
frustration, stress, cynicism, and controlling
behaviors; and unrealistic goals (aiming too
high or too low to avoid future risk or
failure).
Overcoming Resistance
A lesser-known definition of change readi-
ness is “the cognitive precursor to the behaviours of either
resistance to, or support for a
change effort” (Armenakis, Harris, & Mossholder, 1993, pp.
681–682). When employees
express stress, negativity, or cynicism toward the change, they
are showing resistance. Resis-
tance has also been defined as “an adherence to any attitudes or
behaviours that thwart orga-
nizational change goals” (Chawla & Kelloway, 2004, p. 485).
Resistance behaviors might be
readily visible, such as sabotage or vocal opposition. Or they
may be subtler, such as reducing
output or withholding information (Giangreco & Peccei, 2005).
Resistance may also take the
form of ridiculing the change, boycotting change conversations,
or sabotage (Lines, 2005).
The beginning of this chapter discussed readiness to change.
Readiness is related to resis-
tance because, when people or organizations are not prepared to
change, they will likely find
ways to stall or derail the change effort.
See Assessment: Test Your Change Resistance to find out how
much you embrace change.
Causes of Resistance
Resistance to change might be caused by management’s
dismissal of employee input or fail-
ure to handle negative attitudes toward the change, or it might
arise because the level of
employee input in planning, implementation, and change
maintenance is too low (McKay et
al., 2013). Most people do not like change, so resistance is the
general disposition most will
initially experience. Resistance can also occur on ethical and
strategic grounds if employees
do not regard the change as favorable to the organization and its
stakeholders (Oreg, 2006;
Piderit, 2000).
Acknowledging Resistance
Management may be tempted to disregard resistance; however,
it is a mistake to ignore it.
When employees resist change and relay concerns about it, they
are behaving normally.
Impending change causes fear and a sense of personal loss and
grief among employees who
find value and a sense of security in their daily routine and
work group (Burke, Lake, & Paine,
2008). Sometimes, employees just need an opportunity to raise
issues and have management
hear their fears. Dismissing employees’ concerns or
disregarding how the change will affect
employees’ sense of security and trust in the organization risks
intensifying negative atti-
tudes, increasing resistance behaviors, and compromising
effective change implementation.
Instead, it is to the organization’s advantage to create
opportunities for dialogue about the
change and to seek solutions that resolve the concerns. How
leaders talk about change also
matters (see Tips and Wisdom: Discussing Change). Cheung-
Judge and Holbeche (2015) sug-
gested talking about change in a less directive and more
inclusive way. Table 5.8 shows the
differences.
Table 5.8: Using inclusive language about organization change

Instead of saying: "We [management] are managing change."
Say instead: "We invite you [employees] to participate and influence the change initiative."

Instead of saying: "We are dealing with resistance to change."
Say instead: "Here's what to expect as we make these changes."

Instead of saying: "We are allowing them [employees] to . . ."
Say instead: "We invite you to participate."

Instead of saying: "We are giving them the opportunity to . . ."
Say instead: "We invite you to identify ways of navigating the change."

Instead of saying: "We are gaining buy-in."
Say instead: "What processes will help you adjust to the change?" "What support do you need?"

Adapted from Organization Development: A Practitioner's Guide for OD and HR (pp. 161–162), by M. Cheung-Judge and L. Holbeche, 2015, London, England: Kogan Page.

Assessment: Test Your Change Resistance

Would you consider yourself generally open to change, or are you more inclined to eschew it? Most people claim to welcome change, yet they tend to waver when it happens to them. You may find it hard to believe that inventions such as lightbulbs, coffee, air travel, umbrellas, taxis, personal computers, and vaccines were widely mocked upon their introduction (Nguyen, 2016), yet these innovations are things we now depend on for our lifestyle, career, convenience, and health. Although you probably don't like or even care about changes foisted upon you, you may need them and not even realize it.

Find out how much (or little) you embrace change by taking the following survey: http://pluto.huji.ac.il/~oreg/questionnaire.php.
Ethical Issues Pertaining to Interventions
Integrity and authenticity help the OD process run smoothly and
avoid failed interventions.
OD ethics have also been discussed in Chapters 1 and 3. There
are some important principles
to keep in mind to ensure that the intervention process is
ethical.
Avoid Misrepresentation
Although it is tempting to avoid telling clients what they do not
want to hear, it is a mistake
to misrepresent the intervention’s timeline, cost, or difficulty.
This mistake can occur due to
inexperience, overpromising, or trying to please a client. A
better strategy is to underpromise
and overdeliver. That way, there are no surprises in the long
run. It is also important that you
know the limits of your skill set as a consultant. If you promise
to deliver a skill or knowledge
you do not have, it can create distrust and anger with the client,
put the intervention in danger
of failure, and imperil your reputation as a consultant.
Avoid Collusion
Colluding with the client is another ethical challenge. For
example, you might scheme to adopt
an intervention because it is appealing or interesting or because
it will bring you more busi-
ness as a consultant. If you lack evidence to support the need
and appropriateness of an inter-
vention, it is unethical to recommend it. It is also bad ethics to
conspire with the client in ways
that result in distortion of the process and exclusion of others.
For example, if you know that a
certain manager is going to disagree with your desired course of
action and you exclude him
or her from a meeting where it is discussed, this is considered
colluding to exclude.
Tips and Wisdom: Discussing Change
A simple exercise to help employees discuss change is to give
them an opportunity to talk
about their hopes for the change as well as their fears about the
change. It is useful to record
these (often on a flip chart or whiteboard) and for management
to respond to the fears, which
helps defuse them. This activity can be done in a meeting
format or via an electronic forum or
survey.
Avoid Coercion or Manipulation
Finally, it is unethical to coerce or manipulate the client or
members of the organization. This
might involve blocking opportunities for organization members
to participate in the decisions
about the process, which in effect foists the intervention on
them. This and other ethical chal-
lenges noted can be avoided by following a good action research
process that generates data
on which to base decisions while actively involving
organization stakeholders in the process.
Sustaining Change
Whether the change has been on an individual, group, or
organization level, implementing it
is the easy part. Sustaining it is where trouble occurs.
Successful change implementation may
cause overconfidence, which fosters unpreparedness for the difficult work of maintaining it (Anderson, 2016; Senge et al., 1999). Anderson cautioned
that relapsing to old ways of
being is an implementation hazard, especially when an external
consultant exits the picture.
Change also requires energy that organization members may
lack, because other distractions
may pull them away from consciously maintaining the change.
The education necessary for
full change adoption may not keep pace with the change
implementation, making it difficult
to sustain. Sometimes, the old organization culture and
practices are just too powerful for the change to overcome, leaving the organization vulnerable to reverting to old ways of doing business.
Actions to Sustain Change
How can organizations avoid these pitfalls to lasting change?
Change should be translated
into the organization’s daily operations so it simply becomes
the way business is done. Strate-
gies that help sustain change include the following:
1. Communicate regularly about the change implementation.
This could be via regu-
lar meetings, written or electronic communication, social media,
and informal
conversations.
2. Formulate an implementation task force that includes top
leaders and affected
employees. This group can hold regular meetings and help
communicate the change.
3. Hold meetings that include a cross-functional, intragroup
mixture of people involved
with the change to monitor progress, troubleshoot, and evaluate
the process.
4. Find ways to reward and recognize employees involved in the
implementation. This
might include visible items such as T-shirts or trinkets, awards,
monetary rewards,
or time off.
5. Build change implementation into the performance review
criteria of those employ-
ees accountable for supporting and sustaining change.
6. Invite external stakeholders and consultants to evaluate the
change progress.
7. Ensure that the reward system is aligned with the desired
changes.
8. Provide the learning and development needed to sustain the
change.
9. Ensure that needed resources are available to sustain the
change.
See Tips and Wisdom: The Challenge of Change.
Strategies to Defuse Challenges to Change
During implementation, a consultant will
monitor the client for signs of low commit-
ment, such as anger, hostility, objections,
inflexibility with implementation options,
unwillingness to look at process issues, hid-
den agendas, delaying tactics, or failure to
implement. To successfully make an inter-
vention, there must be commitment and
leadership from the top, individual compe-
tence, and adequate organization. When
consultants observe signs of waning com-
mitment, they will want to take action
quickly. Faltering commitment will nega-
tively affect learning and lasting change.
Senge and colleagues (1999) identified 10
challenges created by resistance to change that relate to
initiating change, sustaining change
momentum, and meeting the challenges of redesigning and
rethinking processes and proce-
dures during and after the change:
Time challenges. Employees can feel frustration or worry that
they do not have enough time
to learn or implement changes. Countering this challenge requires giving employees flexibility and time to process and implement the change.
Support and help with change implementation. Employees will become frustrated and disenchanted if coaching, guidance, and support are absent during the change. It is important to provide the resources to support the change and to ensure that management is skilled in this area.
Perceptions that the change is not relevant. When employees do
not see the rationale for
change or its relation to the big picture or business reasons,
they may ignore it because they
think it does not matter. Establishing the need for and relevance
of change is necessary from
the beginning of the action research process. The need for
change must be tied to business
goals, new learning, changed procedures, and processes so
employees are not left wondering
why they have to make changes.
Tips and Wisdom: The Challenge of Change
If there is no struggle, there is no progress.
—Frederick Douglass
Change is hard work. Well-implemented changes make
tremendous differences for individu-
als, teams, and organizations. The OD and action research
processes greatly enhance the prob-
ability of success in change endeavors.
Management fails to set an example. When management fails to
“walk the talk,” people notice.
Management must be held accountable for visibly and
personally supporting and implement-
ing the change.
Mounting fear and anxiety. When changes are implemented,
employees can get nervous. They
may feel vulnerable, unable to adopt the change, and distrustful
of the change and manage-
ment. Open and candid communication from the beginning is a
must, along with management
setting a good example.
Perceptions that the change is not working as intended.
Employees might be negative about the
change and look for evidence that it is not working. This
perception can serve as an excuse to
return to the way things were. It is important to show how the
change is resulting in progress
and intended outcomes. If metrics are available, it is helpful to
show that “since implementing
the change, our defects have decreased by 10%,” for example.
Perceptions that the old way of doing things was better. These
perceptions can lead employee groups to feel like victims who are disrespected or
misunderstood by management. These
perceptions are countered by ongoing, effective communication
about the change and its
need.
Confusion about who is responsible for the change and new
procedures. Change can natu-
rally breed confusion over new procedures and policies.
Management can help by modeling
patience, flexibility, and problem solving to create new
infrastructure when making changes.
Frustration that the organization is doing nothing but
“reinventing the wheel.” Employees
can get frustrated when they feel like no real change is
occurring or that the change has not
improved the problem. Making the case for change early in the
process can help minimize
frustration and help people focus on the change’s future
benefits.
Confusion about the purpose and benefit of the change in the
bigger organization picture.
Employees may not immediately link the change to organization
strategy and purpose. Man-
agement can help by showing how the change will benefit the
business and its stakeholders.
As has been stressed throughout this chapter, key themes in
avoiding resistance include timely
and ample communication about the change, providing a clear
rationale for why the change is
needed, and management support and role modeling throughout
the change process.
Summary and Resources
Chapter Summary
• Propensity to accept change depends on the level of change
readiness for both indi-
viduals and the organization. Change readiness signals that
employees perceive the
change as necessary and attainable and are willing to support its
implementation.
• Change readiness is influenced by clear management
communication and employ-
ees’ confidence in management’s attitudes, knowledge, and
skills to effectively
implement the change.
• Interventions are more likely to be accepted when the change
has been clearly com-
municated and employees have an opportunity to participate in
its planning and
implementation.
• OD interventions are change activities that help resolve the
presenting problem.
• Interventions can be classified in multiple ways, including
diagnostic, confrontive,
level, or process. This book classifies them according to
individual, group or team, or
organization level.
• The criteria for making an effective intervention include
basing the intervention on
valid data, verifying the client’s free and informed choice to
proceed, and establish-
ing the client’s commitment to learning and change.
• During the implementation, consultants should be clear about
the type of consulting
role they want to play. Roles vary from less-involved
observation of the implementa-
tion to active engagement in providing feedback, modifying the
plan, and provid-
ing needed training and support. The role the consultant plays
will depend on how
skilled the client is at leading and facilitating change.
• It is important to promote learning related to the intervention
by encouraging reflec-
tive practice, helping employees see the connection between
learning and change,
and facilitating client learning by building principles of
andragogy (effective adult
learning) into the intervention.
• Theory U embraces the idea that energy follows attention. In
consulting, this means
consultants need to keep clients focused on the outcomes they
seek rather than the
problems they want to avoid.
• Interventions fail for multiple reasons, including lack of
change readiness, resis-
tance, poor levels of management communication and support,
and a flawed OD
process that results in the wrong problem being solved,
ambiguous goals, inade-
quate time being allotted, poor design, ineffective consulting,
failure to ensure client
accountability, and lack of organization commitment.
• Ethical issues abound in OD. During the intervention phase,
such issues include mis-
representation, collusion, and coercion.
• Resistance to change can be overcome by open and regular
communication from
management that engages employees in the change’s planning
and implementation
and acknowledges the fears and concerns that underlie
resistance.
• Change can be sustained by regular communication, broad
engagement of employ-
ees in monitoring the change, rewarding and recognizing
employees who are com-
mitted to the change effort, and providing the necessary
learning and support.
Think About It! Reflective Exercises to Enhance Your Learning
1. The chapter began with a vignette about a Leadership
Academy for a state public
health agency. Can you recall an intervention you participated
in? What was it? How
was it executed?
2. Recount a time you or someone you know participated in an
OD intervention led by
a consultant. What were the outcomes and consequences? How
well did the consul-
tant do, based on the principles presented in this chapter?
3. Think back to a change you experienced in either your
professional or your personal
life. How applicable are the principles of andragogy to your
experience?
4. When was the last time you reflected on your assumptions,
thoughts, and actions
related to an idea, practice, or process? Make an appointment
with yourself to
engage in some deep thinking, and journal about what emerges.
5. Assess the changes you have made in your life or
organization and evaluate how well
you maintained the change. Do you agree with the argument that
change is easier to
make than maintain? Why or why not?
Apply Your Learning: Activities and Experiences to Bring OD
to Life
1. Use the transtheoretical model of health behavior change to
assess a change you
have made. Did you follow the steps? Why or why not?
2. In Chapter 4, one of the activities was to identify a problem
in your organization
and plan a data collection process to examine the issue.
Assuming you did that, how
would you go about planning an intervention to address it?
What level(s) of inter-
vention would be most appropriate (individual, group, and
organizational)?
3. Refer back to Tips and Wisdom: French and Bell’s OD
Interventions in section 5.2 and
reclassify French and Bell’s 14 types of interventions into the
model we are using in
this book (individual, team, and organization).
4. Using Table 5.5, take a real example of implementation and
identify specific roles
and strategies you would use to support the intervention
implementation.
5. Go back to the case study in section 5.4 and use the key
points in this chapter about
change readiness and resistance to change to identify at least
five mistakes made by
the division president and consultant during the change process.
6. Have you experienced a failed OD intervention? If so, use the
information presented
in this chapter on effective interventions and reasons
interventions fail to diagnose
what went wrong.
7. Apply the steps from the Theory U model to a behavior
change, organization change,
or new learning you have made or hope to make.
8. Using Hord and Roussin’s (2013) readiness for change
checklist presented in Table
5.2, assess your readiness to make a change that is impending.
Additional Resources
Media
• Theory U: An Interview With Dr. Otto
https://www.youtube.com/watch?v=k8HKxvKVUsU
• Change Is Good... You Go First
https://www.youtube.com/watch?v=jwxrsngEJDw
Further Reading
Merriam, S. B., & Bierema, L. L. (2014). Adult learning:
Linking theory and practice. San Francisco, CA: Jossey-Bass.
Scharmer, C. O. (2009). Theory U: Leading from the future as it
emerges. San Francisco, CA: Berrett-Koehler.
Key Terms
andragogy The art of teaching adults; a
series of principles for effectively facilitat-
ing adult learning.
confrontive interventions Activities
that occur as a result of the data collected
and analyzed during the action research
process.
diagnostic interventions Activities that
address issues as they arise during the
OD–action research process of contracting,
initial meetings, or data collection.
intervention activity A single event,
task, meeting, or workshop implemented
to address a problem or issue in the
organization.
intervention strategy A sequence of tasks
or series of intervention activities that occur
over several weeks or months to address a
problem or issue in the organization.
readiness for change A perception that
making a change is necessary and achiev-
able and that willingness to support the
change effort exists.
reflective practice The process of ques-
tioning the assumptions that underlie
thoughts and actions.
resistance An expression of stress, nega-
tivity, or cynicism toward a change that
can thwart achieving the change goal,
along with the general absence of change
readiness.
shared vision A mutual picture of a
desired future state that an organization’s
members seek to achieve together.
Theory U A model of learning and leadership that embraces the idea that energy
follows attention. In consulting, this means
consultants need to keep clients focused
on the outcomes they seek rather than the
problems they want to avoid.
4 Action Research: The Planning Phase
Learning Outcomes
After reading this chapter, you should be able to:
• Describe action research and compare Lewin’s model with
those of at least two other
OD theorists.
• State the importance of considering multiple levels of analysis
in the planning phase.
• Identify the steps of the planning phase.
• Describe different types of research.
• Describe different types of research methodologies.
• Discuss five methods of gathering organization data, including
strengths and weaknesses of each.
• Discuss methods of analyzing the data collected.
• Explain how to prepare for and manage the feedback meeting,
including how to address
confidentiality concerns and manage defensiveness and
resistance.
In Chapter 3, the QuickCo vignette provided one example of
how OD consultants work. Jack,
the internal OD consultant at QuickCo, led his clients, Ned (the
shipping supervisor) and Sarah
(the manufacturing manager), through an action research
process to solve communication and
teamwork problems in the shipping department. Action
research, the process OD consultants
follow to plan and implement change, follows three general
phases:
1. Planning. Data is collected, analyzed, and shared with the
client to determine
corrective action.
2. Doing. Action is taken to correct the problem.
3. Checking. The effectiveness of the intervention is evaluated,
and the cycle is
repeated as needed.
Let us return to the QuickCo vignette and examine the action
research steps taken. Ned and
Sarah met with Jack to outline how employees were at each
other’s throats, letting conflicts
fester, and failing to work well together. Their first meeting
incorporated their planning phase.
As explained in Chapter 3, this initial meeting is known as
contracting. During the meeting,
Jack asked questions to begin identifying the root cause of the
conflicted department. The three
struck a collaborative agreement and worked to devise a plan
for resolving the issues.
The first action they took was to collect data. Jack reviewed the
performance trends and cus-
tomer complaints from the shipping department and interviewed
the employees individually
about their views on the problems.
The planning also involved analyzing the data Jack collected to
arrive at a diagnosis. When he
met with Ned and Sarah to share feedback from the data
collection, Jack presented his analysis,
noting, “Ned and Sarah, you have a dysfunctional team on your
hands. They have no ground
rules, collaboration, or means of handling conflict. Everyone
needs to be more understanding
and respectful toward each other. It would also be helpful to
create some guidelines for how the
team wants to operate and manage conflict. Ned, you also need
to take a more active role in
resolving issues.”
Jack laid the problems out in a matter-of-fact, nonjudgmental
way. Once all the analyzed data
was presented, the three worked jointly to plan an intervention
to address the problems. They
agreed to take the group through a facilitated process to address
communication and team
effectiveness. They also agreed that Ned would benefit from
individualized executive coaching to
help him learn behaviors that would be more productive for
dealing with conflict.
The second phase of action research, doing, occurred when
Jack, Ned, and Sarah scheduled the
intervention with the shipping department and implemented it.
The outcome of the intervention
was a tangible plan for the department for how to be more
effective, including specific actions
they would take to address conflict.
The final phase, checking, involved Ned, Sarah, and Jack
continuing to monitor the shipping
department after the intervention. Ned helped the department
uphold its new ground rules on a
daily basis and coached employees to help them stick to the
plan. He also asked for regular feed-
back on his own management skills as part of his ongoing
coaching. Ned, Sarah, and Jack
reviewed departmental data on productivity and customer
complaints and learned that the
timeliness and accuracy of shipped orders
had significantly improved. Jack followed up
a few months later by conducting individual
interviews with shipping department mem-
bers. He discovered that the solutions had
been maintained. If and when new conflicts
arise, or new members join the team, it may
be time to start the action research process
over again to address new issues.
The QuickCo vignette demonstrates all three
phases of the action research process. This
chapter focuses on the first phase, plan-
ning. Chapters 5 and 6 provide a similarly
detailed look at the second and final phases,
doing and checking, respectively. But before
turning to the planning phase, let us review
action research.
4.1 A Review of Action Research
Chapter 1 defined OD as a process of planned change that is
grounded in a humanistic, demo-
cratic ethic. This specific process of planned change is known
as action research.
Defining Action Research
Action research is a recurring, collaborative effort between
organization members and OD
consultants to use data to resolve problems. As such, it involves
data collection, analysis,
intervention, and evaluation. Essentially, it is a repeating cycle
of action and research, action
and research. However, the words action research reverse the
actual sequence (Brown, 1972),
in that “research is conducted first and then action is taken as a
direct result of what the
research data are interpreted to indicate” (Burke, 1992, p. 54).
Moreover, the cycle yields new
knowledge about the organization and its issues that becomes
useful for addressing future
problems. It thereby allows organizations to improve processes
and practices while simulta-
neously learning about those practices and processes, the
organization, and the change pro-
cess itself.
Action research provides evidence, which enables a consultant
to avoid guesswork about
what the issue is and how to resolve it. According to French and
Bell (1999),
Action research is the process of systematically collecting
research data about
an ongoing system relative to some objective, goal, or need of
that system;
feeding these data back into the system; taking actions by
altering selected
variables within the system based both on the data and on
hypotheses; and
evaluating the results of actions by collecting more data. (p.
130)
Action Research Is a Democratic Approach to Problem Solving
Many theorists have characterized action research as democratic
and collaborative:
• “Action research is a participatory, democratic process
concerned with developing
practical knowing in the pursuit of worthwhile human purposes,
grounded in a par-
ticipatory worldview” (Reason & Bradbury, 2008, p. 1).
• “Action research is the application of the scientific method of
fact-finding and experi-
mentation to practical problems requiring action solutions and
involving the col-
laboration and cooperation of scientists, practitioners, and
laypersons” (French &
Bell, 1999, p. 131).
• “Action research approaches are radical to the extent that they
advocate replacing
existing forms of social organization” (Coghlan & Brannick,
2010, p. 6).
In addition, Coghlan and Brannick (2010) identified broad
characteristics of action research:
• Research in action, rather than research about action
• A collaborative, democratic partnership
• Research concurrent with action
• A sequence of events and an approach to problem solving (p.
4)
These definitions are similar in that they all characterize action
research as a democratic,
data-driven, problem-solving, learning-based approach to
organization improvement. Some
other examples of how organizations apply action research
include a nonprofit organization
that surveys donors or beneficiaries before engaging in strategic
planning, a government
department that conducts a needs analysis prior to a training
program, or a corporation that
conducts exit interviews before initiating recruitment for
positions.
Action Research Helps Clients Build Capacity for Future
Problem Solving
Although typically guided by a consultant, action research
engages key stakeholders in the
process. Indeed, its effectiveness depends on the active
engagement and accountability of
the stakeholders. As discussed in Chapter 3, OD consultants are
responsible for influencing
the action research process while at the same time exercising
restraint to avoid solving the
problem for the client.
An example can illuminate how action research helps the client
build problem-solving capac-
ity. Suppose an organization introduces a process of
assimilating new leaders when they join
Consider This
Can you recall a project in your organization that involved
members in a collaborative prob-
lem-solving mission? Chances are it was action research, even
if that terminology was not
used. Can you think of any other examples?
it (action). The organization hires a consultant to survey team
members about this initiative’s
effectiveness (research). The client and the consultant
collaborate to develop the survey and
analyze the results. What is learned informs continued
assimilation of new leaders and the
way the process gets modified (action). The client is initially
engaged to learn the process so
that it can be repeated in the future without the help of a
consultant. The action research pro-
cess helps the organization collect, analyze, and apply data to
make informed decisions and
not waste time and money on inappropriate interventions.
Helping organizations become
proficient at the action research process is the outcome of
effective consulting, because the
best consultants work themselves out of a job.
Models of Action Research
Recall from Chapter 1 that action research originated with the
work of Kurt Lewin, the father
of OD. Lewin’s model (1946/1997) includes a prestep (in which
the context and purpose of
the OD effort are identified), followed by planning, action, and
fact finding (evaluation). Sev-
eral models of action research generally follow Lewin’s,
although the number and names of
steps may vary. See Table 4.1 for a comparison.
Table 4.1: Comparison of action research models to Lewin’s
original model
Lewin’s (1946/1997)
original action
research steps
Cummings and
Worley (2018)
Coghlan (2019) Stringer (2013)
1. Prestep to
determine context
and purpose
1. Entering and
contracting
0. Prestep:
Understanding
context and
purpose of the
issue
1. Constructing:
Determining what
the issues are
1. Look
a. Gather relevant
information
b. Build a picture;
describe the
situation
2. Planning 2. Diagnosing 2. Planning action 2. Think
a. Explore and
analyze
b. Interpret and
explain
3. Action 3. Planning and
implementing
change
3. Taking action 3. Act
a. Plan
b. Implement
c. Evaluate
4. Fact finding
(evaluation)
4. Evaluating and
institutionalizing
change
4. Evaluating action
Figure 4.1: Plan, do, check action research cycle
The plan, do, check model of action research was popularized
by the total quality movement. The
contemporary research cycle has more steps, although it
essentially accomplishes the same steps of
diagnosing and designing (plan), implementing (do), and
evaluating (check).
The model of action research used in this book has three phases,
paralleling Lewin’s
(1946/1997) model (Figure 4.1): planning, doing, and checking.
(See Who Invented That?
Plan, Do, Check Cycle to read about the person who originally
developed plan, do, check.) Each
phase has substeps derived from multiple action research
models:
1. Planning (the discovery phase)
a. Diagnosing the issue
b. Gathering data on the issue
c. Analyzing the data gathered
d. Sharing feedback (data analysis) with the client
e. Planning of action to address the issue
2. Doing (the action phase)
a. Learning related to the issue
b. Changing related to the issue
3. Checking (the evaluative phase)
a. Assessing changes
b. Adjusting processes
c. Ending or recycling (back to the planning stage) the action
research process
The action research steps may look simple, and it may appear
that planning change is a neat,
orderly, and rational process. In reality, though, it can be
chaotic, political, and shifting, with
unexpected developments and outcomes. Nevertheless, learning
the action research process
equips consultants with a proven method for navigating such
shifts as they work with clients
on organization challenges.
4.2 Planning: The Discovery Phase
When beginning an OD intervention, the initial steps taken to
identify the problem and gather
data about it are known as planning. The planning phase is a
diagnostic one. The client and
consultant work with other organization stakeholders to study
the problem and determine
the difference between desired outcomes and actual outcomes.
The discrepancy between
what is and what should be is known as a performance gap. For
example, if an organization
aspires to be first in quality in the industry but lags behind in
second or third place, that would
be a performance gap. The organization would have to engage
in performance improvement
practices to close the gap with its competitors. Or, perhaps a
leader receives feedback that she
is not as skilled at leadership as she had thought. The leader
begins to work with a mentor or
coach to identify what behaviors she needs to be more effective.
By improving listening, rec-
ognition, and delegation behaviors, the leader begins to narrow
the gap between her current
and desired future leadership performance.
Organizations perform gap analysis to assess reasons for a gap
between reality and the
desired outcome. The performance gap idea can also be applied
to yourself. Let us say you
aspire to a managerial position but have not achieved it. Upon
analyzing the gap, you realize
you lack the training and experience to attain the position. If
you decide to eliminate the gap,
you might enroll in a graduate program, earn a leadership
certificate, or find a mentor to help
you attain your goal. Consider a performance gap you have
experienced and complete the
chart in Figure 4.2.
Who Invented That? Plan, Do, Check Cycle
Although often attributed to quality guru W. Edwards Deming,
the plan, do, check cycle was
created by Walter A. Shewhart of Bell Labs. Shewhart was an
American physicist, engineer, and
statistician who was one of the originators of statistical quality
control, which preceded the
total quality movement.
Consider This
In your life, what example do you have of action research? How
have you employed plan, do,
check? What actions or adjustments were necessary?
Figure 4.2: Performance gap analysis
Use this chart to assess your own performance gap. Identify a
desired reality—perhaps running a 5K.
Next, honestly note your current reality: Can you run
around the block? Run or walk for a
mile? Once you determine the gap, fill out the middle column
with specific action steps to move closer
to your goal—how will you close the gap? To download an
interactive version of this figure, visit
your e-book.
Now that you have applied the gap analysis to yourself, let’s
think about using it in an orga-
nization setting. Identify a desired reality—perhaps being first
to market with a new tech-
nology. Next, honestly note the organization’s current reality.
In the case of introducing the
technology: Does it have the right people to do the work? Is the
technology ready for market?
Is the marketing campaign ready to go? Once you determine the
gap, fill out the middle col-
umn with specific action steps to move the organization closer
to its goal—how will you close
the gap? What would be the desired reality in your own
organization? How equipped is it to
close the gap? What other performance gaps have you
experienced?
Benefits of the Planning Phase
Planning is a critical phase of OD, because poor plans will
result in poor outcomes such as fix-
ing the wrong problem, wasting time and resources, and
frustrating organization members.
The benefits of good planning include setting the OD process up
for success through careful
analysis and diagnosis of the problem; engaging organization
members from the beginning in
the processes of collaboration, ongoing learning, and capacity
building in the action research
process; and prioritizing issues. See Tips and Wisdom: Alan
Lakein to read and apply tips
about planning.
Tips and Wisdom: Alan Lakein
Time management guru Alan Lakein is credited with coining the
phrase “Failing to plan is plan-
ning to fail” (as cited in Johnson & Louis, 2013, para. 1). This
advice is to be heeded in OD. Plan-
ning is key to effective interventions. How does Lakein’s
quotation apply to your experience?
Levels of Analysis
Before we delve into the steps of the planning phase, we should
understand the location of
the OD effort—that is, the level at which the action research
might occur. This is known as
the level of analysis. The OD effort might focus on the
individual, group, organization, or sys-
tem. Each level comes with its own issues, needs, and
appropriate interventions. These levels,
along with appropriate interventions, were discussed in Chapter
2.
All levels of analysis, from the individual to the system, face
similar issues. Cockman, Evans,
and Reynolds (1996) categorized organization issues according
to purpose and task, struc-
ture, people, rewards, procedures, or technology:
• Purpose and task refers to identifying the reason the
organization exists and how its
members advance its mission.
• Structure pertains to reporting relationships and how formal
and informal power
relations affect the organization.
• People issues relate to relationships, leadership, training,
communication, emotions,
motivation and morale, and organization culture.
• Rewards systems include financial and nonfinancial incentives
available for perfor-
mance and perceived equity among employees.
• Procedures include decision-making processes, formal
communication channels, and
policies. This is an important category for analysis.
• Technology involves assessing whether the organization has
the necessary equip-
ment, machinery, technology, information, and transport to
accomplish its tasks.
Table 4.2 identifies questions to ask about each area of
Cockman, Evans, and Reynolds’s levels
of analysis.
Table 4.2: Cockman, Evans, and Reynolds’s organizational issues and diagnostic questions
Purpose and tasks
• What business are we in?
• What do people do?
Structure
• Who reports to whom?
• Where is the power?
People
• How are relationships managed?
• What training is provided?
• Who communicates with whom?
• How do people feel?
• How high is motivation and morale?
• What is the culture?
Rewards
• What are the incentives to perform well?
Procedures
• What are the decision-making procedures?
• What are the channels of communication?
• What are the control systems?
Technology
• Does the organization have the necessary equipment, machinery, information technology, transport, and information?
Source: From Client-Centered Consulting: Getting Your Expertise Used When You’re Not in Charge, by P. Cockman, B. Evans, & P. Reynolds, 1996, New York, NY: McGraw-Hill.
Identify a performance gap you are aware of personally or
professionally and see if you can
answer Cockman, Evans, and Reynolds’s questions.
Steps in the Planning Phase
The steps in the planning phase include identifying the problem
area, gathering data, analyz-
ing the data, sharing feedback, and planning action. These steps
illuminate the core problem
and identify key information for making an intervention.
Step 1: Preliminary Diagnosis of the Issue
When an OD process is initiated, it is imperative that the
problem be correctly defined. Doing
so involves a process of diagnosis. A consultant’s job is to push
the client to identify the
root cause of the problem, rather than its symptoms.
Considering the QuickCo example, it
might have been easy for Ned to decide to put the department
through a customer service
training based on the symptoms of late, erroneous orders. Had
he done so, however, it likely
would have worsened matters, because no amount of customer
service training would fix
the department’s interpersonal conflicts, poor communication,
and ineffective conflict reso-
lution. It may take intensive study and data collection to
accurately diagnose a problem, but
doing so is well worth it.
The action research process begins by defining a problem that
warrants attention. Consul-
tants must ask good questions to illuminate a problem’s source.
They can then move on to
the next step in the planning phase. Questions a consultant
might ask a client include the
following:
• “What do you think is causing the problem?”
• “What have you tried to fix it?”
• “How has this attempt to fix the problem worked?”
• “What has been stopping you from fully addressing this
issue?”
In addition to asking questions to pinpoint the issue, consultants
must ask questions about
who else will be involved in the OD effort. Also, as Chapter 3
explored, a consultant needs to
uncover the client’s expectations regarding the duration of the
project and make sure the cli-
ent is willing to assume an equal responsibility for outcomes.
Good questioning enhances one’s authenticity as a consultant.
How have you diagnosed
problems in your organization? Have you ever misdiagnosed an
issue? What were the
consequences?
Step 2: Gathering Data on the Issue
Once QuickCo diagnosed the team’s lack of communication and
interpersonal effectiveness
as the source of the problem, it was ready to collect information
to inform next steps. This is
known as data gathering. Data can be gathered in many ways.
The most common data col-
lection methods in action research include interviews,
questionnaires, focus groups, direct
observation, and document analysis.
Jack, the internal QuickCo consultant,
took several steps to better understand
the problem. He reviewed performance
trends and customer complaints, inter-
viewed department members, and
relied on his own working knowledge
and observations of the department to
formulate a solid understanding of the
issues. What types of data have you
gathered to better understand organiza-
tion issues? Methods of data gathering
are explored in detail in the next section
of this chapter.
Step 3: Analyzing the Data
Once data has been collected, it must be turned into something
meaningful and useful for the
client. Data collected to provide information about a problem is
not useful until it is inter-
preted in ways that inform the issue and provide clues to
possible interventions. For example,
a survey is not helpful unless it is examined within the
organization’s context. Data analysis
will be more fully defined in the data analysis methods section
later in this chapter.
Step 4: Sharing Feedback With the Client
Once data has been collected and analyzed, a feedback meeting
is scheduled in which results
are presented to the client. In the QuickCo example, Jack met
with Ned and Sarah to share
his analysis. Feedback meetings require careful planning to
keep the consultancy on track.
Consultants should decide on the key purpose and desired
outcomes for the meeting. For
example, do they want the client to better understand the
problem? Agree on a course of
action? Confront some issues affecting the problem? Sharing
feedback with the client involves
determining the focus of the feedback meeting, developing the
agenda for feedback, recogniz-
ing different types of feedback, presenting feedback effectively,
managing the consulting pres-
ence during the meeting, addressing confidentiality concerns,
and anticipating defensiveness
and resistance.
Step 5: Planning Action to Address the Issue
The last step of the planning or discovery phase is to plan the
action that will be taken. This
planning might occur during the feedback meeting, or you might
schedule a time at a later
date to give the client an opportunity to digest the data analysis
and feedback. The outcome of
the planning is to design the activity, action, or event that will
be the organization’s response
to the issue. This is known as an intervention. The type of
intervention selected depends on
the organization’s readiness and capability to change, the
cultural context, and the capabilities
of the OD consultant and internal change agent (Cummings &
Worley, 2018). The intervention
will also target strategy, technology and structure, and human
resource or human process
issues. The consultant and the client will collaboratively plan
the appropriate intervention(s)
to address the issue. Chapter 5 will address interventions in
detail.
Collecting data ensures the OD process is evidence based.
4.3 Types of Research
OD is a joint endeavor between the client and the consultant
that includes data gathering and
analysis. Involving clients in the data collection process
reinforces their commitment to the
OD process. The consultant’s role in this process is to help the
client focus on the root cause of
the problem and to organize the data collection and
interpretation. A consultant’s objectivity
can be very helpful to clients, enhancing their understanding of
how they might be contribut-
ing to the problem or how the issue plays out within the broader
organization context.
Einstein is credited with saying, “If we knew what it was we
were doing, it would not be called
research, would it?” (as cited in Albert Einstein Site, 2012,
para. 4). People conduct research
when they have questions that do not have obvious answers.
Depending on the question they
wish to answer, there are differing types of research.
Basic Research
The word research might evoke images of people working in
labs, examining petri dish cul-
tures, and making new discoveries. This type of research is
known as basic research, and it
generally creates or extends the knowledge base of a discipline
such as medicine, physics, or
chemistry through experiments that allow researchers to test
hypotheses and examine per-
plexing questions. Basic research results in new discoveries and
theories and includes inno-
vations such as testing cures for cancer, establishing scientific
laws such as gravity, or refuting
previously held beliefs such as the world being flat. There are
other types of research beyond
basic, and they vary based on the type of question being asked.
Applied Research
When people seek to answer questions such as “What is the best
way to facilitate learning
during change?” or “How do we motivate employees to embrace
new technology?” they are
usually seeking to improve practice within a certain field. This
is known as applied research
because its results are germane to problems and issues within a
particular setting such as
business. This type of research is practical and helps people
solve problems, but unlike basic
research, it does not necessarily yield new knowledge. OD is
applied research because it asks
questions about challenges that are unique to the individual
organizational context in which
they are located but does not necessarily expand our
understanding of human behavior in
organizations.
Action Research
Action research explores specific problems within a locality
such as an organization or com-
munity. It might ask questions such as “How can we prevent
employees from leaving Com-
pany A at a rate three times higher than the industry standard?”
“How can Hospital B imple-
ment an electronic health record with minimal disruption to
patient care?” or “How can we
lower poverty rates in Community C?” As the name implies and
we have already covered,
action research involves recurring cycles of study and action
regarding a problem within a
specific context. Action research is participative because it
usually involves members of the
organization.
OD generally engages in both applied research and action
research because it aims to improve
practice (applied) within a specific context (action). When you
engage in action research, you
are conducting a systematic inquiry on a particular organization
problem by methodically
collecting and analyzing data to provide evidence on which to
base your intervention. When
people do research in organizations, they are seeking not so
much to generate new knowl-
edge (or cure diseases) as to improve the quality of organization
life. Action research is there-
fore a form of applied research because it seeks to directly
address organization problems
and respond to opportunities in ways that improve the
organization for all its stakeholders.
Evaluation Research
People may also want to judge the quality of something like an
educational program, confer-
ence, or OD intervention. Here they might ask, “How was the
learned information applied?”
“What was the most effective mode of delivery of instruction?”
or “What are people doing
differently as a result of the intervention?” This type of
research is known as evaluation
research. Evaluation seeks to establish the value of programs or
interventions and judge
their usefulness. Evaluation can occur during the OD process,
especially when the process is
being evaluated before, during, or after the intervention. We
will learn more about evaluation
research in OD in Chapter 6. Refer to Table 4.3 for further
description of the different types
of research.
Table 4.3: Different types of research
Basic
• Contributes to knowledge base in field (basic, pure)
• Experimental
• Tests hypotheses
• Seeks to answer perplexing problems
Applied
• Improves practice in discipline (applied)
• Seeks to describe, interpret, or understand problems within specific settings
• Will not necessarily create new knowledge
Action
• Addresses particular, local problem (action research)
• Systematic inquiry
• Addresses specific problem within specific setting
• Often involves participants
• Focused on practical problems, social change
Evaluation
• Assesses value
• Measures worth or value of program, process, or technique
• Judges accomplishments and effectiveness
• Establishes decision-making basis
4.4 Research Methodology
In addition to the four types of research based on the types of
questions asked, research can
also be classified according to the type of methodology that is
used to collect data. Methodol-
ogy represents the overarching philosophy and approach to
collecting data.
Qualitative Research Methodology
When seeking to understand “how” a phenomenon occurs or
unfolds (“How do leaders best
develop?”) or inquire into the nature or meaning of something
(“How does participation on a
high-performing team affect individual identity and
performance?”), a qualitative methodol-
ogy is appropriate. Qualitative methodology is concerned with “understanding the meaning people have constructed” (italics in original; Merriam & Tisdell, 2015, p. 15) and has been described as “an umbrella term covering an array of interpretive techniques which seek to describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain more-or-less naturally occurring phenomena in the social world” (Van Maanen, 1979, p. 520).
Qualitative inquiry is not generally quantifiable but rather
provides convincing evidence.
Qualitative data is generated from methods such as interviews,
focus groups, or observations
that are commonly conducted as part of the discovery phase of
action research. Qualitative
methods are rooted in constructivist philosophy—the idea that
people build meaning from
experience and interpret their meanings in different ways. For
example, two people would
likely define the meaning of life differently.
Qualitative research occurs within the social setting or field of
practice, and data collection
is often referred to as “fieldwork” or being in the “field.”
Qualitative approaches can effec-
tively address organization members’ everyday concerns, help
consultants understand and
improve their practice, and inform decisions. Examples of
qualitative questions asked in OD
include “Why are employees dissatisfied with Organization Y?”
and “What specific concerns
do employees have about anticipated changes in the
organization?” Qualitative methodology
uses techniques that allow deep exploration of social
phenomena through interviews, obser-
vations, focus groups, or analysis of documents.
Qualitative Research Characteristics
Qualitative research focuses on building meaning and
understanding about social phenom-
ena. The researcher (generally the consultant in OD) is the
primary instrument for data col-
lection and analysis. This means that it is the consultant who
conducts interviews, focus
groups, or observations and then interprets or analyzes their
meaning. Interpretation is con-
sidered an inductive process—that is, meaning is inferred from
the data through a process
of comparison, reflection, and theme building. Unlike
quantitative methodology, where study
participants are often selected at random, qualitative
participants are selected purposefully
and are individuals who can provide informed accounts of the
topic under study. For example,
if a consultant wants to know about the experiences of new
employees, he or she obviously
needs to ask new employees.
Qualitative Analysis and Results
Qualitative analysis provides a detailed account of the
phenomenon. Direct quotations from
participants and a full depiction of the setting, issue, or
individuals under study are known as
rich description. The design of a qualitative study is emergent
and flexible, meaning that the
questions may change as new insights are gained. For example,
if Sarah is conducting focus
groups on issues faced by new employees, a topic may arise that
she wants to query future
groups about as she collects data.
Quantitative Research Methodology
When people want to know “how much” or “how many” of
something, they generally seek a
quantitative methodology. For example, a researcher might ask,
“What are the percentage
breakdowns of employee satisfaction in Organization Y, from
very dissatisfied to very satis-
fied?” or “What is our organization’s productivity rate
compared with the industry standard?”
Quantitative methods assume there is one correct answer to a
question. This type of research
yields statistical descriptions and predictions of the topics under
study.
Recall from earlier coverage in this book the process of survey
feedback, in which employees
are given a questionnaire about the organization’s management,
culture, or atmosphere. Sur-
veys are regularly used in OD to assess issues such as attitudes,
individual performance, and
technology needs or to evaluate certain functions or products.
Surveys provide quantifiable
data, such as what percentage of employees feel management is
doing a good job or what
percentage of employees plan to look for other work in the
coming year.
Quantitative Research Characteristics
Quantitative techniques include surveys, questionnaires, and
experiments that may involve
testing with control groups. For example, Team A might be
trained on effective team dynam-
ics and facilitation procedures. Its productivity and performance
might then be measured
against Team B, which received no prior training. Quantitative
studies are carefully designed,
and once data collection begins, they are not changed. For
example, if Jonas were administer-
ing a survey to a population, he would not change the questions
halfway through data collec-
tion. Samples in a quantitative study are random and large. A
corporation of 40,000 employ-
ees being surveyed on their opinions about health benefits
would target a smaller number
of randomly selected workers to provide a representation of
what the majority of workers
would likely prefer.
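For readers who like to see the mechanics, here is a minimal Python sketch of drawing a simple random sample from a large employee population. The employee IDs and the sample size of 400 are invented for illustration; they are not part of the scenario above, and a real study would set the sample size based on the precision required.

import random

# Hypothetical population of 40,000 employee IDs (illustrative only).
population = [f"EMP{i:05d}" for i in range(40_000)]

# Draw a simple random sample of 400 employees to receive the survey.
# random.sample selects without replacement, so no employee is chosen twice.
random.seed(42)  # fixed seed so the illustration is reproducible
survey_recipients = random.sample(population, k=400)

print(f"Sampled {len(survey_recipients)} of {len(population)} employees")
print(survey_recipients[:5])  # preview a few sampled IDs

Because every employee has an equal chance of selection, the sample's responses can reasonably stand in for the preferences of the whole workforce.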
Quantitative Analysis and Results
Quantitative data is analyzed using a deductive process in which
the numbers or statistics
are used to build an understanding of what is being
studied. Assuming a benefits
survey was conducted in the previous example, the organization
might learn that 60% of
employees prefer managed care, 40% want vision, and only 30%
want dental insurance. The
company would use this information to modify its benefits
packages.
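If you are curious how such percentages are tallied, the short Python sketch below counts multi-select benefit preferences across a handful of invented responses. The data and the benefit names are made up purely to illustrate the deductive, numerical character of quantitative analysis; they do not reproduce the figures quoted above.

from collections import Counter

# Hypothetical multi-select survey responses: each respondent lists the
# benefits he or she wants (illustrative data only).
responses = [
    {"managed care", "vision"},
    {"managed care"},
    {"managed care", "dental"},
    {"vision"},
    {"managed care", "vision", "dental"},
]

# Count how many respondents selected each benefit.
counts = Counter(benefit for answer in responses for benefit in answer)

# Convert the counts into the percentage of respondents selecting each benefit.
total = len(responses)
for benefit, count in counts.most_common():
    print(f"{benefit}: {count / total:.0%}")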
Table 4.4 compares and contrasts qualitative and quantitative
methods.
Table 4.4: Comparison of qualitative and quantitative research methods
• Research focus. Qualitative: quality (nature, essence). Quantitative: quantity (how much, how many).
• Philosophical roots. Qualitative: phenomenology, symbolic interactionism, constructivism. Quantitative: positivism, logical empiricism, realism.
• Associated phrases. Qualitative: fieldwork, ethnographic, naturalistic, grounded, constructivist. Quantitative: experimental, empirical, statistical.
• Goal of investigation. Qualitative: understanding, description, discovery, meaning, hypothesis generating. Quantitative: prediction, control, description, confirmation, hypothesis testing.
• Design. Qualitative: flexible, evolving, emergent. Quantitative: predetermined, structured.
• Sample. Qualitative: small, nonrandom, purposeful, theoretical. Quantitative: large, random, representative.
• Data collection. Qualitative: researcher as primary instrument, interviews, observation, documents. Quantitative: inanimate instruments (scales, tests, surveys, questionnaires, computers).
• Analysis. Qualitative: inductive, constant comparative method. Quantitative: deductive, statistical.
• Findings. Qualitative: comprehensive, holistic, richly descriptive. Quantitative: precise, numerical.
4.5 Research Methods
Research methods are procedures used to collect data. They are
based on the type of research
methodology used. Methods typically used in OD are profiled in
this section.
Interviews
A conversation facilitated by the consultant for the purpose of
soliciting a participant’s opin-
ions, observations, and beliefs is an interview. Interviews give
participants the opportunity
to explain their experience, record their views and perspectives,
and legitimize their under-
standings of the phenomenon under study (Stringer, 2013). The
interviews at QuickCo likely
asked employees about departmental problems, communication,
leadership, and so forth.
Conducting interviews requires constructing questions that best
address the issues under
investigation. For example, Jack might have asked the QuickCo
shipping employees these
questions:
• “What do you see as the top three challenges in the shipping
department?”
• “Can you tell me about a specific event that contributed to the
problems you face
today?”
• “What has to change for you to be happy here?”
• “What have you tried to resolve the problem?”
• “What role have you played in the shipping department?”
• “How likely are you to leave your position in the next year?”
Recording interviews can be useful, but make sure you have
permission from the participant
(interviewee) and prepare and test the recording equipment in
advance. If you are not able
to record, you will want to take notes, but this is not ideal
because it distracts you from what
the interviewee is sharing.
Interviews have several strengths. They provide in-depth insight
into an interviewee’s opin-
ions, attitudes, thoughts, preferences, and experiences.
Interviews allow the interviewer to
probe and pose follow-up questions. Interviews can be done
rapidly, particularly by tele-
phone and email, and they tend to elicit high response rates.
Interviews also have several weaknesses, including that they
can be costly and time consum-
ing, especially when done in person. Interviewees may answer
in ways they think will please
the interviewer rather than tell the truth. The quality of the
interview is dependent on an
interviewer’s skill and ability to avoid bias and ask good
questions. To avoid bias, an inter-
viewer should set aside expectations about the problem and
solutions and truly listen to what
the participants say during data collection. Interviewees may
lack self-awareness or forget
important information and thus fail to provide good data. They
may also have confidentiality
and trust concerns. Data analysis can also be time consuming.
Questionnaires
A questionnaire is an electronic or paper form that has a
standardized set of questions
intended to assess opinions, observations, and beliefs about a
specific topic, such as employee
satisfaction. It is a quantitative method. Questionnaires are also
known as surveys, and one of
OD’s first interventions was survey research, as was discussed
in Chapter 1. Questionnaires
measure attitudes and other content from research participants.
The results can be quanti-
fied, often to show statistical significance of the responses.
Questionnaires are commonly administered to employees to
inquire about the organiza-
tion’s culture and climate and their satisfaction levels with their
work, management, and
relationships. Participants are usually asked to rate the
questionnaire items using a Likert
scale (described in Chapter 1). For example, they might rate an
item such as “Management is
concerned with my welfare” on a 5-point scale from “Strongly
Disagree” to “Strongly Agree.”
Questionnaires should feature clearly written questions that will
yield actionable information.
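As a rough illustration of how Likert-scale responses become numbers, the Python sketch below computes the mean rating and the response distribution for a single item. The ratings are invented, and in practice this kind of tabulation is usually handled by survey software or a statistical package rather than hand-written code.

from collections import Counter
from statistics import mean

# 5-point Likert scale for the item "Management is concerned with my welfare."
SCALE = {1: "Strongly Disagree", 2: "Disagree", 3: "Neutral",
         4: "Agree", 5: "Strongly Agree"}

# Hypothetical ratings collected from employees (illustrative data only).
ratings = [4, 5, 2, 3, 4, 4, 1, 5, 3, 4]

# The average rating gives a quick summary of sentiment on the item.
print(f"Mean rating: {mean(ratings):.2f}")

# The full distribution shows how responses spread across the scale.
distribution = Counter(ratings)
for value in sorted(SCALE):
    share = distribution.get(value, 0) / len(ratings)
    print(f"{SCALE[value]:>17}: {share:.0%}")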
Questionnaires and surveys have several
benefits. They are inexpensive to adminis-
ter, especially if done electronically or in
groups. Software programs make surveys
relatively easy to develop and distribute.
Questionnaires provide insights into partic-
ipants’ opinions, thoughts, and preferences.
They allow rapid data collection and are
generally trusted for confidentiality and
anonymity. Questionnaires are reliable and
valid when well constructed and permit
open-ended data to be collected, as well as
exact responses to direct questions.
Questionnaires and surveys also pose some
challenges. They should be kept short or
participants may not complete them. Participants may answer in
ways they think please you
instead of telling the truth. They may not respond to certain
items at all, especially if the
wording is unclear. Participants may not trust confidentiality or
may feel that the survey is
tedious; thus, the response rate may be low. Finally, data
analysis can be time consuming for
open-ended items.
Surveys and questionnaires are common data collection methods used in OD.
Focus Groups
A group of approximately eight to 12 participants assembled to
answer questions about a
certain topic is known as a focus group. Focus groups are
similar to interviews, but they
are conducted collectively and facilitated by a moderator.
Developing targeted questions is
important, as is inviting the right people who possess insight
and experience relevant to the
problem. Focus group sessions should be recorded and
transcribed verbatim, with partici-
pants’ permission.
Focus groups are beneficial for understanding participants’
thinking and perspectives, as well
as for exploring new ideas and concepts. Participants can
generate new knowledge and ideas,
especially if they build off each other’s remarks. Focus groups
might also yield in-depth infor-
mation about problems or potential fixes. They can offer insight
into the client organization’s
relationships and communications and may provide an
opportunity to probe relationship
issues. Focus groups are relatively easy to organize and
represent an efficient way to collect
data from several stakeholders simultaneously.
Focus groups also pose challenges. They might be expensive to
conduct if participants are
brought in from multiple locations. Finding a skilled facilitator
can be difficult. Participants
may be suspicious of the process and have confidentiality
concerns. Participants may also be
overbearing, negative, or dominant during the session, so adroit
facilitation is needed. If
employees are angry or worried, their emotions can dominate.
Focus groups can also gener-
ate voluminous findings that may not be generalizable if the
participants are not representa-
tive of the organization or that may not be relevant to the issue
under investigation. Finally,
large amounts of data may be time consuming to analyze.
Consultants should hone their focus
group facilitation skills, and resources for building this
competency are listed at the end of
this chapter.
Direct Observation
Suppose Nina watches people, meetings, events, work
processes, or day-to-day activity in the
organization setting and records what she sees. Nina is
undertaking direct observation. This
data collection method involves recording observations in the
form of field notes. Stringer
(2013) listed typical observations made in action research:
• Places: the contexts where people work, live, and socialize,
including the physical
layout
• People: the personalities, roles, formal positions, and
relationships experienced by
participants
• Objects: the artifacts in our contexts such as buildings,
furniture, equipment, and
materials
• Acts: the actions people take (signing a form, asking a
question)
• Activities: a set of related acts (e.g., facilitating a meeting)
• Events: a set of related activities (e.g., putting on a training
session)
• Purposes: what people are trying to accomplish
• Time: times, frequency, duration, and sequencing of events
and activities
• Feelings: emotional orientations and responses to people,
events, activities, and
so forth
Direct observation has several benefits. It allows direct insight
into what people are doing,
avoiding the need to rely on what they say they do. Observation
offers firsthand experience,
especially if the observer participates in activities he or she
observes. This is known as par-
ticipant observation, and it is just as useful for observing what
happens as for what does not
(for example, a manager may tell you she involves employees in
decision making, but you may
observe her doing the opposite). An observation might yield
valuable details that offer insight
into the organization’s context and politics that organization
members may miss. Observa-
tional data may also provide a springboard from which to raise
issues that people would
otherwise be unwilling to talk about.
Direct observation also poses challenges. It may be impossible
to determine a rationale for
observed behavior. If people know they are being observed,
they may alter their behavior.
Observations may be clouded by personal bias and selective
perception. One must avoid over-
identifying with the studied group so that observations are
objective (this is especially chal-
lenging in the case of participant observation). Doing
observation can be time consuming,
and access may sometimes be limited, depending on the type of
organization. A consultant
may have to sort through observations that seem meaningless in
relation to the problem. Data
analysis can also be time consuming.
See Tips and Wisdom: Effective Observation to read advice
about undertaking productive
observation.
Tips and Wisdom: Effective Observation
Doing effective observations requires planning, skill,
and focus. Here are some tips to make your observa-
tions more robust:
1. Determine the purpose of the observation.
Observations can be used to understand
a wide range of activities in organizations
such as how employees respond to a new
office layout, how customers engage with
employees, how supervisors relate to their
subordinates, or how certain procedures are
executed. You should be able to state in one
sentence the focus of your observation: The
purpose of this observation is to document
personal safety equipment usage
[specify time, procedure, or location]. Or per-
haps you are more interested in the nature of
interaction: The purpose of this observation is
to understand what types of questions medical professionals ask
during a clinic. Specific-
ity saves the consultant from capturing a lot of extraneous data.
In the first example,
you might note the frequency and types of personal safety
equipment used, and the
conditions when it is not. In the second example, you might be interested in who is asking the questions, assumptions made about the cases, or what emotions are expressed. Clarity about purpose increases the likelihood of seeing what you are seeking.
2. Determine what is relevant to the observation. If you are observing participation and team dynamics during a meeting, what occurs during an outside interruption of the meeting is probably irrelevant to what is going on in the team.
3. Decide how to document the observation. Your choices include videotaping, audiotaping, photography, and notetaking. There is not a perfect method. Technology-assisted video or audio recording might subdue participants who feel self-conscious about the information they are sharing or are fearful of reprisals. Notes can miss key information and quickly lose their meaning. Notetaking can be assisted by creating a shorthand for participants (e.g., “ee” for “employee” and “mgr” for “manager”). Practice taking notes to build skill. Use more than one notetaker and then compare findings. Finally, create a checklist for the observation to make it easy to record items, such as a list of behaviors during meetings (interruptions, new ideas, constructive criticism, building on ideas, etc.).
4. Avoid interpreting what is observed and instead report it directly. So, if you were observing personal safety equipment usage, you might say “Person A did not wear safety glasses” instead of “Person A appeared to be distracted and hurried and forgot to put on safety glasses.” You will likely have to pair observations with interviews or focus groups to understand intentions behind behaviors and interactions you witness.
Document Analysis
Document analysis involves reviewing relevant records, texts, brochures, or websites to gain insight into organization functioning, problem solving, politics, culture, or other issues. Documents might include memoranda, meeting minutes, records, reports, policies, procedures, bylaws, plans, evaluation reports, press accounts, public relations materials, vision statements, newsletters, and websites. Most organizations have a prolific amount of documentation, so using this type of data requires a clear focus and purpose. For example, Jack, our QuickCo consultant, reviewed performance trends and customer complaints to better understand the shipping department’s problems. If Jack were trying to help an executive improve communication skills, he might review his client’s email correspondence to determine how effectively and respectfully the executive communicates. This type of data collection can significantly inform the OD initiative.
Documents provide several advantages, including access to historical data on people, groups, and the organization, as well as insight into what people think and do. Document analysis is an unobtrusive data collection method, which minimizes negative reactions. Certain documents might also prove useful for corroborating other data collected; for example, Jack could compare the executive’s email communications with colleagues’ accounts collected through interviews.
On the other hand, documents may provide little insight into participants’ thinking or behavior or may not apply to general populations. They can also be unwieldy and overwhelming in the action research process. Confidential documents may sometimes be difficult to access.
Additional Data Sources
Although interviews, questionnaires, focus groups, direct
observation, and document analy-
sis are the most commonly used OD data sources, other sources
of information include the
following:
• Tests and simulations: Structured situations to assess an
individual’s knowledge or
proficiency to perform a task or behavior. For example, some
organizations might
use an inbox activity to assess delegation skills during a hiring
process. Others use
psychological tests to measure ethics, personality preferences,
or behaviors. These
instruments can be used in hiring, team development,
management development,
conflict resolution, and other activities.
• Product reviews: Reviews of products or services from
internal or external sources.
These can be useful for addressing quality or market issues.
• Performance reviews: Formal records of employee
performance. These can be par-
ticularly useful for individual interventions that are
developmental or for succession
planning on an organization level.
• Competitor information and benchmarking: Comparative
analyses of what competi-
tors are doing regarding the issue under exploration. Examples
might include salary,
market, or product comparisons.
• Environmental scanning: Analysis of political, economic,
social, and technological
events and trends that influence the organization now or in the
future.
• Critical incidents: Interviews that ask participants to identify a
specific task or
experience and pinpoint when it went well, when it did not go
well, and what they
learned. Critical incidents were first used in military pilot
training to identify and
eliminate mistakes.
4.6 Methods of Analyzing the Data
The most common types of research in OD
are survey research using quantitative
methods and qualitative inquiry that could
employ interviews, focus groups, observa-
tion, document analysis, or a combination
thereof. As you recall, quantitative methods
are used to determine “how much,” while
qualitative methods are used to determine
“how.” We have already identified the many
methods for collecting data; now, what do
you do with it?
Data points are simply bits of information
until they are assimilated in ways that tell
a story or provide deeper understanding of
a phenomenon. For instance, employee responses on a survey
about job satisfaction are just
numbers on a page until interpreted. Once you know that 35% of
respondents are only mod-
erately satisfied and are clustered within a certain division or
job classification, then you can
begin to understand the scope of the problem and consider
interventions.
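A small Python sketch shows the kind of cross-tabulation that reveals such clustering. The division names, responses, and resulting percentages are fabricated for illustration only; the point is simply that grouping raw responses by an organizational attribute turns loose numbers into an interpretable pattern.

from collections import defaultdict

# Hypothetical survey records: (division, satisfaction level) pairs.
records = [
    ("Shipping", "moderately satisfied"),
    ("Shipping", "moderately satisfied"),
    ("Shipping", "very satisfied"),
    ("Finance", "very satisfied"),
    ("Finance", "moderately satisfied"),
    ("Sales", "very satisfied"),
]

# Tally satisfaction levels within each division to see where low
# satisfaction clusters.
by_division = defaultdict(lambda: defaultdict(int))
for division, level in records:
    by_division[division][level] += 1

for division, levels in by_division.items():
    total = sum(levels.values())
    moderate = levels.get("moderately satisfied", 0)
    print(f"{division}: {moderate / total:.0%} only moderately satisfied (n={total})")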
A consultant’s job is to make sense of data and present it to the
client. Such a presentation
should be in plain language and in quantities that the client can
easily manage. It is advis-
able to involve the client and other relevant organization
members in the presentation of the
analysis, because doing so promotes buy-in, collaboration, and
accurate data interpretation.
There are several steps to analyzing data effectively. These
steps differ depending on whether
you are doing qualitative or quantitative analysis. It is beyond
the scope of this book to fully
train you as a researcher, so it is a good idea to gain additional
training and experience in this
area if it interests you. Until you gain experience with data
analysis, it is recommended that
you partner with someone who is an expert. If you have access
to a university or other orga-
nizational research team, this can be an easy way of both
finding a research expert and devel-
oping a research partnership. Such relationships help bridge
theory and practice and can be
great opportunities to enhance your learning. There are also
some suggestions for continued
learning listed at the end of this chapter. Case Study: Data
Collection and Analysis at Jolt Trans-
formers offers an example of how to ensure effective data
analysis.
Case Study: Data Collection and Analysis at Jolt Transformers
Jo Lee of Design Solutions Consulting receives a phone call from Rex James of Jolt Transformers. “Rex, what can I do for you?” asks Jo, who has done work
ers. “Rex, what can I do for you?” asks Jo, who has done work
for Jolt in the past. “Jo, we’ve
got a problem with our technicians,” Rex replies. “We can’t
keep them. We hire them and train
them, and then they go work for the competition for more
money. Then the cycle repeats and it
seems we wind up hiring folks back again until they can jump
ship for more cash. Our manage-
ment team thinks they need more training.”
“What makes you think that, Rex?” Jo is skeptical that training
is the solution in this case. She
listens a bit longer and sets up a time to meet with Rex and his
division CEO. During the meet-
ing, Jo asks several questions about the extent of the problem
and what steps have been taken
to address it. The three agree that the first step is to collect
more data to understand the scope
of the problem. They decide on a three-pronged approach: a
survey of technicians, interviews
with key executives, and focus groups with selected technicians.
These methods will provide
both quantitative and qualitative data.
Over the coming weeks, Jo and Rex work on developing a
survey with a small team that
includes technician supervisors, technicians, and human
resource personnel. They administer
the survey to each of the company’s 75 technicians. The survey
results show that 70% are dis-
satisfied with their careers at Jolt and 62% are planning to apply
elsewhere in the next year.
Jo and Rex also develop interview questions for the executives
and a format and questions for
the technician focus groups.
During the interviews, it becomes clear to Jo that the executives
believe the problem is that the
company lacks a training institute for technicians. A couple of
executives want her to design a
curriculum to train the technicians more effectively. Jo is highly
skeptical of this assumption,
however, because it runs counter to what she is learning from
the technicians. Other executives
express concern that the company is not investing appropriately
in the development and
retention of its work force. Jo thinks they might be on to
something.
During the focus groups with technicians, Jo hears comments
such as these:
“There is no clear career path at Jolt. The only way to progress
is to go elsewhere.”
“The company doesn’t seem interested in us at all. They just
want us to produce—the
faster, the better.”
“The competing companies provide a much better orientation
program and connect
you with a mentor to help you develop your skills.”
“It’s a mystery what you have to do to get promoted around
here. Instead of moving up,
you might as well just plan to move out.”
During the weeks that Jo collects and analyzes data, she
undertakes several measures to pro-
mote a thorough, effective analysis. Each is discussed as a tenet
of effective analysis related to
the case.
Design a systematic approach; keep a data log. Jo works with a
team from Jolt to design a
process for collecting quantitative and qualitative data. As the
data collection process unfolds,
Jo keeps a detailed log of the steps taken, especially for the
interviews and focus groups. These
notes allow her to tweak the interview and focus group
questions based on what she learns.
When you use data logs, you can keep them in the form of a
journal or official memoranda that
highlight key steps, decisions, and emerging themes. These logs
might include visual images of
what you are learning, such as models, system diagrams, or
pictures. Write notes to yourself
as you analyze. Thoroughly documenting your procedures is
good practice and should allow
another person to step in and repeat your data collection and
analysis procedures.
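One lightweight way to keep such a log, if you are comfortable with a scripting language, is to append timestamped entries to a file. The Python sketch below is only an illustration: the file name, the fields, and the example entry are invented and are not taken from the Jolt case; a handwritten journal or a shared memo document serves the same purpose.

import json
from datetime import datetime, timezone

LOG_FILE = "analysis_log.jsonl"  # hypothetical file name

def log_step(step, decision, emerging_theme=None):
    """Append one timestamped entry describing a data-collection step,
    the decision made, and any theme that is starting to emerge."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "step": step,
        "decision": decision,
        "emerging_theme": emerging_theme,
    }
    with open(LOG_FILE, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example entry mirroring the kind of note Jo might keep.
log_step("Technician focus group 2",
         "Added a follow-up question about career paths",
         emerging_theme="no visible path to promotion")

Because each entry records the step, the decision, and the date, another analyst could retrace and repeat the procedure, which is exactly the standard of documentation described above.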
Allow data to influence what is learned. Jo listens and watches
carefully as she collects
data. Her attention to detail offers her new insights into
prevailing assumptions at play in
the organization. She is able to add questions to the interviews
and focus groups that push
participants to think more broadly about the problem. For
example, she pushes executives
to provide evidence that a training institute would result in
better retention of employees.
When the executives find they cannot provide clear answers,
they reflect more deeply on the
problem. Jo is also able to probe more around the lack of
development and retention activities
going on in the organization.
Constantly compare qualitative data. Constant comparison
earned its name because it
involves a repetitive process of comparing themes that appear in
the data until the researcher
arrives at a cogent list that satisfactorily explains the
phenomenon. This involves careful study,
note making, and looking for patterns in the data. Having more
than one set of eyes coding the
data and generating themes helps verify the analysis.
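The interpretive work of reading excerpts and assigning codes is done by people, but once two analysts have coded the same excerpts, a few lines of Python can tally theme frequencies and a simple agreement rate, which is one way to put “more than one set of eyes” into practice. The codes and excerpt counts below are invented for illustration and do not come from the Jolt data.

from collections import Counter

# Hypothetical codes two analysts assigned to the same focus-group excerpts.
coder_a = ["career path", "lack of support", "career path", "pay", "career path"]
coder_b = ["career path", "lack of support", "pay", "pay", "career path"]

# Compare how often each theme appears for each coder; large differences
# flag excerpts the coders should revisit together.
for name, codes in (("Coder A", coder_a), ("Coder B", coder_b)):
    print(name, dict(Counter(codes)))

# Simple agreement rate across the paired excerpts.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Agreement: {agreement:.0%}")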
In the case of Jolt, two themes were found from the data
analysis:
1. Technicians were dissatisfied with
a. the lack of a career path and
b. the level of support for advancement and growth.
2. Jolt was lacking
a. long-term strategies for recruitment, retention, and
development of technicians and
  • 11. Action Research: The Checking Phase (Chapter 6) Learning Outcomes After reading this chapter, you should be able to: • Describe evaluation according to how it is defined and what steps encompass it. • Identify the types and categories of evaluation. • Examine different frameworks of evaluation. • Determine how to plan and perform an evaluation. • Explore strategies for concluding the action research process, including terminating the consultant–client relationship or recycling the intervention. In Chapter 5, we learned about the Public Health Leadership Academy, which was founded by a major university using funds from a federal grant to promote leadership development among public health employees in a southern state. The project involved developing a Leadership Academy for mid-level managers who exhibited potential to advance to higher levels of public health
  • 12. leadership in the state. The intervention was in response to a long-term need based on previous analyses of the state’s public health agency, including succession planning. This need had existed for many years because there were not enough public funds available to provide a comprehensive program. The grant finally created the opportunity to deliver this much-needed program. James (the client) worked with Leah (the external consultant) to plan and implement the program. James and Leah engaged in action research to collect and analyze data about the needs of the target population (mid-level public health managers) using interviews and surveys to determine the content of the courses that would be offered in the Leadership Academy. The project had a 2-year implementation timeline, with year 1 focused on planning and year 2 devoted to implementation. Evaluation would be ongoing and continue past year 2 with a new cohort starting in year 3, staffed by internal consultants. During the year 1 planning phase, James and Leah were very involved in collecting data to inform the content and process of the Leadership Academy. They continually stopped to reflect on their decisions, plans, and processes and made adjustments to each as the project unfolded. They also piloted the first session among a small group of advisors to the Leadership Academy to make sure their design would resonate with the participants. They made more changes following the pilot to improve the program. During year 2, 25 managers chosen for the
  • 13. academy participated in monthly leader- ship development experiences and semi- nars. The Leadership Academy began in September with these 25 managers, who had been competitively selected from across the state. The participants convened at a resort, and the program was kicked off by high-level state public health officials. The first session lasted 3 days, during which time the participants received the results of a leadership styles inventory, listened to innovative lectures and panels on leader- ship, planned an individual leadership proj- ect in their districts, and engaged with each other to develop working relationships. The academy continued meeting monthly for a year and focused on a range of topics related to leadership that were prioritized based on prior data col- lection. The grant provided for an evaluator, so data was collected at each meeting. The first 2 years of the project involved ongoing assessment of the academy’s plans and imple- mentation, followed by appropriate adjustments. James and Leah included cycles of assessment and adjustment as a regular part of their agenda and conversation. The evaluator observed all of the sessions and sent out formal evaluations after each monthly session. During the sessions, facilitators regularly asked participants to provide feedback. For example, they were asked to respond to questions like, “How did that exercise work for you?” PeopleImages/E+/Getty Images Plus
  • 14. The Leadership Academy is off to a lively start. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.1Defining Evaluation in Action Research “How are you looking at this now?” and “How could we do this better?” The evaluation data con- tributed to changes to the planned curriculum and program activities. For example, the partici- pants took an inventory to assess leadership style and wanted to spend more time on the topic, so the next month’s agenda was adjusted to accommodate the request. Participants complained that the sequencing of topics was not logical, so the agenda for the second cohort to follow in year 3 of the project was adjusted. The first cohort graduated at its final session, during which the cohort welcomed the mem- bers of the new cohort. Leah had worked with an internal team of consultants throughout the implementation, and the team was ready to take over the facilitation with the second cohort. Following the event, Leah met with James and the new team to tie up loose ends and make the transition. She met periodically with James during the third year to ensure that the Leadership Academy was running smoothly. As this vignette illustrates, although checking is the third phase of the action research process, it takes place during the planning and doing phases as well.
  • 15. This chapter focuses on checking, which is a data-based evaluation to assess whether an intervention had the intended result. 6.1 Defining Evaluation in Action Research The model of action research used in this book has three phases: planning, doing, and check- ing. See Table 5.1 in Chapter 5 for a review of each phase. The final phase of action research, checking, involves three steps. First, the consultant and client gather data about the key changes and learning that have occurred. This step is known as assessing changes. Next, the consultant uses this data to assess if the intended change occurred. Was the change implementation effective? Were the proposed outcomes met? As a result of this assessment, the consultant adjusts the intervention accordingly. This step is known as adjusting processes. The third step is to terminate the OD process or repeat it to correct or expand the intervention (known as recycling). Assessment, adjustment, and terminating or recycling are collectively known as evaluation of the action research process. Purposes of Evaluation The overall purpose of conducting an eval- uation is to make data-based decisions about the quality, appropriateness, and effectiveness of OD interventions. Evalua- tion helps us determine whether an inter- vention’s intended outcomes were real- ized and assess and adjust the intervention as needed. Evaluation helps ensure accountability and knowledge generation
  • 16. from the intervention. An evaluation creates criteria of merit, constructs standards, measures performance, compares performance to standards, and synthesizes and integrates data into a judgment of merit or worth (Fournier, 1995). [Photo caption: Evaluation makes judgments about the effectiveness and impact of OD interventions through the analysis of data such as “employee satisfaction” surveys.] Evaluation findings help render judgments, facilitate improvements, or generate knowledge (Patton, 1997). Evaluations used to render judgments focus on accountability for outcomes such as holding management responsible for making changes in leadership. Improvements concentrate on developmental processes such as creating new learning and growth. Knowledge generation emphasizes academic contributions such as new insights that may change a process. Establishing a Benchmark To illustrate how evaluation helps OD consultants assess and adjust an intervention, let us consider an organization that has conducted survey research to
  • 17. assess employee satisfaction. The first year creates a benchmark (when an organization compares its business processes, practices, and performance standards to those of other organizations that are considered best in class) that can be used in future evaluations. Further, let us imagine that employee satisfac- tion is at a moderately satisfied level the first time it is measured. When the survey research instrument on employee satisfaction is replicated in future years, the level of satisfaction will be compared with the original baseline to evaluate whether the organization is doing worse, the same, or better than it had originally. The evaluation can help the organization identify key changes and learning that occurred as a result of the intervention. Then the organization can adjust practices accordingly. The American Productivity and Quality Center developed a benchmarking definition repre- senting consensus among 100 U.S. companies: Benchmarking is a systematic and continuous measurement process; a pro- cess of continuously measuring and comparing an organization’s business process against business process leaders anywhere in the world to gain infor- mation, which will help the organization take action to improve its perfor- mance. (as cited in Simpson, Kondouli, & Wai, 1999, p. 718) See Who Invented That? Benchmarking to read about the origins of benchmarking.
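To make the baseline idea concrete, here is a minimal Python sketch of the year-over-year comparison described above. It is an illustration only, not part of the original text; the 5-point scale, the years, and the satisfaction figures are invented assumptions.

# Hypothetical employee-satisfaction averages on a 5-point scale.
# The first measured year serves as the benchmark (baseline).
scores = {2021: 3.4, 2022: 3.3, 2023: 3.7}  # illustrative data only

baseline_year = min(scores)
baseline = scores[baseline_year]

for year in sorted(scores):
    score = scores[year]
    if year == baseline_year:
        print(f"{year}: {score:.1f} (baseline)")
        continue
    change = score - baseline
    verdict = "better" if change > 0 else "worse" if change < 0 else "the same"
    print(f"{year}: {score:.1f} ({change:+.1f} vs. baseline, doing {verdict})")

Replicating the same instrument each year, as the text notes, is what makes this comparison meaningful; changing the survey would undermine the baseline.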
  • 18. Benchmarking is a specific type of action research, but the process can also be applied during OD intervention evaluations. There are several types of benchmarking (Ellis, 2006): Who Invented That? Benchmarking The exact derivation of the term benchmarking is unknown. It is thought to have possibly origi- nated from using the surface of a workbench in ancient Egypt to mark dimensional measure- ments on an object. Alternatively, surveyors may have used the term to refer to the process of marking cuts into stone walls to measure the altitude of land tracts, and cobblers may have used it to describe measuring feet for shoes (Levy & Ronco, 2012). Benchmarking in U.S. business emerged in the late 1970s. Xerox is generally considered the first corporation to apply benchmarking. Robert Camp (1989), a former Xerox employee, wrote one of the earliest books on benchmarking. Camp described how U.S. businesses took their market superiority for granted and were thus unprepared when higher-quality Japanese goods disrupted U.S. markets. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.1Defining Evaluation in Action Research • Competitive: Uses performance metrics to assess how well or poorly an organization
  • 19. is performing against direct competitors, such as measuring quality defects between the companies’ products • Comparative: Focuses on how similar processes are handled by different organiza- tions, such as two organizations’ recruitment and retention activities • Collaborative: Involves sharing knowledge about particular activity between compa- nies, with the goal of learning Bogan and English (2014) described benchmarking types a bit differently, noting the activity might focus on processes (e.g., order fulfillment or billing processes, similar to comparative), performance (e.g., comparing competitive positions, similar to competitive), and strategy (e.g., identifying winning tactics across an industry, perhaps similar to collaborative). Almost any issue of interest can be benchmarked, including processes, financial results, investor per- spectives, performance, products, strategy, structure, best practices, operations, and manage- ment practices. Benchmarking could be part of the data collection process in OD, an interven- tion, or the basis of an evaluation. Table 6.1 shows typical benchmarking steps. Table 6.1: Typical benchmarking process Benchmarking step Example 1. Identify process, practice, method, or product to
  • 20. benchmark. Identifying best practices for recruiting and retaining a diverse work force 2. Identify the industries with similar processes. Finding the companies that are best at retaining a diverse work force, even those in a different industry 3. Identify organization leaders in a target area. Selecting the organizations against which to benchmark 4. Survey the selected organizations for their measures and practices. Sending a survey to the target companies asking for information on issues such as turnover and hire rates, formal retention programs (e.g., orientation, development), management training, and rewards 5. Identify best practices. Analyzing data to identify best practices to implement. Analysis depends on the type of data collected—that is, whether it is statistical (quantitative data), such as from a survey of employees on attitudes about diversity, or interpretive (qualitative data), such as from inter-
  • 21. views with employees who quit. 6. Implement new and improved practices. Implementing best practices, such as new recruitment and retention strategies, affinity groups, or rewards for managers who develop a diverse staff Other Purposes of Evaluation Caffarella (1994) and Caffarella and Daffron (2013) identified 12 specific purposes of evalu- ation data. Evaluation helps to 1. adjust the intervention as it is being made in terms of design, delivery, management, and evaluation; 2. keep employees focused on the intervention’s goals and objectives; 3. provide information to inform the continuation of the intervention; © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.1Defining Evaluation in Action Research 4. identify improvements needed to design and deliver the intervention; 5. assess the intervention’s cost-effectiveness; 6. justify resource allocations;
  • 22. 7. increase application of participants’ learning by building in strategies that help them transfer learning back to the organization; 8. provide data on the results of the intervention; 9. identify ways to improve future interventions; 10. cancel or change an intervention that is poorly designed or headed for failure; 11. explore why an intervention fails; and 12. provide intervention accountability. Moreover, during the planning phase, evaluation can help consultants assess needs and make decisions about how best to intervene. The Leadership Academy’s goal was to improve lead- ership, but James and Leah had to assess the content that would be most appropriate for lead- ership in public health. Then, when the participants were selected, they had to make further assessments to ensure the program was relevant to the participants’ particular needs. Evaluation may also help test different theories and models of addressing the problem. In the case of the Leadership Academy, James and Leah based their interventions on theories and models of leadership. They threw out what did not resonate with the participants or work well during sessions and revised the program for the second cohort. Evaluation also helps monitor how the intervention is going during implementation so it can be adjusted accordingly. Such adjustments occurred throughout the Leadership Academy
  • 23. implementation over the course of a year. Finally, evaluation helps determine whether the intervention goals were met and what impact the change had on individuals and the organization. Measuring this type of impact may require more longitudinal study than other types of evaluation. The evaluation of impact helps con- sultants decide whether to extend the intervention, change it, or abandon it altogether. The Leadership Academy will be continually reevaluated as new cohorts participate each year. Clearly, evaluations have the potential to accomplish a variety of goals. Throughout the OD process, it is critical to stay focused on an evaluation’s purpose. Have you experienced any of the evaluation activities discussed here? Steps in Evaluation Just as with action research models, so too are there many approaches to undertaking evalu- ation. That is, there are different ways to model the steps in the process. Two are discussed here. Evaluation Hierarchy Rossi, Lipsey, and Freeman (2004) offered an evaluation hierarchy that recognizes the impor- tance of engaging in evaluation from the beginning of the action research process. That is, evaluation should occur during the initial client contacts, be built into the plan for interven- tion, and be ongoing throughout the implementation, prior to the formal assessment of the
  • 24. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.1Defining Evaluation in Action Research intervention’s impact, cost, and efficiency. Doing evaluation is a matter of conducting a mini- action research project. Caffarella’s Systematic Program Evaluation Caffarella (1994) outlined the steps generally taken during an evaluation. Her steps have been modified to address key OD issues in the following points. Caffarella’s steps are intended to be sequential under ideal conditions, although reality may be quite different. Note that Caffarella has proposed a lot of steps. She has elaborated more on the steps than some other models but still follows an action research process. 1. Secure support for the evaluation from stakeholders such as the client and key management. This step should be a provision of the contract, as discussed in Chap- ter 3. It is the process of getting management to commit to the time and resources needed to evaluate the process, as well as being willing to pay attention to the findings. 2. Identify individuals who can be involved in planning and overseeing the evaluation, such as the participants, management, client, and others affected by the interven-
  • 25. tion. This is usually led by the consultant and client and would involve employees who are engaged in the implementation. It could also involve those affected by the change who did not necessarily participate in it, such as customers or suppliers. 3. Define the evaluation’s purpose and how the results will be used. This step is elaborated on in a later section of this chapter. The evaluation’s focus should be determined and then built accordingly. For example, is it aimed at improving a process or judging an outcome? Does it pertain to planning the intervention or the intervention itself ? Is it aimed at assessing adherence to budget or performance outcomes? 4. Specify what will be judged and formulate the evaluation questions. This step is driven by the evaluation’s purpose. If you decide to evaluate how satisfied employ- ees are with a new performance appraisal process, questions should relate to that change and be used to judge whether it was effective and should continue. 5. Determine who will supply the evidence, such as participants, customers, manage- ment, employees, or others affected by the intervention. 6. Specify the evaluation strategy in terms of purpose, outcomes, timeline, budget, methods, and so forth.
  • 26. 7. Identify the data collection methods and timeline. Data collection was discussed extensively in Chapter 5. The selected methods should match the evaluation’s purpose. 8. Specify the data analysis procedures and logistics (covered in Chapter 5). However, the analysis should be focused on making decisions and changes to the interven- tion, not on diagnosing the problem. 9. Define the criteria for judging the intervention. This can be somewhat subjec- tive unless the metrics are defined in advance. For example, if the intervention were aimed at improving employee retention, would a consultant measure simply whether it improved or look for a certain benchmark (such as 10%) to deem it successful? 10. Complete the evaluation, formulate recommendations, and prepare and present the evaluation report. These steps mirror the data analysis steps presented in Chapter 5 and the feedback meeting strategies in Chapter 4. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.1Defining Evaluation in Action Research 11. Respond to recommendations for changes as appropriate.
  • 27. Adapted from Planning Programs for Adult Learners: A Practical Guide for Educators, Trainers, and Staff Developers (pp. 255– 256), by R. S. Caffarella, 1994, San Francisco, CA: Jossey- Bass. © John Wiley & Sons. Table 6.2 compares the action research model used in this book to Rossi, Lipsey, and Free- man’s and Caffarella and Daffron’s evaluation steps. These models vary in terms of detail and number of steps, but they essentially follow the three phases of action research: planning, doing, and checking. Evaluation is essentially conducting research within an action research process, as shown by these three examples. Table 6.2: Comparing the action research model to evaluation models Action research model Rossi, Lipsey, and Free- man evaluation model Caffarella and Daffron evaluation model Planning 1. Assess intervention cost and efficiency. 2. Assess intervention outcome or impact. 1. Secure support for the evaluation from stakeholders. 2. Identify individuals who can be involved in
  • 28. planning and overseeing the evaluation. 3. Define the evaluation’s purpose and how the results will be used. 4. Specify what will be judged and formulate the evaluation questions. 5. Determine who will supply the evidence. 6. Specify the evaluation strategy in terms of purpose, outcomes, timeline, budget, methods, and so forth. 7. Identify the data collection methods and timeline. 8. Specify the data analysis procedures and logistics. 9. Determine the specific timeline and the budget needed to conduct the evaluation. Doing 3. Assess intervention implementation. 10. Complete the evaluation, formulate recommendations, and prepare and present the evaluation report. Checking 4. Assess intervention design and theory. 5. Assess need for the intervention. 11. Respond to recommendations for changes as appropriate. Caffarella and Daffron’s steps are comprehensive, covering the key tasks that must be com-
  • 29. pleted during an intervention’s evaluation. However, it may not always be possible to follow these clearly articulated steps; evaluation can be unpredictable and may present challenges that are often unanticipated. For example, if an implementation has been challenging, a cli- ent may balk at the evaluation out of fear of receiving negative feedback; on the other hand, employees may be reluctant to participate if trust levels are low. Thus, it helps to pay atten- tion to relevant dynamics and expect the unexpected. Evaluation provides critical information about an intervention’s impact both during and after its implementation. Thus, no matter what model is followed for performing an evaluation, it is essential to begin planning it before the intervention is well underway. A consultant’s job is © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.2Types and Categories of Evaluation to ensure that evaluation is integrated into the OD process from start to finish. Unfortunately, evaluation is often overlooked in favor of wanting simply to take action on the problem, and too many consultants consider their work finished once the intervention has occurred. In other cases, consultants go about evaluation haphazardly. If they cannot demonstrate that their action was effective, however, they risk undermining their client’s confidence in the OD
  • 30. effort, fail to permanently solve the problem, and put themselves at risk of repeating similar mistakes on future assignments. 6.2 Types and Categories of Evaluation Theorists have proposed different types and categories for evaluation. This section identifies some of these different approaches. Types of Evaluation Evaluation can be either formative or summative, depending on the intervention’s goal (Scriven, 1967, 1991a, 1991b). Scriven is considered a leader in evaluation; you can view one of his lectures by visiting the media links provided at the end of this chapter. Formative Evaluation Making changes to an implementation that is already in progress is called doing a forma- tive evaluation. Formative evaluation is concerned with improving and enhancing the OD process rather than judging its merit. The following types of questions might be asked when conducting a formative evaluation: • What are the intervention’s strengths and weaknesses? • How well are employees progressing toward desired outcomes? • Which employee groups are doing well/not so well? • What characterizes the implementation problems being experienced? • What are the intervention’s unintended consequences? • How are employees responding? What are their likes, dislikes, and desired changes? • How are the changes being perceived culturally?
  • 31. • How well is the implementation conforming to budget? • What new learning or ideas are emerging? For example, consider an intervention focused on changing reporting relationships as part of a work redesign in a manufacturing plant. A consultant might discover that some of the new arrangements do not make sense once implemented. These might therefore be modified as Consider This Think of an evaluation in which you have participated. How well did it follow the plan–do– check steps? © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.2Types and Categories of Evaluation the work redesign progresses. Asking questions pertaining to the problems, the employees’ perspectives, their likes and dislikes, and so forth yields information that helps tweak and improve the process. Formative evaluation is generally ongoing throughout the implementation. Summative Evaluation Undertaking evaluation at the end of the OD implementation, with the goal of judging whether the change had the intended out- comes and impact, is called summative evaluation. Summative evaluation is also
  • 32. known as outcome or impact evaluation because it allows the intervention’s overall effectiveness to be ascertained. A consultant can then decide whether to continue or terminate it (Patton, 1997). The following types of questions might be asked by the consultant, management, or an external evaluator when conducting a summative evaluation: • Did the intervention work? • Did the intervention satisfactorily address the performance gap? • Should the intervention be continued or expanded? • How well did the intervention stick to the budget? Summative evaluations should follow four steps: 1. Select the criteria of merit—what are the sought metrics? 2. Set standards of performance—what level of resolution is sought? 3. Measure performance—conduct the evaluation. 4. Synthesize results into a judgment of value. (Shadish, Cook, & Leviton, 1991, pp. 85–94) Adequate levels of both formative and summative evaluation must be incorporated into the OD process. Failure to conduct formative evaluation leads to missed opportunities to adjust and improve on the implementation as it is in progress. Omitting the summative evaluation means never learning the intervention’s outcomes and impact or lacking adequate data on which to base future decisions.
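As a rough, invented illustration of those four steps, the short Python sketch below treats employee retention as the criterion of merit, assumes a standard of at least a 10% relative improvement, measures performance from hypothetical before-and-after retention rates, and synthesizes the result into a judgment. None of the figures come from the chapter.

# Criterion of merit: employee retention rate (hypothetical example).
retention_before = 0.78      # assumed pre-intervention retention rate
retention_after = 0.87       # assumed post-intervention retention rate
required_improvement = 0.10  # standard of performance: 10% relative gain

# Measure performance against the standard.
relative_improvement = (retention_after - retention_before) / retention_before

# Synthesize the result into a judgment of value.
if relative_improvement >= required_improvement:
    judgment = "standard met"
else:
    judgment = "standard not met"
print(f"Relative improvement: {relative_improvement:.1%} ({judgment})")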
  • 33. Consider This What types of formative evaluation have you participated in or observed? Vgajic/E+/Getty Images Plus An OD consultant must always assess interventions to learn the outcomes and impact, which serve as a foundation for future decisions. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.2Types and Categories of Evaluation Cervero’s Evaluation Categories Cervero (1985) identified seven categories of evaluation for planners of educational pro- grams that have relevance for OD. His list has been adapted for OD interventions in terms of categories of evaluation: 1. Intervention design and implementation. This could be either formative or summa- tive, because the design and intervention are assessed for fit and impact. Imagine implementing a new performance appraisal process. Formative evaluation might involve piloting the evaluation and evaluating how well it worked for both employees and supervisors. The performance appraisal would then be modified and imple- mented. Summative evaluation in this case might examine
  • 34. whether the new perfor- mance appraisal process improved performance, satisfaction, and learning. 2. Employee participation. This type of evaluation assesses employees’ level of involve- ment in the intervention. This could also be formative or summative. In the case of performance evaluation, a consultant might examine the level of involvement and seek feedback from employees. A summative evaluation might evaluate whether the level of employee participation was adequate and whether it yielded positive outcomes. 3. Employee satisfaction. This type of evaluation assesses employees’ level of satis- faction in the intervention. This could also be formative or summative. In the case of performance evaluation, a consultant might examine the level of satisfaction with the new performance appraisal or its implementation process. A summative evaluation might evaluate how satisfied employees are once the new performance appraisal system is in place. 4. Acquisition of new knowledge, skills, and attitudes. This type of evaluation measures learning during and after the intervention and could also be formative or summa- tive. In the case of performance evaluation, a consulta nt might examine the level of involvement and seek feedback from employees. A formative evaluation during the
  • 35. pilot phase might determine that supervisors lack the skills to effectively implement the new process and give the level of feedback desired. It would allow the consul- tant and client to revise the process and provide adequate training to supervisors. A summative evaluation would assess the level of learning from the new performance appraisal system. This could take the form of employees improving their perfor- mance and supervisors showing demonstrated improvement in their ability to give feedback. 5. Application of learning after the intervention. This category is similar to the previous one, but it is summative; it judges how learning was applied after the intervention. A consultant might look for evidence of how supervisors applied what they learned about giving effective performance feedback to other interactions with employees throughout the year. 6. The impact of the intervention on individuals and the organization. This category could be formative or summative. The formative evaluation might look at how the intervention affects organization life via communication, understanding, participa- tion, and satisfaction. A summative evaluation might look at the overall impact on job satisfaction, financial performance, and retention. 7. Intervention characteristics associated with outcomes. This type of evaluation attempts
  • 36. to link aspects of the intervention to outcomes and is summative. This can be more © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.3Frameworks of Evaluation difficult to measure if the intervention was complex or had several interventions built into it. A consultant might evaluate how a participative process affected the imple- mentation’s overall success or employee satisfaction. 6.3 Frameworks of Evaluation This section profiles some common evaluation frameworks. There is no “best” framework. Rather, you should find what you are comfortable working with and what effectively fits the situation. Further, you should determine the purpose of the evaluation and use a framework that best facilitates it. Kirkpatrick’s Four-Level Framework Kirkpatrick’s (2006) four-level evaluation framework can be formative or summative and is one of the most widely known evaluation typologies. It became popular in the 1990s, although Kirkpatrick developed it as a doctoral dissertation at the University of Wisconsin in the 1950s. It was originally created to evaluate training programs, and OD consultants use it to conduct evaluation at a range of points over time. The framework classifies an interven-
  • 37. tion’s outcomes into one of four categories—reaction, learning, behavior, or results (see Table 6.3). An outcome is assigned a category based on how difficult it is to evaluate. For example, the simplest type of outcome to evaluate is participant reaction to the intervention. Thus, this is assigned level 1. Table 6.3: Kirkpatrick’s four-level evaluation framework Level Focus Examines 1 Reaction Did participants like the intervention? 2 Learning What skills and knowledge did participants gain? 3 Behavior How are participants performing differently? 4 Results How was the bottom line affected? Consider This If you have had the opportunity to evaluate organization change efforts, have you experi- enced any evaluative measures on Cervero’s list? Which ones would be most relevant for the Leadership Academy vignette? Can you think of other categories that might be added to Cervero’s list? © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.3Frameworks of Evaluation
  • 38. Level 1: Evaluation Level 1 measures participant reaction to the intervention. This type of evaluation is sometimes referred to as a “smile sheet” (see Figure 6.1 and Tips and Wisdom: Smile Sheet) because it measures only what participants thought and felt during or immediately after an intervention, and in very simple terms—whether they were satisfied, neutral, or dissatisfied. As an example, consider the Leadership Academy vignette. At this level of evaluation, the consultants might ask the academy participants questions such as “How well did you like the session?” “Was the learning environment comfortable?” “Were the facilitators capable and credible?” and “Did you feel it was time well spent?” This type of evaluation may make facilitators feel good about introducing an intervention, but it does not effectively measure change. Unfortunately, it is the most common form of evaluation employed in organizations, because it is the easiest to measure. Figure 6.1: Reaction evaluation using a smile sheet. The reaction evaluation sheet shown here is just one way to solicit feedback from an audience. [The figure shows a flip-chart page headed “How are we doing?” with columns of smiley-face symbols in which participants can mark how they feel.]
  • 39. Tips and Wisdom: Smile Sheet If you are facilitating a meeting, workshop, or seminar and want to gauge the participants’ reactions to the event, create an opportunity for them to share feedback. One way to do this is to put a flip chart page with the smiley face symbols shown in Figure 6.1 in the hall outside the session room. Give participants a dot apiece and ask them to place it in the column that best represents how they feel about the session so far. It is important to place the chart out of your eyeshot, so people feel comfortable sharing honest feedback. This offers a snapshot of participants’ reactions and allows you to make adjustments in the moment. You can also use Twitter or other social media to solicit this data.
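If the same reaction data are captured electronically rather than with dots on a flip chart, a minimal tally is enough at this level of evaluation. The Python sketch below is an invented illustration; the response labels and counts are not from the text.

from collections import Counter

# Hypothetical level 1 (reaction) responses gathered after one session.
responses = ["satisfied", "satisfied", "neutral", "dissatisfied",
             "satisfied", "neutral", "satisfied"]

counts = Counter(responses)
total = len(responses)
for rating in ("satisfied", "neutral", "dissatisfied"):
    n = counts.get(rating, 0)
    print(f"{rating:<12} {n:>3} ({n / total:.0%})")

A running tally like this supports the in-the-moment adjustments the Tips and Wisdom box describes, but, as the chapter notes, it still measures only reaction, not change.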
  • 40. Section 6.3Frameworks of Evaluation Level 2: Evaluation Level 2 measures participant learning from an intervention. This level of evaluation assesses whether the intervention helped participants improve or increased their knowledge or skills. At this level, James and Leah might ask the Leadership Academy participants, “What was the key thing you learned from this session?” or “What new skills have you acquired as a result of this experience?” This type of evaluation works best after participants have had a chance to return to their workplace and apply the principles and behaviors they learned (and thus is summative). Participants might also be interviewed or surveyed about learning during the course of the intervention (which would be formative). Level 3: Evaluation Level 3 measures changes in behavior. This level of summative evaluation assesses whether participants are using their new knowledge or skills in their job. At this level, James and Leah might ask Leadership Academy participants, their supervisors, or subordinates, “To what extent has the leader’s behavior changed and improved as a result of the Leadership Acad- emy?” or “What is the person doing differently now?” Similar to level 2, this type of evaluation is best done postintervention. It can be accomplished by interviewing, observing, or survey- ing participants and stakeholders affected by the intervention. Level 4: Evaluation
  • 41. Level 4 measures results for the organization. This level of summative evaluation measures how the intervention affected business performance or contributed to the achievement of organization goals. At this level, James and Leah might ask Leadership Academy participants, their supervisors, or subordinates, “How has the organization benefited from the Leader- ship Academy?” “To what degree has employee satisfaction, productivity, or performance improved?” “To what degree has recruitment and retention of employees improved as a result of improved leadership?” “How many promotions have occurred as a result of participating in the academy?” or “How much money has the organization saved due to better leadership decisions?” As these questions indicate, it might be difficult to actually measure and attribute changed leadership to organization results and outcomes. Kirkpatrick continued to evolve his model and even questioned whether it was a true model or just a guideline; his family members have continued developing it (Kirkpatrick & Kirkpat- rick, 2016). He also expanded his focus to consider an intervention’s cost–benefit ratio and whether it demonstrated a return on investment. Measuring these variables can also present challenges to organizations and OD consultants. Lawson’s Application of Kirkpatrick’s Framework Building on Kirkpatrick’s framework, Lawson (2016) categorized variables relating to the what, who, when, how, and why of the framework’s use. Her approach has been adapted for OD and is depicted in Table 6.4.
  • 42. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.3Frameworks of Evaluation Table 6.4: Applying Kirkpatrick’s framework to OD interventions Level What Who When How Why 1 Reaction: Did they like it? Participants During or after the intervention Smile sheet Determine level of participant satisfaction and need to revise intervention if duplicated 2 Learning: What knowledge or skills were retained? Participants and consultants
  • 43. During, before, and/ or after the intervention Pre- and post- tests, skills applications, role plays, case stud- ies, and exercises Determine whether consul- tant has been effective in imple- menting interven- tion purpose and objectives 3 Behavior: How are participants performing differently? Participants, supervisors, subordinates, and peers 3 to 6 months after intervention Surveys, inter-
  • 44. views, observa- tion, perfor- mance, and appraisal Determine extent to which partici- pants transferred their learning from the inter- vention to the workplace 4 Results: What is the impact on the bottom line? Participants and control group After comple- tion of level 3 assessment Cost–benefit anal- ysis and tracking operational data Determine whether the benefits outweigh costs and how the intervention con- tributed to orga-
  • 45. nization goals and strategy Source: Adapted from The Trainer’s Handbook (4th ed., p. 234), by K. Lawson, 2016, Hoboken, NJ: John Wiley & Sons. Critiques of Kirkpatrick’s Framework Kirkpatrick’s (1994, 2006) four levels of criteria have been dominant for decades among evaluators. With popularity, however, comes criticism. First, the model has been critiqued for being primarily focused on postintervention realities, that is, for evaluating what happens after the intervention versus incorporating more formative evaluation into the process. The four-level framework also does not help evaluators link causal relationships between outcomes and the levels of evaluation. Finally, the framework does not help evaluators determine what changes equate to the different levels of evaluation or how best to measure each level. Some authors have suggested expanding the reaction level to include assessing participants’ reaction to the intervention techniques and efficiency (Kaufman & Keller, 1994). One might also try splitting the reaction level to include measuring participants’ perceptions of enjoyment, usefulness, and the difficulty of the program (Warr & Bunce, 1995). Kaufman and Keller (1994) recommended adding a fifth level to address the societal contribution and outcomes created by the intervention, which is becoming more popular with the increased emphasis
  • 46. on corporate social responsibility and sustainability. Phillips (1996) advocated adding a fifth level that specifically addresses return on investment. Other Frameworks of Evaluation Although the Kirkpatrick model is one of the dominant evaluation models, it is not necessarily the best or most appropriate for every situation. In fact, six decades of evaluation research have not yielded a universal evaluation model (Reio, Rocco, Smith, & Chang, 2017). This section briefly profiles some lesser-known evaluation models. Hamblin’s Five-Level Model Similar to Kirkpatrick’s model, this model measures reactions, learning, job behavior, and organizational impacts, as well as a fifth level—the economic outcomes of training. The hierarchy of Hamblin’s (1974) model is more specific than Kirkpatrick’s in that reactions lead to learning, learning leads to behavior changes, and behavior changes have organizational impact. Because of this assertion, Hamblin believed that evaluation at a given level is not meaningful unless the evaluation at the previous level has been performed. It is worth noting that a major criticism of the Kirkpatrick model is his assumption that the four levels are causally linked (Reio et al., 2017). That linkage has never been demonstrated, although Hamblin
  • 47. may be trying to make that linkage. Preskill and Torres’s Evaluative Inquiry Model Preskill and Torres (1999) contributed a model of inquiry to the literature that uses the evalu- ation process as a learning and development opportunity: Evaluative inquiry is an ongoing process for investigating and understanding critical organization issues. It is an approach to learning that is fully integrated with an organization’s work practices, and as such, it engenders (a) organiza- tion members’ interest and ability in exploring critical issues using evaluation logic, (b) organization members’ involvement in evaluative processes, and (c) the personal and professional growth of individuals within the organization. (pp. 1–2) Evaluative inquiry is the fostering of relationships among organization mem- bers and the diffusion of their learning throughout the organization; it serves as a transfer-of-knowledge process. To that end, evaluative inquiry provides an avenue for individuals’ as well as the organization’s ongoing growth and development. (p. 18) Their definition emphasizes that evaluation is more than simply reporting survey findings. Rather than being event driven, such as sending a survey to participants after the interven- tion is over, evaluation should be an ongoing part of everyone’s
  • 48. job, that is, a shared learning process. Evaluative inquiry should be focused on • intervention and organizational processes as well as outcomes; • shared individual, team, and organizational learning; © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.3Frameworks of Evaluation • educating and training organizational practitioners in inquiry skills (action learning); • collaboration, cooperation, and participation; • establishing linkages between learning and performance; • searching for ways to create greater understanding of the variables that affect orga- nizational success and failure; and • using a diversity of perspectives to develop understanding about organizational issues (Preskill & Catsambas, 2006). Preskill and Torres (1999) identified four learning processes — dialogue, reflection, question- ing, and identifying and clarifying values, beliefs, assumptions, and knowledge—that facilitate three phases of evaluative inquiry: focusing the evaluative inquiry, carrying out the inquiry, and applying learning. The phases are depicted in Table 6.5.
  • 49. Table 6.5: Preskill and Torres’s evaluative inquiry phases Phase Description Strategies At each stage of inquiry, the follow- ing skills are used: 1. Dialogue 2. Reflection 3. Asking questions 4. Identifying and clarifying values, beliefs, assumptions, and knowledge Focusing the evaluative inquiry • Determine issues and concerns for evaluation • Identify stakeholders • Identify guiding questions for the evaluation • Focused dialogues • Group model building • Open space technology
  • 50. • Critical incidents • Assumption testing through questioning Carrying out the inquiry • Design and implement the evaluation (collect, analyze, interpret data) • Address evaluative questions • Develop a database for organization learning • Literature-based discussions • Working session to interpret survey results (or other data collected) • Framing findings as lessons learned Applying learning • Identify and select action alternatives • Develop and implement action plans
  • 51. • Monitor progress • Capturing concerns, issues, and action alternatives • Using technology to facilitate brainstorming • Developing an action plan • Solving implementation issues Brinkerhoff ’s Six-Stage Model This model defines evaluation as the collection of information to facilitate decision making (Brinkerhoff, 1989). Brinkerhoff advocated systemic evaluation that considers the change process holistically. It requires that consultants articulate how and why each training or devel- opment activity is supposed to work, without which comprehensive evaluation is impossible. This model helps to assess whether and how programs benefit the organization; analysis can help trace any failures to one or more of the six stages of Brinkerhoff ’s model: © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.3Frameworks of Evaluation
  • 52. 1. Goal setting (What is the need?): A need, problem, or opportunity is identified that is worth addressing and that could be favorably influenced if someone learned something. 2. Program design (What will work?): A program that teaches the needed topic is created or, if one already exists, it is located. 3. Program implementation (Is it working?): The organization successfully implements the designed program. 4. Immediate outcomes (Did they learn it?): The participants exit the program after successfully acquiring the intended skills, knowledge, or attitudes. 5. Intermediate or usage outcomes (Are they keeping and/or using it?): The participants retain and use what they learned. 6. Impacts and worth (Did it make a worthwhile difference?): The organization benefits when participants retain and use what they learned. Brinkerhoff’s model provides a platform to think about desired impact at the beginning of the change process and takes a more holistic look at the activities and impacts and how they will be communicated. Input, Process, Output, and Outcomes Evaluation This model evaluates training programs at four levels (input, process, output, and outcomes)
  • 53. in terms of their potential contribution to the overall effectiveness of a training program (Bushnell, 1990). It is similar to the systems model introduced in Chapter 2 that considers inputs, throughputs, and outputs in organization systems: 1. Inputs: trainee qualifications, instructor abilities, instructional material, facilities, and budget 2. Process: value-adding activities such as planning, designing, developing, and deliver- ing the training 3. Output: trainee reactions, knowledge and skills gained, and improved job performance 4. Outcomes: profits, customer satisfaction, and productivity This model has the following benefits: 1. It can help determine whether training programs are achieving the right purpose. 2. It can help identify the types of changes that could improve course design, content, and delivery. 3. It can help determine whether students have actually acquired knowledge and skills. Steps in the evaluation process: 1. Identify evaluation goals: Determine the overall structure of the evaluation effort and establish the parameters that influence later stages.
  • 54. 2. Develop an evaluation design and strategy: Select appropriate measures, develop a data collection strategy, match data types with experimental designs, allocate the data collection resources, and identify appropriate data sources. 3. Select and construct measurement tools: Select or construct tools that best fit the data requirements and meet criteria for reliability and validity. Examples include © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.4Planning and Performing the Evaluation questionnaires, performance assessments, tests, observation checklists, problem simulations, structured interviews, and performance records. 4. Analyze the data: Tie the results of the data-gathering effort to the evaluation’s original goals. 5. Make conclusions and recommendations and present the findings. Evaluation: A Simpler Way? Organizations are accountable for showing results, and OD is no exception. Yet, a comprehen- sive, widespread evaluation model does not exist. Paine (2014) asked if evaluation could be simplified and advocated Brinkerhoff and Mooney’s Business
  • 55. Case Success Method (2008), whereby instead of putting a lot of time and resources into conducting evaluation, the ques- tion becomes, “Did this intervention add value?” Paine recommended asking questions to this end such as the following: 1. What results did we expect? 2. What evidence is available to support or refute the results we obtained? 3. What should we explore more in depth? 4. What additional compelling evidence is available? 5. How can we communicate the results in an understandable way to the organization? The Brinkerhoff and Mooney model is based on taking a systemic approach that attempts to understand how the change was applied on the job, what the results were of the learning or change, and how the change contributed to organization goals. Paine added a final variable to the model: How did the benefits compare with the costs? 6.4 Planning and Performing the Evaluation Just as with other aspects of the action research process, evaluation requires deliberate plan- ning and buy-in from the client. To plan the evaluation, Cervero (1985) suggested identifying five key factors: 1. What is the purpose of the evaluation? 2. Who needs what information? 3. Are there any practical and ethical constraints? 4. What resources are available? 5. What are the evaluator’s values?
• 56. Consider This After reading these evaluation models, you might think they look familiar. Indeed, they follow a mini action research cycle of planning, doing, and checking. If you were planning an evaluation, which approach would you use and why? The answers to these five questions give an evaluation a strong foundation, so it is worth reflecting on them with the client. Once clear on these issues, you can get more specific about evaluation purposes, measurements, information sources, data collection, data analysis, feedback, further action, and how to anticipate and manage resistance. These evaluation steps may look familiar, because conducting an evaluation is similar to doing a small-scale action research project. These steps, illustrated in Figure 6.2, will be explored in the next sections. Figure 6.2: Steps in the checking phase. This figure outlines the steps of a typical evaluation process. Notice that it looks very similar to the action research process of plan, do, check. [The figure shows these steps: determine evaluation purpose; identify appropriate evaluation measures; develop evaluation plan; choose and employ data collection methods; analyze data and provide feedback; conduct summative evaluation.] Consider This Apply Cervero's five questions to an evaluation you have completed or anticipate doing in the future. How have the questions helped you reflect on a past evaluation or plan for a future one?
  • 58. Section 6.4Planning and Performing the Evaluation Determine the Purpose of the Evaluation Determining the evaluation’s purpose(s) offers clear focus moving forward. Referring back to Cervero’s categories of evaluation may help pinpoint what is being evaluated. For effective formative evaluation, a consultant should work with the client to determine what needs to be evaluated throughout the action research process, particularly regarding process improve- ment. For example, in the case of the Leadership Academy, formative evaluation consisted of assessing the curriculum for relevance and cost. The examination of a performance appraisal process change earlier in the chapter showed how one might assess employees’ satisfaction with their level of participation in the process or with a pilot phase of the appraisal. Consultants should also plan the purpose of summative evaluation. Returning to the example of the Leadership Academy, it was essential to find out whether the participants learned and applied the new behaviors and skills, and if so, how their actions affected their organiza- tions. In the case of the performance appraisal process change, the organization wanted to know if it changed supervisor behavior, improved retention, affected learning, and increased performance. Once an evaluation’s purpose has been decided, a consultant can begin to identify what ques- tions to ask the participants. Table 6.6 offers examples of appropriate questions for different
  • 59. evaluation goals. Questions revolve around needs assessment, intervention conceptualiza- tion, intervention delivery, outcomes, and costs. Table 6.6: Typical evaluation questions Questions about the need for the interven- tion (planning and needs assessment) • What is the nature and magnitude of the problem to be addressed? • What are the characteristics of the individuals/team members/ organization? • What are the needs of the individuals/team members/organization? • What consulting services are needed? • What is the time frame? • What contracting is necessary? Questions about the intervention’s concep- tualization or design (planning) • Who is the client? Target population? • What consulting services should be provided? • What is the best intervention? What are the best delivery systems for the intervention? • How can the intervention identify, recruit, and sustain the intended participants?
  • 60. • How should the intervention be organized? • What resources are necessary and appropriate for the intervention? Questions about inter- vention logistics, deliv- ery, and reach (plan- ning and intervention) • Are intervention goals being met? • Are the intended interventions being delivered to the intended individuals/teams/organization? • Has the intervention missed participants who need it? • Are sufficient numbers of participants engaged in the intervention? • Are the participants satisfied with the intervention? • Are administrative, organizational, and personnel functions handled well? (continued on next page) © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.4Planning and Performing the Evaluation Table 6.6: Typical evaluation questions (continued) Questions about intervention outcomes (evaluation) • Are the outcome goals and objectives being achieved?
  • 61. • Does the intervention have beneficial effects on the recipients? • Does the intervention have adverse side effects on the participants? • Are some participants affected more by the intervention than others? • Did the intervention improve the issue/problem? Questions about intervention cost and efficiency (evaluation) • Are resources used efficiently? • Is the cost reasonable in relation to the magnitude of benefits? • Would alternative approaches yield equivalent benefits at less cost? Adapted from Evaluation: A Systematic Approach (7th ed., p. 77), by P. H. Rossi, M. W. Lipsey, and H. E. Freeman, 2004, Thousand Oaks, CA: Sage. See Case Study: Piloting and Evaluating a New Performance Appraisal Process. Case Study: Piloting and Evaluating a New Performance Appraisal Process A paper products manufacturing company begins working with a consultant to improve employee retention, because the company has a significantly higher attrition rate compared with other companies it benchmarked. One of the interventions selected during the action research process is to overhaul the way supervisors share feedback with employees; the per- formance appraisal process will become more developmental
  • 62. than punitive, ongoing rather than once a year, with no surprises in the feedback delivered. Making this change requires developing a new process. A small design team is formed to advise on the new performance appraisal process. It includes the consultant, client, supervisors, and employees. The team designs an intervention that is based on supervisors providing feedback in the moment—that is, when they notice something and want to coach the employee through it. The model also incorporates periodic opportunities for the employee and supervisor to meet, focus on the employee’s developmental plan, and make adjustments as needed. Once the process is developed, the team decides to pilot it with a small department. Before the pilot begins, the participating employees are briefed on the intervention and con- firm their participation. Both the employees and supervisors are trained in the new method, and the supervisors receive additional training on how to provide coaching and give develop- mental feedback. The design team begins to study the pilot group’s response by questioning whether it is meeting the original need of improving retention and whether the design is best for meeting the needs. To this end, the design team holds informal conversations with the employees and supervisors and asks them questions such as these: • What do you like about this new process? • What don’t you like about this new process? • How do you perceive these changes?
  • 63. • Will this work if we expand it further, or would you suggest changes? • What have you learned in the process? The informal conversations yield important data that design team members share during a meeting. There is general support for the idea, but the supervisors do not always feel com- petent in using the new process correctly; they also feel stressed about the time it takes. The (continued on next page) Case Study: Piloting and Evaluating a New Performance Appraisal Process (continued) employees are unsure of what the purpose of the periodic meetings is. Some employees and supervisors are resistant, feeling either distrust toward the process or resignation that things will not change. The design team decides to adjust the process. It provides more support and training to the supervisors on how to coach and share feedback, with the expectation that it will take less time as they become more comfortable with the process. It also assembles the department and models an ideal periodic meeting to touch base on development. The pilot group continues to work with the process for several more weeks, with ups and downs. Prior to rolling out the process to the wider organi- zation, the design team meets with the pilot depart- ment to see if additional adjustments are needed. The process works better for most but still has logis-
  • 64. tical problems that require further change. Finally, the company is ready to roll out the changes organization-wide. It starts with a communication plan and provides training to all employees. The design team continues to monitor the process over the next year. After the plan has been in place for a year, the design team comes together and decides to plan and per- form an evaluation to assess whether the new per- formance appraisal process met its intended goals. The evaluation follows these steps: 1. Describe the purpose of the evaluation. The design team determines that the evalua- tion’s purpose is to assess whether the new performance appraisal process increased employee retention. 2. Identify appropriate evaluation measures. The design team decides to look at three measures: a. Retention comparing the year before the intervention with the year after it b. Employee attitudes c. Supervisor attitudes 3. Choose and employ data collection methods. This depends on the type of data desired and the question the organization wishes to answer. The design team chooses three methods: a. Attrition records to measure the year-to-year comparison of retention rates b. Survey of employees
• 65. c. Interviews of supervisors 4. Analyze data and provide feedback. Once the data is collected, the consultant and client make the first analysis and then involve the design team. Once the findings are refined, they share them during an open meeting with employees. They also share them with supervisors in a separate meeting to get their feedback and input. (continued on next page) [Photo: The design team asks employees how they perceive the changes in the communication with supervisors.]
Identify Appropriate Evaluation Measures
Once an evaluation's purpose has been determined, actions to measure it should be identified. This book has covered a range of evaluation techniques. The formative measures that the Leadership Academy team members identified allowed them to recruit a small group of top leaders to critique the curriculum. They followed this with a small pilot session to troubleshoot and revise the curriculum with an actual audience. Participants evaluated the academy throughout the implementation, and adjustments were made accordingly.
  • 71. Summative measures could have included any of the examples listed in the Kirkpatrick dis- cussion, such as promotions, employee satisfaction, or customer satisfaction. In the case of the Leadership Academy, measures included improved performance, promotions, a leader- ship project, and team satisfaction. Because a main goal was to cultivate leaders from within the organization, measuring the percentage of participants who were promoted from middle management to executive positions was a key metric for evaluating the intervention’s success. Choose and Employ Data Collection Methods With the purpose and measures determined, the consultant should identify appropriate sources of information and methods for gathering the information. For example, if you want to measure the results of a customer-service training, you could measure the number of com- plaints, review written complaints, or contact customers. The methods you might use to do this include surveys, documents, or interviews. Table 6.7 offers an overview of data collection methods appropriate for evaluation. The more commonly used methods to collect evalua- tion data include archival data, observations, surveys and questionnaires, assessments, inter- views, and focus groups. Chapter 4 reviewed methods used to conduct analysis or planning— many of them are similar. Case Study: Piloting and Evaluating a New Performance Appraisal Process (continued) 5. Anticipate and manage resistance to the evaluation.
  • 72. Although the team reviewed worst- case scenarios for how the organization or employees might resist, the problems are minimal. For example, complaints are similar to those heard throughout the pilot pro- cess from employees who were skeptical that the new performance appraisal process would work. Once the analysis is complete, the design team presents its findings to top management. The organization now needs to determine how effective the intervention was and whether it would be wise to invest further in it. Critical Thinking Questions 1. Given your knowledge of evaluation, what are some steps the design team followed in implementing its evaluation? 2. What steps did the design team miss? © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.4Planning and Performing the Evaluation Table 6.7: Evaluation data collection methods Evaluation method Description Interview A conversation with one or more individuals to assess
  • 73. their opinions, observations, and beliefs. Questions are usually determined in advance, and the conversation is recorded. Questionnaire A standardized set of questions intended to assess opinions, observations, and beliefs that can be administered in paper form or electronically Direct observation Viewing a task or set of tasks as they are performed and recording what is seen Tests and simulations Structured situations to assess an individual’s knowledge or proficiency to perform some task or behavior Archival performance data Use of existing information, such as files, reports, quality records, or perfor- mance appraisals Product reviews Internal or external evaluations of products or services Performance reviews Written assessments of individual performance against established criteria Records and documents Written materials developed by organizations and communities (perfor- mance appraisals, production schedules, financial reports, attendance records, annual reports, company and board minutes, training data, etc.) Portfolio A purposeful collection of a learner’s work assembled
  • 74. over time that docu- ments events, activities, products, and/or achievements Cost–benefit analysis A method for assessing the relationship between the outcomes of an educa- tional program and the costs required to produce them Demonstration Exhibiting a specific skill or procedure to show competency Pre- and posttests Instruments used to measure knowledge and skills prior to and after the intervention to see if there were changes Focus groups Group interviews of approximately five to 12 participants to assess opin- ions, beliefs, and observations. Focus groups require a trained facilitator. Source: Adapted from Planning Programs for Adult Learners: A Practical Guide (3rd ed.), by R. S. Caffarella and S. R. Daffron, 2013, San Francisco, CA: Jossey-Bass. Archival Data Evaluating the degree of change an intervention produced requires establishing a baseline of existing information from employment records, production figures, or quarterly reports. This information is referred to as archival data (or documents and records). You are not seeking new data but are using existing data to assess the intervention’s effectiveness. Archival data is easily accessible, typically available for no or minimal cost, and useful for providing historical context or a chronology of events, such as employee satisfaction over time. The Leadership
  • 75. Academy team relied on archival data from performance reviews and employee satisfaction surveys to evaluate impact. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.4Planning and Performing the Evaluation Observation Data Watching the organization engage in its everyday operations involves observation. Obser- vation is a type of evaluation based on detailed descriptions of day-to-day behaviors that cannot be explored by viewing existing archival records. Examples of observation data might include checklists of meeting-leader behaviors completed by one of the team members, call monitoring forms, listening skills, and body language. Observation did not play an official role in the Leadership Academy evaluation process; however, participants’ supervisors observed the changes they made in their approach to their work and documented these in their per- formance reviews. Data collection by observation can range from routine counting of certain occurrences to writing narrative descriptions of what is being observed. Surveys and Questionnaires Surveys and questionnaires are helpful evaluation data collections for measuring the inter- vention’s effects. They should be completed by respondents with some experience related to
  • 76. the intervention. In the Leadership Academy vignette, the consultants used surveys to gather participants’ input on their individual leadership styles during the program. Other examples include end-of-course reaction forms or surveys of stakeholders such as customers, employ- ees, or management. Surveys and questionnaires might also be appropriate when evaluators desire new data from multiple individuals who may be dispersed throughout the organiza- tion. Surveys and questionnaires are relatively inexpensive and easy to administer, particu- larly with the use of technology. It is important that these instruments be well constructed; their wording must be unambiguous, and they must be easy to complete. Paper, Pencil, or Computer-Based Tests Consultants can administer a variety of commercially produced tests to assess the knowledge or skills imparted by an intervention, or they can develop an original test unique to the inter- vention. No matter the type of test employed, the evaluation result is based on the test scores. This type of evaluation works well when trying to determine the quantity and quality of the participants’ education. OD consultants might administer a pretest before an intervention and a posttest afterward, or they might require participants to pass a test to attain a certifi- cate of completion. Tests should be cautiously designed and prudently administered. First, questions must be written in a way that consistently and accurately measures what was taught. Second, partici-
  • 77. pants may perceive test taking as threatening, especially if the results will be used to make performance appraisal decisions. Therefore, efforts to defuse test apprehension should be built into the process. This very book uses some of these tools. Teaching you about OD is the intervention, which is executed via concepts presented in book form. Additional interventions take the form of assignments and opportunities to engage with other learners. Pre- and posttests check your prior knowledge on the topic and gauge how well you learned the concepts after you engaged with them. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.4Planning and Performing the Evaluation Individual and Focus Group Interviews Chapter 4 discussed interviews and focus groups as effective ways of understanding targeted individuals’ or groups’ views, beliefs, or experiences with the issue under investigation. Both approaches depend on developing well-crafted questions that yield useful information. Inter- views and focus groups should be run by an experienced facilitator. These methods yield rich, qualitative information that includes insights about the intervention, critiques, or success stories. Not all participants react well to these data collection methods, however, and some
  • 78. may not trust the interviewers or the process; they may not feel comfortable enough to be honest. Participants may also say what they think the facilitator wants to hear. Analyze Data and Provide Feedback Once data has been collected from an appropriate source and via an appropriate method, it needs to be analyzed. Refer to Chapter 5 for a full discussion of how to analyze data. In the Leadership Academy vignette, performance reviews and employee satisfaction data from sur- vey research were analyzed. The team also monitored participants’ leadership projects and promotional advances. Next, the data analysis should be presented as feedback to key decision makers such as affected employees and management. How to share feedback with a client is covered extensively in Chapter 4, and the same rules apply when sharing evaluation feedback. It is a consultant’s job to determine the feedback meeting’s key purpose and desired outcomes. Does the client need help determining whether to continue the intervention? Modify it? Measure learning or performance? Address unintended consequences of the intervention? Sharing feedback with the client involves determining the focus of the feedback meeting, developing the agenda for feedback, recognizing different types of feedback, presenting feedback effectively, managing the consulting presence during the meeting, addressing confidentiality concerns, and antici- pating defensiveness and resistance.
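To make this step concrete, consider the measures from the performance appraisal case study: year-over-year retention from attrition records and attitude data from surveys. The sketch below is an illustration only, with invented figures rather than data from the case; a real analysis would draw on the organization's actual records and the analysis guidance in Chapter 5.

```python
# Hypothetical sketch of the analysis step: compare retention before and after
# the intervention and summarize a survey item. All numbers are invented for
# illustration; real figures would come from attrition records and the survey.

def retention_rate(headcount_start: int, exits: int) -> float:
    """Share of employees retained over the period."""
    return (headcount_start - exits) / headcount_start

# Year before vs. year after the new performance appraisal process (hypothetical)
before = retention_rate(headcount_start=420, exits=63)   # 85.0%
after = retention_rate(headcount_start=430, exits=39)    # about 90.9%

# Hypothetical 1-5 responses to "The new appraisal process supports my development."
survey_scores = [4, 5, 3, 4, 4, 2, 5, 4, 3, 4]
mean_score = sum(survey_scores) / len(survey_scores)

print(f"Retention before: {before:.1%}, after: {after:.1%}, change: {after - before:+.1%}")
print(f"Mean survey rating: {mean_score:.1f} on a 5-point scale")
```

Figures like these would then be paired with the qualitative findings from interviews when the results are fed back to employees, supervisors, and management.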
  • 79. At any point in the evaluation process, data collection and analysis can prompt the team to decide to change future action. For example, the team might decide to adjust the ongoing pro- cess, continue the process with new interventions, or close the project if the problem is per- manently solved. This is the third step of the evaluation process, defined earlier in the chapter as termination or recycling. It is discussed in detail in the next section. Anticipate and Manage Resistance to the Evaluation Sometimes, evaluation is resisted by the client, organization, or other stakeholders. Resis- tance and strategies for curbing it were discussed at length in Chapter 5. Resistors may not want to spend more money to learn the results of the intervention. Or the organization may be unwilling to spend the time required to conduct an evaluation and instead want to move on to the next issue. There may be fear about what the evaluation will reveal (perhaps manage- ment failed to implement the changes, or perhaps employee views remain negative). Organi- zation members can also suffer from change fatigue and worry that the evaluation will bring even more change. Of course, such resistance patterns are likely what created problems in the first place, so observing them warrants timely intervention with the client. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution.
• 80. Moreover, a consultant should anticipate political issues the evaluation might create. Results of the evaluation can also influence future resource allocations, which could cause trepidation and conflict among organization members. Remaining vigilant as a consultant and working to be authentic and influential is key to navigating the politics of evaluation. See Assessment: Testing Your Change Management Skills. Some clients may resist doing the evaluation because they are more interested in moving on to the next challenge or opportunity. Or they may not want to subject themselves to potentially negative feedback. In this case, the consultant should lay out the benefits of measuring results and learning from both positive and negative feedback. Doing so shows good stewardship of the time and resources committed to the intervention and provides data to support future initiatives. One way to minimize resistance to evaluation is to make sure it has been addressed during contracting, as outlined in Chapter 3. Even when there is cooperation and investment, evaluation is not easy. Demonstrating impact and results can be challenging for certain interventions, such as improving leadership. Linking results to intervention events can also be tricky. Devising appropriate evaluation criteria can be problematic, especially if intervention outcomes were vague from the initial planning. Finally, the client may balk at making judgments about the intervention.
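When the client hesitates to judge the intervention, a simple benefit-cost comparison can frame the conversation; it echoes the cost and efficiency questions in Table 6.6 and the final question Paine added about how benefits compare with costs. The figures below are invented for illustration, and monetizing benefits in practice requires agreement with the client on what to count.

```python
# Hypothetical benefit-cost framing for the evaluation conversation.
# All figures are invented; a real comparison would use the organization's own
# cost records and an agreed-upon way of estimating benefits (e.g., avoided
# turnover replacement costs).

intervention_costs = {
    "design team time": 18_000,
    "supervisor training": 25_000,
    "consultant fees": 40_000,
}
estimated_annual_benefit = 120_000  # e.g., reduced replacement and vacancy costs

total_cost = sum(intervention_costs.values())
net_benefit = estimated_annual_benefit - total_cost
benefit_cost_ratio = estimated_annual_benefit / total_cost

print(f"Total cost: ${total_cost:,}")
print(f"Net benefit: ${net_benefit:,}")
print(f"Benefit-cost ratio: {benefit_cost_ratio:.2f}")
```

A ratio above 1.0 suggests the benefits exceeded the costs, but the judgment still depends on how defensible the benefit estimates are, which is one reason to agree on measures during contracting.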
  • 81. 6.5 Concluding the Action Research Process All consulting jobs end. Indeed, your goal as a successful consultant is to become redundant and work yourself out of a job. In our Leadership Academy vignette, Leah terminated her role with the project after 2 years, the first of which focused on planning and the second on imple- mentation. During year 2, she worked with internal consultants who would take over her role leading and facilitating the Leadership Academy in its third year. In this way, she fostered a repetition—called a recycling—of the intervention. Disengagement or Termination When the client has successfully implemented a change, the OD consultant is no longer needed. At this juncture, the client has become self-reliant and can effectively disengage or terminate the consulting relationship. Working oneself out of a consulting job may at first seem like a bad idea. On the contrary, smoothly disengaging from a client is how to help clients build capacity and also the way to get repeat business as a consultant. Effectively navigating this Assessment: Testing Your Change Management Skills Visit the following link to take an assessment that provides a good review of change and some insight into resistance. The website offers several resources for learning more about manage- ment skills. http://www.mindtools.com/pages/article/newPPM_56.htm © 2020 Zovio, Inc. All rights reserved. Not for resale or
  • 82. redistribution. Section 6.5Concluding the Action Research Process stage depends on setting the expectation during contracting, recognizing the appro- priate timing, processing any interpersonal issues between the consultant and the cli- ent, ensuring that the learning is ongoing, verifying that the client is satisfied, and planning for post-consulting contact. Contracting About Termination A consultant should start setting expecta- tions about disengagement right from the beginning of the consultancy, during con- tracting, as discussed in Chapter 3. There are several things that help disengagement go smoothly. First, the consultant should work with the client to train others in the organization to take over the role played by the consultant, as Leah did in the Leadership Academy vignette. A consultant’s disengagement may be abrupt or more gradual, depending on client needs and resources. If the relationship is expected to be terminated gradually, make sure the client builds the ongoing consulting into the budget. Ensuring Learning Capacity The action research process focuses on promoting learning and change that helps the client diagnose issues, act on them, and evaluate the results. As emphasized in Chapter 5, change and learning go hand in hand. The action research process helps
  • 83. the client build capacity to solve future problems. When the client has the capacity to follow the action research process and continue learning, the client is ready to tackle future challenges without your help. Recognizing Appropriate Timing It is the consultant’s job to monitor both the client and the change implementation to assess when the organization has the capacity to continue without help. Clients may resist termina- tion because they have become over-reliant on the consultant. You can avoid this dependency by striking a collaborative relationship from the beginning. When it is time to terminate, it makes sense to make a grand gesture to signal the relationship has ended. You might want to plan an event with the client, such as presenting a final report, celebrating the key stake- holders, or publishing some type of document that tells the organization’s story. The Leader- ship Academy consultancy culminated in the graduating cohort and the new cohort coming together to celebrate and Leah turning over the management reins to the internal consultants. Verifying Client Satisfaction We have discussed the importance of being authentic with your client and completing the business of each phase of the action research project, as Block (2011) recommended in his classic consulting text. Those key roles remain relevant right up until the end. That is, a consul- tant should continue to ask the client questions such as “Are you getting what you need from
  • 84. Skynesher/E+/Getty Images Plus The action research cycle is terminated when the implementation has been a success. Ending the consulting relationship smoothly and ensuring customer satisfaction helps drive repeat business. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 6.5Concluding the Action Research Process me?” “Is this outcome what you expected?” “What are you concerned about in the future?” “Can you maintain this change on your own?” When you have verified that the client is happy with the OD effort, you can move toward ter- mination. If the client is unhappy, however, work remains. Planning for Post-Consult Contact Although the consulting relationship will end at some point, it is advisable to have a plan for consulting after the intervention has been deemed a success. Clients may run into trouble in the future or need their questions answered. It is thus wise to develop a follow-up and mainte- nance plan with the client that involves periodic checking to make sure the change is on track. Agree on a minimal support maintenance plan such as periodic meetings or reports. Leah, for example, continued to periodically touch base with the Leadership Academy to ensure things were functioning smoothly after her departure.
  • 85. Although it would be considered a failure if a consultant had to return to solve the problem he or she was initially contracted for, it is likely that the client will face new challenges and seek out help. Ensuring that there is an open communication channel and guidelines for future engagement can put both parties at ease. Recycling There are times in the consultancy when termination or disengagement is not a good option for the client. This is true of interventions that are designed to repeat over time. In the Lead- ership Academy vignette, for example, the project was designed to repeat annually. Although Leah terminated her involvement with the project, she trained internal consultants to carry on her role, effectively repeating or recycling the action research process. Recycling can also be an option when the client seeks additional changes beyond the change that has already been effectively implemented. For example, consider a company that started providing executive coaching for its emerging leaders. The program was so successful that the company decided to offer training that brought some of the coaching principles to a wider audience. Recycling can also occur when the intervention was only moderately successful or even failed. An evaluation can usually expose an intervention’s shortcomings and help the organization identify adjustments or new interventions. One example would
  • 86. be an organization that did not follow the action research process and implemented a random intervention that was not clearly linked to the problem, such as requiring employees to attend training unrelated to the organization’s needs. Or the organization might have implemented something similar to the Leadership Academy but failed to prepare upper management to deal with highly enthusias- tic emerging leaders clamoring to make changes that challenge the status quo. In this case, a recycled intervention would target upper management members and help them become more equipped to mentor up-and-coming employees. Regardless of whether the OD intervention and action research process has been terminated or recycled, when your client has been successful at changing and has learned new ways © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Summary and Resources of thinking and behaving, you have completed successful OD. Ultimately, OD seeks to build capacity in individuals and organizations so they can problem solve without your help. That is the mark of an effective action research process. Summary and Resources Chapter Summary
  • 87. • Evaluation is a process of assessing, adjusting, and terminating or recycling the intervention based on data and subsequent decisions. • The purpose of evaluation is to make data-based decisions about an intervention’s quality, appropriateness, and effectiveness. • Evaluation can be formative or summative. Formative evaluation is concerned with improving and enhancing an OD process as it is underway, rather than judging its merit. Summative evaluation occurs after the implementation is complete and ascertains whether the change accomplished the desired outcomes and impact. Both provide valuable ways to assess the intervention before, during, and after it has occurred. • Cervero’s categories of evaluation show the different approaches to evaluation. It is important to be clear on an evaluation’s purpose at the planning stage. Typical evaluation categories include intervention design and implementation; employee participation and satisfaction; acquisition of new knowledge, skills, and attitudes; application of learning after the intervention; impact of the intervention; and inter- vention characteristics associated with outcomes. • Multiple frameworks for conducting evaluation exist. The best known is Kirkpat- rick’s four-level evaluation framework. This model measures reaction, learning,
  • 88. behavior, and results. • Other models of evaluation include Hamblin’s five-level model; Preskill and Torres’s evaluative inquiry; Brinkerhoff ’s six-stage model; and the input, process, output, and outcomes model. • Planning and performing the evaluation involves several steps, the first of which is determining the evaluation’s purpose. Articulating a clear purpose gives the evalua- tion focus and helps identify appropriate participants, measures, and methods. • Identifying appropriate evaluation measures is driven by the evaluation’s purpose. If a consultant aims to measure employee satisfaction after a change in leadership, he or she would likely survey employees to assess their satisfaction with the change. • Once the evaluation purpose and measures have been chosen, the data collection methods should be determined and carried out. Typical methods include surveys, interviews, focus groups, observations, and documents. • Once the data is collected, it can be analyzed and fed back to the client and other interested stakeholders. This information is important for making decisions about the continuance of the intervention and future funding. • It is advisable to anticipate client resistance both to conducting the evaluation and to
  • 89. hearing the results. Consultants can write evaluation protocols into their initial con- tract. When you notice resistance to evaluation, act quickly to defuse the resistance, address the concerns, and help the client use information most effectively. • The action research process concludes by being terminated or recycled. The process is terminated when the change is successfully implemented. There is no longer a © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Summary and Resources need for a consultant. The client has built capacity to use the action research process on future problems. • The action research process is recycled when termination is not a good option for the client. For example, there may be a desire to expand or improve the implementa- tion. There may also be a need to continue working with the consultant if the inter- vention repeats over time. In some cases, however, the intervention has failed, and it is time to consider a new approach. Think About It! Reflective Exercises to Enhance Your Learning 1. The chapter began with a vignette about a leadership
  • 90. academy for a state public health agency, which featured both formative and summative evaluation. Have you been in a situation where an evaluation occurred? If so, can you recall the different types of evaluation? If you have not experienced a formal evaluation, how might you go about evaluating a change you experienced? 2. Think about a change you have implemented. It could be personal, like changing a habit or starting something new, or professional, like taking on a new responsibility or position or meeting a challenge. Conduct a formative evaluation (focusing on what you did or could have improved on) and a summative evaluation (in which you judge the effectiveness and impact) on the change. 3. Which evaluation framework presented in this chapter was the most appealing to you? Why? 4. Reflect on how you might go about evaluating a recent change in your organization using one of the data collection methods outlined in the chapter. 5. Recall a time you have resisted change, especially organization change. How could a consultant or the organization have helped you become more accepting? Apply Your Learning: Activities and Experiences to Bring OD to Life 1. Imagine an organization hires you as an external consultant.
  • 91. It needs you to imple- ment a new recruitment and retention process aimed at hiring a more diverse work force. How would you go about evaluating whether the change was successful? a. What is the evaluation’s purpose? b. What steps will you follow to conduct the evaluation? c. What level(s) do you hope to evaluate, as per the Kirkpatrick framework? d. What data collection method will you use? 2. Identify a process, practice, or performance standard you would like to improve and plot how you would benchmark it. 3. Evaluation may be an afterthought in many interventions. How would you ensure evaluation is integrated into a change effort you are involved with or leading? How might you curb resistance? 4. Identify an intervention in which you have participated at work and evaluate it according to Kirkpatrick’s four-level framework: a. reaction b. learning c. behavior d. results 5. Plan an evaluation according to its a. purpose b. measures © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution.
  • 92. Summary and Resources c. information sources and methods d. analysis and feedback e. future action f. political issues 6. If you have ever participated in an OD intervention led by a consultant, identify what types of evaluation were conducted. How well did the consultant do, based on the principles presented in this chapter? 7. Have you experienced a failed OD intervention that had to be recycled? If so, use the information presented in this chapter to diagnose what went wrong. Additional Resources Media • Michael Quinn Patton Evaluation Videos https://www.youtube.com/results?search_query=michael+patton +quin+evaulation • Kirkpatrick Model: Should I Always Conduct a Level 1 Evaluation? https://www.youtube.com/watch?v=dVnBE2W7qAI&list=PL3D 286DBB9370267D • Kirkpatrick Model: Monitoring Level 3 to Maximize Results https://www.youtube.com/watch?v=r-
  • 93. qF4kJrTiI&list=PL3D286DBB9370267D Web Links • The American Evaluation Association (AEA), an international professional associa- tion of evaluators devoted to the application and exploration of program evaluation, personnel evaluation, technology, and many other forms of evaluation: http://www.eval.org • The Online Evaluation Resource Library, a useful site that collects and makes avail- able evaluation plans, instruments, and reports that can be used as examples by principal investigators, project evaluators, and others: http://oerl.sri.com • The Centers for Disease Control and Prevention Program Evaluation Resources site, which offers a plethora of useful content on conducting evaluations: http://www.cdc.gov/EVAL/resources/index.htm Key Terms adjusting processes The process of chang- ing the OD intervention once the assess- ment data has been collected and analyzed. archival data Existing records such as employment records, production figures, or quarterly reports. Used as data sources
  • 94. when collecting evaluation data. assessing changes The process of gath- ering and analyzing data related to the learning and change associated with an OD intervention. behavior Kirkpatrick’s level 3 evaluation, which measures how participants perform differently as a result of the intervention. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Summary and Resources benchmark When an organization com- pares its business practices, processes, and performance standards with other organi- zations that are best in class. checking A data-based evaluation to assess whether an intervention had the intended result. evaluation A data-based checking process (assessment, adjustment, terminating, or recycling) to assess whether an interven- tion had the intended result. formative evaluation Assessments of an intervention before or during its imple- mentation geared toward improving the process.
  • 95. learning Kirkpatrick’s level 2 evaluation, which measures what skills and knowledge participants gained from the intervention. observation Type of evaluation based on detailed descriptions of day-to-day operations. reaction Kirkpatrick’s level 1 evaluation, which measures how well participants liked the intervention. recycling The process of repeating or revising the action research process when further interventions are desired or the initial intervention has failed. results Kirkpatrick’s level 4 evaluation, which measures how the bottom line was affected by the intervention. summative evaluation Assessment that is done once the intervention is completed to judge whether it attained its goals and addressed the problem and to make future decisions about funding and continuance. surveys and questionnaires Evaluation data collection method that uses instru- ments that participants complete to provide feedback on the intervention. terminate To disengage from a consulting relationship with an organization at the end of the action research process.
  • 96. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Action Research: The Doing Phase 5 monkeybusinessimages/iStock/Getty Images Plus Learning Objectives After reading this chapter, you should be able to: • Describe factors that influence a client and organization’s readiness for change and promote acceptance of interventions. • Define an OD intervention, including the different ways to classify interventions and the crite- ria for choosing an appropriate intervention. • Explain the consultant’s role in implementing OD interventions and how to promote learning to sustain them. • Discuss common issues related to monitoring and sustaining change, including the reasons that interventions fail, the ethics of the implementation stage, client resistance, and strategies to sustain change.
  • 97. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. A major land-grant university received federal funding to promote education among public health employees in a southern state. As soon as the monies were awarded, several educational initiatives began to serve multiple stakeholders across the state. One of the projects that James, the grant’s principal investigator, wanted to initiate was a leadership academy for mid-level managers with potential to advance to higher levels of public health leadership in the state. Previous analyses of the organization, including succession planning, had revealed a long-term need to provide leadership development. This need lingered for many years because public fund- ing was not available to provide a comprehensive program. The grant finally created the oppor- tunity to deliver this much-needed program. James contacted an external consultant, Leah, to help plan the program. Leah was a good choice for a consultant; she was an expert in leadership and program develop- ment. The contracting meeting was set up, at which Leah and James determined the scope of the project: a 1-year leadership development academy for the state’s top 25 leaders. The project had two objectives: 1. Pilot a program that will become a permanent leadership development academy
  • 98. available to high-potential leaders on an annual basis. 2. Strengthen the leadership competencies and culture within the state public health work force. Although Leah would be the lead consultant and facilitator for the project’s planning and imple- mentation over the first 2 years, the goal was to build capacity within the state so that internal facilitators could sustain the program over the long term. The project required an action research approach to collect and analyze initial data about the target population’s needs, so the decision was made to conduct interviews and surveys to determine the content of the leadership development academy. Based on Leah’s expertise in leadership development, her role was defined as part expert, part collaborative partner with James and his university. The project had a 2-year implementation timeline, with the first year focused on planning and the second year devoted to implementation. Evaluation would be ongo- ing and continue past the second year as a new cohort was started in year 3, staffed by internal consultants. James and Leah met regularly to plan the pro- gram. This involved undertaking the planning or discovery phase of action research: diag- nosing the issue, gathering data on the issue, sharing feedback, presenting the data analy- sis, and planning to act. The “doing” phase of the action research pro-
  • 99. cess is the phase in which the intervention is implemented. For the Public Health Leader- ship Academy, this phase began in September with 25 participants who had been competi- tively selected from across the state. The par- ticipants convened at a resort, and the program was kicked off by high-level state public health CasarsaGuru/E+/Getty Images Plus The Public Health Leadership Academy kicks off after an extensive planning process. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.1Readiness for Change officials. The first session lasted 3 days. During this time, the participants received results of a leadership styles inventory, listened to innovative lectures and panels on leadership, planned an individual leadership project in their districts, and engaged with each other to develop working relationships. The academy met monthly for a year and focused on a range of topics related to leadership that were prioritized based on prior data collection. The grant provided for an evalu- ator, so formative data was collected at each meeting. The kickoff of the Leadership Academy set the stage for the entire year. The beginning set the tone and expectations for what the par- ticipants could expect. Pythagoras is credited with saying, “The beginning is half the
  • 100. whole” (as cited in Infoplease, n.d., para. 1), which inspired the modern idiom “Well begun is half done.” This philosophy is well applied to creating OD interventions; that is, effective planning is key to successful change implementation. Chapter 4 introduced the first phase of the action research model, planning or discovery. This chapter focuses on the second phase, doing or action. Action research takes a data-based approach to diagnosing organization problems so that interventions can be imple- mented to permanently solve problems. We will return to the Public Health Leadership Academy vignette throughout the chapter to illustrate the action phase. See Table 5.1 to review the action research model we are following in this book. Table 5.1: Action research model Phase Action Planning (the discovery phase) 1. Diagnosing the issue 2. Gathering data on the issue 3. Sharing feedback (data analysis) with the client 4. Planning action to address the issue Doing (the action phase) 1. Learning related to the issue 2. Changing related to the issue Checking (the evaluative phase)
  • 101. 1. Assessing changes 2. Adjusting processes 3. Ending or recycling (back to the planning stage) the action research process This chapter will pick up at Step 4 of the planning phase, planning action to address the issue. Planning action involves choosing and initiating interventions. Interventions represent the action taken to resolve the problem, so they link the action research model’s planning and doing phases. A key activity in Step 4, however, is to assess the organization’s readiness for change. 5.1 Readiness for Change Once you have worked with the client to plan for how the organization will address the prob- lem, you move into the implementation phase. Ultimately, the measure of effective OD is whether a change was made and if it stuck. Implementing change is easier than sustaining it. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.1Readiness for Change Most people have successfull y dieted and lost weight; the hard part is maintaining the weight loss and sustaining new behaviors over the long term. Similarly, organizations may success-
  • 102. fully implement a new leadership development program but have difficulty in sustaining the necessary behavioral and cultural changes that ensure improved leadership. Making change is not the same as sustaining change. The latter is much more difficult. That is why OD consul- tants must help the client develop strategies to ensure people are accountable to maintain the change and create measures to help the organization sustain the change. Effectively initiating change depends on the organization’s perception that the change is nec- essary and achievable and that employees are willing to support change efforts (McKay, Kunts, & Näswall, 2013). These variables signal readiness for change. Our understanding of change readiness emerged from the fields of health psychology and medical studies (Block & Keller, 1998) and was later applied to organizatio ns. See Who Invented That? The Transtheoretical Model of Health Behavior Change. Who Invented That? The Transtheoretical Model of Health Behavior Change Models of change readiness originated in health care. The transtheoretical model is consid- ered the most influential and was proposed by Prochaska and DiClemente in 1983 based on their research on smoking cessation. A description of the model’s six stages follows. 1. Precontemplation (not ready). A state in which people are unaware their behavior is problematic; thus, there is no intention to take action. For example, suppose Jacob is
  • 103. a manager with an ineffective leadership style. Jacob is doing what he has observed other managers doing and does not give his performance any thought. 2. Contemplation (getting ready). A state in which people begin to notice their behavior is problematic and begin to weigh the pros and cons of their continued actions. (Lewin would refer to this as “unfreezing,” or readiness for change.) For example, Jacob may start to notice that he is not getting the results he would like in his role as a manager. He can see that people do not listen to him, and he starts to ponder whether he should change his behavior. 3. Preparation (ready). A state in which people set intenti ons to take action on the prob- lem behavior in the immediate future. For example, Jacob may decide to start explor- ing different leadership approaches and resources for improving, such as reading, tak- ing a class, or seeking mentoring from managers whose behavior he wants to emulate. 4. Action (doing). A state in which people are engaged in making visible modifications to their behavior, usually by replacing the problematic behavior with a new, more pro- ductive behavior. (This would be known as “moving” in Lewin’s terms.) For example, Jacob may decide to seek a mentor, read some books, and take a leadership class. He may also begin to implement some new behaviors with his staff.
  • 104. 5. Maintenance (maintaining). A state of preservation in which people have been able to sustain the change for a time and are working to prevent relapse to the former prob- lematic behavior. For example, Jacob may work to avoid slipping back to less effective management behaviors, such as failing to consult employees on important decisions. He may also seek feedback and support from his mentor. (continued on next page) © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.1Readiness for Change Reflect on an organization change you recently experienced. Perhaps there was a transforma- tion in financial systems requiring employees to adopt new procedures for reporting travel or making expenditures. Other changes might include adjusting to a new CEO or president, learning new features in products or services provided to customers, or abiding by additional expectations for completing work. Most often, people are neither pleased about nor ready for changes they are asked to make. Changes are often met with skepticism, resistance, and even anger. There are several dimensions to preparing an organization for change, and readiness for change is important for practitioners of OD to consider at individual, group, and organiza- tion levels.
  • 105. Dimensions of Change Readiness When a change is sprung on an individual or organization, a range of reactions occurs. Per- haps there is a sense of surprise, dismay, anger, excitement, fear, or dread. How people and organizations respond to a change can be measured by how ready they are to make a change, whether planned or unplanned. Dimensions of change readiness involve gauging readiness and understanding the dynamics of change. When you have been faced with change, what was your reaction? Gauging Change Readiness Five dimensions influence the level of readiness to make changes (Hord & Roussin, 2013). The first is whether data exists that justifies the change in a way that is relevant and compel- ling to the organization. That the data exists is not enough: It must be communicated clearly and compellingly by management. Next, employees must be engaged in ways that promote their input and ownership of the change. The third dimension is to ensure that the scope and impact of the change is appropriate for the organization’s culture and strategy. Next, the structure of the change should be clearly defined in terms of new roles, procedures, and resources. Finally, the organization needs to prepare to let go of past practices and find a rea- sonable timeline and process for incorporating the change. Table 5.2 offers a checklist of these dimensions, with examples of each category. It can be used to gauge an organization’s change readiness.
  • 106. Who Invented That? The Transtheoretical Model of Health Behavior Change (continued) 6. Termination (ending). A state in which the new behavior has become permanent (this would be known as “refreezing” in Lewin’s terms) and people are not tempted to revert to their old problematic behaviors. By this point, Jacob has integrated the more participative leadership style into his repertoire and does not even think about it any- more—it has become a natural part of his being. He may now be ready to help others make similar changes (Prochaska & DiClemente, 1983). © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.1Readiness for Change Table 5.2: Readiness for change Readiness dimensions Readiness dimension indicators Relevance and meaning: Make a compelling case for the change or identify the benefits of the interventions. • There is ample data to justify the need for this change. • Employees have had plenty of opportunity to discuss the whys for this
  • 107. change. • This change is not being driven by a crisis mindset. • There is anecdotal evidence from employees expressing why this change is important. • There is evidence that a culture of trust exists with employees about this change. Consensus and ownership: Engage employees so there is ownership of the desired change. • Employees express ownership for this change. • Employees say they are willing to commit energy and time toward this change. • This change was not driven by a top-down mandate and one- way communication. • Employees think this change will make a significant difference and bring results. • Stakeholders are strong supporters of the change. • There is shared responsibility and collective trust for this change. Scope and culture: Define the scope of the change and the impact it will have on the
  • 108. organization’s culture, cur- rent mindsets, and behaviors. • Advocacy for the change has been sensitive to organization culture. • Employees mentally, emotionally, and physically embrace the change. • Change leaders have been respectful and sensitive in helping employees make sense of the change over time. • The change aligns well with other recently implemented interventions. • The change will not overwhelm employees’ current workload. • The change leaders serve as role models of the desired change. Structure and coherence: Determine change leadership roles, structure, decision mak- ing, and how the change will interface with organization operations. • The right stakeholders have participated in the action research process and decision making for this change. • Leadership has identified key roles to support the change moving forward. • Employees understand how future decisions will be made around the change. • Appropriate resources have been dedicated to implement the change
  • 109. (e.g., finances, time). • The change is feasible and the right resources are in place to sustain it. • Frequent and adequate communication with feedback has guided the change. Focus, attention, and letting go: Assess where to focus attention based on data and determine what can be let go in order to create room for change. • Change leaders have determined what past initiatives/practices can be let go in order to make room for this change. • There is a reasonable timeline established for this change to support its full implementation. • There is clear understanding by employees of what the change is going to entail. • Employees understand the demand and expectations for the change. • There are indicators established for this change to identify early successes. • The appropriate technology tools are available to support this
change.

Source: Adapted from Implementing Change Through Learning: Concerns-Based Concepts, Tools, and Strategies for Guiding Change (p. 38), by S. M. Hord and J. L. Roussin, 2013, Thousand Oaks, CA: Corwin.

Cheung-Judge and Holbeche (2015) offered tools to map change readiness of various stakeholder groups by charting readiness according to “level of commitment to change” and “ability to make it happen.” Table 5.3 presents their process. Consider a change you experienced in the past or are currently facing. Using Table 5.3, plot your commitment to change and ability to make change happen. Next, plot your view of other key stakeholders or groups involved in the change. What issues in change readiness do you see in yourself or others, and how might you address them?

Table 5.3: Mapping readiness for change
Key stakeholders/groups
Commitment to change: Low / Medium / High
Ability to make change happen: Low / Medium / High

Dynamics of Change
  • 111. Readiness to change usually indicates a willingness to entertain new ways of thinking and doing. Hord and Roussin (2013) outlined the change dynamics in the following manner: 1. “All change is based on learning, and improvement is based on change” (Hord & Rous- sin, 2013, p. 2). Most change depends on learning. Hord and Roussin (2013) valued learning for the way in which it enables people to abandon nonproductive behaviors and replace them with behaviors more supportive of the intervention. They empha- sized, “At the center of all successful implementation of a change is the opportunity for adults to come together and learn” (Hord & Roussin, 2013, p. 2). 2. “Implementing change has greater success when it is guided through social interac- tion” (Hord & Roussin, 2013, p. 3). OD’s collaborative, collective ethic lends itself to building communities of change that band together to implement new programs and solutions. 3. Individuals have to change before organizations can change. If a group or team is to successfully pull off major changes, individuals need to possess the skills and capaci- ties to execute the necessary behaviors. Key to facilitating individual change is giving individuals choice and opportunities to influence the process and their environment. The stages of concern model (Hall & Hord, 1984, 2011) discussed in Chapter 2 pro-
  • 112. vides a framework for helping individuals address concerns related to change. 4. “Change has an effect on the emotional and behavioral dimensions of humans” (Hord & Roussin, 2013, p. 3). Change is stressful. When we fail to respect and tend to the emotional reactions to change, the change will likely fail. People need opportunities to air their hopes and fears about a change; this helps them feel safe during and after the process. 5. Employees will more readily accept change when they understand how the interven- tion will enhance their work. This belief ties in to adults’ need for learning to be © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.1Readiness for Change timely, relevant, and linked to their experience; it also relates to the power of con- necting individual and organization goals. 6. The client and/or leader’s role is to engage employees in dialogue about the changes as a way of promoting communication and ownership of the change. The more the change is talked about and explained, the easier it will be for employees to embrace.
Factors Influencing Readiness to Change

The client and the consultant can take steps to prepare the organization for change. The first is to clearly communicate the discrepancy between the status quo and the desired state. In Chapter 4, this discrepancy was defined as a performance gap. Employees will be prepared to change when they understand why the change matters (Madsen, Miller, & John, 2005). The second step is to bolster employees’ confidence that they possess the knowledge, skills, and abilities to deal with the performance gap and make the changes necessary to close it. Employees will accept change when they perceive a match between their skills and abilities and those needed to diminish the performance gap (Chreim, 2006).

Perceived Appropriateness of the Change

Readiness to change depends on several additional variables being in place if the change is to succeed. When employees view the change as appropriate to the organization, they will generally support and readily embrace it (Holt, Armenakis, Feild, & Harris, 2007). For example, several years ago, most organizations did not recycle; the idea of sustainability was unfamiliar to both companies and communities. As global awareness of pollution and environmentalism has increased, so has the willingness to change our behavior. Today, it is common to have recycling bins throughout an organization—you may even have one in your office and at home. Recycling is now embraced because we view it as appropriate and necessary. Of course, even the most appropriate change must be
  • 114. communicated well and visibly supported by management. Creating a Shared Vision of the Change When management engages employees in planning for the future, they are working to cre- ate a shared vision (Hord & Roussin, 2013). A shared vision is the creation and articulation of the organization’s desired future state that is mutually agreed upon with employees. As discussed in Chapter 4, a shared vision may be attained by completing a gap analysis that identifies the discrepancy between the current state and the desired state. For example, when the University of Georgia decided to change its platform for online learning, it involved sev- eral stakeholders, including students and faculty, in evaluating new platforms and providing input in the final decision. By creating a shared vision for what the university desired in terms of technology, image, and learning experience, the OD intervention significantly increased buy-in when the new platform was implemented. The more management can involve affected employees in the change’s planning and implementation, the more the entire organization will support the change. Level of Managerial Support of the Change When management visibly advocates for and adopts a change, it sends a message to the orga- nization about the change’s necessity and importance (Holt et al., 2007). Management serves © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution.
  • 115. Section 5.1Readiness for Change as a model to employees, who watch to see whether managers actually commit to the change. For example, suppose an organization attempts to create a more diverse and inclusive culture. Managerial support might include articulating the organization’s commitment to diversity and inclusion at every opportunity, promoting a diverse range of employees to key positions, hiring for diversity, and rewarding behaviors that support diversity and inclusion. Providing the Necessary Resources, Support, and Assistance Displaying managerial support goes beyond setting an example. It also involves making sure the necessary resources are provided to make the change (Hord & Roussin, 2013). Changing usually takes time, costs money, and diverts energy from other activities. Creating a realis- tic budget and providing resources up front helps ease the transition. Employees may need moral support or training, the organization may need additional resources, or the community may need to be informed of the changes. It benefits the organization to provide sustained assistance as needed during implementation. For example, the university that changed its online learning platform had to develop a strategy for communicating to faculty, staff, and students; train for the implementation; obtain the ongoing support of faculty and students working within the platform; and hire staff to support the
  • 116. logistics of working with the new technology. Thakur and Srivastava (2018) surveyed 276 middle managers in India about change readi- ness and found that it influences resistance to change. Readiness increases and resistance decreases when levels of trust, perceived organization support, and emotional attachment are high. They also found the human touch to be important, that is, fostering communica- tion, trust, and security for employees experiencing change. Other researchers found that perceived organization support affected individual change readiness among 154 employees of a chain restaurant that introduced new leadership and restructuring and that providing support prior to the introduction of change improved trust and readiness (Gigliotti, Varda- man, Marshall, & Gonzalez, 2019). Level of Organization Members’ Self-Efficacy for Adopting the Change The perception that employees are skilled and competent enough to successfully implement a change bolsters readiness for it (Holt et al., 2007). Helping employees become comfort- able with both the content and the process of the change is important. Change content is the focus of the change—for example, adopting a new electronic medical record (EMR) program. Change process is the way the change is implemented —for example, piloting the EMR in a small department and seeking user feedback before rolling it out organization-wide. Research- ers investigating individual and group openness to change in
  • 117. relation to primary health care employees’ ability to improve their use of information and communication technologies in Sweden found that openness to both the change content and the process positively predicted competence with adoption of the change (Augustsson, Richter, Hasson, & von Thiele Schwarz, 2017). It is often up to the OD consultant and management to show employees that they have the self-efficacy to adopt the change. For example, if an organization were implementing a new technology, it would be helpful to provide opportunities for employees to experiment with it. Doing so would allow them to discover that they have the skills to implement it. Investing in professional development and professional learning is a key way to build self-efficacy (Hord & Roussin, 2013). Implementing and sustaining change requires acquiring new knowledge, © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.1Readiness for Change skills, and abilities. Such learning may boost employees’ confidence that they can adopt the new changes, as well as enhance their understanding and acceptance of the change. Level of Organization Members’ Personal Attachment to the Change Change is more likely to be accepted if management can show that adopting it will positively
  • 118. affect individual employees (Holt et al., 2007). Helping employees connect their personal goals to company goals creates a winning combination. Authors surveyed 1,833 nurses in 23 acute care hospitals across Switzerland and concluded that quality of care and support- ive leadership were positively associated with readiness (Sharma et al., 2018). Connecting employees’ personal attachment to change requires communication and support of employee interests. For example, in a quest to become a learning organization that readily captures and shares information and knowledge, an organization might bolster support of individual learning efforts by funding them, providing in-house learning opportunities, or sponsoring degree attainment and continuing education. Including a System for Checking and Assessing Progress The change implementation should be evaluated throughout the action research process (Hord & Roussin, 2013). As Chapter 6 will discuss, to ensure the intended outcomes are being achieved, it is important to assess progress and results during and after implementation. For example, in the case of the university that implemented a new online learning platform, a small pilot group of faculty users was designated “early adopters.” The group received train- ing and used the new platform the semester before it was officially implemented. This small, contained implementation offered the opportunity to troubleshoot and eliminate bugs prior to the large-scale implementation. Promoting Acceptance of Interventions
  • 119. There are several ways the client and consultant can prepare the organization for change and bolster acceptance of the interventions. Acceptance is encouraged via effective and ongoing communication with employees about the change and by creating opportunities to partici- pate in its planning and implementation. Developing a Change Communication Strategy Management communication about the change is key during both the planning and the imple- mentation phases. Communication not only informs and engages employees about the change but also helps diminish resistance to it. Effective communication comes in many forms; man- agement should take advantage of as many as possible by using media, meetings, and face-to- face interactions. Communicating about the change has several benefits. First, communication can serve an educational function; it helps employees learn about the performance gap and understand how they can help reduce it. Employees also need to learn about the change’s purpose and value to understand how they can contribute to achieving it. Additionally, communication helps alleviate fears about how the change might negatively affect the organization, certain jobs, or individuals. Effective communication will bolster employees’ confidence that they can cope with the change and the new demands it will bring (Mayer, Davis, & Schoorman, 1995; © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution.
  • 120. Section 5.2What Are OD Interventions? McKay et al., 2013; Walinga, 2008). More broadly, a comprehensive communication plan gives employees the opportunity to understand the scope and strategy behind the change and to raise issues of concern (McKay et al., 2013). Promoting Employee Participation in the Change Participative management has been advocated as an effective strategy throughout this book. Engaging employees in planning and implementing change will likely result in higher levels of acceptance and understanding (Holt et al., 2007). Involving employees may also benefit the change itself because employees may have insight and information that can inform change- related decision making and problem solving (Courpasson, Dany, & Clegg, 2012). Promoting active participation might involve educating employees about the change and inviting their critical analysis of its purposes and procedures. It is also wise to engage employ- ees in learning and development activities that will build their competence and ability to cope with the change and whatever new tasks and responsibilities it requires. Engaged par- ticipation gives employees a sense of ownership and responsibility toward the change. When employees participate in the change process, they are less likely to resist the change.
  • 121. 5.2 What Are OD Interventions? During the planning or discovery phase of action research, the OD consultant works with the client to collect, analyze, review, and present data to help diagnose the problem. Once those steps are completed, an informed action to address the problem, also known as an intervention, can be selected. “OD interventions are sets of structured activities in which selected organizational units (target groups or individuals) engage in a task or a sequence of tasks with the goals of organizational improvement and individual development” (French & Bell, 1999, p. 145). Burke and Noumair (2015) defined an intervention as “some specified activity, some event or planned sequence of events that occurs as a result of diagnosis and feedback” (p. 185). Cheung-Judge and Holbeche (2015) offered a composite definition of intervention drawing on multiple definitions. They concluded that interventions have the following elements: entrance into an existing system; use of a structured and planned activ- ity; focus on a person, group, intergroups, or an entire organization; seeking to disturb the status quo and shift the system toward a new state; and aiming to improve and develop the organization (p. 92). OD interventions might consist of one-time events that happen within a short time frame such as announcing a new leader, or long-term events that may have several shorter inter- ventions such as mergers and acquisitions that bring new team members, additional prod- ucts and services, layoffs, cultural clashes, and so forth.
  • 122. Interventions are fluid and can occur throughout the OD process. Defining an Intervention Interventions signify the point at which action is taken on an issue. Schein (2011), writing about the consulting role, asserted, “Everything you say or do is an intervention that determines the © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.2What Are OD Interventions? future of the relationship” (p. 151). Schein’s view underscores that interventions can hap- pen throughout the action research cycle. Schein (2013) later explained, “Once we have made some kind of judgment, we act” (p. 94). As noted in Chapter 3, even the mere presence of a consultant is considered to be an intervention. According to French and Bell (1999), The OD practitioner, a professional versed in the theory and practice of OD, brings four sets of attributes to the organizational setting: a set of values; a set of assumptions about people, organizations, and interpersonal relation- ships; a set of goals for the practitioner and the organization and its members; and a set of structured activities that are the means for achieving the values, assumptions, and goals. These activities are what we mean by
  • 123. the word inter- ventions. (pp. 145–146) Interventions have certain characteristics of creating disruption, interacting with the existing organization system, and requiring careful planning. Each of these features is discussed in the following sections. Interventions Disrupt the Status Quo Coghlan and Brannick (2010) viewed intervening as the point at which the actual change process begins. This is where the organization may be caught in transition from a present defective state to a future desired state (managing its performance gap). In a similar vein, French and Bell (1999) suggested that interventions “purposely disrupt the status quo” (p. 143) toward a more effective state of affairs. This process is similar to identifying and resolv- ing a performance gap; that is, the current state is incongruent with the desired state, and thus actions are required to resolve the gap. Coghlan and Brannick recognized two manage- rial marks of being in this disruptive transition: 1. possessing a plan that describes the goals, activities, projects, and experiments that will move the organization to the desired state and 2. a plan of commitment to the change by the organization. In the case of the Leadership Academy vignette, Leah and James defined a plan for the acad- emy to develop leaders’ capacity on the individual, group, and organization levels. They also
  • 124. secured the organization’s commitment to the intervention. Interventions Happen in Complex Systems Interventions do not happen in a vacuum. “To intervene is to enter into an ongoing system of relationship, to come between or among persons, groups or objects for the purpose of helping them” (Argyris, 2000, p. 117). Anderson (2016) deconstructed this definition to emphasize three points: 1. The system is “ongoing,” which suggests that any intervention represents an inter- ruption or disruption to the flow of organization life. The intervention happens in the midst of organization complexities such as politics, priorities, interpersonal relationships, constraints, history, and other factors. 2. Interventions “come between or among persons, groups or objects,” which alludes to how they disrupt usual ways of doing business. As a result, members may resist them. © 2020 Zovio, Inc. All rights reserved. Not for resal e or redistribution. Section 5.2What Are OD Interventions? 3. “For the purpose of helping” indicates that the goal of intervening is to positively influence organization functioning, effectiveness, and performance.
  • 125. If we consider the Leadership Academy intervention in Argyris’s (2000) terms, it disrupted the flow of organization life by identifying people with leadership potential. James and Leah also had to navigate politics that evolved within the state public health agency related to the program and its control in the initial planning stages. The academy also came between and among persons, groups, or objects by changing the way participants behaved, problem solved, and coached their employees. In some cases, participants completely overhauled their leader- ship style and began developing their own teams in new ways. Some participants experienced receptivity to their new skills, whereas others encountered resistance. On the third point, the individuals, their teams, and the statewide public health agency were positively affected due to the multiple changes that occurred as a result of this program. Interventions Require Careful Planning and Sequencing Interventions entail planning the action or intervention, as well as executing these. An inter- vention strategy is a series of events that take place over several weeks or months to address the problem. French and Bell (1999) referred to an intervention strategy as a sequence of tasks. In contrast, a single event, meeting, or workshop is known as an intervention activity (Anderson, 2016) or task (French & Bell, 1999). The Leadership Academy vignette was an example of an intervention strategy, given its yearlong implementation and multiple associ- ated activities.
  • 126. Interventions share common attributes in that they • signify a shift away from the status quo toward a new future, • initiate action on the problem, • rely on evidence, • are grounded in relationship, • occur in complex social systems, and • require planning and sequencing. The Leadership Academy vignette met these attributes. It shifted the culture toward leader- ship and gave participants the tools and support to try their new knowledge within their work environments. The program was based on data collected from participants and the organization, as well as the best evidence available on effective leadership. Each participant’s organization had to provide a letter of support to show commitment to the academy partici- pant, and there were multiple relationships developed during the program that were critical to its success. Classifying Interventions Interventions are generally decided during the discovery or planning that occurs in the first phase of the action research model. They are implemented in Phase 2, doing or action, and assessed in Phase 3, checking or evaluating. OD interventions are the point of OD. They repre- sent the actions taken on the problem or issue. Intervention is the peak of the OD process—it is what OD intends to do from the start. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution.
  • 127. Section 5.2What Are OD Interventions? That said, interventions may occur at any point in the action research cycle and are best understood as fluid. For example, a consultant’s presence represents several opportunities to acknowledge a problem, make an impending change, or alter conditions. A consultant might make an intervention during initial or feedback meetings by offering an observation, such as “Everyone here is talking over one another,” or asking, “What is preventing you from deal- ing with this problem?” or stating, “You are talking over everyone during meetings and not listening to what people are trying to tell you. Are you aware of that?” Of course, in addition to making what might be considered microinterventions, the consultant’s job throughout the process is to guide the client toward making a macrointervention that addresses the root cause of the problem. Classification as Diagnostic or Confrontive Interventions vary in terms of their level, scope, duration, and strategy. The Leadership Acad- emy vignette focused on individual leaders, but it affected the leaders’ direct reports, groups, organizations, and the wider state public health agency. Schein (1988) classified interventions as either diagnostic or confrontive in terms of their timing and level of difficulty. Interventions that occur at any time during the contracting process, initial meetings, and data collection are
  • 128. diagnostic interventions because they occur as the organization grapples with the correct diagnosis and intervention strategy. As has been pointed out, the consultant is an intervention. “Every decision to observe something, to ask a question, or to meet with someone constitutes an intervention into the ongoing organizational process” (Schein, 1988, p. 141). Interventions made based on data collection and analysis are confrontive interventions (Schein, 1988). There are four types of confrontive interventions that range in their level of difficulty: 1. Agenda-managed interventions. As the name indicates, agenda-managed interven- tions are concerned with helping the group or organization prioritize what to focus on; these interventions also examine its internal processes. Groups or organiza- tions may get stuck determining what is important; they may do nothing or vacil- late instead of making a decision. Working with a stalled group or organization can be quite frustrating. Helping a group focus on how it functions can be pivotal in improving actions and outcomes. Schein (1988) argued that something as simple as evaluating meetings significantly affects group members’ awareness of interpersonal dynamics, emotional reactions, communications, and decision- making processes. Schein recommended starting with low-risk issues like the agenda. Higher-risk issues regarding the group’s relational and interpersonal
  • 129. patterns should be tackled once people are emotionally prepared to deal with the vulnerability and feelings that will surface when the group begins to critique how it functions. 2. Confronting through feedback of observations or other data. Chapter 4 discussed the process of sharing feedback with the client during the discovery phase of the action research model. Schein (1988) recommended sharing feedback when the group or organization commits to examining interpersonal workings. Confronting through feedback requires that the client be ready to hear and act on it. Consultants play a key role in helping the client absorb and act on feedback. The ability to observe reactions, listen, ask great questions, facilitate learning, defuse defensiveness, and deliver difficult messages with tact and diplomacy are key to these types of interven- tions. Consultants should also model how to hear and accept feedback for the client. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.2What Are OD Interventions? This involves asking for feedback on one’s own performance and graciously accept- ing and acting on it as appropriate. 3. Coaching. When individuals and groups receive feedback,
  • 130. there is a natural inclina- tion to seek help in modifying behavior to facilitate the change process. Coaching has become a significant management trend in recent years (Feldman & Lankau, 2005). It involves helping people overcome behaviors that limit their ability to work effectively with others (Corbett, Ho, & Colemon, 2009) and understanding what drives behavior (Corbett & Chloupek, 2019). Coaching is a higher-risk confrontive intervention than agenda-managed interventions or confronting through feedback of observations or other data, making it one that Schein (1988) recommended using with caution. Coaching interventions can have life-altering effects and should there- fore be used with care and sensitivity. The most effective coaching programs follow a defined process (Corbett & Colemon, 2005) and support people in shifting thought and behavior in ways that affect business positively (Corbett & Ho, 2012). 4. Structural interventions. The confrontive interventions in this section are arranged in a descending hierarchy. That is, they are presented from ease of implementation (1) to difficulty of execution (4). Structural interventions pertain to allocation of work, modification of communications, group membership, assignment of responsibility, and lines of authority. These types of changes, which are often called “reorganiza- tion,” are greeted with trepidation by most organization members. They are also the
most difficult to implement and sustain and must be undertaken for the right reasons. Before resorting to restructuring, consultants should consider interventions 1 through 3 to see if they sufficiently solve the problem.

Interventions are diverse and can address multiple levels of the organization. Selecting the best possible intervention based on your time frame, budget, culture, and goals is a key task of the action research process.

Classification by Level and Process

Other theorists use a range of schemes for classifying OD interventions. McLean (2006), for example, classified by levels of analysis (individual, group, team, organization, etc.). Cummings and Worley (2018) organized interventions according to the underlying process (human process interventions, technostructural interventions, human resource management interventions, and strategic change interventions). Table 5.4 goes into more detail on different ways to classify OD interventions.

Table 5.4: OD intervention classifications

McLean (2006)
1. Individual
2. Team and interteam
3. Process
4. Global
5. Organizational
6. Community and national

Cummings and Worley (2018)
1. Human process
2. Technostructural
3. Human resource management
4. Strategic

French and Bell (1999)
1. Team interventions
2. Intergroup and third-party peacemaking interventions
3. Comprehensive interventions (e.g., large scale or strategic)
4. Structural interventions

Although classification helps the OD consultant and client understand an intervention’s scope and focus, no one classification is necessarily “right,” because both levels and processes are fluid. Leadership interventions, for example, commonly fall under more than one classification. A leadership development program like the one described in the Leadership Academy vignette crosses the classifications of individual, team, organization, human process, human
resource, and strategic, because potential leaders receive individual development that affects their interactions with groups and influences the overall organization and future strategies. Another example of an intervention that crosses all levels would be the implementation of a performance management system. Individual behavior is usually affected when performance is appraised, and this in turn influences groups and the organization itself. See Tips and Wisdom: French and Bell’s OD Interventions.

This Book’s OD Intervention Classification

Regardless of classification, each phase of action research builds toward making one or more interventions. This book uses three levels of intervention classification: individual, group or team, and organization. Each of the categories listed previously can be accounted for under one or more of these three categories. Table 5.5 lists typical interventions at these levels. We will further describe these interventions in chapters 7 and 8.

Tips and Wisdom: French and Bell’s OD Interventions

As Table 5.4 showed, there are many types of interventions to choose from, particularly based on level. French and Bell (1999) classified OD interventions into 14 types that center more on the activities:

1. diagnostic activities, such as data collection and feedback to determine causes of problems
2. team-building activities, such as determining ground rules or assessing individual interaction styles
3. intergroup activities, such as interoffice collaboration or conflict resolution
4. survey feedback, such as climate assessment
5. education and training, such as a leadership workshop
6. technostructural or structural activities, such as technology implementation or reorganization
7. process consultation, such as group dynamics analysis
8. grid-organization development, such as determining management style based on levels of concern for people and concern for production (based on Blake & Mouton, 1964)
9. third-party peacemaking activities, such as mediation
10. coaching and counseling activities, such as executive coaching
11. life- and career-planning activities, such as career development or life coaching
12. planning and goal-setting activities, such as departmental goal setting
13. strategic management activities, such as strategic planning
14. organizational transformation activities, such as restructuring and new leadership

The Leadership Academy vignette interventions could be classified according to French and Bell’s (1999) list as 1, 2, 5, 10, 11, 12, and likely others.
Table 5.5: Levels of OD interventions

Individual-level interventions
• Learning and development
• Leadership and management development
• Career development
• Assessment
• Job development

Group-level interventions
• Group or team process and development
• Diversity and inclusion
• Conflict management
• Problem solving and decision making

Organization-level interventions
• Vision and mission development
• Strategic planning
• Organization design
• Culture
• Talent management
• Large-scale interactive events

Criteria for Choosing an Intervention

Argyris (1970, 2000) recommended that three primary intervention tasks occur before making any type of intervention:

• First, recommended interventions must be based on valid information. This means thoroughly engaging Phase 1 of the action research model by collecting and analyzing data on the problem before proceeding.
• Second, the client’s discretion and autonomy must be respected; engagement in the intervention must be based on the client’s free, informed choice.
• Third, the client must be committed to learning and change.

All three of these prerequisites had been met in the QuickCo vignette featured in chapters 3 and 4 and were in place in the case of the Leadership Academy vignette, which positioned the organizations for intervention success.

There is usually more than one appropriate intervention for every problem. Interventions vary in terms of their implementation time frame, cost, scale, level, and complexity. For example, in the QuickCo vignette, the intervention involved a relatively short time frame in which a facilitated intervention was made with the shipping department and some coaching was provided to the supervisor. The Leadership Academy vignette presents a much more costly,
  • 137. long-term, complex implementation that will last for a year and continue into the future. Cummings and Worley (2018) defined an effective intervention as one that fits an organiza- tion’s needs, targets outcomes that will address the problem’s root cause, and transfers com- petence to the organization to manage future changes. A key feature of the Leadership Acad- emy was to build internal capacity for the state public health agency to run future programs. French and Bell (1999) advocated a strategic approach to interventions that incorporates goals, activities, and timing. The strategy also needs to anticipate the organization’s readiness to change, potential barriers, and sources of support and leadership. French and Bell’s tips for making effective interventions follow: 1. Include the relevant stakeholders. 2. Base the intervention on the data generated. 3. Involve the stakeholders in the action research process. 4. Keep the intervention focus on the key goal. 5. Set manageable, attainable goals. 6. Design the intervention so that key learning can be attained and shared. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.3Implementing OD Interventions 7. Emphasize collaborative learning throughout the process. 8. Use the opportunity for the client group to enhance learning
  • 138. about the interpersonal workings of the group. Anderson (2016) outlined useful considerations for making good intervention choices. First, the intervention should be congruent with the data and diagnosis from the discovery phase of the action research model. Incongruence will result in solving the wrong problem. Second, the client readiness for change should be assessed. Without a client that is willing and able to change, the intervention will fail. Striking a collaborative consulting relationship is foundational to promoting readiness throughout the process. Anderson’s (2016) third consideration is determining where to intervene. Do you start with top management or line management? Do you work on relationships before issues or vice versa? Would it be wise to pilot the intervention on a small scale before rolling it out to the whole organization? Do you start with the easiest or most difficult aspect of the implementation? Anderson’s (2016) fourth consideration is the depth of the intervention. Less deep interven- tions are observable, whereas very deep interventions are more abstract. The following are Anderson’s depths, listed in order with some potential examples: • work content (tasks, skills, knowledge); • overt group issues (communication, decision making, or
  • 139. conflict); • hidden group issues (coalitions and power); • values and beliefs (quality, cooperation, stability); and • unconscious issues (assumptions about how we do business, culture). Finally, Anderson’s (2016) fifth consideration is to sequence activities to ensure optimal out- comes. Consultants need to make the best use of data; be highly effective, efficient, and quick; and use relevant activities that minimize stress on individuals and the organization. 5.3 Implementing OD Interventions Now that the data has been analyzed and shared with the client and an intervention has been agreed upon, it is time to implement a solution. This is the action phase of OD. “Implementa- tion is . . . the point of the consultation” (Block, 1999, p. 247). Consider This Use French and Bell’s list to evaluate an intervention in which you participated. How did the intervention stack up against this list? © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.3Implementing OD Interventions The moment of implementation or action is also the moment for the client to visibly take ownership of and lead the process. That is, the client will be accountable for maintaining the
  • 140. intervention in the future. If the client is not hands-on with implementation, the entire proj- ect will be at risk. If the consultant has managed the client relationship well and insisted on a joint process, the client should have little issue with taking charge of the intervention. The client will likely need ongoing coaching and support to help see the implementation through and build confidence in the process. Determining the Consulting Role Consultants have a range of options for how to conduct themselves during the implementa- tion. A consultant may elect to stay out of the way, take a hands-on approach, or serve as facilitator. As discussed in Chapter 3, the collaborative role is the most effective and gener- ally preferred in OD. Table 5.6 takes Cockman, Evans, and Reynolds’s (1996) list of roles to collaboratively facilitate intervention implementation and offers some strategies for using these roles. Table 5.6: Consulting roles and strategies during implementation Role Strategies Provide support and encouragement. • Acknowledge the implementation effort in ways that give the client and employees recognition and appreciation. • Offer praise and words of encouragement to those engaged in the implementation.
  • 141. Observe and share feedback. • Prepare clear and direct feedback and share it with the client. • Develop observation checklists so the client can also participate in making observations and checking on progress. Listen and offer counsel when things go wrong. • Serve as a sounding board. • Ask good questions to help the client find the answer instead of giving the answer. • Mediate conflict as necessary. Help the client modify and fine-tune the plan. • Engage in ongoing evaluation of the implementation. • Devise adjustments to the change as needed. Identify process problems that impede implementation. • Conduct ongoing evaluation and take quick action to make needed corrections. • Create a process for identifying and resolving problems. Bring together parts of the client system to address process issues (e.g., conflict, communication). • Create an implementation task force that conducts regular audits of the implementation and has the authority to
  • 142. intervene as needed. • Ensure that communication is ongoing with everyone involved in the implementation. Bring people together from differ- ent disciplines or different parts of the organization to work on implementation. • Create an implementation task force. • Employ task force members to conduct communication, training, and evaluation related to the intervention. (continued on next page) © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.3Implementing OD Interventions Table 5.6: Consulting roles and strategies during implementation (continued) Role Strategies Organize necessary training and education. • Create ongoing training sessions that will help prepare the employees for the change. • Develop in-house trainers to help with the training effort.
  • 143. Work with managers to help them support the change process. • Develop a means of communicating with managers so information can be shared and problems solved easily. • Consider regular meetings, technology, or a mix. Confront inconsistencies between the plan and how it transpires. • Check the implementation progress to plan regularly and make adjustments. • Decide on a protocol for making changes and stick to it. Refer to Chapter 3 for more information about the consulting relationship and how to interact with clients throughout the action research process. Promoting Learning Related to the Intervention The intervention process moves the client from the current state through a transitional phase and into the new, desired state. Another way to think of it is in terms of Lewin’s (1946/1997) unfreezing, moving, refreezing change model introduced in Chapter 2. People are engaged in the unfreezing stage when they become aware of the need to change, build the desire to create change, and undergo a process of unlearning. Imagine you decide to go on a diet. The unlearning is the process of recognizing that your cur- rent eating habits are unhealthy and searching for an alternative. Moving is making the
changes, and this requires new learning. For example, you might consider several diet plans or review the basics of nutrition. Refreezing occurs when the new behavior becomes part of your lifestyle. Another way to think about this is to imagine a company decides to cut costs. The unlearning process involves recognizing areas where spending is unnecessary or too high. Moving or changing might involve reviewing expenditures and cutting extraneous purchases from the budget. Sometimes, travel budgets get cut in these circumstances or the company switches to cheaper raw materials to make its products. A more extreme cost savings effort might be to lay off workers. Refreezing occurs when new spending behaviors and policies have been adopted and the company is able to sustain a lower cost threshold to operate.

“Human beings have always engaged in learning—learning to survive, learning to live in a social group, learning to understand the meaning of our experiences” (Merriam & Bierema, 2014, p. 44). Learning in the workplace is no exception and is a common focus of OD. Most change requires learning. People often are not aware of a problem until there is a crisis or they reflect on the results they are achieving. At that point, people begin to ask questions such as “Why are we doing it this way?” “Is there another way to think about this?” “What mistakes have we made and what have we learned from them?” or “What could be improved?” Asking such questions is what is known as reflective practice.

(Photo caption: Learning is often stimulated by crisis or by reflecting on events and ideas.)

Crises or reflection can jolt people into a learning mode as they build knowledge and understanding. This learning is often the catalyst for change. People may learn the competition is gaining an edge, their quality is declining, their relationships are dysfunctional, or their management is lacking vision. These insights make them want to act, and OD provides a process for addressing these challenges through action research. Learning happens at every phase of the process, from discovery of the problem, to planning an intervention, to maintaining the change, to evaluating its effectiveness. This section considers the role of learning in the action research process by appreciating the relationship between learning and change and exploring ways of facilitating client learning.

The Relationship Between Learning and Change

Most OD involves change, and most change involves learning. Think about a change you made in your life, such as switching jobs, starting a relationship, moving to a new city, or pursuing
  • 146. a goal. Chances are these shifts created new learning. Changes often require new action or new thinking that depends on new learning. For example, when you switch jobs, you have to learn how to navigate relationships and how to interact with your new colleagues. This might involve learning how your new boss likes to receive information and make decisions and then changing your behavior to accommodate the preferences of your boss. Working with a new team might involve learning how to be more assertive than you have been in the past if you are working with strong personalities or the team is looking to you for the expertise you bring to the organization. Similarly, changes made in OD—such as heightened awareness of interpersonal relations, understanding through feedback, or attempting to change your leadership style—also involve learning. Certain conditions promote learning. For example, adults are motivated to learn when education is relevant to their current situation, work challenges, or life needs. Facilitating Client Learning Knowles (1980) and Knowles and Associates (1984) developed key principles related to adult learning that are relevant to implementing change. These principles are considered the art and practice of teaching adults, also known as andragogy. Principles of andragogy as they relate to implementing change include the following: 1. As people mature, their self-concept moves from that of a dependent personality
  • 147. toward one of a self-directed human being. This means that people desire to have say and control in their learning. Building ways for affected employees to have input into the change and control over aspects of it will enhance buy- in and adoption. For example, if you implement a new procedure, engage the people handling the process in devising best practices. 2. People accumulate a growing reservoir of experience, which is a resource for learning. The people affected by a change have spent a great deal of time in the organization and have a repertoire of know-how related to the problem or issue being addressed by OD. Failure to tap their experience and knowledge will breed resentment and © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.3Implementing OD Interventions resistance. Find ways for the involved parties to contribute their insights to the pro- cess to enhance buy-in and minimize resistance. 3. People become ready to learn when the tasks and challenges of life demand new knowledge. My organization, for example, is converting to a much-needed data man- agement system. Although it is a major change, the employees who have been wres-
  • 148. tling with outdated, unresponsive, clunky databases have eagerly attended training and are excited about the implementation of the new technology. When change is communicated well and addresses a true need in the organization, there is a better chance that employees will be enthusiastic about learning to adopt it. 4. People tend to be life or problem centered in their learning, rather than subject cen- tered. It is likely that many people did not have any interest in birthing classes until they were expecting a baby. That is because the learning was timely and relevant to their life. Similarly, someone would likely be more motivated to take a wine-tasting class (life centered) than an organic chemistry class (subject centered). Changes that are relevant to employees become learning opportunities. Part of a consultant’s role is to effectively communicate the relevance of planned change and help those affected see the linkage. 5. People are driven by intrinsic motivation. People are more inclined to seek learning that meets an internal need for knowledge or mastery rather than an external need for recognition or money. 6. People need to know the reason to learn something. People will be resistant to learning new software or changing their behavior if they are not provided with a rationale. A consultant’s job may well be to sell the OD effort and connect it
  • 149. with the necessary learning. When my organization announced the shift to a new database, the rationale was for ease of generating reports, combination of databases, and a user-friendly format. These were reasons that made sense to the users and motivated their accep- tance of and learning related to the change. Facilitating Transformation Through Learning Scharmer and Senge (2016) proposed “Theory U,” a theory of learning and management that helps leaders change unproductive patterns of behavior that hurt relationships and decision making. Recently, Scharmer (2018) published a distillation of this model that uses action research to address learning and leadership challenges in organizations. Scharmer and Kaufer (2013) distinguished two types of learning in the Theory U model: learning from the past and learning from the emerging future. They described this new model of learning and leadership extensively in their model of Theory U, “a framework for learning, leading, inno- vating and profound systemic renewal” (p. 18). The model is called Theory U because of its U shape (Figure 5.1). Consider This Think of the times you have been motivated to learn. How do the principles of learning relate to your own life? What about to changes you have experienced at work? How can you craft the change in a way that gives affected employees an opportunity for input and control over the learning?
Figure 5.1: The Theory U model
The Theory U model moves the client through the process of letting go of the old (left side of the U) and embracing the new (right side of the U). From The Essentials of Theory U: Core Principles and Applications, by C. O. Scharmer, 2018, San Francisco, CA: Berrett-Koehler.
Scharmer and Kaufer (2013) proposed that “energy follows attention” (p. 21) and that we should therefore focus on what we want to create versus what we want to avoid. That means consultants need to keep clients focused on the outcomes they seek, rather than the problems they want to avoid. To understand this model of learning and change, start at the top left of the U. Moving down the left side of the U involves opening minds, hearts, and wills. You can help clients do this by observing them closely for ideas or practices that are holding them back, then feeding this information back to them. The bottom of the U is a place of deep reflection and shifting away from the problem toward the desired future. Your role is to create activities that help clients reflect on their problem. This might include examining the key assumptions of individuals
and the organization or raising new questions that have not yet been asked. Going up the right side of the U involves acting—much like the doing phase of action research. In these steps, you develop a vision of the intended future and devise and implement appropriate interventions. Navigating change in the Theory U model requires what Scharmer and Kaufer (2013) refer to as “transform[ing] the three enemies” (p. 23). These are
• the voice of doubt and judgment (shutting down the open mind),
• the voice of cynicism (shutting down the open heart), and
• the voice of fear (shutting down the open will).
[Figure 5.1 labels the movement through the U: downloading past patterns; seeing with fresh eyes; sensing from the field; presencing (connecting to source); crystallizing vision and intention; prototyping by linking head, heart, and hand; and performing by operating from the whole. The descent involves suspending, redirecting, and letting go; the ascent involves letting come, enacting, and embodying, moving through the open mind, open heart, and open will.]
Scharmer and Kaufer (2013) suggested beginning by focusing on the future and paying particular attention to where the past seems to end. This “place” is similar to what Bridges (1980) called the neutral zone (see Chapter 2). Theory U is an innovative, future-oriented change model worth knowing. Resources for further study of this model are listed at the end of the chapter.
5.4 Monitoring and Sustaining Change
A vast amount of planning and work goes into making an intervention. As previously noted, making change is easier than sustaining change. A consultant’s job is to keep the client on track to successful change implementation. This requires knowing the warning signs of a faltering intervention and how to get the client back on track to a successful, sustained intervention. See Case Study: Reorganization Resistance for an example of an organizational change that may not have been planned effectively.
Case Study: Reorganization Resistance
The CEO of a publishing company instructs Brenda Frank, the president of one of its divisions, to reorganize its management structure. The division was recently purchased and is not aligned with the other divisions. The CEO thinks that the division has too many vice presidents and management layers and that its administrative structure is too expensive. Brenda is unconvinced that restructuring is the best answer, due to the niche of the publishing division, but she also understands her marching orders. She contacts a consultant, George Reed, with whom she has worked before on leadership development issues. “George,” she says, “I’ve got to find a way to reorganize that makes my CEO happy. According to corporate, we have too many layers and too many VPs. I’m going to need your help to figure this out. Can you help?”
George pauses for a moment before answering. He is an expert at leadership but has limited experience with the kind of restructuring Brenda is asking for. “I’m not sure that falls within my expertise, Brenda, but I am willing to hear more about the matter. Let’s meet.” A meeting is set, and George and Brenda discuss the change. George is hesitant and tells Brenda she might be better off with someone else. “Nonsense!” she exclaims. “You are an expert at leadership. How hard can this be? Let’s get to work.” Brenda and George set about planning the change. The next thing Brenda and George do is call a meeting of the vice presidents to notify them of the change. Brenda opens the meeting. “We are going to have to reorganize,” she says. “According to corporate, we have too many layers and too many VPs. I’m asking for your help in this process.” Brenda explains that over the coming weeks, George will be meeting with them to discuss their functional areas and collect data to help inform the change.
Reasons Interventions Fail
  • 155. Perhaps your organization has experienced interventions such as training, survey feedback, or restructuring. Can you think of interventions that failed? This section examines reasons interventions fail and the implications of such failures. Interventions fail for several reasons. First, organizations must be ready for change. In addition, certain flaws inherent in the inter- vention design itself can contribute to failure. Anderson (2016) identified 10 reasons inter- ventions fail. They are listed in Table 5.7, along with tips for fixing intervention failures. Case Study: Reorganization Resistance (continued) After the meeting, the groans and complaints from the VPs are largely uniform. The comments in the hallway range from anger to disbelief to denial: “I’ll tell you what, we are not at all valued. This is a signal we’d better all be dusting off our résumés.” “Well, that’s the dumbest idea I’ve heard out of corporate since we were acquired. They have no idea what it takes to run our business and have given no rationale for the change other than they think it will save money. What about the money it could lose?” “This plan will never work. Let’s just keep our noses to the grindstone and ride it out.” George and Brenda have a good working relationship, so they forge ahead and try to make the best of a difficult situation. George begins studying the organization chart and interviewing
  • 156. the VPs. Together, they come up with a new structure that merges 10 departments into six, displacing four VPs. The rollout of the change involves holding individual meetings with the VPs to unveil the new structure. Brenda works hard to find new roles within the company for the displaced VPs, but she is not entirely successful and winds up laying off two of them. Once the personnel changes have been made at the individual VP level, Brenda crafts an email to all employees with a new organization chart and informs them that the changes are effective immediately. The reorganization announcement throws the organization into a frenzy. It catches the employees by surprise; they see no reason for the changes. Immediate reactions are anger, fear, and suspicion. Employees are nervous about their job security and the integrity of their work units. The remaining VPs are unclear about how to implement the changes or how to manage the new staff units of the merged departments. Productivity and morale plummet. Several employees at multiple levels begin to look for other jobs. Customers begin to complain about a lack of support or clarity about whom to contact to meet their needs. Clearly, Brenda and George have a disaster on their hands. They thought they were doing things right, but obviously they were not. Critical Thinking Questions 1. What did Brenda and George do wrong?
2. What would you do differently?
Table 5.7: Intervention failures and fixes
Failure 1: The intervention attempted to solve the wrong problem.
Fixes:
• Ensure that Phase 1 of the action research process arrives at the correct diagnosis.
• Involve multiple stakeholders to analyze the problem and provide inputs.
Failure 2: The wrong intervention was selected.
Fixes:
• Ensure that Phase 1 of the action research process plans an appropriate intervention.
• Identify a backup intervention if it becomes clear that the selected one fails to meet the need.
Failure 3: Goals were ambiguous, unclear, or too lofty.
Fixes:
• Work with the client to establish a clear purpose and goals for the intervention.
• If there is no clarity of purpose, intended outcomes, and process to achieve them, an
intervention is not ready to be implemented.
Failure 4: The intervention was undertaken as an event rather than as a program of activities with multiple targets for change (strategy missing).
Fixes:
• Develop a long-term implementation strategy using a Gantt chart (see Chapter 3).
• Distinguish interventions from intervention strategy.
Failure 5: Not enough time was devoted to change.
Fixes:
• Estimate how long it will take to make the intervention and then add at least 10% more time.
• Build time into the workday to implement the change. This is part of the resource allocation the organization has to make if it is committed to change.
Failure 6: The intervention was poorly designed to reach the specified goals.
Fixes:
• Ensure that Phase 1 of the action research process arrives at the correct diagnosis and appropriate intervention.
• Engage employees in intervention design—they will be the best debuggers and critics and help get it right the first time.
Failure 7: The consultant was not skilled at implementing the intervention.
Fixes:
• Hire the right consultant.
• Part ways with the consultant if you are not getting what you need.
Failure 8: Responsibility for change was not transferred to the client.
Fixes:
• Establish client accountability for monitoring and sustaining change during the contracting phase (see Chapter 3).
• Provide the necessary learning and development to managers and leaders to assume accountability for the change.
Failure 9: Organizational members resisted or were not committed to the intervention.
Fixes:
• Follow the recommendations for promoting change readiness.
• Watch for evidence of resistance and follow the strategies in this chapter to respond to it.
Failure 10: The organization was not ready for change.
Fixes:
• Prepare management for the change first so it can provide support to employees.
• Prepare employees for the change prior to implementation.
Source: Adapted from Organization Development: The Process of Leading Organizational Change (pp. 206–208), by D. L. Anderson, 2016, San Francisco, CA: Sage.
Failed interventions have serious implications for the consultant and the organization (Anderson, 2016). They can damage a consultant’s reputation, causing the consultant to lose clients and future referrals. Failed interventions can also be detrimental to a consultant’s sense of self-efficacy and trust in his or her intuition. This same level of self-doubt can plague organizations with failed OD efforts and may cause organization members to distrust their own intuition about organization problems or ability to implement lasting change. In fact, failure can become a repetitive cycle for consultants and organizations if confidence in the process is not quickly restored. Argyris (1970) noted that other implications for failed interventions on the organization level include increased defensiveness against any change; diminished ability to cope through conflict resolution and productive communications; waning energy to work on solving the problem; increased
  • 161. frustration, stress, cynicism, and controlling behaviors; and unrealistic goals (aiming too high or too low to avoid future risk or failure). Overcoming Resistance A lesser-known definition of change readi- ness is “the cognitive precursor to the behaviours of either resistance to, or support for a change effort” (Armenakis, Harris, & Mossholder, 1993, pp. 681–682). When employees express stress, negativity, or cynicism toward the change, they are showing resistance. Resis- tance has also been defined as “an adherence to any attitudes or behaviours that thwart orga- nizational change goals” (Chawla & Kelloway, 2004, p. 485). Resistance behaviors might be readily visible, such as sabotage or vocal opposition. Or they may be subtler, such as reducing output or withholding information (Giangreco & Peccei, 2005). Resistance may also take the form of ridiculing the change, boycotting change conversations, or sabotage (Lines, 2005). The beginning of this chapter discussed readiness to change. Readiness is related to resis- tance because, when people or organizations are not prepared to change, they will likely find ways to stall or distract the change effort. See Assessment: Test Your Change Resistance to find out how much you embrace change. Ridofranz/iStock/Getty Images Plus When interventions fail, the team must decide which steps to take next.
Causes of Resistance
Resistance to change might be caused by management’s dismissal of employee input or failure to handle negative attitudes toward the change, or it might arise because the level of employee input in planning, implementation, and change maintenance is too low (McKay et al., 2013). Most people do not like change, so resistance is the general disposition most will initially experience. Resistance can also occur on ethical and strategic grounds if employees do not regard the change as favorable to the organization and its stakeholders (Oreg, 2006; Piderit, 2000).
Acknowledging Resistance
Management may be tempted to disregard resistance; however, it is a mistake to ignore it. When employees resist change and relay concerns about it, they are behaving normally. Impending change causes fear and a sense of personal loss and grief among employees who find value and a sense of security in their daily routine and work group (Burke, Lake, & Paine, 2008). Sometimes, employees just need an opportunity to raise issues and have management hear their fears. Dismissing employees’ concerns or disregarding how the change will affect employees’ sense of security and trust in the organization risks
intensifying negative attitudes, increasing resistance behaviors, and compromising effective change implementation. Instead, it is to the organization’s advantage to create opportunities for dialogue about the change and to seek solutions that resolve the concerns. How leaders talk about change also matters (see Tips and Wisdom: Discussing Change). Cheung-Judge and Holbeche (2015) suggested talking about change in a less directive and more inclusive way. Table 5.8 shows the differences.
Table 5.8: Using inclusive language about organization change
Instead of saying: “We [management] are managing change.” Say instead: “We invite you [employees] to participate and influence the change initiative.”
Instead of saying: “We are dealing with resistance to change.” Say instead: “Here’s what to expect as we make these changes.”
Assessment: Test Your Change Resistance
Would you consider yourself generally open to change, or are you more inclined to eschew it? Most people claim to welcome change, yet they tend to waver when it happens to them. You may find it hard to believe that inventions such as lightbulbs, coffee, air travel, umbrellas, taxis, personal computers, and vaccines were widely mocked upon their introduction (Nguyen, 2016), yet these innovations are things we now
depend on for our lifestyle, career, convenience, and health. Although you probably don’t like or even care about changes foisted upon you, you may need them and not even realize it. Find out how much (or little) you embrace change by taking the following survey: http://pluto.huji.ac.il/~oreg/questionnaire.php.
Table 5.8: Using inclusive language about organization change (continued)
Instead of saying: “We are allowing them [employees] to . . .” Say instead: “We invite you to participate.”
Instead of saying: “We are giving them the opportunity to . . .” Say instead: “We invite you to identify ways of navigating the change.”
Instead of saying: “We are gaining buy-in.” Say instead: “What processes will help you adjust to the change?” “What support do you need?”
Adapted from Organization Development: A Practitioner’s Guide for OD and HR (pp. 161–162), by M. Cheung-Judge and L. Holbeche, 2015, London, England: Kogan Page.
Ethical Issues Pertaining to Interventions
Integrity and authenticity help the OD process run smoothly and avoid failed interventions. OD ethics have also been discussed in chapters 1 and 3. There are some important principles to keep in mind to ensure that the intervention process is ethical.
Avoid Misrepresentation
Although it is tempting to avoid telling clients what they do not want to hear, it is a mistake to misrepresent the intervention’s timeline, cost, or difficulty. This mistake can occur due to inexperience, overpromising, or trying to please a client. A better strategy is to underpromise and overdeliver. That way, there are no surprises in the long run. It is also important that you know the limits of your skill set as a consultant. If you promise to deliver a skill or knowledge you do not have, it can create distrust and anger with the client, put the intervention in danger of failure, and imperil your reputation as a consultant.
Avoid Collusion
Colluding with the client is another ethical challenge. For example, you might scheme to adopt an intervention because it is appealing or interesting or because it will bring you more business as a consultant. If you lack evidence to support the need and appropriateness of an intervention, it is unethical to recommend it. It is also bad ethics to conspire with the client in ways that result in distortion of the process and exclusion of others. For example, if you know that a certain manager is going to disagree with your desired course of
action and you exclude him or her from a meeting where it is discussed, this is considered colluding to exclude.
Tips and Wisdom: Discussing Change
A simple exercise to help employees discuss change is to give them an opportunity to talk about their hopes for the change as well as their fears about the change. It is useful to record these (often on a flip chart or whiteboard) and for management to respond to the fears, which helps defuse them. This activity can be done in a meeting format or via an electronic forum or survey.
Avoid Coercion or Manipulation
Finally, it is unethical to coerce or manipulate the client or members of the organization. This might involve blocking opportunities for organization members to participate in the decisions about the process, which in effect foists the intervention on them. This and other ethical challenges noted can be avoided by following a good action research process that generates data on which to base decisions while actively involving organization stakeholders in the process.
Sustaining Change
Whether the change has been on an individual, group, or
  • 167. organization level, implementing it is the easy part. Sustaining it is where trouble occurs. Successful change implementation may cause overconfidence, which fosters an unpreparedness for the difficult work of maintain- ing it (Anderson, 2016; Senge et al., 1999). Anderson cautioned that relapsing to old ways of being is an implementation hazard, especially when an external consultant exits the picture. Change also requires energy that organization members may lack, because other distractions may pull them away from consciously maintaining the change. The education necessary for full change adoption may not keep pace with the change implementation, making it difficult to sustain. Sometimes, the old organization culture and practices are just too powerful for the change to sway, leaving the organization vulnerable to reverting to old ways of doing business. Actions to Sustain Change How can organizations avoid these pitfalls to lasting change? Change should be translated into the organization’s daily operations so it simply becomes the way business is done. Strate- gies that help sustain change include the following: 1. Communicate regularly about the change implementation. This could be via regu- lar meetings, written or electronic communication, social media, and informal conversations. 2. Formulate an implementation task force that includes top leaders and affected employees. This group can hold regular meetings and help
  • 168. communicate the change. 3. Hold meetings that include a cross-functional, intragroup mixture of people involved with the change to monitor progress, troubleshoot, and evaluate the process. 4. Find ways to reward and recognize employees involved in the implementation. This might include visible items such as T-shirts or trinkets, awards, monetary rewards, or time off. 5. Build change implementation into the performance review criteria of those employ- ees accountable for supporting and sustaining change. 6. Invite external stakeholders and consultants to evaluate the change progress. 7. Ensure that the reward system is aligned with the desired changes. 8. Provide the learning and development needed to sustain the change. 9. Ensure that needed resources are available to sustain the change. See Tips and Wisdom: The Challenge of Change. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 5.4Monitoring and Sustaining Change Strategies to Defuse Challenges to Change
During implementation, a consultant will monitor the client for signs of low commitment, such as anger, hostility, objections, inflexibility with implementation options, unwillingness to look at process issues, hidden agendas, delaying tactics, or failure to implement. To successfully make an intervention, there must be commitment and leadership from the top, individual competence, and adequate organization. When consultants observe signs of waning commitment, they will want to take action quickly. Faltering commitment will negatively affect learning and lasting change. Senge and colleagues (1999) identified 10 challenges created by resistance to change that relate to initiating change, sustaining change momentum, and meeting the challenges of redesigning and rethinking processes and procedures during and after the change:
Time challenges. Employees can feel frustration or worry that they do not have enough time to learn or implement changes. To counter this challenge requires giving employees flexibility and time to process and implement the change.
Support and help with change implementation. Employees will become frustrated and disenchanted if coaching, guidance, and support are absent during the change. It is important to provide the resources both for supporting the change and for management to be skilled in this area.
Perceptions that the change is not relevant. When employees do not see the rationale for change or its relation to the big picture or business reasons, they may ignore it because they think it does not matter. Establishing the need for and relevance of change is necessary from the beginning of the action research process. The need for change must be tied to business goals, new learning, changed procedures, and processes so employees are not left wondering why they have to make changes.
Tips and Wisdom: The Challenge of Change
If there is no struggle, there is no progress. —Frederick Douglass
Change is hard work. Well-implemented changes make tremendous differences for individuals, teams, and organizations. The OD and action research processes greatly enhance the probability of success in change endeavors.
[Image: Time must be made for change, and sustaining it takes additional time.]
Management fails to set an example. When management fails to “walk the talk,” people notice.
  • 171. Management must be held accountable for visibly and personally supporting and implement- ing the change. Mounting fear and anxiety. When changes are implemented, employees can get nervous. They may feel vulnerable, unable to adopt the change, and distrustful of the change and manage- ment. Open and candid communication from the beginning is a must, along with management setting a good example. Perceptions that the change is not working as intended. Employees might be negative about the change and look for evidence that it is not working. This perception can serve as an excuse to return to the way things were. It is important to show how the change is resulting in progress and intended outcomes. If metrics are available, it is helpful to show that “since implementing the change, our defects have decreased by 10%,” for example. Perceptions that the old way of doing things was better. These perceptions can allow employee groups to feel like victims who are disrespected or misunderstood by management. These perceptions are countered by ongoing, effective communication about the change and its need. Confusion about who is responsible for the change and new procedures. Change can natu- rally breed confusion over new procedures and policies. Management can help by modeling patience, flexibility, and problem solving to create new infrastructure when making changes.
Frustration that the organization is doing nothing but “reinventing the wheel.” Employees can get frustrated when they feel like no real change is occurring or that the change has not improved the problem. Making the case for change early in the process can help minimize frustration and help people focus on the change’s future benefits.
Confusion about the purpose and benefit of the change in the bigger organization picture. Employees may not immediately link the change to organization strategy and purpose. Management can help by showing how the change will benefit the business and its stakeholders.
As has been stressed throughout this chapter, key themes in avoiding resistance include timely and ample communication about the change, providing a clear rationale for why the change is needed, and management support and role modeling throughout the change process.
Summary and Resources
Chapter Summary
• Propensity to accept change depends on the level of change readiness for both individuals and the organization. Change readiness signals that employees perceive the change as necessary and attainable and are willing to support its implementation.
• Change readiness is influenced by clear management communication and employ-
  • 173. ees’ confidence in management’s attitudes, knowledge, and skills to effectively implement the change. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Summary and Resources • Interventions are more likely to be accepted when the change has been clearly com- municated and employees have an opportunity to participate in its planning and implementation. • OD interventions are change activities that help resolve the presenting problem. • Interventions can be classified in multiple ways, including diagnostic, confrontive, level, or process. This book classifies them according to individual, group or team, or organization level. • The criteria for making an effective intervention include basing the intervention on valid data, verifying the client’s free and informed choice to proceed, and establish- ing the client’s commitment to learning and change. • During the implementation, consultants should be clear about the type of consulting role they want to play. Roles vary from less-involved observation of the implementa-
  • 174. tion to active engagement in providing feedback, modifying the plan, and provid- ing needed training and support. The role the consultant plays will depend on how skilled the client is at leading and facilitating change. • It is important to promote learning related to the intervention by encouraging reflec- tive practice, helping employees see the connection between learning and change, and facilitating client learning by building principles of andragogy (effective adult learning) into the intervention. • Theory U embraces the idea that energy follows attention. In consulting, this means consultants need to keep clients focused on the outcomes they seek rather than the problems they want to avoid. • Interventions fail for multiple reasons, including lack of change readiness, resis- tance, poor levels of management communication and support, and a flawed OD process that results in the wrong problem being solved, ambiguous goals, inade- quate time being allotted, poor design, ineffective consulting, failure to ensure client accountability, and lack of organization commitment. • Ethical issues abound in OD. During the intervention phase, such issues include mis- representation, collusion, and coercion. • Resistance to change can be overcome by open and regular communication from
  • 175. management that engages employees in the change’s planning and implementation and acknowledges the fears and concerns that underlie resistance. • Change can be sustained by regular communication, broad engagement of employ- ees in monitoring the change, rewarding and recognizing employees who are com- mitted to the change effort, and providing the necessary learning and support. Think About It! Reflective Exercises to Enhance Your Learning 1. The chapter began with a vignette about a Leadership Academy for a state public health agency. Can you recall an intervention you participated in? What was it? How was it executed? 2. Recount a time you or someone you know participated in an OD intervention led by a consultant. What were the outcomes and consequences? How well did the consul- tant do, based on the principles presented in this chapter? 3. Think back to a change you experienced in either your professional or your personal life. How applicable are the principles of andragogy to your experience? 4. When was the last time you reflected on your assumptions, thoughts, and actions related to an idea, practice, or process? Make an appointment with yourself to engage in some deep thinking, and journal about what emerges.
  • 176. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Summary and Resources 5. Assess the changes you have made in your life or organization and evaluate how well you maintained the change. Do you agree with the argument that change is easier to make than maintain? Why or why not? Apply Your Learning: Activities and Experiences to Bring OD to Life 1. Use the transtheoretical model of health behavior change to assess a change you have made. Did you follow the steps? Why or why not? 2. In Chapter 4, one of the activities was to identify a problem in your organization and plan a data collection process to examine the issue. Assuming you did that, how would you go about planning an intervention to address it? What level(s) of inter- vention would be most appropriate (individual, group, and organizational)? 3. Refer back to Tips and Wisdom: French and Bell’s OD Interventions in section 5.2 and reclassify French and Bell’s 14 types of interventions into the model we are using in this book (individual, team, and organization).
  • 177. 4. Using Table 5.5, take a real example of implementation and identify specific roles and strategies you would use to support the intervention implementation. 5. Go back to the case study in section 5.4 and use the key points in this chapter about change readiness and resistance to change to identify at least five mistakes made by the division president and consultant during the change process. 6. Have you experienced a failed OD intervention? If so, use the information presented in this chapter on effective interventions and reasons interventions fail to diagnose what went wrong. 7. Apply the steps from the Theory U model to a behavior change, organization change, or new learning you have made or hope to make. 8. Using Hord and Roussin’s (2013) readiness for change checklist presented in Table 5.2, assess your readiness to make a change that is impending. Additional Resources Media • Theory U: An Interview With Dr. Otto https://www.youtube.com/watch?v=k8HKxvKVUsU • Change Is Good... You Go First https://www.youtube.com/watch?v=jwxrsngEJDw Further Reading Merriam, S. B., & Bierema, L. L. (2014). Adult learning:
  • 178. Linking theory and practice. San Francisco, CA: Jossey-Bass. Scharmer, C. O. (2009). Theory U: Leading from the future as it emerges. San Francisco, CA: Berrett-Koehler. Key Terms andragogy The art of teaching adults; a series of principles for effectively facilitat- ing adult learning. confrontive interventions Activities that occur as a result of the data collected and analyzed during the action research process. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Summary and Resources diagnostic interventions Activities that address issues as they arise during the OD–action research process of contracting, initial meetings, or data collection. intervention activity A single event, task, meeting, or workshop implemented to address a problem or issue in the organization. intervention strategy A sequence of tasks or series of interventi on activities that occur over several weeks or months to address a problem or issue in the organization.
  • 179. readiness for change A perception that making a change is necessary and achiev- able and that willingness to support the change effort exists. reflective practice The process of ques- tioning the assumptions that underlie thoughts and actions. resistance An expression of stress, nega- tivity, or cynicism toward a change that can thwart achieving the change goal, along with the general absence of change readiness. shared vision A mutual picture of a desired future state that an organization’s members seek to achieve together. Theory U Embraces the idea that energy follows attention. In consulting, this means consultants need to keep clients focused on the outcomes they seek rather than the problems they want to avoid. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution.
  • 180. Action Research: The Planning Phase 4 Rawpixel/iStock/Getty Images Plus Learning Outcomes After reading this chapter, you should be able to: • Describe action research and compare Lewin’s model with those of at least two other OD theorists. • State the importance of considering multiple levels of analysis in the planning phase. • Identify the steps of the planning phase. • Describe different types of research. • Describe different types of research methodologies. • Discuss five methods of gathering organization data, including strengths and weaknesses of each. • Discuss methods of analyzing the data collected. • Explain how to prepare for and manage the feedback meeting, including how to address confidentiality concerns and manage defensiveness and resistance. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution.
  • 181. In Chapter 3, the QuickCo vignette provided one example of how OD consultants work. Jack, the internal OD consultant at QuickCo, led his clients, Ned (the shipping supervisor) and Sarah (the manufacturing manager), through an action research process to solve communication and teamwork problems in the shipping department. Action research, the process OD consultants follow to plan and implement change, follows three general phases: 1. Planning. Data is collected, analyzed, and shared with the client to determine corrective action. 2. Doing. Action is taken to correct the problem. 3. Checking. The effectiveness of the intervention is evaluated, and the cycle is repeated as needed. Let us return to the QuickCo vignette and examine the action research steps taken. Ned and Sarah met with Jack to outline how employees were at each other’s throats, letting conflicts fester, and failing to work well together. Their first meeting incorporated their planning phase. As explained in Chapter 3, this initial meeting is known as contracting. During the meeting, Jack asked questions to begin identifying the root cause of the conflicted department. The three struck a collaborative agreement and worked to devise a plan for resolving the issues. The first action they took was to collect data. Jack reviewed the performance trends and cus-
  • 182. tomer complaints from the shipping department and interviewed the employees individually about their views on the problems. The planning also involved analyzing the data Jack collected to arrive at a diagnosis. When he met with Ned and Sarah to share feedback from the data collection, Jack presented his analysis, noting, “Ned and Sarah, you have a dysfunctional team on your hands. They have no ground rules, collaboration, or means of handling conflict. Everyone needs to be more understanding and respectful toward each other. It would also be helpful to create some guidelines for how the team wants to operate and manage conflict. Ned, you also need to take a more active role in resolving issues.” Jack laid the problems out in a matter-of-fact, nonjudgmental way. Once all the analyzed data was presented, the three worked jointly to plan an intervention to address the problems. They agreed to take the group through a facilitated process to address communication and team effectiveness. They also agreed that Ned would benefit from individualized executive coaching to help him learn behaviors that would be more productive for dealing with conflict. The second phase of action research, doing, occurred when Jack, Ned, and Sarah scheduled the intervention with the shipping department and implemented it. The outcome of the intervention was a tangible plan for the department for how to be more effective, including specific actions they would take to address conflict.
  • 183. The final phase, checking, involved Ned, Sarah, and Jack continuing to monitor the shipping department after the intervention. Ned helped the department uphold its new ground rules on a daily basis and coached employees to help them stick to the plan. He also asked for regular feed- back on his own management skills as part of his ongoing coaching. Ned, Sarah, and Jack reviewed departmental data on productivity and customer complaints and learned that the © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 4.1A Review of Action Research timeliness and accuracy of shipped orders had significantly improved. Jack followed up a few months later by conducting individual interviews with shipping department mem- bers. He discovered that the solutions had been maintained. If and when new conflicts arise, or new members join the team, it may be time to start the action research process over again to address new issues. The QuickCo vignette demonstrates all three phases of the action research process. This chapter focuses on the first phase, plan- ning. Chapters 5 and 6 provide a similarly detailed look at the second and final phases, doing and checking, respectively. But before turning to the planning phase, let us review
  • 184. action research. 4.1 A Review of Action Research Chapter 1 defined OD as a process of planned change that is grounded in a humanistic, demo- cratic ethic. This specific process of planned change is known as action research. Defining Action Research Action research is a recurring, collaborative effort between organization members and OD consultants to use data to resolve problems. As such, it involves data collection, analysis, intervention, and evaluation. Essentially, it is a repeating cycle of action and research, action and research. However, the words action research reverse the actual sequence (Brown, 1972), in that “research is conducted first and then action is taken as a direct result of what the research data are interpreted to indicate” (Burke, 1992, p. 54). Moreover, the cycle yields new knowledge about the organization and its issues that becomes useful for addressing future problems. It thereby allows organizations to improve processes and practices while simulta- neously learning about those practices and processes, the organization, and the change pro- cess itself. Action research provides evidence, which enables a consultant to avoid guesswork about what the issue is and how to resolve it. According to French and Bell (1999), Action research is the process of systematically collecting research data about
  • 185. an ongoing system relative to some objective, goal, or need of that system; feeding these data back into the system; taking actions by altering selected variables within the system based both on the data and on hypotheses; and evaluating the results of actions by collecting more data. (p. 130) Catherine Yeulet/iStock/Thinkstock Following the action research process helped the QuickCo shipping department resolve employees’ interpersonal conflicts. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 4.1A Review of Action Research Action Research Is a Democratic Approach to Problem Solving Many theorists have characterized action research as democratic and collaborative: • “Action research is a participatory, democratic process concerned with developing practical knowing in the pursuit of worthwhile human purposes, grounded in a par- ticipatory worldview” (Reason & Bradbury, 2008, p. 1). • “Action research is the application of the scientific method of fact-finding and experi- mentation to practical problems requiring action solutions and involving the col- laboration and cooperation of scientists, practitioners, and
  • 186. laypersons” (French & Bell, 1999, p. 131). • “Action research approaches are radical to the extent that they advocate replacing existing forms of social organization” (Coghlan & Brannick, 2010, p. 6). In addition, Coghlan and Brannick (2010) identified broad characteristics of action research: • Research in action, rather than research about action • A collaborative, democratic partnership • Research concurrent with action • A sequence of events and an approach to problem solving (p. 4) These definitions are similar in that they all characterize action research as a democratic, data-driven, problem-solving, learning-based approach to organization improvement. Some other examples of how organizations apply action research include a nonprofit organization that surveys donors or beneficiaries before engaging in strategic planning, a government department that conducts a needs analysis prior to a training program, or a corporation that conducts exit interviews before initiating recruitment for positions. Action Research Helps Clients Build Capacity for Future Problem Solving Although typically guided by a consultant, action research engages key stakeholders in the process. Indeed, its effectiveness depends on the active engagement and accountability of
  • 187. the stakeholders. As discussed in Chapter 3, OD consultants are responsible for influencing the action research process while at the same time exercising restraint to avoid solving the problem for the client. An example can illuminate how action research helps the client build problem-solving capac- ity. Suppose an organization introduces a process of assimilating new leaders when they join Consider This Can you recall a project in your organization that involved members in a collaborative prob- lem-solving mission? Chances are it was action research, even if that terminology was not used. Can you think of any other examples? © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 4.1A Review of Action Research it (action). The organization hires a consultant to survey team members about this initiative’s effectiveness (research). The client and the consultant collaborate to develop the survey and analyze the results. What is learned informs continued assimilation of new leaders and the way the process gets modified (action). The client is initially engaged to learn the process so that it can be repeated in the future without the help of a consultant. The action research pro- cess helps the organization collect, analyze, and apply data to
  • 188. make informed decisions and not waste time and money on inappropriate interventions. Helping organizations become proficient at the action research process is the outcome of effective consulting, because the best consultants work themselves out of a job. Models of Action Research Recall from Chapter 1 that action research originated with the work of Kurt Lewin, the father of OD. Lewin’s model (1946/1997) includes a prestep (in which the context and purpose of the OD effort are identified), followed by planning, action, and fact finding (evaluation). Sev- eral models of action research generally follow Lewin’s, although the number and names of steps may vary. See Table 4.1 for a comparison. Table 4.1: Comparison of action research models to Lewin’s original model Lewin’s (1946/1997) original action research steps Cummings and Worley (2018) Coghlan (2019) Stringer (2013) 1. Prestep to determine context and purpose 1. Entering and contracting
  • 189. 0. Prestep: Understanding context and purpose of the issue 1. Constructing: Determining what the issues are 1. Look a. Gather relevant information b. Build a picture; describe the situation 2. Planning 2. Diagnosing 2. Planning action 2. Think a. Explore and analyze b. Interpret and explain 3. Action 3. Planning and implementing change 3. Taking action 3. Act a. Plan b. Implement c. Evaluate
  • 190. 4. Fact finding (evaluation) 4. Evaluating and institutionalizing change 4. Evaluating action © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 4.1A Review of Action Research Figure 4.1: Plan, do, check action research cycle The plan, do, check model of action research was popularized by the total quality movement. The contemporary research cycle has more steps, although it essentially accomplishes the same steps of diagnosing and designing (plan), implementing (do), and evaluating (check). The model of action research used in this book has three phases, paralleling Lewin’s (1946/1997) model (Figure 4.1): planning, doing, and checking. (See Who Invented That? Plan, Do, Check Cycle to read about the person who originally developed plan, do, check.) Each phase has substeps derived from multiple action research models: 1. Planning (the discovery phase)
  • 191. a. Diagnosing the issue b. Gathering data on the issue c. Analyzing the data gathered d. Sharing feedback (data analysis) with the client e. Planning of action to address the issue 2. Doing (the action phase) a. Learning related to the issue b. Changing related to the issue 3. Checking (the evaluative phase) a. Assessing changes Plan Check Do Action research cycle © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 4.2Planning: The Discovery Phase b. Adjusting processes c. Ending or recycling (back to the planning stage) the action research process The action research steps may look simple, and it may appear
  • 192. that planning change is a neat, orderly, and rational process. In reality, though, it can be chaotic, political, and shifting, with unexpected developments and outcomes. Nevertheless, learning the action research process equips consultants with a proven method for navigating such shifts as they work with clients on organization challenges. 4.2 Planning: The Discovery Phase When beginning an OD intervention, the initial steps taken to identify the problem and gather data about it are known as planning. The planning phase is a diagnostic one. The client and consultant work with other organization stakeholders to study the problem and determine the difference between desired outcomes and actual outcomes. The discrepancy between what is and what should be is known as a performance gap. For example, if an organization aspires to be first in quality in the industry but lags behind in second or third place, that would be a performance gap. The organization would have to engage in performance improvement practices to close the gap with its competitors. Or, perhaps a leader receives feedback that she is not as skilled at leadership as she had thought. The leader begins to work with a mentor or coach to identify what behaviors she needs to be more effective. By improving listening, rec- ognition, and delegation behaviors, the leader begins to narrow the gap between her current and desired future leadership performance. Organizations perform gap analysis to assess reasons for a gap between reality and the
  • 193. desired outcome. The performance gap idea can also be applied to yourself. Let us say you aspire to a managerial position but have not achieved it. Upon analyzing the gap, you realize you lack the training and experience to attain the position. If you decide to eliminate the gap, you might enroll in a graduate program, earn a leadership certificate, or find a mentor to help you attain your goal. Consider a performance gap you have experienced and complete the chart in Figure 4.2. Who Invented That? Plan, Do, Check Cycle Although often attributed to quality guru W. Edwards Deming, the plan, do, check cycle was created by Walter A. Shewhart of Bell Labs. Shewhart was an American physicist, engineer, and statistician who was one of the originators of statistical quality control, which preceded the total quality movement. Consider This In your life, what example do you have of action research? How have you employed plan, do, check? What actions or adjustments were necessary? © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 4.2Planning: The Discovery Phase Figure 4.2: Performance gap analysis Use this chart to assess your own performance gap. Identify a
  • 194. desired reality—perhaps running a 5K. Next, honestly note your current performance goal: Can you run around the block? Run or walk for a mile? Once you determine the gap, fill out the middle column with specific action steps to move closer to your goal—how will you close the gap? To download an interactive version of this figure, visit your e-book. Now that you have applied the gap analysis to yourself, let’s think about using it in an orga- nization setting. Identify a desired reality—perhaps being first to market with a new tech- nology. Next, honestly note the organization’s current reality. In the case of introducing the technology: Does it have the right people to do the work? Is the technology ready for market? Is the marketing campaign ready to go? Once you determine the gap, fill out the middle col- umn with specific action steps to move the organization closer to its goal—how will you close the gap? What would be the desired reality in your own organization? How equipped is it to close the gap? What other performance gaps have you experienced? Benefits of the Planning Phase Planning is a critical phase of OD, because poor plans will result in poor outcomes such as fix- ing the wrong problem, wasting time and resources, and frustrating organization members. The benefits of good planning include setting the OD process up for success through careful analysis and diagnosis of the problem; engaging organization members from the beginning in the processes of collaboration, ongoing learning, and capacity
  • 195. building in the action research process; and prioritizing issues. See Tips and Wisdom: Alan Lakein to read and apply tips about planning. Tips and Wisdom: Alan Lakein Time management guru Alan Lakein is credited with coining the phrase “Failing to plan is plan- ning to fail” (as cited in Johnson & Louis, 2013, para. 1). This advice is to be heeded in OD. Plan- ning is key to effective interventions. How does Lakein’s quotation apply to your experience? Current reality Steps to close the gap Desired reality © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution. Section 4.2Planning: The Discovery Phase Levels of Analysis Before we delve into the steps of the planning phase, we should understand the location of the OD effort—that is, the level at which the action research might occur. This is known as the level of analysis. The OD effort might focus on the individual, group, organization, or sys- tem. Each level comes with its own issues, needs, and appropriate interventions. These levels, along with appropriate interventions, were discussed in Chapter 2. All levels of analysis, from the individual to the system, face similar issues. Cockman, Evans,
  • 196. and Reynolds (1996) categorized organization issues according to purpose and task, struc- ture, people, rewards, procedures, or technology: • Purpose and task refers to identifying the reason the organization exists and how its members advance its mission. • Structure pertains to reporting relationships and how formal and informal power relations affect the organization. • People issues relate to relationships, leadership, training, communication, emotions, motivation and morale, and organization culture. • Rewards systems include financial and nonfinancial incentives available for perfor- mance and perceived equity among employees. • Procedures include decision-making processes, formal communication channels, and policies. These are an important category for analysis. • Technology involves assessing whether the organization has the necessary equip- ment, machinery, technology, information, and transport to accomplish its tasks. Table 4.2 identifies questions to ask about each area of Cockman, Evans, and Reynolds’s levels of analysis. Table 4.2: Cockman, Evans, and Reynolds’s organizational issues and diagnostic questions
  • 197. Organizational issues Diagnostic questions Purpose and tasks • What business are we in? • What do people do? Structure • Who reports to whom? • Where is the power? People • How are relationships managed? • What training is provided? • Who communicates with whom? • How do people feel? • How high is motivation and morale? • What is the culture? Rewards • What are the incentives to perform well? Procedures • What are the decision-making procedures? • What are the channels of communication? • What are the control systems? Technology • Does the organization have the necessary equipment, machinery, information technology, transport, and information? Source: From Client-Centered Consulting: Getting Your Expertise Used When You’re Not in Charge, by P. Cockman, B. Evans, & P. Reynolds, 1996, New York, NY: McGraw-Hill. © 2020 Zovio, Inc. All rights reserved. Not for resale or redistribution.
Identify a performance gap you are aware of personally or professionally and see if you can answer Cockman, Evans, and Reynolds's questions.

Steps in the Planning Phase
The steps in the planning phase include identifying the problem area, gathering data, analyzing the data, sharing feedback, and planning action. These steps illuminate the core problem and identify key information for making an intervention.

Step 1: Preliminary Diagnosis of the Issue
When an OD process is initiated, it is imperative that the problem be correctly defined. Doing so involves a process of diagnosis. A consultant's job is to push the client to identify the root cause of the problem, rather than its symptoms. Considering the QuickCo example, it might have been easy for Ned to decide to put the department through a customer service training based on the symptoms of late, erroneous orders. Had he done so, however, it likely would have worsened matters, because no amount of customer service training would fix the department's interpersonal conflicts, poor communication, and ineffective conflict resolution. It may take intensive study and data collection to accurately diagnose a problem, but doing so is well worth it.

The action research process begins by defining a problem that warrants attention. Consultants must ask good questions to illuminate a problem's source. They can then move on to
the next step in the planning phase. Questions a consultant might ask a client include the following:
• "What do you think is causing the problem?"
• "What have you tried to fix it?"
• "How has this attempt to fix the problem worked?"
• "What has been stopping you from fully addressing this issue?"
In addition to asking questions to pinpoint the issue, consultants must ask questions about who else will be involved in the OD effort. Also, as Chapter 3 explored, a consultant needs to uncover the client's expectations regarding the duration of the project and make sure the client is willing to assume an equal responsibility for outcomes. Good questioning enhances one's authenticity as a consultant. How have you diagnosed problems in your organization? Have you ever misdiagnosed an issue? What were the consequences?

Step 2: Gathering Data on the Issue
Once QuickCo diagnosed the team's lack of communication and interpersonal effectiveness as the source of the problem, it was ready to collect information to inform next steps. This is known as data gathering. Data can be gathered in many ways. The most common data collection methods in action research include interviews, questionnaires, focus groups, direct observation, and document analysis.
Jack, the internal QuickCo consultant, took several steps to better understand the problem. He reviewed performance trends and customer complaints, interviewed department members, and relied on his own working knowledge and observations of the department to formulate a solid understanding of the issues. What types of data have you gathered to better understand organization issues? Methods of data gathering are explored in detail in the next section of this chapter.

Step 3: Analyzing the Data
Once data has been collected, it must be turned into something meaningful and useful for the client. Data collected to provide information about a problem is not useful until it is interpreted in ways that inform the issue and provide clues to possible interventions. For example, a survey is not helpful unless it is examined within the organization's context. Data analysis will be more fully defined in the data analysis methods section later in this chapter.

Step 4: Sharing Feedback With the Client
Once data has been collected and analyzed, a feedback meeting is scheduled in which results are presented to the client. In the QuickCo example, Jack met
with Ned and Sarah to share his analysis. Feedback meetings require careful planning to keep the consultancy on track. Consultants should decide on the key purpose and desired outcomes for the meeting. For example, do they want the client to better understand the problem? Agree on a course of action? Confront some issues affecting the problem? Sharing feedback with the client involves determining the focus of the feedback meeting, developing the agenda for feedback, recognizing different types of feedback, presenting feedback effectively, managing the consulting presence during the meeting, addressing confidentiality concerns, and anticipating defensiveness and resistance.

Step 5: Planning Action to Address the Issue
The last step of the planning or discovery phase is to plan the action that will be taken. This planning might occur during the feedback meeting, or you might schedule a time at a later date to give the client an opportunity to digest the data analysis and feedback. The outcome of the planning is to design the activity, action, or event that will be the organization's response to the issue. This is known as an intervention. The type of intervention selected depends on the organization's readiness and capability to change, the cultural context, and the capabilities of the OD consultant and internal change agent (Cummings & Worley, 2018). The intervention will also target strategy, technology and structure, and human resource or human process issues. The consultant and the client will collaboratively plan the appropriate intervention(s)
to address the issue. Chapter 5 will address interventions in detail.

4.3 Types of Research
OD is a joint endeavor between the client and the consultant that includes data gathering and analysis. Involving clients in the data collection process reinforces their commitment to the OD process. The consultant's role in this process is to help the client focus on the root cause of the problem and to organize the data collection and interpretation. A consultant's objectivity can be very helpful to clients, enhancing their understanding of how they might be contributing to the problem or how the issue plays out within the broader organization context.

Einstein is credited with saying, "If we knew what it was we were doing, it would not be called research, would it?" (as cited in Albert Einstein Site, 2012, para. 4). People conduct research when they have questions that do not have obvious answers. Depending on the question they wish to answer, there are differing types of research.
Basic Research
The word research might evoke images of people working in labs, examining petri dish cultures, and making new discoveries. This type of research is known as basic research, and it generally creates or extends the knowledge base of a discipline such as medicine, physics, or chemistry through experiments that allow researchers to test hypotheses and examine perplexing questions. Basic research results in new discoveries and theories and includes innovations such as testing cures for cancer, establishing scientific laws such as gravity, or refuting previously held beliefs such as the world being flat. There are other types of research beyond basic, and they vary based on the type of question being asked.

Applied Research
When people seek to answer questions such as "What is the best way to facilitate learning during change?" or "How do we motivate employees to embrace new technology?" they are usually seeking to improve practice within a certain field. This is known as applied research because its results are germane to problems and issues within a particular setting such as business. This type of research is practical and helps people solve problems, but unlike basic research, it does not necessarily yield new knowledge. OD is applied research because it asks questions about challenges that are unique to the individual organizational context in which they are located but does not necessarily expand our understanding of human behavior in organizations.
Action Research
Action research explores specific problems within a locality such as an organization or community. It might ask questions such as "How can we prevent employees from leaving Company A at a rate three times higher than the industry standard?" "How can Hospital B implement an electronic health record with minimal disruption to patient care?" or "How can we lower poverty rates in Community C?" As the name implies and we have already covered, action research involves recurring cycles of study and action regarding a problem within a specific context. Action research is participative because it usually involves members of the organization.

OD generally engages in both applied research and action research because it aims to improve practice (applied) within a specific context (action). When you engage in action research, you are conducting a systematic inquiry on a particular organization problem by methodically collecting and analyzing data to provide evidence on which to base your intervention. When people do research in organizations, they are seeking not so much to generate new knowledge (or cure diseases) as to improve the quality of organization life.
Action research is therefore a form of applied research because it seeks to directly address organization problems and respond to opportunities in ways that improve the organization for all its stakeholders.

Evaluation Research
People may also want to judge the quality of something like an educational program, conference, or OD intervention. Here they might ask, "How was the learned information applied?" "What was the most effective mode of delivery of instruction?" or "What are people doing differently as a result of the intervention?" This type of research is known as evaluation research. Evaluation seeks to establish the value of programs or interventions and judge their usefulness. Evaluation can occur during the OD process, especially when the process is being evaluated before, during, or after the intervention. We will learn more about evaluation research in OD in Chapter 6. Refer to Table 4.3 for further description of the different types of research.

Table 4.3: Different types of research
Basic research:
• Contributes to knowledge base in field (basic, pure)
• Experimental
• Tests hypotheses
• Seeks to answer perplexing problems
Applied research:
• Improves practice in discipline (applied)
• Seeks to describe, interpret, or understand problems within specific settings
• Will not necessarily create new knowledge
Action research:
• Addresses particular, local problem (action research)
• Systematic inquiry
• Addresses specific problem within specific setting
• Often involves participants
• Focused on practical problems, social change
Evaluation research:
• Assesses value
• Measures worth or value of program, process, or technique
• Judges accomplishments and effectiveness
• Establishes decision-making basis

4.4 Research Methodology
In addition to the four types of research based on the types of questions asked, research can also be classified according to the type of methodology that is used to collect data. Methodology represents the overarching philosophy and approach to collecting data.

Qualitative Research Methodology
When seeking to understand "how" a phenomenon occurs or unfolds ("How do leaders best develop?") or inquire into the nature or meaning of something ("How does participation on a high-performing team affect individual identity and
performance?"), a qualitative methodology is appropriate. Qualitative methodology is concerned with "understanding the meaning people have constructed" (italics in original, Merriam & Tisdell, 2015, p. 15) and has been described as "an umbrella term covering an array of interpretive techniques which seek to describe, decode, translate, and otherwise come to terms with the meaning, not the frequency, of certain more-or-less naturally occurring phenomena in the social world" (Van Maanen, 1979, p. 520). Qualitative inquiry is not generally quantifiable but rather provides convincing evidence. Qualitative data is generated from methods such as interviews, focus groups, or observations that are commonly conducted as part of the discovery phase of action research. Qualitative methods are rooted in constructivist philosophy—the idea that people build meaning from experience and interpret their meanings in different ways. For example, two people would likely define the meaning of life differently.

Qualitative research occurs within the social setting or field of practice, and data collection is often referred to as "fieldwork" or being in the "field." Qualitative approaches can effectively address organization members' everyday concerns, help consultants understand and improve their practice, and inform decisions. Examples of qualitative questions asked in OD include "Why are employees dissatisfied with Organization Y?" and "What specific concerns do employees have about anticipated changes in the organization?" Qualitative methodology
uses techniques that allow deep exploration of social phenomena through interviews, observations, focus groups, or analysis of documents.

Qualitative Research Characteristics
Qualitative research focuses on building meaning and understanding about social phenomena. The researcher (generally the consultant in OD) is the primary instrument for data collection and analysis. This means that it is the consultant who conducts interviews, focus groups, or observations and then interprets or analyzes their meaning. Interpretation is considered an inductive process—that is, meaning is inferred from the data through a process of comparison, reflection, and theme building. Unlike quantitative methodology, where study participants are often selected at random, qualitative participants are selected purposefully and are individuals who can provide informed accounts of the topic under study. For example, if a consultant wants to know about the experiences of new employees, he or she obviously needs to ask new employees.

Qualitative Analysis and Results
Qualitative analysis provides a detailed account of the phenomenon. Direct quotations from participants and a full depiction of the setting, issue, or individuals under study are known as rich description. The design of a qualitative study is emergent and flexible, meaning that the questions may change as new insights are gained. For example, if Sarah is conducting focus groups on issues faced by new employees, a topic may arise that she wants to query future
groups about as she collects data.

Quantitative Research Methodology
When people want to know "how much" or "how many" of something, they generally seek a quantitative methodology. For example, a researcher might ask, "What are the percentage breakdowns of employee satisfaction in Organization Y, from very dissatisfied to very satisfied?" or "What is our organization's productivity rate compared with the industry standard?" Quantitative methods assume there is one correct answer to a question. This type of research yields statistical descriptions and predictions of the topics under study.

Recall from earlier coverage in this book the process of survey feedback, in which employees are given a questionnaire about the organization's management, culture, or atmosphere. Surveys are regularly used in OD to assess issues such as attitudes, individual performance, and technology needs or to evaluate certain functions or products. Surveys provide quantifiable data, such as what percentage of employees feel management is doing a good job or what percentage of employees plan to look for other work in the coming year.
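As a concrete illustration of how survey responses become percentages, the minimal Python sketch below tallies 5-point Likert ratings for a single item and reports the breakdown. It is an illustration only: the item wording comes from the questionnaire example later in this chapter, and the response data are invented for demonstration.

    # Illustrative sketch: summarizing 5-point Likert responses for one survey item.
    # The response data below are hypothetical.
    from collections import Counter

    SCALE = ["Strongly Disagree", "Disagree", "Neutral", "Agree", "Strongly Agree"]

    responses = [
        "Agree", "Strongly Agree", "Neutral", "Agree", "Disagree",
        "Agree", "Strongly Disagree", "Agree", "Strongly Agree", "Neutral",
    ]

    counts = Counter(responses)
    total = len(responses)

    print('Item: "Management is concerned with my welfare"')
    for label in SCALE:
        share = 100 * counts.get(label, 0) / total
        print(f"{label:>17}: {share:5.1f}%")

    # A single "favorable" figure is often reported as well.
    favorable = 100 * (counts["Agree"] + counts["Strongly Agree"]) / total
    print(f"Favorable (Agree or Strongly Agree): {favorable:.1f}%")

In a real OD survey the same tally would be run per item, and often per division or job classification, so that patterns can be located in the data rather than only described overall.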
Quantitative Research Characteristics
Quantitative techniques include surveys, questionnaires, and experiments that may involve testing with control groups. For example, Team A might be trained on effective team dynamics and facilitation procedures. Its productivity and performance might then be measured against Team B, which received no prior training. Quantitative studies are carefully designed, and once data collection begins, they are not changed. For example, if Jonas were administering a survey to a population, he would not change the questions halfway through data collection. Samples in a quantitative study are random and large. A corporation of 40,000 employees being surveyed on their opinions about health benefits would target a smaller number of randomly selected workers to provide a representation of what the majority of workers would likely prefer.

Quantitative Analysis and Results
Quantitative data is analyzed using a deductive process in which the numbers or statistics will be used to determine an understanding of what is being studied. Assuming a benefits survey was conducted in the previous example, the organization might learn that 60% of employees prefer managed care, 40% want vision, and only 30% want dental insurance. The company would use this information to modify its benefits packages. Table 4.4 compares and contrasts qualitative and quantitative methods.
Table 4.4: Comparison of qualitative and quantitative research methods
• Research focus. Qualitative: quality (nature, essence). Quantitative: quantity (how much, how many).
• Philosophical roots. Qualitative: phenomenology, symbolic interactionism, constructivism. Quantitative: positivism, logical empiricism, realism.
• Associated phrases. Qualitative: fieldwork, ethnographic, naturalistic, grounded, constructivist. Quantitative: experimental, empirical, statistical.
• Goal of investigation. Qualitative: understanding, description, discovery, meaning, hypothesis generating. Quantitative: prediction, control, description, confirmation, hypothesis testing.
• Design. Qualitative: flexible, evolving, emergent. Quantitative: predetermined, structured.
• Sample. Qualitative: small, nonrandom, purposeful, theoretical. Quantitative: large, random, representative.
• Data collection. Qualitative: researcher as primary instrument, interviews, observation, documents. Quantitative: inanimate instruments (scales, tests, surveys, questionnaires, computers).
• Analysis. Qualitative: inductive, constant comparative method. Quantitative: deductive, statistical.
• Findings. Qualitative: comprehensive, holistic, richly descriptive. Quantitative: precise, numerical.

4.5 Research Methods
Research methods are procedures used to collect data. They are based on the type of research methodology used. Methods typically used in OD are profiled in
this section.

Interviews
A conversation facilitated by the consultant for the purpose of soliciting a participant's opinions, observations, and beliefs is an interview. Interviews give participants the opportunity to explain their experience, record their views and perspectives, and legitimize their understandings of the phenomenon under study (Stringer, 2013). The interviews at QuickCo likely asked employees about departmental problems, communication, leadership, and so forth. Conducting interviews requires constructing questions that best address the issues under investigation. For example, Jack might have asked the QuickCo shipping employees these questions:
• "What do you see as the top three challenges in the shipping department?"
• "Can you tell me about a specific event that contributed to the problems you face today?"
• "What has to change for you to be happy here?"
• "What have you tried to resolve the problem?"
• "What role have you played in the shipping department?"
• "How likely are you to leave your position in the next year?"
Recording interviews can be useful, but make sure you have permission from the participant (interviewee) and prepare and test the recording equipment in advance. If you are not able to record, you will want to take notes, but this is not ideal because it distracts you from what
the interviewee is sharing.

Interviews have several strengths. They provide in-depth insight into an interviewee's opinions, attitudes, thoughts, preferences, and experiences. Interviews allow the interviewer to probe and pose follow-up questions. Interviews can be done rapidly, particularly by telephone and email, and they tend to elicit high response rates.

Interviews also have several weaknesses, including that they can be costly and time consuming, especially when done in person. Interviewees may answer in ways they think will please the interviewer rather than tell the truth. The quality of the interview is dependent on an interviewer's skill and ability to avoid bias and ask good questions. To avoid bias, an interviewer should set aside expectations about the problem and solutions and truly listen to what the participants say during data collection. Interviewees may lack self-awareness or forget important information and thus fail to provide good data. They may also have confidentiality and trust concerns. Data analysis can also be time consuming.

Questionnaires
A questionnaire is an electronic or paper form that has a
standardized set of questions intended to assess opinions, observations, and beliefs about a specific topic, such as employee satisfaction. It is a quantitative method. Questionnaires are also known as surveys, and one of OD's first interventions was survey research, as was discussed in Chapter 1. Questionnaires measure attitudes and other content from research participants. The results can be quantified, often to show statistical significance of the responses.

Questionnaires are commonly administered to employees to inquire about the organization's culture and climate and their satisfaction levels with their work, management, and relationships. Participants are usually asked to rate the questionnaire items using a Likert scale (described in Chapter 1). For example, they might rate an item such as "Management is concerned with my welfare" on a 5-point scale from "Strongly Disagree" to "Strongly Agree." Questionnaires should feature clearly written questions that will yield actionable information.

Questionnaires and surveys have several benefits. They are inexpensive to administer, especially if done electronically or in groups. Software programs make surveys relatively easy to develop and distribute. Questionnaires provide insights into participants' opinions, thoughts, and preferences. They allow rapid data collection and are generally trusted for confidentiality and anonymity. Questionnaires are reliable and valid when well constructed and permit open-ended data to be collected, as well as
exact responses to direct questions.

Questionnaires and surveys also pose some challenges. They should be kept short or participants may not complete them. Participants may answer in ways they think please you instead of telling the truth. They may not respond to certain items at all, especially if the wording is unclear. Participants may not trust confidentiality or may feel that the survey is tedious; thus, the response rate may be low. Finally, data analysis can be time consuming for open-ended items.

Focus Groups
A group of approximately eight to 12 participants assembled to answer questions about a certain topic is known as a focus group. Focus groups are similar to interviews, but they are conducted collectively and facilitated by a moderator. Developing targeted questions is important, as is inviting the right people who possess insight and experience relevant to the problem. Focus group sessions should be recorded and transcribed verbatim, with participants' permission.
Focus groups are beneficial for understanding participants' thinking and perspectives, as well as for exploring new ideas and concepts. Participants can generate new knowledge and ideas, especially if they build off each other's remarks. Focus groups might also yield in-depth information about problems or potential fixes. They can offer insight into the client organization's relationships and communications and may provide an opportunity to probe relationship issues. Focus groups are relatively easy to organize and represent an efficient way to collect data from several stakeholders simultaneously.

Focus groups also pose challenges. They might be expensive to conduct if participants are brought in from multiple locations. Finding a skilled facilitator can be difficult. Participants may be suspicious of the process and have confidentiality concerns. Participants may also be overbearing, negative, or dominant during the session, so adroit facilitation is needed. If employees are angry or worried, their emotions can dominate. Focus groups can also generate voluminous findings that may not be generalizable if the participants are not representative of the organization or that may not be relevant to the issue under investigation. Finally, large amounts of data may be time consuming to analyze. Consultants should hone their focus group facilitation skills, and resources for building this competency are listed at the end of this chapter.
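Once focus group transcripts have been coded, even a very simple tally can help a consultant see which draft themes recur. The Python sketch below is purely illustrative; the comments and theme labels are hypothetical, and the coding itself remains the researcher's interpretive work.

    # Illustrative sketch: tallying draft themes across coded focus group comments.
    # The comments and theme labels below are hypothetical.
    from collections import Counter

    coded_comments = [
        ("No clear career path here; the only way up is out.", ["career path"]),
        ("Orientation was rushed and nobody mentored me.", ["onboarding", "mentoring"]),
        ("I don't know what it takes to get promoted.", ["career path"]),
        ("My supervisor never discusses my development.", ["mentoring", "development"]),
    ]

    theme_counts = Counter()
    for _comment, themes in coded_comments:
        theme_counts.update(themes)

    for theme, count in theme_counts.most_common():
        print(f"{theme}: mentioned in {count} comment(s)")

A count like this supports, but does not replace, the constant comparison of themes described later in this chapter.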
Direct Observation
Suppose Nina watches people, meetings, events, work processes, or day-to-day activity in the organization setting and records what she sees. Nina is undertaking direct observation. This data collection method involves recording observations in the form of field notes. Stringer (2013) listed typical observations made in action research:
• Places: the contexts where people work, live, and socialize, including the physical layout
• People: the personalities, roles, formal positions, and relationships experienced by participants
• Objects: the artifacts in our contexts such as buildings, furniture, equipment, and materials
• Acts: the actions people take (signing a form, asking a question)
• Activities: a set of related acts (e.g., facilitating a meeting)
• Events: a set of related activities (e.g., putting on a training session)
• Purposes: what people are trying to accomplish
• Time: times, frequency, duration, and sequencing of events and activities
• Feelings: emotional orientations and responses to people, events, activities, and so forth
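Consultants who keep field notes electronically sometimes give each observation a consistent structure. The Python sketch below is one illustrative way to do that, loosely based on Stringer's categories; the class name and fields are our own and are not part of Stringer's framework.

    # Illustrative sketch: a structured field-note record loosely based on
    # Stringer's (2013) observation categories. Names are our own.
    from dataclasses import dataclass, field
    from datetime import datetime

    @dataclass
    class FieldNote:
        place: str                      # context where the observation occurred
        people: list[str]               # who was present and in what roles
        acts_and_activities: list[str]  # what was done, reported directly
        purposes: str                   # what participants appeared to be trying to accomplish
        feelings: str                   # emotional tone observed, not interpreted
        observed_at: datetime = field(default_factory=datetime.now)

    note = FieldNote(
        place="Shipping department weekly meeting",
        people=["supervisor", "six shipping employees"],
        acts_and_activities=[
            "supervisor read the order-error report",
            "two employees interrupted each other",
        ],
        purposes="Review last week's late and erroneous orders",
        feelings="Raised voices during the error discussion",
    )
    print(note)

Recording entries this way also supports the advice, discussed below, of reporting what was observed rather than interpreting it.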
Direct observation has several benefits. It allows direct insight into what people are doing, avoiding the need to rely on what they say they do. Observation offers firsthand experience, especially if the observer participates in activities he or she observes. This is known as participant observation, and it is just as useful for observing what happens as for what does not (for example, a manager may tell you she involves employees in decision making, but you may observe her doing the opposite). An observation might yield valuable details that offer insight into the organization's context and politics that organization members may miss. Observational data may also provide a springboard from which to raise issues that people would otherwise be unwilling to talk about.

Direct observation also poses challenges. It may be impossible to determine a rationale for observed behavior. If people know they are being observed, they may alter their behavior. Observations may be clouded by personal bias and selective perception. One must avoid overidentifying with the studied group so that observations are objective (this is especially challenging in the case of participant observation). Doing observation can be time consuming, and access may sometimes be limited, depending on the type of organization. A consultant may have to sort through observations that seem meaningless in relation to the problem. Data analysis can also be time consuming. See Tips and Wisdom: Effective Observation to read advice about undertaking productive observation.

Tips and Wisdom: Effective Observation
Doing effective observations requires planning, skill, and focus. Here are some tips to make your observations more robust:
1. Determine the purpose of the observation. Observations can be used to understand a wide range of activities in organizations such as how employees respond to a new office layout, how customers engage with employees, how supervisors relate to their subordinates, or how certain procedures are executed. You should be able to state in one sentence the focus of your observation: The purpose of this observation is to document personal safety equipment usage [specify time, procedure, or location]. Or perhaps you are more interested in the nature of interaction: The purpose of this observation is to understand what types of questions medical professionals ask during a clinic. Specificity saves the consultant from capturing a lot of extraneous data. In the first example, you might note the frequency and types of personal safety equipment used, and the conditions when it is not. In the second example, you might be interested in who is asking the questions, assumptions made about the cases, or what emotions are expressed. Clarity about purpose increases the likelihood of seeing what you are seeking.
2. Determine what is relevant to the observation. If you are observing participation and team dynamics during a meeting, what occurs during an outside interruption of the meeting is probably irrelevant to what is going on in the team.
3. Decide how to document the observation. Your choices include videotaping, audiotaping, photography, and notetaking. There is not a perfect method. Technology-assisted video or audio recording might subdue participants who feel self-conscious about the information they are sharing or are fearful of reprisals. Notes can miss key information and quickly lose their meaning. Notetaking can be assisted by creating a shorthand for participants (e.g., "ee" for "employee" and "mgr" for "manager"). Practice taking notes to build skill. Use more than one notetaker and then compare findings. Finally, create a checklist for the observation to make it easy to record items, such as a list of behaviors during meetings (interruptions, new ideas, constructive criticism, building on ideas, etc.).
4. Avoid interpreting what is observed and instead report it directly. So, if you were observing personal safety equipment usage, you might say "Person A did not wear safety glasses" instead of "Person A appeared to be distracted and hurried and forgot to put on safety glasses." You will likely have to pair observations with interviews or focus groups to understand intentions behind behaviors and interactions you witness.

Document Analysis
Document analysis involves reviewing relevant records, texts, brochures, or websites to gain insight into organization functioning, problem solving, politics, culture, or other issues. Documents might include memoranda, meeting minutes, records, reports, policies, procedures, bylaws, plans, evaluation reports, press accounts, public relations materials, vision statements, newsletters, and websites. Most organizations have a prolific amount of documentation, so using this type of data requires a clear focus and purpose. For example, Jack, our QuickCo consultant, reviewed performance trends and customer complaints to better understand the shipping department's problems. If Jack were trying to help an executive improve communication skills, he might review his client's email correspondence to determine how effectively and respectfully the executive communicates. This type of data collection can significantly inform the OD initiative.

Documents provide several advantages, including access to historical data on people, groups, and the organization, as well as insight into what people think and do. Document analysis is an unobtrusive data collection method, which minimizes negative reactions. Certain documents might also prove useful for corroborating other data collected; for example, Jack could compare the executive's email communications with colleagues' accounts collected through interviews.

On the other hand, documents may provide little insight into participants' thinking or behavior or may not apply to general populations. They can also be unwieldy and overwhelming in the action research process. Confidential documents may sometimes be difficult to access.

Additional Data Sources
Although interviews, questionnaires, focus groups, direct observation, and document analysis are the most commonly used OD data sources, other sources of information include the following:
• Tests and simulations: Structured situations to assess an individual's knowledge or proficiency to perform a task or behavior. For example, some organizations might use an inbox activity to assess delegation skills during a hiring process. Others use psychological tests to measure ethics, personality preferences, or behaviors. These instruments can be used in hiring, team development, management development, conflict resolution, and other activities.
• Product reviews: Reviews of products or services from internal or external sources. These can be useful for addressing quality or market issues.
• Performance reviews: Formal records of employee performance. These can be particularly useful for individual interventions that are developmental or for succession planning on an organization level.
• Competitor information and benchmarking: Comparative analyses of what competitors are doing regarding the issue under exploration. Examples might include salary, market, or product comparisons.
• Environmental scanning: Analysis of political, economic, social, and technological events and trends that influence the organization now or in the future.
• Critical incidents: Interviews that ask participants to identify a specific task or experience and pinpoint when it went well, when it did not go well, and what they learned. Critical incidents were first used in military pilot training to identify and eliminate mistakes.

4.6 Methods of Analyzing the Data
The most common types of research in OD are survey research using quantitative methods and qualitative inquiry that could employ interviews, focus groups, observation, document analysis, or a combination thereof. As you recall, quantitative methods are used to determine "how much," while qualitative methods are used to determine "how." We have already identified the many methods for collecting data; now, what do you do with it? Data points are simply bits of information until they are assimilated in ways that tell a story or provide deeper understanding of
a phenomenon. For instance, employee responses on a survey about job satisfaction are just numbers on a page until interpreted. Once you know that 35% of respondents are only moderately satisfied and are clustered within a certain division or job classification, then you can begin to understand the scope of the problem and consider interventions.

A consultant's job is to make sense of data and present it to the client. Such a presentation should be in plain language and in quantities that the client can easily manage. It is advisable to involve the client and other relevant organization members in the presentation of the analysis, because doing so promotes buy-in, collaboration, and accurate data interpretation.
There are several steps to analyzing data effectively. These steps differ depending on whether you are doing qualitative or quantitative analysis. It is beyond the scope of this book to fully train you as a researcher, so it is a good idea to gain additional training and experience in this area if it interests you. Until you gain experience with data analysis, it is recommended that you partner with someone who is an expert. If you have access to a university or other organizational research team, this can be an easy way of both finding a research expert and developing a research partnership. Such relationships help bridge theory and practice and can be great opportunities to enhance your learning. There are also some suggestions for continued learning listed at the end of this chapter. Case Study: Data Collection and Analysis at Jolt Transformers offers an example of how to ensure effective data analysis.

Case Study: Data Collection and Analysis at Jolt Transformers
Jo Lee of Design Solutions Consulting receives a phone call from Rex James of Jolt Transformers. "Rex, what can I do for you?" asks Jo, who has done work for Jolt in the past.
"Jo, we've got a problem with our technicians," Rex replies. "We can't keep them. We hire them and train them, and then they go work for the competition for more money. Then the cycle repeats and it seems we wind up hiring folks back again until they can jump ship for more cash. Our management team thinks they need more training."
"What makes you think that, Rex?" Jo is skeptical that training is the solution in this case. She listens a bit longer and sets up a time to meet with Rex and his division CEO. During the meeting, Jo asks several questions about the extent of the problem and what steps have been taken to address it. The three agree that the first step is to collect more data to understand the scope of the problem. They decide on a three-pronged approach: a survey of technicians, interviews with key executives, and focus groups with selected technicians. These methods will provide both quantitative and qualitative data.
Over the coming weeks, Jo and Rex work on developing a survey with a small team that includes technician supervisors, technicians, and human resource personnel. They administer the survey to each of the company's 75 technicians. The survey results show that 70% are dissatisfied with their careers at Jolt and 62% are planning to apply elsewhere in the next year. Jo and Rex also develop interview questions for the executives and a format and questions for the technician focus groups.
During the interviews, it becomes clear to Jo that the executives believe the problem is that the company lacks a training institute for technicians. A couple of executives want her to design a curriculum to train the technicians more effectively. Jo is highly skeptical of this assumption, however, because it runs counter to what she is learning from the technicians. Other executives express concern that the company is not investing appropriately in the development and retention of its work force. Jo thinks they might be on to something.
During the focus groups with technicians, Jo hears comments such as these:
"There is no clear career path at Jolt. The only way to progress is to go elsewhere."
"The company doesn't seem interested in us at all. They just want us to produce—the faster, the better."
"The competing companies provide a much better orientation program and connect you with a mentor to help you develop your skills."
"It's a mystery what you have to do to get promoted around here. Instead of moving up, you might as well just plan to move out."
During the weeks that Jo collects and analyzes data, she undertakes several measures to promote a thorough, effective analysis. Each is discussed as a tenet of effective analysis related to the case.
Design a systematic approach; keep a data log. Jo works with a team from Jolt to design a process for collecting quantitative and qualitative data. As the data collection process unfolds, Jo keeps a detailed log of the steps taken, especially for the interviews and focus groups. These notes allow her to tweak the interview and focus group questions based on what she learns. When you use data logs, you can keep them in the form of a journal or official memoranda that highlight key steps, decisions, and emerging themes. These logs might include visual images of what you are learning, such as models, system diagrams, or pictures. Write notes to yourself as you analyze. Thoroughly documenting your procedures is good practice and should allow another person to step in and repeat your data collection and analysis procedures.
Allow data to influence what is learned. Jo listens and watches carefully as she collects data. Her attention to detail offers her new insights into prevailing assumptions at play in the organization. She is able to add questions to the interviews and focus groups that push participants to think more broadly about the problem. For example, she pushes executives to provide evidence that a training institute would result in better retention of employees. When the executives find they cannot provide clear answers, they reflect more deeply on the problem. Jo is also able to probe more around the lack of development and retention activities going on in the organization.
Constantly compare qualitative data. Constant comparison earned its name because it involves a repetitive process of comparing themes that appear in the data until the researcher arrives at a cogent list that satisfactorily explains the phenomenon. This involves careful study, note making, and looking for patterns in the data. Having more than one set of eyes coding the data and generating themes helps verify the analysis. In the case of Jolt, two themes were found from the data analysis:
1. Technicians were dissatisfied with
   a. the lack of a career path and
   b. the level of support for advancement and growth.
2. Jolt was lacking
   a. long-term strategies for recruitment, retention, and development of technicians and