Ch. 13 Designing and
Conducting Summative
Evaluations
Presentation By: Ezra Gray
Background
Summative evaluation is the process of collecting data and information to make decisions
about whether the instruction actually works as intended in the performance context; further,
it is used to determine whether progress is being made in ameliorating the performance
problems that prompted the instructional design and development effort. The main purpose
of summative evaluation is to determine whether given instruction meets expectations. The
need for summative evaluation became apparent several decades ago, when advocates for
each new public school curriculum and each new media delivery system claimed that it was
better than its competitors. Studies were conducted as soon as possible to determine the
“winner”. Often, the innovation did not do as well as traditional instruction. This came as no
surprise to experienced evaluators, who knew the innovation was really still in draft form,
whereas the traditional instruction had been used, and revised, for many years.
Objectives
● Describe the purpose of summative evaluation
● Describe the two phases of summative evaluation and the
decisions resulting from each phase
● Design an expert judgement phase of summative evaluation
● Design an impact phase of summative evaluation
● Contrast formative and summative evaluation by purpose and
design
Congruence Analysis
The purpose of congruence analysis is to examine the
congruence between the organization’s stated needs
and the instructional materials. To perform the
congruence analysis, you should first obtain copies of
the organization’s strategic plans, their current goals
and objectives, and their stated needs for the training.
You can then infer how closely the training goals are
aligned with the organization’s goals and needs. The
closer the training is aligned with the organization’s
strategic plans and goals, the better the implementation
support managers and employees will have in
instituting changes and implementing new skills in the
workplace.
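To make the congruence check concrete, an evaluator might tabulate goal-to-goal judgements before drawing conclusions. The sketch below is a minimal, hypothetical Python example: the goals and the 1-3 alignment ratings are invented for illustration and would come from the evaluator’s own review of the organization’s planning documents.

```python
# Minimal congruence-analysis tally: each entry pairs a training goal with
# the organizational goal it supports and an evaluator-assigned alignment
# rating (1 = weak, 3 = strong). All entries here are hypothetical.
from statistics import mean

congruence_ratings = [
    ("Resolve customer complaints by phone", "Improve customer retention", 3),
    ("Navigate the new ticketing system",    "Reduce response time",       2),
    ("Summarize weekly call metrics",        "Improve customer retention", 1),
]

for training_goal, org_goal, rating in congruence_ratings:
    print(f"{training_goal!r} supports {org_goal!r}: {rating}/3")

print(f"Mean alignment: {mean(r for _, _, r in congruence_ratings):.1f}/3")
```

A low mean alignment would signal that managers may have little incentive to support the training when it is time to implement new skills in the workplace.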
Content Analysis
Because you may not be a content expert in the
materials you evaluate, it may be necessary to engage
a content expert as a consultant. What you must
consider is how best to use this expert. One strategy is
to provide the expert with copies of all materials and
ask them to judge the accuracy, currency, and
completeness of the materials against the organization’s
stated goals; another strategy is to obtain the design
documents from the group that produced the
instruction and ask the expert to use them as a
standard against which to evaluate the accuracy and
completeness of the instructional materials.
Design Analysis
As an external evaluator, you may not know whether the materials are adequate for the
given learners’ needs, but you should take steps to find out about the learners’ characteristics
in order to make these determinations. The designers’ instructional strategy, including the
preinstructional information, content presentation, learner participation, assessment, and
follow-through, should be used as a template for reviewing the materials. Although the basic
components of an instructional strategy do not change, it may be necessary to adopt criteria
related to each component based on the type of learning outcomes addressed in the
materials and the learners’ motivation and capabilities. It may also be necessary to assess the
materials from logistics and management points of view. These aspects of the strategy,
rather than the learning foundation, may be the cause of some problems uncovered in the
instruction.
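One way to operationalize such a review is a simple rubric keyed to the five strategy components named above. The sketch below is a hypothetical Python example; the criteria and the Yes/Partial/No scores are placeholders an evaluator would replace with criteria suited to the outcome types in the materials.

```python
# Hypothetical design-analysis rubric: one criterion per instructional-strategy
# component, scored while paging through the materials under review.
rubric = {
    "preinstructional information": "Are objectives and relevance stated up front?",
    "content presentation":         "Do examples match the outcome type?",
    "learner participation":        "Is practice with feedback provided?",
    "assessment":                   "Do items align with the stated objectives?",
    "follow-through":               "Is transfer to the job context supported?",
}

scores = {  # illustrative reviewer judgements, not real data
    "preinstructional information": "Yes",
    "content presentation": "Partial",
    "learner participation": "Yes",
    "assessment": "No",
    "follow-through": "Partial",
}

for component, criterion in rubric.items():
    print(f"{component}: {criterion} -> {scores[component]}")
```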
Transfer Feasibility Analysis
Consider such factors as learner guidance and support as the learner bridges the gap
between instruction and the job. Are learners permitted to take any instructional materials with
them to the job site? Are there learner guides, checklists, or outlines to consult? Is there an
app for their smartphones or easy ways to contact support? Is there an electronic
performance support system on their tablet or at the jobsite? Are any just-in-time materials
available to support the transfer process? Are any required software and instructional
programming platform-neutral for implementation in the maximum number of work
environments? The evaluator must also know whether critical aspects of the job were
adequately simulated in the learning context; other considerations, such as supervisor
capabilities, equipment, or environments, are also important to examine. You may also wish
to investigate any factors that might enhance or restrict the utility of the materials for the
organization, and whether employees actually learned the knowledge and skills during
training.
Transfer Feasibility Analysis (contd.)
If the information and skills are not learned, then it
is not likely that transfer will occur. Some of us
have known groups and individuals who consider
the “in-service training” experience to be points on
a certificate, a day away from the job, lunch out
with colleagues, and home early. The potential
for transfer within such a culture is slim. How can
the designer/evaluator assess the potential for
transfer when this may be the circumstance?
Evaluators can request posttest data from the
organization providing the instruction to
determine whether the skills were actually
learned.
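As a simplified illustration of that posttest check, the sketch below computes the share of learners who met a mastery criterion. The scores and the 80% cutoff are invented for the example; a real study would use the assessment data and criteria supplied by the organization that provided the instruction.

```python
# Hypothetical mastery check on posttest data obtained from the training
# organization; scores and the cutoff are illustrative only.
posttest_scores = {"learner_01": 92, "learner_02": 74, "learner_03": 88,
                   "learner_04": 61, "learner_05": 85}
MASTERY_CUTOFF = 80  # assumed mastery criterion, in percent

masters = [who for who, score in posttest_scores.items()
           if score >= MASTERY_CUTOFF]
rate = len(masters) / len(posttest_scores)
print(f"{rate:.0%} of learners met the {MASTERY_CUTOFF}% mastery criterion")
if rate < 0.8:
    print("Low mastery: poor transfer may reflect learning, not the worksite.")
```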
Existing Materials Analysis
The proliferation of e-learning over the past
few years has many organizations
scrambling for quality instructional
materials for this delivery format. Public
schools, universities, and continuing
professional education as well as training
groups in business, government, and the
military have all recognized the economies and
convenience of distance education, and
they are seeking quality instructional
materials. This need has resulted in the
related need for more instructional
designers.
Existing Materials Analysis (contd.)
The rapid expansion of the field without matching
expansion in graduate programs preparing
instructional designers means that many employed as
designers today lack advanced degrees in either
instructional design or industrial/organizational
psychology. Instructional materials created during this
rapid expansion may or may not be the best quality.
This potentially means additional work for external
evaluators who can use instructional design principles
to evaluate the potential of prepackaged, commercial
“learning solutions” or instruction created by outside
consultants.
Impact Phase of Summative Evaluation
The second phase of summative evaluation,
impact analysis (sometimes called
outcomes analysis), typically includes the
following parts: focusing the impact study,
establishing criteria and data needs,
selecting respondents, planning study
procedures, summarizing and analyzing
data, reporting results, and negotiating
resources.
Focusing the Impact Study
The first planning activity is to center your study in the workplace. The evaluator must shift
from the perspective of the instruction to the perspective of the organization. Review the
organization’s goals, their defined needs, and their relationships to the specific goals for the
instruction and to the employees who participated in the instruction. With these as a
resource, clearly describe the questions to be answered during the site study. Your
questions should yield the information needed for the impact analysis. It is always a good idea to plan
how you will introduce the study and interact with employees. Even your initial contact can
sink a study if company personnel are not approached appropriately. All participating
personnel must understand that you are evaluating specific training and instructional
materials, and not them or their company. Individuals and organizations are often justifiably
wary of strangers asking questions. The focus of and purpose for your evaluation should be
made clear to all.
Establishing Criteria and Data
Again, the criteria and data in the performance site vary
greatly from one context to another, and appropriate
methods for the evaluation must be tailored to the site.
The criteria and questions here are whether persons in
the worksite believe the learner has transferred the skills
learned during instruction to the worksite; whether in
doing so the organization’s defined needs were met, or
progress has been made towards meeting them; and
whether physical or attitudinal evidence of use or impact
exists within the worksite.
Establishing Criteria and Data (contd.)
The data can include ratings of learner
performance and attitudes on the job;
supervisor, peer, and customer attitudes;
employees’ performance ratings by
supervisors; supervisors’ performance
ratings by managers; and physical
improvements in products, performances,
or services. Data gathering methods
depend on the resources available for the
study.
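A minimal sketch of how such multi-source ratings might be summarized follows; the skills, sources, and 1-5 ratings are hypothetical stand-ins for data actually gathered at the worksite.

```python
# Hypothetical multi-source summary: 1-5 ratings of on-the-job use of each
# trained skill, gathered from the learner, a peer, and the supervisor.
from statistics import mean

ratings = {
    "de-escalating complaints": {"self": 4, "peer": 3, "supervisor": 4},
    "triaging tickets":         {"self": 5, "peer": 4, "supervisor": 3},
    "writing status reports":   {"self": 3, "peer": 2, "supervisor": 2},
}

for skill, by_source in ratings.items():
    avg = mean(by_source.values())
    spread = max(by_source.values()) - min(by_source.values())
    flag = "  <- sources disagree; follow up" if spread >= 2 else ""
    print(f"{skill}: mean {avg:.1f}/5 across {len(by_source)} sources{flag}")
```

Large disagreement between sources is itself a finding worth probing in follow-up interviews.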
Selecting the Respondents
The nature of the information you need and
the particular questions to be answered
help you plan the types and numbers of
persons to include in your study. This typically
includes target learners, peers, supervisors,
managers, and sometimes customers. It
may be necessary to interview the persons
in the organization who requested the
evaluation. Through discussions with them,
you can ensure that you have identified
their needs, resources, and constraints
accurately.
Selecting the Respondents (contd.)
Learners/employees have insight into
whether and how they use the skills and, if
not, why not. Peers and subordinates of the
learners selected may also offer insights
learners selected may also offer insights
into the effectiveness of the instruction. Did
they notice the learners using the skills?
Were the learners effective? How could
they have performed better? Did learners
receive attention or any other kinds of
rewards for trying the new skills? Did they
talk with the learners about the new skills?
They might also shed light on constraints
present in the environment that work
against applying the new skills.
Planning Study Procedures
In selecting the most appropriate
procedures for collecting evidence of
training impact, you should consider when,
where, and how to collect the information.
When to collect the information is best
decided based on the nature of the
instruction, the nature of the work in the job
site, and the needs of the organization.
Where to collect data for summative impact
evaluation must also be decided. Again, this
depends on the organization’s needs.
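One lightweight way to keep the when/where/how decisions explicit is a planning matrix, as in the hypothetical sketch below; every question and entry is invented for illustration and would in practice be negotiated with the organization.

```python
# Hypothetical data-collection plan: for each evaluation question, record
# when, where, and how the evidence will be gathered.
plan = [
    {"question": "Are the trained skills used on the job?",
     "when": "6 weeks after training", "where": "jobsite",
     "how": "supervisor ratings and direct observation"},
    {"question": "Is progress being made on the defined needs?",
     "when": "end of quarter", "where": "organizational records",
     "how": "review of performance metrics"},
]

for row in plan:
    print(f"{row['question']}\n  when: {row['when']} | where: {row['where']}"
          f" | how: {row['how']}")
```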
Comparison of Formative and
Summative Evaluations
| | Formative Evaluation | Summative Evaluation |
| --- | --- | --- |
| Purpose | Locate weaknesses in the instruction in order to revise it | Document the degree to which skills learned during the instruction transferred to the jobsite |
| Phases or Stages | One-to-one, small group, field trial | Expert judgement, impact analysis |
| Instructional Development History | Systematically designed in-house | Produced in-house |
| Materials | One set of materials | One set of materials |
| Position of Evaluator | Member of design and development team | Typically an external evaluator |
| Outcomes | A prescription for revising instruction | Documentation of the soundness of the instruction |
Summary
Summative evaluations are conducted to make decisions about whether to maintain, adopt,
or adapt instruction. The primary evaluator in a summative evaluation is rarely the designer
or developer of the instruction; the evaluator is frequently unfamiliar with the materials, the
organization requesting the evaluation, or the setting in which the materials are evaluated.
Such evaluators are referred to as external evaluators; these evaluators are preferred for
summative evaluations because they have no personal investment in the instruction and are
likely to be more objective about the strengths and weaknesses of the instruction.
Instructional designers make excellent summative evaluators because of their understanding
of the instructional design process, the characteristics of well-designed instruction, and the
criteria for evaluating instruction. These skills provide them with the expertise for designing
and conducting the expert judgement as well as the impact analysis phases of the
summative evaluation.
The End.
Contact Email: Ezragray7@gmail.com
