Blended learning combines online learning, face-to-face instruction, and other instructional methods. Blended learning courses provide learners with the positive features of both face-to-face instruction and technology-based delivery, while minimizing the negative features of each. Considering the organization you identified in week one, would you recommend blended learning for this organization? Explain your decision in detail.
Training Evaluation

This chapter provides an overview of how to evaluate training programs, including the types of outcomes that need to be measured and the types of evaluation designs available. The chapter highlights the importance of evaluating whether the training has accomplished its objectives and, particularly, whether job performance and organizational results have improved as a result. Formative and summative evaluation are discussed and compared, the process of evaluating training is outlined, and the outcomes used to evaluate training are described in detail. In an environment of accountability, expertise in assessing the effectiveness of training from multiple perspectives is invaluable.

OBJECTIVES
- Explain why evaluation is important.
- Identify and choose outcomes to evaluate a training program.
- Discuss the process used to plan and implement a good training evaluation.
- Discuss the strengths and weaknesses of different evaluation designs.
- Choose the appropriate evaluation design based on the characteristics of the company and the importance and purpose of the training.
- Conduct a cost-benefit analysis for a training program.
- Explain the role of big data, workforce analytics, and dashboards in determining the value of training practices.
INTRODUCTION
Training effectiveness refers to the benefits that the company and trainees experience as a result of training. Benefits for the trainees include learning new knowledge, skills, and behaviors. Potential benefits for the company include increased sales, improved quality, and more satisfied customers.
Training evaluation refers to the process of collecting the data on outcomes needed to determine whether training is effective.
Training outcomes, or criteria, refer to the measures that the trainer and the company use to evaluate training programs.
Evaluation design refers to the collection of information that will be used to determine the effectiveness of the training program.
REASONS FOR EVALUATING TRAINING
Companies have made large dollar investments in training and education and view training as a strategy for success. Training evaluation provides a way to understand the return that these investments produce and provides the information needed to improve training. If a company receives an inadequate return on investment, it will likely reduce its investment in training or look for training providers outside the company who can deliver training that yields the desired results.
Formative Evaluation
Formative evaluation refers to the evaluation of training that takes place during program design. Formative evaluation helps to ensure that the training program is well organized and runs smoothly, and that trainees learn and are satisfied. It provides information about how to make a program better.
Formative evaluations ask employees, managers, and SMEs for their opinions about training content, methods, and the like. Training content may be changed to be more accurate, easier to understand, or more appealing, and training methods can be adjusted to improve learning.
Formative evaluation involves pilot testing. Pilot testing is the process of previewing a training program with potential trainees and their managers. The pilot testing group is then asked to provide feedback about the content and the methods of delivery. This feedback enables the trainer to make any needed improvements.
Summative Evaluation
Summative evaluation refers to evaluation conducted to determine the extent to which trainees have changed as a result of training. That is, summative evaluation examines whether trainees have improved or acquired knowledge, skills, attitudes, behaviors, or other outcomes. Summative evaluation may also include examining the business impact of training.
The Importance of Evaluation
There are multiple reasons to evaluate training effectiveness:
- To identify the program's strengths and weaknesses, including whether the program is meeting its learning objectives, the quality of the learning environment, and whether transfer of training is occurring.
- To assess whether the various features of the training context and content contribute to learning and transfer.
- To identify which trainees benefited most or least from the program and why.
- To gather information, such as testimonials, to use in marketing training.
- To determine the financial benefits and costs of the program.
- To compare the costs and benefits of training versus other HRM investments.
- To compare the costs and benefits of various training programs in order to choose the most effective ones.
OVERVIEW OF THE EVALUATION PROCESS
The evaluation process involves five key components.
1. Needs assessment. The evaluation process should begin with determining training needs. Needs assessment helps identify what knowledge, skills, behavior, or other learned capabilities are needed.
2. Develop measurable learning objectives and analyze transfer of training. Identify specific, measurable training objectives to guide the program. Besides the learning objectives themselves, it is important to consider the expectations of those individuals who support the program and have an interest in it. Analysis of the work environment can be useful for determining how well content will transfer.
3. Develop outcome measures. Based on the preceding steps, outcome measures are designed to assess the extent to which learning and transfer have occurred.
4. Choose an evaluation strategy. Once the outcomes have been identified, determine an evaluation strategy. Factors such as expertise, how quickly the information is needed, change potential, and the organizational culture should be considered.
5. Plan and execute the evaluation. This involves previewing the program, as well as collecting training outcomes according to the evaluation design. The results of the evaluation are used to modify, market, or gain additional support for the program.
OUTCOMES USED IN THE EVALUATION OF TRAINING PROGRAMS
There are six primary outcomes that can be used to evaluate training effectiveness.

Reaction Outcomes
Reaction outcomes refer to trainees' perceptions of the training experience, including the content, facilities, trainer, and methods of delivery. An accurate evaluation should include all the factors related to a successful learning environment. Reactions are often referred to as a measure of "creature comfort." Key questions to consider include:
- Did the trainees like the program?
- Did the environment help learning?
- Was the material meaningful?
This information is typically collected at the program's conclusion via a questionnaire. Reactions are often assessed by asking trainees to respond to a series of strongly agree-strongly disagree statements about the learning experience (a minimal scoring sketch appears at the end of this subsection). Reaction measures can also include open-ended questions about the experience, such as "What did you learn that you are most likely to try on the job?" and "What topics covered in this program seem confusing?"
It is often believed that reactions are related to learning and transfer. Research suggests that reactions have the strongest relationship with affective outcomes. Reactions are also significantly related to changes in declarative and procedural knowledge.
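To illustrate how reaction data might be scored, the hypothetical sketch below averages 5-point Likert responses by dimension. The item wordings, dimension names, and ratings are all invented for the example.

```python
# Hypothetical reaction-survey scoring: average 5-point Likert responses
# (1 = strongly disagree ... 5 = strongly agree) by dimension.
from statistics import mean

# Invented items, each tagged with the dimension it measures.
responses = {
    ("content", "The material was relevant to my job."):    [5, 4, 4, 5, 3],
    ("content", "The examples helped me understand."):      [4, 4, 5, 4, 4],
    ("trainer", "The trainer explained concepts clearly."): [5, 5, 4, 5, 4],
    ("environment", "The facilities supported learning."):  [3, 4, 3, 4, 3],
}

# Aggregate ratings per dimension across all items and trainees.
scores = {}
for (dimension, _item), ratings in responses.items():
    scores.setdefault(dimension, []).extend(ratings)

for dimension, ratings in sorted(scores.items()):
    print(f"{dimension}: mean = {mean(ratings):.2f} (n = {len(ratings)})")
```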
Learning or Cognitive Outcomes
Learning or cognitive outcomes relate to familiarity with information, including principles, facts, techniques, procedures, and processes. Typically, paper-and-pencil tests or self-assessments are used to assess cognitive outcomes. Self-assessments are learners' estimates of how much they learned from training. Tests and quizzes are often preferred over self-assessments because self-assessments are only moderately related to learning; they are more strongly related to learners' reactions and their motivation to learn.
Behavior and Skill-Based Outcomes
Behavior and skill-based outcomes relate to proficiency with technical or motor skills and behavior. These outcomes include both the learning of skills and their transfer. Skill learning is often assessed by observing performance on work samples. Skill transfer is typically assessed by observing trainees on the job or through managerial and peer ratings.
Affective Outcomes
Affective outcomes include attitudes and motivation. Affective outcomes that might be collected include self-efficacy, employee engagement, motivation to learn, tolerance for diversity, safety attitudes, and customer service orientation. The attitude of interest depends on the training objectives. Affective outcomes can be measured using surveys.

Results
Results are outcomes used to determine the benefits of the training program to the company. Examples include reduced costs, increased employee retention, increased sales, increased production, and improved quality or customer service.
Return on Investment
Return on investment (ROI) involves comparing the training program's benefits in monetary terms to the program's costs.
- Benefits are the value the company receives from the training.
- Direct costs include the salaries and benefits of trainees, trainers, consultants, and any others involved in the training; program materials and supplies; equipment and facilities; and travel costs.
- Indirect costs include office supplies, facilities, equipment, and related expenses not directly related to the training program; travel and expenses not billed to one particular program; and training department management and staff support salaries.
DETERMINING WHETHER OUTCOMES ARE APPROPRIATE
An important issue in choosing outcomes is to determine whether they are appropriate.

Relevance
Criteria relevance refers to the extent to which training outcomes are related to the learned capabilities emphasized in the program. One way to ensure relevance is to choose outcomes based on the learning objectives for the program. There are two ways outcomes may lack relevance: criterion contamination and criterion deficiency.
- Criterion contamination refers to the inclusion of inappropriate or irrelevant outcomes.
- Criterion deficiency refers to the omission of important information.

Reliability
Reliability is the degree to which outcomes can be measured consistently over time. Predominantly, we are concerned with consistency over time, such that a reliable test contains items whose meaning or interpretation does not change over time (a minimal test-retest sketch follows).
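One common way to quantify consistency over time is a test-retest correlation: administer the same test twice and correlate the two sets of scores. A minimal sketch, with invented scores:

```python
# Test-retest reliability as the Pearson correlation between two
# administrations of the same test (invented scores for illustration).
from statistics import correlation  # available in Python 3.10+

time1 = [72, 85, 90, 64, 78, 88, 70, 95]
time2 = [70, 83, 92, 66, 75, 90, 72, 93]

r = correlation(time1, time2)
print(f"test-retest reliability r = {r:.2f}")  # values near 1.0 indicate consistency
```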
Discrimination
Discrimination refers to whether performance on the outcome reflects true differences in performance. For example, we want tests that can discriminate between high and low performers. A test that may not discriminate is one that is too easy: both high and low performers would do well on it, so both groups would appear "good" even though they are not. A simple discrimination index is sketched below.
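A classic psychometric heuristic for checking whether a test item discriminates is to compare how often high scorers and low scorers answer it correctly. The sketch below computes that index (upper-group proportion correct minus lower-group proportion correct) with invented response data.

```python
# Item discrimination index: proportion correct in the high-scoring group
# minus proportion correct in the low-scoring group. Values near 0 suggest
# the item does not separate high from low performers (e.g., it is too easy).
def discrimination_index(high_group: list[int], low_group: list[int]) -> float:
    """Each list holds 1 (correct) or 0 (incorrect) for one test item."""
    return sum(high_group) / len(high_group) - sum(low_group) / len(low_group)

easy_item = discrimination_index(high_group=[1, 1, 1, 1, 1], low_group=[1, 1, 1, 1, 0])
good_item = discrimination_index(high_group=[1, 1, 1, 1, 0], low_group=[1, 0, 0, 0, 0])
print(f"too-easy item: D = {easy_item:.2f}")  # 0.20: weak discrimination
print(f"useful item:   D = {good_item:.2f}")  # 0.60: separates the groups
```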
Practicality
Practicality is the ease with which the outcome measures can be collected. One reason companies give for not including learning, performance, and behavior outcomes is that collecting them is too burdensome.

EVALUATION PRACTICES
Below are percentage estimates of organizations examining different training outcomes:
- Reactions: 92%
- Cognitive: 81%
- Behavior: 55%
- Results: 37%
- ROI: 18%
- None: 4%
Reactions and cognitive outcomes are the most frequently used outcomes in training evaluation. Despite the less frequent use of cognitive, behavioral, and results outcomes, research suggests that training can have a positive effect on these outcomes.
There are a number of reasons why companies fail to evaluate training. Learning professionals report that access to data and the tools needed to obtain it are the most significant barriers.
Which Training Outcomes Should Be Collected?
It is not always necessary to collect data on all of the training outcomes. While collecting data on all outcomes is ideal, doing so may not be necessary, depending on the scope of the training, its strategic value, and practical considerations.
As much as possible, evaluation should include behavior or skill-based, affective, and results outcomes to determine the extent to which transfer occurred. (Reaction and cognitive measures do not help to measure transfer.) It is important to recognize the limitations of measuring only reactions and cognitive outcomes, the two most commonly measured outcomes.
The various training outcome measures are largely independent of each other. It cannot be assumed that positive reactions lead to greater transfer; research suggests that the relationships among the outcomes are small.
There are three types of transfer:
- Positive transfer is demonstrated when learning occurs along with positive changes in on-the-job behavior.
- No transfer is demonstrated when learning occurs without changes in on-the-job behavior.
- Negative transfer is evident when learning occurs, but on-the-job behavior is lower than pre-training levels.
Learning, behavior, and results should be measured after sufficient time has elapsed to determine whether training has had an influence on these outcomes.
EVALUATION DESIGNS
The design of the training evaluation determines the confidence that can be placed in the results. No training evaluator can be absolutely certain that the results of the evaluation are completely true, so the evaluator should strive for the most rigorous design possible.

Threats to Validity: Alternative Explanations for Evaluation Results
Threats to validity refer to factors that lead an evaluator to question either the believability of the study results or the extent to which the evaluation results are generalizable to other groups of trainees and situations.
- Internal validity is the believability of the study. An evaluation needs internal validity to provide confidence that the findings are due to training and not to another factor.
- External validity refers to the generalizability of the evaluation results to other groups and other situations.
Methods to control for threats to validity include:
- Use pre-tests and post-tests to determine the extent to which trainees have changed from pre-training to post-training. The pre-training measure essentially establishes a baseline.
- Use a control group (i.e., a group that participates in the evaluation study but does not receive the training) to rule out factors other than training as the cause of changes in the trainees.
- Randomly assign employees to the control and training groups. Randomization helps ensure that the members of the two groups are of similar makeup prior to the training.
Types of Evaluation Designs

Posttest Only
The posttest-only design involves collecting only post-training outcome measures. This design would be strengthened by the use of a control group to rule out alternative explanations. It is appropriate when trainees can be expected to have similar levels of knowledge, behavior, and so on prior to training.

Pretest/Posttest
This design involves collecting both pre-training and post-training outcome measures to determine whether a change has occurred after training.

Pretest/Posttest with Comparison Group
This design includes pre-training and post-training outcome measurements and a control group. If the post-training improvement is greater for the group that receives training, there is evidence that training was responsible for the change; the sketch below illustrates this logic.
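The logic of the pretest/posttest-with-comparison-group design can be expressed as a difference-in-differences: the estimated training effect is the trained group's improvement minus the comparison group's improvement. A minimal sketch with invented scores:

```python
# Difference-in-differences estimate for a pretest/posttest design with a
# comparison group (all scores invented for illustration).
from statistics import mean

trained_pre, trained_post = [60, 65, 58, 62], [78, 80, 75, 79]
control_pre, control_post = [61, 63, 59, 64], [64, 66, 61, 67]

trained_gain = mean(trained_post) - mean(trained_pre)  # change in the trained group
control_gain = mean(control_post) - mean(control_pre)  # change due to other factors
effect = trained_gain - control_gain                   # improvement attributable to training

print(f"trained gain = {trained_gain:.1f}, control gain = {control_gain:.1f}")
print(f"estimated training effect = {effect:.1f} points")
```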
Time Series
The time series design involves collecting outcome measurements at periodic intervals pre- and post-training. A comparison group may also be used. The strength of this design can be improved by using reversal, which refers to a time period in which participants no longer receive training. This design allows for an analysis of the stability of training outcomes over time.

Solomon Four-Group
The Solomon four-group design combines the pretest/posttest comparison group design and the posttest-only control group design. It involves four groups:
1. Pre-test, treatment, post-test
2. Pre-test, no treatment, post-test
3. No pre-test, treatment, post-test
4. No pre-test, no treatment, post-test
This design provides the most control over threats to internal and external validity.
Considerations in Choosing an Evaluation Design
There are several reasons why no evaluation, or a less rigorous design, may be appropriate:
- Managers and trainers may be unwilling to devote the time.
- Managers and trainers may lack the expertise to evaluate.
- The company may view training as an investment from which it expects little or no return.
A more rigorous evaluation design should be considered when:
- The evaluation results can be used to change the program.
- The training is ongoing and has the potential to affect many employees.
- The training program involves multiple classes and a large number of trainees.
- Cost justification for training is based on numerical indicators.
- Trainers or others in the company have the expertise to evaluate.
- The cost of training creates a need to show that it works.
- There is sufficient time for conducting an evaluation.
- There is interest in measuring change from pre-training levels or in comparing two or more different programs.
Evaluation designs without pretesting or comparison groups are most appropriate when you are interested only in whether a specific level of performance has been achieved, not in how much change has occurred.
Sometimes naturally occurring comparison groups are available. This can occur because of the realities of scheduling employees to attend training, when not all employees can attend at once. When this occurs, for example, a pre-test/post-test design could be employed with a control group.
DETERMINING RETURN ON INVESTMENT
ROI is an important training outcome. ROI can be assessed by conducting a cost-benefit analysis, which determines the net economic benefits of training using accounting methods. There is increased interest in measuring the ROI of training because of the need to show results. However, because ROI analysis can be costly, it should be limited to training programs in which a significant investment was made.
Training cost information is important for several reasons:
- to understand total expenditures for training, including direct and indirect costs
- to compare the costs of alternative training programs
- to evaluate the proportion of the training budget spent on the development of training, administrative costs, and evaluation, as well as how much is spent on various types of employees
- to control costs
The process of determining ROI includes:
- understanding the objectives of the training program
- isolating the effects of training from other factors that might influence the data
- converting data to a monetary value and calculating ROI
Determining Costs
One method to determine costs is the resource requirements model, which compares equipment, facilities, personnel, and materials costs across different stages of the training process. This model can help determine overall differences in costs among training programs, and costs incurred at different stages of the training process can be compared across groups.
Accounting methods can also be used to calculate costs. There are seven categories of cost sources (a minimal tally appears below):
- program development or purchase
- instructional materials
- equipment and hardware
- facilities
- travel and lodging
- salary of the trainer and support staff
- the cost of either lost productivity or replacement workers while trainees are away from their jobs
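As a simple illustration, the sketch below tallies invented cost figures across the seven categories; all amounts are hypothetical.

```python
# Hypothetical tally of training costs across the seven cost categories.
costs = {
    "program development or purchase": 20_000,
    "instructional materials": 4_000,
    "equipment and hardware": 6_000,
    "facilities": 3_000,
    "travel and lodging": 5_000,
    "trainer and support staff salaries": 12_000,
    "lost productivity / replacement workers": 10_000,
}

total_cost = sum(costs.values())
print(f"total training cost = ${total_cost:,}")  # $60,000 in this example
```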
Determining Benefits
To identify benefits, one must review the original reasons the training was conducted. A number of methods may be helpful for identifying training benefits, including:
- technical, practitioner, and academic literature that summarizes the benefits of training programs
- pilot training programs that assess the benefits for a small group of trainees before the company commits more resources
- observing successful job performers to determine what they do differently from unsuccessful ones
- asking trainees and managers to provide estimates of benefits
Calculating ROI
To calculate return on investment, follow these steps (a worked sketch follows the list):
1. Identify outcomes.
2. Place a monetary value on the outcomes.
3. Determine the annual change in outcomes.
4. Obtain the annual amount of benefits by multiplying the change in outcomes by the monetary value.
5. Determine training costs.
6. Calculate the net benefit by subtracting training costs from benefits.
7. Calculate ROI by dividing net benefits by costs. The ROI gives an estimate of the dollar return expected from each dollar invested in training.
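A worked sketch of these steps, continuing the invented $60,000 cost figure from the tally above; the outcome, its monetary value, and the annual change are likewise hypothetical.

```python
# Worked ROI sketch following the steps above (all figures invented).
units_of_change_per_year = 500  # step 3: annual change in the outcome
value_per_unit = 300            # step 2: monetary value per unit of outcome
annual_benefit = units_of_change_per_year * value_per_unit  # step 4: $150,000
training_cost = 60_000          # step 5: total cost (see tally above)

net_benefit = annual_benefit - training_cost  # step 6: $90,000
roi = net_benefit / training_cost             # step 7: 1.5

print(f"net benefit = ${net_benefit:,}")
print(f"ROI = {roi:.2f} (each $1 invested returns ${roi:.2f} beyond its cost)")
```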
Other Methods of Cost-Benefit Analysis
Utility analysis assesses the dollar value of training based on estimates of the difference in job performance between trained and untrained employees, the number of employees trained, the length of time the program is expected to influence performance, and the variability in job performance in the untrained group of employees. Utility analysis employs a highly sophisticated formula that requires the use of pretest and posttest measures with a comparison group; a sketch of one common formulation follows.
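The inputs listed above map onto the widely cited Brogden-Cronbach-Gleser utility formula, ΔU = T × N × d_t × SD_y − N × C. The sketch below uses that formulation with invented numbers; note that this is one of several utility models, and the chapter does not specify which one it intends.

```python
# Utility analysis sketch using the Brogden-Cronbach-Gleser formulation
# (one common utility model; all input values are invented).
def training_utility(T: float, N: int, d_t: float, SD_y: float, C: float) -> float:
    """Estimated dollar value of a training program.

    T    -- years the training is expected to influence performance
    N    -- number of employees trained
    d_t  -- true difference in job performance between trained and
            untrained employees, in standard-deviation units (effect size)
    SD_y -- standard deviation of job performance in dollars (untrained group)
    C    -- cost of training one employee
    """
    return T * N * d_t * SD_y - N * C

delta_u = training_utility(T=2.0, N=100, d_t=0.4, SD_y=10_000, C=1_500)
print(f"estimated utility = ${delta_u:,.0f}")  # $650,000 with these inputs
```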
Other types of economic analyses evaluate training as it benefits the firm or the government, using direct and indirect costs, incentives paid by the government for training, wage increases received by trainees as a result of the training, tax rates, and discount rates.

Practical Considerations in Determining Return on Investment
Training programs best suited for ROI analysis have clearly identified outcomes, are not one-time events, are highly visible in the company, are strategically focused, and have effects that can be isolated.

Success Cases and Return on Expectations
Success cases are concrete examples of the impact of training that show how learning has led to results that the company finds worthwhile and that managers find credible. Success cases do not attempt to isolate the influence of training, but rather provide evidence that training was useful.
Return on expectations (ROE) demonstrates to key business stakeholders, such as top-level managers, that their expectations about training have been satisfied. ROE depends on establishing a business partnership with key stakeholders from the start of a training program through its evaluation.
MEASURING HUMAN CAPITAL AND TRAINING ACTIVITY
It is important to remember that evaluation can involve determining the extent to which training contributes to business strategy and helps achieve business goals.
Metrics are valuable for benchmarking purposes, for understanding the current amount of training activity in a company, and for tracking historical trends in training activity.
Another way to understand the value of training is through comparisons with other companies; for example, companies can review the ATD report that summarizes company-provided data in the U.S.

Big Data and Workforce Analytics
Big data refer to complex datasets developed by compiling data across different organizational systems, including marketing and sales, HR, finance, accounting, customer service, and operations. Three dimensions characterize big data: volume, variety, and velocity.
- Volume refers to the amount of available data.
- Variety refers to the large number of sources and types of data.
- Velocity refers to the huge amount of data being generated and the speed with which it must be captured, evaluated, and made useful.
In the present context, big data allow decisions about human capital to be made based on data, rather than on intuition and conventional wisdom. In a training context, big data can be used to:
- evaluate the effectiveness of programs
- determine their impact on business results
- develop predictive models for forecasting training needs, course enrollments, and outcomes
Workforce analytics refers to the practice of using quantitative and scientific methods to analyze data from HR databases, corporate financial statements, employee surveys, and other data sources; a minimal sketch follows.
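As an illustration of workforce analytics applied to training, the sketch below compares a business outcome for employees who did and did not complete a program; the field names and figures are invented.

```python
# Hypothetical workforce-analytics sketch: compare a business outcome for
# employees who did and did not complete a training program.
from statistics import mean

# Invented records joined from an LMS and a sales system.
employees = [
    {"id": 1, "completed_training": True,  "quarterly_sales": 120_000},
    {"id": 2, "completed_training": True,  "quarterly_sales": 105_000},
    {"id": 3, "completed_training": False, "quarterly_sales":  90_000},
    {"id": 4, "completed_training": False, "quarterly_sales":  98_000},
    {"id": 5, "completed_training": True,  "quarterly_sales": 110_000},
]

trained   = [e["quarterly_sales"] for e in employees if e["completed_training"]]
untrained = [e["quarterly_sales"] for e in employees if not e["completed_training"]]

print(f"mean sales (trained)   = ${mean(trained):,.0f}")
print(f"mean sales (untrained) = ${mean(untrained):,.0f}")
# Note: a raw gap like this does not isolate the effect of training; a
# rigorous design (e.g., a comparison group with pre/post measures) is needed.
```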
