DR JOHN ROOKSBY
IN THIS LECTURE
The “new view” of human error
Rules and rule following
Cooperative work and crew resource management
THE KEGWORTH AIR DISASTER
OUTLINE
A plane crash on the 8th January 1989
British Midland Flight 92, flying from Heathrow to Belfast
Crashes by the M1 motorway near Kegworth, while attempting an emergency landing at East Midlands Airport
The plane was a Boeing 737-400, a new variant of the Boeing 737, in use by British Midland for less than two months
There were 118 passengers and 8 crew; 47 died and 74 were seriously injured
SEQUENCE OF EVENTS
• The pilots hear a pounding noise and feel vibrations
  (subsequently found to be caused by a fan blade breaking
  inside the left engine).
• Smoke enters the cabin and passengers sitting near the rear of
  the plane notice flames coming from the left engine
• The flight is diverted to East Midlands Airport
• The pilot shuts down the engine on the right
SEQUENCE OF EVENTS
• The pilots can no longer feel the vibrations, and do not notice that the vibration detector is still reporting a problem. The smoke disperses.
• The pilot informs the passengers and crew that there was a problem with the right engine and that it has been shut down.
• 20 minutes later, on approach to East Midlands Airport, the pilot increases thrust. This causes the left engine to burst into flames and cease operating.
• The pilots try to restart the left engine, but crash short of the runway.
WRONG ENGINE SHUT DOWN. WHY?
Incorrect assumption: The pilots believed the “bleed air” was taken from the right engine, and therefore that the smoke must be coming from the right. Earlier 737 variants took bleed air from the right engine, but the 737-400 did not. Psychologists call this a mistake in “knowledge-based performance”. (A software analogue is sketched below.)
Design issues: The pilots had no direct view of the engines, so they relied on other information sources to explain the vibrations. The vibration meters were tiny, and had a new style of digital display. The vibration sensors had been inaccurate on earlier 737s, but not on the 737-400.
Inadequate training: A one-day course, and no simulator training.
ERROR NOT TRAPPED. WHY?
Coincidence: The smoke disappeared after shutting down the right engine, and the vibrations lessened. Psychologists call this “confirmation bias”.
Lapse in procedure: After shutting down the right engine the pilot began checking all meters and reviewing decisions, but stopped after being interrupted by a transmission from the airport asking him to descend to 12,000 ft.
Lack of communication: Some cabin crew and passengers could see that the left engine was on fire, but did not inform the pilot, even when the pilot announced he was shutting down the right engine.
Design issue: The vibration meters would have shown a problem with the left engine, but were too difficult to read. There was no alarm.
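The missing alarm can be made concrete in software. A minimal sketch in Python, assuming a hypothetical threshold, units, and sensor interface (nothing here is taken from the 737-400):

```python
# Minimal sketch of the missing alarm. The threshold and units are
# hypothetical; the design point is that an explicit alarm does not depend
# on a pilot noticing a tiny gauge during a period of high workload.
VIBRATION_ALARM_THRESHOLD = 4.0  # hypothetical units

def vibration_alarms(readings: dict[str, float]) -> list[str]:
    """Return an alarm message for each engine vibrating above the limit."""
    return [
        f"ALARM: high vibration on {engine} engine ({level:.1f})"
        for engine, level in readings.items()
        if level >= VIBRATION_ALARM_THRESHOLD
    ]

# After the right engine is shut down, the left engine still reads high:
for message in vibration_alarms({"left": 5.2, "right": 0.3}):
    print(message)  # ALARM: high vibration on left engine (5.2)
```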
COCKPIT OF A BOEING 737-400
VIEWPOINTS
Traditional engineering view
•   The crash was caused by an engine failure. Therefore we
    must design better engines.
Traditional managerial view
•   The crash was caused by the pilots. We must hire better
    pilots.
The socio-technical systems engineering view, or “new view”
•   The crash had no single cause, but involved problems in testing, design, training, teamwork, communications, procedure following, decision making, poor “upgrade” management (and more)
•   We need better engines, but we also need to expect problems
    to happen and to be adequately prepared for them
THE “NEW VIEW” OF HUMAN ERROR
The old view:
• Human error is the cause of accidents
• Systems are inherently safe and people introduce errors
• Bad things happen to bad people

The new view:
• Human error is a symptom of trouble deeper inside a system
• Systems are inherently unsafe and people usually keep them running well
• All humans are fallible
THE “NEW VIEW” OF HUMAN ERROR
The “new view” is not new! The name has been around for 20 years.
It draws the emphasis away from modelling human error, and towards understanding what underlies human actions when operating technology
• How do people get things right?
It argues that too much emphasis is placed on “the sharp end”, and that error is symptomatic of deeper trouble
It opposes the “blame culture” that has arisen in many organisations. We are too quick to blame system operators when managers and engineers are at fault.
HUMAN RELIABILITY
Humans don’t just introduce errors into systems; they are often responsible for avoiding and correcting them too.
What do people really do when they are operating a technology?
• Very little human work is driven by a clear and unambiguous set of recipes or processes, even when these are available
• All human work is situationally contingent. Work must inevitably be more than following a set of steps.
• If people work to rule, accidents can happen. For example, prior to the sinking of the MS Estonia, a crew member did not report a leak because it was not his job.
CORRECT PROCEDURE?
There is not always a ‘correct’ procedure by which to judge an action.
Sometimes trial-and-error processes are necessary
• In young organisations, best practices may not yet exist
• New and unusual situations may occur in which a trial-and-error approach is appropriate
• Sometimes it is appropriate to play or experiment. This is how innovation often happens.
So deciding when something is an error, and judging whether an action was appropriate to a set of circumstances, can be highly context dependent.
FIELDWORK
Often we don’t notice that people need to do things to keep complex systems running smoothly.
•   Fieldwork is an important aspect of understanding how
    systems are operated and how people work.
STUDYING SUCCESS
It is important to study and understand ordinary work.
We can also learn lessons from “successful failures”, including:
• The Apollo 13 mission
• The Airbus A380 engine explosion over Batam Island
• The Sioux City crash
However, accounts of successful failures can turn into a form of hero worship, and organisations that experience these kinds of success against the odds can build a false sense of invulnerability.
PROBLEMS WITH AUTOMATION
As work becomes automated, engineers often make the
mistake of automating the aspects that are easy to automate.
• The Fitts list MABA-MABA approach can lead to a
  dangerous lack of awareness and control for systems
  operators.
• The “paradox of automation” is that automation creates
  and requires new forms of labour.
• The major design problem is no longer how to support
  workflow, but how to support awareness across a system
  and organisation, and how to support appropriate kinds of
  intervention
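To make “supporting awareness” concrete, here is a minimal sketch of an automation component that announces every mode change to its operator rather than acting silently. The controller, interface, and messages are all assumptions for illustration, not any real avionics API.

```python
# Hypothetical sketch: automation that keeps the operator aware of what it
# is doing. Every mode change is announced rather than made silently, so
# the operator keeps an accurate picture and can intervene early.
from typing import Callable

class EngineController:
    def __init__(self, notify: Callable[[str], None]):
        self.mode = "normal"
        self.notify = notify  # channel to the operator: display, log, voice

    def set_mode(self, new_mode: str, reason: str) -> None:
        old_mode, self.mode = self.mode, new_mode
        # The announcement is the point: silent mode changes erode awareness.
        self.notify(f"mode change {old_mode} -> {new_mode}: {reason}")

controller = EngineController(notify=print)
controller.set_mode("reduced thrust", "vibration above limit on left engine")
```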
CREW RESOURCE MANAGEMENT
One approach to improving reliability and reducing human
error is crew resource management (CRM)
• Developed in the aviation industry, and now widely used
• Formerly Crew Resource Management
CRM Promotes
• The effective use of all resources (human, physical,
  software)
• Teamwork
• Proactive accident prevention
CREW RESOURCE MANAGEMENT
The focus of CRM is upon
• Communication: How to communicate clearly and
  effectively
• Situational awareness: How to build and maintain an
  accurate and shared picture of an unfolding situation
• Decision making: How to make appropriate decisions
  using the available information. (and how to make
  appropriate information available)
• Teamwork: Effective group work, effective leadership, and
  effective followership.
• Removing barriers: How to remove barriers to the above
KEY POINTS
It can be too narrow to focus on human error
• Human errors are usually symptomatic of deeper
  problems
• Human reliability is not just about humans not making
  errors, but about how humans maintain dependability
We cannot rely on there being correct procedures for every situation. Procedures are important, but we need to support cooperative working.
Design approaches, as well as human and organisational
approaches, can be taken to support human reliability.


Editor's Notes

  • #9: 20 minutes to trap the error.
  • #11: Not just planes, but ambulance dispatch, Terminal 5, passport issuing, enterprise systems.