Insight from hindsight: A practitioner’s perspective on a causal approach to performance improvement
This discussion paper explores some of the author’s thoughts on performance problems in high hazard industries.
Overall, they argue that due to a few repeating impediments, organisations have a limited ability to “effectively find, learn, and eliminate the causes” and “are left with repeating periods of performance problems despite well-intended efforts to improve”.
1. We trust the “system” will protect us
The author notes that incidents resulting in significant losses or injury are often considered nearly impossible within organisations.
Hence, “We are fundamentally surprised by incidents of significant magnitude”, and “We seem to hold an inherent and often unjustified trust in the system around us to produce the outcomes we want and to protect us from the outcomes and harm we want to avoid”.
Our unyielding trust in these systems “hides the underlying defects in the system from our view”. So, as long as that trust remains, the problems remain embedded but masked within our approaches.
2. We do not see the precursors
Post-incident, it’s common to conclude that elements of the incident have happened before elsewhere.
Prior events and underlying contributing factors are “often known at some level in parts of the organization before similar more significant incidents occur”.
These issues often exist for long periods “under the threshold we would deem to be concerning or even noticeable”. [** Barry Turner referred to this as our perceptual horizon.]
Multiple causal paths exist concurrently in the genesis of many incidents, and they often interact in ways that are difficult to predict in advance. The individual paths on their own “often appear insignificant but when aligned with other causal actions and conditions they can yield some very significant outcomes”.
In the author’s experience, it’s “rare to see large impacts occur with less than six concurrent causal paths and incidents may contain more than twice that many (and these are just the paths we know about)”.
Although the author adopts a linear perspective of causality, we can readily convert the multiple linear pathways into a systems network, and the same argument holds.
And in their view, it’s pretty fortunate that significant incidents usually require a significant number of concurrent causal paths to align. [** Again, if you prefer, apply your preferred systems lens.]
But in any case, the author argues that many of these causal paths or causal nodes exist at any time within organisations.
3. We introduce new causes with solutions
Here it’s argued that “The causes of current performance originate from many sources including our own solutions”.
Based on the author’s own data, about “30% of the causes discovered in significant incidents were the direct result of solutions put in place to either solve the same or other problems”.
Hence, there’s an unintended risk / byproduct in taking “well-intended action, especially if the action is broad and generalized (i.e. not precise)”.
4. Actions can create an illusion of progress and miss cause
It’s argued that many of the issues or constraints that we observe are symptoms. And addressing visible symptoms “provides us the immediate benefit of restoring functionality but also potentially creates the illusion of longer-term performance improvement”.
5. We have a part
Humans like to take credit for the success of performance or outcomes, but it’s “harder for us to accept that we also have a part in performance when it fails to meet expectations or when it leads to significantly bad outcomes”.
We have baked-in heuristics that shift blame onto factors external to ourselves, especially onto those closest in space and time to the event: we readily conclude what they did wrong or what they should have done differently.
“This ‘hindsight bias’ is seductive”.
It allows people, after the fact, to view an incident as having been far more predictable and avoidable than it was for the person caught up in the situation.
Next the author talks about “discovering causes” [** what others may call constructing causes] – specifically, the barriers that impede this ability within organisations.
1. Our performance is good – no need to investigate
Organisations often view good performance outcomes as direct indicators of performance capability. If the outcomes are good, or acceptable, then it’s easy to conclude that the underlying performance and systems must also be acceptable.
And it’s easy to conclude that if we keep doing what we’re doing, things will remain that way.
Nevertheless, “Many investigations clearly illustrate that the differences between experiencing good outcomes and experiencing undesired outcomes are minor”.
And often, the distinction between acceptable and unacceptable outcomes comes down to slight differences in the sequence of events or the conditions on the day – frequently due to chance.
Therefore, “Confidence lessens our curiosity” and our searches for better learning and understanding. E.g. we investigate the occasional adverse event for what went wrong, but not why things normally go right.
2. Our problems are small – no need to investigate
Small problems are viewed as isolated and localised issues. Hence, “This assumption is seductive as it allows us to deal with problems, which may in reality be systemic in nature, in a simplified way”.
But, viewing small problems as minor and isolated “does not alter the fact that they can align with other small problems and become significant in impact”.
And viewing things only as minor and local may “discount the interrelationships that exist just out of our sight”, and how they can interact and cascade across the system.
3. Our problems are caused by our failure to do the “right thing” – we need to investigate what we should do differently
Here it’s said that while investigations frequently document that “operators did not follow a procedure”, this captures what was not done, rather than what was done and why/how.
As in, investigations often go to some depth using counterfactuals – explaining what happened by what didn’t happen.
Though well-intentioned, our “normal defensive and solution reasoning orientations tend to drive us to look for what was missing, or what should have been done differently”. We search for what we understand or believe to be the case, rather than “the causes that we do not yet know”.
Next the author discusses what’s in the way of learning from our constructed ‘causes’. In this context, the author means what it takes to shift from our current accident and mental models (how we believe the accident occurred) to an alternative, better calibrated, model of reality.
For this author, this shift requires:
a. Letting go of what is believed to be true, and what was trusted
b. Grasping a different reality (via some mental or accident model), which may be more accurate but less certain
Problematically, “Learning competes with knowing – no new insight is fair competition with what I already believe”.
It’s argued that investigations often surface disconnects between what we have long believed and what was actually accurate or complete (e.g. disconnects between work-as-imagined (WAI) and work-as-done (WAD)).
We can readily and “tenaciously … hold onto our beliefs despite evidence to the contrary”. [** Woods and others have described this inability to update mental models as cognitive fixation. Clarke referred to a similar organisational process as the disqualification heuristic.]
Coming face to face with the realisation that our trust in our systems is misplaced, and that our systems are perfectly configured to deliver the ill effects we experience, leaves us with a loss of control and a sense of anxiety.
“Letting go of the old model precedes grasping a new and more realistic one”.
4. Accepting that human behavior is caused
Another area for advancement is letting go of old myopic beliefs about human agency and accountability.
That is, seeing ill effects purely through the lens that the outcomes “solely reside with an individual close to the event or solely with an individual’s actions”.
One result is anxiety about not holding people to account for the outcomes. This reaction seems to stem from beliefs like these (quoted directly from the paper):
· “If something bad happened then there must be someone who must ‘account’ for that”
· “We have rules and if in hindsight we find that these rules have not been followed then this must have caused the event and those individuals who did not follow the rules are at fault”
· “If we can not find an individual who is clearly accountable for the event, then we can not create an effective (and simple) solution to prevent repeating the incident”
However, post-incident, we often discover that “each individual’s actions were rational and based upon the situation and how they understood the system around them” [i.e. local rationality].
Hence, understanding how the context, environment, situation etc. shapes reasoning, mental models, pressures and behaviour “helps us understand the underlying system”.
So while assigning accountability to operators for not following procedures is “expedient and facilitates simple and direct action”, the underlying issues still remain.
Finally, the author explores barriers to effective corrective actions & improvements. I’ve skipped a bit here, but some points are:
1. Make learning an overt step in performance improvement
For one, “Pressing problems tend to create a great need for near-term action and can seduce us into merely reacting to the causes found (i.e. reacting to the causes instead of learning from them)”.
But without deeper learning, how we respond will be “based upon the same understanding that existed before the problem and therefore will likely be similar to action we have already taken”.
Leaders are central here: they create the ‘pause’ in organisational flow that provides the necessary conditions for learning.
2. Help managers build the conditions for finding-out
Again, a focus on current thinking and symptoms doesn’t shift learning. But “reducing confidence, lessening the need to know, and lengthening time frames of focus requires letting go of what we have trusted for years”.
Hence, we need to “loosen our grip on what has ‘worked’ for us before we can yet fully trust what might be accomplished by searching for cause”.
We can help managers entertain their own doubt by involving them directly, so they find out for themselves.
The author discusses more, but I’ve skipped it as this summary is long enough…
Ref: Stockholm, G. (2011). Insight from hindsight: A practitioner’s perspective on a causal approach to performance improvement. Safety Science, 49(1), 39–46.
Comments:
Founder & Director at TrendSafe, Contractor and Consultant: Great post. Any person who has any influence over safety management systems should take note of the following from this article: “Our problems are caused by our failure to do the ‘right thing’ – we need to investigate what we should do differently”. Unfortunately most continue with the same old tired and ineffective processes and get the same old result.
Interpreter of practice: ‘There’s an unintended risk / byproduct in taking “well-intended action, especially if the action is broad and generalized (i.e. not precise)”.’ Probably a lot of attempts to focus on ‘systems’ solutions, and to do double-loop learning, push in the direction of these broad and generalised solutions.
Workplace Violence, Bullying, & Harassment Prevention | Safety, Health & Wellbeing Manager: Sounds like a wicked problem: “Based on the author’s own data, about ‘30% of the causes discovered in significant incidents were the direct result of solutions put in place to either solve the same or other problems’.” I was talking about this recently and whether body-worn cameras, in some situations, may actually increase the risks of violence.
HSE Leader / PhD Candidate: Study link: https://doi.org/10.1016/j.ssci.2010.03.008 | My site with more reviews: SafetyInsights.org | Shout me a coffee: https://buymeacoffee.com/benhutchinson | Safe AS LinkedIn group: https://www.linkedin.com/groups/14717868/