How to create a better Risk Matrix (kind of…)
Or: How to avoid the dreaded “Q” word…
By Osama Salah
This is a typical Risk Matrix (Likelihood × Impact):

Likelihood \ Impact | Negligible | Minor   | Moderate | Significant | Severe
Very Likely         | Low Med    | Medium  | Med Hi   | High        | High
Likely              | Low        | Low Med | Medium   | Med Hi      | High
Possible            | Low        | Low Med | Medium   | Med Hi      | Med Hi
Unlikely            | Low        | Low Med | Low Med  | Medium      | Med Hi
Very Unlikely       | Low        | Low     | Low Med  | Medium      | Medium
With the good intention of making it easier, some add labels to provide guidance…

Likelihood \ Impact                        | Negligible (0 - $100K) | Minor ($101K - $1M) | Moderate ($1M - $50M) | Significant ($50M - $100M) | Severe (> $100M)
Very Likely (10 times per year or more)    | Low Med | Medium  | Med Hi  | High   | High
Likely (5 - 9 times per year)              | Low     | Low Med | Medium  | Med Hi | High
Possible (2 - 4 times per year)            | Low     | Low Med | Medium  | Med Hi | Med Hi
Unlikely (Once per year)                   | Low     | Low Med | Low Med | Medium | Med Hi
Very Unlikely (Once every 3 years or less) | Low     | Low     | Low Med | Medium | Medium
But, looking at that labeled matrix, you can’t help but wonder…
• Am I really going to deal with a $1M loss the same way as a $50M loss?
• Am I really going to treat something that happens 11 times per year the same as something that happens 100 times per year?
• Who came up with these seemingly arbitrary buckets? (I did, for this presentation, but in real life there isn’t much more of a rationale behind them either.)
What if I could define my own buckets? Let’s imagine that and start with a blank slate: an empty Likelihood / Impact chart…
Let’s say you are pretty sure that:
• Impact is somewhere between $70K and $200K.
• Likelihood (frequency of loss) is somewhere between once and 3 times per year.
These will be your self-defined buckets.
We designed our own buckets, but what now?
We can actually do some useful math with these ranges. To make them more useful, we will add one more point to each bucket (and start calling them “ranges”).
If the loss is between $70K and $200K, what do we believe is most likely? Are we going to see losses close to $70K more often, losses closer to $200K more often, or something in the middle?
To represent that belief, we express our range as a distribution and add a “most likely” point somewhere between $70K and $200K.
Let’s assume our analysis concludes that the most likely loss is $100K.
This means that if we observe multiple occurrences of losses over time, we will find most of them around $100K.
Plotted, this would look like a triangular distribution, but that shape isn’t very “natural”. We can smooth it out using a PERT distribution, which is defined by the same three points: Min, ML (Most Likely), and Max.
(Charts: triangular distribution vs. smooth PERT distribution, with Min, ML, and Max marked.)
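As an illustration only (not the deck’s Analytica/FAIR model), a PERT distribution can be sampled by rescaling a Beta distribution onto the Min/Max range. The sketch below uses Python with NumPy and the slide’s impact range of $70K / $100K / $200K; the shape parameter lam=4 is the conventional PERT choice, and the function name is my own.

```python
# A minimal sketch, not the deck's exact model: sampling a (modified)
# PERT distribution by rescaling a Beta distribution to [min, max].
import numpy as np

def sample_pert(minimum, most_likely, maximum, size, lam=4.0, rng=None):
    """Draw samples from a PERT distribution over [minimum, maximum]."""
    rng = rng or np.random.default_rng()
    span = maximum - minimum
    alpha = 1 + lam * (most_likely - minimum) / span
    beta = 1 + lam * (maximum - most_likely) / span
    return minimum + span * rng.beta(alpha, beta, size)

if __name__ == "__main__":
    # Impact range from the slides: min $70K, most likely $100K, max $200K.
    losses = sample_pert(70_000, 100_000, 200_000, size=100_000)
    print(f"mean per-event loss ~ ${losses.mean():,.0f}")
```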
Let’s assume our analysis concludes that the most likely frequency is twice per year.
This means that if we observe losses over several years, we will find that in most years there were 2 occurrences.
(Charts: the frequency range plotted as a triangular distribution and as a smooth PERT distribution.)
We take both distributions and throw them into a computational engine… (Monte Carlo analysis).
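To make the computational-engine step concrete, here is a minimal Monte Carlo sketch in Python/NumPy. It is my own illustration, not the deck’s Analytica/FAIR model: each simulated year draws an event count from the frequency distribution and a per-event loss from the impact distribution, then sums the year’s losses. The exact statistics will differ somewhat from the slides.

```python
# A minimal Monte Carlo sketch (assumptions: frequency and per-event loss
# are independent, and yearly event counts are rounded to whole events).
import numpy as np

rng = np.random.default_rng(42)

def sample_pert(minimum, most_likely, maximum, size, lam=4.0):
    span = maximum - minimum
    alpha = 1 + lam * (most_likely - minimum) / span
    beta = 1 + lam * (maximum - most_likely) / span
    return minimum + span * rng.beta(alpha, beta, size)

n_years = 100_000

# Frequency: 1 to 3 events per year, most likely 2.
event_counts = np.rint(sample_pert(1, 2, 3, n_years)).astype(int)

# Impact per event: $70K to $200K, most likely $100K; sum losses per year.
annual_loss = np.array([
    sample_pert(70_000, 100_000, 200_000, k).sum() if k > 0 else 0.0
    for k in event_counts
])

print(f"Min ~ ${annual_loss.min():,.0f}")
print(f"Avg ~ ${annual_loss.mean():,.0f}")
print(f"Max ~ ${annual_loss.max():,.0f}")
```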
And the output will be…
Where did all my colors go!!!??!
Better Risk Matrix ... kind of
Reading the output chart of simulated annualized loss exposure:
• The most likely loss exposure is somewhere around $200K / year.
• The average annualized loss exposure is about $223K / year (not readable directly from the chart; it comes from the calculated statistics).
• The maximum loss exposure is about $505K / year.
• The minimum loss exposure is about $95K / year.
Calculated statistics: Min $96.04K, Avg $223K, Max $506.4K.
The same data, but cumulative: 90% of annualized losses will be below $310K.
If 90% expresses your risk appetite, then don’t spend more than that to treat the risk.
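The cumulative view can be read straight off the simulated samples. The sketch below uses my own helper functions and hypothetical stand-in numbers (not the deck’s output) to show how the exceedance probabilities and the 90th-percentile spending ceiling would be computed from the `annual_loss` array produced above.

```python
# A minimal sketch of the cumulative view, using hypothetical samples.
import numpy as np

def loss_exceedance_curve(annual_losses, thresholds):
    """P(annualized loss > t) for each threshold t, estimated from samples."""
    annual_losses = np.asarray(annual_losses)
    return np.array([(annual_losses > t).mean() for t in thresholds])

def treatment_budget(annual_losses, appetite_percentile=90):
    """Loss level that `appetite_percentile`% of simulated years stay below."""
    return np.percentile(annual_losses, appetite_percentile)

# Hypothetical stand-in samples; in practice, pass the simulated
# `annual_loss` array from the Monte Carlo sketch above.
demo_losses = np.array([120_000, 180_000, 240_000, 310_000, 150_000])
print(loss_exceedance_curve(demo_losses, [100_000, 200_000, 300_000]))
print(f"Spend no more than ~ ${treatment_budget(demo_losses):,.0f}")
```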
Mitigating the Risk
• If I spend $X on a particular treatment plan, I can reduce the loss exposure and the loss frequency. Let’s say:

      Frequency | Loss
Min   1         | $50K
ML    1         | $80K
Max   2         | $140K

• Throw that back into the computational engine…
Result after risk mitigation
Before treatment, 90% of annualized losses were below $310K. After treatment, 90% of losses are below $130K.
If this residual risk is acceptable, then risk-mitigation implementation costs should not exceed $180K ($310K - $130K).
Otherwise, go look for a more effective treatment plan to reduce the risk further.
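As a sketch of that comparison (again my own simplified model, so the figures will not match the deck’s Analytica output exactly), the treated scenario is simulated with the reduced frequency and loss ranges, and the difference between the two 90th percentiles is the most the mitigation is worth spending.

```python
# A minimal before/after comparison under the same simplified PERT +
# Monte Carlo assumptions as the earlier sketches.
import numpy as np

rng = np.random.default_rng(7)

def sample_pert(minimum, most_likely, maximum, size, lam=4.0):
    span = maximum - minimum
    alpha = 1 + lam * (most_likely - minimum) / span
    beta = 1 + lam * (maximum - most_likely) / span
    return minimum + span * rng.beta(alpha, beta, size)

def simulate_annual_loss(freq_range, loss_range, n_years=100_000):
    counts = np.rint(sample_pert(*freq_range, size=n_years)).astype(int)
    return np.array([
        sample_pert(*loss_range, size=k).sum() if k > 0 else 0.0
        for k in counts
    ])

before = simulate_annual_loss((1, 2, 3), (70_000, 100_000, 200_000))
after = simulate_annual_loss((1, 1, 2), (50_000, 80_000, 140_000))

p90_before = np.percentile(before, 90)   # ~$310K in the deck
p90_after = np.percentile(after, 90)     # ~$130K in the deck
print(f"Max justified mitigation spend ~ ${p90_before - p90_after:,.0f}")
```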
• And that’s how you end up doing quantitative risk management without noticing.
• Although we just made assumptions, we really haven’t used any data that we wouldn’t have used to fill in the risk matrix.
• The risk matrix, however, would not have helped you figure out whether you invested enough or too much.
Resources
This was a very simplified introduction to quantitative risk management. You can improve on it by making better estimates (calibration) and by better understanding the relationships and building blocks of risk (the FAIR ontology). Have a look at the resources below:
• Risk Analysis (O-RA)
• Risk Taxonomy (O-RT)
• Blog

A computational engine
• Download the free version of Analytica.
• Download my simple FAIR model.
