Algorithmic Fairness
Navdeep Sharma
Data & AI Architect - Accenture, The Dock
Shift AI
Agenda
01 Algorithmic Bias: Why is it important?
02 Set the scene: Fairness Definitions
03 Introduction to Fairness Metrics
04 Introduction to Accenture Fairness Tool
Our society's growing reliance on algorithmic decision-making, particularly in social and economic areas, has raised a concern that these systems may inadvertently discriminate against certain groups.
“Business needs to consider
society as a stakeholder.” -
Cennydd Bowles, Future Ethics
Introduction | Context
Objective decision-making is a challenge.
“Algorithmic Fairness is a practice that
aims to mitigate unconscious bias against
any individual or group of people in
Machine Learning.”
“Data biases are inevitable. We must
design algorithms that account for
them.”
“The model summarizes the data correctly. If
the data is biased it is not the algorithm’s
fault.”
VS
From “Tutorial: 21 fairness definitions and their politics” on YouTube
References:
https://www.theverge.com/2018/1/12/16882408/google-racist-gorillas-photo-recognition-algorithm-ai
https://www.theverge.com/2018/10/10/17958784/ai-recruiting-tool-bias-amazon-report
Definitions:

Protected feature:
“The Treaty on the Functioning of the European Union (TFEU) prohibits discrimination on grounds of nationality. It also encourages combating discrimination based on sex, racial or ethnic origin, religion or belief, disability, age or sexual orientation.”

Status Definition: Privileged vs Unprivileged
Example: In criminal risk assessment tools, a common example of a protected feature is race, with the designated levels: white defendants (privileged group) vs black defendants (unprivileged group).

Group Bias vs Individual Bias:
“Group fairness approaches partition the population/sample into groups and seek to equalize a statistical measure across the groups.”
“Individual fairness seeks to understand whether similar individuals are treated similarly, irrespective of their membership of any of the groups.”
Metrics Introduction
Myriad of metrics: which one to choose?

• Mutual Information: identifies proxies for the protected feature
• Prevalence Analysis: fraction of a population that satisfies a given outcome
• Disparate Impact: quantifies disparity of outcomes for different protected groups
• Predictive Parity - False Positive Rates: proportion of all negatives that still yield positive test outcomes
• Predictive Parity - True Positive Rates
• Predictive Parity - Positive Predictive Power
• Predictive Parity - False Negative Rates
• Individual fairness
• …

“Tutorial: 21 fairness definitions and their politics” on YouTube
Mutual Information
Approach: Quantifies the amount of information obtained
about one random variable through observing the other
random variable.
Mutual Information for a protected variable assesses the relationship between the protected variable and the unprotected variables (which could otherwise be used in the model build as proxies for sensitive features and generate bias).
Objectives:
• Identify proxies
• Provoke further analysis:
• Is ‘blindness’ w.r.t. protected feature enough?
• Predictive power of the proxies with respect to the target model
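As an illustration (not part of the original deck), a minimal Python sketch of such a proxy check might look as follows. It assumes a pandas DataFrame whose columns include a protected attribute; the column names and the binning choice are illustrative assumptions.

```python
# Minimal sketch (illustrative, not the Accenture tool): score each candidate
# feature by its mutual information with a protected attribute to flag proxies.
import pandas as pd
from sklearn.metrics import mutual_info_score

def proxy_scores(df: pd.DataFrame, protected: str, bins: int = 10) -> pd.Series:
    """Mutual information between each feature and the protected attribute."""
    y = df[protected].astype(str)
    scores = {}
    for col in df.columns.drop(protected):
        x = df[col]
        if pd.api.types.is_numeric_dtype(x):
            x = pd.cut(x, bins=bins)        # discretise numeric features first
        scores[col] = mutual_info_score(x.astype(str), y)
    return pd.Series(scores).sort_values(ascending=False)

# Features with the highest scores are potential proxies for the protected
# attribute (e.g. 'race') and warrant further analysis: dropping the protected
# column alone ('blindness') does not remove the information they carry.
# proxy_scores(df, protected="race").head(10)
```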
Prevalence
Approach: The prevalence of a certain outcome in a certain population is the fraction of that population that satisfies the given outcome, say Y = reoffended. The Prevalence Ratio for a given protected variable is then defined as the ratio of the prevalence in the privileged population to the prevalence in the unprivileged group. The Prevalence Ratio is calculated on the ground truth. For example:

Prevalence ratio (White vs Black) = Prevalence(White) / Prevalence(Black) = 34%
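A minimal sketch of this calculation in Python, assuming a DataFrame with a binary ground-truth outcome column and a protected-group column; the column and group names are illustrative.

```python
# Minimal sketch: prevalence ratio computed on ground-truth labels (not predictions).
import pandas as pd

def prevalence_ratio(df: pd.DataFrame, outcome: str, group_col: str,
                     privileged: str, unprivileged: str) -> float:
    """Prevalence in the privileged group divided by prevalence in the unprivileged group."""
    prev_priv = df.loc[df[group_col] == privileged, outcome].mean()
    prev_unpriv = df.loc[df[group_col] == unprivileged, outcome].mean()
    return prev_priv / prev_unpriv

# Illustrative call (column and group names assumed):
# prevalence_ratio(df, outcome="reoffended", group_col="race",
#                  privileged="White", unprivileged="Black")
```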
Disparate Impact

Approach: Unintentional bias is encoded via disparate impact, which occurs when a selection process has widely different outcomes for different groups, even when it appears to be neutral.

Calculation: Disparate Impact for a protected variable is the ratio of the % of the privileged population with a given predicted outcome to the % of the unprivileged population with that predicted outcome.

US Law: Originally, the Uniform Guidelines on Employee Selection Procedures provided a simple "80 percent" rule for determining that a company's selection system was having an "adverse impact" on a minority group.
Cautionary point: "Courts in the U.S. have questioned the arbitrary nature of the 80 percent rule."

Example:

Black (total pop = 4,359)    Predicted: low risk    Predicted: high risk
Did not recidivate           TN = 2,812             FP = 189
Recidivated                  FN = 677               TP = 681

White (total pop = 2,154)    Predicted: low risk    Predicted: high risk
Did not recidivate           TN = 1,886             FP = 26
Recidivated                  FN = 147               TP = 94

% predicted high risk (Black) = (189 + 681) / 4,359 = 0.20
% predicted high risk (White) = (26 + 94) / 2,154 = 0.06
DI = 0.06 / 0.20 = 30%
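As a hedged illustration, the calculation above can be reproduced in a few lines of Python; the counts are taken from the example tables, and the 0.8 threshold reflects the (contested) 80 percent rule.

```python
# Minimal sketch: disparate impact as the ratio of positive-prediction rates.
def disparate_impact(rate_privileged: float, rate_unprivileged: float) -> float:
    return rate_privileged / rate_unprivileged

# Counts from the example above:
rate_white = (26 + 94) / 2154     # ~0.06 of the privileged group predicted high risk
rate_black = (189 + 681) / 4359   # ~0.20 of the unprivileged group predicted high risk

di = disparate_impact(rate_white, rate_black)   # ~0.28 unrounded, ~0.30 using the rounded rates
flagged = di <= 0.8                             # True: below the "80 percent" threshold
```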
False Positive Rate (FPR)

Approach: Parity of False Positive Rates (FPR) implies that the false positive rates are equal for the privileged and unprivileged populations.

Calculation: The Error Ratio for a given protected variable is defined as the ratio of the error rate in the privileged population to the error rate in the unprivileged population.

Legislation: There is no legal precedent for the error ratio; however, an approach similar to DI can be applied by using the 80% rule (bias when the ratio of rates <= 0.8).

Example:

Black (total pop = 4,359)    Predicted: low risk    Predicted: high risk
Did not recidivate           TN = 2,812             FP = 189
Recidivated                  FN = 677               TP = 681

White (total pop = 2,154)    Predicted: low risk    Predicted: high risk
Did not recidivate           TN = 1,886             FP = 26
Recidivated                  FN = 147               TP = 94

FPR (Black) = 189 / (2,812 + 189) = 0.063
FPR (White) = 26 / (1,886 + 26) ≈ 0.014
Error Ratio = 0.014 / 0.063 ≈ 0.22
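A minimal sketch of the same check in Python, using the confusion-matrix counts from the tables above.

```python
# Minimal sketch: per-group false positive rates and their ratio.
def false_positive_rate(tn: int, fp: int) -> float:
    return fp / (fp + tn)

fpr_black = false_positive_rate(tn=2812, fp=189)   # ~0.063 (unprivileged group)
fpr_white = false_positive_rate(tn=1886, fp=26)    # ~0.014 (privileged group)

error_ratio = fpr_white / fpr_black                # ~0.22
flagged = error_ratio <= 0.8                       # True under the 80% rule of thumb
```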
Learnings

There is now a consensus among academics and scientists that algorithmic fairness cannot be achieved by the application of data science alone.

Choosing the right solution is a challenge in itself: allowing for both group and individual fairness, trading off accuracy against fairness, handling the complexities that come with scale, and much more.

All of this is further compounded by a myriad of non-science factors: what may be fair statistically can often fall short ethically, or may not be viable from a business perspective.

“Bias is a feature of statistical models. Fairness is a feature of human value judgments.”

https://www.semanticscholar.org/paper/Fairness-aware-machine-learning%3A-a-perspective-Žliobaitė/69c7bf934e9ac7673be590f7656bcb38fcb9da48
What are the main general challenges we have encountered when assessing real-life use cases for potential bias?
• Metric selection
• Academic – Industry gap
• Non-binary Protected feature
• More than one protected feature
• Legislation and guidelines
Accenture Fairness Tool
How does the tool work?

Data scientists can solve many fairness problems from a technical perspective by using statistical metrics, but this is not just a data science problem; it requires input from the broader organisation.

The tool starts with the data scientist and is integrated with JupyterHub. We want to add fairness as a step in the current data science workflow. Analyses are pushed to a repository for business users.

The business user can explore the interactive analyses and embed them in reports for dissemination to the broader business for decision making. As a communication tool, it facilitates a deeper understanding of the challenge.
Thank You.
Navdeep Sharma
navdeep.a.sharma@accenture.com