Identifying and Mitigating Bias in AI

Nikita Tiwari
AI Enabling Engineer, Client Computing Group
Intel Corporation
Agenda

• Why Should We Care About Responsible AI (RAI)?
• Identifying Bias in AI
• Tools to Mitigate Bias in AI
  • Fairness Metrics
  • What-If Tool
  • AI Fairness 360
• Key Takeaways
• Industry-Wide Ethical AI Revolution & Resources
Discussion

Ethics is a conversation!
• AI stories you have come across (positive & negative)
• Why should we care?
Why Responsible AI?

AI Incidents in the News
• Harm to human life
• Loss of trust
• Fines in compliance & regulations
• Introduction of systemic bias
• Misinformation
• Breach of privacy

Cost of AI Incidents
• Barriers in implementing trustworthy and explainable AI
• Daily reports of AI harm
• Generative AI accessible to all end users
• Massive global AI market size
Defining Responsible AI Principles

• Respect Human Rights: Human rights are a cornerstone for AI development. AI solutions should not support or tolerate usages that violate human rights.
• Enable Human Oversight: Human oversight of AI solutions to ensure they positively benefit society.
• Transparency: Understanding where the data came from and how the model works.
• Personal Privacy: Maintaining personal privacy and consent. Focusing on protecting the collected data.
• Security, Safety, Sustainability: Ethical review and enforcement of end-to-end AI safety. Low-resource implementation of AI algorithms.
• Equity and Inclusion: Focus on the data used for training and the algorithm development process to help prevent bias and discrimination.
Bias Identification
Bias in Data Generation

Historical Bias
• Caused by preconceived notions, even on perfectly measured and sampled data
• E.g., gendered occupations
• Case studies: Amazon's hiring algorithm, Google Gemini AI image generation

Representation Bias
• The target population does not reflect the user population or does not include underrepresented groups
• The sampling method is limited or uneven
• Representation is two-fold: uniform vs. proportional
• E.g., ImageNet images

Measurement Bias
• The proxy oversimplifies a complex construct, or measurement methods and accuracy differ among groups
• E.g., COMPAS (Correctional Offender Management Profiling for Alternative Sanctions)
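
As a concrete illustration of how representation bias can be surfaced early, here is a minimal sketch that compares each subgroup's share of a training sample against its share of the intended user population; large gaps flag under- or over-representation. The group labels and population shares below are hypothetical.

```python
from collections import Counter

def representation_gap(sample_groups, population_shares):
    """Compare each group's share of the sample against its share of the
    intended user population; large gaps suggest representation bias."""
    counts = Counter(sample_groups)
    total = len(sample_groups)
    gaps = {}
    for group, pop_share in population_shares.items():
        sample_share = counts.get(group, 0) / total
        gaps[group] = sample_share - pop_share
    return gaps

# Hypothetical demographic labels for a training sample, and the
# shares of each group in the population the product is meant to serve.
sample = ["A"] * 700 + ["B"] * 250 + ["C"] * 50
population = {"A": 0.5, "B": 0.3, "C": 0.2}

for group, gap in representation_gap(sample, population).items():
    print(f"group {group}: sample-vs-population gap = {gap:+.2f}")
```

In practice, the reference shares would come from census or product-usage data for the target population rather than being hard-coded.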
Bias in Model Building and Implementation (1/2)

Aggregation Bias
• Arises when data from different sources or groups are combined, leading to distortions in the model's performance or predictions
• E.g., a housing price prediction model trained on data aggregated from multiple cities without accounting for differences in housing markets (see the sketch below)

Learning Bias
• Arises when modeling choices amplify performance disparities across different examples in the data
• E.g., prioritizing one objective damages another, as when optimizing for privacy or compactness reduces accuracy
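
To make aggregation bias tangible, the following sketch (with entirely synthetic numbers) fits a single price model on data pooled from two hypothetical cities and compares it with per-city fits; the pooled slope is a compromise that misprices both markets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic housing data: price per square meter differs by city.
sqm_a = rng.uniform(50, 150, 200)
price_a = 9000 * sqm_a + rng.normal(0, 20000, 200)   # expensive market
sqm_b = rng.uniform(50, 150, 200)
price_b = 3000 * sqm_b + rng.normal(0, 20000, 200)   # cheaper market

# Per-city fits recover each market's true price-per-sqm slope.
slope_a = np.polyfit(sqm_a, price_a, 1)[0]
slope_b = np.polyfit(sqm_b, price_b, 1)[0]

# One model on the pooled data lands between the two markets,
# systematically over-pricing city B and under-pricing city A.
pooled_slope = np.polyfit(np.concatenate([sqm_a, sqm_b]),
                          np.concatenate([price_a, price_b]), 1)[0]

print(f"city A slope: {slope_a:.0f}, city B slope: {slope_b:.0f}, "
      f"pooled slope: {pooled_slope:.0f}")
```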
Bias in Model Building and Implementation (2/2)

Evaluation Bias
• The benchmark data or the evaluation process used for a particular task does not represent the user population, or favors certain groups over others
• E.g., a language translation model evaluated primarily on its accuracy for European languages, which limits its usefulness as a global translator in diverse contexts

Deployment Bias
• Arises from "off-label" usage of a model beyond its intended use
• E.g., a model intended to predict a person's likelihood of committing a future crime is also used to determine the length of a sentence
Bias Mitigation
Common Fairness Metrics

• Fairness metrics are measures that enable you to detect the presence of bias in your data or model.
• There are at least 21 fairness metrics, and many of them conflict; there is no single "fairest" choice. Some common metrics (illustrated in the sketch below):
  • Group unaware: Removes all group and proxy-group membership information from the dataset to avoid favoring any group or subclass. Similar in spirit to unsupervised learning. Difficult to achieve.
  • Group threshold: An alternative to group unaware. Adjusts the confidence thresholds for different groups independently, such that the confidence threshold for correct predictions for a minority group is slightly lower.
  • Demographic parity: Similar percentages of datapoints from each group are predicted as positive classifications. E.g., a class with x% of subclass-1 should have x% of subclass-1 positive predictions.
  • Equal opportunity: Among datapoints with a positive ground-truth label, each group has a similar true positive rate.
  • Equal accuracy: The model's predictions are equally accurate for all groups; true positive and false positive rates should be the same across groups.
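
As a minimal sketch of how the last three metrics can be checked from model outputs, the following compares positive-prediction rate (demographic parity), true positive rate (equal opportunity), and accuracy (equal accuracy) across groups. The arrays are synthetic, and what counts as "similar enough" across groups is left to the practitioner.

```python
import numpy as np

def group_fairness_report(y_true, y_pred, groups):
    """Per-group positive-prediction rate (demographic parity),
    true positive rate (equal opportunity), and accuracy (equal accuracy)."""
    report = {}
    for g in np.unique(groups):
        m = groups == g
        positives = y_true[m] == 1
        report[g] = {
            "positive_rate": y_pred[m].mean(),
            "tpr": y_pred[m][positives].mean() if positives.any() else float("nan"),
            "accuracy": (y_pred[m] == y_true[m]).mean(),
        }
    return report

# Synthetic example: the model is perfect for g1 but random for g2,
# so all three per-group statistics diverge.
rng = np.random.default_rng(1)
groups = rng.choice(["g1", "g2"], size=1000)
y_true = rng.integers(0, 2, size=1000)
y_pred = np.where(groups == "g1", y_true, rng.integers(0, 2, size=1000))

for g, stats in group_fairness_report(y_true, y_pred, groups).items():
    print(g, {k: round(float(v), 3) for k, v in stats.items()})
```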
Tools to Detect and Mitigate Bias

• A key challenge in developing and deploying an ML system is understanding its performance across a wide range of inputs.
• Several open-source tools use fairness metrics and bias mitigation algorithms to analyze ML systems with limited coding and to test bias in hypothetical scenarios.

What-If Tool (Google)
• Simulation with data manipulation & specific criteria to detect bias
• 5 fairness metrics
• Bias mitigation not straightforward

AI Fairness 360 (IBM)
• Extensible toolkit for bias detection & mitigation
• 70+ fairness metrics
• 10 bias mitigation algorithms
• Fairness metric explanations
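
For a feel of what bias detection and mitigation look like in AI Fairness 360, here is a small sketch, assuming a toy DataFrame with "sex" as the protected attribute and "hired" as the binary label; it measures disparate impact, applies the toolkit's Reweighing pre-processing algorithm, and re-measures. Treat it as illustrative and check the AIF360 documentation for current APIs.

```python
import pandas as pd
from aif360.datasets import BinaryLabelDataset
from aif360.metrics import BinaryLabelDatasetMetric
from aif360.algorithms.preprocessing import Reweighing

# Toy data: "sex" is the protected attribute, "hired" the binary label.
df = pd.DataFrame({
    "sex":   [0, 0, 0, 0, 1, 1, 1, 1],
    "score": [4, 7, 6, 5, 8, 6, 7, 9],
    "hired": [0, 1, 0, 0, 1, 1, 0, 1],
})
dataset = BinaryLabelDataset(df=df, label_names=["hired"],
                             protected_attribute_names=["sex"])

priv, unpriv = [{"sex": 1}], [{"sex": 0}]

# Disparate impact: ratio of positive-outcome rates, unprivileged/privileged.
metric = BinaryLabelDatasetMetric(dataset, unprivileged_groups=unpriv,
                                  privileged_groups=priv)
print("disparate impact before:", metric.disparate_impact())

# Reweighing assigns instance weights that balance outcomes across groups.
rw = Reweighing(unprivileged_groups=unpriv, privileged_groups=priv)
transformed = rw.fit_transform(dataset)
metric_after = BinaryLabelDatasetMetric(transformed,
                                        unprivileged_groups=unpriv,
                                        privileged_groups=priv)
print("disparate impact after:", metric_after.disparate_impact())
```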
Key Takeaways and Resources
Key Takeaways

• Establish RAI principles that guide decision-making for your AI development.
• Drive RAI requirements into product definition.
• Adopt a human-centric approach at every stage of your product development.
• Integrate RAI tools into your software development lifecycle.
• Preventing bias is complex: define fairness metrics, document trade-offs, and share them transparently with your users. Re-check for bias often.
• Conduct regular assessments and audits, and update AI response plans.
• Keep up to date with evolving legislation, regional laws, and standards.
• Certifications can help demonstrate adherence to standards and legislation, and build user trust.
Industry-Wide Ethical AI Revolution

• Responsible Artificial Intelligence Institute (RAII)
  • First independent, accredited certification program for RAI in the US, Canada, Europe, and the UK
  • Vectors: systems operations, explainability and interpretability, accountability, consumer protection, bias and fairness, and robustness, with collaborations across the World Economic Forum, OECD, IEEE, ANSI, etc.
• Executive orders on responsible AI around the globe
  • European Union AI Act
  • Executive Order on the Safe, Secure, and Trustworthy Development and Use of AI
  • The White House Blueprint for an AI Bill of Rights
• IEEE Standards Association
  • https://standards.ieee.org/participate/
  • 2100+ standards, 175+ countries, 34000+ global participants
• IEEE CertifAIEd™ (part of the Standards Association)
  • A certification program for assessing the ethics of Autonomous Intelligent Systems (AIS)
  • Vectors: ethical privacy, algorithmic bias, transparency, accountability, and agentic AI
  • AI safety certification/assurance development
  • Case study: AI ethics applied to the city of Vienna
Resources

Responsible AI Landscape
https://hai.stanford.edu/news/2022-ai-index-industrialization-ai-and-mounting-ethical-concerns

Grandview Research
https://www.grandviewresearch.com/industry-analysis/artificial-intelligence-ai-market

Global AI Adoption Index
https://filecache.mediaroom.com/mr5mr_ibmnewsroom/191468/IBM%27s%20Global%20AI%20Adoption%20Index%202021_Executive-Summary.pdf

Measuring Bias – David Weinberger
https://pair-code.github.io/what-if-tool/ai-fairness.html

Intel Responsible AI Program
https://www.intel.com/content/www/us/en/artificial-intelligence/responsible-ai.html

Responsible AI Institute Certification
https://www.responsible.ai/how-we-help

IEEE CertifAIEd™
https://engagestandards.ieee.org/ieeecertifaied.html

AI Incident Database
https://incidentdatabase.ai/

European Union AI Act
https://ec.europa.eu/commission/presscorner/detail/en/IP_23_6473

White House Executive Order
https://www.whitehouse.gov/briefing-room/presidential-actions/2023/10/30/executive-order-on-the-safe-secure-and-trustworthy-development-and-use-of-artificial-intelligence/

Blueprint for AI Bill of Rights
https://www.whitehouse.gov/ostp/ai-bill-of-rights/
Questions
Backup Material
Designing With a Human-Centric Approach

Definition
• Does AI add value?
• Who are the intended users of the system?
• Identify potential harms and plan for remediation
• Translate user needs into data needs
• E.g., prototyping a chatbot

Development
• Source high-quality, unbiased data responsibly
• Get input from domain experts
• Enable human oversight
• Build in safety measures
• E.g., improving autonomous vehicles

Deployment
• Provide ways for users to challenge the outcome
• Provide manual controls for when AI fails
• Offer high-touch customer support

Marketing
• Focus on the benefit, not the technology
• Transparently share the limitations of the system with the users
• Be transparent about privacy and data settings
• Anchor on familiarity
Google Gemini AI Image Generation Mistake

• What happened?
• Why did the incident happen?
• Remediation and next steps

https://blog.google/products/gemini/gemini-image-generation-issue/
Prabhakar Raghavan, Senior Vice President, Google, Feb 23, 2024
Intel Ethical AI Impact Assessment Case Studies

Diverse Skin Tone Recognition with Intel Evo Laptop
• An AI algorithm was used to recognize when a person moves away from the laptop and to turn off the screen.
• The algorithm was tested to be inclusive and performant on individuals with different skin tones, to ensure it is fair and its output is not affected by skin tone.

Pedestrian Detection Including Disabled Individuals
• Pedestrian detection for self-driving cars should incorporate diverse data, including data from disabled pedestrians, such as people in wheelchairs.
Open-source Fairness Metrics Libraries

AIF360: Provides a comprehensive set of metrics for datasets and models to test for biases, and algorithms to mitigate bias in datasets and models.

Fairness Measures: Provides several fairness metrics, including difference of means, disparate impact, and odds ratio. It also provides datasets, but some are not in the public domain and require explicit permission from the owners to access or use the data.

FairML: Provides an auditing tool for predictive models by quantifying the relative effects of various inputs on a model's predictions, which can be used to assess the model's fairness.

FairTest: Checks for associations between predicted labels and protected attributes. The methodology also provides a way to identify regions of the input space where an algorithm might incur unusually high errors. This toolkit also includes a rich catalog of datasets.

Aequitas: An auditing toolkit for data scientists as well as policymakers; it has a Python library and a website where data can be uploaded for bias analysis. It offers several fairness metrics, including demographic and statistical parity and disparate impact, along with a "fairness tree" to help users identify the correct metric for their particular situation. Aequitas's license does not allow commercial use.

Themis: An open-source bias toolbox that automatically generates test suites to measure discrimination in decisions made by a predictive system.

Themis-ML: Provides fairness metrics, such as mean difference, and some bias mitigation algorithms, including an additive counterfactually fair estimator and reject option classification.

Fairness Comparison: Includes several bias detection metrics as well as bias mitigation methods, including disparate impact remover, prejudice remover, and two-Naive Bayes. Written primarily as a test bed to allow different bias metrics and algorithms to be compared in a consistent way; it also allows additional algorithms and datasets.