#FIGHTBIAS
SOCIETY
Prejudice in favor of or against one thing,
person, or group compared with another,
usually in a way considered to be unfair.
The bias error is an error from erroneous
assumptions in the learning algorithm.
High bias can cause an algorithm to miss the
relevant relations between features and
target outputs (underfitting).
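The underfitting described above can be shown with a minimal sketch (toy data, NumPy only, all numbers invented): a straight-line model fit to quadratic data keeps a large training error no matter how much data it sees, because its assumptions are too rigid to capture the real relationship.

```python
import numpy as np

# Toy data with a clearly nonlinear (quadratic) relationship.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = x**2 + rng.normal(0, 0.1, size=x.shape)

def fit_error(degree):
    """Mean squared training error of a polynomial fit of the given degree."""
    coeffs = np.polyfit(x, y, degree)
    pred = np.polyval(coeffs, x)
    return float(np.mean((y - pred) ** 2))

linear_err = fit_error(1)     # high-bias model: cannot express the curvature
quadratic_err = fit_error(2)  # model matched to the data-generating process

# The linear model's error stays large: that residual error is the bias.
print(linear_err, quadratic_err)
```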
MACHINE LEARNING
Combatting Bias in Machine Learning
AWARENESS
STANDARDIZING HOW
DATASETS ARE
DOCUMENTED
Many projects are based on well-known
public datasets, but there is currently no
standard for how these datasets are
compiled and documented.
IGNORANCE IS BIAS
It shouldn't need to be explained that it's
ethically wrong to attempt to predict a
person's sexuality based on appearance.
FAILING PEOPLE OF COLOR
Widely used facial recognition
algorithms from Microsoft, Amazon, and
Face++ performed up to 35% worse on
dark-skinned women.
In 2017, multiple Chinese users reported
being able to log into one another's
iPhones using Face ID.
INACCURACIES AND
INCONVENIENCE
Travel already carries bias against people
who may be perceived as immigrants, and
automated boarding will amplify it.
FACIAL RECOGNITION
RESEARCH IGNORES
NON-BINARY PEOPLE
Algorithms with little feedback have
little opportunity to "learn."
Algorithms can reflect xenophobic bias
and use limited examples to misidentify
people with dark skin as criminals.
With few examples of successful minority
candidates, prediction based on these
imbalanced sets will amplify bias.
Online ads that determine a user's race
to be Black display ads for high-interest
credit cards at a higher rate than for others.
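The imbalanced-data point above can be made concrete with a minimal sketch (counts and labels are invented for illustration): a degenerate model that always predicts the majority class still posts a high headline accuracy while failing every minority example.

```python
# 1 = "successful candidate", heavily outnumbered in the training history.
n_majority, n_minority = 950, 50
labels = [0] * n_majority + [1] * n_minority

# A degenerate model that always predicts the majority class...
predictions = [0] * len(labels)

# ...still scores 95% accuracy while recalling zero minority examples.
accuracy = sum(p == y for p, y in zip(predictions, labels)) / len(labels)
minority_recall = sum(
    p == y for p, y in zip(predictions, labels) if y == 1
) / n_minority

print(accuracy)         # 0.95
print(minority_recall)  # 0.0
```

This is why a single aggregate accuracy number can hide exactly the bias the slide describes.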
With great power comes great
responsibility
BIASED DATA = BIASED OUTCOMES
Data used for training models is hardly ever
truly representative of the people it will be
used on.
HUMANS ARE IMPERFECT
This doesn't mean algorithms are unbiased; it
means we have to assume bias will persist until
we take steps to remove it.
MASTERS OF OUR DEMISE
We are in one of few fields where we get to pick
the metric we're measured against.
COMPAS
Consistently ranks Black and brown
incarcerated people as more likely to reoffend
and higher risk than white prisoners.
FAILED SENSORS
Millimeter-wave sensors used by the TSA
consistently have trouble with Black
women's hair, causing travel delays.
FLAWED HARDWARE
Camera hardware has been tuned and
developed to highlight lighter skin tones.
ACTION
ASSESSING THE NEED
Not all problems are best solved with
machine learning.
TEST WITH EDGE CASES
Hardware should be tested on dark-skinned
people first.
EXTREME SKEPTICISM
Assess models with harsh criticism of their
performance on edge cases.
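One concrete form of that skepticism is breaking evaluation metrics out by subgroup instead of reporting a single aggregate number. A minimal sketch (group names and results are invented):

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, prediction, true_label).
results = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 0, 0),
    ("group_b", 1, 0), ("group_b", 0, 1), ("group_b", 1, 1), ("group_b", 0, 0),
]

hits = defaultdict(int)
totals = defaultdict(int)
for group, pred, label in results:
    totals[group] += 1
    hits[group] += int(pred == label)

per_group_accuracy = {g: hits[g] / totals[g] for g in totals}

# Flag any subgroup whose accuracy lags the best subgroup by more
# than 10 points; those are the edge cases demanding scrutiny.
best = max(per_group_accuracy.values())
flagged = [g for g, acc in per_group_accuracy.items() if best - acc > 0.10]
print(per_group_accuracy)  # {'group_a': 1.0, 'group_b': 0.5}
print(flagged)             # ['group_b']
```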
HOW TO START A ML PROJECT
(ETHICALLY)
QUESTION IF THE
SOLUTION FITS
THE PROBLEM
EXAMINE FOR AND
REMOVE BIASED
PROXIES
GIVE YOUR MODEL
FEEDBACK AND
TEST FOR BIAS
Will the
product be
better? If so,
how?
How will this
impact my
users?
How often
will this
model get
feedback?
How will I
collect
feedback?
ZIP CODE
Applying for a credit card with a 90210 zip code
shouldn't improve your chances of getting
approved.
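One way to surface a proxy like zip code is to compare historical approval rates across its values before letting a model train on the feature. A minimal sketch (all applications and codes are invented):

```python
from collections import defaultdict

# Hypothetical historical decisions: (zip_code, approved).
applications = [
    ("90210", 1), ("90210", 1), ("90210", 1), ("90210", 0),
    ("60623", 1), ("60623", 0), ("60623", 0), ("60623", 0),
]

approved = defaultdict(int)
total = defaultdict(int)
for zip_code, outcome in applications:
    total[zip_code] += 1
    approved[zip_code] += outcome

rates = {z: approved[z] / total[z] for z in total}

# A large gap suggests zip code is acting as a proxy for something
# the model should not use; consider dropping the feature entirely.
gap = max(rates.values()) - min(rates.values())
print(rates)  # {'90210': 0.75, '60623': 0.25}
print(gap)    # 0.5
```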
HOMOGENEOUS EXAMPLES
If there is an extreme imbalance between
classes, any model can attribute an occurrence
to subtle proxies the model learns.
WORD EMBEDDINGS
Tools used to measure the distance between
the meanings of words can embed our sexist
cultural norms and exclude nonbinary people.
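The embedding problem above can be probed geometrically: project occupation words onto a "gender direction" built from pronoun vectors and see whether they carry gendered signal. A minimal sketch with invented 3-d toy vectors (real embeddings have hundreds of dimensions, but the check is the same):

```python
import numpy as np

# Toy "embeddings" (invented vectors for illustration only).
emb = {
    "doctor": np.array([0.9, 0.3, 0.1]),
    "nurse":  np.array([0.8, -0.4, 0.1]),
    "he":     np.array([0.1, 0.9, 0.0]),
    "she":    np.array([0.1, -0.9, 0.0]),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Nonzero projection onto the he/she direction means the embedding
# has absorbed a gendered association for that occupation.
gender_direction = emb["he"] - emb["she"]
doctor_score = cosine(emb["doctor"], gender_direction)
nurse_score = cosine(emb["nurse"], gender_direction)
print(doctor_score, nurse_score)  # opposite signs: both carry gender signal
```

Note this binary he/she axis is itself a limitation: it cannot even represent nonbinary people, which is the slide's point.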
MINDSET CHANGE
Assume there will be some aspect of bias in your
models and assess potential consequences.
ACCOUNTABILITY
Build user trust and be open to algorithmic
criticism by making models open source.
Transparency leads to accountability.
Thank you!
@data_bayes
/ayodeleodubela
