Fairness in Automated Decision Systems
Data Protection in Software Development - WS 21/22
Dr. Amirshayan Ahmadian
• Sai Sri Praveen Gadiyaram
• Sowjanya Chennamaneni
OUTLINE
1. Motivation
2. Background
3. Challenges
4. Fairness metrics
5. Fairness approaches
6. Fairness Architecture
7. Adversarial Debiasing
8. Case-study
9. Summary
MOTIVATION
BACKGROUND
Automated Decision Systems: decisions taken by systems without any human involvement.
Example: a trained ML model predicts eligibility from applicant data. Despite identical education, the predictions split along race.

| Name  | Education  | Race  | Gender | Zip code | Eligible (prediction) |
|-------|------------|-------|--------|----------|-----------------------|
| John  | Bachelor's | White | Male   | 56783    | Yes                   |
| Mary  | Bachelor's | White | Female | 45673    | Yes                   |
| Annie | Bachelor's | Black | Female | 15674    | No                    |
| Jack  | Bachelor's | Black | Male   | 13456    | No                    |
• Protected Attributes: attributes that can be a source of social bias, e.g., age, race, and gender.
• Unprotected Attributes: all remaining attributes, i.e., those that are not protected.
• Fairness Through Awareness:
An algorithm is fair if it gives similar predictions to similar individuals.
It is also known as Individual Fairness.
• Demographic Parity:
The likelihood of a positive outcome should be the same regardless of whether the person is in
the protected (e.g., female) group. It is a form of Group Fairness.
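To make the first definition concrete, here is a minimal sketch of an individual-fairness check; the `predict` function, `distance` metric, and data are all hypothetical stand-ins, not from the slides. It flags pairs of similar individuals that receive dissimilar predictions.

```python
import numpy as np

def individual_fairness_violations(predict, X, distance, eps=0.1, delta=0.5):
    """Fairness Through Awareness: flag pairs of similar individuals
    (distance <= eps) whose predictions differ by more than delta."""
    violations = []
    n = len(X)
    for i in range(n):
        for j in range(i + 1, n):
            if distance(X[i], X[j]) <= eps and abs(predict(X[i]) - predict(X[j])) > delta:
                violations.append((i, j))
    return violations

# Toy usage with made-up data and a stand-in for a trained model.
X = np.array([[0.90, 0.10], [0.88, 0.12], [0.20, 0.80]])
predict = lambda x: float(x[0] > 0.5)                 # hypothetical scorer
distance = lambda a, b: float(np.linalg.norm(a - b))  # similarity between individuals
print(individual_fairness_violations(predict, X, distance))  # [] means no violating pair
```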
CHALLENGES
• Decisions depend on the data.
• A model trained on biased data can produce unfair and discriminatory decisions.
• Bias can also enter through the algorithm itself (algorithmic bias).

Open challenges:
• Interpretability of models
• Universal fairness definition
• Universal fairness metrics
• Concrete bias mitigation
• Lack of social fairness

[Figure: the decision pipeline Data → Algorithm → Decision]
FAIRNESS METRICS
Quantifying group fairness with two groups: male (x = 1) and female (x = 0).

• Disparate Impact Ratio (DIR): the ratio of the positive-outcome rate of the minority group to that of the majority group.
  DIR = P(y=1 | x=0) / P(y=1 | x=1)
• Statistical Parity Difference (SPD): the difference between the positive-outcome rates of the minority and majority groups.
  SPD = P(y=1 | x=0) - P(y=1 | x=1)
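A minimal sketch of both metrics, assuming binary predictions `y_pred` and the group encoding `x` used above (the data below is made up):

```python
import numpy as np

def disparate_impact_ratio(y_pred, x):
    """DIR = P(y=1 | x=0) / P(y=1 | x=1)."""
    return y_pred[x == 0].mean() / y_pred[x == 1].mean()

def statistical_parity_difference(y_pred, x):
    """SPD = P(y=1 | x=0) - P(y=1 | x=1)."""
    return y_pred[x == 0].mean() - y_pred[x == 1].mean()

# Toy example: x = 0 female (minority), x = 1 male (majority).
x      = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
print(disparate_impact_ratio(y_pred, x))         # 0.33, below the common 0.8 rule of thumb
print(statistical_parity_difference(y_pred, x))  # -0.5, negative favours the majority
```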
FAIRNESS METRICS
• Effectively search the attribute space.
• Test-intensive; it cannot be done manually.
• Generate test cases automatically to detect discrimination.
[Figure: flipping protected attributes while keeping non-protected attributes fixed exposes individual discrimination]
• Combine path coverage with individual-discrimination checks.
• Always negate a non-protected attribute to steer the search down new paths.
• Success score = |individual discrimination detected| / |test cases generated from unprotected attributes|
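The sketch below shows only the core flip-and-compare idea, not the full path-coverage algorithm of the black-box testing paper; the model and data are hypothetical:

```python
import numpy as np

def find_individual_discrimination(predict, X, protected_col, values):
    """For each generated test case, flip only the protected attribute and
    check whether the model's decision changes (individual discrimination)."""
    discriminatory = []
    for x in X:
        outcomes = set()
        for v in values:                 # e.g. every possible gender/race code
            x2 = x.copy()
            x2[protected_col] = v
            outcomes.add(predict(x2))
        if len(outcomes) > 1:            # same person, different decision
            discriminatory.append(x)
    return discriminatory

# Success score: fraction of generated test cases that reveal discrimination.
X = np.random.default_rng(0).integers(0, 2, size=(100, 4))
biased = lambda x: int(x[2] == 1 and x[0] == 1)   # hypothetical model using protected column 2
found = find_individual_discrimination(biased, X, protected_col=2, values=[0, 1])
print(len(found) / len(X))
```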
FAIRNESS APPROACHES
• Achieving fairness == mitigating bias
• Consider fairness constraints when building the decision system
• Three different stages at which fairness constraints can be included:
Pre-processing
• Learn fair representations
• Remove disparate impact

In-processing
• Regularization
• Train independently and tune together
• Adversarial debiasing

Post-processing
• Calibrated Equalized Odds
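As a concrete pre-processing example, here is a sketch of reweighing in the style of Kamiran & Calders (a technique in the same family as the slide's examples, not taken from the slides): reweight training instances so the protected attribute and the label become statistically independent.

```python
import numpy as np

def reweighing_weights(s, y):
    """Weight w(s, y) = P(s) * P(y) / P(s, y): in the weighted data the
    protected attribute s and label y look independent.
    Assumes every (s, y) combination occurs at least once."""
    n = len(y)
    w = np.empty(n, dtype=float)
    for sv in np.unique(s):
        for yv in np.unique(y):
            mask = (s == sv) & (y == yv)
            p_joint = mask.sum() / n
            w[mask] = ((s == sv).mean() * (y == yv).mean()) / p_joint
    return w

# The weights are then passed to any learner that accepts sample weights,
# e.g. sklearn's LogisticRegression().fit(X, y, sample_weight=w).
s = np.array([1, 1, 1, 0, 0, 0])   # protected attribute
y = np.array([1, 1, 0, 1, 0, 0])   # label
print(reweighing_weights(s, y))    # under-represented (s, y) cells get weight > 1
```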
FAIRNESS ARCHITECTURE
ADVERSARIAL DE-BIASING
• Optimize the model with respect to a fairness loss.
• An adversarial NN tries to predict the protected attributes from the model's output; if it succeeds, those attributes still drive the decision, i.e., the decision is unfair.
[Figure: the adversary attempts to guess "Gender??" and "Race??" from the predictor's output]
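A minimal sketch of the training loop, simplified from Zhang et al.'s formulation (the published method adds a gradient-projection term); all network sizes and tensors below are toy assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical shapes: X features, y task label, s protected attribute.
predictor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
adversary = nn.Sequential(nn.Linear(1, 8), nn.ReLU(), nn.Linear(8, 1))

opt_p = torch.optim.Adam(predictor.parameters(), lr=1e-3)
opt_a = torch.optim.Adam(adversary.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 1.0   # strength of the fairness term

X = torch.randn(64, 8)
y = torch.randint(0, 2, (64, 1)).float()
s = torch.randint(0, 2, (64, 1)).float()   # protected attribute, e.g. gender

for step in range(200):
    # 1) Train the adversary to recover s from the predictor's output.
    logits = predictor(X)
    adv_loss = bce(adversary(logits.detach()), s)
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()

    # 2) Train the predictor to be accurate AND to fool the adversary:
    #    minimizing (task loss - lam * adversary loss) pushes the scores
    #    toward carrying no information about s.
    logits = predictor(X)
    pred_loss = bce(logits, y) - lam * bce(adversary(logits), s)
    opt_p.zero_grad(); pred_loss.backward(); opt_p.step()
```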
CASE-STUDY
• Fairness architecture at LinkedIn: "equal opportunities to equally qualified members"
• Model-agnostic and scalable
• Fair model analyzer:
  • Uses a specialized test set
  • Creates the fairness correction
• Mitigation trainer:
  • Learns from the analyzer's output
  • A post-processing approach
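LinkedIn's actual correction is not spelled out on the slide; the sketch below only illustrates the generic flavour of such a post-processing step, with hypothetical held-out scores: choose one decision threshold per group so that qualified members of every group are accepted at the same rate (equal opportunity).

```python
import numpy as np

def per_group_thresholds(scores, y_true, group, target_tpr=0.8):
    """Post-processing: pick one threshold per group so each group's
    qualified members (y_true == 1) are accepted at the same rate (TPR).
    Assumes each group has qualified members in the held-out set."""
    thresholds = {}
    for g in np.unique(group):
        qualified = np.sort(scores[(group == g) & (y_true == 1)])
        # Smallest threshold that still accepts target_tpr of the qualified.
        k = int(np.floor((1 - target_tpr) * len(qualified)))
        thresholds[g] = qualified[k]
    return thresholds

# Toy held-out data: scores, true qualification, and group membership.
rng = np.random.default_rng(1)
scores = rng.uniform(size=200)
y_true = (scores + rng.normal(0, 0.2, 200) > 0.5).astype(int)
group = rng.integers(0, 2, 200)
print(per_group_thresholds(scores, y_true, group))
```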
SUMMARY
• 100% data-driven systems are not ideal
• Focus on bias:
  • Data source & variety
  • Regularization
• Lack of fairness knowledge:
  • Social fairness, ethics
• Fairness constraints:
  • Context & diversity
• Continuous fairness evaluation:
  • Testing after deployment
  • Optimizing fairness metrics
REFERENCES
• Mehrabi, N., Morstatter, F., Saxena, N., Lerman, K., & Galstyan, A. (2019). A Survey on Bias and Fairness in Machine Learning.
• Kulshrestha, A., & Safro, I. (2021). CONFAIR: Configurable and Interpretable Algorithmic Fairness.
• https://engineering.linkedin.com/blog/2022/a-closer-look-at-how-linkedin-integrates-fairness-into-its-ai-pr
• https://www.slideshare.net/godatadriven/fairness-in-ai-ddsw-2019
• https://blog.insightdatascience.com/tackling-discrimination-in-machine-learning-5c95fde95e95
• https://www.persistent.com/blogs/fairness-in-ai-systems/
• Aggarwal, A., Lohia, P., Nagar, S., Dey, K., & Saha, D. (2019). Black box fairness testing of machine learning models. In Proceedings of ESEC/FSE 2019, ACM, New York, NY, USA, 625–635. https://doi.org/10.1145/3338906.3338937
• Ignore icons created by Parzival' 1997 - Flaticon
• Ethics icons created by Freepik - Flaticon
• Testing icons created by surang - Flaticon
• https://www.datanami.com/2018/09/05/how-to-build-a-better-machine-learning-pipeline/
• https://www.oreilly.com/library/view/ai-fairness/9781492077664/ch01.html
• https://fortune.com/2018/10/10/amazon-ai-recruitment-bias-women-sexist
Thank you.