UNDERSTANDING
ALGORITHMIC DECISIONS
Updates on work in progress from the SOCIAM team
at Oxford CS…
Dr Reuben Binns, Dr Jun Zhao, Dr Max Van Kleek, Prof. Sir Nigel Shadbolt
reuben.binns@cs.ox.ac.uk
Dept. Computer Science, University of Oxford
QUESTION: WHAT DO THEY DO WITH THE DATA?
▸ Transparency over data collection is important, but then
what happens to it?
▸ How will they use it? Will they treat me differently?
…BUILD MODELS!
▸ ML systems: build a model
which can predict or classify
things
▸ Examples:
▸ What products will this
person buy?
▸ Will they pay back their loan?
▸ Is this email spam?
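As a concrete, deliberately toy illustration of the "build a model" step above: a minimal spam-classifier sketch, assuming scikit-learn is available; the example emails are invented and this is not the authors' code.

```python
# Minimal sketch: learn a spam / not-spam model from labelled examples.
# Toy data invented for illustration; assumes scikit-learn is installed.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

emails = [
    "win a free prize now",         # spam
    "cheap loans click here",       # spam
    "meeting agenda for monday",    # not spam
    "draft of the project report",  # not spam
]
labels = ["spam", "spam", "ham", "ham"]

# Bag-of-words features + Naive Bayes: a standard baseline text classifier.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(emails, labels)

print(model.predict(["claim your free prize"]))  # expected: ['spam']
```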
MACHINE LEARNING AND SOCIAL MACHINES
▸ People label data (‘spam’ / ‘not spam’, ‘good credit risk’ /
‘bad credit risk’), machines build models from it
▸ Models used to decide things:
▸ what adverts are seen
▸ who gets a loan
▸ what goes in the spam box
ACCOUNTABILITY, TRANSPARENCY, FAIRNESS
▸ How do the biases of humans in training data find their way
into machine models?
▸ How should machines explain the outputs of their models
to humans? Can explanations help people assess the
fairness of those outputs?
AUTOMATED CONTENT MODERATION
▸ Manual, community-driven
flagging
▸ Paid moderators
▸ Blacklisted words
‘TOXICITY’ SCORES
ALGORITHMIC MODERATION AND BIAS
▸ 100k Wikipedia talk page comments, each annotated by 10 different
people for ‘toxicity’.
▸ Do different demographic sub-groups have different norms of offence?
▸ Yes: men and women often disagreed.
▸ Women had more diverse norms of offence.
[Figure: the same comments judged toxic (y) or not toxic (n) by female (♀) and male (♂) annotators, illustrating disagreement between the groups]
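A rough sketch of the group-level comparison described above, assuming pandas; the table, column names, and values are invented for illustration and are not the actual Detox schema.

```python
# Sketch: do male and female annotators disagree about which comments are toxic?
# Hypothetical annotation table: one row per (comment, annotator) judgement.
import pandas as pd

annotations = pd.DataFrame({
    "comment_id":       [1, 1, 1, 1, 2, 2, 2, 2],
    "annotator_gender": ["m", "m", "f", "f", "m", "m", "f", "f"],
    "toxic":            [0, 0, 1, 1, 1, 1, 1, 0],
})

# Per-comment toxicity rate within each gender group...
rates = (annotations
         .groupby(["comment_id", "annotator_gender"])["toxic"]
         .mean()
         .unstack("annotator_gender"))

# ...how far apart the groups' judgements are on each comment...
rates["disagreement"] = (rates["m"] - rates["f"]).abs()
print(rates)

# ...and within-group variance as a crude proxy for how diverse each group's norms are.
print(annotations.groupby("annotator_gender")["toxic"].var())
```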
CREATING BIASED TRAINING DATA
▸ Created 30 training data sets, sampling men / women / mixed genders from the original Detox dataset
▸ Trained new offensive text classifiers based on these
biased samples
[Figure: training sets sampled from female-only (♀), male-only (♂), and mixed (⚥) annotator pools; each trained classifier is then tested on held-out data]
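A hedged sketch of how gender-skewed training sets like these could be built, assuming pandas and an annotation table shaped like the toy one above; the column names, sample sizes, and majority-vote rule are illustrative assumptions, not the authors' exact procedure.

```python
# Sketch: build a training set labelled only by annotators from the chosen gender pool.
import pandas as pd

def biased_training_set(annotations: pd.DataFrame, genders: list, seed: int) -> pd.DataFrame:
    """Sample annotators from the given gender pool and majority-vote a label per comment."""
    pool = annotations[annotations["annotator_gender"].isin(genders)]
    sampled = pool.groupby("comment_id").sample(n=3, replace=True, random_state=seed)
    return (sampled.groupby("comment_id")["toxic"]
            .mean().round().rename("label").reset_index())

# e.g. ten female-only, ten male-only, ten mixed sets (reusing the toy `annotations` above):
# female_sets = [biased_training_set(annotations, ["f"], seed=i) for i in range(10)]
# male_sets   = [biased_training_set(annotations, ["m"], seed=i) for i in range(10)]
# mixed_sets  = [biased_training_set(annotations, ["m", "f"], seed=i) for i in range(10)]
```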
TESTING BIASED OFFENCE DETECTORS
▸ Test on unseen examples, labelled by each group (male /
female / balanced)
▸ All classifiers performed worse on female-labelled test
data
▸ Models trained on male- and female-labelled data learned different coefficients.
[Figure: sensitivity (true positive rate, roughly 0.44–0.52) vs. specificity (true negative rate, roughly 0.96–0.98) for classifiers trained on female-, male-, and balanced-labelled data, evaluated on female, male, and balanced test sets]
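For reference, the two metrics in the figure can be computed as below (scikit-learn assumed; the labels in the example call are invented).

```python
# Sketch: sensitivity (true positive rate) and specificity (true negative rate)
# of a classifier's predictions against human-labelled test data.
from sklearn.metrics import confusion_matrix

def sensitivity_specificity(y_true, y_pred):
    tn, fp, fn, tp = confusion_matrix(y_true, y_pred, labels=[0, 1]).ravel()
    sensitivity = tp / (tp + fn)  # share of truly toxic comments that were caught
    specificity = tn / (tn + fp)  # share of non-toxic comments that were left alone
    return sensitivity, specificity

# Invented example labels (1 = toxic, 0 = not toxic):
print(sensitivity_specificity([1, 1, 0, 0, 1, 0], [1, 0, 0, 0, 1, 1]))
```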
EXPLAINING ALGORITHMIC DECISIONS
▸ ML systems used to decide:
▸ Who gets a loan
▸ Who to invite to an interview
▸ Insurance premiums
▸ How should these decisions be explained?
WHY DOES COMPUTER SAY NO?
▸ Data protection laws require organisations to provide
‘meaningful information about the logic’ behind
automated decisions
▸ US laws require credit scoring companies to provide
‘statements of reasons’
DECISION TREES?
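One reading of this slide: interpretable models such as shallow decision trees can act as their own explanation. A minimal sketch with invented loan data, assuming scikit-learn; not a claim about how any real lender decides.

```python
# Sketch: a shallow decision tree whose learned rules can be printed as an explanation.
from sklearn.tree import DecisionTreeClassifier, export_text

X = [[25, 20000], [40, 55000], [35, 30000], [50, 80000]]  # invented [age, income]
y = [0, 1, 0, 1]                                           # 0 = loan refused, 1 = granted

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["age", "income"]))  # human-readable if/then rules
```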
LOCAL, INTERPRETABLE, MODEL-AGNOSTIC EXPLANATIONS
▸ E.g. Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "Why Should I Trust You?: Explaining the Predictions of Any Classifier." Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. ACM, 2016.
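A rough usage sketch of the `lime` package released alongside the cited paper, applied to an invented loan example; the data, feature names, and black-box model are illustrative assumptions, not anything from the slides.

```python
# Sketch: LIME-style local explanation of one decision made by a black-box model.
# Assumes `pip install lime scikit-learn numpy`.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

# Invented loan data: [age, income, existing_debt]; 0 = refused, 1 = granted.
X = np.array([[25, 20000, 5000], [40, 55000, 1000],
              [35, 30000, 8000], [50, 80000, 2000]])
y = np.array([0, 1, 0, 1])

black_box = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X, feature_names=["age", "income", "existing_debt"],
    class_names=["refused", "granted"], mode="classification")

# Which features, locally, pushed this one applicant towards 'refused'?
explanation = explainer.explain_instance(X[2], black_box.predict_proba, num_features=3)
print(explanation.as_list())
```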
SENSITIVITY
▸ What would I have to
change in order to
get a different result?
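A minimal sketch of the sensitivity idea, reusing the hypothetical loan model from the LIME sketch above; the single-feature search is an illustrative simplification, not an algorithm from the slides.

```python
# Sketch: smallest increase to one feature (income) that flips the model's decision.

def minimal_income_change(model, person, step=1000, max_steps=100):
    """Return the income increase needed to change the decision, or None if not found."""
    original = model.predict([person])[0]
    candidate = person.copy().astype(float)
    for _ in range(max_steps):
        candidate[1] += step  # index 1 = income in the toy feature layout above
        if model.predict([candidate])[0] != original:
            return candidate[1] - person[1]
    return None

# e.g. "You would have needed roughly this much more income for a different result":
# print(minimal_income_change(black_box, X[2]))
```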
CASE BASED
▸ Marian is like Vivian,
and Vivian paid back
her loan, so Marian
will pay back her loan
Nugent, Conor, and Pádraig
Cunningham. "A case-based explanation
system for black-box systems." Artificial
Intelligence Review 24.2 (2005):
163-178.
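A minimal sketch of the case-based idea: retrieve the most similar past case and report its outcome (scikit-learn assumed; the applicants and outcomes are invented).

```python
# Sketch: explain a prediction by pointing to the nearest past case and its outcome.
import numpy as np
from sklearn.neighbors import NearestNeighbors

past_applicants = np.array([[30, 40000], [45, 60000], [28, 25000]])  # invented [age, income]
past_outcomes = ["repaid", "repaid", "defaulted"]

new_applicant = np.array([[29, 38000]])

# Real use would scale features first; skipped here to keep the sketch short.
nn = NearestNeighbors(n_neighbors=1).fit(past_applicants)
_, idx = nn.kneighbors(new_applicant)
print("Most similar past case had outcome:", past_outcomes[idx[0][0]])
```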
DEMOGRAPHIC
▸ What are the
characteristics of
people who received
this outcome?
▸ What outcomes did
other people in my
demographic
categories get?
Ardissono, Liliana, et al. "Intrigue: personalized recommendation of tourist attractions for desktop and hand
held devices." Applied Artificial Intelligence 17.8-9 (2003): 687-714.
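A minimal sketch of a demographic-style explanation, aggregating outcomes by group (pandas assumed; data and column names invented).

```python
# Sketch: "what outcomes did other people in my demographic categories get?"
import pandas as pd

decisions = pd.DataFrame({
    "age_band": ["18-30", "18-30", "31-50", "31-50", "51+"],
    "outcome":  ["refused", "granted", "granted", "granted", "refused"],
})

# Share of each outcome within each age band.
print(decisions.groupby("age_band")["outcome"]
      .value_counts(normalize=True).rename("share"))
```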
DO EXPLANATIONS AFFECT PERCEPTIONS OF JUSTICE?
▸ Tested people’s perceptions of justice in response to
various hypothetical cases using different explanation
styles…
DO EXPLANATIONS AFFECT PERCEPTIONS OF JUSTICE?
“She’s been a victim of
this computer system
that has to generalise
based on, like,
somebody else”
“If we were in a court of
law, I would argue we don’t
know his circumstances,
but given this computer
model and the way it works
it’s deserved”
“This is just simply
reducing a human being
to a percentage”