AI model Gemma introduces gender bias in care decisions

Monika Curman, MBA — Customer Experience Lead

Not surprising to see biases make their way into AI tools. “Large language models (LLMs), used by over half of England’s local authorities to support social workers, may be introducing gender bias into care decisions, according to new research from LSE's Care Policy & Evaluation Centre (CPEC) funded by the National Institute for Health and Care Research. Published in the journal BMC Medical Informatics and Decision Making, the research found that Google’s widely used AI model ‘Gemma’ downplays women’s physical and mental issues in comparison to men’s when used to generate and summarise case notes.”
