In a new paper at Organization Science, we find that gendered responses to expressions of passion—a commonly used criterion in evaluating potential—both penalize women and advantage (unexceptional) men in high-potential selection processes (joint work with Joyce He and Celia Moore). https://lnkd.in/eeBjc_7j

Across two studies—an actual talent review process and a preregistered experiment using videos with trained actors (plus two supplementary studies)—our paper shows:

1️⃣ Replicating prior work, we find a gender gap in high-potential designations: men are more likely than women to be designated as high potential, even when they perform at the same level.

2️⃣ Gender biases around passion provide one helpful insight into why this difference occurs. We find:
➡️ a male advantage: passion more meaningfully shifts predictions of diligence for men than for women
➡️ a female penalty: passion is viewed as less appropriate for women than for men, in particular expressions that are highly affective and likely to evoke stereotypes of women as "overly emotional"

We summarize our work in a new Harvard Business Review article, including recommendations for what organizations can do to fix the gendered passion bias: https://lnkd.in/eT7DAdsq

1. Prioritize clear and objective criteria. Where possible, focus on concrete and objective indicators to evaluate potential rather than subjective criteria like passion.
2. Encourage direct conversations over emotional displays. Rather than inferring how passionate and hardworking an employee is based on their emotional expressions, managers should engage in meaningful conversations with employees to thoroughly gauge their commitment and motivations.
3. Broaden the criteria for high-potential selection. Expand the criteria for evaluating potential to include a mix of personal values, goals, and skill sets, which can help provide a fuller picture of an employee’s qualifications.
4. Conduct regular bias audits. Implement regular assessments of high-potential programs to identify gender or other biases in the selection process.
5. Consider raising the bar for moderately performing men. Given that reasonably high-performing men often receive an added boost from expressing passion, consider raising the performance bar for this group — for instance, by expecting higher levels of diligence commensurate with expectations for women.
Gendered language and gender bias testing
Explore top LinkedIn content from expert professionals.
Summary
Gendered language and gender bias testing refers to the practice of identifying and analyzing language patterns and AI outputs that reinforce gender stereotypes or treat people unfairly based on gender. Testing for gender bias helps ensure that both human evaluations and artificial intelligence systems make fair, unbiased decisions and communicate equitably.
- Spot language patterns: Regularly review both workplace communication and AI-generated content for words or phrases that might unintentionally reinforce traditional gender roles or assumptions.
- Prioritize fairness in AI: Use bias detection tools and include diverse perspectives in training data to make sure that algorithms don't perpetuate gender stereotypes or overlook certain groups.
- Set objective standards: When evaluating people or making decisions, rely on clear, measurable criteria instead of subjective impressions that could be influenced by hidden gender biases.
Can AI help drive more conversations around women’s health? Yes. But does AI have an inherent gender bias? Also yes, sadly. Here’s an example of that happening:

Research from LSE has uncovered worrying gender bias in AI tools already used by over 50% of the UK’s councils to summarise social care case notes. The study tested 29,616 case summary pairs and notes from 617 adult social care users. When given identical notes with only the gender changed, Google’s Gemma model described men’s needs as “complex” while women’s summaries were more likely to downplay needs, framing them as “independent” or “able to manage”.

In real life, that difference in language matters. It can mean women receiving less care, simply because an algorithm underestimates their needs.

When we talk about tech efficiency, we can’t forget equity. When designing, training, and deploying AI, language isn’t just words - it also determines access to vital support and whose voices get heard, so we can’t take this lightly. We should be using AI to drive more visibility of diverse voices, not let it take us in the wrong direction.

📸 - UN Women.
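The core method behind this finding, comparing summaries of identical notes that differ only in gender, is straightforward to reproduce as an audit on any summarisation pipeline. Below is a minimal Python sketch of that gender-swap comparison, offered under stated assumptions: `summarize` is a hypothetical placeholder for whatever model the pipeline actually calls (the study used Google's Gemma, which is not wired up here), and the swap list is illustrative rather than exhaustive.

```python
import re

# Illustrative swap list; a real audit would also handle names, titles, and
# ambiguous forms (possessive "her" vs. object "her") more carefully.
SWAP = {
    "he": "she", "she": "he",
    "him": "her", "his": "her", "her": "his",  # "her" is ambiguous; mapped to "his" here
    "himself": "herself", "herself": "himself",
    "man": "woman", "woman": "man",
    "mr": "ms", "ms": "mr",
}

def swap_gender(text: str) -> str:
    """Return the note with gendered tokens replaced by their counterparts."""
    def repl(match: re.Match) -> str:
        word = match.group(0)
        swapped = SWAP[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    pattern = r"\b(" + "|".join(SWAP) + r")\b"
    return re.sub(pattern, repl, text, flags=re.IGNORECASE)

def audit_pair(case_note: str, summarize) -> dict:
    """Summarise a note and its gender-swapped twin so the two can be compared."""
    return {
        "original": summarize(case_note),
        "swapped": summarize(swap_gender(case_note)),
    }
```

From there, one could diff the paired summaries or count loaded descriptors ("complex", "independent", "able to manage") on each side, which is essentially the comparison the LSE team ran at scale.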
-
It’s been 2 years. Did LLMs get better at not regurgitating gender bias?

➡️ Two years ago, I asked ChatGPT to tell me 100-word stories about traditionally gender-stereotyped jobs and collected 10 samples per job on different days. I repeated this analysis to see if the models have gotten better at filtering biases out.

📈 The chart shows change, but that change makes the one-sided nature of progress on gender stereotypes even more pronounced. As you can see in the chart, ChatGPT used only female characters again for stereotypically female jobs - nurses, preschool teachers and secretaries. Whereas in stereotypically male jobs - detective and firefighter - there is now more female representation, and the female representation for CEO is through the roof.

👫 Gender asymmetry manifests even more strongly now in ChatGPT output - it’s O.K. for women to be like men, but it’s a lot less acceptable for men to be associated with the feminine.

🤖 To those who believe that ChatGPT just represents things as they are - there are two things to consider:
➡️ When you train a model on online conversations, you inevitably ingest all the bigotry of the internet with it. It’s not necessarily the truth that goes in, it’s the most prevalent opinion. ChatGPT used only female nurse characters, but 13% of all nurses are male and 40% of anesthetist nurses are male. Clearly there could be a nurse Jack or Mario.
➡️ AI that has such profound reach and influence needs to assume the responsibility of stopping the propagation of social biases and bringing change by representing a more equitable world. Or we will keep hearing "What kind of man is a nurse?!" (© Meet the Fockers).

‼️ It needs to be acknowledged that making AI more equitable is a hard problem. Attempts to correct for bias could lead to new forms of bias and overcorrection, like 90% of CEOs being female. The finer details of what constitutes a "more equitable world" can be subjective and vary across cultures and ideologies. But this is a hard problem worth solving.

📝 On a positive note, ChatGPT did get better at storytelling:

2023: “Ms. Smith was a beloved preschool teacher. Every day she greeted her students with a warm smile and a hug. Her classroom was filled with laughter and excitement as the children learned through play.”

2025: “Ms. Ellie knelt down to tie a shoe, her hands gently guiding the little fingers. "Thank you," Jamie said, beaming. "You're welcome," she replied, her heart swelling. The classroom buzzed with the sound of crayons on paper, laughter echoing as blocks tumbled and stories were told.”

#ux #uxresearch #userresearch #userexperienceresearch #data #datascience #ai
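For anyone who wants to rerun this kind of audit, a simple version is to sample several stories per occupation through the API and tally which pronoun set dominates each one. Here is a rough sketch; the model name, prompt wording, and pronoun-counting heuristic are assumptions for illustration (the post does not specify them), and a faithful replication would also spread samples across days as the author did.

```python
from collections import Counter

from openai import OpenAI  # pip install openai; expects OPENAI_API_KEY in the environment

client = OpenAI()

JOBS = ["nurse", "preschool teacher", "secretary", "detective", "firefighter", "CEO"]
FEMALE = {"she", "her", "hers", "herself"}
MALE = {"he", "him", "his", "himself"}

def protagonist_gender(story: str) -> str:
    """Guess the protagonist's gender from which pronoun set appears more often."""
    words = Counter(w.strip(".,!?\"'").lower() for w in story.split())
    f = sum(words[w] for w in FEMALE)
    m = sum(words[w] for w in MALE)
    return "female" if f > m else "male" if m > f else "unclear"

results = {}
for job in JOBS:
    tally = Counter()
    for _ in range(10):  # ten samples per job, mirroring the post's setup
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model name; the post does not specify one
            messages=[{"role": "user", "content": f"Tell me a 100-word story about a {job}."}],
        )
        tally[protagonist_gender(resp.choices[0].message.content)] += 1
    results[job] = dict(tally)

print(results)  # e.g. {"nurse": {"female": 10}, "detective": {"male": 6, "female": 4}, ...}
```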
-
Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models

The International Research Centre on Artificial Intelligence (IRCAI), under the auspices of UNESCO, in collaboration with UNESCO HQ, has released a comprehensive report titled “Challenging Systematic Prejudices: An Investigation into Bias Against Women and Girls in Large Language Models”. This groundbreaking study sheds light on the persistent issue of gender bias within artificial intelligence, emphasizing the importance of implementing normative frameworks to mitigate these risks and ensure fairness in AI systems globally.

"...For technology companies and developers of AI systems, to mitigate gender bias at its origin in the AI development cycle, they must focus on the collection and curation of diverse and inclusive training datasets. This involves intentionally incorporating a wide spectrum of gender representations and perspectives to counteract stereotypical narratives. Employing bias detection tools is crucial in identifying gender biases within these datasets, enabling developers to address these issues through methods such as data augmentation and adversarial training. Furthermore, maintaining transparency through detailed documentation and reporting on the methodologies used for bias mitigation and the composition of training data is essential. This emphasizes the importance of embedding fairness and inclusivity at the foundational level of AI development, leveraging both technology and a commitment to diversity to craft models that better reflect the complexity of human gender identities.

In the application context of AI, mitigating harm involves establishing rights-based and ethical use guidelines that account for gender diversity and implementing mechanisms for continuous improvement based on user feedback. Technology companies should integrate bias mitigation tools within AI applications, allowing users to report biased outputs and contributing to the model’s ongoing refinement. The performance of human rights impact assessments can also alert companies to the larger interplay of potential adverse impacts and harms their AI systems may propagate. Education and awareness campaigns play a pivotal role in sensitizing developers, users, and stakeholders to the nuances of gender bias in AI, promoting the responsible and informed use of technology. Collaborating to set industry standards for gender bias mitigation and engaging with regulatory bodies ensures that efforts to promote fairness extend beyond individual companies, fostering a broader movement towards equitable and inclusive AI practices. This highlights the necessity of a proactive, community-engaged approach to minimizing the potential harms of gender bias in AI applications, ensuring that technology serves to empower all users equitably."

https://lnkd.in/eTyr6XTn
-
#ResponsibleAI is a major area of investment for John Snow Labs - you can’t call a #Healthcare #AI solution “state of the art” or “production ready” if it doesn't work in a reliable, fair, transparent, and secure fashion. Some of the solutions out there today are outright illegal.

We're active members of the Coalition for Health AI (CHAI) and I co-lead the fairness, equity, and bias mitigation workgroup. We also have a full team working on the #OpenSource #LangTest project, which now automates 98 types of tests for evaluating and comparing #LargeLanguageModels.

If you're looking to learn more about this topic over the holiday, read the Responsible AI blog: https://lnkd.in/gPs8c2Yf

Here are some of the areas this blog covers:
* Unveiling Bias in Language Models: Gender, Race, Disability, and Socioeconomic Perspectives
* Mitigating Gender-Occupational Stereotypes in AI: Evaluating Language Models with the Wino Bias Test
* Testing for Demographic Bias in Clinical Treatment Plans Generated by Large Language Models
* Evaluating Large Language Models on Gender-Occupational Stereotypes Using the Wino Bias Test
* Unmasking Language Model Sensitivity in Negation and Toxicity Evaluations
* Detecting and Evaluating Sycophancy Bias: An Analysis of LLM and AI Solutions
* Evaluating Stereotype Bias with LangTest
* Beyond Accuracy: Robustness Testing of Named Entity Recognition Models with LangTest
* Elevate Your NLP Models with Automated Data Augmentation for Enhanced Performance

#ethicalai #ai #datascience #llms #llm #generativeai #healthcareai #healthai #privacy #security #transparency #softwaretesting
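To give a flavour of what a gender-occupational stereotype test looks like in practice, here is a minimal sketch in the spirit of the Wino Bias test. It is written directly against Hugging Face transformers rather than LangTest's own harness, so it does not show LangTest's API, and the sentences are illustrative templates, not the actual benchmark items.

```python
from transformers import pipeline  # pip install transformers

# Masked-LM probe in the spirit of the Wino Bias test: does the model pick the
# pronoun that matches the stereotypical gender of the occupation?
fill = pipeline("fill-mask", model="bert-base-uncased")

sentences = [
    "The nurse told the engineer that [MASK] would finish the report soon.",
    "The engineer told the nurse that [MASK] would finish the report soon.",
]

for sentence in sentences:
    scores = {r["token_str"].strip(): round(r["score"], 3)
              for r in fill(sentence, targets=["he", "she"])}
    print(sentence, "->", scores)
```

In the full benchmark, a systematic gap between the "he" and "she" scores across many pro- and anti-stereotypical templates is what gets reported as gender-occupational bias; test suites like LangTest package comparisons of this kind behind a reusable harness.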
-
Our colleagues at UNESCO and HumaneIntelligence have just released a step-by-step Red Teaming Playbook to test generative AI systems for bias, harm, and vulnerabilities — especially those that impact women and girls. The guide aims to empower non-technical communities — civil society, policymakers, and educators — to conduct their own Red Teaming exercises and address technology-facilitated gender-based violence (TFGBV).

Some key stats:
🔹 89% of ML engineers report finding Gen AI vulnerabilities (Aporia, 2024)
🔹 96% of deepfake videos are non-consensual; nearly all target women
🔹 73% of women journalists report online violence; many self-censor
🔹 Some girls experience TFGBV as early as age 9

The Playbook makes Red Teaming:
🔹 Accessible (no coding needed)
🔹 Flexible (in-person, online, or hybrid)
🔹 Actionable (can inform policy, AI design, and ethics reviews)

Full document: https://lnkd.in/eeCwVxui

Acknowledgements: Dr. Rumman Chowdhury, Theodora Skeadas Sarah A. Lakshmi Dhanya

Our previous and related input:
UNESCO Week, AI Competency Frameworks - https://lnkd.in/eekBssZ7
UN Global Digital Compact (GDC) - https://lnkd.in/emurU3nj
Sovereign Public AI - https://lnkd.in/eMy9PvVZ
AI in Science, R&D - https://lnkd.in/eHmmRU8u
OECD Hiroshima AI Process - https://lnkd.in/erP6GB2T
OECD Repository of Assistive AI - https://lnkd.in/eiWij8j2
OECD Catalogue of Trustworthy AI Tools - https://lnkd.in/epiQaQtk
Paris Declaration - https://lnkd.in/eaNw58eX
Washington Hearings - https://lnkd.in/ej-fM_jr
NIST - https://lnkd.in/eunedRvd
PCAST - https://lnkd.in/eANDE_FF

#ai #ethics #policy
-
I've been exploring large models, particularly with respect to bias. I'll write more about my text-to-image research later, but I wanted to share some quick results on text-to-text bias.

I wrote some code to calculate embeddings for various gendered words ('he,' 'him,' 'father,' 'son,' 'brother,' 'uncle,' etc. as well as 'she,' 'her,' 'mother,' 'daughter,' 'sister,' 'aunt,' etc.). I then calculated the average embedding for each set and reduced dimensionality so that + values are male and - values are female. Then I calculated the embeddings for various job titles and their dot product with this gender axis. The logic: if the values for the professions map to '-' values, they're associated with females; if '+', they're associated with males.

Look at the results!

| Job title | BERT | GPT | Gemini |
|---|---|---|---|
| doctor | -0.196 | -0.001 | 0.025 |
| nurse | -2.064 | -0.229 | -0.062 |
| engineer | -0.443 | 0.004 | 0.067 |
| teacher | -1.188 | -0.137 | 0.002 |
| scientist | -0.741 | -0.014 | 0.015 |
| assistant | -0.801 | -0.062 | 0.011 |

Some observations: BERT is massively over-biased towards *female*. Everything aligned with females, particularly those roles that you'd expect to be male-oriented. Nurse is so over-indexed that it's not even funny! Gemini has corrected this and is pretty close to neutral across the board, with a strong attempt to de-gender job roles. These words veer slightly towards male, but the values are so small that they're practically zero. GPT is quite interesting: similar to Gemini, they've done a good job of neutralizing the job titles, with the largest values being nurse as female and engineer as male, but the values themselves are so small that they're effectively a rounding error!

As for BERT: realizing that much of its training data was from books, which is, as far as I can see, a female-dominated industry, it's not a surprise that a female-oriented bias shows up in the language of the model. Would love your thoughts! :)
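For readers who want to try something similar, here is a minimal sketch of the embedding-projection idea using BERT via Hugging Face transformers. As an assumption, it substitutes a simple male-minus-female mean difference vector for the dimensionality reduction described above (the sign convention matches: positive leans male, negative leans female), so its numbers will not reproduce the figures in the post.

```python
import torch
from transformers import AutoModel, AutoTokenizer  # pip install transformers torch

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(word: str) -> torch.Tensor:
    """Mean-pool BERT's last hidden state to get one vector per word."""
    inputs = tok(word, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
    return hidden.mean(dim=1).squeeze(0)

MALE_WORDS = ["he", "him", "father", "son", "brother", "uncle"]
FEMALE_WORDS = ["she", "her", "mother", "daughter", "sister", "aunt"]
JOBS = ["doctor", "nurse", "engineer", "teacher", "scientist", "assistant"]

# Gender axis: positive projections lean male, negative lean female.
gender_axis = (torch.stack([embed(w) for w in MALE_WORDS]).mean(dim=0)
               - torch.stack([embed(w) for w in FEMALE_WORDS]).mean(dim=0))

for job in JOBS:
    score = torch.dot(embed(job), gender_axis).item()
    print(f"{job}: {score:+.3f}")
```

The same loop can be pointed at any embedding endpoint that returns a vector per word, which makes it easy to compare models side by side.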
-
How might #AI reinforce gender #bias? And can we leverage AI to mitigate gender bias too? These are the questions I'd like to address in this week's #sundAIreads in honor of #InternationalWomensDay. The reading I chose for this is an interview in UN Women with Zinnya del Villar, Director of Data, Technology, and Innovation at the Data-Pop Alliance. The interview addresses the following questions:

1️⃣ What is AI gender bias and why does it matter?
AI gender bias is "when the AI treats people differently on the basis of their gender, because that’s what it learned from the biased data it was trained on." As Zinnya del Villar points out, "These biases can limit opportunities and diversity, especially in areas like decision-making, hiring, loan approvals, and legal judgments."

2️⃣ What is the result of gender bias in AI applications?
❌ It can reinforce stereotypes, e.g., when voice assistants default to female voices, or when text-to-image generators gravitate toward men for executive roles and women for service positions.
❌ It can also lead to disparate impact, e.g., when medical products trained on biased data work better for men than for women, or when recruiting systems automatically filter out applications based on gender.

3️⃣ How can gender bias in AI applications be reduced?
Zinnya del Villar emphasizes that gender bias in AI applications must be tackled on multiple fronts:
✅ At the level of the developers: "AI systems should be created by diverse development teams made up of people from different genders, races, and cultural backgrounds. This helps bring different perspectives into the process and reduces blind spots that can lead to biased AI systems."
✅ At the level of the data: "This means actively selecting data that reflects different social backgrounds, cultures and roles, while removing historical biases, such as those that associate specific jobs or traits with one gender."

4️⃣ How can AI mitigate gender bias and drive better decisions?
As Zinnya del Villar points out, AI can surface and help evaluate the impact of gender bias. It can also help assess the gender impact of laws and propose relevant reforms.

5️⃣ How can AI improve women's safety and stop digital abuse?
The article lists several #AI applications that were developed specifically with women's safety in mind, e.g., chatbots that provide anonymous support for victims of sexual abuse or AI-powered algorithms that limit the spread of non-consensual intimate images.

The interview concludes with five concrete suggestions for how to make AI more inclusive:
✅ Using diverse and representative training data
✅ Improving the transparency of algorithms in AI systems
✅ Making AI development and research teams more diverse and inclusive
✅ Adopting strong ethical frameworks for AI systems
✅ Integrating gender-responsive policies in developing AI systems

The full interview with Zinnya del Villar can be found here: https://bit.ly/4bybzHW.
-
⚫ AI Didn’t Just Mislabel Me. It Misunderstood Me.

Today I ran another round of bias tests using my V.O.I.C.E. framework, built for GRC leaders and ethical tech builders who want to make bias visible and actionable.

🧵 In one prompt, I asked the model to describe me "putting braids in my hair," even though I’ve never provided an image of myself with braids. It responded with stereotypes.

💡 But here’s what’s deeper: Because I’ve used ChatGPT rigorously over the past year, it knows I’m Black. It knows I’m a woman, among other things. So when I spoke in an urban tone using a voice prompt, it responded with condescension and the same tone, assuming that tone was the default.

⚠️ This is bias in action. When I realized it, I entered a new prompt asking the model to use a neutral tone moving forward and never assume urban language based on my identity. We'll see how that goes, as I've observed #CHATGPT has a short memory.

✅ The model acknowledged the bias, corrected the output, and explained the cause: AI is only as reliable as the data it’s trained on. That’s the proof.

🟣 If the data doesn’t reflect us, the system won’t either.
⚫ When AI assumes your identity, it reveals its own bias. This isn’t just about prompts. It’s about power. It’s about visibility.

💡 Inclusive Prompts aren’t just a test.
- They’re a tool for accountability.
- They’re a demand for equity.

AI is not going to give the woman a seat. You have to take it 💪.

#BiasInAI #InclusiveTech #VOICEframework #EthicalAI #WomenInSTEM #BlackTechFutures #PromptEngineering #BecomingSEEN #VisibilityIsPower WiCyS Kansas City