From the course: Foundations of Responsible AI

Inclusivity

- One of the many proposed ways to address harmful biases in machine learning is to diversify the teams creating ML models. While diversifying hiring alone won't change the current state of development, creating more balanced teams responsible for developing products might. Many underrepresented groups, such as women, non-binary people, racial minorities, and LGBTQ communities, often feel excluded from AI communities. Consider the gender classifiers that only predict along the gender binary, or how, on a broad scale, computer vision applications work poorly for Black and darker-skinned individuals. These exclusions also exist in the social circles of those building AI.

It's not enough to promote diversity campaigns without recognizing that the environment is already biased. AI teams are not so different from software engineering teams, and commonly there are few women, racial minorities, or gender minorities to be found. While diversity is a good thing, we often fail to account for the negative influence of white male-dominated groups and how they've historically reacted to advice from minority members. There are a variety of reasons it's hard for groups to be inclusive and to weigh the input of minority members on product teams heavily. Often, as we begin to diversify teams, there may be only a handful of minority members, and they're often treated unfairly. Examples include being held to higher standards, whether technical or non-technical, being dismissed as dramatic or radical when they speak up, or being silenced by louder, more powerful voices. This contributes to less-than-ideal applications of AI, even when minority group members are on development teams.

Additionally, we place these individuals in a tough spot. On one hand, one person can never speak for the experiences of a large group of people. On the other, we expect only minority members to care about how their specific groups are affected by technology, instead of encouraging all people to learn and understand. This additional work can be draining for people who are already underestimated and often fighting stigmas about their identities.

So to further promote inclusivity, we must address the characteristics that lead to exclusion. First, we must remove harmful and biased actors from development teams, have zero tolerance for microaggressions, and promote equity by lowering the barriers to entry for marginalized people. On a professional level, we must value the work of sociologists, behavioral scientists, anthropologists, user experience designers, and qualitative researchers. This means finding new ways to work with these professional skill sets on a dynamic product or engineering team. It's important that the burden of identifying harmful applications doesn't rest solely with members of marginalized groups, and is instead shared across a team of people who have extensively researched minority populations and how technology can impact them. To create more balanced teams, we need a range of professionals with both technical and sociological experience. Often, a product team that leverages the strengths of various non-technical roles is more successful at mitigating the harms caused by biased and exclusive models.

Some ways to promote inclusivity include consulting domain experts across a wide range of disciplines, testing among a diverse base of users, developing in conjunction with ethics boards, and developing internal review processes to ensure research projects that involve human subjects meet ethical standards. A sketch of what testing across a diverse user base can look like follows below.
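As one concrete illustration of testing among a diverse base of users, the snippet below computes a model's accuracy separately for each demographic subgroup rather than as a single aggregate score, which is how gaps like the ones described above get caught. This is a minimal Python sketch, not part of the course: the group labels, the example data, and the accuracy_by_group helper are all hypothetical.

# A minimal sketch of disaggregated evaluation: compute accuracy
# separately for each demographic subgroup instead of reporting one
# aggregate score. The group labels and data here are hypothetical.
from collections import defaultdict

def accuracy_by_group(y_true, y_pred, groups):
    """Return {group: accuracy} computed over each subgroup's examples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for label, pred, group in zip(y_true, y_pred, groups):
        total[group] += 1
        correct[group] += int(label == pred)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical evaluation data: one entry per test example.
y_true = [1, 0, 1, 1, 0, 1, 0, 1]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["lighter", "lighter", "lighter", "darker",
          "darker", "darker", "lighter", "darker"]

for group, acc in sorted(accuracy_by_group(y_true, y_pred, groups).items()):
    print(f"{group}: {acc:.0%}")
# A large gap between subgroups (100% vs. 50% in this toy data) is
# exactly the kind of failure a single aggregate accuracy would hide.

The point of the sketch is the practice, not the code: evaluation results should be broken out along the dimensions where the section's examples (gender classification, skin tone) show models failing.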
We've covered a lot of AI principles in this chapter, but there are still great resources to learn more. I recommend you check out the Algorithmic Justice League and Data for Black Lives to put these principles into action.
