From the course: Foundations of Responsible AI

Who is AI developed for?

- Now let's talk about who's currently being prioritized by AI systems. We've looked at different types of AI incidents, policies, and ethical concepts. So given what you know, take a minute to consider who you think we optimize the development of AI for. Make a mental note or jot down your ideas. If you thought of white men first, you'd be correct. Systemically, white men tend to be prioritized the most. They experience fewer quality-of-service harms and representation harms, and are rarely documented as being demeaned by AI systems. The assumptions we make when building AI systems also skew toward the highly educated and the upper middle class. This has resulted in a major call for responsible AI to build equity for the people who have been marginalized by technology the most.

Whether you're working in ethical AI or diverse hiring, a strategy I recommend is to invert the status quo and adjust who we prioritize. For example, if the racial breakdown of your organization is 70% white, 15% Asian, 10% Latino, and 5% Black, we can aim for an inverse ratio that promotes equity. What would equity look like here? Aim to recruit a candidate pool that's 70% Black, 15% Latino, 10% Asian, and 5% white. If you think this isn't possible, consider the methods currently used to attract applicants. Often it's posting on LinkedIn, Indeed, or in job-specific Slack channels, but not in inclusion-focused communities such as Black in AI or LatinX in AI. This is a method we can use to push toward equity while developing our long-term strategies.

If an image dataset you work with is 60% white, collect additional data in bursts and with inverse ratios. Maybe for your context, you want 60% of samples to be Black or Latino. This is highly dependent on locale, where you deploy your models, where your users are, and historical context about who is marginalized and where. For example, people with Latin ancestry aren't underrepresented in Latin America. When we think about language, English dominates research, code, and documentation about AI. We could easily say AI is optimized for English speakers, as we rarely focus on translating code, documentation, and product pages, even when working on globally deployed products. In general, development teams expect users of their tools to be English speakers. And it's far harder to receive training, support, or even find out about AI systems if you don't speak or read English. This limited view makes it hard to create systems that work well for everybody.

AI has come a long way since its 1956 inception at Dartmouth College. Given all that we've learned, does it make sense to continue using easily available historical data? Do we continue to undervalue issues surfaced by users and policymakers? Do we prioritize revenue gains over users' quality of life? We must first come to the conclusion that the way we've developed AI does not serve all people. A narrow group of people, ironically the same group that dominates AI development, is prioritized by AI systems and faces the least harsh and least frequent harms.

For a moment, think about AI systems that aim to reduce the amount of time spent in an airport security line. Some airports are already busier than others, but a system created to help users complete security more quickly will work for some and delay others. The main tools airports have leveraged are biometric, often including facial recognition, retina scanning, and gait detection, which is a pseudoscience.
So for people for whom these tools work well, sure, they'll have a good experience getting through security faster. But for those for whom these tools do not work, such as trans people and women with dark skin, they'll experience anything from delays, to missed flights, to larger misidentification issues. In 2022, the IRS was poised to use the AI tool ID.me to make facial recognition part of the tax filing process for all 330 million Americans. For the millions of Americans for whom facial recognition works poorly, what kinds of financial and legal complications would they face merely because the IRS chose to use facial recognition? Those people had no input on how the system was built, no access to the training data or algorithms, and no choice about whether to use the tool. The potential for damage is alarming, so much so that the work of AI ethicists like Dr. Joy Buolamwini and Dr. Sasha Costanza-Chock pressured the IRS to reconsider the use of facial recognition when filing taxes.

So who should we develop AI for? We should enter this new phase of AI development with a critical eye for identifying and prioritizing marginalized groups and those already harmed by prior AI solutions. This is our duty as technologists. We must consider context, as cultural differences determine what is and is not acceptable across the world. While this makes things more complicated for companies operating globally, it's crucial for developing robust and safe AI systems. In summary, consider context, global location, cultural norms, and past AI mishaps when deciding who to prioritize in development outcomes and fairness evaluations.
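As a rough illustration of the dataset-rebalancing idea mentioned above, the short Python sketch below estimates how many additional samples you would need to collect per group so an existing dataset reaches a chosen target mix without discarding any data. The group names, counts, and target fractions are hypothetical placeholders, not figures from the course.

from math import ceil

def additional_samples_needed(current_counts, target_fractions):
    # Smallest total dataset size at which no group already exceeds its target share.
    required_total = max(
        ceil(count / target_fractions[group])
        for group, count in current_counts.items()
    )
    # Additional samples to collect for each group to reach its target share.
    return {
        group: max(0, round(fraction * required_total) - current_counts.get(group, 0))
        for group, fraction in target_fractions.items()
    }

# Hypothetical example: a dataset that is currently 60% white, rebalanced
# toward inverse ratios (all numbers are made up for illustration).
current = {"white": 6000, "black": 2000, "latino": 1500, "asian": 500}
target = {"white": 0.10, "black": 0.35, "latino": 0.35, "asian": 0.20}
print(additional_samples_needed(current, target))
# -> {'white': 0, 'black': 19000, 'latino': 19500, 'asian': 11500}

One consequence this sketch makes visible: when the overrepresented group already dominates, hitting inverse ratios purely by adding data can require a large collection effort, which is why the course frames this as collecting "in bursts" alongside longer-term strategies.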
