From the course: Foundations of Responsible AI
Risk mitigation in AI
- Let's discuss risk in AI systems and ways to reduce the harms caused by unfair models. There's a level of risk associated with each AI system and its deployment, but we rarely discuss ways to mitigate both the fairness risks and the legal risks those systems carry. Risk is highly connected to power, and as organizations with great power wield even more data, we must acknowledge that our willingness to put user privacy and safety at risk is far too high. The consequences of bad automated decisions are serious, and there are organizational consequences as well. According to a study by DataRobot, one in three companies has suffered financial, reputational, or engineering harm when its AI systems made biased decisions. Confronting risk in AI requires us to also confront our relationships with institutional powers. Here's a quote that sums this up really well: "We should be thinking about the relationship we have with institutional powers; do we enhance their hegemony, do we stand by and do nothing, or do we actively resist it?"

One way to mitigate the risk of AI is to deploy models in simulated or shadow environments. This allows you to understand how a model will predict against real-world data and to measure the changes to a dataset's distribution caused by deploying the model. A shadow deployment is where we release a new machine learning model in stealth mode to assess how it performs on production data, but without actually using its predictions within the business (a rough code sketch of this pattern follows below).

One type of risk to consider in artificial intelligence is model underperformance. When models don't perform as advertised or as expected, the organizations developing them commonly suffer financial and reputational harm. It's estimated that 85% of machine learning projects fail, and failure in these cases is caused by one or more of the following sources. First, construct validity: many projects are misaligned from the start because teams misinterpret what the data represents or draw spurious correlations between the available data and what they intend to predict. Sometimes projects are simply not technically feasible, and after business leaders decide that enough resources have been invested, they lose confidence and abandon the project entirely. Other projects fail because there isn't enough expertise on the team, including domain expertise, to point out mistakes a novice wouldn't identify but an experienced practitioner in that domain would. Failure is a norm in machine learning. We're dealing with models that can give us entirely different predictions the next time we run them. Risk is inherent, but we hardly factor that in when pushing models to production.

You might be wondering what some of the ways to reduce this risk are. The first is radical corporate transparency. One organizational goal that can establish you as a leader in responsible AI is being more transparent about how you build your AI than your competitors are. One great way to do this is involving users in the design and development lifecycle. This can take many forms, including user preferability tests to understand how a test group of users reacts to proposed new products or new product features. We can also reassess the ways in which we provide model explanations to people in specialized roles like radiologists, judges, and doctors.
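To make the shadow deployment idea above more concrete, here is a minimal Python sketch. It assumes the production and shadow models expose a scikit-learn-style .predict() method; the function names, the logging setup, and the Kolmogorov-Smirnov drift check are illustrative choices of mine, not something the course prescribes.

```python
# Minimal sketch of a shadow (stealth) deployment, under the assumptions above.
import logging

import numpy as np
from scipy.stats import ks_2samp

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("shadow_deployment")


def serve_request(features, production_model, shadow_model):
    """Return the production model's prediction; the shadow model predicts on
    the same production data, but its output is only logged, never acted on."""
    live_prediction = production_model.predict(features)

    # The challenger sees identical inputs, in stealth mode.
    shadow_prediction = shadow_model.predict(features)
    logger.info("shadow=%s live=%s", shadow_prediction, live_prediction)

    return live_prediction


def distribution_shift(training_scores, production_scores):
    """Compare the score distribution seen at training time with what the model
    produces in production (two-sample Kolmogorov-Smirnov test as one option)."""
    result = ks_2samp(np.asarray(training_scores), np.asarray(production_scores))
    return result.statistic, result.pvalue
```

Because the shadow model's predictions are logged rather than used, they can later be compared offline against real outcomes and against the incumbent model before anyone decides whether to promote the new model.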
While human-in-the-loop methods aim to reduce the bias that algorithms introduce unintentionally, explainability is complex: the way engineers think specialists will want to read explanations may not be the most effective way to help radiologists and judges make better decisions with AI tools. Teams can also shift priorities and performance metrics toward ethical goals, for example by tracking a fairness metric alongside accuracy (as sketched at the end of this section). This is a great way to inspect how changing power dynamics can lead to better outcomes for users. To reduce data privacy risk, organizations should ask users to opt in to data collection and data sharing services rather than requiring them to opt out. When the default is that users must opt out, they have little reason to see your organization as trustworthy. With an opt-in approach you may receive far fewer examples, but the data from users who have given informed consent through the active step of opting in is often of higher quality. Strong documentation of key decisions is also required to understand how risky decisions get made. By documenting and storing these decisions somewhere easy to read and easy to access, we have a better chance of identifying the causes of an AI incident, which gives us insight into what needs improvement.

So transparency, deploying models in stealth mode, and understanding that failure comes with the domain are ways for us to mitigate the risks associated with developing AI. We can't remove risk from the equation, but we can be transparent with stakeholders and users about the levels of risk we tolerate and expose them to.
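Picking up the earlier point about shifting performance metrics toward ethical goals, here is one way that can look in practice. Demographic parity difference is used purely as an illustrative fairness metric, and the data is made up; the course does not name a specific metric.

```python
# Sketch: report a fairness metric next to a standard performance metric so a
# fairness regression is as visible as an accuracy regression. Demographic
# parity difference is an illustrative choice, not the course's prescription.
import numpy as np


def accuracy(y_true, y_pred):
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))


def demographic_parity_difference(y_pred, sensitive_attr):
    """Absolute gap in positive-prediction rates between two groups (0/1).
    A value closer to 0 means predictions are more evenly distributed."""
    y_pred = np.asarray(y_pred)
    sensitive_attr = np.asarray(sensitive_attr)
    rate_group_0 = y_pred[sensitive_attr == 0].mean()
    rate_group_1 = y_pred[sensitive_attr == 1].mean()
    return float(abs(rate_group_0 - rate_group_1))


# Hypothetical predictions and group labels, for illustration only.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group = np.array([0, 0, 1, 1, 0, 1, 1, 0])

print("accuracy:", accuracy(y_true, y_pred))
print("demographic parity difference:", demographic_parity_difference(y_pred, group))
```

Reporting both numbers side by side makes the ethical goal part of the same dashboard the team already watches, rather than an afterthought.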