From the course: Foundations of Responsible AI
Accountability
- Accountability is an acknowledgement of responsibility for our actions, decisions, and products. We can think of it as a process that takes the inputs of organizational objectives, stakeholders, and technological constraints and uses them to alter our methodology for developing AI. This work allows us to identify risk, make architecture and product decisions that prioritize fairness among users, and hold our organizations accountable, not only for AI incidents that receive bad PR and public backlash, but also for providing algorithmic appeals processes. Accountability is a combination of methodologies that keep companies that create machine learning models from shirking the consequences of meddling with political elections or creating surveillance tech to over-police underserved communities. It takes awareness of social structures and patterns of inequality to identify risky models and unethical use cases. Lastly, and most importantly, model accountability relies on recourse and remediation processes that allow users to appeal automated decisions, learn why a decision was made, and understand what they can do to get a better outcome.

We often create ML to make decisions about people without considering how they'll be impacted by the outcomes of our algorithms. We should provide algorithmic explanations as well as algorithmic appeals processes, especially in high-risk applications. Many organizations fret over how to do this, and it starts with algorithmic transparency: users can't appeal if they don't know an algorithm made a decision about them. Next, whether it's an online form on your webpage or a form attached to user onboarding emails, you need to let users dispute an algorithmic decision, or you're not being accountable for the algorithms you build (a minimal sketch of such an appeal path appears below). I realize I'm calling out 99% of companies modeling data, and given the very visible harms we've unleashed on society, we need to be honest with ourselves about the role we've played in being irresponsible with people's data. But it's not just me: the GDPR now requires companies with a presence in the EU to provide some means for customers to appeal automated decisions.

The industry standard needs to be destroyed and rebuilt. There have been hundreds of talks about data ethics that ease practitioners' minds about responsibility by focusing on intentional harms and data bias, as if practitioners were handcuffed to working on bad data for carceral use cases, or didn't have the privilege to refuse the work. Unfortunately, our focus on ethics has caused many to believe all data or models can be fixed or debiased, and that is not the case.

A key to successful mitigation of ML risk is real accountability. We have to build this into the culture not only of our data teams, but of tech companies at large. Try asking these questions: Who tracks how ML is developed and used at my organization? Who's responsible for auditing our ML systems? Do we have an AI incident response plan? For many organizations today, the answers may be "no one" and "no." If no one's responsible when an ML system fails or gets attacked, then it's possible that no one at the organization has really thought through ML risks. Smaller organizations may not be able to spare an entire full-time employee to monitor ML model risk, but it's important to have an individual or group who will be responsible and held accountable if ML systems misbehave. We can't pretend to be serious about accountability if we offer nothing to users we've harmed.
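To make the transparency, recourse, and dispute ideas above concrete, here is a minimal sketch in Python. It assumes a hypothetical decision store and human-review queue; the names DecisionRecord, Appeal, and submit_appeal are illustrative, not part of any standard library or the course's own tooling.

```python
# A minimal sketch of a decision record plus an appeal path.
# Assumption: DecisionRecord, Appeal, and submit_appeal are hypothetical names,
# not a standard API; a real system would persist these and notify reviewers.
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional


@dataclass
class DecisionRecord:
    """One automated decision, stored so a user can later see and dispute it."""
    decision_id: str
    model_version: str
    outcome: str                 # e.g. "application_denied"
    explanation: list[str]       # human-readable reasons surfaced to the user
    made_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))


@dataclass
class Appeal:
    """A user's dispute of an automated decision, routed to a human reviewer."""
    decision_id: str
    user_statement: str
    status: str = "pending_human_review"
    resolution: Optional[str] = None


def submit_appeal(record: DecisionRecord, user_statement: str) -> Appeal:
    # Disclosure first: users can only appeal a decision they know about,
    # so the explanation travels with the record they are disputing.
    print(
        f"Decision {record.decision_id} ({record.outcome}) was made by model "
        f"{record.model_version} because: {', '.join(record.explanation)}"
    )
    # The appeal itself is just a queued request for human review.
    return Appeal(decision_id=record.decision_id, user_statement=user_statement)


# Example: a denied application that the applicant disputes.
record = DecisionRecord(
    decision_id="dec-001",
    model_version="credit-risk-v3",
    outcome="application_denied",
    explanation=["short credit history", "high debt-to-income ratio"],
)
appeal = submit_appeal(record, "My income changed last month; please re-review.")
print(appeal.status)  # pending_human_review
```

The key design choice in a sketch like this is that the explanation and the dispute channel live on the same record: disclosure and appeal are one flow, not two separate systems.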
In some cases, the remedy is financial compensation. In other cases, we can use alternative methods like discounted pricing offers for paying users. Many in software and reliability engineering rely on engineers working on-call shifts to cover potential incidents, and this is a method we can employ in AI ethics. However, AI incident response should be multidisciplinary, investigating both the technical and social aspects of an issue. The main reason this isn't part of our process now is that few teams have the trained experts required to do this work at scale. This is slowly changing: there are rotational frameworks and training for developing AI incident response teams, even at small organizations.

Accountability is a large chunk of the responsible AI pie. It encompasses what we do when a bad decision is made and how users can appeal automated decisions. Organizations must understand that accountability should be a priority at all levels, from developers and data scientists to executive and legal teams. In the next video, we'll talk about how to make models explainable.