From the course: Agentic AI: Building Data-First AI Agents
Responsible AI and data
- A company is starting from scratch, and they're going to build AI into their processes. How, in your mind, would you build out their data governance and data structures so that they're ready both for the AI we have now and for where we're going next with it?

- The building blocks for applying any sort of governance here start with the responsible AI principles, and those center on transparency, accountability, inclusiveness, privacy and security, reliability and safety, and fairness. Fairness is the most important: even when you build an application without AI, you still have to consider fair use of that application, and that applies here today too. But with generative AI, we have to be extra careful. So with the governance, the policies, the procedures, we have to keep in mind that we're training a machine to respond and predict outcomes, and it could be fed the wrong information, in a sense creating unfair or biased output.

I would say we start with the basics of transparency and make sure all the components are there, so that you can explain the AI model that's being used, explain the results, and demystify the mechanisms behind it. That's one way of using this governance framework to build trust.

Second is user education. A lot of people still do not understand the intricacies of how these mathematical models work. What is the probability of getting the right result? How relevant is the outcome? So you have to educate users about how AI works and about the key metrics they should be looking out for.

Feedback mechanisms are another. When we build this governance platform or framework, you have to think about the feedback mechanisms you can put in place so that you capture the likes and the dislikes, the "uh-oh, this doesn't sound right or doesn't feel right, it needs to be tuned or changed." So those feedback mechanisms have to be put in place. Some teams are using great techniques with reinforcement learning, using AI itself on AI to improve. But nothing beats the human in the loop, right? Human input can really tune it and make it more relevant and better for us.

And then there are the control measures you need to put in place to ensure the safety, reliability, and ethical alignment of your specific application. And obviously, you need some way to keep this in a feedback loop, a control loop of continuous improvement, because this is not a one-time activity. Once you build an app, you have to be responsible for that app, be accountable for that (indistinct) AI, continue to feed it the right data, continue to monitor it, and make sure it's in the right shape.

We have gone a little bit beyond that at Microsoft. Personally, I want to mention that content moderation is a big activity for us. We started long ago with the principles, and content moderation and content safety are critical to keep in mind. Therefore, in our particular scenario, we released content safety monitoring tools and policy tools. The Azure AI Content Safety tool allows you to look at harmful content; identify the categories of hate, sexual content, self-harm, and violence; categorize it; and make sure the data you're using, the training you're giving it, and the input and output are all being delivered safely, without any harm to the user.
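As a concrete illustration of the screening step described above, here is a minimal sketch in Python using the azure-ai-contentsafety SDK to check text against the four harm categories mentioned. The endpoint, key, and severity threshold are placeholder assumptions, not values from the course:

# A minimal sketch of screening text with Azure AI Content Safety.
# ENDPOINT, KEY, and SEVERITY_THRESHOLD are hypothetical placeholders;
# substitute the values from your own Azure resource and policy.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
KEY = "<your-content-safety-key>"
SEVERITY_THRESHOLD = 2  # assumed policy: reject anything above low severity

client = ContentSafetyClient(ENDPOINT, AzureKeyCredential(KEY))

def is_safe(text: str) -> bool:
    """Analyze text across the hate, sexual, self-harm, and violence
    categories and pass it only if every severity stays at or below
    the threshold."""
    response = client.analyze_text(AnalyzeTextOptions(text=text))
    return all(
        (item.severity or 0) <= SEVERITY_THRESHOLD
        for item in response.categories_analysis
    )

# Screen both what goes into the model and what comes back out.
user_prompt = "Tell me about data governance for AI agents."
if is_safe(user_prompt):
    print("Input passed content safety checks.")

The same check can be applied to the model's response before it reaches the user, covering both the input and output paths the speaker describes.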
If I stuck to a regular data governance framework, it would not focus on content safety; it wouldn't focus on the actual data inside the bucket, right? But with AI, we have to be extra careful and make sure the application is delivering information with the right kind of metrics in place. That's the best way to build trust, and that's the best way to build an application that people rely on and use on a day-to-day basis.
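To make the feedback mechanism and metrics discussed above concrete, here is a minimal sketch of capturing likes and dislikes on agent responses and deriving one simple monitoring metric for the continuous-improvement loop. The schema, names, and in-memory store are illustrative assumptions, not a specific product's design:

# A minimal sketch of a human-in-the-loop feedback mechanism:
# record likes/dislikes on agent output so reviewers can flag responses
# that "don't sound right" and track a simple quality metric over time.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ResponseFeedback:
    response_id: str          # which agent output this judgment refers to
    liked: bool               # thumbs up / thumbs down
    comment: str = ""         # optional "this doesn't feel right" note
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

feedback_log: list[ResponseFeedback] = []  # stand-in for a real store

def record_feedback(response_id: str, liked: bool, comment: str = "") -> None:
    """Append a human judgment to the log that tuning reviews draw from."""
    feedback_log.append(ResponseFeedback(response_id, liked, comment))

def dislike_rate() -> float:
    """One simple metric to watch as part of the continuous-improvement loop."""
    if not feedback_log:
        return 0.0
    return sum(not fb.liked for fb in feedback_log) / len(feedback_log)

record_feedback("resp-001", liked=False, comment="Tone feels biased.")
print(f"Dislike rate: {dislike_rate():.0%}")

In practice the log would feed periodic review and retraining rather than live in memory, but the shape of the loop, capture, measure, tune, is the same.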