Responsible Machine Learning

Any machine learning system that an organization builds, uses, or manages should follow core responsible machine learning principles. Three fundamental principles are:

  1. Understanding the models
  2. Protecting the models
  3. Controlling the models


Understanding the models

It is important to understand why a machine learning model made the decision or prediction it did. We do not want ML systems making unfair or unethical decisions, and we want to be able to explain those decisions to the people they affect. Any system that uses a machine learning model should have explainability built in, so that the final model deployed in an AI system is fully explainable. A decision may also have to be made between a highly accurate model, a highly explainable model, or a balanced model that offers a good mix of accuracy and explainability. A data scientist can review the explainability of the final model to confirm that bias has not inadvertently been introduced and that the model does not make unfair decisions or predictions based on age, ethnicity, gender, economic status, education level, or religion, as sketched below.
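To make that review concrete, here is a minimal sketch, assuming a scikit-learn workflow, of how a data scientist might inspect which features drive a model's predictions; the dataset and model are placeholders for illustration only.

```python
# A minimal sketch of reviewing model explainability with permutation
# importance (scikit-learn); the dataset and model here are illustrative.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance shows how strongly each feature drives predictions,
# which helps a reviewer spot features acting as proxies for sensitive attributes.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top_features = sorted(zip(X.columns, result.importances_mean),
                      key=lambda pair: pair[1], reverse=True)[:5]
for name, score in top_features:
    print(f"{name}: {score:.3f}")
```

A reviewer would compare the most influential features against the sensitive attributes listed above before approving the model for deployment.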


Protecting the models

Machine learning models should protect the privacy of the individuals whose data they were trained on. Differential privacy, a technique that injects statistical noise into the data or query results, can be used to protect an individual's records, as illustrated below. Machine learning systems should also encrypt and secure data both in transit and at rest.
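As a minimal illustration, the sketch below applies the Laplace mechanism, a standard building block of differential privacy, to a simple aggregate query; the toy data, domain bounds, and epsilon value are assumptions for demonstration only.

```python
# A minimal sketch of the Laplace mechanism: noise scaled to
# sensitivity / epsilon is added to a query result so that no single
# individual's record can be inferred from the output.
import numpy as np

rng = np.random.default_rng(0)

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of a numeric query."""
    scale = sensitivity / epsilon  # noise grows as the privacy budget shrinks
    return true_value + rng.laplace(loc=0.0, scale=scale)

ages = np.array([34, 45, 29, 52, 41])  # toy dataset (assumed ages between 18 and 90)
true_mean = ages.mean()

# For a mean over a bounded domain, sensitivity is (upper - lower) / n.
sensitivity = (90 - 18) / len(ages)
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=1.0)
print(f"true mean: {true_mean:.1f}, private mean: {private_mean:.1f}")
```

A smaller epsilon gives stronger privacy at the cost of noisier answers, which is the trade-off a responsible ML team has to tune.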


According to a report, 71% of organizations spent more on machine learning for cybersecurity than they did two years earlier.

Controlling the models

Machine learning models should be traceable and reproducible, and they should provide an end-to-end audit trail of the machine learning lifecycle, from data sets, training code, and environments through model deployment. Audit trails should include activity logs that data scientists and machine learning developers can use to diagnose and troubleshoot issues with data sets, training runs, compute targets, and deployments. Training environments can be reproduced to recreate an ML model, and audit trails also help meet regulatory compliance requirements. Model data sheets, built from model metadata, can be used to document the model; a minimal sketch of such metadata follows.
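The snippet below writes a simple model data sheet as JSON; the field names, file paths, and values are hypothetical placeholders rather than any particular platform's schema.

```python
# A minimal sketch of recording model metadata for an audit trail; the field
# names and paths are illustrative, not a specific tool's format.
import hashlib
import json
import platform
from datetime import datetime, timezone

def dataset_fingerprint(path: str) -> str:
    """Hash the training data file so the exact version can be traced later."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

model_card = {
    "model_name": "credit_risk_classifier",          # hypothetical model name
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "dataset_sha256": dataset_fingerprint("data/train.csv"),  # hypothetical path
    "code_version": "git:abc1234",                   # commit used for training
    "environment": {"python": platform.python_version()},
    "metrics": {"auc": 0.91},                        # filled in after evaluation
}

with open("model_card.json", "w") as f:
    json.dump(model_card, f, indent=2)
```

Storing this record alongside the deployed model gives auditors and regulators a single document that ties the model back to its data, code, and environment.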
