This document discusses bias in machine learning and how to design systems intentionally so that they reflect organizational values. It defines bias as a systematic influence on decisions that produces results inconsistent with reality. Bias can arise from how data is selected and from latent biases in the training data itself. Although bias can lead to suboptimal answers, bias toward organizational values is not necessarily bad. The document gives examples of testing AI systems to ensure they reflect values such as equal opportunity, customer satisfaction, and environmental stewardship. Testers should understand their organization's values, know how to operationalize them, and verify that the system's recommendations match those values as reflected in proxy training data.
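As a minimal sketch of what "operationalizing" a value like equal opportunity might look like in a test, the snippet below computes per-group selection rates over a system's recommendations and applies the common "four-fifths" adverse-impact heuristic (lowest group rate should be at least 80% of the highest). The function names and the sample data are hypothetical, not from the document; a real test suite would use the organization's own proxy data and chosen fairness criteria.

```python
def selection_rates(decisions):
    """Return the fraction of positive recommendations per group.

    `decisions` is a list of (group, recommended) pairs, where
    `recommended` is True if the system recommended the candidate.
    """
    totals, positives = {}, {}
    for group, recommended in decisions:
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + int(recommended)
    return {g: positives[g] / totals[g] for g in totals}


def passes_four_fifths_rule(decisions):
    """Apply the four-fifths heuristic: the lowest group's selection
    rate must be at least 80% of the highest group's rate."""
    rates = selection_rates(decisions)
    return min(rates.values()) >= 0.8 * max(rates.values())


# Hypothetical sample: group A is recommended 3 of 4 times, group B 2 of 4.
decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", True), ("B", False)]
print(selection_rates(decisions))        # {'A': 0.75, 'B': 0.5}
print(passes_four_fifths_rule(decisions))  # False: 0.5 < 0.8 * 0.75
```

A check like this turns an abstract value into a concrete, repeatable assertion that can run as part of the system's test suite.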