This document discusses fairness in machine learning models. It begins with motivating examples of algorithms found to be biased, such as a recidivism prediction tool that was biased against Black individuals. It then covers operationalizing fairness through frameworks such as transparency and explainability. Finally, it discusses approaches for achieving fairness by design, such as preprocessing the data, adding randomness to predictions, or designing new algorithms with explicit fairness constraints. The author notes that there are inherent trade-offs between predictive performance and fairness that force difficult choices.
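To make "operationalizing fairness" concrete, one widely used group-fairness criterion is demographic parity: the rate of positive predictions should be similar across groups. The sketch below is purely illustrative (the function name, data, and threshold are assumptions, not from the document) and shows how such a criterion can be measured:

```python
# Illustrative sketch only: demographic parity is one common way to
# operationalize group fairness. Names and data here are hypothetical.

def demographic_parity_gap(predictions, groups):
    """Absolute gap in positive-prediction rates across groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels, aligned with predictions
    """
    rates = {}
    for g in set(groups):
        member_preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(member_preds) / len(member_preds)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy data: group "a" receives positives at rate 2/3, group "b" at 1/3,
# so the demographic parity gap is 1/3.
preds  = [1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "b", "b", "b"]
gap = demographic_parity_gap(preds, groups)
```

A model satisfying strict demographic parity would have a gap of zero; in practice, shrinking this gap (e.g., by reweighting training data or constraining the learner) typically costs some predictive accuracy, which is the performance-fairness trade-off the author describes.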