This document discusses how algorithms can discriminate and exhibit bias. It provides examples of gender and racial bias in Google image searches and ad targeting, and notes that algorithms used in criminal risk assessment and job recruiting have also been found to discriminate. Sources of algorithmic bias include biased training data, opaque and complex models, and cultural differences that designers fail to account for. Proposed solutions include anti-discrimination regulation, tools for evaluating and mitigating bias, and designing algorithms with fairness and explainability in mind. While algorithms can amplify existing societal biases, the document remains optimistic that machine learning can also help address discrimination.
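
To make the idea of "tools for evaluating bias" concrete, here is a minimal, self-contained Python sketch of one widely used fairness check: the demographic-parity gap, which compares the rate of favorable outcomes across groups (for instance, the share of each group a hiring model recommends). This is an illustration of the general concept, not a tool described in the source; the function name and example data are hypothetical.

```python
# Illustrative sketch of a demographic-parity check (hypothetical names/data).

def demographic_parity(predictions, groups):
    """Return the favorable-outcome rate for each group.

    predictions: iterable of 0/1 model outputs (1 = favorable outcome)
    groups: iterable of group labels (e.g., a protected attribute)
    """
    totals, positives = {}, {}
    for pred, group in zip(predictions, groups):
        totals[group] = totals.get(group, 0) + 1
        positives[group] = positives.get(group, 0) + pred
    return {g: positives[g] / totals[g] for g in totals}

# Toy example: a model recommends 60% of group A but only 40% of group B.
preds  = [1, 1, 1, 0, 0, 1, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = demographic_parity(preds, groups)
print(rates)                                      # {'A': 0.6, 'B': 0.4}
print(max(rates.values()) - min(rates.values()))  # parity gap: 0.2
```

A gap of zero would mean both groups receive favorable outcomes at equal rates; auditing tools of the kind the document alludes to typically compute this and related metrics (such as equalized odds) to flag models for review, and mitigation techniques then attempt to shrink the gap without unduly harming accuracy.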