This document summarizes research by investigators at the University of Oxford on understanding algorithmic decisions and machine learning models. It describes how machine learning models are trained on labeled data to make predictions and decisions, and it examines accountability, transparency, and fairness in these models, including how biases in the training data can carry through to model outcomes. It also explores how algorithmic decisions can be explained to the people affected by them, in order to improve both understanding and perceptions of justice.
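To make the training-data point concrete, below is a minimal, illustrative sketch (not taken from the Oxford work itself) of the labeled-data workflow the summary refers to. It uses scikit-learn with entirely synthetic, hypothetical data, and shows how a skew in the historical labels can reappear in the trained model's predictions.

```python
# Minimal sketch: a model fit to labeled examples inherits patterns,
# including skews, present in that training data. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical "applicants": one score feature plus a group indicator.
# The labeling rule approves high-scoring cases far more often for
# group 0 than for group 1, mimicking biased historical decisions.
n = 1000
group = rng.integers(0, 2, size=n)
score = rng.normal(size=n)
approve_prob = np.where(group == 0, 0.8, 0.3) * (score > 0)
labels = (rng.random(n) < approve_prob).astype(int)

# Standard supervised training on the labeled data.
X = np.column_stack([score, group])
model = LogisticRegression().fit(X, labels)

# The trained model reproduces the historical skew: identical scores,
# but different predicted approval probabilities by group.
test = np.array([[1.0, 0], [1.0, 1]])
print(model.predict_proba(test)[:, 1])
```

The point of the sketch is only that nothing in the fitting step corrects for the skew: the model optimizes agreement with the labels it is given, so any bias encoded in those labels becomes part of the learned decision rule unless it is measured and addressed explicitly.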