This document presents SafeML, an approach for monitoring machine learning classifiers at runtime through statistical difference measures. SafeML applies statistical distance measures such as Kolmogorov-Smirnov, Kuiper, Anderson-Darling, and Wasserstein to estimate a classifier's accuracy on new data whose true labels are unknown. It proposes a human-in-the-loop procedure with three escalation levels: 1) estimating accuracy at runtime, 2) requesting additional data, and 3) requesting human input. Numerical examples show that SafeML can estimate classifier accuracy on 1D and 2D datasets. SafeML could support safety assurance by providing runtime monitoring of classifiers and by explaining classifications through statistical differences.
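To make the core idea concrete, here is a minimal NumPy sketch (not the paper's implementation) of the kind of two-sample Kolmogorov-Smirnov comparison SafeML relies on: the empirical CDF of runtime data is compared against the training data, and a larger distance signals distribution shift, and hence a less trustworthy accuracy estimate. The function name `ks_distance` and the synthetic data are illustrative assumptions.

```python
import numpy as np

def ks_distance(a, b):
    """Two-sample Kolmogorov-Smirnov statistic:
    the maximum absolute gap between the empirical CDFs of a and b."""
    a, b = np.sort(a), np.sort(b)
    all_vals = np.concatenate([a, b])
    # Empirical CDF of each sample evaluated at every observed value.
    cdf_a = np.searchsorted(a, all_vals, side="right") / len(a)
    cdf_b = np.searchsorted(b, all_vals, side="right") / len(b)
    return float(np.max(np.abs(cdf_a - cdf_b)))

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, 1000)    # training-time feature distribution
same = rng.normal(0.0, 1.0, 1000)     # runtime data from the same distribution
shifted = rng.normal(0.8, 1.0, 1000)  # runtime data under distribution shift

# The shifted batch is farther from the training data than the matched batch,
# which is the signal SafeML would use to flag a degraded accuracy estimate.
print(ks_distance(train, same), ks_distance(train, shifted))
```

In a monitoring loop, such a distance would be computed per feature (or on a multivariate extension) for each batch of runtime inputs, with thresholds triggering the escalation levels above.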