This paper presents research on achieving differential privacy while preserving utility in machine learning classifiers. The researchers applied differential privacy to a dataset of political donations and then trained an AdaBoost ensemble classifier on both the original and the privatized data. They found that differential privacy preserved the data's aggregate statistical properties but introduced classification errors. While increasing the number of weak learners reduced errors on the original data, it did not help on the private data. The study illustrates the challenge of balancing privacy and utility.
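
A minimal sketch of this kind of pipeline, under assumptions not taken from the paper: the Laplace mechanism is used as the privatization step (the paper's actual mechanism, epsilon, and sensitivity are not specified here), a synthetic dataset stands in for the political-donations data, and scikit-learn's AdaBoostClassifier is trained on both versions while the number of weak learners is varied.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for the political-donations dataset (assumed shape).
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def laplace_privatize(X, epsilon=1.0, sensitivity=1.0):
    """Add Laplace noise to each numeric feature.

    epsilon and sensitivity are illustrative values, not the paper's.
    """
    scale = sensitivity / epsilon
    return X + rng.laplace(loc=0.0, scale=scale, size=X.shape)

X_train_priv = laplace_privatize(X_train, epsilon=1.0)

# Compare test error on original vs. privatized training data as the
# number of weak learners grows.
for n_estimators in (10, 50, 100, 200):
    orig = AdaBoostClassifier(n_estimators=n_estimators, random_state=0)
    priv = AdaBoostClassifier(n_estimators=n_estimators, random_state=0)
    orig.fit(X_train, y_train)
    priv.fit(X_train_priv, y_train)
    print(f"{n_estimators:>4} learners | "
          f"original error: {1 - orig.score(X_test, y_test):.3f} | "
          f"private error: {1 - priv.score(X_test, y_test):.3f}")
```

On the original training data, test error typically falls as estimators are added; on the noisy training data, it tends to plateau, which is the qualitative pattern the paper reports.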