This document discusses how influence functions can be used to understand and explain the predictions of machine learning models without retraining them. It covers techniques for efficiently approximating the effect of perturbing or removing individual training points, and describes applications of these methods to debugging models, understanding model behavior, and fixing mislabeled examples. The findings emphasize that identifying the most influential training points helps improve model accuracy and reliability.
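As a rough illustration of the core idea, the sketch below estimates the influence of a single training point on the loss at a test point for a logistic regression model, using the standard approximation I(z, z_test) = -∇L(z_test, θ)ᵀ H⁻¹ ∇L(z, θ). This is a minimal NumPy sketch under simplifying assumptions (a small model whose Hessian can be formed and inverted explicitly, with damping added for stability); all function and variable names are illustrative, and for large models one would typically replace the explicit Hessian solve with implicit Hessian-vector-product techniques.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_loss(theta, x, y):
    # Gradient of the logistic loss for a single example (x, y), with y in {0, 1}.
    return (sigmoid(x @ theta) - y) * x

def hessian_loss(theta, X, y, damping=1e-3):
    # Hessian of the mean logistic loss over the training set,
    # with a small damping term to keep it invertible.
    p = sigmoid(X @ theta)
    w = p * (1.0 - p)
    H = (X * w[:, None]).T @ X / len(y)
    return H + damping * np.eye(X.shape[1])

def influence(theta, X_train, y_train, x_i, y_i, x_test, y_test):
    # Approximate effect of upweighting training point i on the test loss:
    # I(z_i, z_test) = -grad L(z_test)^T H^{-1} grad L(z_i).
    H = hessian_loss(theta, X_train, y_train)
    g_test = grad_loss(theta, x_test, y_test)
    g_train = grad_loss(theta, x_i, y_i)
    return -g_test @ np.linalg.solve(H, g_train)
```

In a typical workflow, one would compute this score for every training point against a test prediction of interest and sort by magnitude: points with large negative scores are those whose removal would most increase the test loss (helpful points), while large positive scores flag harmful or potentially mislabeled examples worth reviewing.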