This document introduces LIME (Local Interpretable Model-agnostic Explanations), an algorithm for explaining the predictions of any machine learning classifier. LIME works by fitting a local interpretable model that approximates the behavior of the underlying black-box model around a single prediction. It samples perturbed versions of the input instance, queries the black-box model on those samples, and trains a linear model on them, with each sample weighted by its proximity to the original instance. The weights of this local model then serve as the explanation, indicating which features push the prediction up or down in that neighborhood. LIME helps users understand why they should trust a model's predictions, detect issues with untrustworthy models, and gain insight into how a complex model works.
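
To make the procedure concrete, here is a minimal sketch of the core idea for tabular data, written from scratch with NumPy and scikit-learn rather than the LIME library itself. It assumes the features are already standardized, perturbs the instance with Gaussian noise, weights samples with an exponential kernel on Euclidean distance, and fits a weighted ridge regression whose coefficients act as the explanation. It omits details of the full algorithm such as the interpretable feature representation, discretization, and sparse feature selection; the function name, kernel width, and sampling scheme are illustrative choices, not the library's API.

```python
import numpy as np
from sklearn.linear_model import Ridge


def lime_explain(instance, predict_proba, num_samples=5000,
                 kernel_width=0.75, seed=0):
    """Approximate a black-box classifier locally around `instance`
    with a weighted linear model and return per-feature weights.

    `instance` is a 1-D array of (standardized) feature values;
    `predict_proba` is the black-box model's probability function.
    """
    rng = np.random.default_rng(seed)
    n_features = instance.shape[0]

    # Sample perturbed versions of the input instance.
    perturbations = instance + rng.normal(size=(num_samples, n_features))
    perturbations[0] = instance  # keep the original point itself

    # Query the black-box model on the perturbed samples
    # (probability of the class being explained, here class 1).
    targets = predict_proba(perturbations)[:, 1]

    # Weight each sample by its proximity to the original instance.
    distances = np.linalg.norm(perturbations - instance, axis=1)
    weights = np.exp(-(distances ** 2) / (kernel_width ** 2))

    # Fit the interpretable linear model on the weighted samples.
    local_model = Ridge(alpha=1.0)
    local_model.fit(perturbations, targets, sample_weight=weights)

    # The coefficients are the explanation: how strongly each feature
    # influences the black-box prediction in this local neighborhood.
    return local_model.coef_
```

In practice you would pass a trained classifier's `predict_proba` (for example from a scikit-learn random forest) and the instance you want explained; the returned coefficients can be sorted by absolute value to surface the most influential features, which is what the explanation presented to the user is built from.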