This paper proposes Local Outlier Detection with Interpretation (LODI), a method that detects outliers and explains their anomalousness simultaneously. LODI first selects a neighboring set for each outlier candidate using entropy measures. It then computes an anomaly degree for each object based on its deviation from its neighbors in a learned one-dimensional subspace. Finally, LODI interprets each outlier by identifying a small set of influential features. Experiments on synthetic and real-world data show that LODI outperforms competing methods in outlier detection while providing intuitive feature-based explanations. However, LODI is computationally expensive and assumes linear separability; addressing these limitations is left to future work.
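The pipeline described above (neighbor selection, deviation scoring in a learned 1D subspace, feature-based interpretation) can be sketched roughly as follows. This is an illustrative simplification, not the paper's exact algorithm: plain Euclidean k-NN stands in for the entropy-based neighbor selection, and the learned direction is the closed-form maximizer of the point's deviation relative to its neighbors' spread (a Mahalanobis-style criterion); the function name and parameters are hypothetical.

```python
import numpy as np

def lodi_sketch(X, k=10, n_top=2, reg=1e-6):
    """Illustrative LODI-style scoring (simplified, not the paper's method).

    For each point x_i, pick k nearest neighbors, then find the 1D
    direction w maximizing (w.d)^2 / (w' S w), where d is x_i's
    deviation from the neighbor mean and S is the neighbor covariance.
    The closed-form maximizer is w proportional to S^{-1} d.
    Returns an anomaly score and the indices of the most influential
    features (largest absolute weights in w) per point.
    """
    n, p = X.shape
    scores = np.empty(n)
    feats = []
    for i in range(n):
        # Plain Euclidean k-NN stands in for entropy-based neighbor selection.
        dist = np.linalg.norm(X - X[i], axis=1)
        nbrs = np.argsort(dist)[1:k + 1]      # skip self at index 0
        N = X[nbrs]
        d = X[i] - N.mean(axis=0)             # deviation from neighbor mean
        S = np.cov(N, rowvar=False) + reg * np.eye(p)  # regularized covariance
        w = np.linalg.solve(S, d)             # optimal 1D projection direction
        w /= np.linalg.norm(w) + 1e-12
        # Anomaly degree: deviation of x_i from neighbors in the 1D subspace.
        scores[i] = (w @ d) ** 2 / (w @ S @ w)
        # Interpretation: features carrying the largest projection weights.
        feats.append(np.argsort(-np.abs(w))[:n_top].tolist())
    return scores, feats
```

On data with a single clear outlier, the outlier should receive the highest score, and its explanation should point to the features along which it deviates.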