
A Modified Perturbed Sampling Method for Local Interpretable Model-agnostic Explanation

About

Explainability is a gateway between Artificial Intelligence and society, as current popular deep learning models are generally weak at explaining their reasoning process and prediction results. Local Interpretable Model-agnostic Explanation (LIME) is a recent technique that explains the predictions of any classifier faithfully by learning an interpretable model locally around the prediction. However, the sampling operation in the standard implementation of LIME is flawed: perturbed samples are generated from a uniform distribution, ignoring the complicated correlations between features. This paper proposes a novel Modified Perturbed Sampling operation for LIME (MPS-LIME), in which sampling is formalized as a clique-set construction problem. In image classification, MPS-LIME converts the superpixel image into an undirected graph. Various experiments show that the MPS-LIME explanation of the black-box model achieves much better performance in terms of understandability, fidelity, and efficiency.
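The abstract's key idea is to replace LIME's independent, uniform perturbation of superpixels with sampling that respects spatial structure, via an undirected graph over superpixels. The sketch below illustrates that idea in minimal form: build a 4-neighbourhood adjacency graph over a superpixel label map, then generate keep/drop masks that remove a superpixel together with its graph neighbours. The helper names (`superpixel_graph`, `perturb_connected`) are hypothetical, and the paper's actual clique-set construction may differ in detail; this is only a correlation-aware sampling sketch, not MPS-LIME itself.

```python
import numpy as np

def superpixel_graph(labels):
    """Undirected adjacency edges between superpixels whose pixels
    touch under a 4-neighbourhood (hypothetical helper)."""
    edges = set()
    h, w = labels.shape
    for i in range(h):
        for j in range(w):
            for ni, nj in ((i + 1, j), (i, j + 1)):
                if ni < h and nj < w and labels[i, j] != labels[ni, nj]:
                    edges.add(tuple(sorted((int(labels[i, j]),
                                            int(labels[ni, nj])))))
    return edges

def perturb_connected(labels, edges, n_samples, seed=0):
    """Binary keep/drop masks over superpixels: each sample drops a
    random superpixel together with its graph neighbours, so spatially
    correlated regions are perturbed together (unlike the independent
    uniform flips of standard LIME)."""
    rng = np.random.default_rng(seed)
    ids = [int(s) for s in np.unique(labels)]
    neigh = {s: {b if a == s else a for a, b in edges if s in (a, b)}
             for s in ids}
    masks = []
    for _ in range(n_samples):
        s = int(rng.choice(ids))
        drop = {s} | neigh[s]          # seed superpixel plus neighbours
        masks.append(np.array([0 if v in drop else 1 for v in ids]))
    return np.stack(masks)
```

On a 4x4 image split into four quadrant superpixels, the graph has the four expected edges and every mask zeroes out a connected group rather than an arbitrary independent subset; the masked samples would then be fed to the black-box model and a local surrogate fit, as in standard LIME.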

Sheng Shi, Xinfeng Zhang, Wei Fan • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Local Explanation Generation | CovType | Stability | 95.4 | 14
Explanation Regularity | Iris | Regularity | 0.761 | 11
Explanation Regularity | Digits | Regularity | 62.2 | 11
Explanation Regularity | CA Housing | Regularity | 0.812 | 11
Explanation Regularity | Ames Housing | Regularity | 85.3 | 11
Explanation Fidelity Estimation | Diabetes | Fidelity (R2 Score) | 0.84 | 11
Explanation Fidelity Estimation | Ames Housing | Fidelity (R2) | 0.83 | 11
Explanation Regularity | Diabetes | Regularity | 90.5 | 11
Explanation Stability | CA Housing | Stability | 0.973 | 11
Explanation Fidelity Estimation | Breast cancer | Fidelity (R2) | 0.612 | 11
(Showing 10 of 24 rows)
