
BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations

About

Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop a novel Bayesian extension to the LIME framework, one of the most widely used approaches in XAI, which we call BayLIME. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than the state-of-the-art (LIME, SHAP and GradCAM) through its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate the desirable properties of BayLIME through both theoretical analysis and extensive experiments.

Xingyu Zhao, Wei Huang, Xiaowei Huang, Valentin Robu, David Flynn · 2020
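The core idea can be illustrated with a short sketch: instead of LIME's weighted least-squares surrogate, fit a Bayesian linear model on the kernel-weighted perturbations, so that coefficient priors (e.g., from other XAI or V&V methods) and posterior uncertainty enter the explanation. The sketch below is an illustration only, not the authors' implementation; `baylime_explain` and `predict_fn` are hypothetical names, and scikit-learn's `BayesianRidge` stands in for the modified Bayesian regressor described in the paper.

```python
# Minimal sketch of the BayLIME idea for tabular data (not the paper's code).
# Assumes `instance` is a 1-D NumPy array and `predict_fn` is the black-box
# model's prediction function returning one scalar per perturbed sample.
import numpy as np
from sklearn.linear_model import BayesianRidge


def baylime_explain(instance, predict_fn, num_samples=5000, kernel_width=0.75, rng=None):
    """Fit a Bayesian linear surrogate around `instance` and return the
    posterior mean and standard deviation of each feature coefficient."""
    rng = rng or np.random.default_rng(0)
    d = instance.shape[0]

    # 1. Perturb the instance locally (Gaussian noise, as in tabular LIME).
    X = instance + rng.normal(scale=1.0, size=(num_samples, d))
    y = predict_fn(X)  # black-box outputs the surrogate should imitate

    # 2. Weight samples by proximity with a LIME-style exponential kernel.
    dist = np.linalg.norm(X - instance, axis=1)
    weights = np.exp(-(dist ** 2) / kernel_width ** 2)

    # 3. Bayesian linear regression instead of LIME's weighted least squares;
    #    priors over the coefficients are where external knowledge can enter.
    surrogate = BayesianRidge()
    surrogate.fit(X, y, sample_weight=weights)

    coef_std = np.sqrt(np.diag(surrogate.sigma_))  # posterior uncertainty
    return surrogate.coef_, coef_std
```

Because the prior anchors the posterior mean, repeating such a fit with fresh random perturbations tends to give more consistent coefficients than a plain weighted least-squares fit; in scikit-learn, rough informative priors can be encoded via hyperparameters such as `alpha_init` and `lambda_init`, though the paper's approach to prior specification is more elaborate.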

Related benchmarks

Task | Dataset | Metric | Result | Rank
Local Explanation Generation | CovType | Stability | 92.6 | 14
Explanation Fidelity Estimation | Breast cancer | Fidelity (R2) | 0.739 | 11
Explanation Fidelity Estimation | Diabetes | Fidelity (R2) | 0.894 | 11
Explanation Fidelity Estimation | Digits | Fidelity (R2) | 0.476 | 11
Explanation Fidelity Estimation | Ames Housing | Fidelity (R2) | 0.845 | 11
Explanation Fidelity Estimation | Iris | Fidelity (R2) | 0.624 | 11
Explanation Fidelity Estimation | Wine | Fidelity (R2) | 0.418 | 11
Explanation Fidelity Estimation | CA Housing | Fidelity (R2) | 0.397 | 11
Explanation Fidelity Estimation | CovType | Fidelity (R2) | 0.415 | 11
Explanation Regularity | Breast cancer | Regularity | 84.3 | 11
Showing 10 of 24 benchmark results.
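The fidelity numbers above are R2 scores: how well the local surrogate reproduces the black-box model's outputs on the perturbation neighbourhood. A minimal sketch of that computation is shown below; the benchmark's exact evaluation protocol may differ, and `fidelity_r2`, `surrogate` and `predict_fn` are hypothetical names.

```python
# Hedged sketch of an R2-based fidelity score: agreement between the
# black-box model and the fitted local surrogate on the perturbed samples.
from sklearn.metrics import r2_score


def fidelity_r2(surrogate, predict_fn, X_neighbourhood):
    """R2 between black-box outputs and the surrogate's local approximation."""
    y_black_box = predict_fn(X_neighbourhood)
    y_surrogate = surrogate.predict(X_neighbourhood)
    return r2_score(y_black_box, y_surrogate)
```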
