BayLIME: Bayesian Local Interpretable Model-Agnostic Explanations
About
Given the pressing need for assuring algorithmic transparency, Explainable AI (XAI) has emerged as one of the key areas of AI research. In this paper, we develop BayLIME, a novel Bayesian extension to LIME, one of the most widely used frameworks in XAI. Compared to LIME, BayLIME exploits prior knowledge and Bayesian reasoning to improve both the consistency of repeated explanations of a single prediction and the robustness to kernel settings. BayLIME also exhibits better explanation fidelity than the state of the art (LIME, SHAP and GradCAM) through its ability to integrate prior knowledge from, e.g., a variety of other XAI techniques, as well as verification and validation (V&V) methods. We demonstrate the desirable properties of BayLIME through both theoretical analysis and extensive experiments.
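To illustrate the core idea, the sketch below contrasts LIME's weighted least-squares surrogate with a Bayesian linear surrogate in the spirit of BayLIME. This is a minimal illustration, not the paper's implementation: the black-box function, sampling scale, and kernel width are arbitrary assumptions, and scikit-learn's `BayesianRidge` default prior stands in for the informative priors BayLIME would inject.

```python
import numpy as np
from sklearn.linear_model import BayesianRidge, LinearRegression

# Hypothetical black-box model to explain (stand-in for any real model).
def black_box(X):
    return 3.0 * X[:, 0] - 2.0 * X[:, 1] + 0.5 * X[:, 2] ** 2

rng = np.random.default_rng(0)
x0 = np.array([1.0, 1.0, 1.0])  # instance whose prediction we explain

# LIME-style local sampling: perturb around x0, weight samples by proximity.
X = x0 + rng.normal(scale=0.5, size=(500, 3))
y = black_box(X)
dist = np.linalg.norm(X - x0, axis=1)
kernel_width = 0.75                      # arbitrary choice for illustration
w = np.exp(-(dist ** 2) / kernel_width ** 2)

# LIME: weighted least squares -> a point estimate of the local coefficients.
lime_surrogate = LinearRegression().fit(X, y, sample_weight=w)

# BayLIME-style surrogate: Bayesian linear regression, whose prior
# regularises the coefficients and yields a posterior rather than a
# point estimate, which is what stabilises repeated explanations.
bay_surrogate = BayesianRidge().fit(X, y, sample_weight=w)

print("LIME coefficients:   ", np.round(lime_surrogate.coef_, 2))
print("BayLIME coefficients:", np.round(bay_surrogate.coef_, 2))
```

Both surrogates recover the local gradient of the black box around `x0` (roughly `[3, -2, 1]` here); the difference is that the Bayesian fit combines the sampled evidence with a prior, so informative priors from other XAI or V&V sources can shift and stabilise the posterior coefficients.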
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Local Explanation Generation | CovType | Stability | 92.6 | 14 |
| Explanation Fidelity Estimation | Breast cancer | Fidelity (R2) | 0.739 | 11 |
| Explanation Fidelity Estimation | Diabetes | Fidelity (R2 Score) | 0.894 | 11 |
| Explanation Fidelity Estimation | Digits | Fidelity (R2 Score) | 0.476 | 11 |
| Explanation Fidelity Estimation | Ames Housing | Fidelity (R2) | 0.845 | 11 |
| Explanation Fidelity Estimation | Iris | Fidelity (R2 Score) | 0.624 | 11 |
| Explanation Fidelity Estimation | Wine | Fidelity (R2 Score) | 0.418 | 11 |
| Explanation Fidelity Estimation | CA Housing | R2 Score (Fidelity) | 0.397 | 11 |
| Explanation Fidelity Estimation | CovType | Fidelity (R2) | 0.415 | 11 |
| Explanation Regularity | Breast cancer | Regularity | 84.3 | 11 |