
Enhancing LIME using Neural Decision Trees

About

Interpreting complex machine learning models is a critical challenge, especially for tabular data, where model transparency is paramount. Local Interpretable Model-Agnostic Explanations (LIME) has become one of the most popular frameworks for interpretable machine learning and has inspired many extensions. While the traditional surrogate models used in LIME variants (e.g., linear regression and decision trees) offer a degree of stability, they can struggle to faithfully capture the complex non-linear decision boundaries inherent in many sophisticated black-box models. This work contributes toward bridging the gap between high predictive performance and interpretable decision-making. Specifically, we propose NDT-LIME, a LIME variant that integrates Neural Decision Trees (NDTs) as surrogate models. By leveraging the structured, hierarchical nature of NDTs, our approach aims to provide more accurate and meaningful local explanations. We evaluate its effectiveness on several benchmark tabular datasets, showing consistent improvements in explanation fidelity over traditional LIME surrogates.
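
For intuition, below is a minimal sketch of the kind of procedure the abstract describes: perturb the instance, query the black box, and fit a proximity-weighted soft (neural) decision tree as the local surrogate. Everything in it (SoftDecisionTree, ndt_lime_explain, the kernel width, the tree depth) is an illustrative assumption for this sketch, not the authors' implementation or API.

# Minimal sketch of LIME-style local explanation with a soft (neural)
# decision tree surrogate. All names and hyperparameters are illustrative
# assumptions, not the paper's actual method or API.
import numpy as np
import torch
import torch.nn as nn


class SoftDecisionTree(nn.Module):
    """Soft binary decision tree: each internal node routes the input
    left/right with a sigmoid gate; the prediction is the path-probability-
    weighted sum of scalar leaf values."""

    def __init__(self, n_features, depth=2):
        super().__init__()
        self.depth = depth
        n_internal = 2 ** depth - 1            # internal (decision) nodes
        n_leaves = 2 ** depth                  # leaf nodes
        self.gates = nn.Linear(n_features, n_internal)   # one gate per node
        self.leaf_values = nn.Parameter(torch.zeros(n_leaves))

    def forward(self, x):
        p_right = torch.sigmoid(self.gates(x))           # (batch, n_internal)
        # Propagate path probabilities level by level down to the leaves.
        path = torch.ones(x.shape[0], 1, device=x.device)
        for level in range(self.depth):
            start = 2 ** level - 1
            g = p_right[:, start:start + 2 ** level]      # gates at this level
            left, right = path * (1.0 - g), path * g
            path = torch.stack([left, right], dim=-1).reshape(x.shape[0], -1)
        return path @ self.leaf_values                    # (batch,)


def ndt_lime_explain(black_box_predict, x, n_samples=1000, kernel_width=0.75,
                     depth=2, epochs=300, lr=0.05, seed=0):
    """Fit a soft decision tree to the black box in a neighborhood of x.

    black_box_predict: callable mapping an (n, d) array to (n,) predictions.
    Returns the fitted surrogate and its proximity-weighted R^2 (fidelity)
    on the sampled neighborhood.
    """
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # Perturb around the instance (assumes standardized features).
    Z = x + rng.normal(scale=1.0, size=(n_samples, d))
    y = black_box_predict(Z)
    # Exponential proximity kernel, as in standard LIME.
    dist = np.linalg.norm(Z - x, axis=1)
    w = np.exp(-(dist ** 2) / (kernel_width ** 2 * d))

    Zt = torch.tensor(Z, dtype=torch.float32)
    yt = torch.tensor(y, dtype=torch.float32)
    wt = torch.tensor(w, dtype=torch.float32)

    tree = SoftDecisionTree(d, depth=depth)
    opt = torch.optim.Adam(tree.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = (wt * (tree(Zt) - yt) ** 2).mean()   # proximity-weighted MSE
        loss.backward()
        opt.step()

    with torch.no_grad():
        pred = tree(Zt)
        ss_res = (wt * (pred - yt) ** 2).sum()
        y_mean = (wt * yt).sum() / wt.sum()
        ss_tot = (wt * (yt - y_mean) ** 2).sum()
        fidelity = 1.0 - (ss_res / ss_tot).item()
    return tree, fidelity

The weighted R^2 computed at the end is one common way to quantify a local surrogate's fidelity to the black box; the paper's exact evaluation protocol is not specified here.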

Mohamed Aymen Bouyahia, Argyris Kalogeratos • 2026

Related benchmarks

Task                             Dataset        Metric                 Result   Rank
Local Explanation Generation     CovType        Stability              93.1     14
Explanation Fidelity Estimation  Iris           Fidelity (R2 Score)    0.86     11
Explanation Fidelity Estimation  Wine           Fidelity (R2 Score)    0.518    11
Explanation Fidelity Estimation  Digits         Fidelity (R2 Score)    0.577    11
Explanation Fidelity Estimation  CovType        Fidelity (R2 Score)    0.632    11
Explanation Fidelity Estimation  CA Housing     Fidelity (R2 Score)    0.96     11
Explanation Fidelity Estimation  Diabetes       Fidelity (R2 Score)    0.92     11
Explanation Regularity           Breast cancer  Regularity             91.5     11
Explanation Regularity           Wine           Regularity             91       11
Explanation Regularity           Diabetes       Regularity             97.8     11
Showing 10 of 31 rows
