
Trade-off Between Efficiency and Consistency for Removal-based Explanations

About

Most prominent explanation methods, such as SHAP and LIME, are removal-based: they estimate the impact of each feature by simulating model predictions with certain features removed. These methods primarily enforce efficiency with respect to the original input, which in general leads to inconsistent explanations. In this paper, we show that such inconsistency is inherent to the approach by establishing the Impossible Trinity Theorem: interpretability, efficiency, and consistency cannot hold simultaneously. Since an ideal explanation is therefore unattainable, we propose interpretation error as a metric that quantifies both inefficiency and inconsistency. Building on this, we present two novel algorithms based on the standard polynomial basis that minimize interpretation error. Empirically, the proposed methods reduce interpretation error by up to 31.8 times compared to alternative techniques. Code is available at https://github.com/trusty-ai/efficient-consistent-explanations.
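To make the tension concrete, here is a minimal, hypothetical sketch (not the paper's algorithm) of single-feature removal-based attribution on a toy model with a feature interaction. "Efficiency" asks that attributions sum to f(x) - f(baseline); the residual of that identity is one simple notion of interpretation error, and it is nonzero here precisely because of the interaction term. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def predict(x):
    # Toy black-box model with a feature interaction term.
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[0] * x[1]

def removal_attributions(x, baseline):
    # Removal-based attribution: credit feature i with the output change
    # observed when only feature i is replaced by its baseline value.
    attrs = []
    for i in range(len(x)):
        masked = x.copy()
        masked[i] = baseline[i]
        attrs.append(predict(x) - predict(masked))
    return np.array(attrs)

def efficiency_gap(x, baseline, attrs):
    # Residual of the efficiency axiom: attributions should sum to
    # f(x) - f(baseline); any leftover is interpretation error.
    return abs(attrs.sum() - (predict(x) - predict(baseline)))

x = np.array([1.0, 2.0])
baseline = np.zeros(2)
attrs = removal_attributions(x, baseline)
print(attrs)                                # [3. 3.]
print(efficiency_gap(x, baseline, attrs))   # 1.0, caused by the x[0]*x[1] term
```

On a purely additive model the gap would be exactly zero; the interaction term forces a choice between matching individual removals (consistency across contexts) and matching the total prediction difference (efficiency), which is the trade-off the theorem formalizes.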

Yifan Zhang, Haowei He, Zhiquan Tan, Yang Yuan • 2022

Related benchmarks

Task                             Dataset                       Metric                     Result   Rank
Interpretation Error Evaluation  ImageNet                      Interpretation Error       7.05     80
Interpretation                   SST-2                         L2 Norm                    0.0434   56
Interpretation error             IMDB (test)                   L2 Norm Error              0.0175   56
Interpretability Evaluation      MS-COCO                       Interpretation Error Rate  1.83     40
Interpretation error             ImageNet (test)               L2 Norm                    0.1048   40
Interpretation error             ImageNet (val)                Interpretation Error       0.1231   40
Model Interpretation             IMDB                          C3 Truthful Gap            0.174    8
Model Interpretation             ImageNet 1000 random images   C3 Truthful Gap            0.246    8

Other info

Code

https://github.com/trusty-ai/efficient-consistent-explanations