Interpretable Counterfactual Explanations Guided by Prototypes

About

We propose a fast, model-agnostic method for finding interpretable counterfactual explanations of classifier predictions by using class prototypes. We show that class prototypes, obtained using either an encoder or class-specific k-d trees, significantly speed up the search for counterfactual instances and result in more interpretable explanations. We introduce two novel metrics to quantitatively evaluate local interpretability at the instance level. We use these metrics to illustrate the effectiveness of our method on an image dataset (MNIST) and a tabular dataset (Breast Cancer Wisconsin (Diagnostic)). The method also eliminates the computational bottleneck that arises from numerical gradient evaluation for black-box models.

Arnaud Van Looveren, Janis Klaise • 2019
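
As an illustration of the k-d tree variant described in the abstract, the sketch below builds one k-d tree per class and takes the mean of the k nearest same-class training instances as the class prototype that guides the counterfactual search. The helper names and the single "nudge" step are assumptions made for illustration only; the paper's actual method optimizes a composite loss (prediction, sparsity, autoencoder, and prototype terms), which this sketch does not reproduce.

```python
# Minimal sketch of prototype guidance via class-specific k-d trees.
# Hypothetical helper names; uses NumPy and scikit-learn, not the authors' code.
import numpy as np
from sklearn.neighbors import KDTree


def build_class_kdtrees(X_train, y_train):
    """Build one k-d tree per class from the training data."""
    return {c: KDTree(X_train[y_train == c]) for c in np.unique(y_train)}


def class_prototype(x, target_class, trees, X_train, y_train, k=5):
    """Prototype of the target class: mean of the k nearest training
    instances of that class to the query point x."""
    X_c = X_train[y_train == target_class]
    k = min(k, len(X_c))
    _, idx = trees[target_class].query(x.reshape(1, -1), k=k)
    return X_c[idx[0]].mean(axis=0)


def counterfactual_step(x, proto, theta=0.1):
    """One illustrative update: nudge x toward the class prototype.
    The full method instead minimizes a loss combining prediction,
    sparsity (L1/L2), autoencoder, and prototype terms."""
    return x + theta * (proto - x)
```

In such a sketch, a counterfactual for x would be obtained by picking a target class, computing its prototype, and iterating the update until the classifier's prediction flips; the prototype term is what the paper credits with making the search faster and keeping the counterfactual close to the target-class data distribution.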

Related benchmarks

Task | Dataset | Result | Rank
Counterfactual Explanation Generation | Wine | Validity: 1 | 9
Counterfactual Explanations | moons | Coverage: 100 | 6
Counterfactual Explanations | Law | Coverage: 100 | 6
Counterfactual Explanations | Audit | Coverage: 1 | 6
Counterfactual Explanations | HELOC | Coverage: 100 | 6
Counterfactual Generation | FMCW radar dataset diagonal gestures | Interpretability Score: 2.1 | 6
Counterfactual Generation | FMCW radar dataset | Proximity: 1.8 | 6
Counterfactual Explanation Generation | Blobs | Coverage: 98 | 5
Counterfactual Explanation Generation | Digits | Coverage: 96 | 5
Counterfactual Explanations | Credit-g | Coverage: 100 | 4

(Showing 10 of 12 rows.)
