
Diffusion Visual Counterfactual Explanations

About

Visual Counterfactual Explanations (VCEs) are an important tool for understanding the decisions of an image classifier. They are 'small' but 'realistic' semantic changes to the image that alter the classifier's decision. Current approaches for generating VCEs are restricted to adversarially robust models and often contain non-realistic artefacts, or are limited to image classification problems with few classes. In this paper, we overcome this by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers via a diffusion process. Two modifications to the diffusion process are key for our DVCEs: first, an adaptive parameterization, whose hyperparameters generalize across images and models, together with distance regularization and a late start of the diffusion process, allows us to generate images with minimal semantic changes relative to the originals but a different classification. Second, our cone regularization via an adversarially robust model ensures that the diffusion process does not converge to trivial non-semantic changes, but instead produces realistic images of the target class which achieve high confidence from the classifier.
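The cone regularization described above can be pictured as constraining the classifier's guidance gradient to lie within a cone of fixed angle around the adversarially robust model's gradient. A minimal geometric sketch of such a projection is below; the function name, the fixed cone angle, and the use of plain NumPy vectors are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def cone_project(g, r, alpha_deg=30.0):
    """Constrain gradient g to a cone of half-angle alpha_deg around
    the robust-model gradient r (illustrative sketch, not the paper's code).

    If g already lies inside the cone it is returned unchanged; otherwise
    it is rotated onto the cone boundary while keeping its magnitude.
    """
    r_unit = r / np.linalg.norm(r)
    g_norm = np.linalg.norm(g)
    cos_angle = np.dot(g, r_unit) / g_norm
    alpha = np.deg2rad(alpha_deg)
    if cos_angle >= np.cos(alpha):
        return g  # already inside the cone: no change

    # Decompose g into components parallel and orthogonal to r.
    g_par = np.dot(g, r_unit) * r_unit
    g_orth = g - g_par
    orth_unit = g_orth / np.linalg.norm(g_orth)

    # Rotate onto the cone boundary, preserving the original magnitude.
    boundary_dir = np.cos(alpha) * r_unit + np.sin(alpha) * orth_unit
    return g_norm * boundary_dir

# Example: a gradient orthogonal to r gets pulled onto the 30-degree cone.
g = np.array([0.0, 1.0])
r = np.array([1.0, 0.0])
projected = cone_project(g, r, alpha_deg=30.0)
```

In the DVCE setting, a projection of this kind would be applied to the (non-robust) classifier's gradient at each guided diffusion step, so that guidance directions far from the robust model's semantics are filtered out while aligned directions pass through untouched.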

Maximilian Augustin, Valentyn Boreiko, Francesco Croce, Matthias Hein • 2022

Related benchmarks

Task                         Dataset                                  Result      Rank
Counterfactual Explanation   ImageNet (Zebra - Sorrel)                FID 33.1    11
Counterfactual Explanation   ImageNet (Cheetah - Cougar)              FID 46.9    11
Counterfactual Explanation   ImageNet (Egyptian Cat - Persian Cat)    FID 46.6    11

Other info

Code
