
Adversarial Counterfactual Visual Explanations

About

Counterfactual explanations and adversarial attacks share a related goal: flipping output labels with minimal perturbations, regardless of the perturbations' characteristics. Yet adversarial attacks cannot be used directly from a counterfactual explanation perspective, as such perturbations are perceived as noise rather than as actionable, understandable image modifications. Building on the robust learning literature, this paper proposes an elegant method for turning adversarial attacks into semantically meaningful perturbations without modifying the classifier to be explained. The proposed approach hypothesizes that Denoising Diffusion Probabilistic Models are excellent regularizers for avoiding high-frequency and out-of-distribution perturbations when generating adversarial attacks. The paper's key idea is to build the attacks through a diffusion model, which polishes them. This allows studying the target model regardless of its level of robustification. Extensive experimentation shows the advantages of the proposed counterfactual explanation approach over the current state of the art on multiple testbeds.
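The attack-then-regularize loop described above can be caricatured in a few lines. The sketch below is purely illustrative and is not the paper's algorithm: the linear "classifier", the `toy_denoise` stand-in for the DDPM regularization step, and every hyperparameter are hypothetical choices made for this example, with real denoising diffusion replaced by a naive pull back toward the original image.

```python
# Toy sketch of "adversarial attack regularized by a generative prior".
# NOT the paper's method: classifier, denoiser, and parameters are stand-ins.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(2, 16))  # toy 2-class linear classifier on 16-dim "images"

def classifier_logits(x):
    return W @ x

def toy_denoise(x, x_ref, strength=0.5):
    # Stand-in for the diffusion regularization step: instead of denoising
    # with a DDPM, naively pull the perturbed point back toward the original.
    return strength * x_ref + (1 - strength) * x

def counterfactual_attack(x0, target, steps=100, lr=0.1):
    x = x0.copy()
    for _ in range(steps):
        # Exact gradient of the logit margin for this linear toy model.
        grad = W[target] - W[1 - target]
        x = x + lr * grad          # adversarial ascent step
        x = toy_denoise(x, x0)     # "polish" the perturbation
        if np.argmax(classifier_logits(x)) == target:
            break                  # label flipped: counterfactual found
    return x

x0 = rng.normal(size=16)
src = int(np.argmax(classifier_logits(x0)))
cf = counterfactual_attack(x0, target=1 - src)
```

The interleaving is the point: each attack step is immediately projected back toward plausible data, so the accumulated perturbation stays low-frequency and in-distribution rather than degenerating into adversarial noise.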

Guillaume Jeanneret, Loïc Simon, Frédéric Jurie • 2023

Related benchmarks

Task                                               Dataset                                 FID    Rank
Visual Counterfactual Explanation (Age)            CelebA Standard                         1.45   11
Visual Counterfactual Explanation (Smile)          CelebA Standard                         1.27   11
Counterfactual Explanation                         ImageNet (Zebra - Sorrel)               67.7   11
Counterfactual Explanation                         ImageNet (Cheetah - Cougar)             70.2   11
Counterfactual Explanation                         ImageNet (Egyptian Cat - Persian Cat)   93.6   11
Counterfactual Visual Explanation                  BDD100K                                 1.02   10
Visual Counterfactual Explanation (Age)            CelebA-HQ                               5.31   9
Visual Counterfactual Explanation (Smile)          CelebA-HQ                               3.21   9
Counterfactual Visual Explanation                  BDD-OIA                                 2.09   7
Counterfactual Visual Explanation (Age attribute)  CelebA (test)                           1.45   6

Showing 10 of 13 rows.
