
Latent Diffusion Counterfactual Explanations

About

Counterfactual explanations have emerged as a promising method for elucidating the behavior of opaque black-box models. Recently, several works leveraged pixel-space diffusion models for counterfactual generation. To handle noisy, adversarial gradients during counterfactual generation -- causing unrealistic artifacts or mere adversarial perturbations -- they required either auxiliary adversarially robust models or computationally intensive guidance schemes. However, such requirements limit their applicability, e.g., in scenarios with restricted access to the model's training data. To address these limitations, we introduce Latent Diffusion Counterfactual Explanations (LDCE). LDCE harnesses the capabilities of recent class- or text-conditional foundation latent diffusion models to expedite counterfactual generation and focus on the important, semantic parts of the data. Furthermore, we propose a novel consensus guidance mechanism to filter out noisy, adversarial gradients that are misaligned with the diffusion model's implicit classifier. We demonstrate the versatility of LDCE across a wide spectrum of models trained on diverse datasets with different learning paradigms. Finally, we showcase how LDCE can provide insights into model errors, enhancing our understanding of black-box model behavior.
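The consensus guidance mechanism described above filters classifier gradients against the diffusion model's implicit classifier before they steer generation. The paper's exact rule is not given here; below is a minimal, hypothetical sketch of one plausible consensus filter (elementwise sign agreement between the external classifier's gradient and the implicit classifier's gradient), using illustrative names like `consensus_filter`:

```python
import numpy as np

def consensus_filter(g_classifier, g_implicit):
    """Keep only classifier-gradient components whose sign agrees with
    the diffusion model's implicit classifier gradient; zero the rest.
    NOTE: hypothetical illustration, not the paper's exact mechanism."""
    agree = np.sign(g_classifier) == np.sign(g_implicit)
    return np.where(agree, g_classifier, 0.0)

# Toy gradients: component 2 disagrees in sign and is filtered out.
g_cls = np.array([0.8, -0.5, 0.3, -0.1])
g_imp = np.array([0.6,  0.4, 0.2, -0.3])
filtered = consensus_filter(g_cls, g_imp)
# filtered -> [0.8, 0.0, 0.3, -0.1]
```

The intuition is that gradient directions the implicit classifier does not corroborate are likely noisy or adversarial, so suppressing them keeps edits semantic rather than adversarial.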

Karim Farid, Simon Schrodi, Max Argus, Thomas Brox • 2023

Related benchmarks

Task | Dataset | Result | Rank
Counterfactual Explanation | ImageNet (Zebra - Sorrel) | FID 82.4 | 11
Counterfactual Explanation | ImageNet (Cheetah - Cougar) | FID 71 | 11
Counterfactual Explanation | ImageNet (Egyptian Cat - Persian Cat) | FID 102.7 | 11
Visual Counterfactual Explanation (Age) | CelebA-HQ | FID 14.2 | 9
Visual Counterfactual Explanation (Smile) | CelebA-HQ | FID 13.6 | 9
