
LeGrad: An Explainability Method for Vision Transformers via Feature Formation Sensitivity

About

Vision Transformers (ViTs), with their ability to model long-range dependencies through self-attention mechanisms, have become a standard architecture in computer vision. However, the interpretability of these models remains a challenge. To address this, we propose LeGrad, an explainability method specifically designed for ViTs. LeGrad computes the gradient with respect to the attention maps of ViT layers, treating the gradient itself as the explainability signal. We aggregate the signal over all layers, combining the activations of the last as well as intermediate tokens to produce the merged explainability map. This makes LeGrad a conceptually simple, easy-to-implement tool for enhancing the transparency of ViTs. We evaluate LeGrad in challenging segmentation, perturbation, and open-vocabulary settings, showcasing its versatility compared to other state-of-the-art explainability methods and demonstrating its superior spatial fidelity and robustness to perturbations. A demo and the code are available at https://github.com/WalBouss/LeGrad.
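The abstract describes two steps: taking the gradient with respect to each layer's attention map as the signal, and merging those per-layer signals into one explainability map. The following is a minimal NumPy sketch of the aggregation step only; the function name, shapes, and the specific choices (keeping positive gradients, averaging over heads and layers, min-max normalisation) are illustrative assumptions, not the official implementation from the repository above.

```python
import numpy as np

def legrad_merge(layer_grads):
    """Merge per-layer attention-gradient maps into one explainability map.

    layer_grads: list of arrays of shape (heads, tokens, tokens) --
    hypothetical gradients of a model's score w.r.t. each layer's
    attention map (shapes and names are illustrative assumptions).
    """
    per_layer_maps = []
    for g in layer_grads:
        g = np.maximum(g, 0.0)                  # keep positive influence only (assumption)
        head_avg = g.mean(axis=0)               # average over attention heads
        per_layer_maps.append(head_avg.mean(axis=0))  # aggregate over query tokens
    merged = np.mean(per_layer_maps, axis=0)    # merge across layers
    # min-max normalise to [0, 1] for visualisation
    merged = (merged - merged.min()) / (merged.max() - merged.min() + 1e-8)
    return merged

# Toy usage with ViT-B/16-like shapes: 12 layers, 12 heads, 197 tokens
rng = np.random.default_rng(0)
grads = [rng.normal(size=(12, 197, 197)) for _ in range(12)]
heatmap = legrad_merge(grads)  # one saliency value per token
```

In practice the per-token map would be reshaped to the patch grid and upsampled to image resolution for visualisation.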

Walid Bousselham, Angie Boggust, Sofian Chaybouti, Hendrik Strobelt, Hilde Kuehne• 2024

Related benchmarks

| Task                   | Dataset                                            | Result                 | Rank |
|------------------------|----------------------------------------------------|------------------------|------|
| Localization           | ImageNet-1k (val)                                  | -                      | 79   |
| Attribution Evaluation | ImageNet (val)                                     | POS Score: 0.2457      | 18   |
| Visual Attribution     | ImageNet Predicted Class target ILSVRC 2012 (val)  | Deletion Score: 0.1666 | 10   |
| Visual Attribution     | ImageNet (val)                                     | Deletion Score: 0.1792 | 10   |
| Visual Attribution     | ImageNet ILSVRC-2012 (val)                         | Deletion Score: 0.1552 | 10   |
