
Attention Guided CAM: Visual Explanations of Vision Transformer Guided by Self-Attention

About

Vision Transformer (ViT) is one of the most widely used models in computer vision thanks to its strong performance on a variety of tasks. To fully utilize ViT-based architectures in applications, visualization methods with decent localization performance are necessary, yet the methods employed in CNN-based models are not directly applicable to ViT due to its unique structure. In this work, we propose an attention-guided visualization method for ViT that provides a high-level semantic explanation of its decisions. Our method selectively aggregates the gradients propagated directly from the classification output to each self-attention layer, collecting the contribution of image features extracted from each location of the input image. These gradients are further guided by the normalized self-attention scores, i.e., the pairwise patch-correlation scores, which supplement the gradients with the patch-level context information efficiently captured by the self-attention mechanism. This approach yields elaborate high-level semantic explanations with strong localization performance using only class labels. As a result, our method outperforms the previous leading explainability methods for ViT on the weakly-supervised localization task and shows a strong ability to capture full instances of the target-class object. At the same time, it provides a visualization that faithfully explains the model, as demonstrated in a perturbation comparison test.
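The core computation described above — aggregating attention gradients guided by normalized self-attention scores into a patch-level relevance map — can be sketched roughly as follows. This is a minimal NumPy illustration of the general idea, not the authors' implementation; the function name, normalization choices, and ReLU filtering of contributions are assumptions for the sketch.

```python
import numpy as np

def attention_guided_cam(attn_scores, attn_grads, num_patches_side):
    """Hypothetical sketch: aggregate per-layer attention gradients,
    guided by row-normalized self-attention scores (pairwise patch
    correlations), into a 2-D relevance map for the CLS token.

    attn_scores: list of (heads, N, N) attention matrices, one per layer,
                 where N = 1 + num_patches (CLS token first).
    attn_grads:  list of (heads, N, N) gradients of the class logit
                 w.r.t. the corresponding attention matrices.
    """
    agg = None
    for A, dA in zip(attn_scores, attn_grads):
        # Normalize each attention row so it acts as a patch-correlation score.
        A_norm = A / (A.sum(axis=-1, keepdims=True) + 1e-8)
        # Guide the gradients by the normalized attention; keep only
        # positive contributions, then average over heads.
        guided = np.maximum(dA * A_norm, 0.0).mean(axis=0)
        agg = guided if agg is None else agg + guided
    # Relevance of each image patch to the CLS token, as a 2-D map in [0, 1].
    cam = agg[0, 1:].reshape(num_patches_side, num_patches_side)
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)
    return cam
```

In a real ViT pipeline the attention matrices and their gradients would come from forward/backward hooks on each attention block; the map is then upsampled to the input resolution for visualization.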

Saebom Leem, Hyunseok Seo · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Explanation | ImageNet | AUC-D | 38.52 | 22 |
| Visual Explanation | Food-101 | AUC-D | 43.57 | 10 |
| Visual Explanation | Caltech-UCSD Birds-200 2011 | AUC-D | 75.24 | 10 |
| Localization | VOC 2012 | Pixel Accuracy | 85.61 | 5 |
| Localization | COCO 2017 | Pixel Accuracy | 87.4 | 5 |
| Localization | ImageNet-1K | Pixel Accuracy | 73.41 | 4 |
| Attribution Faithfulness | ImageNet ImageNette 10-class | LIF | 0.5298 | 4 |
| Attribution Faithfulness | VOC 2-class | LIF | 0.928 | 4 |
| Attribution Faithfulness | COCO 2-class | LIF | 0.9376 | 4 |
