
What Helps—and What Hurts: Bidirectional Explanations for Vision Transformers

About

Vision Transformers (ViTs) achieve strong performance in visual recognition, yet their decision-making remains difficult to interpret. We propose BiCAM, a bidirectional class activation mapping method that captures both supportive (positive) and suppressive (negative) contributions to model predictions. Unlike prior CAM-based approaches that discard negative signals, BiCAM preserves signed attributions to produce more complete and contrastive explanations. BiCAM further introduces a Positive-to-Negative Ratio (PNR) that summarizes attribution balance and enables lightweight detection of adversarial examples without retraining. Across ImageNet, VOC, and COCO, BiCAM improves localization and faithfulness while remaining computationally efficient. It generalizes to multiple ViT variants, including DeiT and Swin. These results suggest the importance of modeling both supportive and suppressive evidence for interpreting transformer-based vision models.
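The abstract describes the Positive-to-Negative Ratio (PNR) only at a high level: a scalar summarizing the balance between supportive and suppressive attributions. A minimal sketch, assuming PNR is the ratio of total positive to total negative attribution mass in a signed map (the function name, the `eps` stabilizer, and the toy map are illustrative, not the paper's exact formulation):

```python
import numpy as np

def pnr(attribution: np.ndarray, eps: float = 1e-8) -> float:
    """Positive-to-Negative Ratio of a signed attribution map.

    Assumed definition: total positive attribution mass divided by
    total (absolute) negative attribution mass. `eps` guards against
    division by zero when no negative attributions exist.
    """
    pos = attribution[attribution > 0].sum()
    neg = -attribution[attribution < 0].sum()
    return float(pos / (neg + eps))

# Toy signed map: supportive (+) and suppressive (-) pixel attributions.
attr = np.array([[0.6, -0.1],
                 [0.3, -0.2]])
print(round(pnr(attr), 2))  # 0.9 positive vs 0.3 negative -> 3.0
```

Under this reading, adversarial detection would amount to thresholding PNR: a perturbed input that shifts attribution mass toward suppressive evidence drops the ratio, which can be flagged without retraining the model.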

Qin Su, Tie Luo • 2026

Related benchmarks

Task                      Dataset                       Metric          Result  Rank
Localization              VOC 2012                      Pixel Accuracy  85.59   5
Localization              COCO 2017                     Pixel Accuracy  87.07   5
Attribution Faithfulness  ImageNet ImageNette 10-class  LIF             0.5478  4
Attribution Faithfulness  VOC 2-class                   LIF             0.9313  4
Attribution Faithfulness  COCO 2-class                  LIF             0.9407  4
Localization              ImageNet-1K                   Pixel Accuracy  62.53   4
