What Helps---and What Hurts: Bidirectional Explanations for Vision Transformers
About
Vision Transformers (ViTs) achieve strong performance in visual recognition, yet their decision-making remains difficult to interpret. We propose BiCAM, a bidirectional class activation mapping method that captures both supportive (positive) and suppressive (negative) contributions to model predictions. Unlike prior CAM-based approaches that discard negative signals, BiCAM preserves signed attributions to produce more complete and contrastive explanations. BiCAM further introduces a Positive-to-Negative Ratio (PNR) that summarizes attribution balance and enables lightweight detection of adversarial examples without retraining. Across ImageNet, VOC, and COCO, BiCAM improves localization and faithfulness while remaining computationally efficient. It generalizes to multiple ViT variants, including DeiT and Swin. These results suggest the importance of modeling both supportive and suppressive evidence for interpreting transformer-based vision models.
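The Positive-to-Negative Ratio can be illustrated with a small sketch. The paper's exact definition is not reproduced here; the version below is one plausible reading, where PNR is the total positive attribution mass divided by the total negative mass of a signed attribution map (the function name `pnr` and the thresholding idea are assumptions for illustration):

```python
import numpy as np

def pnr(attribution: np.ndarray, eps: float = 1e-8) -> float:
    """Positive-to-Negative Ratio over a signed attribution map.

    Hypothetical formulation (the paper's exact definition may differ):
    total supportive (positive) mass divided by total suppressive
    (negative) mass, with eps guarding against division by zero.
    """
    pos = attribution[attribution > 0].sum()
    neg = -attribution[attribution < 0].sum()
    return float(pos / (neg + eps))

# A signed attribution map keeps both supportive and suppressive
# evidence; adversarial inputs are expected to shift this balance,
# so a simple threshold on PNR can flag them without retraining.
attr = np.array([[0.5, -0.1],
                 [0.2, -0.2]])
print(pnr(attr))  # 0.7 positive mass / 0.3 negative mass ≈ 2.33
```

Because PNR is a scalar summary, a detector only needs a threshold calibrated on clean data, which is what makes the adversarial-example check lightweight.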
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Localization | VOC 2012 | Pixel Accuracy | 85.59 | 5 |
| Localization | COCO 2017 | Pixel Accuracy | 87.07 | 5 |
| Attribution Faithfulness | ImageNette (10-class ImageNet subset) | LIF | 0.5478 | 4 |
| Attribution Faithfulness | VOC 2-class | LIF | 0.9313 | 4 |
| Attribution Faithfulness | COCO 2-class | LIF | 0.9407 | 4 |
| Localization | ImageNet-1K | Pixel Accuracy | 62.53 | 4 |