
DEX-AR: A Dynamic Explainability Method for Autoregressive Vision-Language Models

About

As Vision-Language Models (VLMs) become increasingly sophisticated and widely used, understanding their decision-making process becomes ever more crucial. Traditional explainability methods, designed for classification tasks, struggle with modern autoregressive VLMs due to their token-by-token generation process and the intricate interactions between visual and textual modalities. We present DEX-AR (Dynamic Explainability for AutoRegressive models), a novel explainability method that addresses these challenges by generating both per-token and sequence-level 2D heatmaps highlighting the image regions crucial for the model's textual responses. The method interprets autoregressive VLMs, including the varying importance of layers and generated tokens, by computing layer-wise gradients with respect to attention maps during the token-by-token generation process. DEX-AR introduces two key innovations: a dynamic head filtering mechanism that identifies attention heads focused on visual information, and a sequence-level filtering approach that aggregates per-token explanations while distinguishing between visually grounded and purely linguistic tokens. Our evaluation on ImageNet, VQAv2, and PascalVOC shows consistent improvements in both perturbation-based metrics, using a novel normalized perplexity measure, and segmentation-based metrics.
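The abstract describes gradient-weighted attention with a head filter that keeps heads attending to image tokens. A minimal numpy sketch of that idea, assuming a single generated token with cached attention maps and their gradients; the function name, shapes, and the attention-mass head-ranking criterion are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def token_relevance_sketch(attn, grads, image_token_mask, head_keep_ratio=0.5):
    """Per-token relevance over image patches (illustrative sketch).

    attn, grads: (layers, heads, num_tokens) attention weights from the
    generated token to all input tokens, and their gradients w.r.t. that
    token's logit. image_token_mask: boolean (num_tokens,) marking image
    patch tokens.
    """
    # Gradient-weighted attention, keeping only positive contributions.
    rel = np.maximum(grads * attn, 0.0)                 # (L, H, T)

    # "Dynamic head filtering" stand-in: rank heads per layer by the
    # attention mass they place on image tokens, keep the top fraction.
    vis_mass = attn[:, :, image_token_mask].sum(-1)     # (L, H)
    k = max(1, int(head_keep_ratio * attn.shape[1]))
    keep = np.argsort(-vis_mass, axis=1)[:, :k]         # (L, k)
    rel = np.take_along_axis(rel, keep[:, :, None], axis=1)

    # Average over kept heads and all layers; restrict to image patches,
    # which can then be reshaped into a 2D heatmap.
    return rel.mean(axis=(0, 1))[image_token_mask]

# Toy usage with random tensors standing in for a real VLM's internals.
rng = np.random.default_rng(0)
L, H, T, P = 4, 8, 64, 49            # layers, heads, tokens, image patches
mask = np.zeros(T, dtype=bool)
mask[:P] = True
attn = rng.random((L, H, T))
attn /= attn.sum(-1, keepdims=True)  # rows sum to 1, like softmax attention
grads = rng.standard_normal((L, H, T))
heatmap = token_relevance_sketch(attn, grads, mask)
print(heatmap.shape)                 # (49,) -> e.g. a 7x7 patch grid
```

A sequence-level map would then aggregate such per-token maps, weighting or discarding tokens that the filter deems purely linguistic.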

Walid Bousselham, Angie Boggust, Hendrik Strobelt, Hilde Kuehne • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Perturbation-based evaluation | ImageNet | Pos AUC | 18.1 | 34 |
| Perturbation-based evaluation | VQA v2 | Positive Perturbation AUC | 1.13 | 34 |
| Object Localization | PascalVOC | soft-IoU | 17.7 | 22 |
| Attribution Evaluation | ImageNet (val) | POS Score | 18.1 | 18 |
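The perturbation-based entries above follow the standard protocol of deleting the patches an explanation ranks most relevant and tracking how quickly the model's score degrades; a faithful explanation yields a low positive-perturbation AUC. A minimal sketch of that curve, assuming a generic `score_fn` and a patch-flattened image (both illustrative; the paper's actual metric uses a normalized perplexity measure not reproduced here):

```python
import numpy as np

def positive_perturbation_auc(score_fn, image, heatmap, steps=10):
    """AUC of the model score as top-ranked patches are removed (sketch).

    image: flat array of patch values. heatmap: one relevance value per
    patch. score_fn: stand-in for the model's confidence on the input.
    """
    order = np.argsort(-heatmap)          # most relevant patches first
    scores = []
    for s in range(steps + 1):
        masked = image.copy()
        masked[order[: int(s / steps * len(order))]] = 0.0
        scores.append(score_fn(masked))
    # Trapezoidal area under the curve over the fraction-removed axis [0, 1].
    scores = np.asarray(scores)
    return float(((scores[:-1] + scores[1:]) / 2).sum() / steps)

# Toy example: the "model" scores an image by its mean patch value, and
# the heatmap happens to rank the brightest patches first, so the score
# collapses quickly and the AUC is small.
img = np.linspace(0.0, 1.0, 16)
auc = positive_perturbation_auc(lambda x: float(x.mean()), img, img.copy())
```

Segmentation-based metrics such as soft-IoU instead compare the thresholded heatmap directly against ground-truth object masks.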
