
Where MLLMs Attend and What They Rely On: Explaining Autoregressive Token Generation

About

Multimodal large language models (MLLMs) have demonstrated remarkable capabilities in aligning visual inputs with natural language outputs. Yet the extent to which generated tokens depend on visual modalities remains poorly understood, limiting interpretability and reliability. In this work, we present EAGLE, a lightweight black-box framework for explaining autoregressive token generation in MLLMs. EAGLE attributes any selected token to compact perceptual regions while quantifying the relative influence of language priors and perceptual evidence. The framework introduces an objective function that unifies sufficiency (insight score) and indispensability (necessity score), optimized via greedy search over sparsified image regions for faithful and efficient attribution. Beyond spatial attribution, EAGLE performs modality-aware analysis that disentangles what tokens rely on, providing fine-grained interpretability of model decisions. Extensive experiments across open-source MLLMs show that EAGLE consistently outperforms existing methods in faithfulness, localization, and hallucination diagnosis, while requiring substantially less GPU memory. These results highlight its effectiveness and practicality for advancing the interpretability of MLLMs.
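The greedy search the abstract describes can be sketched as follows. This is a minimal illustration, not EAGLE's implementation: the `insight_fn` and `necessity_fn` scoring functions are placeholders (in the actual framework they would query the MLLM's token probability under masked image inputs), and the region IDs and importance values are toy data.

```python
def greedy_attribution(regions, insight_fn, necessity_fn, k=3, alpha=0.5):
    """Greedily build a compact set of image regions maximizing a combined
    objective of sufficiency (insight) and indispensability (necessity).

    regions      -- iterable of candidate region identifiers
    insight_fn   -- scores how well a region subset alone supports the token
    necessity_fn -- scores how much the token degrades without the subset
    alpha        -- trade-off between the two terms (assumed form)
    """
    selected = []
    remaining = list(regions)
    for _ in range(min(k, len(remaining))):
        best, best_score = None, float("-inf")
        for r in remaining:
            candidate = selected + [r]
            score = alpha * insight_fn(candidate) + (1 - alpha) * necessity_fn(candidate)
            if score > best_score:
                best, best_score = r, score
        selected.append(best)
        remaining.remove(best)
    return selected


# Toy demo: four sparsified regions with a hidden "importance" each;
# both scores are simple sums here purely for illustration.
importance = {0: 0.1, 1: 0.7, 2: 0.05, 3: 0.9}
insight = lambda subset: sum(importance[r] for r in subset)
necessity = lambda subset: sum(importance[r] for r in subset)

chosen = greedy_attribution(list(importance), insight, necessity, k=2)
print(chosen)  # regions picked in order of marginal gain
```

Each iteration adds the region with the highest marginal gain in the combined objective, so the search costs O(k·|regions|) model queries rather than enumerating all subsets.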

Ruoyu Chen, Xiaoqing Guo, Kangwei Liu, Siyuan Liang, Shiming Liu, Qunli Zhang, Laiyuan Wang, Hua Zhang, Xiaochun Cao • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Hallucination Interpretation | RePOPE | Insertion | 91.14 | 16 |
| Image Captioning Explainability | MS-COCO | Insertion Score | 0.8623 | 16 |
| Image Captioning | MS-COCO 2014 | Sentence Faithfulness Insertion Score | 76.65 | 12 |
| Visual Question Answering | MMVP | Sentence Faithfulness (Insertion) | 0.8052 | 12 |
