
Lifting the Veil on Visual Information Flow in MLLMs: Unlocking Pathways to Faster Inference

About

Multimodal large language models (MLLMs) improve performance on vision-language tasks by integrating visual features from pre-trained vision encoders into large language models (LLMs). However, how MLLMs process and utilize visual information remains unclear. In this paper, we uncover a shift in the dominant flow of visual information: (1) in shallow layers, strong interactions are observed between image tokens and instruction tokens, where most visual information is injected into instruction tokens to form cross-modal semantic representations; (2) in deeper layers, image tokens primarily interact with each other, aggregating the remaining visual information to optimize semantic representations within the visual modality. Based on these insights, we propose Hierarchical Modality-Aware Pruning (HiMAP), a plug-and-play inference acceleration method that dynamically prunes image tokens at specific layers, reducing computational costs by approximately 65% without sacrificing performance. Our findings offer a new understanding of visual information processing in MLLMs and provide a state-of-the-art solution for efficient inference.
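The paper's exact pruning rule is not given in this abstract, but the core idea of attention-guided image-token pruning can be sketched roughly as follows. This is a minimal illustration, not HiMAP itself: the scoring heuristic (attention that each image token receives from instruction tokens), the function name, and the keep ratio are all assumptions for the example.

```python
import numpy as np

def prune_image_tokens(attn, image_idx, keep_ratio):
    """Illustrative pruning stage: score each image token by the average
    attention it receives from non-image (instruction) tokens, then keep
    the top `keep_ratio` fraction.

    attn      : (seq, seq) attention map averaged over heads; row i gives
                token i's attention distribution over all tokens.
    image_idx : indices of image tokens in the sequence.
    """
    text_idx = np.setdiff1d(np.arange(attn.shape[0]), image_idx)
    # Attention flowing from instruction tokens into each image token.
    scores = attn[np.ix_(text_idx, image_idx)].mean(axis=0)
    k = max(1, int(len(image_idx) * keep_ratio))
    # Keep the k highest-scoring image tokens, preserving sequence order.
    keep = np.sort(image_idx[np.argsort(scores)[::-1][:k]])
    return keep

# Toy example: 10 tokens, of which tokens 0-5 are image tokens.
rng = np.random.default_rng(0)
attn = rng.random((10, 10))
attn /= attn.sum(axis=1, keepdims=True)  # row-normalize like softmax output
image_idx = np.arange(6)
kept = prune_image_tokens(attn, image_idx, keep_ratio=0.5)
print(len(kept))  # half of the image tokens survive this stage
```

In a layer-wise scheme like the one the abstract describes, such a stage would run at a shallow layer (after most visual information has flowed into instruction tokens) and again, more aggressively, at a deeper layer where image tokens mainly interact among themselves.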

Hao Yin, Guangzong Si, Zilei Wang • 2025

Related benchmarks

Task                                    Dataset                 Metric    Result  Rank
Object Hallucination Evaluation         POPE                    Accuracy    86.5   935
Text-based Visual Question Answering    TextVQA                 Accuracy    61.7   496
Visual Question Answering               ScienceQA               Accuracy    72.1   210
Visual Question Answering               VQAv2                   Accuracy    80.2   177
Multi-choice Visual Question Answering  A-OKVQA                 Accuracy    81.4    49
Image Captioning                        NoCaps                  CIDEr       83.7    15
Multimodal Evaluation                   MME                     Accuracy    1820    12
Natural QA                              LLaVA-Bench Natural QA  Score       74.5     6
Natural QA                              MM-Vet                  Score       37.4     6

Other info

Code
