Lifting the Veil on Visual Information Flow in MLLMs: Unlocking Pathways to Faster Inference
About
Multimodal large language models (MLLMs) improve performance on vision-language tasks by integrating visual features from pre-trained vision encoders into large language models (LLMs). However, how MLLMs process and utilize visual information remains unclear. In this paper, we uncover a shift in the dominant flow of visual information: (1) in shallow layers, strong interactions occur between image tokens and instruction tokens, where most visual information is injected into instruction tokens to form cross-modal semantic representations; (2) in deeper layers, image tokens primarily interact with each other, aggregating the remaining visual information to refine semantic representations within the visual modality. Based on these insights, we propose Hierarchical Modality-Aware Pruning (HiMAP), a plug-and-play inference acceleration method that dynamically prunes image tokens at specific layers, reducing computational costs by approximately 65% without sacrificing performance. Our findings offer a new understanding of visual information processing in MLLMs and provide a state-of-the-art solution for efficient inference.
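The abstract does not spell out HiMAP's pruning rule, but the described idea (drop image tokens at a chosen layer once their information has flowed into instruction tokens, keeping only the most-attended ones) can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name `himap_style_prune`, the attention-based scoring rule, and the `keep_ratio` parameter are all assumptions for exposition.

```python
import numpy as np

def himap_style_prune(hidden_states, attn_weights, image_token_mask, keep_ratio=0.35):
    """Illustrative layer-wise image-token pruning (not the paper's exact method).

    hidden_states:    (seq_len, d) token representations at the pruning layer
    attn_weights:     (seq_len, seq_len) head-averaged attention matrix
    image_token_mask: boolean (seq_len,) array marking image tokens
    keep_ratio:       fraction of image tokens to retain
    """
    img_idx = np.where(image_token_mask)[0]
    txt_idx = np.where(~image_token_mask)[0]
    # Hypothetical importance score: total attention each image token
    # receives from instruction/text tokens at this layer.
    importance = attn_weights[txt_idx][:, img_idx].sum(axis=0)
    n_keep = max(1, int(len(img_idx) * keep_ratio))
    # Keep the n_keep highest-scoring image tokens plus all text tokens,
    # preserving the original sequence order.
    kept_img = img_idx[np.argsort(importance)[-n_keep:]]
    kept = np.sort(np.concatenate([txt_idx, kept_img]))
    return hidden_states[kept], kept
```

In an actual decoder, the shortened sequence would then be fed to all subsequent layers, which is where the quoted ~65% compute saving would come from: attention cost scales roughly quadratically with sequence length, and image tokens typically dominate the sequence.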
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Hallucination Evaluation | POPE | Accuracy | 86.5 | 935 |
| Text-based Visual Question Answering | TextVQA | Accuracy | 61.7 | 496 |
| Visual Question Answering | ScienceQA | Accuracy | 72.1 | 210 |
| Visual Question Answering | VQAv2 | Accuracy | 80.2 | 177 |
| Multi-choice Visual Question Answering | A-OKVQA | Accuracy | 81.4 | 49 |
| Image Captioning | NoCaps | CIDEr | 83.7 | 15 |
| Multimodal Evaluation | MME | Accuracy | 1.82e+3 | 12 |
| Natural QA | LLaVA-Bench Natural QA | Score | 74.5 | 6 |
| Natural QA | MM-Vet | Score | 37.4 | 6 |