
Where Does Vision Meet Language? Understanding and Refining Visual Fusion in MLLMs via Contrastive Attention

About

Multimodal Large Language Models (MLLMs) have achieved remarkable progress in vision-language understanding, yet how they internally integrate visual and textual information remains poorly understood. To bridge this gap, we perform a systematic layer-wise masking analysis across multiple architectures, revealing how visual-text fusion evolves within MLLMs. The results show that fusion emerges at a few specific layers rather than being uniformly distributed across the network, and certain models exhibit a late-stage "review" phenomenon in which visual signals are reactivated before output generation. We further analyze layer-wise attention evolution and observe persistent high-attention noise on irrelevant regions, alongside gradually increasing attention on text-aligned regions. Guided by these insights, we introduce a training-free contrastive attention framework that models the transformation between early fusion layers and final layers to highlight meaningful attention shifts. Extensive experiments across various MLLMs and benchmarks validate our analysis and demonstrate that the proposed approach improves multimodal reasoning performance. Code will be released.
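
The abstract does not spell out the exact formulation, but the general idea of contrasting attention over visual tokens between an early fusion layer and the final layer can be sketched as below. This is a minimal, illustrative sketch only: the function name, the log-space contrast, and the choice of averaging over heads and text queries are assumptions, not the paper's actual method.

```python
import torch


def contrastive_visual_attention(attn_early: torch.Tensor,
                                 attn_final: torch.Tensor,
                                 alpha: float = 1.0,
                                 eps: float = 1e-8) -> torch.Tensor:
    """Highlight attention shifts between an early fusion layer and the final layer.

    attn_early, attn_final: (num_visual_tokens,) attention mass each layer places
    on the visual tokens (e.g., averaged over heads and text query positions).
    Returns a re-normalized distribution that up-weights regions whose attention
    grows from the early layer to the final layer and down-weights persistent
    high-attention noise that is present in both layers.
    """
    # Normalize each attention map into a distribution over visual tokens.
    p_early = attn_early / (attn_early.sum() + eps)
    p_final = attn_final / (attn_final.sum() + eps)

    # Contrast in log space: emphasize tokens whose attention increased
    # relative to the early fusion layer.
    contrast = torch.log(p_final + eps) - alpha * torch.log(p_early + eps)

    # Map the contrast back to a valid distribution for downstream re-weighting.
    return torch.softmax(contrast, dim=-1)
```

Since the framework is described as training-free, a sketch like this would only re-weight existing attention at inference time; no model parameters are updated.
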

Shezheng Song, Shasha Li, Jie Yu • 2026

Related benchmarks

Task                        Dataset   Metric           Result   Rank
Visual Question Answering   VQA v2    Accuracy         80.7     1165
Visual Question Answering   VizWiz    Accuracy         62       1043
Visual Question Answering   GQA       Accuracy         71.6     374
Visual Question Answering   OKVQA     Top-1 Accuracy   56.9     283
Visual Question Answering   DocVQA    Accuracy         68.1     103
