
Revisiting Multimodal Positional Encoding in Vision-Language Models

About

Multimodal position encoding is essential for vision-language models, yet it has received little systematic investigation. We conduct a comprehensive analysis of multimodal Rotary Positional Embedding (RoPE) by examining its two core components: position design and frequency allocation. Through extensive experiments, we identify three key guidelines: positional coherence, full frequency utilization, and preservation of textual priors, ensuring unambiguous layout, rich representation, and faithful transfer from the pre-trained LLM. Based on these insights, we propose Multi-Head RoPE (MHRoPE) and MRoPE-Interleave (MRoPE-I), two simple and plug-and-play variants that require no architectural changes. Our methods consistently outperform existing approaches across diverse benchmarks, with significant improvements in both general and fine-grained multimodal understanding. Code will be available at https://github.com/JJJYmmm/Multimodal-RoPEs.
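The abstract does not spell out how MRoPE-Interleave allocates frequencies, so the following NumPy sketch is only a plausible illustration of the general idea: instead of giving the temporal, height, and width axes contiguous chunks of the rotary frequency spectrum, the axes are assigned frequencies round-robin, so each axis covers both high and low frequencies ("full frequency utilization"). The function names (`rope_freqs`, `mrope_interleave_angles`) and the round-robin scheme are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def rope_freqs(head_dim, base=10000.0):
    # Standard RoPE inverse frequencies, one per rotary pair (head_dim // 2 of them).
    return base ** (-np.arange(0, head_dim, 2) / head_dim)

def mrope_interleave_angles(t, h, w, head_dim):
    """Hypothetical interleaved frequency allocation for a token at
    (temporal, height, width) position (t, h, w).

    Each rotary pair is assigned to one of the three axes round-robin,
    so every axis spans the full frequency range rather than one
    contiguous band of it.
    """
    freqs = rope_freqs(head_dim)           # shape: (head_dim // 2,)
    axis = np.arange(len(freqs)) % 3       # 0 -> t, 1 -> h, 2 -> w
    pos = np.empty_like(freqs)
    pos[axis == 0] = t
    pos[axis == 1] = h
    pos[axis == 2] = w
    return pos * freqs                     # rotation angle per rotary pair
```

One property worth noting: for a text token, all three coordinates collapse to the same 1-D index (t == h == w == p), and the angles reduce to exactly `p * rope_freqs(head_dim)`, i.e. standard 1-D RoPE. This is one way to read the "preservation of textual priors" guideline: text positions behave identically to the pre-trained LLM's positional encoding.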

Jie Huang, Xuejing Liu, Sibo Song, Ruibing Hou, Hong Chang, Junyang Lin, Shuai Bai• 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Video Understanding | MVBench | – | – | 425
Chart Question Answering | ChartQA | – | – | 356
Document Visual Question Answering | DocVQA | – | – | 263
Video Understanding | VideoMME | – | – | 248
Optical Character Recognition | OCRBench | Score | 74 | 232
Diagram Question Answering | AI2D | – | – | 232
Video Understanding | VideoMME | Overall Score | 58.96 | 222
Video Understanding | MLVU | Score | 61.29 | 221
Visual Grounding | RefCOCO+ (val) | Accuracy | 71.8 | 212
Visual Grounding | RefCOCO+ (testA) | Accuracy | 77.79 | 206

Showing 10 of 52 rows.
