Revisiting Multimodal Positional Encoding in Vision-Language Models
About
Multimodal positional encoding is essential for vision-language models, yet it has received little systematic investigation. We conduct a comprehensive analysis of multimodal Rotary Positional Embedding (RoPE) by examining its two core components: position design and frequency allocation. Through extensive experiments, we identify three key guidelines: positional coherence, full frequency utilization, and preservation of textual priors, ensuring unambiguous layout, rich representation, and faithful transfer from the pre-trained LLM. Based on these insights, we propose Multi-Head RoPE (MHRoPE) and MRoPE-Interleave (MRoPE-I), two simple, plug-and-play variants that require no architectural changes. Our methods consistently outperform existing approaches across diverse benchmarks, with significant improvements in both general and fine-grained multimodal understanding. Code will be available at https://github.com/JJJYmmm/Multimodal-RoPEs.
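To make the frequency-allocation question concrete, the sketch below contrasts two ways of assigning RoPE frequency pairs to the temporal/height/width position axes: contiguous chunks per axis versus an interleaved assignment in which every axis touches the full frequency range. This is a minimal illustration under our own assumptions (function names, the 3-way split, and the cyclic assignment are ours), not the repository's implementation; note that when all three positions coincide, as for plain text tokens, both schemes reduce to standard 1-D RoPE, which is what "preservation of textual priors" asks for.

```python
import numpy as np

def inv_freqs(head_dim, base=10000.0):
    """Standard RoPE inverse frequencies, one per 2-D rotary pair."""
    half = head_dim // 2
    return base ** (-np.arange(half) / half)

def assign_axes(num_pairs, scheme):
    """Map each frequency pair to a position axis: 0=t, 1=h, 2=w."""
    if scheme == "chunked":
        # Contiguous frequency blocks per axis (MRoPE-style; split sizes are illustrative).
        sizes = [num_pairs - 2 * (num_pairs // 3), num_pairs // 3, num_pairs // 3]
        return np.repeat([0, 1, 2], sizes)
    if scheme == "interleaved":
        # Cycle t,h,w across pairs so each axis spans the whole frequency spectrum.
        return np.arange(num_pairs) % 3
    raise ValueError(f"unknown scheme: {scheme}")

def multimodal_rope(x, pos_thw, scheme="interleaved"):
    """Rotate a query/key vector x using per-axis positions (t, h, w)."""
    half = x.shape[-1] // 2
    freqs = inv_freqs(x.shape[-1])
    axes = assign_axes(half, scheme)
    theta = freqs * np.asarray(pos_thw, dtype=float)[axes]  # rotation angle per pair
    cos, sin = np.cos(theta), np.sin(theta)
    x1, x2 = x[:half], x[half:]
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos])
```

For a text token with identical positions on all axes, e.g. `multimodal_rope(x, (5, 5, 5), scheme)`, the axis assignment is irrelevant and both schemes produce the same rotation as vanilla RoPE at position 5; the schemes only diverge on visual tokens, where t, h, and w differ.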
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Video Understanding | MVBench | -- | 425 |
| Chart Question Answering | ChartQA | -- | 356 |
| Document Visual Question Answering | DocVQA | -- | 263 |
| Video Understanding | VideoMME | -- | 248 |
| Optical Character Recognition | OCRBench | Score: 74 | 232 |
| Diagram Question Answering | AI2D | -- | 232 |
| Video Understanding | VideoMME | Overall Score: 58.96 | 222 |
| Video Understanding | MLVU | Score: 61.29 | 221 |
| Visual Grounding | RefCOCO+ (val) | Accuracy: 71.8 | 212 |
| Visual Grounding | RefCOCO+ (testA) | Accuracy: 77.79 | 206 |