
Beyond Unimodal Shortcuts: MLLMs as Cross-Modal Reasoners for Grounded Named Entity Recognition

About

Grounded Multimodal Named Entity Recognition (GMNER) aims to extract text-based entities, assign them semantic categories, and ground them to corresponding visual regions. In this work, we explore the potential of Multimodal Large Language Models (MLLMs) to perform GMNER in an end-to-end manner, moving beyond their typical role as auxiliary tools within cascaded pipelines. Crucially, our investigation reveals a fundamental challenge: MLLMs exhibit modality bias, including visual bias and textual bias, which stems from their tendency to take unimodal shortcuts rather than perform rigorous cross-modal verification. To address this, we propose Modality-aware Consistency Reasoning (MCR), which enforces structured cross-modal reasoning through Multi-style Reasoning Schema Injection (MRSI) and Constraint-guided Verifiable Optimization (CVO). MRSI transforms abstract constraints into executable reasoning chains, while CVO empowers the model to dynamically align its reasoning trajectories with Group Relative Policy Optimization (GRPO). Experiments on GMNER and visual grounding tasks demonstrate that MCR effectively mitigates modality bias and achieves superior performance compared to existing baselines.
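To make the task concrete, the sketch below illustrates the kind of input/output structure a GMNER system produces. It is not code from the paper; the field names, example post, and coordinates are hypothetical.

```python
# Illustrative sketch of GMNER predictions (hypothetical format, not the
# paper's implementation). Given a social-media post and its image, a GMNER
# system extracts entity spans, assigns each a semantic category, and grounds
# groundable entities to image regions (bounding boxes).

def format_gmner_output(entities):
    """Render entity predictions as (span, category, region) triples.

    A region of None marks an entity with no corresponding visual region --
    exactly the case where a text-only shortcut can mislead a model.
    """
    return [(e["span"], e["category"], e.get("box")) for e in entities]

# Hypothetical prediction for the post "Messi celebrates with Barcelona fans":
predictions = [
    {"span": "Messi", "category": "PER", "box": (48, 30, 210, 320)},  # grounded
    {"span": "Barcelona", "category": "ORG", "box": None},            # ungroundable
]

print(format_gmner_output(predictions))
```

Evaluating such triples requires both the entity/category pair and the region (or its absence) to be correct, which is why modality shortcuts on either side hurt the final F1.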

Jinlong Ma, Yu Zhang, Xuefeng Bai, Kehai Chen, Yuwei Wang, Zeming Liu, Jun Yu, Min Zhang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Named Entity Recognition | Twitter-GMNER | F1 Score | 82.8 | 125 |
| Grounded Multimodal Named Entity Recognition | Twitter-GMNER | F1 Score | 73.4 | 75 |
| Grounded Referring Expression Comprehension | GREC (testA) | N-acc | 75.7 | 6 |
| Grounded Referring Expression Comprehension | GREC (testB) | N-acc | 71.9 | 6 |
| Multimodal Named Entity Recognition | MNER-MI | Precision | 84.7 | 6 |
