Bridging Perception and Reasoning: Token Reweighting for RLVR in Multimodal LLMs

About

Extending Reinforcement Learning with Verifiable Rewards (RLVR) to multimodal large language models (MLLMs) faces a fundamental challenge: their responses inherently interleave perception-related tokens, which ground visual content, with reasoning-related tokens, which construct reasoning chains. These token types instantiate distinct yet interdependent capacities -- visual grounding and symbolic reasoning -- making isolated optimization insufficient. Through token-level empirical analysis, we demonstrate that optimizing either perception-only or reasoning-only tokens consistently underperforms full optimization, underscoring their inherent coupling. To address this, we propose a plug-and-play Token-Reweighting (ToR) strategy that explicitly models this interdependence by identifying critical tokens of both types and dynamically reweighting them during RLVR training. Applied on top of existing methods (e.g., GRPO and DAPO), ToR delivers consistent performance gains across multiple multimodal reasoning benchmarks, achieving state-of-the-art performance with both accurate visual grounding and coherent reasoning.
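The abstract does not specify how critical tokens are identified or how the weights are set, so the following is only a minimal sketch of the general idea: given per-token RLVR losses, a token-type mask, and some (hypothetical) per-token criticality score, up-weight the most critical tokens of each type before aggregating the loss. The function name `tor_reweight`, the `top_frac` and `up` parameters, and the criticality scorer are all assumptions for illustration, not the paper's actual method.

```python
import numpy as np

def tor_reweight(per_token_loss, perception_mask, reasoning_mask,
                 criticality, top_frac=0.2, up=1.5):
    """Sketch of a ToR-style reweighting (hypothetical parameters).

    per_token_loss:  (T,) per-token policy-gradient loss from RLVR
    perception_mask: (T,) bool, True for perception-related tokens
    reasoning_mask:  (T,) bool, True for reasoning-related tokens
    criticality:     (T,) scores from some criticality estimator
                     (how ToR actually scores tokens is not given here)

    Within EACH token type, the top `top_frac` fraction by criticality
    is up-weighted by `up`; all other tokens keep weight 1, so critical
    perception and reasoning tokens are emphasized jointly rather than
    optimizing one type in isolation.
    """
    weights = np.ones_like(per_token_loss, dtype=float)
    for mask in (perception_mask, reasoning_mask):
        idx = np.flatnonzero(mask)
        if idx.size == 0:
            continue
        k = max(1, int(np.ceil(top_frac * idx.size)))
        top = idx[np.argsort(criticality[idx])[-k:]]  # most critical tokens of this type
        weights[top] = up
    return weights * per_token_loss
```

In an actual GRPO/DAPO training loop the reweighted per-token losses would simply replace the unweighted ones before the mean over the response is taken, which is what makes the strategy plug-and-play.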

Jinda Lu, Junkang Wu, Jinghan Li, Kexin Huang, Shuo Yang, Guoyin Wang, Jiancan Wu, Xiang Wang, Xiangnan He • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Mathematical Multimodal Reasoning | MathVerse | Accuracy 54.3 | 221 |
| Mathematical Multimodal Reasoning | MathVista | Accuracy 74.2 | 218 |
| Multimodal Math Reasoning | MathVision | Accuracy 31.6 | 183 |
| Multimodal Math Reasoning | WeMath | Accuracy 73 | 168 |
| Hallucination Evaluation | HallBench | Accuracy 73.6 | 31 |