
RuCL: Stratified Rubric-Based Curriculum Learning for Multimodal Large Language Model Reasoning

About

Reinforcement Learning with Verifiable Rewards (RLVR) has emerged as a prevailing paradigm for enhancing reasoning in Multimodal Large Language Models (MLLMs). However, relying solely on outcome supervision risks reward hacking, where models learn spurious reasoning patterns that nonetheless satisfy final-answer checks. While recent rubric-based approaches offer fine-grained supervision signals, they suffer from the high computational cost of instance-level rubric generation and from inefficient training dynamics caused by treating all rubrics as equally learnable. In this paper, we propose Stratified Rubric-based Curriculum Learning (RuCL), a novel framework that reformulates curriculum learning by shifting the focus from data selection to reward design. RuCL generates generalized rubrics for broad applicability and stratifies them according to the model's competence. By dynamically adjusting rubric weights during training, RuCL guides the model from mastering foundational perception to tackling advanced logical reasoning. Extensive experiments on various visual reasoning benchmarks show that RuCL yields a remarkable +7.83% average improvement over the Qwen2.5-VL-7B base model, achieving a state-of-the-art accuracy of 60.06%.
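The page does not include code, so to illustrate the core idea here is a minimal Python sketch of a stratified, curriculum-weighted rubric reward. The three-tier split, the linear interpolation of tier weights over training progress, and all names (Tier, Rubric, tier_weights, curriculum_reward) are assumptions for illustration, not the paper's implementation; in RuCL the weighting is driven by the model's measured competence rather than a fixed schedule.

```python
# Hypothetical sketch of a RuCL-style stratified rubric reward.
# Assumptions (not from the paper): rubrics fall into three difficulty
# tiers, and tier weights are linearly interpolated over training so
# that perception rubrics dominate early and reasoning rubrics late.

from dataclasses import dataclass
from enum import IntEnum

class Tier(IntEnum):
    PERCEPTION = 0    # foundational visual grounding
    INTERMEDIATE = 1  # setting up the problem
    REASONING = 2     # advanced logical reasoning

@dataclass
class Rubric:
    description: str
    tier: Tier

def tier_weights(progress: float) -> dict[Tier, float]:
    """Interpolate tier weights as training progress goes from 0 to 1."""
    assert 0.0 <= progress <= 1.0
    raw = {
        Tier.PERCEPTION: 1.0 - progress,  # decays over training
        Tier.INTERMEDIATE: 0.5,           # constant mid-level signal
        Tier.REASONING: progress,         # grows over training
    }
    total = sum(raw.values())
    return {t: w / total for t, w in raw.items()}

def curriculum_reward(rubric_scores: dict[int, float],
                      rubrics: list[Rubric],
                      progress: float) -> float:
    """Weighted sum of per-rubric scores (each assumed in [0, 1])."""
    weights = tier_weights(progress)
    return sum(weights[rubrics[i].tier] * score
               for i, score in rubric_scores.items())

# Example: identical rubric scores are rewarded differently early vs. late.
rubrics = [
    Rubric("Identifies all relevant objects in the image", Tier.PERCEPTION),
    Rubric("Sets up the correct equation", Tier.INTERMEDIATE),
    Rubric("Chains deductions to reach the final answer", Tier.REASONING),
]
scores = {0: 1.0, 1: 0.5, 2: 0.0}
print(curriculum_reward(scores, rubrics, progress=0.1))  # perception-heavy
print(curriculum_reward(scores, rubrics, progress=0.9))  # reasoning-heavy
```

Under this toy schedule, a response that only nails perception scores well early in training but poorly late, which is the intended curriculum effect: the reward signal shifts toward the rubrics the model has not yet mastered.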

Yukun Chen, Jiaming Li, Longze Chen, Ze Gong, Jingpeng Li, Zhen Qin, Hengyu Chang, Ancheng Xu, Zhihao Yang, Hamid Alinejad-Rokny, Qiang Qu, Bo Zheng, Min Yang · 2026

Related benchmarks

Task                       Dataset                Metric     Result   Rank
Mathematical Reasoning     MathVista              Accuracy   74.1     257
Mathematical Reasoning     WeMath                 Accuracy   71.49    161
Mathematical Reasoning     MathVision             Accuracy   28.88    144
Mathematical Reasoning     MathVerse              Accuracy   54.14    109
Visual Logical Reasoning   LogicVista             Accuracy   49.66    70
General Visual Reasoning   MMMU                   Accuracy   56.67    14
General Visual Reasoning   Super-CLEVR Counting   Accuracy   85.5     12
