
When Models Judge Themselves: Unsupervised Self-Evolution for Multimodal Reasoning

About

Recent progress in multimodal large language models has led to strong performance on reasoning tasks, but these improvements largely rely on high-quality annotated data or teacher-model distillation, both of which are costly and difficult to scale. To address this, we propose an unsupervised self-evolution training framework for multimodal reasoning that achieves stable performance improvements without using human-annotated answers or external reward models. For each input, we sample multiple reasoning trajectories and jointly model their within-group structure. We use the Actor's self-consistency signal as a training prior, and introduce a bounded Judge-based modulation to continuously reweight trajectories of different quality. We further model the modulated scores as a group-level distribution and convert absolute scores into relative advantages within each group, enabling more robust policy updates. Trained with Group Relative Policy Optimization (GRPO) on unlabeled data, our method consistently improves reasoning performance and generalization on five mathematical reasoning benchmarks, offering a scalable path toward self-evolving multimodal models. The code is available at https://github.com/OPPO-Mente-Lab/LLM-Self-Judge.
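The scoring pipeline the abstract describes — a self-consistency prior, a bounded Judge modulation, and within-group normalization into relative advantages — can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the function name, the `tanh` bounding, and the `beta` modulation strength are all assumptions chosen to match the stated properties (bounded reweighting, group-level z-scoring as in GRPO).

```python
import numpy as np

def group_relative_advantages(consistency_prior, judge_scores, beta=0.5):
    """Hypothetical sketch of the abstract's scoring scheme.

    consistency_prior: per-trajectory self-consistency prior in [0, 1]
        (e.g. the fraction of sampled trajectories in the group that
        agree with this trajectory's final answer).
    judge_scores: raw per-trajectory Judge scores (any real values).
    beta: assumed hyperparameter controlling modulation strength.
    """
    prior = np.asarray(consistency_prior, dtype=float)
    judge = np.asarray(judge_scores, dtype=float)

    # Bound the Judge signal to (-1, 1) so it reweights the prior
    # rather than replacing it (the "bounded modulation" in the text).
    bounded = np.tanh(judge)

    # Modulated score: the consistency prior reweighted by the Judge.
    scores = prior * (1.0 + beta * bounded)

    # Treat modulated scores as a group-level distribution and convert
    # absolute scores into relative advantages via within-group z-scores,
    # the normalization used by GRPO.
    return (scores - scores.mean()) / (scores.std() + 1e-8)
```

For a group of four sampled trajectories, a trajectory with both high consistency and a positive Judge score receives the largest advantage, while the z-scoring guarantees the group's advantages are zero-mean, so updates push probability mass between trajectories rather than uniformly up or down.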

Zhengxian Wu, Kai Shi, Chuanrui Zhang, Zirui Liao, Jun Yang, Ni Yang, Qiuying Peng, Luyuan Zhang, Hangrui Xu, Tianhuang Su, Zhenyu Yang, Haonan Lu, Haoqian Wang• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Multimodal Reasoning | MathVerse | Accuracy | 46.8 | 221
Multimodal Math Reasoning | MathVision | Accuracy | 30.9 | 183
Multimodal Math Reasoning | WeMath | Accuracy | 38.9 | 168
Multimodal Mathematical Reasoning | LogicVista | Accuracy | 49 | 34
Multimodal Mathematical Reasoning | DynaMath | Accuracy (DynaMath) | 24.2 | 28

Other info

GitHub
