
Mixed-R1: Unified Reward Perspective For Reasoning Capability in Multimodal Large Language Models

About

Recent works on large language models (LLMs) have demonstrated the emergence of reasoning capabilities via reinforcement learning (RL). Although recent efforts leverage group relative policy optimization (GRPO) for post-training multimodal large language models (MLLMs), they typically explore only one specific aspect, such as grounding tasks, math problems, or chart analysis; no existing work leverages multi-source MLLM tasks for stable reinforcement learning. In this work, we present a unified perspective to solve this problem. We present Mixed-R1, a unified yet straightforward framework that contains a mixed reward function design (Mixed-Reward) and a mixed post-training dataset (Mixed-45K). We first design a data engine to select high-quality examples and build the Mixed-45K post-training dataset. We then present the Mixed-Reward design, which contains reward functions for a variety of MLLM tasks. In particular, it has four reward functions: a matching reward for binary-answer or multiple-choice problems, a chart reward for chart-aware datasets, an IoU reward for grounding problems, and an open-ended reward for long-form text responses such as captions. To handle varied long-form text content, we propose a new open-ended reward named Bidirectional Max-Average Similarity (BMAS), which leverages tokenizer-embedding matching between the generated response and the ground truth. Extensive experiments show the effectiveness of our proposed method on various MLLMs, including Qwen2.5-VL and InternVL at various sizes. Our dataset and model are available at https://github.com/xushilin1/mixed-r1.
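The abstract describes BMAS as a bidirectional max-average similarity computed over token embeddings of the generated response and the ground truth. A minimal sketch of that idea, assuming the embeddings are already extracted as NumPy arrays (the function name, array shapes, and cosine-similarity choice here are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def bmas_reward(pred_emb: np.ndarray, gt_emb: np.ndarray) -> float:
    """Sketch of a Bidirectional Max-Average Similarity (BMAS) score.

    pred_emb: (n, d) token embeddings of the generated response.
    gt_emb:   (m, d) token embeddings of the ground-truth text.
    """
    # Row-normalize so dot products give cosine similarities.
    p = pred_emb / np.linalg.norm(pred_emb, axis=1, keepdims=True)
    g = gt_emb / np.linalg.norm(gt_emb, axis=1, keepdims=True)
    sim = p @ g.T  # (n, m) pairwise cosine-similarity matrix

    # Each predicted token matches its best ground-truth token, and
    # vice versa; average the per-token maxima in both directions.
    pred_to_gt = sim.max(axis=1).mean()
    gt_to_pred = sim.max(axis=0).mean()
    return float((pred_to_gt + gt_to_pred) / 2)
```

The bidirectional averaging acts like a soft precision/recall pair: the forward direction penalizes generated tokens with no counterpart in the reference, and the backward direction penalizes reference content the response failed to cover.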

Shilin Xu, Yanwei Li, Rui Yang, Tao Zhang, Yueyi Sun, Wei Chow, Linfeng Li, Hang Song, Qi Xu, Yunhai Tong, Xiangtai Li, Hao Fei • 2025

Related benchmarks

Task                              | Dataset                               | Result                | Rank
Multimodal Reasoning              | MMMU-Pro                              | Accuracy 38           | 55
Mathematical Multimodal Reasoning | MathVista                             | Accuracy 70.6         | 46
Multimodal Reasoning              | M3CoT (test)                          | Total Acc 59.9        | 31
Mathematical Multimodal Reasoning | MathVerse                             | Accuracy 40.8         | 29
Multimodal Reasoning              | MathVision                            | --                    | 23
Mathematical Multimodal Reasoning | MM-Math                               | Accuracy 35.8         | 11
Multimodal Reasoning              | MM-IQ                                 | Accuracy 25.9         | 10
Multimodal Reasoning              | Single-image benchmarks suite Overall | Overall Accuracy 43   | 8
