InternLM-XComposer2.5-Reward: A Simple Yet Effective Multi-Modal Reward Model

About

Despite the promising performance of Large Vision Language Models (LVLMs) in visual understanding, they occasionally generate incorrect outputs. While reward models (RMs) used with reinforcement learning or test-time scaling can improve generation quality, a critical gap remains: publicly available multi-modal RMs for LVLMs are scarce, and the implementation details of proprietary models are often unclear. We bridge this gap with InternLM-XComposer2.5-Reward (IXC-2.5-Reward), a simple yet effective multi-modal reward model that aligns LVLMs with human preferences. To ensure the robustness and versatility of IXC-2.5-Reward, we construct a high-quality multi-modal preference corpus spanning text, image, and video inputs across diverse domains, such as instruction following, general understanding, text-rich documents, mathematical reasoning, and video understanding. IXC-2.5-Reward achieves excellent results on the latest multi-modal reward model benchmark and shows competitive performance on text-only reward model benchmarks. We further demonstrate three key applications of IXC-2.5-Reward: (1) providing a supervisory signal for RL training: integrating IXC-2.5-Reward with Proximal Policy Optimization (PPO) yields IXC-2.5-Chat, which shows consistent improvements in instruction following and multi-modal open-ended dialogue; (2) selecting the best response from candidate responses for test-time scaling; and (3) filtering outlier or noisy samples from existing image and video instruction-tuning data. To ensure reproducibility and facilitate further research, we have open-sourced all model weights and training recipes at https://github.com/InternLM/InternLM-XComposer/tree/main/InternLM-XComposer-2.5-Reward.
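The three applications above can be made concrete with a short sketch. The code below is a minimal illustration under stated assumptions, not the repository's API: `reward_score` is a hypothetical stand-in for the actual IXC-2.5-Reward scoring call (the real model loading and inference code lives in the linked GitHub repository), and `shaped_reward` assumes the common KL-penalized RLHF recipe rather than the authors' exact PPO configuration.

```python
from typing import Callable, List, Tuple

# Hypothetical interface: a reward model maps a (prompt, response) pair to a
# scalar score. IXC-2.5-Reward's real scoring call would be substituted here.
RewardFn = Callable[[str, str], float]

def best_of_n(prompt: str, candidates: List[str], reward_score: RewardFn) -> str:
    """Application (2), test-time scaling: sample N candidate responses from
    the LVLM and keep the one the reward model scores highest."""
    return max(candidates, key=lambda resp: reward_score(prompt, resp))

def filter_noisy_samples(
    samples: List[Tuple[str, str]],
    reward_score: RewardFn,
    threshold: float,
) -> List[Tuple[str, str]]:
    """Application (3), data cleaning: drop instruction-tuning pairs whose
    reward falls below a chosen threshold, treating them as outliers or noise."""
    return [(p, r) for (p, r) in samples if reward_score(p, r) >= threshold]

def shaped_reward(
    rm_score: float,
    logprob_policy: float,
    logprob_ref: float,
    beta: float = 0.05,
) -> float:
    """Application (1), RL training signal: standard RLHF shaping (an
    assumption here, not necessarily the paper's exact setup) subtracts a
    KL penalty that keeps the PPO policy close to the reference model."""
    return rm_score - beta * (logprob_policy - logprob_ref)
```

Note that in best-of-N selection the reward model never changes the generator; quality improves purely by spending more inference-time compute on sampling and ranking, which is why a reliable scalar reward is the key ingredient.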

Yuhang Zang, Xiaoyi Dong, Pan Zhang, Yuhang Cao, Ziyu Liu, Shengyuan Ding, Shenxi Wu, Yubo Ma, Haodong Duan, Wenwei Zhang, Kai Chen, Dahua Lin, Jiaqi Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | TextVQA (val) | VQA Score | 81.3 | 309 |
| Reward Modeling | RewardBench | Avg Score | 88.6 | 118 |
| Visual Question Answering | ChartQA (test) | Accuracy | 80.5 | 58 |
| Geometry Problem Solving | MathVista GPS | Accuracy | 63.5 | 38 |
| Mathematical Problem Solving | MathVerse (testmini) | Accuracy | 18.8 | 28 |
| Mathematical Problem Solving | MATH-Vision (test) | Accuracy | 19.0 | 26 |
| Reward Modeling | VLRewardBench (test) | General | 84.7 | 24 |
| Multi-step Mathematical Reasoning | We-Math (test) | S1 Score | 44.4 | 20 |
| Multimodal Reward Modeling | VL-RewardBench | Accuracy | 65.8 | 17 |
| Multimodal Reward Modeling | Multimodal RewardBench | Accuracy | 66.6 | 17 |

(Showing 10 of 24 benchmark results.)
