Entropy-Guided Data-Efficient Training for Multimodal Reasoning Reward Models

About

Multimodal reward models are crucial for aligning multimodal large language models with human preferences. Recent works have incorporated reasoning capabilities into these models, achieving promising results. However, training these models suffers from two critical challenges: (1) the inherent noise in preference datasets, which degrades model performance, and (2) the inefficiency of conventional training methods, which ignore the differences in sample difficulty. In this paper, we identify a strong correlation between response entropy and accuracy, indicating that entropy can serve as a reliable and unsupervised proxy for annotation noise and sample difficulty. Based on this insight, we propose a novel Entropy-Guided Training (EGT) approach for multimodal reasoning reward models, which combines two strategies: (1) entropy-guided data curation to mitigate the impact of unreliable samples, and (2) an entropy-guided training strategy that progressively introduces more complex examples. Extensive experiments across three benchmarks show that the EGT-trained model consistently outperforms state-of-the-art multimodal reward models.
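
Since both EGT strategies reduce to ranking training samples by an entropy score, they are easy to illustrate in code. Below is a minimal Python sketch, assuming mean per-token Shannon entropy as the sample-level score and simple threshold/staging rules; the function names, the keep fraction, and the number of curriculum stages are illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of the two EGT strategies: entropy-guided data curation
# and an entropy-guided (easy-to-hard) training schedule.
# Assumptions: per-token probability distributions for each response are
# already available (e.g., from a forward pass of the reward model), and
# mean token entropy serves as the sample-level score.
import numpy as np


def mean_token_entropy(token_probs: np.ndarray) -> float:
    """Average Shannon entropy over a response's per-token distributions.

    token_probs: (seq_len, vocab_size) array of next-token probabilities.
    """
    eps = 1e-12
    per_token = -(token_probs * np.log(token_probs + eps)).sum(axis=-1)
    return float(per_token.mean())


def entropy_guided_curation(samples, entropies, keep_fraction=0.8):
    """Strategy 1: data curation. Drop the highest-entropy samples,
    treating high entropy as a proxy for annotation noise
    (keep_fraction is an assumed hyperparameter)."""
    order = np.argsort(entropies)                      # low -> high entropy
    keep = order[: int(len(samples) * keep_fraction)]
    return [samples[i] for i in keep], entropies[keep]


def entropy_guided_curriculum(samples, entropies, num_stages=3):
    """Strategy 2: training schedule. Introduce samples from low entropy
    (easy/reliable) to high entropy (hard) across successive stages."""
    order = np.argsort(entropies)
    return [[samples[i] for i in stage]
            for stage in np.array_split(order, num_stages)]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    samples = [f"response_{i}" for i in range(10)]
    # Toy stand-in for model outputs: per-token probability distributions.
    dists = rng.dirichlet(np.ones(50), size=(10, 8))   # (n, seq_len, vocab)
    entropies = np.array([mean_token_entropy(d) for d in dists])

    curated, curated_h = entropy_guided_curation(samples, entropies)
    for i, stage in enumerate(entropy_guided_curriculum(curated, curated_h)):
        print(f"stage {i}: {stage}")
```

A practical consequence of this design is that both strategies are unsupervised: the entropy scores come from the model's own output distributions, so no additional annotation is needed to filter noise or order the curriculum.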

Shidong Yang, Tongwen Huang, Hao Wen, Yong Wang, Li Chen, Xiangxiang Chu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Reward Modeling | VL-RewardBench | Accuracy | 77.15 | 17 |
| Multimodal Reward Modeling | Multimodal RewardBench | Accuracy | 84.3 | 17 |
| Multimodal Reward Modeling | MM-RLHF-RewardBench | Accuracy | 85.88 | 9 |
| Multimodal Reward Modeling | VL-RewardBench, Multimodal RewardBench, and MM-RLHF-RewardBench Aggregate | Accuracy | 82.44 | 9 |
