Multi-Task Reinforcement Learning for Enhanced Multimodal LLM-as-a-Judge
About
Multimodal Large Language Models (MLLMs) have been widely adopted as MLLM-as-a-Judges due to their strong alignment with human judgment across various visual tasks. However, most existing judge models are optimized for single-task scenarios and struggle to generalize to diverse contexts, a critical requirement for reliable evaluation. To address this limitation, we propose Multi-Task Reinforcement Learning for MLLM-as-a-Judge (MT-RL-Judge), a framework that jointly optimizes the judge model across multiple tasks, leveraging the generalization capabilities of RL. Experimental results demonstrate that MT-RL-Judge outperforms several strong baselines in both judgment consistency and correlation with human preferences. Furthermore, our approach generalizes robustly to out-of-distribution tasks, further validating its effectiveness.
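The core idea of joint multi-task optimization can be illustrated with a toy sketch. This is not the paper's implementation: the task names, reward rule (1 if the sampled verdict matches the gold label, else 0), and the per-task scalar policy are all simplified assumptions, standing in for the MLLM policy and task-specific judging rewards. Each step mixes a batch across all tasks and applies one shared REINFORCE-style update.

```python
import math
import random

random.seed(0)

# Hypothetical task mix and label skews, for illustration only.
TASKS = ["binary_cls", "alignment", "safety"]
GOLD_RATE = {"binary_cls": 0.8, "alignment": 0.6, "safety": 0.9}  # P(label = 1)

# Toy labelled pools: each example is just a gold verdict in {0, 1}.
POOLS = {t: [1 if random.random() < GOLD_RATE[t] else 0 for _ in range(300)]
         for t in TASKS}

# Per-task policy parameter (stand-in for the shared judge model's weights).
logits = {t: 0.0 for t in TASKS}

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def train(steps=2000, lr=0.1, batch=8):
    """One shared RL loop over batches mixed from every task."""
    for _ in range(steps):
        for task in TASKS:            # multi-task batch: every task, every step
            for _ in range(batch):
                gold = random.choice(POOLS[task])
                p = sigmoid(logits[task])
                verdict = 1 if random.random() < p else 0
                reward = 1.0 if verdict == gold else 0.0
                # REINFORCE: d/dlogit log pi(verdict) = verdict - p
                logits[task] += lr * reward * (verdict - p)
    return {t: sigmoid(logits[t]) for t in TASKS}

probs = train()
```

Because every gold-label rate here exceeds 0.5, each task's policy drifts toward always answering 1; the point is only that a single update rule trains all tasks jointly rather than one judge per task.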
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Binary Classification | AGIN-Tech (test) | Macro-F1 | 81.37 | 5 |
| Binary Classification | Seetrue (test) | Macro-F1 | 83.67 | 5 |
| Binary Classification | AGIN-Nat. (test) | Macro-F1 | 81.63 | 5 |
| Binary Classification | AGIN-Rat (test) | Macro-F1 | 81.58 | 5 |
| Binary Classification | ImageReward (test) | Macro-F1 | 64.97 | 5 |
| Binary Classification | Unsafe Bench (test) | Macro-F1 | 85.22 | 5 |
| Image-text alignment | MJ-Bench | Macro-F1 | 60.59 | 3 |
| Safety Judge | MJ-Bench | Macro-F1 | 82.23 | 3 |
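Every row above reports Macro-F1: the unweighted mean of per-class F1 scores, so the minority class counts as much as the majority class. A minimal computation for the binary case (the example labels are illustrative, not benchmark data):

```python
def f1(tp, fp, fn):
    """F1 = 2*TP / (2*TP + FP + FN); defined as 0 when the denominator is 0."""
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom else 0.0

def macro_f1(y_true, y_pred):
    """Unweighted mean of per-class F1 over the two classes {0, 1}."""
    scores = []
    for cls in (0, 1):
        tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
        fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
        fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
        scores.append(f1(tp, fp, fn))
    return sum(scores) / len(scores)

# Imbalanced toy example: accuracy is 90%, but the one minority-class
# miss drags macro-F1 down to ~0.804.
y_true = [1, 1, 1, 1, 1, 1, 1, 1, 0, 0]
y_pred = [1, 1, 1, 1, 1, 1, 1, 1, 1, 0]
score = macro_f1(y_true, y_pred)  # ≈ 0.804
```

This sensitivity to minority-class errors is why Macro-F1 is the standard metric for skewed judging datasets like those in the table.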