LLaVA-Critic: Learning to Evaluate Multimodal Models
About
We introduce LLaVA-Critic, the first open-source large multimodal model (LMM) designed as a generalist evaluator to assess performance across a wide range of multimodal tasks. LLaVA-Critic is trained using a high-quality critic instruction-following dataset that incorporates diverse evaluation criteria and scenarios. Our experiments demonstrate the model's effectiveness in two key areas: (1) LMM-as-a-Judge, where LLaVA-Critic provides reliable evaluation scores, performing on par with or surpassing GPT models on multiple evaluation benchmarks; and (2) Preference Learning, where it generates reward signals for preference learning, enhancing model alignment capabilities. This work underscores the potential of open-source LMMs in self-critique and evaluation, setting the stage for future research into scalable, superhuman alignment feedback mechanisms for LMMs.
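As a concrete illustration of the LMM-as-a-Judge use case, the sketch below shows how one might query a LLaVA-Critic checkpoint for a pointwise quality score through the Hugging Face `transformers` API. This is a minimal sketch, not the official evaluation pipeline: the checkpoint id `lmms-lab/llava-critic-7b`, the prompt wording, and the score-parsing regex are illustrative assumptions (the released checkpoints are built on LLaVA-OneVision, hence the class choice; check the official repo for the exact model ids and evaluation templates).

```python
# Minimal sketch: pointwise scoring with a LLaVA-Critic checkpoint via
# Hugging Face transformers. The model id, prompt wording, and score
# parsing below are illustrative assumptions, not the official pipeline.
import re
import torch
from PIL import Image
from transformers import AutoProcessor, LlavaOnevisionForConditionalGeneration

MODEL_ID = "lmms-lab/llava-critic-7b"  # assumed checkpoint id

processor = AutoProcessor.from_pretrained(MODEL_ID)
model = LlavaOnevisionForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype=torch.float16, device_map="auto"
)

def critic_score(image: Image.Image, question: str, answer: str) -> float | None:
    """Ask the critic to rate a response on a 1-10 scale and parse the score."""
    critic_prompt = (
        "You are a judge evaluating a multimodal assistant.\n"
        f"Question: {question}\n"
        f"Response: {answer}\n"
        "Rate the response on a scale of 1 to 10 and explain your reasoning. "
        "Begin your reply with 'Score:'."
    )
    conversation = [
        {
            "role": "user",
            "content": [
                {"type": "image"},
                {"type": "text", "text": critic_prompt},
            ],
        }
    ]
    prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)
    inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)
    text = processor.decode(output[0], skip_special_tokens=True)
    # Parse the leading numeric score, if the critic followed the format.
    match = re.search(r"Score:\s*(\d+(?:\.\d+)?)", text)
    return float(match.group(1)) if match else None
```

The same scoring call can be run twice (once per candidate response) to obtain pairwise preference labels, which is how critic scores typically become reward signals for preference learning.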
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reward Modeling | RewardBench | Avg Score | 80 | 118 |
| Correction | VISCO full 1.0 (test) | Correction Gain | 58.9 | 46 |
| Critique | VISCO 1.0 (test) | VISCore | 42.6 | 26 |
| Reward Modeling | VLRewardBench (test) | General | 54.6 | 24 |
| Multi-modal Preference Evaluation | MM-RewardBench | Accuracy | 56 | 19 |
| Multi-modal Preference Evaluation | VL-Reward | Accuracy | 54.1 | 19 |
| Large Multimodal Model Evaluation | MLLM-as-a-Judge v1.0 (test) | Overall Score | 39.3 | 16 |
| RLHF | HH-RLHF | Human Win Rate | 68.2 | 16 |
| Pairwise Ranking | WildVision Arena in-domain | Accuracy (w/ Tie) | 60.5 | 11 |
| Pointwise Scoring | ImageDC pointwise | Kendall's Tau | 0.949 | 9 |
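For context on the pointwise-scoring row, Kendall's Tau measures rank agreement between the critic's scores and a set of reference scores (e.g., from GPT-4V or human raters) over the same responses. A minimal sketch of how such a number is computed, using made-up scores purely for illustration:

```python
# Sketch: Kendall's Tau rank correlation between critic scores and
# reference scores. The score lists are made up for illustration.
from scipy.stats import kendalltau

critic_scores = [7.0, 5.5, 9.0, 4.0, 6.5, 8.0]      # scores from the critic model
reference_scores = [7.5, 5.0, 9.5, 3.5, 6.0, 8.5]   # e.g. GPT-4V or human scores

tau, p_value = kendalltau(critic_scores, reference_scores)
print(f"Kendall's Tau: {tau:.3f} (p={p_value:.3g})")
# A tau near 1.0 (like the 0.949 reported on ImageDC pointwise) means the
# critic ranks responses almost identically to the reference judge.
```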