
LLaVA-Critic: Learning to Evaluate Multimodal Models

About

We introduce LLaVA-Critic, the first open-source large multimodal model (LMM) designed as a generalist evaluator to assess performance across a wide range of multimodal tasks. LLaVA-Critic is trained using a high-quality critic instruction-following dataset that incorporates diverse evaluation criteria and scenarios. Our experiments demonstrate the model's effectiveness in two key areas: (1) LMM-as-a-Judge, where LLaVA-Critic provides reliable evaluation scores, performing on par with or surpassing GPT models on multiple evaluation benchmarks; and (2) Preference Learning, where it generates reward signals for preference learning, enhancing model alignment capabilities. This work underscores the potential of open-source LMMs in self-critique and evaluation, setting the stage for future research into scalable, superhuman alignment feedback mechanisms for LMMs.
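The preference-learning use case above amounts to asking the critic for pointwise quality scores and converting them into a pairwise preference signal. A minimal sketch of that protocol, with `critic_score` as a hypothetical stand-in for an actual query to LLaVA-Critic (stubbed here with a trivial heuristic so the example runs offline):

```python
# Sketch of the LMM-as-a-Judge pairwise protocol.
# `critic_score` is a hypothetical placeholder for prompting the critic LMM;
# the stub below scores by word count purely so the example is runnable.

def critic_score(image, question, response):
    """Hypothetical pointwise judge: return a quality score (1-10).
    A real system would prompt LLaVA-Critic with the image, question,
    response, and an evaluation rubric."""
    return min(10, len(response.split()))  # stub heuristic, not the real critic

def pairwise_preference(image, question, resp_a, resp_b):
    """Derive a pairwise preference (reward signal) from two pointwise scores."""
    score_a = critic_score(image, question, resp_a)
    score_b = critic_score(image, question, resp_b)
    if score_a > score_b:
        return "A"
    if score_b > score_a:
        return "B"
    return "Tie"

print(pairwise_preference(None, "What is shown?",
                          "A dog running on the beach at sunset.",
                          "A dog."))  # → A
```

The same pointwise/pairwise split mirrors the "Pointwise Scoring" and "Pairwise Ranking" rows in the benchmark table below.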

Tianyi Xiong, Xiyao Wang, Dong Guo, Qinghao Ye, Haoqi Fan, Quanquan Gu, Heng Huang, Chunyuan Li • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Reward Modeling | RewardBench | Avg Score: 80 | 118 |
| Correction | VISCO full 1.0 (test) | Correction Gain: 58.9 | 46 |
| Critique | VISCO 1.0 (test) | VISCore: 42.6 | 26 |
| Reward Modeling | VLRewardBench (test) | General: 54.6 | 24 |
| Multi-modal Preference Evaluation | MM-RewardBench | Accuracy: 56 | 19 |
| Multi-modal Preference Evaluation | VL-Reward | Accuracy: 54.1 | 19 |
| Large Multimodal Model Evaluation | MLLM-as-a-Judge v1.0 (test) | Overall Score: 39.3 | 16 |
| RLHF | HH-RLHF | Human Win Rate: 68.2 | 16 |
| Pairwise Ranking | WildVision Arena in-domain | Accuracy (w/ Tie): 60.5 | 11 |
| Pointwise Scoring | ImageDC pointwise | Kendall's Tau: 0.949 | 9 |

(Showing 10 of 18 rows.)
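Judge quality in the pointwise-scoring setting (the ImageDC row) is reported as Kendall's Tau, a rank correlation between the judge's scores and reference scores. A minimal pure-Python sketch of the Tau-a variant, shown here only to make the metric concrete (the benchmark may use a tie-corrected variant):

```python
def kendall_tau(x, y):
    """Kendall's Tau-a: (concordant pairs - discordant pairs) / total pairs.

    x, y are equal-length score lists, e.g. judge scores vs. reference scores.
    Tied pairs count as neither concordant nor discordant.
    """
    assert len(x) == len(y)
    n = len(x)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            s = (x[i] - x[j]) * (y[i] - y[j])
            if s > 0:
                concordant += 1   # pair ordered the same way in both lists
            elif s < 0:
                discordant += 1   # pair ordered oppositely

    total = n * (n - 1) // 2
    return (concordant - discordant) / total

# Rankings that agree perfectly give tau = 1.0.
print(kendall_tau([1, 2, 3, 4], [10, 20, 30, 40]))  # → 1.0
```

A Tau near 1 (e.g. the 0.949 above) means the judge's score ordering almost always matches the reference ordering.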

Other info

Code
