
Prometheus-Vision: Vision-Language Model as a Judge for Fine-Grained Evaluation

About

Assessing long-form responses generated by Vision-Language Models (VLMs) is challenging. It requires not only checking whether the VLM follows the given instruction but also verifying whether the text output is properly grounded in the given image. Inspired by the recent approach of evaluating LMs with LMs, in this work we propose to evaluate VLMs with VLMs. For this purpose, we present a new feedback dataset called the Perception Collection, encompassing 15K customized score rubrics that users might care about during assessment. Using the Perception Collection, we train Prometheus-Vision, the first open-source VLM evaluator model that can understand user-defined score criteria during evaluation. Prometheus-Vision shows the highest Pearson correlation with human evaluators and GPT-4V among open-source models, demonstrating its effectiveness for transparent and accessible evaluation of VLMs. We open-source our code, dataset, and model at https://github.com/kaistAI/prometheus-vision.
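The evaluation scheme described above conditions a judge VLM on the instruction, the response to evaluate, and a user-defined score rubric, then asks it to produce feedback plus a score. The sketch below illustrates that flow; the prompt wording and the "[RESULT] n" output convention are illustrative assumptions, not the exact Prometheus-Vision format.

```python
# Minimal sketch of rubric-conditioned pointwise evaluation.
# The template and "[RESULT] n" convention are assumptions for illustration.
import re

EVAL_TEMPLATE = """###Instruction:
{instruction}

###Response to evaluate:
{response}

###Score rubric:
{rubric}

Write feedback on the response, then end with "[RESULT]" followed by an
integer score from 1 to 5."""


def build_eval_prompt(instruction: str, response: str, rubric: str) -> str:
    """Assemble the judge prompt from the instruction, response, and rubric."""
    return EVAL_TEMPLATE.format(
        instruction=instruction, response=response, rubric=rubric
    )


def parse_score(judge_output: str):
    """Extract the integer score from the judge's trailing '[RESULT] n'."""
    match = re.search(r"\[RESULT\]\s*([1-5])", judge_output)
    return int(match.group(1)) if match else None


# Hypothetical usage: the prompt would be sent, together with the image,
# to the judge VLM; here we only parse a mock judge output.
prompt = build_eval_prompt(
    "Describe the weather in the image.",
    "It is a sunny day with a few scattered clouds.",
    "Is the description properly grounded in the visual content of the image?",
)
print(parse_score("The response is mostly grounded in the image. [RESULT] 4"))
```

Separating prompt construction from score parsing keeps the rubric swappable, which is the point of user-defined criteria: only the `rubric` string changes between evaluation runs.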

Seongyun Lee, Seungone Kim, Sue Hyun Park, Geewook Kim, Minjoon Seo • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Large Multimodal Model Evaluation | MLLM-as-a-Judge v1.0 (test) | Overall Score | 21.3 | 16 |
| Pairwise Ranking | WildVision Arena in-domain | Accuracy (w/ Tie) | 47.3 | 11 |
| Pointwise Scoring | MMHal pointwise | Kendall's Tau | 0.59 | 9 |
| Pointwise Scoring | MLLM-as-a-Judge in-domain v1.0 (test) | ImageDC Score | 26.2 | 9 |
| Pointwise Scoring | MMVet pointwise | Kendall's Tau | 0.436 | 9 |
| Pointwise Scoring | WildVision (pointwise) | Kendall's Tau | 0.615 | 9 |
| Pointwise Scoring | ImageDC pointwise | Kendall's Tau | 0.452 | 9 |
| Pointwise Scoring | LLaVA-B pointwise | Kendall's Tau | 0.487 | 9 |
| Pointwise Scoring | LLaVA-W pointwise | Kendall's Tau | 0.503 | 9 |
| Pointwise Scoring | L-Wilder pointwise | Kendall's Tau | 0.231 | 9 |
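Most of the pointwise-scoring rows above report Kendall's Tau, a rank correlation between the judge's scores and the reference (human or GPT-4V) scores. A minimal pure-Python version of the tau-a variant (ties counted as neither concordant nor discordant) looks like this; the example score lists are hypothetical.

```python
# Kendall's Tau (tau-a): (concordant pairs - discordant pairs) / total pairs.
from itertools import combinations


def kendall_tau(xs, ys):
    """Rank correlation between two equal-length score lists."""
    assert len(xs) == len(ys) and len(xs) > 1
    concordant = discordant = 0
    for i, j in combinations(range(len(xs)), 2):
        prod = (xs[i] - xs[j]) * (ys[i] - ys[j])
        if prod > 0:
            concordant += 1   # both pairs ordered the same way
        elif prod < 0:
            discordant += 1   # pairs ordered oppositely
        # prod == 0: tie in at least one list, counted as neither
    n = len(xs)
    return (concordant - discordant) / (n * (n - 1) / 2)


# Hypothetical example: judge scores vs. human scores on five responses.
judge = [4, 2, 5, 1, 3]
human = [5, 1, 4, 2, 3]
print(round(kendall_tau(judge, human), 3))  # → 0.6
```

Tau ranges from -1 (fully reversed ranking) to 1 (identical ranking), so values like 0.615 on WildVision indicate moderately strong agreement with the reference scores.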
