
Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models

About

Proprietary LMs such as GPT-4 are often employed to assess the quality of responses from various LMs. However, concerns about transparency, controllability, and affordability strongly motivate the development of open-source LMs specialized in evaluation. At the same time, existing open evaluator LMs exhibit critical shortcomings: 1) they issue scores that significantly diverge from those assigned by humans, and 2) they lack the flexibility to perform both direct assessment and pairwise ranking, the two most prevalent forms of assessment. Additionally, they do not possess the ability to evaluate based on custom evaluation criteria, focusing instead on general attributes like helpfulness and harmlessness. To address these issues, we introduce Prometheus 2, a more powerful evaluator LM than its predecessor that closely mirrors human and GPT-4 judgements. It can process both direct assessment and pairwise ranking formats grounded in user-defined evaluation criteria. On four direct assessment benchmarks and four pairwise ranking benchmarks, Prometheus 2 scores the highest correlation and agreement with humans and proprietary LM judges among all tested open evaluator LMs. Our models, code, and data are all publicly available at https://github.com/prometheus-eval/prometheus-eval.
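The two evaluation formats described above can be sketched as prompt builders. This is a hypothetical illustration of the idea, not the paper's actual prompt templates; the section labels, rubric field, and 1-5 scale are assumptions for the example.

```python
# Illustrative sketch of the two evaluation formats (direct assessment
# and pairwise ranking); templates are hypothetical, not Prometheus 2's
# actual prompts.

def direct_assessment_prompt(instruction: str, response: str, rubric: str) -> str:
    """Ask the evaluator LM to score a single response against a
    user-defined rubric (here, an assumed 1-5 scale)."""
    return (
        "###Task: Evaluate the response against the rubric and "
        "output a score from 1 to 5.\n"
        f"###Instruction: {instruction}\n"
        f"###Response: {response}\n"
        f"###Rubric: {rubric}\n"
    )

def pairwise_ranking_prompt(instruction: str, response_a: str,
                            response_b: str, rubric: str) -> str:
    """Ask the evaluator LM to pick the better of two responses
    ('A' or 'B') against the same user-defined rubric."""
    return (
        "###Task: Compare the two responses against the rubric and "
        "output the better one, 'A' or 'B'.\n"
        f"###Instruction: {instruction}\n"
        f"###Response A: {response_a}\n"
        f"###Response B: {response_b}\n"
        f"###Rubric: {rubric}\n"
    )
```

The key point is that both formats share the same user-defined rubric, which is what distinguishes this setup from evaluators restricted to fixed attributes like helpfulness.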

Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reward Modeling | RewardBench | Avg Score | 75.3 | 118 |
| Reward Modeling | RewardBench | Accuracy | 83.7 | 70 |
| Reward Modeling | RewardBench v1.0 (test) | Chat Score | 0.855 | 27 |
| Audio QA Correctness Assessment | MMAU and MMAR unseen question-based (test) | Spearman ρ | 0.7439 | 18 |
| LLM-as-a-judge evaluation | FLASK | Pearson's r | 0.512 | 16 |
| Text Summarization | SummEval Global | Coherence | 78.4 | 16 |
| LLM-as-a-judge evaluation | MT-Bench | Pearson's r | 0.519 | 16 |
| LLM-as-a-judge evaluation | Vicuna-bench | Pearson's r | 0.488 | 16 |
| LLM-as-a-judge evaluation | FB Bench (Feedback Bench) | Pearson's r | 0.845 | 16 |
| Text Quality Meta-evaluation | SummEval (Local) | Coherence | 0.623 | 16 |

Showing 10 of 25 rows
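Several benchmark rows above report Pearson's r or Spearman's ρ between the evaluator's scores and reference scores (human or proprietary LM judgements). A minimal sketch of how these correlations are computed, using made-up score lists and assuming no tied values for the rank conversion:

```python
# Pearson's r and Spearman's rho between two score lists.
# The `human` and `model` scores below are made up for illustration.

def pearson_r(x, y):
    """Pearson correlation: covariance normalized by both std devs."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman_rho(x, y):
    """Spearman correlation: Pearson's r on the ranks of the values.
    Simplified: assumes no ties (no averaged ranks)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank
        return r
    return pearson_r(ranks(x), ranks(y))

human = [5, 3, 4, 2, 1]   # reference judge's scores
model = [4, 3, 5, 2, 1]   # evaluator LM's scores
print(round(pearson_r(human, model), 3))   # → 0.9
print(round(spearman_rho(human, model), 3))  # → 0.9
```

Pearson's r measures linear agreement on the raw scores, while Spearman's ρ only cares about agreement in ordering, which is why it is the natural fit for ranking-style benchmarks.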
