Prometheus 2: An Open Source Language Model Specialized in Evaluating Other Language Models

About

Proprietary LMs such as GPT-4 are often employed to assess the quality of responses from various LMs. However, concerns including transparency, controllability, and affordability strongly motivate the development of open-source LMs specialized in evaluations. Yet existing open evaluator LMs exhibit critical shortcomings: 1) they issue scores that significantly diverge from those assigned by humans, and 2) they lack the flexibility to perform both direct assessment and pairwise ranking, the two most prevalent forms of assessment. Additionally, they do not possess the ability to evaluate based on custom evaluation criteria, focusing instead on general attributes like helpfulness and harmlessness. To address these issues, we introduce Prometheus 2, a more powerful evaluator LM than its predecessor that closely mirrors human and GPT-4 judgements. Moreover, it is capable of processing both direct assessment and pairwise ranking formats combined with user-defined evaluation criteria. On four direct assessment benchmarks and four pairwise ranking benchmarks, Prometheus 2 achieves the highest correlation and agreement with humans and proprietary LM judges among all tested open evaluator LMs. Our models, code, and data are all publicly available at https://github.com/prometheus-eval/prometheus-eval.
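The released checkpoints can be run as evaluators directly. Below is a minimal sketch of direct assessment with a user-defined rubric; it assumes the Hugging Face checkpoint name prometheus-eval/prometheus-7b-v2.0 and uses a simplified, illustrative prompt. The exact prompt templates and a higher-level wrapper are provided in the repository linked above.

```python
# Minimal sketch: direct assessment with a Prometheus 2 checkpoint and a custom rubric.
# Assumptions: the checkpoint id and the simplified prompt below are illustrative;
# the official templates live in https://github.com/prometheus-eval/prometheus-eval.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "prometheus-eval/prometheus-7b-v2.0"  # assumed Hugging Face model id
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# A user-defined evaluation criterion (score rubric) on a 1-5 scale.
rubric = (
    "Criteria: Is the response factually correct and complete?\n"
    "Score 1: Entirely incorrect. Score 3: Partially correct. "
    "Score 5: Correct, complete, and well explained."
)
instruction = "Explain why the sky appears blue."
response = "Shorter (blue) wavelengths are scattered more strongly by air molecules."

# Simplified direct-assessment prompt: the evaluator is asked for feedback
# followed by "[RESULT] <score>".
prompt = (
    "###Task Description: Evaluate the response against the score rubric, write "
    "feedback, then end with '[RESULT]' followed by an integer from 1 to 5.\n"
    f"###Instruction: {instruction}\n"
    f"###Response: {response}\n"
    f"###Score Rubric: {rubric}\n"
    "###Feedback:"
)

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": prompt}],
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)
output = model.generate(input_ids, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```

Pairwise ranking works the same way, except the prompt presents two candidate responses and asks which one better satisfies the rubric.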

Seungone Kim, Juyoung Suk, Shayne Longpre, Bill Yuchen Lin, Jamin Shin, Sean Welleck, Graham Neubig, Moontae Lee, Kyungjae Lee, Minjoon Seo • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Reward Modeling | RewardBench | Accuracy | 83.7 | 166 |
| Reward Modeling | RewardBench | Chat Score | 93 | 146 |
| Reward Modeling | RewardBench v1.0 (test) | Average Score | 0.72 | 89 |
| Reasoning Quality Correlation Analysis | PolitiFact | Somers' D | 0.0516 | 45 |
| Reasoning Quality Correlation Analysis | LIAR | Somers' D | 0.0367 | 45 |
| LLM-as-a-judge evaluation | FLASK | Pearson's r | 0.525 | 36 |
| LLM-as-a-judge evaluation | MT-Bench | Pearson's r | 0.538 | 36 |
| LLM-as-a-judge evaluation | FB Bench (Feedback Bench) | Pearson's r | 0.853 | 36 |
| Pair-wise comparison | RewardBench | Accuracy | 74.5 | 29 |
| LLM-as-a-judge evaluation | Vicuna benchmark | Pearson Correlation (r) | 51 | 20 |

Showing 10 of 53 rows.
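As context for the table, the Pearson's r and Somers' D entries measure how closely the evaluator's scores track reference scores from humans or a proprietary judge. The sketch below computes both with SciPy on made-up score arrays; the actual benchmark data and evaluation protocols are not reproduced here.

```python
# Illustration only: how correlation metrics like those in the table are computed.
# The score arrays are made up; real benchmarks compare evaluator scores against
# human or GPT-4 reference scores.
from scipy import stats

evaluator_scores = [4, 3, 5, 2, 4, 1, 5, 3]  # hypothetical Prometheus 2 scores (1-5)
reference_scores = [5, 3, 4, 2, 4, 2, 5, 3]  # hypothetical human / GPT-4 scores (1-5)

pearson_r, _ = stats.pearsonr(evaluator_scores, reference_scores)
# scipy's somersd(x, y) treats x as the independent and y as the dependent variable.
somers_d = stats.somersd(evaluator_scores, reference_scores).statistic

print(f"Pearson's r: {pearson_r:.3f}")
print(f"Somers' D:   {somers_d:.3f}")
```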
