
Prometheus: Inducing Fine-grained Evaluation Capability in Language Models

About

Recently, using a powerful proprietary Large Language Model (LLM) (e.g., GPT-4) as an evaluator for long-form responses has become the de facto standard. However, for practitioners with large-scale evaluation tasks and custom criteria in consideration (e.g., child-readability), using proprietary LLMs as evaluators is unreliable due to their closed-source nature, uncontrolled versioning, and prohibitive costs. In this work, we propose Prometheus, a fully open-source LLM that is on par with GPT-4's evaluation capabilities when the appropriate reference materials (reference answer, score rubric) are provided. We first construct the Feedback Collection, a new dataset that consists of 1K fine-grained score rubrics, 20K instructions, and 100K responses and language feedback generated by GPT-4. Using the Feedback Collection, we train Prometheus, a 13B evaluator LLM that can assess any given long-form text based on a customized score rubric provided by the user. Experimental results show that Prometheus scores a Pearson correlation of 0.897 with human evaluators when evaluating with 45 customized score rubrics, which is on par with GPT-4 (0.882) and greatly outperforms ChatGPT (0.392). Furthermore, measuring correlation with GPT-4 using 1222 customized score rubrics across four benchmarks (MT Bench, Vicuna Bench, Feedback Bench, Flask Eval) shows similar trends, bolstering Prometheus's capability as an evaluator LLM. Lastly, Prometheus achieves the highest accuracy on two human preference benchmarks (HHH Alignment & MT Bench Human Judgment) compared to open-source reward models explicitly trained on human preference datasets, highlighting its potential as a universal reward model. We open-source our code, dataset, and model at https://kaistai.github.io/prometheus/.
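The abstract describes feeding the evaluator four components: the instruction, the response to grade, a reference answer, and a user-defined score rubric. A minimal sketch of how such an input might be assembled is below; the template wording and the function name `build_evaluation_prompt` are illustrative assumptions, not the exact Feedback Collection format (which is available on the project page).

```python
def build_evaluation_prompt(instruction, response, reference_answer, rubric):
    """Assemble a single evaluator-LLM input from the four components the
    paper describes. Section headers here are an assumed template, not the
    verbatim Prometheus prompt."""
    return (
        "###Task Description:\n"
        "Evaluate the response strictly against the score rubric (1-5), "
        "then write feedback followed by an integer score.\n\n"
        f"###Instruction:\n{instruction}\n\n"
        f"###Response to evaluate:\n{response}\n\n"
        f"###Reference answer (score 5):\n{reference_answer}\n\n"
        f"###Score rubric:\n{rubric}\n"
    )

prompt = build_evaluation_prompt(
    instruction="Explain photosynthesis to a 7-year-old.",
    response="Plants eat sunlight to make food.",
    reference_answer="Plants use sunlight, water, and air to make their own food.",
    rubric="Child-readability: is the explanation simple, concrete, and jargon-free?",
)
```

The resulting string would then be passed to the evaluator model, whose generated output contains free-form feedback and a final score.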

Seungone Kim, Jamin Shin, Yejin Cho, Joel Jang, Shayne Longpre, Hwaran Lee, Sangdoo Yun, Seongjin Shin, Sungdong Kim, James Thorne, Minjoon Seo • 2023

Related benchmarks

Task | Dataset | Result | Rank
Critique | VISCO 1.0 (test) | VIScore: 19.3 | 26
Evaluation Alignment | SummHF | QWK: 0.3915 | 16
Evaluation Alignment | ASAP 2.0 | QWK: 0.5566 | 16
Evaluation Alignment | DREsS | QWK: 0.3665 | 16
Dialogue Response Generation | Topical-Chat Global | Und: 69.2 | 16
Text Quality Meta-evaluation | SummEval (Local) | Coherence: 0.476 | 16
Text Quality Meta-evaluation | SummEval & Topical-Chat Combined (Overall) | Overall Score: 48.4 | 16
Text Quality Meta-evaluation | Topical-Chat (Local) | Understandability: 0.433 | 16
Text Summarization | SummEval Global | Coherence: 57.6 | 16
Patent Quality Evaluation | Pap2Pat EvalGold N=146 (test) | TCF: 51 | 8
