
Themis: A Reference-free NLG Evaluation Language Model with Flexibility and Interpretability

About

The evaluation of natural language generation (NLG) tasks is a significant and longstanding research area. With the recent emergence of powerful large language models (LLMs), some studies have turned to LLM-based automatic evaluation methods, which demonstrate great potential to become a new evaluation paradigm following traditional string-based and model-based metrics. However, despite the improved performance of existing methods, they still possess some deficiencies, such as dependency on references and limited evaluation flexibility. Therefore, in this paper, we meticulously construct a large-scale NLG evaluation corpus NLG-Eval with annotations from both human and GPT-4 to alleviate the lack of relevant data in this field. Furthermore, we propose Themis, an LLM dedicated to NLG evaluation, which has been trained with our designed multi-perspective consistency verification and rating-oriented preference alignment methods. Themis can conduct flexible and interpretable evaluations without references, and it exhibits superior evaluation performance on various NLG tasks, simultaneously generalizing well to unseen tasks and surpassing other evaluation models, including GPT-4.
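To make the reference-free setup concrete, here is a minimal, hypothetical sketch of how such an evaluator is typically queried: the model receives only the task instruction, the source input, and the generated text (no gold reference), and returns a natural-language analysis followed by a rating. The prompt wording, rating scale, and helper names below are illustrative assumptions, not the paper's actual template.

```python
# Hypothetical sketch of reference-free, interpretable NLG evaluation.
# The prompt format and 1-5 rating scale are assumptions for illustration.
import re


def build_eval_prompt(task: str, aspect: str, source: str, output: str) -> str:
    """Assemble a reference-free evaluation prompt for an LLM evaluator."""
    return (
        f"Task: {task}\n"
        f"Evaluation aspect: {aspect}\n"
        f"Source:\n{source}\n"
        f"Generated output:\n{output}\n"
        "Give a short analysis, then a final line 'Rating: X' with X from 1 to 5."
    )


def parse_rating(response: str) -> int:
    """Extract the integer rating from the evaluator's free-form response."""
    match = re.search(r"Rating:\s*([1-5])", response)
    if match is None:
        raise ValueError("no rating found in evaluator response")
    return int(match.group(1))


prompt = build_eval_prompt(
    task="Text Summarization",
    aspect="Coherence",
    source="The city council met on Tuesday to discuss the new budget.",
    output="The council discussed the budget on Tuesday.",
)

# A real system would send `prompt` to the evaluator model; here we parse
# a mocked response to show the expected analysis-plus-rating format.
rating = parse_rating("The summary is faithful and well ordered. Rating: 4")
print(rating)  # → 4
```

Because the rating is accompanied by a free-form analysis, the same parsing step recovers a numeric score for meta-evaluation while keeping the explanation available for inspection.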

Xinyu Hu, Li Lin, Mingqi Gao, Xunjian Yin, Xiaojun Wan • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text Summarization | SummEval (Global) | Coherence | 85.2 | 16 |
| Dialogue Response Generation | Topical-Chat (Global) | Understandability | 94.1 | 16 |
| Text Quality Meta-evaluation | Topical-Chat (Local) | Understandability | 0.588 | 16 |
| Text Quality Meta-evaluation | SummEval (Local) | Coherence | 0.368 | 16 |
| Text Quality Meta-evaluation | SummEval & Topical-Chat Combined (Overall) | Overall Score | 41.7 | 16 |
