
RISE: Leveraging Retrieval Techniques for Summarization Evaluation

About

Evaluating automatically generated text summaries is a challenging task. While many interesting approaches have been proposed, they still fall short of human evaluation. We present RISE, a new approach for evaluating summaries by leveraging techniques from information retrieval. RISE is first trained on a retrieval task using a dual-encoder setup, and can subsequently be used to evaluate a generated summary given an input document, without gold reference summaries. RISE is especially well suited to new datasets where reference summaries may not be available for evaluation. We conduct comprehensive experiments on the SummEval benchmark (Fabbri et al., 2021), and the results show that RISE correlates more highly with human evaluations than many past approaches to summarization evaluation. Furthermore, RISE also demonstrates data efficiency and generalizability across languages.
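The dual-encoder scoring idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the hashed bag-of-words `embed` function is a toy stand-in for RISE's learned encoder, and all names here are illustrative assumptions.

```python
import hashlib
import math

DIM = 256  # embedding dimensionality of the toy encoder

def embed(text: str) -> list[float]:
    # Toy stand-in for a learned encoder: a hashed bag-of-words
    # vector, unit-normalized so dot product equals cosine similarity.
    vec = [0.0] * DIM
    for tok in text.lower().split():
        tok = tok.strip(".,!?")
        h = int(hashlib.md5(tok.encode()).hexdigest(), 16) % DIM
        vec[h] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def rise_style_score(document: str, summary: str) -> float:
    # Dual-encoder scoring: encode document and summary independently,
    # then score the pair by the similarity of the two embeddings.
    # No reference summary is needed -- only the input document.
    d, s = embed(document), embed(summary)
    return sum(a * b for a, b in zip(d, s))
```

With a real trained encoder, a summary that faithfully covers the document should embed close to it and receive a higher score than an off-topic one; the toy version reproduces that behavior only through lexical overlap.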

David Uthus, Jianmo Ni • 2022

Related benchmarks

Task                                          Dataset           Result           Rank
Summarization Evaluation                      SummEval          Coherence: 53.3  41
Summarization Evaluation (Human Correlation)  arXiv (test)      Relevance: 0.75  6
Summarization Evaluation (Human Correlation)  GovReport (test)  Relevance: 61    4
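The "Human Correlation" results above are rank correlations between metric scores and human ratings. A commonly used statistic for this kind of evaluation is Kendall's tau; the sketch below is a minimal tau-a implementation (no tie handling) for illustration, not the exact procedure used for these numbers.

```python
def kendall_tau(xs: list[float], ys: list[float]) -> float:
    # Kendall's tau-a: (concordant - discordant) / total pairs.
    # Assumes no tied values; illustration only.
    n = len(xs)
    concordant = discordant = 0
    for i in range(n):
        for j in range(i + 1, n):
            sign = (xs[i] - xs[j]) * (ys[i] - ys[j])
            if sign > 0:
                concordant += 1   # pair ordered the same way in both lists
            elif sign < 0:
                discordant += 1   # pair ordered oppositely
    return (concordant - discordant) / (n * (n - 1) / 2)
```

Feeding in one list of metric scores and one list of human ratings (over the same set of summaries) yields a value in [-1, 1], where 1 means the metric ranks summaries exactly as humans do.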

Other info

Code
