
RoMe: A Robust Metric for Evaluating Natural Language Generation

About

Evaluating Natural Language Generation (NLG) systems is a challenging task. Firstly, the metric should ensure that the generated hypothesis reflects the reference's semantics. Secondly, it should consider the grammatical quality of the generated sentence. Thirdly, it should be robust enough to handle various surface forms of the generated sentence. Thus, an effective evaluation metric has to be multifaceted. In this paper, we propose an automatic evaluation metric incorporating several core aspects of natural language understanding (language competence, syntactic and semantic variation). Our proposed metric, RoMe, is trained on language features such as semantic similarity combined with tree edit distance and grammatical acceptability, using a self-supervised neural network to assess the overall quality of the generated sentence. Moreover, we perform an extensive robustness analysis of the state-of-the-art methods and RoMe. Empirical results suggest that RoMe correlates more strongly with human judgment than state-of-the-art metrics when evaluating system-generated sentences across several NLG tasks.
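The abstract describes RoMe as combining three features, semantic similarity, tree edit distance, and grammatical acceptability, through a neural network that produces an overall quality score. The sketch below is a minimal illustration of that composite-scoring idea under stated assumptions, not the authors' implementation (the official code is linked under Other info). The class and function names, the assumption that each feature is pre-normalized to [0, 1], and the tiny feed-forward scorer are all hypothetical.

```python
# Hypothetical sketch of a RoMe-style composite scorer; the actual
# repository may structure this very differently.
import torch
import torch.nn as nn


class QualityScorer(nn.Module):
    """Maps a 3-dim feature vector (semantic similarity, tree edit
    similarity, grammatical acceptability) to a quality score in [0, 1]."""

    def __init__(self, hidden: int = 8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
            nn.Sigmoid(),
        )

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        return self.net(features)


def score(semantic_sim: float, tree_edit_sim: float, grammaticality: float,
          scorer: QualityScorer) -> float:
    """Combine the three features into one score.

    Assumption: each feature has already been computed elsewhere
    (e.g. embedding similarity, normalized tree edit distance, an
    acceptability classifier) and scaled to [0, 1].
    """
    feats = torch.tensor([[semantic_sim, tree_edit_sim, grammaticality]])
    with torch.no_grad():
        return scorer(feats).item()


# Example: a hypothesis that is semantically close to the reference,
# syntactically similar, and grammatical should score high once the
# scorer has been trained (here the weights are untrained).
scorer = QualityScorer()
print(score(0.91, 0.85, 0.97, scorer))
```

In the paper's setup the scorer is trained in a self-supervised manner; the example above leaves the network untrained and only shows how the three feature streams would feed a single overall judgment.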

Md Rashad Al Hasan Rony, Liubov Kovriguina, Debanjan Chaudhuri, Ricardo Usbeck, Jens Lehmann • 2022

Related benchmarks

Task | Dataset | Result | Rank
Metrics correlation with human judgment | WebNLG challenge 2017 | Spearman Correlation (rho): 0.84 | 45
Language Generation Evaluation | SFHOTEL (test) | Informativeness: 0.244 | 14
Language Generation Evaluation | BAGEL (test) | Informativeness: 0.17 | 14
Dialogue Evaluation | In-car dialogue | -- | 3
Dialogue Evaluation | Soccer dialogue | -- | 3

Other info

Code
