
Identifying Reliable Evaluation Metrics for Scientific Text Revision

About

Evaluating text revision in scientific writing remains a challenge, as traditional metrics such as ROUGE and BERTScore primarily focus on similarity rather than capturing meaningful improvements. In this work, we analyse and identify the limitations of these metrics and explore alternative evaluation methods that better align with human judgments. We first conduct a manual annotation study to assess the quality of different revisions. Then, we investigate reference-free evaluation metrics from related NLP domains. Additionally, we examine LLM-as-a-judge approaches, analysing their ability to assess revisions with and without a gold reference. Our results show that LLMs effectively assess instruction-following but struggle with correctness, while domain-specific metrics provide complementary insights. We find that a hybrid approach combining LLM-as-a-judge evaluation and task-specific metrics offers the most reliable assessment of revision quality.
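
As an illustration of the similarity-versus-improvement gap the abstract describes, here is a minimal sketch (not from the paper) comparing how ROUGE-L and BERTScore rate an unedited, error-ridden draft against a genuine revision. It assumes the third-party rouge-score and bert-score packages, and all sentences are invented:

```python
# Minimal sketch (not from the paper): similarity metrics reward
# surface overlap, so an unedited, error-ridden draft can out-score
# a genuine revision. Assumes the third-party rouge-score and
# bert-score packages; all sentences are invented for illustration.
from rouge_score import rouge_scorer
from bert_score import score as bert_score

draft = "The experiments was conducted on two dataset with different settings."
revision = "We conducted the experiments on two datasets under different settings."
reference = "The experiments were conducted on two datasets with different settings."

scorer = rouge_scorer.RougeScorer(["rougeL"], use_stemmer=True)

for label, candidate in [("unedited draft", draft), ("genuine revision", revision)]:
    rouge_l = scorer.score(reference, candidate)["rougeL"].fmeasure
    _, _, f1 = bert_score([candidate], [reference], lang="en", verbose=False)
    print(f"{label}: ROUGE-L={rouge_l:.3f}  BERTScore-F1={f1.item():.3f}")
```

The near-identical draft typically out-scores the real revision on ROUGE-L, since n-gram overlap rewards surface similarity rather than correction quality.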

Léane Jourdan, Florian Boudin, Richard Dufour, Nicolas Hernandez • 2025

Related benchmarks

Task                     | Dataset                  | Result                | Rank
Scientific Text Revision | Scientific Text Revision | Pairwise Accuracy: 60 | 21
Nile Translation         | Nile                     | Pairwise Accuracy: 69 | 15
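
Pairwise accuracy is conventionally the fraction of candidate pairs on which a metric and the human annotators prefer the same revision. The sketch below illustrates that definition; the exact protocol these benchmarks use is an assumption, and the scores are invented:

```python
# Hedged sketch of pairwise accuracy as commonly used in metric
# meta-evaluation: the fraction of candidate pairs on which the
# metric's preference matches the human preference. The benchmarks'
# exact protocol may differ; scores below are invented.
from itertools import combinations

def pairwise_accuracy(metric_scores, human_scores):
    """Fraction of pairs ranked the same way by the metric and by humans.

    Ties (in either ranking) count as disagreements here."""
    pairs = list(combinations(range(len(metric_scores)), 2))
    agreements = sum(
        (metric_scores[i] - metric_scores[j]) * (human_scores[i] - human_scores[j]) > 0
        for i, j in pairs
    )
    return agreements / len(pairs)

# Toy example: four candidate revisions, metric scores vs. human ratings.
metric = [0.62, 0.48, 0.71, 0.55]
human = [3, 2, 5, 4]
print(f"pairwise accuracy = {pairwise_accuracy(metric, human):.2f}")  # -> 0.83
```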

Other info

Code
