
Automatic Analysis of Substantiation in Scientific Peer Reviews

About

With the increasing number of problematic peer reviews at top AI conferences, the community urgently needs automatic quality control measures. In this paper, we restrict our attention to substantiation -- a popular quality aspect indicating whether the claims in a review are sufficiently supported by evidence -- and provide a solution that automates this evaluation process. To achieve this goal, we first formulate the problem as claim-evidence pair extraction in scientific peer reviews and collect SubstanReview, the first annotated dataset for this task. SubstanReview consists of 550 reviews from NLP conferences annotated by domain experts. On the basis of this dataset, we train an argument mining system to automatically analyze the level of substantiation in peer reviews. We also perform data analysis on the SubstanReview dataset to obtain meaningful insights into peer reviewing quality at NLP conferences over recent years.

Yanzhu Guo, Guokan Shang, Virgile Rennard, Michalis Vazirgiannis, Chloé Clavel • 2023

Related benchmarks

Task                        Dataset                                 Result                     Rank
Human Correlation Analysis  RottenReview                            Spearman Correlation 0.25  10
Human Correlation Analysis  SubstanReview                           Spearman Correlation 0.7   9
Claim tagging               SubstanReview 110 reviews (evaluation)  Precision 52               2
Evidence linking            SubstanReview (test)                    EM 64.31                   2
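The human correlation rows above report Spearman's rank correlation between automatic scores and human judgments. A minimal stdlib sketch of that metric, with average ranks for ties; the sample score lists below are made up for illustration and are not the benchmark's data:

```python
def _avg_ranks(xs):
    # Rank values 1..n, averaging ranks over tied values.
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(xs):
        j = i
        while j + 1 < len(xs) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    # Spearman's rho = Pearson correlation of the rank vectors.
    rx, ry = _avg_ranks(x), _avg_ranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

auto_scores = [0.2, 0.5, 0.7, 0.9]  # hypothetical automatic substantiation scores
human_scores = [1, 3, 2, 4]         # hypothetical human quality ratings
print(round(spearman(auto_scores, human_scores), 2))  # 0.8
```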
