
PaperAudit-Bench: Benchmarking Error Detection in Research Papers for Critical Automated Peer Review

About

Large language models can generate fluent peer reviews, yet their assessments often lack sufficient critical rigor when substantive issues are subtle and distributed across a paper. In this paper, we introduce PaperAudit-Bench, which consists of two components: (1) PaperAudit-Dataset, an error dataset covering both errors identifiable within individual sections and those requiring cross-section reasoning, designed for controlled evaluation under long-context settings; and (2) PaperAudit-Review, an automated review framework that integrates structured error detection with evidence-aware review generation to support critical assessment. Experiments on PaperAudit-Bench reveal large variability in error detectability across models and detection depths, highlighting the difficulty of identifying such errors under long-context settings. Relative to representative automated reviewing baselines, incorporating explicit error detection into the review workflow produces systematically stricter and more discriminative evaluations, demonstrating its suitability for peer review. Finally, we show that the dataset supports training lightweight LLM detectors via SFT and RL, enabling effective error detection at reduced computational cost.
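For concreteness, the sketch below illustrates the detect-then-review flow the abstract describes: scan sections individually, then in pairs for cross-section inconsistencies, and condition review generation on the grounded findings. Every name here (ErrorFinding, detect_errors, llm.find_errors, llm.write_review) is a hypothetical illustration of the idea, not the paper's actual interface.

```python
# Minimal sketch of a two-stage detect-then-review pipeline.
# All interfaces are hypothetical; the paper's actual prompts,
# schemas, and APIs are not specified here.
from dataclasses import dataclass

@dataclass
class ErrorFinding:
    section: str         # section where the error surfaces, e.g. "4.2 Results"
    scope: str           # "within-section" or "cross-section"
    evidence: list[str]  # verbatim spans supporting the finding
    description: str     # what is wrong and why

def detect_errors(paper_sections: dict[str, str], llm) -> list[ErrorFinding]:
    """Stage 1: structured error detection. Scan each section alone,
    then section pairs, so errors requiring cross-section reasoning
    (e.g. an intro claim contradicted by a results table) are reachable."""
    findings: list[ErrorFinding] = []
    for name, text in paper_sections.items():
        findings += llm.find_errors(context=text, scope="within-section")
    names = list(paper_sections)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            findings += llm.find_errors(
                context=paper_sections[a] + "\n\n" + paper_sections[b],
                scope="cross-section")
    return findings

def generate_review(paper_sections: dict[str, str],
                    findings: list[ErrorFinding], llm) -> str:
    """Stage 2: evidence-aware review generation, conditioned on Stage 1
    so the review cites concrete evidence instead of generic praise."""
    return llm.write_review(paper=paper_sections, grounded_findings=findings)
```

Conditioning the review on explicit, evidence-backed findings is what makes the resulting assessments stricter and more discriminative than free-form review generation.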

Songjun Tu, Yiwen Ma, Jiahao Lin, Qichao Zhang, Xiangyuan Lan, Junfeng Li, Nan Xu, Linjing Li, Dongbin Zhao • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
AI Peer Review | PaperAudit-Dataset (ICML branch) | Novelty | 8.51 | 18
Error detection | PaperAudit-Dataset (ICML branch, 1.0) | EC (Detection@1) | 27.3 | 13
Coverage-based Alignment | ICLR 2026 (50 submissions) | Str-Cov | 88.6 | 3
Score-based Alignment | ICLR 2026 (50 submissions) | R-MSE | 0.148 | 3
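The paper's exact metric definitions are not reproduced on this page; the sketches below show one plausible reading of two entries from the table. Detection@1 is read here as the fraction of papers whose top-ranked predicted error matches a ground-truth error, and R-MSE as the root mean squared error between model-assigned and reference review scores. Both function names and signatures are assumptions for illustration.

```python
# Hedged sketches of two leaderboard metrics; definitions assumed, not official.
import math

def detection_at_1(top1_predictions: list[str],
                   gold_error_ids: list[set[str]]) -> float:
    """Detection@1 under one plausible reading: percentage of papers whose
    top-ranked predicted error id appears in that paper's gold error set."""
    hits = sum(pred in gold for pred, gold in zip(top1_predictions, gold_error_ids))
    return 100.0 * hits / len(top1_predictions)

def r_mse(pred_scores: list[float], human_scores: list[float]) -> float:
    """Root mean squared error between predicted and reference review
    scores (lower is better)."""
    return math.sqrt(sum((p - h) ** 2 for p, h in zip(pred_scores, human_scores))
                     / len(pred_scores))
```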
