
CycleResearcher: Improving Automated Research via Automated Review

About

The automation of scientific discovery has been a long-standing goal of the research community, driven by its potential to accelerate knowledge creation. While significant progress has been made using commercial large language models (LLMs) as research assistants or idea generators, the possibility of automating the entire research process with open-source LLMs remains largely unexplored. This paper explores the feasibility of using open-source, post-trained LLMs as autonomous agents capable of performing the full cycle of automated research and review, from literature review and manuscript preparation to peer review and paper refinement. Our iterative preference training framework consists of CycleResearcher, which conducts research tasks, and CycleReviewer, which simulates the peer review process and provides iterative feedback via reinforcement learning. To train these models, we develop two new datasets, Review-5k and Research-14k, reflecting real-world machine learning research and peer review dynamics. Our results show that CycleReviewer achieves a 26.89% reduction in mean absolute error (MAE) compared to individual human reviewers when predicting paper scores, indicating the potential of LLMs to effectively assist expert-level research evaluation. On the research side, papers generated by CycleResearcher achieved an average score of 5.36 in simulated peer review, surpassing the preprint-level average of 5.24 from human experts while still trailing the accepted-paper level of 5.69. This work represents a significant step toward fully automated scientific inquiry, providing ethical safeguards and exploring AI-driven research capabilities. The code, datasets, and model weights are released at https://wengsyx.github.io/Researcher/.
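As a rough illustration of the generate–review–refine cycle the abstract describes, the sketch below mocks the loop between a researcher model and a reviewer model. All functions here (`draft`, `refine`, `review`) are hypothetical stand-ins, not the released CycleResearcher/CycleReviewer models — the real system uses post-trained open-source LLMs with iterative preference (RL) training; the scoring heuristic and the 5.36 target score (taken from the abstract) are for demonstration only.

```python
# Hedged sketch of the CycleResearcher / CycleReviewer loop.
# These are toy stand-ins, not the authors' implementation.

def draft(topic: str) -> str:
    """Stand-in for CycleResearcher's initial manuscript generation."""
    return f"paper on {topic}"

def refine(paper: str, feedback: str) -> str:
    """Stand-in for CycleResearcher revising a draft given review feedback."""
    return paper + f" [revised: {feedback}]"

def review(paper: str) -> tuple[float, str]:
    """Stand-in for CycleReviewer: returns a simulated score and feedback.

    Toy heuristic: each revision nudges the score up by 0.2.
    """
    score = 5.0 + 0.2 * paper.count("[revised")
    return score, "tighten the experiments section"

def research_cycle(topic: str, target: float = 5.36,
                   max_rounds: int = 5) -> tuple[str, float]:
    """Iterate draft -> review -> refine until the score clears the target."""
    paper = draft(topic)
    score = 0.0
    for _ in range(max_rounds):
        score, feedback = review(paper)
        if score >= target:
            break
        paper = refine(paper, feedback)
    return paper, score

paper, score = research_cycle("automated peer review")
print(round(score, 2))  # -> 5.4 after two simulated revision rounds
```

In the actual framework the reviewer's scores serve as a reward signal for preference training of the researcher model, rather than a simple accept/reject threshold as in this loop.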

Yixuan Weng, Minjun Zhu, Guangsheng Bao, Hongbo Zhang, Jindong Wang, Yue Zhang, Linyi Yang• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Automated Peer Review Evaluation | DeepReview-13K 1.0 (test) | H-Max Technical Accuracy | 2.27 | 30 |
| Paper Acceptance Decision | ICLR 2025 (test) | Accuracy | 66.09 | 15 |
| Paper Quality Evaluation | ICLR 2025 (test) | Jaccard Index | 16.99 | 15 |
| Automated Peer Review | DeepReview-13K 2025 (test) | Technical Accuracy Win | 98.7 | 14 |
| Automated Peer Review | DeepReview-13K (test) | Technical Accuracy Win (%) | 0.00e+0 | 10 |
| Peer Review Evaluation | ICLR Papers 2023 | Accuracy | 61.23 | 8 |
| Peer Review Evaluation | ICLR Papers 2025 | Accuracy | 63.05 | 8 |
| Peer Review Evaluation | ICLR Papers 2024 | Accuracy | 57.73 | 8 |
| Scientific Paper Generation | AI-generated public papers, Max Rating Paper | Soundness | 2.75 | 6 |
| AI Research Paper Generation Evaluation | Public papers (Overall) | Soundness Score | 2.25 | 6 |

Showing 10 of 11 rows.
