Paper2Code: Automating Code Generation from Scientific Papers in Machine Learning
About
Despite the rapid growth of machine learning research, corresponding code implementations are often unavailable, making it slow and labor-intensive for researchers to reproduce results and build upon prior work. Meanwhile, recent Large Language Models (LLMs) excel at understanding scientific documents and generating high-quality code. Inspired by this, we introduce PaperCoder, a multi-agent LLM framework that transforms machine learning papers into functional code repositories. PaperCoder operates in three stages: planning, where it constructs a high-level roadmap, designs the system architecture with diagrams, identifies file dependencies, and generates configuration files; analysis, which interprets implementation-specific details; and generation, where modular, dependency-aware code is produced. Each stage is instantiated by a set of specialized agents designed to collaborate effectively across the pipeline. We evaluate PaperCoder on generating code implementations from machine learning papers using both model-based and human evaluations, the latter conducted in particular by the authors of those papers, with author-released repositories as ground truth where available. Our results demonstrate the effectiveness of PaperCoder in creating high-quality, faithful implementations. It also performs consistently well on the recently released PaperBench benchmark, surpassing strong baselines by substantial margins. Code is available at: https://github.com/going-doer/Paper2Code.
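The three-stage pipeline above can be sketched as follows. This is a minimal illustrative outline, not PaperCoder's actual implementation: the function and field names (`planning`, `analysis`, `generation`, `file_order`, etc.) are hypothetical, and the agent calls are stubbed where a real system would invoke an LLM.

```python
from dataclasses import dataclass


@dataclass
class Plan:
    """Output of the planning stage (illustrative fields)."""
    roadmap: list            # high-level implementation roadmap
    architecture: str        # system architecture / diagram description
    file_order: list         # files in dependency-aware generation order
    config: dict             # generated configuration values


def planning(paper_text: str) -> Plan:
    # A real planning agent would prompt an LLM over the paper text;
    # here we stub plausible outputs to show the data flow.
    return Plan(
        roadmap=["overview", "components", "experiments"],
        architecture="class diagram of modules and their interactions",
        file_order=["config.yaml", "model.py", "train.py"],
        config={"learning_rate": 1e-3},
    )


def analysis(paper_text: str, plan: Plan) -> dict:
    # The analysis stage extracts implementation-specific details
    # for each planned file (stubbed as placeholder notes).
    return {f: f"implementation notes for {f}" for f in plan.file_order}


def generation(plan: Plan, notes: dict) -> dict:
    # Generate files in dependency order, so each file can build on
    # the files generated before it.
    repo = {}
    for filename in plan.file_order:
        repo[filename] = f"# {filename}\n# {notes[filename]}\n"
    return repo


def paper_to_repo(paper_text: str) -> dict:
    """Compose the three stages: planning -> analysis -> generation."""
    plan = planning(paper_text)
    notes = analysis(paper_text, plan)
    return generation(plan, notes)
```

The key design point the sketch captures is that each stage's output feeds the next, and generation respects the dependency order the planner established.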
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Paper-to-code synthesis | NERFIFY-BENCH 10 NeRF Papers 1.0 | C Score | 83 | 50 |
| Repository-level Code Generation | RepoCraft | Pass Rate | 6 | 14 |
| Paper-to-Code Reproduction | PaperBench Code (dev) | Final Score | 45.1 | 9 |
| Experimental reproduction | REPRODUCEBENCH | Align-Score (Paper-Level) | 90.41 | 7 |
| General Recommendation | GeneralRec | Performance Gap | 32.11 | 6 |
| Graph Structure Learning | GSL | Performance Gap | 70.33 | 6 |
| Long-term time-series forecasting | LongTerm | Performance Gap | 37.66 | 6 |
| Noisy Graph Learning | NoisyGL | Performance Gap | 35.44 | 6 |
| Paper-to-Code Reproduction | PaperBench Code ICML 2024 (dev) | Average Score | 0.682 | 6 |
| Sequential Recommendation | SeqRec | Performance Gap | 65.88 | 6 |