# AutoReproduce: Automatic AI Experiment Reproduction with Paper Lineage

## About
Efficient reproduction of research papers is pivotal to accelerating scientific progress. However, the increasing complexity of proposed methods often renders reproduction a labor-intensive endeavor that demands deep domain expertise. To address this, we introduce paper lineage, an algorithm that systematically mines implicit knowledge from the literature a paper cites. Paper lineage serves as the backbone of AutoReproduce, a multi-agent framework that autonomously reproduces experimental code in a complete, end-to-end manner. To ensure the generated code is executable, AutoReproduce incorporates a sampling-based unit testing strategy for rapid validation. To assess reproduction capability, we also introduce ReproduceBench, a benchmark of verified implementations with comprehensive metrics for evaluating both reproduction and execution fidelity. Extensive evaluations on PaperBench and ReproduceBench show that AutoReproduce consistently surpasses existing baselines across all metrics, with substantial improvements in reproduction fidelity and final execution performance.
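The sampling-based unit testing idea — validating that generated experiment code runs end-to-end before committing to a full training run — can be sketched as follows. This is a minimal illustration, not the actual AutoReproduce implementation; the `SMOKE_TEST` environment-variable convention, function name, and timeout value are all assumptions.

```python
import os
import subprocess
import sys
import tempfile


def sample_unit_test(script_source: str, timeout_s: float = 60.0) -> bool:
    """Quickly check that a generated experiment script executes without error.

    Hypothetical sketch: rather than running the full experiment, the script
    is launched with a flag (here an assumed SMOKE_TEST env var) that the
    generated code can use to subsample its data to a few batches, so
    execution errors surface within seconds instead of hours.
    """
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(script_source)
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path],
            # Assumed convention: the generated script reads SMOKE_TEST and
            # runs on a tiny data sample when it is set.
            env={**os.environ, "SMOKE_TEST": "1"},
            capture_output=True,
            text=True,
            timeout=timeout_s,
        )
        return result.returncode == 0
    except subprocess.TimeoutExpired:
        return False
    finally:
        os.unlink(path)
```

A passing script (exit code 0 within the timeout) is accepted; a crash, non-zero exit, or timeout rejects the candidate code and sends it back for repair.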
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Paper-to-Code Reproduction | PaperBench Code (dev) | Final Score | 49.6 | 9 |
| Experimental Reproduction | ReproduceBench | Align-Score (Paper-Level) | 91.57 | 7 |
| General Graph Learning | GeneralGL | Performance Gap | 33.45 | 6 |
| General Recommendation | GeneralRec | Performance Gap | 28.34 | 6 |
| Graph Structure Learning | GSL | Performance Gap | 39.78 | 6 |
| Long-term Time-Series Forecasting | LongTerm | Performance Gap | 31.87 | 6 |
| Multimodal Recommendation | MMRec | Performance Gap | 36.42 | 6 |
| Noisy Graph Learning | NoisyGL | Performance Gap | 28.56 | 6 |
| Sequential Recommendation | SeqRec | Performance Gap | 34.76 | 6 |
| Short-term Time-Series Forecasting | ShortTerm | Performance Gap | 35.67 | 6 |