Open-Reasoner-Zero: An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model
About
We introduce Open-Reasoner-Zero, the first open-source implementation of large-scale reasoning-oriented RL training on the base model, focused on scalability, simplicity, and accessibility. Through extensive experiments, we demonstrate that a minimalist approach, vanilla PPO with GAE ($\lambda=1$, $\gamma=1$) and straightforward rule-based rewards, without any KL regularization, is sufficient to scale up both benchmark performance and response length, replicating the scaling phenomenon observed in DeepSeek-R1-Zero. Using the same base model as DeepSeek-R1-Zero-Qwen-32B, Qwen2.5-32B base, our implementation achieves superior performance on AIME2024, MATH500, and GPQA Diamond while demonstrating remarkable efficiency, requiring only 1/10 of the training steps of the DeepSeek-R1-Zero pipeline. Moreover, our analysis not only covers training dynamics and ablations of critical design choices, but also quantitatively shows how the learned critic in Reasoner-Zero training effectively identifies and devalues repetitive response patterns, yielding more robust advantage estimates and enhancing training stability. Embracing open-source principles, we release our source code, training data, and various model weights, fostering reproducibility and encouraging further exploration of the properties of related models.
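The GAE setting above has a simple interpretation: with $\gamma=\lambda=1$, the advantage reduces to the undiscounted return-to-go minus the critic's value baseline. The sketch below illustrates this on a single trajectory with a sparse rule-based reward (1 at the final token if the answer matches). This is a minimal illustration under those assumptions, not the project's actual training code; the function name and toy values are hypothetical.

```python
import numpy as np

def gae_advantages(rewards, values, gamma=1.0, lam=1.0):
    """Generalized Advantage Estimation over one trajectory.

    With gamma = lam = 1 (the setting described above), the estimate
    reduces to A_t = (sum of future rewards) - V(s_t): Monte Carlo
    return minus the critic's value baseline.
    """
    T = len(rewards)
    advantages = np.zeros(T)
    last_adv = 0.0
    next_value = 0.0  # value after the terminal step is 0
    for t in reversed(range(T)):
        # TD residual: delta_t = r_t + gamma * V(s_{t+1}) - V(s_t)
        delta = rewards[t] + gamma * next_value - values[t]
        # GAE recursion: A_t = delta_t + gamma * lam * A_{t+1}
        last_adv = delta + gamma * lam * last_adv
        advantages[t] = last_adv
        next_value = values[t]
    returns = advantages + values  # targets for the critic
    return advantages, returns

# Sparse rule-based reward: 1 at the last step for a correct final answer.
rewards = np.array([0.0, 0.0, 1.0])
values = np.array([0.2, 0.4, 0.6])   # hypothetical critic outputs
adv, ret = gae_advantages(rewards, values)
# With gamma = lam = 1, ret is the undiscounted return-to-go [1, 1, 1]
# and adv is simply ret - values.
```

Because no discounting or bias-variance trade-off is applied, the critic's only job is to provide a good baseline, which is consistent with the paper's observation that the critic helps devalue repetitive response patterns.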
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy | 54.2 | 221 |
| Mathematical Reasoning | AIME 2024 | Pass@1 Accuracy | 13.3 | 165 |
| Mathematical Reasoning | AIME 24 | Accuracy | 16.5 | 154 |
| Mathematical Reasoning | AIME 2024 | Accuracy | 16.5 | 151 |
| Mathematical Reasoning | MATH 500 | Accuracy | 82.4 | 149 |
| Mathematical Reasoning | Minerva | -- | -- | 138 |
| Reasoning | GPQA Diamond | Accuracy | 29.3 | 135 |
| Mathematical Reasoning | AMC | Accuracy (%) | 52.1 | 134 |
| General Reasoning | MMLU-Pro | Accuracy | 48.9 | 114 |
| Mathematical Reasoning | AIME 24 | Accuracy | 13.3 | 113 |