
Understanding R1-Zero-Like Training: A Critical Perspective

About

DeepSeek-R1-Zero has shown that reinforcement learning (RL) at scale can directly enhance the reasoning capabilities of LLMs without supervised fine-tuning. In this work, we critically examine R1-Zero-like training by analyzing its two core components: base models and RL. We investigate a wide range of base models, including DeepSeek-V3-Base, to understand how pretraining characteristics influence RL performance. Our analysis reveals that DeepSeek-V3-Base already exhibits an "Aha moment", while Qwen2.5 base models demonstrate strong reasoning capabilities even without prompt templates, suggesting potential pretraining biases. Additionally, we identify an optimization bias in Group Relative Policy Optimization (GRPO), which artificially increases response length (especially for incorrect outputs) during training. To address this, we introduce Dr. GRPO, an unbiased optimization method that improves token efficiency while maintaining reasoning performance. Leveraging these insights, we present a minimalist R1-Zero recipe that achieves 43.3% accuracy on AIME 2024 with a 7B base model, establishing a new state-of-the-art. Our code is available at https://github.com/sail-sg/understand-r1-zero.
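The optimization bias described above can be made concrete with a minimal sketch. The function names below are illustrative, not taken from the paper's released code, and rewards are assumed to be one scalar per sampled response: GRPO divides each group's centered rewards by their standard deviation and averages token losses over each response's own length, whereas the unbiased variant drops both normalizations, so a long incorrect response is no longer penalized less per token.

```python
import numpy as np

def grpo_advantages(rewards):
    """GRPO group-relative advantage: whiten rewards within the group.

    Dividing by the group std is one of the two biases the paper points
    at (it up-weights groups whose rewards barely differ)."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def dr_grpo_advantages(rewards):
    """Dr. GRPO-style advantage: center by the group mean only."""
    r = np.asarray(rewards, dtype=float)
    return r - r.mean()

def policy_loss(per_token_logps, advantages, length_normalize):
    """Aggregate a REINFORCE-style loss over a group of responses.

    per_token_logps: list of 1-D arrays, one per sampled response.
    With length_normalize=True (GRPO), each response's token losses are
    averaged over its own length, so a long incorrect response yields a
    smaller per-token penalty than a short one -- the length bias that
    inflates response length during training. Summing tokens instead
    (equivalently, dividing by a constant) removes that bias.
    """
    losses = []
    for logps, adv in zip(per_token_logps, advantages):
        token_loss = -adv * logps
        losses.append(token_loss.mean() if length_normalize else token_loss.sum())
    return float(np.mean(losses))
```

For example, two incorrect responses of 10 and 100 tokens with the same negative advantage contribute identical losses under per-length averaging, but the longer one contributes ten times more under token summing, which is what makes the unbiased objective stop rewarding verbosity on failures.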

Zichen Liu, Changyu Chen, Wenjun Li, Penghui Qi, Tianyu Pang, Chao Du, Wee Sun Lee, Min Lin • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | MATH500 (test) | Accuracy: 73 | 514 |
| Mathematical Reasoning | MATH 500 | -- | 442 |
| Mathematical Reasoning | MATH 500 | pass@1: 88.28 | 239 |
| Mathematical Reasoning | AMC | Accuracy: 74.7 | 221 |
| Mathematical Reasoning | AIME 2024 (test) | -- | 159 |
| Mathematical Reasoning | AIME 24 | Accuracy: 33.4 | 154 |
| Mathematical Reasoning | AIME 2024 | Accuracy: 33.4 | 151 |
| Mathematical Reasoning | MATH 500 | Accuracy (Acc): 78 | 149 |
| Mathematical Reasoning | Minerva | Pass@1: 49.19 | 138 |
| Mathematical Reasoning | AMC | Accuracy (%): 61.2 | 134 |

Showing 10 of 128 rows.
