
R-Zero: Self-Evolving Reasoning LLM from Zero Data

About

Self-evolving Large Language Models (LLMs) offer a scalable path toward super-intelligence by autonomously generating, refining, and learning from their own experiences. However, existing methods for training such models still rely heavily on large volumes of human-curated tasks and labels, typically via fine-tuning or reinforcement learning, which poses a fundamental bottleneck to advancing AI systems toward capabilities beyond human intelligence. To overcome this limitation, we introduce R-Zero, a fully autonomous framework that generates its own training data from scratch. Starting from a single base LLM, R-Zero initializes two independent models with distinct roles: a Challenger and a Solver. These models are optimized separately and co-evolve through interaction: the Challenger is rewarded for proposing tasks near the edge of the Solver's capability, and the Solver is rewarded for solving increasingly challenging tasks posed by the Challenger. This process yields a targeted, self-improving curriculum without any pre-existing tasks or labels. Empirically, R-Zero substantially improves reasoning capability across different backbone LLMs, e.g., boosting Qwen3-4B-Base by +6.49 on math-reasoning benchmarks and +7.54 on general-domain reasoning benchmarks.
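
The co-evolution loop is easiest to see in miniature. The sketch below is a toy simulation of the Challenger–Solver dynamic described in the abstract, not the authors' implementation: all class names, reward heuristics, and constants are illustrative assumptions, and actual R-Zero training optimizes two LLMs with reinforcement learning rather than scalar skill values.

```python
# A toy simulation of the R-Zero Challenger–Solver co-evolution loop.
# All names and reward heuristics here are illustrative assumptions;
# the paper trains two LLMs with reinforcement learning, not scalars.

import random

class Solver:
    """Toy solver whose scalar 'skill' stands in for an LLM's ability."""
    def __init__(self, skill: float = 0.2):
        self.skill = skill

    def attempt(self, difficulty: float) -> bool:
        # Success probability falls as difficulty exceeds current skill.
        return random.random() < max(0.0, 1.0 - (difficulty - self.skill))

    def update(self, difficulty: float, solved: bool) -> None:
        # Solver reward: solving a task nudges skill up, more for harder tasks.
        if solved:
            self.skill += 0.05 * difficulty

class Challenger:
    """Toy challenger that keeps tasks near the edge of the Solver's ability."""
    def __init__(self, target: float = 0.3):
        self.target = target

    def propose_task(self) -> float:
        # Sample a task difficulty around the current target.
        return self.target + random.uniform(-0.1, 0.1)

    def update(self, solve_rate: float) -> None:
        # Challenger reward peaks when the Solver succeeds about half the
        # time, i.e. when tasks sit at the frontier of its capability.
        if solve_rate > 0.5:
            self.target += 0.05   # too easy: push difficulty up
        else:
            self.target -= 0.02   # too hard: back off slightly

challenger, solver = Challenger(), Solver()
for step in range(20):
    tasks = [challenger.propose_task() for _ in range(32)]
    outcomes = [(d, solver.attempt(d)) for d in tasks]
    for d, solved in outcomes:
        solver.update(d, solved)
    solve_rate = sum(solved for _, solved in outcomes) / len(outcomes)
    challenger.update(solve_rate)
    print(f"step {step:2d}  solver skill={solver.skill:.2f}  "
          f"challenger target={challenger.target:.2f}")
```

Running the sketch shows the Challenger's target difficulty and the Solver's skill ratcheting upward together, which is the self-improving curriculum effect the abstract describes.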

Chengsong Huang, Wenhao Yu, Xiaoyang Wang, Hongming Zhang, Zongxia Li, Ruosen Li, Jiaxin Huang, Haitao Mi, Dong Yu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | GSM8K | -- | 351
Mathematical Reasoning | AMC | Accuracy: 62.8 | 151
Mathematical Reasoning | Minerva | -- | 138
Mathematical Reasoning | AMC | Pass@1: 61.7 | 112
Mathematical Reasoning | Olympiad | Accuracy: 43.4 | 92
Mathematical Reasoning | Mathematical Reasoning Suite (AMC, AIME 2024, AIME 2025, Minerva, MATH, Olympiad), various (test/val) | Average Score: 36.9 | 55
General Reasoning | MMLU-Pro | Avg@8 Accuracy: 61.6 | 51
Mathematical Reasoning | Mathematical Reasoning Benchmarks (GSM8K, MATH, AMC23, Olympiad, Minerva) (test) | GSM8K Accuracy: 92.4 | 32
Reasoning | GPQA D | Accuracy: 40.5 | 29
General Domain Reasoning | SuperGPQA, MMLU-Pro, BBEH | Overall Avg Score: 38.73 | 28
Showing 10 of 23 rows.
