Socratic-Zero: Bootstrapping Reasoning via Data-Free Agent Co-evolution

About

Recent breakthroughs in large language models (LLMs) on reasoning tasks rely heavily on massive, high-quality datasets, typically human-annotated and thus difficult to scale. While data synthesis or distillation offers a promising alternative, existing methods struggle with inconsistent data quality and an inability to dynamically adapt to the evolving capabilities of the model, leading to suboptimal training signals. To address these limitations, we introduce Socratic-Zero, a fully autonomous framework that generates high-quality training data from minimal seed examples through the co-evolution of three agents: the Teacher, the Solver, and the Generator. The Solver continuously refines its reasoning by learning from preference feedback on both successful and failed trajectories; the Teacher adaptively crafts increasingly challenging questions based on the Solver's weaknesses; and the Generator distills the Teacher's question-design strategy to enable scalable, high-fidelity curriculum generation. This closed-loop system produces a self-improving curriculum that requires no pre-existing tasks or labels. Remarkably, starting from only 100 seed questions, our Socratic-Solver-8B achieves an average gain of +20.2 percentage points over prior data synthesis methods across seven mathematical reasoning benchmarks (AMC23, AIME24, AIME25, Olympiad, MATH-500, Minerva, and GSM8K), with consistent gains on both Qwen3 and GLM4 series models. Even more surprisingly, synthetic data from Socratic-Generator-32B enables student LLMs to outperform other state-of-the-art (SOTA) commercial LLMs on these benchmarks, including Qwen3-235B-A22B, DeepSeek-V3.1-671B, GPT-5, Gemini-2.5-Pro, Grok-4, and Claude-4.1-Opus.

Shaobo Wang, Zhengbo Jiao, Zifan Zhang, Yilang Peng, Xu Ze, Boyu Yang, Wei Wang, Hu Wei, Linfeng Zhang • 2025
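The abstract describes a closed three-agent loop, which can be sketched as pseudocode. The sketch below is only a paraphrase of that description, not the authors' implementation: every class name, method signature, the DPO-style preference update, and the difficulty heuristic are illustrative assumptions.

```python
# Hypothetical sketch of the Socratic-Zero co-evolution loop.
import random
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    difficulty: float = 0.5  # assumed scalar stand-in for question hardness

class Solver:
    """Refines its reasoning from preference pairs over trajectories."""
    def attempt(self, q: Question) -> tuple[str, bool]:
        # Stand-in for sampling a reasoning trajectory and grading it.
        return f"trajectory for: {q.text}", random.random() > q.difficulty

    def update(self, preference_pairs: list[tuple[str, str]]) -> None:
        pass  # stand-in for a DPO-style step on (chosen, rejected) pairs

class Teacher:
    """Adaptively crafts harder questions targeting Solver weaknesses."""
    def propose(self, failures: list[Question]) -> list[Question]:
        return [Question(f"harder variant of: {q.text}",
                         min(q.difficulty + 0.1, 1.0)) for q in failures]

class Generator:
    """Distills the Teacher's question-design strategy for cheap scaling."""
    def distill(self, questions: list[Question]) -> None:
        pass  # stand-in for supervised distillation of the Teacher

def co_evolve(seed: list[Question], rounds: int = 10) -> list[Question]:
    solver, teacher, generator = Solver(), Teacher(), Generator()
    curriculum = list(seed)
    for _ in range(rounds):
        failures, pairs = [], []
        for q in curriculum:
            first, ok = solver.attempt(q)
            second, _ = solver.attempt(q)  # contrastive second sample
            if ok:
                pairs.append((first, second))  # (chosen, rejected)
            else:
                failures.append(q)
        solver.update(pairs)                # learn from preference feedback
        harder = teacher.propose(failures)  # probe observed weaknesses
        generator.distill(harder)           # absorb the design strategy
        curriculum = harder or curriculum   # evolve the curriculum
    return curriculum

curriculum = co_evolve([Question("2 + 2 = ?")], rounds=3)
```

The property the paper emphasizes is that this loop consumes only the initial seed questions (100 in their experiments); all subsequent tasks and training signals are produced and graded inside the loop itself.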

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
General Domain Reasoning | SuperGPQA, MMLU-Pro, BBEH | Overall Avg Score | 39.15 | 28
Mathematical Reasoning | AMO-Bench | Mean@64 Accuracy | 9.1 | 27
Mathematical Reasoning | AIME 2024 | Mean@64 Accuracy | 50.2 | 19
Mathematical Reasoning | AIME 2025 | Mean@64 Accuracy | 46.9 | 19
Mathematical Reasoning | HMMT February | Mean@64 Accuracy | 0.313 | 19
Mathematical Reasoning | Mathematical Reasoning Suite (AMC, Minerva, MATH, GSM8K, Olympiad, AIME25, AIME24) | Overall Average Score | 56.1 | 12
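The Mean@64 metric in the table is, presumably, accuracy averaged over 64 sampled completions per problem; the page does not define it, so the minimal sketch below (with the hypothetical name mean_at_k) rests on that assumption.

```python
def mean_at_k(per_problem_flags: list[list[bool]]) -> float:
    """Assumed Mean@k: for each problem, the fraction of its k sampled
    answers that are correct, averaged over all problems."""
    return sum(sum(f) / len(f) for f in per_problem_flags) / len(per_problem_flags)

# e.g. two problems, k = 4 samples each: (3/4 + 1/4) / 2 = 0.5
print(mean_at_k([[True, True, False, True], [False, False, True, False]]))
```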
