
SPICE: Self-Play In Corpus Environments Improves Reasoning

About

Self-improving systems require environmental interaction for continuous adaptation. We introduce SPICE (Self-Play In Corpus Environments), a reinforcement learning framework in which a single model acts in two roles: a Challenger that mines documents from a large corpus to generate diverse reasoning tasks, and a Reasoner that solves them. Through adversarial dynamics, the Challenger creates an automatic curriculum at the frontier of the Reasoner's capability, while corpus grounding provides the rich, near-inexhaustible external signal necessary for sustained improvement. Unlike existing ungrounded self-play methods, which offer more limited benefits, SPICE achieves consistent gains across mathematical (+8.9%) and general reasoning (+9.8%) benchmarks on multiple model families. Our analysis reveals that document grounding is a key ingredient that lets SPICE continuously generate, and then achieve, its own increasingly challenging goals, enabling sustained self-improvement.

Bo Liu, Chuanyang Jin, Seungone Kim, Weizhe Yuan, Wenting Zhao, Ilia Kulikov, Xian Li, Sainbayar Sukhbaatar, Jack Lanchantin, Jason Weston • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy 70 | 151 |
| Mathematical Reasoning | Minerva | -- | 138 |
| Mathematical Reasoning | Olympiad | Accuracy 42.7 | 92 |
| General Reasoning | MMLU-Pro | Avg@8 Accuracy 65 | 51 |
| Mathematical Reasoning | Mathematical Reasoning Benchmarks (GSM8K, MATH, AMC23, Olympiad, Minerva) (test) | GSM8K Accuracy 93.8 | 32 |
| Reasoning | GPQA D | Accuracy 39.4 | 29 |
| Reasoning | Reasoning Benchmark Suite Aggregate | Average Score 55.4 | 26 |
| General Reasoning | BBEH | Accuracy 14.9 | 19 |
| General Reasoning | General Reasoning Suite (MMLU Pro, Super GPQA, GPQA Diamond, BBEH) | MMLU Pro 61 | 19 |
| General Reasoning | Super GPQA | -- | 16 |
