SPICE: Self-Play In Corpus Environments Improves Reasoning
About
Self-improving systems require environmental interaction for continuous adaptation. We introduce SPICE (Self-Play In Corpus Environments), a reinforcement learning framework in which a single model acts in two roles: a Challenger that mines documents from a large corpus to generate diverse reasoning tasks, and a Reasoner that solves them. Through adversarial dynamics, the Challenger creates an automatic curriculum at the frontier of the Reasoner's capability, while corpus grounding provides the rich, near-inexhaustible external signal necessary for sustained improvement. Unlike existing ungrounded self-play methods, which offer only limited benefits, SPICE achieves consistent gains across mathematical (+8.9%) and general reasoning (+9.8%) benchmarks on multiple model families. Our analysis reveals that document grounding is the key ingredient enabling SPICE to continuously generate increasingly challenging goals and achieve them, sustaining self-improvement.
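The Challenger/Reasoner loop described above can be sketched in miniature. This is an illustrative toy, not the paper's implementation: `model`, `verify`, and the tiny corpus are stand-in stubs, and the zero-sum reward shown here is a simplification of the paper's frontier-targeting Challenger reward.

```python
import random

# Hypothetical two-document "corpus" standing in for the large corpus
# that the Challenger mines in SPICE.
CORPUS = [
    "The sum of the first n odd numbers is n^2.",
    "A prime number has exactly two positive divisors.",
]

def challenger(model, corpus):
    """Challenger role: mine a document and pose a task grounded in it."""
    doc = random.choice(corpus)
    task = model("Pose a reasoning question grounded in: " + doc)
    return doc, task

def reasoner(model, task):
    """Reasoner role: the same model attempts to solve the task."""
    return model("Solve: " + task)

def spice_round(model, corpus, verify):
    """One self-play round: generate a task, solve it, assign rewards.

    Simplified zero-sum rewards for illustration; the actual Challenger
    objective rewards tasks at the frontier of the Reasoner's ability
    (hard but solvable), producing an automatic curriculum.
    """
    doc, task = challenger(model, corpus)
    answer = reasoner(model, task)
    solved = verify(doc, task, answer)
    reasoner_reward = 1.0 if solved else 0.0
    challenger_reward = 1.0 - reasoner_reward
    return reasoner_reward, challenger_reward
```

In the real framework both roles are played by a single LLM updated with reinforcement learning; grounding each task in a mined document is what supplies the external signal that ungrounded self-play lacks.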
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | AMC | Accuracy: 70 | 151 |
| Mathematical Reasoning | Minerva | -- | 138 |
| Mathematical Reasoning | Olympiad | Accuracy: 42.7 | 92 |
| General Reasoning | MMLU-Pro | Avg@8 Accuracy: 65 | 51 |
| Mathematical Reasoning | Mathematical Reasoning Benchmarks (GSM8K, MATH, AMC23, Olympiad, Minerva) (test) | GSM8K Accuracy: 93.8 | 32 |
| Reasoning | GPQA Diamond | Accuracy: 39.4 | 29 |
| Reasoning | Reasoning Benchmark Suite Aggregate | Average Score: 55.4 | 26 |
| General Reasoning | BBEH | Accuracy: 14.9 | 19 |
| General Reasoning | General Reasoning Suite (MMLU-Pro, SuperGPQA, GPQA Diamond, BBEH) | MMLU-Pro: 61 | 19 |
| General Reasoning | SuperGPQA | -- | 16 |