
Training Reasoning Models on Saturated Problems via Failure-Prefix Conditioning

About

Reinforcement Learning with Verifiable Rewards (RLVR) has substantially improved the reasoning abilities of large language models (LLMs), yet training often stalls as problems become saturated. We identify the core challenge as the poor accessibility of informative failures: learning signals exist but are rarely encountered during standard rollouts. To address this, we propose failure-prefix conditioning, a simple and effective method for learning from saturated problems. Rather than starting from the original question, our approach reallocates exploration by conditioning training on prefixes derived from rare incorrect reasoning trajectories, thereby exposing the model to failure-prone states. We observe that failure-prefix conditioning yields performance gains matching those of training on medium-difficulty problems, while preserving token efficiency. Furthermore, we analyze the model's robustness, finding that our method reduces performance degradation under misleading failure prefixes, albeit with a mild trade-off in adherence to correct early reasoning. Finally, we demonstrate that an iterative approach, which refreshes failure prefixes during training, unlocks additional gains after performance plateaus. Overall, our results suggest that failure-prefix conditioning offers an effective pathway to extend RLVR training on saturated problems.
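As a rough illustration of the data-preparation step the abstract describes, the Python sketch below samples rollouts on a saturated problem, keeps prefixes of the rare incorrect trajectories, and prepends them to the question so that subsequent rollouts start from failure-prone states. The `generate` and `verify` callables, the rollout count, and the fixed 50% prefix cut are illustrative assumptions, not the paper's exact procedure.

```python
from typing import Callable, List

def collect_failure_prefixes(
    generate: Callable[[str], str],  # rollout sampler: question -> reasoning trajectory (assumed interface)
    verify: Callable[[str], bool],   # verifiable reward: does the trajectory reach the correct answer?
    question: str,
    n_rollouts: int = 64,            # illustrative rollout budget
    prefix_frac: float = 0.5,        # illustrative truncation point
) -> List[str]:
    """Sample many rollouts on a saturated problem and keep truncated
    prefixes of the rare incorrect trajectories."""
    prefixes: List[str] = []
    for _ in range(n_rollouts):
        trajectory = generate(question)
        if not verify(trajectory):                   # a rare, informative failure
            tokens = trajectory.split()
            cut = max(1, int(len(tokens) * prefix_frac))
            prefixes.append(" ".join(tokens[:cut]))  # failure-prone state
    return prefixes

def conditioned_prompts(question: str, prefixes: List[str]) -> List[str]:
    """Build RLVR training prompts that resume from failure-prone states
    rather than from the original question."""
    return [f"{question}\n{p}" for p in prefixes]
```

In the iterative variant the abstract mentions, one would periodically re-run the prefix collection during training so that the conditioning prompts track the model's current failure modes.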

Minwu Kim, Safal Shrestha, Keith Ross • 2026

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | AIME24 | Accuracy: 33 | 130
Mathematical Reasoning | HMMT25 | Accuracy: 16.2 | 78
Mathematical Reasoning | AMC12 | Accuracy: 56.3 | 12
Mathematical Reasoning | Math reasoning benchmarks (MATH500, AMC12, AIME24, AIME25, HMMT25) (test) | MATH500 Score: 86 | 6

Other info

GitHub
