
CoScale-RL: Efficient Post-Training by Co-Scaling Data and Computation

About

Training Large Reasoning Models (LRMs) is often unstable and unpredictable, especially on hard problems or with weak foundation models. We find that the current post-training scaling strategy can still be improved in these cases. We propose CoScale-RL, a novel scaling strategy with better data and computational efficiency. We first scale up solutions to make problems solvable: the core idea is to collect multiple solutions for each problem rather than simply enlarging the dataset. We then scale up rollout computation to stabilize reinforcement learning. We further leverage a model-merging technique called Re-distillation to sustain, or even improve, computational efficiency when scaling up. Our method significantly improves data and computational efficiency, with an average 3.76$\times$ accuracy improvement across four benchmarks. CoScale-RL improves an LRM's capability boundary without an extensive SFT dataset, providing a new scaling direction for further improving LRMs' reasoning ability.
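
To make the two scaling axes concrete, here is a minimal Python sketch of the idea described above. It is not the authors' implementation: `sample_solution`, `scale_solutions`, and `RLConfig` are hypothetical names, the correctness check stands in for a real verifier, and the rollout counts are illustrative. Re-distillation is omitted since its details are not given in the abstract.

```python
import random
from dataclasses import dataclass

# Hypothetical stand-in for sampling one solution from an LRM; in practice
# this would call the model's generate() with temperature > 0 and run a
# verifier on the output.
def sample_solution(problem: str, rng: random.Random) -> tuple[str, bool]:
    """Return (solution_text, is_correct) for one sampled attempt."""
    return f"solution draft for: {problem}", rng.random() < 0.3

def scale_solutions(problems: list[str], k: int, seed: int = 0) -> dict[str, list[str]]:
    """Data scaling: collect up to k verified solutions per problem,
    instead of adding more problems to the dataset."""
    rng = random.Random(seed)
    dataset: dict[str, list[str]] = {}
    for p in problems:
        kept = []
        for _ in range(k):
            sol, ok = sample_solution(p, rng)
            if ok:  # keep only solutions that pass verification
                kept.append(sol)
        dataset[p] = kept
    return dataset

@dataclass
class RLConfig:
    # Compute scaling: raising rollouts per prompt reduces gradient
    # variance, which is one way to stabilize RL on hard problems.
    rollouts_per_prompt: int = 64  # scaled up from a typical 8-16
    batch_prompts: int = 32
    lr: float = 1e-6

if __name__ == "__main__":
    data = scale_solutions(["prove x^2 >= 0", "sum of first n odd numbers"], k=8)
    print({p: len(sols) for p, sols in data.items()})
    print(RLConfig())
```

The sketch separates the two knobs deliberately: the SFT stage scales the number of solutions per problem, while the RL stage scales rollout computation per prompt.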

Yutong Chen, Jiandong Gao, Ji Wu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Math Reasoning | AMC | Accuracy | 7.6 | 70 |
| Math Reasoning | MATH 500 | Accuracy | 40.7 | 38 |
| Math Reasoning | OpenMathReasoning | Accuracy | 14.2 | 10 |
| Math Reasoning | Olympiad Math | Accuracy | 1.26 | 10 |
| General Reasoning | Reasoning GYM | Accuracy | 6.8 | 10 |
