ReMiT: RL-Guided Mid-Training for Iterative LLM Evolution
About
Standard training pipelines for large language models (LLMs) are typically unidirectional, progressing from pre-training to post-training. However, the potential for a bidirectional process--where insights from post-training retroactively improve the pre-trained foundation--remains unexplored. We aim to establish a self-reinforcing flywheel: a cycle in which a reinforcement learning (RL)-tuned model strengthens the base model, which in turn enhances subsequent post-training performance, requiring no specially trained teacher or reference model. To realize this, we analyze training dynamics and identify the mid-training (annealing) phase as a critical turning point for model capabilities. This phase typically occurs at the end of pre-training, using high-quality corpora under a rapidly decaying learning rate. Building on this insight, we introduce ReMiT (Reinforcement Learning-Guided Mid-Training). Specifically, ReMiT leverages the reasoning priors of RL-tuned models to dynamically reweight tokens during the mid-training phase, prioritizing those pivotal for reasoning. Empirically, ReMiT achieves an average improvement of 3% on 10 pre-training benchmarks spanning math, code, and general reasoning, and sustains gains of over 2% throughout the post-training pipeline. These results validate an iterative feedback loop, enabling continuous and self-reinforcing evolution of LLMs.
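The core mechanism--using an RL-tuned model's priors to reweight tokens in the mid-training loss--can be sketched as follows. This is an illustrative toy implementation, not the paper's actual method: the function names (`remit_token_weights`, `weighted_nll`), the log-probability-difference score, and the softmax normalization are all assumptions made for exposition.

```python
import math

def remit_token_weights(base_logprobs, rl_logprobs, temperature=1.0):
    """Hypothetical ReMiT-style token weights: tokens that the RL-tuned
    model assigns higher probability than the base model (i.e., tokens
    pivotal for reasoning under the RL prior) are up-weighted."""
    # Preference score per token: how much more the RL-tuned model
    # likes this token than the base model does.
    scores = [(rl - base) / temperature
              for base, rl in zip(base_logprobs, rl_logprobs)]
    # Softmax over the sequence (numerically stabilized), then scale
    # so the weights average to 1, keeping the loss magnitude stable.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    n = len(scores)
    return [n * e / z for e in exps]

def weighted_nll(token_nlls, weights):
    """Mid-training objective: per-token negative log-likelihoods of the
    base model, reweighted by the RL-derived token weights."""
    return sum(w * l for w, l in zip(weights, token_nlls)) / len(token_nlls)
```

Tokens where the RL-tuned model disagrees most strongly (in its favor) dominate the gradient, while the mean-1 normalization keeps the effective learning rate comparable to unweighted mid-training.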
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Code Generation | HumanEval | -- | 850 |
| Mathematical Reasoning | MATH | Accuracy: 31.68 | 535 |
| Instruction Following | IFEval | -- | 292 |
| Science Question Answering | ARC Challenge | Accuracy: 54.69 | 234 |
| Graduate-level Question Answering | GPQA | Accuracy: 29.69 | 114 |
| Code Generation | MBPP | Accuracy: 49.6 | 90 |
| General Knowledge | MMLU-Pro | Score: 30.73 | 38 |
| Common Sense Reasoning | BBH | Accuracy: 58.27 | 27 |
| Aggregated Performance | Average (10 tasks) | Average accuracy: 42.97 | 19 |
| Factuality | TruthfulQA | Accuracy: 31.95 | 18 |