
Act Only When It Pays: Efficient Reinforcement Learning for LLM Reasoning via Selective Rollouts

About

Reinforcement learning algorithms, such as PPO and GRPO, have powered recent breakthroughs in LLM reasoning. Scaling rollouts to sample more prompts enables models to selectively use higher-quality data for training, which can stabilize RL training and improve model performance. However, this comes at the cost of significant computational overhead. In this paper, we show that a substantial portion of this overhead can be avoided by skipping uninformative prompts before rollout. Our analysis of reward dynamics reveals a strong temporal consistency in prompt value: prompts that are uninformative in one epoch of training are likely to remain uninformative in future epochs. Based on these insights, we propose GRESO (GRPO with Efficient Selective Rollout), an online, lightweight pre-rollout filtering algorithm that predicts and skips uninformative prompts using reward training dynamics. By evaluating GRESO on a broad range of math reasoning benchmarks and models, such as Qwen2.5-Math-1.5B, DeepSeek-R1-Distill-Qwen-1.5B, and Qwen2.5-Math-7B, we show that GRESO achieves up to 2.4x wall-clock time speedup in rollout and up to 2.0x speedup in total training time without accuracy degradation.
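The core idea above — tracking per-prompt reward dynamics and probabilistically skipping prompts whose recent rollouts were uninformative — can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual GRESO implementation; the class name, skip-probability schedule, and parameters (`base_skip_prob`, `max_skip_prob`) are all assumptions. It uses the GRPO-style notion that a prompt is uninformative when all rollouts in its group receive identical rewards, which yields zero advantage.

```python
import random
from collections import defaultdict


class SelectiveRolloutFilter:
    """Illustrative pre-rollout filter (hypothetical, not the paper's code).

    Tracks, per prompt, how many consecutive rollout groups were
    uninformative (all rewards equal -> zero GRPO advantage), and skips
    such prompts with a probability that grows with the streak length.
    """

    def __init__(self, base_skip_prob=0.2, max_skip_prob=0.9):
        # Hypothetical schedule parameters; GRESO defines its own.
        self.base_skip_prob = base_skip_prob
        self.max_skip_prob = max_skip_prob
        self.uninformative_streak = defaultdict(int)

    def should_skip(self, prompt_id):
        """Decide, before spending any rollout compute, whether to skip."""
        streak = self.uninformative_streak[prompt_id]
        if streak == 0:
            return False  # no evidence of uninformativeness; roll out
        p = min(self.base_skip_prob * streak, self.max_skip_prob)
        return random.random() < p

    def update(self, prompt_id, rewards):
        """Record the reward dynamics observed after a rollout group."""
        if len(set(rewards)) <= 1:
            # All rewards identical: zero advantage, no learning signal.
            self.uninformative_streak[prompt_id] += 1
        else:
            self.uninformative_streak[prompt_id] = 0
```

The skip decision is probabilistic rather than a hard filter, so a prompt that was uninformative in earlier epochs still gets occasional rollouts, letting the filter recover if the model's behavior on it changes.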

Haizhong Zheng, Yang Zhou, Brian R. Bartoldson, Bhavya Kailkhura, Fan Lai, Jiawei Zhao, Beidi Chen • 2025

Related benchmarks

Task                   | Dataset       | Metric    | Result | Rank
Mathematical Reasoning | MATH 500      | Accuracy  | 76.65  | 442
Mathematical Reasoning | MATH 500      | Pass@1    | 91.8   | 239
Mathematical Reasoning | AMC           | Accuracy  | 47.29  | 221
Mathematical Reasoning | MATH          | Pass@1    | 84.2   | 112
Mathematical Reasoning | MATH 500      | --        | --     | 106
Mathematical Reasoning | OlympiadBench | Accuracy  | 37.35  | 82
Mathematical Reasoning | Minerva       | Pass@1    | 41.18  | 80
General Reasoning      | MMLU-Pro      | Avg@8 Acc | 0.522  | 63
Mathematical Reasoning | Olympiad      | Pass@1    | 40.89  | 50
Reasoning              | ARC Challenge | --        | --     | 45
Showing 10 of 28 rows
