
QuRL: Efficient Reinforcement Learning with Quantized Rollout

About

Reinforcement learning with verifiable rewards (RLVR) has become a trending paradigm for training reasoning large language models (LLMs). However, due to the autoregressive decoding nature of LLMs, the rollout process becomes the efficiency bottleneck of RL training, accounting for up to 70% of the total training time. In this work, we propose Quantized Reinforcement Learning (QuRL), which uses a quantized actor to accelerate the rollout. We address two challenges in QuRL. First, we propose Adaptive Clipping Range (ACR), which dynamically adjusts the clipping ratio based on the policy ratio between the full-precision actor and the quantized actor; this is essential for mitigating long-term training collapse. Second, we identify the weight update problem: weight changes between RL steps are extremely small, making it difficult for the quantization operation to capture them. We mitigate this problem through an invariant scaling technique that reduces quantization noise and increases the effective weight update. We evaluate our method with INT8 and FP8 quantization experiments on DeepScaleR and DAPO, achieving 20% to 80% faster rollout during training.
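The abstract does not give the exact form of the ACR objective, but the idea of widening the clipping range as the quantized rollout actor drifts from the full-precision actor can be sketched as a PPO-style surrogate loss. Everything below (function name, the `base_eps` and `k` parameters, and the linear widening rule) is a hypothetical illustration, not the paper's implementation:

```python
import numpy as np

def acr_ppo_loss(logp_fp, logp_quant, advantages, base_eps=0.2, k=1.0):
    """Hypothetical sketch of a PPO clipped surrogate with an adaptive
    clipping range (ACR). The clip width is widened in proportion to the
    mismatch between the full-precision actor (logp_fp) and the quantized
    rollout actor (logp_quant), so samples whose policy ratio was shifted
    by quantization are not over-penalized."""
    # Policy ratio between the full-precision actor and the quantized actor.
    rho = np.exp(logp_fp - logp_quant)
    # Adaptive clipping range: grows with the precision mismatch |rho - 1|.
    # (The linear rule with coefficient k is an assumption for illustration.)
    eps = base_eps + k * np.abs(rho - 1.0)
    unclipped = rho * advantages
    clipped = np.clip(rho, 1.0 - eps, 1.0 + eps) * advantages
    # Standard PPO pessimistic bound, averaged over the batch.
    return -np.mean(np.minimum(unclipped, clipped))
```

When the two actors agree (`logp_fp == logp_quant`), the ratio is 1 and the loss reduces to the ordinary PPO clipped objective with width `base_eps`; as quantization noise grows, the range relaxes instead of silently clipping the update away.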

Yuhang Li, Reena Elangovan, Xin Dong, Priyadarshini Panda, Brucek Khailany • 2026

Related benchmarks

| Task                   | Dataset                             | Result                | Rank |
|------------------------|-------------------------------------|-----------------------|------|
| Mathematical Reasoning | GSM8K                               | Accuracy 54.28        | 212  |
| Mathematical Reasoning | Minerva                             | --                    | 138  |
| Mathematical Reasoning | AIME 2024 (test)                    | --                    | 103  |
| Mathematical Reasoning | AMC                                 | Avg@32 71.34          | 21   |
| Mathematical Reasoning | AIME 2024                           | Avg@32 40.52          | 18   |
| Mathematical Reasoning | MATH                                | Avg@32 Accuracy 87.2  | 6    |
| Mathematical Reasoning | Olympiad                            | Avg@32 Accuracy 49.13 | 6    |
| Mathematical Reasoning | DeepScaleR Math Reasoning Aggregate | Avg@32 Acc 55.48      | 6    |
