SOUP: Token-level Single-sample Mix-policy Reinforcement Learning for Large Language Models

About

On-policy reinforcement learning (RL) methods widely used for language model post-training, such as Group Relative Policy Optimization (GRPO), often suffer from limited exploration and early saturation due to low sampling diversity. Off-policy data can help, but current approaches that mix entire trajectories cause significant policy mismatch and instability. In this work, we propose the Single-sample Mix-pOlicy Unified Paradigm (SOUP), a framework that unifies off- and on-policy learning within individual samples at the token level. SOUP confines off-policy influence to the prefix of each generated sequence, which is sampled from historical policies, while the continuation is generated on-policy. Through token-level importance ratios, SOUP effectively leverages off-policy information while preserving training stability. Extensive experiments demonstrate that SOUP consistently outperforms standard on-policy training and existing off-policy extensions. Further analysis clarifies how fine-grained, single-sample mix-policy training improves both exploration and final performance in LLM RL.
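To make the mechanism concrete, below is a minimal PyTorch sketch of the idea as the abstract describes it: decode a prefix from a historical checkpoint, hand generation over to the current policy, and score every token with a clipped PPO/GRPO-style surrogate driven by token-level importance ratios. This is not the authors' released code; the function names (`sample_mixed_rollout`, `mix_policy_token_loss`), the model-call convention, and the fixed `prefix_len` are all illustrative assumptions.

```python
import torch

def sample_mixed_rollout(hist_model, cur_model, prompt_ids, prefix_len, max_new_tokens):
    # Decode the first `prefix_len` tokens from a historical (off-policy)
    # checkpoint, then continue decoding with the current (on-policy) model.
    # Both models are assumed to be callables returning (B, T, V) logits.
    ids = prompt_ids
    for step in range(max_new_tokens):
        model = hist_model if step < prefix_len else cur_model
        with torch.no_grad():
            logits = model(ids)[:, -1, :]
        probs = torch.softmax(logits, dim=-1)
        next_id = torch.multinomial(probs, num_samples=1)
        ids = torch.cat([ids, next_id], dim=-1)
    return ids

def mix_policy_token_loss(logp_cur, logp_behavior, advantages, clip_eps=0.2):
    # Token-level importance ratio between the current policy and whichever
    # policy actually sampled each token: a historical policy for the prefix
    # (where the ratio carries the off-policy correction) and the current
    # policy for the continuation (where the ratio stays near 1).
    ratio = torch.exp(logp_cur - logp_behavior)
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic PPO/GRPO-style clipped surrogate, averaged over tokens.
    return -torch.minimum(unclipped, clipped).mean()

# Toy usage: a 12-token sample whose first 4 tokens came from a
# historical policy, with fake log-probs standing in for real ones.
T, prefix_len = 12, 4
logp_cur = torch.randn(T)
logp_behavior = logp_cur.clone()
logp_behavior[:prefix_len] += 0.3 * torch.randn(prefix_len)  # off-policy drift on the prefix only
advantages = torch.full((T,), 0.7)  # e.g., one group-relative advantage broadcast per token
print(mix_policy_token_loss(logp_cur, logp_behavior, advantages))
```

Confining the mix to a prefix means only those tokens need importance correction, which is what keeps the update close to on-policy training and avoids the instability of mixing whole off-policy trajectories.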

Lei Yang, Wei Bi, Chenxi Sun, Renren Jin, Deyi Xiong • 2026

Related benchmarks

| Task                   | Dataset  | Result          | Rank |
|------------------------|----------|-----------------|------|
| Mathematical Reasoning | AMC 23   | Accuracy: 79.84 | 198  |
| Mathematical Reasoning | Minerva  | --              | 138  |
| Mathematical Reasoning | Olympiad | Accuracy: 59.05 | 92   |
| Mathematical Reasoning | MATH 500 | Accuracy: 88.25 | 73   |
