
Efficient and Stable Reinforcement Learning for Diffusion Language Models

About

Reinforcement Learning (RL) is crucial for unlocking the complex reasoning capabilities of Diffusion-based Large Language Models (dLLMs). However, applying RL to dLLMs faces unique challenges in efficiency and stability. To address these challenges, we propose Spatio-Temporal Pruning (STP), a framework designed to simultaneously improve the efficiency and stability of RL for dLLMs. STP compresses the redundancy in the generative process through: (1) spatial pruning, which constrains the exploration space using static priors; and (2) temporal pruning, which bypasses redundant late-stage refinement steps. Our theoretical analysis demonstrates that STP strictly reduces the variance of the log-likelihood estimation, thereby ensuring more stable policy updates. Extensive experiments demonstrate that STP surpasses state-of-the-art baselines in both efficiency and accuracy. Our code is available at https://github.com/Lolo1222/STP.
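As a loose illustration of the two prunings described in the abstract, here is a toy sketch of decoding with spatial and temporal pruning. Everything here is invented for the example (the vocabulary size, the top-k static prior, the confidence threshold, and the sharpening "refinement" dynamics); it is not the paper's implementation, only a picture of the idea: restrict each position to a small candidate set up front, then stop the iterative refinement early once every position is confident.

```python
import numpy as np

VOCAB = 10          # toy vocabulary size (invented for this sketch)
SEQ_LEN = 6         # toy sequence length
MAX_STEPS = 16      # nominal number of refinement (denoising) steps
TOP_K = 3           # spatial pruning: static prior keeps top-k tokens per position
CONF_THRESH = 0.9   # temporal pruning: stop once every position is this confident

def spatial_prune(probs, k=TOP_K):
    """Spatial pruning (sketch): zero out all but the k most likely tokens
    per position, i.e. a static prior constraining the exploration space."""
    cutoff = np.sort(probs, axis=-1)[:, -k][:, None]
    pruned = np.where(probs >= cutoff, probs, 0.0)
    return pruned / pruned.sum(axis=-1, keepdims=True)

def toy_refine_step(probs):
    """Stand-in for one dLLM refinement step: sharpen each position's
    token distribution (purely illustrative dynamics)."""
    sharpened = probs ** 1.5
    return sharpened / sharpened.sum(axis=-1, keepdims=True)

def generate(max_steps=MAX_STEPS):
    """Iterative refinement with both prunings; returns (tokens, steps used)."""
    # Deterministic toy initial distributions: shifted copies of a ramp.
    base = np.linspace(1.0, 2.0, VOCAB)
    probs = np.stack([np.roll(base, i) for i in range(SEQ_LEN)])
    probs = spatial_prune(probs / probs.sum(axis=-1, keepdims=True))
    for step in range(1, max_steps + 1):
        probs = toy_refine_step(probs)
        # Temporal pruning: bypass the remaining late-stage steps once even
        # the least-confident position exceeds the threshold.
        if probs.max(axis=-1).min() >= CONF_THRESH:
            return probs.argmax(axis=-1), step
    return probs.argmax(axis=-1), max_steps

tokens, steps_used = generate()
print(f"decoded in {steps_used} of {MAX_STEPS} steps")
```

In this toy run the confidence threshold is met well before the nominal step budget, so the late-stage steps are skipped; the claimed variance-reduction result for the log-likelihood estimator is the paper's theoretical contribution and is not reproduced here.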

Jiawei Liu, Xiting Wang, Yuanyuan Zhong, Defu Lian, Yu Yang • 2026

Related benchmarks

Task                   | Dataset          | Metric   | Result | Rank
Reasoning              | COUNTDOWN (test) | Accuracy | 66.02  | 13
Mathematical Reasoning | MATH (test)      | Accuracy | 36.2   | 13
Mathematical Reasoning | GSM8K (test)     | Accuracy | 80.97  | 4
