
LightningRL: Breaking the Accuracy-Parallelism Trade-off of Block-wise dLLMs via Reinforcement Learning

About

Diffusion Large Language Models (dLLMs) have emerged as a promising paradigm for parallel token generation, with block-wise variants garnering significant research interest. Despite their potential, existing dLLMs typically suffer from a rigid accuracy-parallelism trade-off: increasing the number of tokens per forward (TPF) via aggressive parallel decoding often leads to performance degradation and increased generation instability. We identify that this limitation stems from the model's inability to navigate high-parallelism regimes where approximation errors and local corruptions accumulate, ultimately undermining the reliability of parallel generation. To address this, we propose LightningRL, a post-training framework designed to directly optimize the speed-quality Pareto frontier of pre-trained dLLMs. Instead of forcing uniform parallelization, our approach leverages reinforcement learning to identify and reinforce high-parallelism trajectories that maintain generation accuracy. Built upon the Group Relative Policy Optimization (GRPO) framework, LightningRL introduces several enhancements tailored for dLLMs: (1) stabilized training via per-reward decoupled normalization; (2) token-level negative log-likelihood (NLL) regularization on correct trajectories to anchor model performance; and (3) a dynamic sampling strategy with TPF-aware filtering to enhance training efficiency. Experimental results across mathematical and coding benchmarks demonstrate that LightningRL consistently advances the Pareto frontier, achieving competitive task accuracy while significantly increasing parallelism, reaching an average TPF of 7.32 (with a peak of 11.10 on the MBPP dataset). Our code is available at https://github.com/SJTU-DENG-Lab/LightningRL.
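The abstract's first enhancement, per-reward decoupled normalization, can be illustrated with a small sketch. The paper's exact formulation is not given here, so this is a hypothetical reading: in GRPO-style training, instead of standardizing the *summed* reward once within each sampled group, each reward stream (task accuracy and TPF) is standardized separately and then combined, so one stream's scale cannot drown out the other. The function name, weights `w_acc`/`w_tpf`, and the plain z-score form are illustrative assumptions, not the authors' implementation.

```python
from statistics import mean, pstdev

def decoupled_group_advantages(accuracy_rewards, tpf_rewards,
                               w_acc=1.0, w_tpf=1.0, eps=1e-8):
    """Hypothetical sketch of per-reward decoupled normalization.

    Each reward stream is z-scored within the sampled group on its own,
    then the normalized streams are combined with fixed weights. This
    keeps the accuracy signal and the parallelism (TPF) signal on a
    comparable scale regardless of their raw magnitudes.
    """
    def znorm(xs):
        m, s = mean(xs), pstdev(xs)       # population std over the group
        return [(x - m) / (s + eps) for x in xs]

    acc_adv = znorm(accuracy_rewards)
    tpf_adv = znorm(tpf_rewards)
    return [w_acc * a + w_tpf * t for a, t in zip(acc_adv, tpf_adv)]

# Example: two sampled trajectories for one prompt. The first is correct
# but slow; the second is wrong but highly parallel. Decoupled
# normalization yields balanced, zero-sum advantages per stream.
advantages = decoupled_group_advantages([1.0, 0.0], [2.0, 4.0])
```

Normalizing the combined reward instead would let whichever stream has larger variance dominate the group baseline; decoupling avoids that by construction.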

Yanzhe Hu, Yijie Jin, Pengfei Liu, Kai Yu, Zhijie Deng • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Code Generation | HumanEval | Accuracy (%) | 72.6 | 99
Mathematical Reasoning | GSM8K | Accuracy (%) | 90.3 | 42
Mathematical Reasoning | GSM8K | Accuracy (%) | 90.3 | 16
Mathematical Reasoning | GSM8K | Accuracy (%) | 90.3 | 14
Mathematical Reasoning | MATH500 | Accuracy (%) | 63.0 | 14
Code Generation | MBPP | Accuracy (%) | 58.3 | 14
