
DSB: Dynamic Sliding Block Scheduling for Diffusion LLMs

About

Diffusion large language models (dLLMs) have emerged as a promising alternative for text generation, distinguished by their native support for parallel decoding. In practice, block inference is crucial for avoiding order misalignment in global bidirectional decoding and improving output quality. However, the widely used fixed, predefined block (naive) schedule is agnostic to semantic difficulty, making it a suboptimal strategy for both quality and efficiency: it can force premature commitments to uncertain positions while delaying easy positions near block boundaries. In this work, we analyze the limitations of naive block scheduling and reveal the importance of dynamically adapting the schedule to semantic difficulty for reliable and efficient inference. Motivated by this, we propose Dynamic Sliding Block (DSB), a training-free block scheduling method that uses a sliding block with a dynamic size to overcome the rigidity of the naive block. To further improve efficiency, we introduce DSB Cache, a training-free KV-cache mechanism tailored to DSB. Extensive experiments across multiple models and benchmarks demonstrate that DSB, together with DSB Cache, consistently improves both generation quality and inference efficiency for dLLMs. Code is released at https://github.com/lizhuo-luo/DSB.
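To make the contrast with a fixed block schedule concrete, here is a minimal sketch of confidence-adaptive block scheduling. All names, the threshold heuristic, and the growth rule are illustrative assumptions, not the paper's actual algorithm (see the released code for that): a sliding block starts at the first uncommitted position, commits high-confidence positions, and grows when decoding is easy rather than stopping at a predefined boundary.

```python
# Hypothetical sketch of dynamic sliding-block scheduling for a dLLM decoder.
# The confidence criterion and block-growth rule below are assumptions made
# for illustration; DSB's actual scheduling policy may differ.

def dynamic_sliding_block(confidences, threshold=0.9, min_block=2, max_block=8):
    """Decide which masked positions to commit in the current step.

    confidences: per-position model confidence, in left-to-right order,
    starting at the first uncommitted position.
    Returns the indices committed this step.
    """
    committed = []
    block_end = min_block  # current (dynamic) block boundary
    for i, c in enumerate(confidences):
        if i >= block_end:
            break
        if c >= threshold:
            committed.append(i)
            # High confidence lets the block grow (up to max_block), so easy
            # positions near the old boundary are not needlessly delayed.
            block_end = min(max_block, block_end + 1)
        # Low-confidence positions stay masked for a later step, avoiding
        # premature commitment to uncertain tokens.
    return committed

# Toy example: positions 0-2 and 4 are easy, position 3 is hard.
print(dynamic_sliding_block([0.97, 0.95, 0.93, 0.40, 0.99]))  # → [0, 1, 2, 4]
```

A fixed schedule with block size 2 would have committed positions 0 and 1 only, delaying the equally easy positions 2 and 4 to later steps; the adaptive boundary commits them immediately while leaving the hard position 3 masked.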

Lizhuo Luo, Shenggui Li, Yonggang Wen, Tianwei Zhang • 2026

Related benchmarks

Task                     Dataset    Metric    Result   Rank
Logical Reasoning        BBH        Accuracy  58.85    93
Mathematical Reasoning   MATH       Accuracy  38.36    48
Code Generation          HumanEval  TPS       124.6    41
Code Generation          MBPP       Accuracy  56.4     28
Mathematical Reasoning   GSM8K      Accuracy  81.96    28
