
ES-dLLM: Efficient Inference for Diffusion Large Language Models by Early-Skipping

About

Diffusion large language models (dLLMs) are emerging as a promising alternative to autoregressive models (ARMs) due to their ability to capture bidirectional context and their potential for parallel generation. Despite these advantages, dLLM inference remains computationally expensive because the full input context is processed at every iteration. In this work, we analyze the generation dynamics of dLLMs and find that intermediate representations, including key, value, and hidden states, change only subtly across successive iterations. Leveraging this insight, we propose ES-dLLM, a training-free inference acceleration framework for dLLMs that reduces computation by skipping tokens in early layers based on estimated importance. Token importance is computed from the variation of intermediate tensors and the confidence scores of previous iterations. Experiments on LLaDA-8B and Dream-7B demonstrate that ES-dLLM achieves throughput of up to 226.57 and 308.51 tokens per second (TPS), respectively, on an NVIDIA H200 GPU, delivering a 5.6× to 16.8× speedup over the vanilla implementation and up to 1.85× over the state-of-the-art caching method, while preserving generation quality.
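The abstract describes scoring tokens by how much their intermediate states change across iterations, combined with prior confidence, and skipping the lowest-scoring tokens in early layers. The paper does not give the exact formula, so the following is only a minimal illustrative sketch under assumed definitions: importance is a weighted mix of normalized hidden-state variation and (1 − confidence), and the `alpha` weight and `skip_ratio` are hypothetical parameters, not values from the paper.

```python
import numpy as np

def token_importance(prev_hidden, curr_hidden, confidence, alpha=0.5):
    """Hypothetical importance score per token.

    Combines per-token hidden-state variation between two successive
    diffusion iterations with (1 - confidence), so tokens whose
    representations are stable and whose predictions are already
    confident score low and become candidates for early skipping.
    """
    # L2 change of each token's hidden vector between iterations
    variation = np.linalg.norm(curr_hidden - prev_hidden, axis=-1)
    variation = variation / (variation.max() + 1e-8)  # normalize to [0, 1]
    return alpha * variation + (1.0 - alpha) * (1.0 - confidence)

def select_skippable(importance, skip_ratio=0.5):
    """Indices of the lowest-importance tokens to skip in early layers."""
    n_skip = int(len(importance) * skip_ratio)
    return np.argsort(importance)[:n_skip]

# Toy example: token 0 changes a lot and has low confidence,
# tokens 1-3 are stable and confident, so they are skipped.
prev = np.zeros((4, 8))
curr = prev.copy()
curr[0] += 1.0
conf = np.array([0.1, 1.0, 1.0, 1.0])
scores = token_importance(prev, curr, conf)
skippable = select_skippable(scores)
```

In the actual method this selection would gate which tokens are processed by the early transformer layers at each denoising step; the sketch only shows the scoring and selection logic.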

Zijian Zhu, Fei Ren, Zhanhong Tan, Kaisheng Ma • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval (test) | – | – | 506 |
| Mathematical Reasoning | GSM8K | Speedup (×) | 13.4 | 246 |
| Mathematical Reasoning | MATH | Speedup (×) | 6 | 42 |
| Mathematical Reasoning | GSM8K 5-shot (test) | Strict Match Accuracy | 67.63 | 37 |
| Code Generation | MBPP | MBPP Performance Score | 59 | 28 |
| Code Generation | HumanEval 0-shot (test) | Pass@1 | 31.71 | 20 |
| Code Generation | MBPP 3-shot (test) | Pass@1 | 38 | 18 |
| Code Generation | MBPP | TPS (tokens/s) | 301.4 | 17 |
| Code Generation | HumanEval | TPS (tokens/s) | 305.4 | 15 |
| Mathematical Reasoning | MATH 4-shot (test) | – | – | 15 |

Showing 10 of 20 rows
