Diffusion LLMs Can Do Faster-Than-AR Inference via Discrete Diffusion Forcing
About
Diffusion Large Language Models (dLLMs) have emerged as a promising alternative to autoregressive (AR) LLMs for text generation, with the potential to decode multiple tokens in a single iteration. However, none of the existing open-source dLLMs have achieved superior inference speed over AR LLMs of similar size. This paper breaks this barrier with a simple and effective strategy named discrete diffusion forcing (D2F). D2F equips dLLMs with two key capabilities: (1) block-wise autoregressive generation to enable KV cache utilization; (2) prediction of following tokens without requiring completion of prior blocks, enabling inter-block parallel decoding. In this way, vanilla dLLMs are refurbished into an AR-diffusion hybrid paradigm for efficient inference. D2F can be implemented with an asymmetric distillation process based on pre-trained dLLMs. We further propose a pipelined parallel decoding algorithm, which enables a trade-off between efficiency and efficacy. Empirically, D2F dLLMs achieve more than $\mathbf{2.5\times}$ the inference speed of LLaMA3 and Qwen2.5 on GSM8K. Compared to vanilla dLLMs like LLaDA and Dream, the acceleration can be more than $\mathbf{50\times}$ while maintaining comparable output quality. The code is available at https://github.com/zhijie-group/Discrete-Diffusion-Forcing.
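To make the decoding schedule concrete, below is a minimal, self-contained Python sketch of the pipelined block decoding idea described in the abstract: blocks are produced left to right so finished blocks can keep a frozen KV cache, while the next block begins denoising before the previous one is complete. This is a toy simulation under stated assumptions, not the repository's implementation; `fake_model_step`, `CONF_THRESHOLD`, `ADMIT_RATIO`, and the block sizes are hypothetical stand-ins for the model forward pass and the thresholds D2F actually uses.

```python
import random

# Toy sketch of D2F-style pipelined block decoding (illustrative only):
# blocks are generated left to right (block-wise AR, so committed blocks can
# keep a frozen KV cache), and the next block starts decoding before the
# previous one is finished (inter-block parallel decoding). All names and
# thresholds are hypothetical stand-ins, not the repository's actual API.

BLOCK_SIZE = 8        # tokens per block
NUM_BLOCKS = 4        # number of blocks to generate
CONF_THRESHOLD = 0.9  # commit a token once its confidence exceeds this
ADMIT_RATIO = 0.5     # admit the next block once the previous one is half decoded


def fake_model_step(block):
    """Stand-in for one denoising forward pass over one block.

    Returns a confidence for every still-masked position; a real dLLM would
    condition on the KV cache of all preceding (possibly partial) blocks.
    """
    return {i: random.random() for i, tok in enumerate(block) if tok is None}


def decoded_fraction(block):
    return sum(tok is not None for tok in block) / len(block)


blocks = [[None] * BLOCK_SIZE for _ in range(NUM_BLOCKS)]
active = [0]     # blocks currently being refined in the pipeline
next_block = 1   # next block waiting to enter the pipeline
steps = 0

while any(decoded_fraction(b) < 1.0 for b in blocks):
    steps += 1

    # Inter-block parallelism: start the next block as soon as its predecessor
    # is "far enough along", without waiting for it to finish.
    if next_block < NUM_BLOCKS and decoded_fraction(blocks[next_block - 1]) >= ADMIT_RATIO:
        active.append(next_block)
        next_block += 1

    # One forward pass refines every active block (a single batched call in practice).
    for idx in active:
        for pos, conf in fake_model_step(blocks[idx]).items():
            if conf > CONF_THRESHOLD:
                blocks[idx][pos] = f"tok_{idx}_{pos}"  # commit token; its KV is cached

    # Fully decoded blocks leave the pipeline; their KV cache is reused as context.
    active = [i for i in active if decoded_fraction(blocks[i]) < 1.0]

print(f"decoded {NUM_BLOCKS * BLOCK_SIZE} tokens in {steps} parallel forward steps")
```

In this sketch, lowering `CONF_THRESHOLD` or `ADMIT_RATIO` commits more tokens per forward pass at some risk to output quality, which mirrors the efficiency-efficacy trade-off the pipelined parallel decoding algorithm is meant to expose.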
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Code Generation | HumanEval | -- | 850 |
| Mathematical Reasoning | MATH | Accuracy: 38.62 | 535 |
| Mathematical Reasoning | GSM8K | -- | 177 |
| Code Generation | MBPP | Accuracy (%): 53.48 | 146 |
| Mathematical Reasoning | GSM8K | TPS: 89.82 | 26 |
| Code Generation | MBPP | TPF: 230 | 9 |
| Mathematical Reasoning | MATH (test) | TPF: 2.6 | 9 |
| Code Generation | HumanEval | TPF: 2.5 | 9 |
| Mathematical Reasoning | GSM8K (test) | TPF: 3.1 | 9 |
| Code Generation | MBPP | TPF: 3.13 | 6 |