d3LLM: Ultra-Fast Diffusion LLM using Pseudo-Trajectory Distillation
About
Diffusion large language models (dLLMs) offer capabilities beyond those of autoregressive (AR) LLMs, such as parallel decoding and random-order generation. However, realizing these benefits in practice is non-trivial, as dLLMs inherently face an accuracy-parallelism trade-off. Despite increasing interest, existing methods typically focus on only one side of the coin, targeting either efficiency or performance. To address this limitation, we propose d3LLM (Pseudo-Distilled Diffusion Large Language Model), which strikes a balance between accuracy and parallelism: (i) during training, we introduce pseudo-trajectory distillation to teach the model which tokens can be decoded confidently at early steps, thereby improving parallelism; (ii) during inference, we employ entropy-based multi-block decoding with a KV-cache refresh mechanism to achieve high parallelism while maintaining accuracy. To better evaluate dLLMs, we also introduce AUP (Accuracy Under Parallelism), a new metric that jointly measures accuracy and parallelism. Experiments demonstrate that d3LLM achieves up to a 10$\times$ speedup over vanilla LLaDA/Dream and a 5$\times$ speedup over AR models with minimal accuracy loss. Our code is available at https://github.com/hao-ai-lab/d3LLM.
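To illustrate the idea behind entropy-based decoding, here is a minimal sketch: at each denoising step, every masked position whose predictive entropy falls below a threshold is decoded in parallel. The threshold value, the toy distributions, and the fallback rule are illustrative assumptions, not the paper's actual implementation.

```python
import math

def entropy(probs):
    """Shannon entropy (in nats) of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_confident_positions(dists, threshold=0.5):
    """Return indices of masked positions confident enough to decode now.

    dists: per-position next-token distributions (lists of probabilities).
    Always decodes at least one position (the lowest-entropy one) so the
    sampler makes progress even when nothing clears the threshold.
    This is a hedged sketch, not d3LLM's actual decoding rule.
    """
    entropies = [entropy(d) for d in dists]
    chosen = [i for i, h in enumerate(entropies) if h < threshold]
    if not chosen:  # fall back to the single most confident position
        chosen = [min(range(len(entropies)), key=entropies.__getitem__)]
    return chosen

# Example: three masked positions with varying confidence.
dists = [
    [0.97, 0.01, 0.01, 0.01],  # near-certain -> low entropy, decode now
    [0.25, 0.25, 0.25, 0.25],  # uniform -> high entropy, defer
    [0.90, 0.05, 0.03, 0.02],  # fairly confident -> decode now
]
print(select_confident_positions(dists))  # -> [0, 2]
```

The more positions clear the threshold per step, the fewer model forward passes are needed, which is the source of the parallelism the abstract describes.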
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Radiology Report Generation | MIMIC-CXR (test) | -- | -- | 172 |
| Code Generation | HumanEval | Accuracy | 57.1 | 99 |
| Radiology Report Generation | CheXpert Plus (test) | -- | -- | 88 |
| Mathematical Reasoning | MATH | -- | -- | 42 |
| Chest X-ray Report Generation | ReXGradient (test) | ROUGE-L | 54.18 | 16 |
| Mathematical Reasoning | GSM8K | Accuracy (%) | 73.1 | 16 |
| Mathematical Reasoning | GSM8K | Accuracy | 81.4 | 14 |
| Code Generation | MBPP | Accuracy | 55.6 | 14 |
| Mathematical Reasoning | MATH500 | Accuracy | 38.2 | 14 |
| Code Generation | HumanEval | Accuracy | 39.6 | 6 |