CDLM: Consistency Diffusion Language Models For Faster Sampling
About
Diffusion Language Models (DLMs) offer a promising parallel generation paradigm but suffer from slow inference due to numerous refinement steps and the inability to use standard KV caching. We introduce CDLM (Consistency Diffusion Language Models), a training-based acceleration method that simultaneously tackles both bottlenecks. CDLM integrates consistency modeling to drastically reduce the number of required sampling steps by enabling multi-token finalization. Furthermore, we enforce a block-wise causal attention mask during fine-tuning, making the model fully compatible with KV caching. Experiments show CDLM achieves 3.6x-14.5x lower latency while maintaining competitive accuracy on math and coding tasks. The full training and evaluation code is available at https://github.com/SqueezeAILab/CDLM.
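The block-wise causal attention described above (full attention within a block, causal attention across blocks) is what makes KV caching possible: once a block is finalized, its keys and values never change. A minimal sketch of such a mask, with illustrative function and parameter names not taken from the CDLM codebase:

```python
def block_causal_mask(seq_len: int, block_size: int):
    """Boolean attention mask: entry [i][j] is True iff position i
    may attend to position j.

    Tokens attend bidirectionally within their own block and to every
    token in earlier blocks; later blocks are masked out, so the KV
    cache for completed blocks stays valid across refinement steps.
    (Illustrative sketch; the actual CDLM implementation may differ.)
    """
    block = lambda pos: pos // block_size
    return [[block(j) <= block(i) for j in range(seq_len)]
            for i in range(seq_len)]

# Example: 6 positions split into blocks of 2 -> blocks [0,0,1,1,2,2]
mask = block_causal_mask(seq_len=6, block_size=2)
```

Within block 0 (positions 0 and 1) attention is bidirectional, while position 0 cannot attend to position 2 in the next block.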
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval 0-shot (test) | -- | -- | 17 |
| Mathematical Reasoning | GSM8K 4-shot (test) | Throughput | 54.3 | 15 |
| Mathematical Reasoning | MATH 4-shot (test) | Accuracy | 28.3 | 15 |
| Code Generation | MBPP-Instruct 0-shot (test) | TPS | 60.6 | 10 |
| Mathematical Reasoning | GSM8K CoT 8-shot (test) | TPS | 51.7 | 5 |
| Code Generation | HumanEval-Instruct 0-shot (test) | TPS | 43.3 | 5 |