The Diffusion Duality

About

Uniform-state discrete diffusion models hold the promise of fast text generation due to their inherent ability to self-correct. However, they are typically outperformed by autoregressive models and masked diffusion models. In this work, we narrow this performance gap by leveraging a key insight: Uniform-state diffusion processes naturally emerge from an underlying Gaussian diffusion. Our method, Duo, transfers powerful techniques from Gaussian diffusion to improve both training and sampling. First, we introduce a curriculum learning strategy guided by the Gaussian process, doubling training speed by reducing variance. Models trained with curriculum learning surpass autoregressive models in zero-shot perplexity on 3 of 7 benchmarks. Second, we present Discrete Consistency Distillation, which adapts consistency distillation from the continuous to the discrete setting. This algorithm unlocks few-step generation in diffusion language models by accelerating sampling by two orders of magnitude. We provide the code, model checkpoints, and video tutorials on the project page: http://s-sahoo.github.io/duo
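The duality claimed above can be checked numerically: applying Gaussian diffusion to a one-hot token embedding and taking an argmax yields exactly a uniform-state discrete corruption, since by symmetry every non-clean coordinate is equally likely to win the argmax. The sketch below is a minimal Monte-Carlo illustration, assuming a simple corruption z_t = alpha * x + noise; the vocabulary size, signal level, and function of time are illustrative choices, not the paper's exact parameterization.

```python
# Minimal numerical sketch of the Gaussian-to-uniform-state duality.
# Assumptions (illustrative, not the paper's exact setup): vocabulary
# size V = 8, unit-variance noise, a single signal level `alpha`.
import numpy as np

rng = np.random.default_rng(0)
V = 8            # illustrative vocabulary size
token = 3        # the clean token index
alpha = 1.5      # illustrative Gaussian signal level at some time t
n = 200_000      # Monte-Carlo samples

x = np.zeros(V)
x[token] = 1.0

# Gaussian diffusion on the one-hot embedding: z_t = alpha * x + noise.
z = alpha * x + rng.standard_normal((n, V))
samples = z.argmax(axis=1)

# Empirical marginal of argmax(z_t): the clean token retains some
# probability mass, and by symmetry the remainder is spread uniformly
# over the other tokens -- the marginal of a uniform-state diffusion step.
counts = np.bincount(samples, minlength=V) / n
print("P(argmax = clean token): ", counts[token])
print("P(argmax = other tokens):", counts[np.arange(V) != token])
```

Running this prints a near-constant probability across all non-clean tokens, which is the uniform-state marginal that the paper exploits to transfer Gaussian-diffusion techniques to the discrete setting.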
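As background for the second contribution, the sketch below shows the standard continuous-setting consistency distillation objective (Song et al., 2023) that the abstract says Duo adapts. Everything here is a generic placeholder: the function signatures, the one-step `teacher_step` solver, and the squared-error distance are illustrative, and this is not the paper's Discrete Consistency Distillation algorithm itself.

```python
# Generic (continuous) consistency distillation training term.
# All names below are hypothetical placeholders for this sketch.
import torch

def consistency_distillation_loss(student, ema_student, teacher_step,
                                  z_next, t_next, t_cur):
    """One consistency-distillation loss term.

    student:      the few-step model being trained.
    ema_student:  an exponential-moving-average copy of the student,
                  used to produce targets (kept frozen here).
    teacher_step: one ODE-solver step of the frozen teacher, mapping a
                  latent at the noisier time t_next to time t_cur.
    """
    # Student prediction at the noisier time t_next.
    pred = student(z_next, t_next)
    with torch.no_grad():
        # Teacher takes one solver step toward the less noisy time ...
        z_cur = teacher_step(z_next, t_next, t_cur)
        # ... and the EMA copy of the student provides the target there.
        target = ema_student(z_cur, t_cur)
    # Consistency: adjacent points on one trajectory map to the same output.
    return torch.mean((pred - target) ** 2)
```

Training the student to be self-consistent along teacher trajectories is what collapses many sampling steps into a few, which is the mechanism behind the two-orders-of-magnitude sampling speedup reported above.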

Subham Sekhar Sahoo, Justin Deschenaux, Aaron Gokaslan, Guanghan Wang, Justin Chiu, Volodymyr Kuleshov • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | PTB | Perplexity | 89.35 | 1034 |
| Language Modeling | LM1B (test) | Perplexity | 22.3 | 130 |
| Unconditional Text Generation | OpenWebText | Gen. PPL | 46.31 | 100 |
| Language Modeling | PTB (val) | Perplexity | 89.35 | 99 |
| Image Generation | MNIST Binary (test) | FID | 6.52 | 98 |
| Image Generation | CIFAR-10 | FID | 69.87 | 88 |
| Molecular Generation | ZINC250K | Uniqueness | 942.2 | 68 |
| Multiple-choice Question Answering | ARC Easy (test) | Accuracy | 44.95 | 68 |
| Language Modeling | OWT | Gen. PPL | 47.13 | 61 |
| Molecule Generation | ZINC 250k 2012 | Validity Score | 942.2 | 56 |

Showing 10 of 44 rows.
