
Gumbel Distillation for Parallel Text Generation

About

The slow, sequential nature of autoregressive (AR) language models has driven the adoption of parallel decoding methods. However, these non-AR models often sacrifice generation quality because they struggle to model the complex joint distribution of token sequences. To narrow this performance gap, we introduce Gumbel Distillation, a novel distillation technique that enables parallel decoders to learn this distribution effectively. Our method leverages the Gumbel-Max trick to create a deterministic mapping from a latent Gumbel noise space to the output tokens of a high-performing AR teacher. As a model-agnostic technique, Gumbel Distillation seamlessly integrates with diverse parallel decoding architectures, including MDLM and BD3-LM. Experiments on LM1B and OpenWebText show that Gumbel Distillation substantially improves the generation quality of parallel language models, achieving a 30.0% improvement in MAUVE score and a 10.5% improvement in generative perplexity over MDLM trained on the OpenWebText dataset. Code is available at https://github.com/hxixixh/gumbel-distill.

Chi Zhang, Xixi Hu, Bo Liu, Qiang Liu • 2026
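The core idea in the abstract is the Gumbel-Max trick: adding fixed Gumbel(0, 1) noise to the teacher's logits and taking an argmax turns sampling into a deterministic noise-to-token mapping, which a parallel student can then be trained to imitate. Below is a minimal PyTorch sketch of that mapping only; the shapes, the random stand-in for teacher logits, and the training comment are illustrative assumptions, not the repository's actual code.

```python
import torch

def gumbel_max_sample(logits: torch.Tensor, gumbel_noise: torch.Tensor) -> torch.Tensor:
    # Gumbel-Max trick: argmax(logits + g) with g ~ Gumbel(0, 1) is an exact
    # sample from softmax(logits). Fixing g makes the sampled token a
    # deterministic function of the noise.
    return torch.argmax(logits + gumbel_noise, dim=-1)

# Illustrative shapes; the Gumbel noise is drawn once and reused for every position.
batch, seq_len, vocab = 2, 16, 50257
u = torch.rand(batch, seq_len, vocab).clamp_min(1e-9)  # avoid log(0)
gumbel_noise = -torch.log(-torch.log(u))

# Stand-in for the AR teacher's per-position logits; in practice these would come
# from decoding the teacher position by position under the same fixed noise.
teacher_logits = torch.randn(batch, seq_len, vocab)
teacher_tokens = gumbel_max_sample(teacher_logits, gumbel_noise)

# A parallel student would then be trained to reproduce the noise -> teacher_tokens
# mapping in one shot, which is the distillation target described above.
```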

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Unconditional Text Generation | OpenWebText | Gen. PPL | 24.37 | 100
Language Modeling | LM1B (val) | Perplexity | 22.69 | 55
Language Modeling | WikiText (val) | Perplexity | 13.86 | 54
Language Modeling | AG News (val) | Perplexity | 18.19 | 28
Unconditional Generation | LM1B | Generation Perplexity | 46.06 | 7
Likelihood Estimation | PTB (val) | Perplexity | 35.12 | 4
Likelihood Estimation | LAMBADA (val) | Perplexity | 15.56 | 4
Likelihood Estimation | PubMed Scientific Papers (val) | Perplexity | 19.78 | 4
Likelihood Estimation | arXiv Scientific Papers (val) | Perplexity | 16.85 | 4
Unconditional Text Generation | OpenWebText | Clarity | 3.41 | 4
