
T3D: Few-Step Diffusion Language Models via Trajectory Self-Distillation with Direct Discriminative Optimization

About

Diffusion large language models (DLLMs) have the potential to enable fast text generation by decoding multiple tokens in parallel. However, in practice, their inference efficiency is constrained by the need for many refinement steps, while aggressively reducing the number of steps leads to a substantial degradation in generation quality. To alleviate this, we propose a trajectory self-distillation framework that improves few-step decoding by distilling the model's own generative trajectories. We incorporate Direct Discriminative Optimization (DDO), a reverse-KL objective that promotes mode-seeking distillation and encourages the student to concentrate on high-probability teacher modes. Across benchmarks, our approach consistently outperforms strong few-step baselines and standard training under tight step budgets. Although full-step decoding remains superior, we substantially narrow the gap, establishing a strong foundation towards practical few-step DLLMs. The source code is available at https://github.com/Tyrion58/T3D.
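The mode-seeking behavior of the reverse-KL objective mentioned in the abstract can be illustrated with a minimal sketch (hypothetical names and toy distributions, not code from the T3D repository): minimizing KL(student || teacher) heavily penalizes the student for placing mass where the teacher has little, so the student concentrates on high-probability teacher modes rather than spreading mass to cover them all.

```python
import math

def reverse_kl(student_probs, teacher_probs, eps=1e-12):
    """Reverse KL divergence KL(student || teacher) for one categorical
    (e.g. per-token) distribution. The eps guard avoids log(0).

    Unlike forward KL, terms where the student puts mass on tokens the
    teacher assigns near-zero probability blow up, which drives the
    student toward a single high-probability teacher mode (mode-seeking).
    """
    return sum(
        s * (math.log(s + eps) - math.log(t + eps))
        for s, t in zip(student_probs, teacher_probs)
    )

# Toy example: the teacher is bimodal over three tokens.
teacher = [0.5, 0.5, 0.0]
mode_seeking_student = [1.0, 0.0, 0.0]       # commits to one teacher mode
mode_covering_student = [1/3, 1/3, 1/3]      # spreads mass, incl. off-mode

print(reverse_kl(mode_seeking_student, teacher))   # modest penalty
print(reverse_kl(mode_covering_student, teacher))  # huge penalty (mass on a
                                                   # zero-probability token)
```

Under reverse KL the mode-seeking student is strongly preferred, whereas under forward KL the ranking would flip; this is the distillation behavior the abstract attributes to DDO.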

Tunyu Zhang, Xinxi Zhang, Ligong Han, Haizhou Shi, Xiaoxiao He, Zhuowei Li, Hao Wang, Kai Xu, Akash Srivastava, Hao Wang, Vladimir Pavlovic, Dimitris N. Metaxas • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | MATH500 (test) | Accuracy | 61.6 | 514 |
| Code Generation | HumanEval (test) | -- | -- | 506 |
| Code Generation | MBPP (test) | -- | -- | 298 |
| Radiology Report Generation | MIMIC-CXR (test) | -- | -- | 172 |
| Radiology Report Generation | CheXpert Plus (test) | -- | -- | 88 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 0.8385 | 48 |
| Code Generation | HumanEval | TPS | 222.7 | 41 |
| Chest X-ray Report Generation | ReXGradient (test) | ROUGE-L | 64.36 | 16 |
| Mathematical Reasoning | MATH 500 | Throughput (TPS) | 791.2 | 5 |
| Mathematical Reasoning | GSM8K | Throughput (TPS) | 843 | 5 |

Showing 10 of 11 rows.

Other info

GitHub: https://github.com/Tyrion58/T3D
