
Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions

About

In recent years, masked diffusion models (MDMs) have emerged as a promising alternative for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time for flexibility at inference time. At training time, they must learn to solve an exponentially large number of infilling problems, but at inference time, they can decode tokens in essentially arbitrary order. In this work, we closely examine these two competing effects. On the training front, we theoretically and empirically demonstrate that MDMs indeed train on computationally intractable subproblems compared to their autoregressive counterparts. On the inference front, we show that a suitable strategy for adaptively choosing the token decoding order significantly enhances the capabilities of MDMs, allowing them to sidestep hard subproblems. On logic puzzles like Sudoku, we show that adaptive inference can boost solving accuracy in pretrained MDMs from $<7$% to $\approx 90$%, even outperforming ARMs that have $7\times$ as many parameters and were explicitly trained via teacher forcing to learn the correct decoding order.
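The adaptive inference idea described above can be sketched in a few lines: at each step, ask the model for its predictions at every still-masked position, then commit the single position where the model is most confident, so that hard subproblems are deferred until more context is revealed. The sketch below is a minimal illustration, not the paper's implementation; `toy_model` is a hypothetical stand-in denoiser whose confidence grows as neighboring positions get filled, and "most confident first" is one common instance of an adaptive ordering rule.

```python
MASK = None  # placeholder for a still-masked position

def toy_model(seq):
    """Hypothetical stand-in for an MDM denoiser: for each masked position,
    return a probability distribution over tokens 0..9. In this toy, the
    'true' token at position i is i % 10, and confidence rises with the
    number of already-filled neighbors (mimicking easier subproblems)."""
    dists = {}
    for i, tok in enumerate(seq):
        if tok is MASK:
            filled = sum(
                1 for j in (i - 1, i + 1)
                if 0 <= j < len(seq) and seq[j] is not MASK
            )
            conf = 0.6 + 0.04 * filled  # toy assumption, not a trained model
            dist = {t: (1.0 - conf) / 9 for t in range(10)}
            dist[i % 10] = conf
            dists[i] = dist
    return dists

def adaptive_decode(model, length):
    """Greedy confidence-based adaptive ordering: repeatedly decode the
    masked position whose top prediction has the highest probability."""
    seq = [MASK] * length
    order = []  # record the order in which positions were decoded
    while MASK in seq:
        dists = model(seq)
        pos = max(dists, key=lambda i: max(dists[i].values()))
        seq[pos] = max(dists[pos], key=dists[pos].get)
        order.append(pos)
    return seq, order
```

Note that a fixed left-to-right order is the special case where `pos` is always the leftmost masked position; the gain reported in the paper comes precisely from letting the model's own confidence choose `pos` instead.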

Jaeyeon Kim, Kulin Shah, Vasilis Kontonis, Sham Kakade, Sitan Chen • 2025

Related benchmarks

Task                     | Dataset          | Result             | Rank
Mathematical Reasoning   | GSM8K            | Accuracy: 77.5     | 983
Code Generation          | HumanEval        | --                 | 850
Mathematical Reasoning   | GSM8K (test)     | Accuracy: 50.2     | 797
Code Generation          | HumanEval (test) | --                 | 444
Code Generation          | HumanEval+       | --                 | 189
Mathematical Reasoning   | GSM8K            | --                 | 177
Code Generation          | MBPP             | Accuracy (%): 2.2  | 146
Code Generation          | MBPP             | Accuracy: 40.4     | 120
Text-to-Image Generation | GenEval          | Two Objects: 68.7  | 87
Reasoning                | ARC              | Accuracy: 85.91    | 83

Showing 10 of 25 rows
