
MDM-Prime-v2: Binary Encoding and Index Shuffling Enable Compute-optimal Scaling of Diffusion Language Models

About

Masked diffusion models (MDMs) exhibit superior generalization when trained with a partial masking scheme (Prime), which converts tokens into sub-tokens and models the diffusion process at the sub-token level. We identify two limitations of the MDM-Prime framework. First, there are no tools to guide the choice of token granularity in the sub-tokenizer. Second, the functional form of the sub-tokenizer significantly degrades likelihood estimation when paired with commonly used Byte-Pair Encoding (BPE) tokenizers. To address these limitations, we study the tightness of the variational bound in MDM-Prime and develop MDM-Prime-v2, a masked diffusion language model that incorporates binary encoding and index shuffling. Our scaling analysis shows that MDM-Prime-v2 is 21.8× more compute-efficient than autoregressive models (ARMs). In compute-optimal comparisons, MDM-Prime-v2 achieves a perplexity of 7.77 on OpenWebText, outperforming ARM (12.99), MDM (18.94), and MDM-Prime (13.41). When scaled to 1.1B parameters, our model further demonstrates superior zero-shot accuracy on a range of commonsense reasoning tasks.
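To make the "binary encoding" idea concrete, here is a minimal illustrative sketch (not the authors' implementation): a token id from a vocabulary of size V is split into ceil(log2(V)) binary sub-tokens, so the diffusion process can mask and denoise each bit position independently. The function names `encode`/`decode` and the GPT-2-style vocabulary size are assumptions for illustration only.

```python
import math

def encode(token_id: int, vocab_size: int) -> list[int]:
    """Split a token id into binary sub-tokens (most significant bit first)."""
    n_bits = math.ceil(math.log2(vocab_size))
    return [(token_id >> (n_bits - 1 - i)) & 1 for i in range(n_bits)]

def decode(bits: list[int]) -> int:
    """Recombine binary sub-tokens into the original token id."""
    token_id = 0
    for b in bits:
        token_id = (token_id << 1) | b
    return token_id

# Example: with a GPT-2-style vocabulary (50257 entries), each token
# becomes ceil(log2(50257)) = 16 binary sub-tokens.
bits = encode(37, vocab_size=50257)
assert len(bits) == 16
assert decode(bits) == 37
```

A sub-token factorization like this shrinks the per-position prediction space from V classes to 2, at the cost of a longer sequence; the paper's index-shuffling component addresses how these sub-token positions are arranged.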

Chen-Hao Chao, Wei-Fang Sun, Junwei Quan, Chun-Yi Lee, Rahul G. Krishnan • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | BoolQ | Accuracy | 62.05 | 212 |
| Commonsense Reasoning | OBQA | Accuracy | 34 | 117 |
| Commonsense Reasoning | SocialIQA | Accuracy | 42.02 | 116 |
| Commonsense Reasoning | ARC-E | Accuracy | 47.81 | 106 |
| Language Modeling | PTB (val) | Perplexity | 20.26 | 99 |
| Language Modeling | LM1B (val) | Perplexity | 16.57 | 55 |
| Language Modeling | WikiText (val) | Perplexity | 12.51 | 54 |
| Language Modeling | OpenWebText (OWT) (val) | Perplexity | 7.77 | 42 |
| Commonsense Reasoning | aNLI | Accuracy | 34.24 | 35 |
| Language Modeling | AG News (val) | Perplexity | 27.79 | 28 |
Showing 10 of 15 rows.

Other info

GitHub
