
From Next-Token to Next-Block: A Principled Adaptation Path for Diffusion LLMs

About

Diffusion Language Models (DLMs) enable fast parallel generation, yet training large DLMs from scratch is costly. As a practical shortcut, adapting off-the-shelf Auto-Regressive (AR) model weights into a DLM can quickly equip the DLM with strong long-context generation capabilities. Prior adaptation attempts either modify logits, randomly grow attention masks toward full-sequence diffusion, or simply transplant AR weights into a Block-Diffusion recipe, leaving two key questions unaddressed: what is the final destination of adaptation, and how can we adapt better? We reframe the whole AR-to-DLM adaptation under the Block-Diffusion paradigm, transitioning from block size 1 (the AR case) to the final Block-Diffusion state. Concretely, the principled adaptation pathway has three components: a context-causal path that keeps causal attention over the prefix, an efficient parallel adaptation procedure that maintains AR guidance, and a gradual increase of the generation block size for a smoother transition. Built on these components, the adaptation proves competitive across models at different scales. With this improved adaptation, we propose NBDiff-7B, which inherits long-context modeling and reasoning capabilities and achieves state-of-the-art performance among 7B-class DLMs. Code: https://github.com/YuchuanTian/NBDiff.
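The Block-Diffusion framing in the abstract can be illustrated with a small attention-mask sketch. This is not the paper's implementation, only a minimal illustration of the stated idea: tokens attend causally to earlier blocks (the context-causal path over the prefix) and bidirectionally within their own generation block, so that block size 1 recovers the standard AR causal mask and larger blocks move toward Block-Diffusion. The function name and NumPy formulation are my own assumptions.

```python
import numpy as np

def block_diffusion_mask(seq_len: int, block_size: int) -> np.ndarray:
    """Illustrative Block-Diffusion attention mask (assumption, not the
    paper's code): position i may attend to position j iff j's block index
    is <= i's block index. Attention is causal across blocks but
    bidirectional within a block; block_size=1 reduces to AR attention."""
    blocks = np.arange(seq_len) // block_size  # block index of each position
    return blocks[None, :] <= blocks[:, None]  # True = attention allowed

# block_size=1 gives exactly the lower-triangular AR causal mask.
assert np.array_equal(block_diffusion_mask(6, 1),
                      np.tril(np.ones((6, 6), dtype=bool)))
```

Under this framing, the "gradual increment of the generation block size" described above corresponds to sweeping `block_size` from 1 up to the target value during adaptation.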

Yuchuan Tian, Yuchen Liang, Shuo Zhang, Yingte Shu, Guangwen Yang, Wei He, Sibo Fang, Tianyu Guo, Kai Han, Chao Xu, Hanting Chen, Xinghao Chen, Yunhe Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 89 | 1036 |
| Mathematical Reasoning | MATH | Accuracy | 84 | 882 |
| Language Understanding | MMLU | Accuracy | 82.9 | 825 |
| Instruction Following | IFEval | IFEval Accuracy | 60.8 | 625 |
| Mathematical Reasoning | MATH | Accuracy | 46 | 535 |
| Mathematical Reasoning | GSM8K | Accuracy | 91 | 358 |
| General Knowledge | MMLU | Accuracy | 82.9 | 234 |
| Logical Reasoning | BBH | Accuracy | 77.3 | 201 |
| Code Generation | MBPP | Pass@1 | 87.6 | 193 |
| Code Generation | MBPP | Accuracy (%) | 55.8 | 146 |

Showing 10 of 17 rows.
