Empirical Analysis of Decoding Biases in Masked Diffusion Models

About

Masked diffusion models (MDMs), which leverage bidirectional attention and a denoising process, are narrowing the performance gap with autoregressive models (ARMs). However, their internal attention mechanisms remain under-explored. This paper investigates attention behaviors in MDMs, revealing the phenomenon of Attention Floating. Unlike ARMs, where attention converges to a fixed sink, MDMs exhibit dynamic, dispersed attention anchors that shift across denoising steps and layers. Further analysis reveals a Shallow Structure-Aware, Deep Content-Focused attention mechanism: shallow layers utilize floating tokens to build a global structural framework, while deeper layers allocate more capacity to capturing semantic content. Empirically, this distinctive attention pattern provides a mechanistic explanation for the strong in-context learning capabilities of MDMs, allowing them to double the performance of ARMs on knowledge-intensive tasks. All code is available at https://github.com/NEUIR/Uncode.
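
To make the Attention Floating diagnosis concrete, below is a minimal probing sketch (not the authors' code): it records which token receives the most attention mass at each layer and denoising step, so one can check whether the anchor stays fixed (a static sink, as in ARMs) or drifts across steps and layers (floating, as reported for MDMs). The `model.denoise_step` API is a hypothetical stand-in for an MDM implementation that exposes per-layer attention maps.

```python
import torch

def top_attended_tokens(attn_maps):
    """attn_maps: list of [heads, seq, seq] tensors, one per layer.
    Returns, per layer, the index of the token receiving the most
    attention mass, averaged over heads and query positions."""
    anchors = []
    for layer_attn in attn_maps:
        # Mean over heads, then over query positions -> [seq] vector
        # of attention mass received by each key token.
        received = layer_attn.mean(dim=0).mean(dim=0)
        anchors.append(int(received.argmax()))
    return anchors

def trace_anchors(model, tokens, num_steps):
    """Run the reverse-diffusion loop and record the per-layer anchor
    token at every step. A constant column across steps would indicate
    a static sink; shifting indices indicate floating anchors."""
    trace = []
    for step in range(num_steps):
        # Hypothetical API: one denoising step that also returns the
        # attention maps for every layer.
        tokens, attn_maps = model.denoise_step(
            tokens, step, output_attentions=True
        )
        trace.append(top_attended_tokens(attn_maps))
    return trace  # shape: [num_steps][num_layers]
```

Plotting the resulting trace as a step-by-layer heatmap is one simple way to contrast the dispersed, shifting anchors of an MDM against the fixed sink typically seen in an ARM.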

Pengcheng Huang, Tianming Liu, Zhenghao Liu, Yukun Yan, Shuo Wang, Tong Xiao, Zulong Chen, Maosong Sun · 2025

Related benchmarks

| Task | Dataset | Accuracy (%) | Rank |
| --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K | 81.5 | 983 |
| Code Generation | MBPP | 46.2 | 120 |
| Planning | Sudoku | 83.6 | 68 |
| Planning | Countdown | 42.4 | 68 |
| Mathematical Reasoning | MATH500 | 46.8 | 57 |
| Scientific Reasoning | GPQA | 28.8 | 55 |
| Reasoning and Planning | Reasoning and Planning Suite (GSM8K, MATH500, HumanEval, MBPP, Sudoku, Countdown) | 59.1 | 14 |
