
Relaxing Positional Alignment in Masked Diffusion Language Models

About

Masked diffusion language models (MDLMs) have emerged as a promising alternative to dominant autoregressive approaches. Although they achieve competitive performance on several tasks, a substantial gap remains in open-ended text generation. We hypothesize that one cause of this gap is that strict positional prediction makes MDLM decoding highly sensitive to token misalignment, and we show through controlled interventions that a one-position shift can severely disrupt semantics. This observation suggests that enforcing strict positional supervision during training is misaligned with the irreversible denoising dynamics of MDLM decoding. Motivated by this mismatch, we adopt an alignment-flexible supervision strategy during fine-tuning. Specifically, we introduce a special <slack> token and train with the connectionist temporal classification (CTC) objective. We apply this approach to a widely used MDLM and conduct experiments on five open-ended text generation benchmarks. Our method consistently outperforms the original model and improves robustness to positional shifts, indicating that relaxing strict positional supervision is an important factor in improving generation quality in MDLMs.
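To make the alignment-flexible idea concrete, the sketch below implements the standard CTC forward algorithm in pure Python, treating a <slack> token as the CTC blank: the loss then sums over all monotone alignments of the target instead of demanding a position-by-position match. This is a minimal illustration of the CTC objective itself, not the paper's implementation (which in practice would use a batched, log-space version such as `torch.nn.CTCLoss`); the function name and toy vocabulary are our own.

```python
def ctc_prob(probs, target, blank=0):
    """Probability that the per-step distributions `probs` (a T x V list
    of lists) emit `target` after collapsing repeats and removing blanks.

    Here index `blank` plays the role of the <slack> token: it can absorb
    positional shifts, so no single alignment is forced on the model.
    """
    # Interleave blanks with the target: [a, b] -> [_, a, _, b, _]
    ext = [blank]
    for tok in target:
        ext += [tok, blank]
    S, T = len(ext), len(probs)

    # alpha[t][s] = total probability of all alignments of ext[:s+1]
    # over the first t+1 timesteps (the CTC forward variable).
    alpha = [[0.0] * S for _ in range(T)]
    alpha[0][0] = probs[0][blank]
    if S > 1:
        alpha[0][1] = probs[0][ext[1]]

    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]                      # stay on the same symbol
            if s > 0:
                a += alpha[t - 1][s - 1]             # advance by one
            if s > 1 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1][s - 2]             # skip the blank between
            alpha[t][s] = a * probs[t][ext[s]]

    # Valid alignments end on the last label or the trailing blank.
    return alpha[-1][-1] + (alpha[-1][-2] if S > 1 else 0.0)
```

For example, with two timesteps and a uniform distribution over a toy vocabulary {<slack>=0, "a"=1}, `ctc_prob([[0.5, 0.5], [0.5, 0.5]], [1])` sums the three alignments "a a", "a <slack>", and "<slack> a", giving 0.75, whereas strict positional supervision would score only one fixed placement of "a".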

Mengyu Ye, Ryosuke Takahashi, Keito Kudo, Jun Suzuki • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Question Answering | GPQA | Accuracy | 30.8 | 258 |
| Code Generation | MBPP | Accuracy (%) | 38.4 | 146 |
| Question Answering | MMLU | Accuracy | 64.1 | 62 |
| Open-ended Text Generation | Arena-hard Creative-Writing | Pairwise Win Rate | 80.2 | 4 |
| Open-ended Text Generation | Creative-Writing-Bench v3 | Score | 27.4 | 4 |
| Open-ended Text Generation | MTBench | LLM Judge Score | 3.7 | 4 |
| Open-ended Text Generation | WildBench | Score | -1.7 | 4 |
| Open-ended Text Generation | Arena-hard Hard-Prompt | Pairwise Win Rate | 51.4 | 4 |
