
Early Decisions Matter: Proximity Bias and Initial Trajectory Shaping in Non-Autoregressive Diffusion Language Models

About

Diffusion-based language models (dLLMs) have emerged as a promising alternative to autoregressive language models, offering the potential for parallel token generation and bidirectional context modeling. However, harnessing this flexibility for fully non-autoregressive decoding remains an open question, particularly for reasoning and planning tasks. In this work, we investigate non-autoregressive decoding in dLLMs by systematically analyzing its inference dynamics along the temporal axis. Specifically, we uncover an inherent failure mode in confidence-based non-autoregressive generation stemming from a strong proximity bias: the tendency of the denoising order to concentrate on spatially adjacent tokens. This local dependency leads to spatial error propagation, making the entire trajectory critically contingent on the initial unmasking position. Leveraging this insight, we present a minimal-intervention approach that guides early token selection, employing a lightweight planner and end-of-sequence temperature annealing. We thoroughly evaluate our method on various reasoning and planning tasks and observe substantial overall improvement over existing heuristic baselines without significant computational overhead.
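To make the decoding dynamics concrete, here is a minimal toy sketch of confidence-based non-autoregressive unmasking with two of the interventions the abstract describes: a planner-chosen initial unmasking position and end-of-sequence temperature annealing. This is an illustrative reconstruction, not the paper's implementation; the function names (`confidence_decode`, `logits_fn`, `first_pos`), the linear annealing schedule, and the use of a single greedy unmask per step are all assumptions.

```python
import numpy as np

MASK = -1  # sentinel for a still-masked position (assumed convention)

def confidence_decode(logits_fn, seq_len, vocab_size, eos_id,
                      t_start=2.0, t_end=1.0, first_pos=None):
    """Toy confidence-based non-autoregressive decoding loop.

    Each step unmasks the masked position where the model is most
    confident. The EOS logit is divided by a temperature that anneals
    from t_start down to t_end, damping premature end-of-sequence
    predictions early in decoding (a sketch of EOS temperature
    annealing). If first_pos is given, it overrides the confidence
    heuristic on the first step, standing in for a lightweight planner
    that picks the initial unmasking position.
    """
    tokens = np.full(seq_len, MASK, dtype=int)
    for step in range(seq_len):
        frac = step / max(seq_len - 1, 1)
        t_eos = t_start + (t_end - t_start) * frac   # linear anneal
        logits = logits_fn(tokens).copy()            # (seq_len, vocab_size)
        logits[:, eos_id] /= t_eos                   # soften EOS early on
        probs = np.exp(logits - logits.max(-1, keepdims=True))
        probs /= probs.sum(-1, keepdims=True)
        conf = probs.max(-1)
        conf[tokens != MASK] = -np.inf               # only masked slots compete
        if step == 0 and first_pos is not None:
            pos = first_pos                          # planner-chosen start
        else:
            pos = int(conf.argmax())                 # greedy confidence pick
        tokens[pos] = int(probs[pos].argmax())
    return tokens
```

With a real dLLM, `logits_fn` would re-run the denoiser on the partially unmasked sequence each step; here any function returning a `(seq_len, vocab_size)` array suffices to trace the unmasking order, which is how the proximity bias in the denoising trajectory can be inspected.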

Jiyeon Kim, Sungik Choi, Yongrae Jo, Moontae Lee, Minjoon Seo • 2026

Related benchmarks

Task                     Dataset                        Metric     Result   Rank
Logical reasoning        Sudoku                         Accuracy   75.2     119
Logical reasoning        Countdown                      Accuracy   52       16
Mathematical reasoning   GSM8K 8B Instruct (test)       Accuracy   58.6     9
Mathematical reasoning   MATH 8B Instruct (test)        Accuracy   23       9
Mathematical reasoning   Countdown 8B Instruct (test)   Accuracy   46.1     9
Logical reasoning        Sudoku 8B Instruct (test)      Accuracy   69.5     9
Mathematical reasoning   Countdown (CTD) (test)         Accuracy   43.8     4
Logical reasoning        Sudoku (SDK) (test)            Accuracy   67       4
