
Breaking Block Boundaries: Anchor-based History-stable Decoding for Diffusion Large Language Models

About

Diffusion Large Language Models (dLLMs) have recently become a promising alternative to autoregressive large language models (ARMs). Semi-autoregressive (Semi-AR) decoding is widely employed in base dLLMs and advanced decoding strategies due to its superior performance. However, our observations reveal that Semi-AR decoding suffers from inherent block constraints, which cause the decoding of many cross-block stable tokens to be unnecessarily delayed. To address this challenge, we systematically investigate the identification of stable tokens and present three key findings: (1) naive lookahead decoding is unreliable, (2) a token's stability closely correlates with its convergence trend, and (3) historical information is isolated. Building on these insights, we propose Anchor-based History-stable Decoding (AHD), a training-free, plug-and-play dynamic decoding strategy. Specifically, AHD monitors the stability trend of tokens in real time through dynamic anchors. Once a token reaches stability, AHD decodes it early across block boundaries to improve both efficiency and performance. Extensive experiments across language, vision-language, and audio-language domains demonstrate that AHD improves performance and inference efficiency simultaneously. Notably, AHD effectively reverses the performance degradation typically observed in existing decoding acceleration strategies. For instance, on the BBH benchmark, our approach reduces decoding steps by 80% while improving performance by 3.67%.
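The core mechanism the abstract describes — tracking whether a masked position's prediction has converged across recent denoising steps, and unmasking it early even outside the current block — can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function name `ahd_select`, the `window` parameter, and the argmax-stability criterion are assumptions standing in for the paper's dynamic-anchor test.

```python
import numpy as np

def ahd_select(history, window=3):
    """Return positions whose predicted token has stayed constant over
    the last `window` denoising steps -- treated here as 'stable' and
    eligible for early cross-block decoding.

    history: list of np.ndarray, one per denoising step, each holding
             the argmax token id for every still-masked position.
    """
    if len(history) < window:
        return []
    recent = np.stack(history[-window:])          # [window, num_positions]
    stable = np.all(recent == recent[0], axis=0)  # unchanged across window
    return np.flatnonzero(stable).tolist()

# Toy demo: 5 masked positions, argmax predictions over 4 denoising steps.
history = [
    np.array([7, 2, 9, 4, 1]),
    np.array([7, 3, 9, 4, 1]),
    np.array([7, 3, 9, 5, 1]),
    np.array([7, 3, 9, 5, 1]),
]
print(ahd_select(history, window=3))  # positions 0, 1, 2, 4 are stable
```

In a real Semi-AR decoder, the returned positions would be committed immediately instead of waiting for their block's turn, which is where the step reduction comes from.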

Shun Zou, Yong Wang, Zehui Chen, Lin Chen, Chongyang Tao, Feng Zhao, Xiangxiang Chu• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reasoning | BBH | – | – | 672 |
| Mathematical Reasoning | MATH-Vision | – | – | 32 |
| Truthfulness | TruthfulQA | Truthfulness Score | 41.98 | 16 |
| Mathematical Reasoning | MATH | Score | 34.34 | 12 |
| Mathematical Reasoning | ASDIV | Score | 78.79 | 12 |
| Code Generation | HumanEval | Score | 43.29 | 6 |
| Code Generation | HumanEval | Score | 43.9 | 6 |
| Code Generation | HumanEval 1024 sequence length (test) | Code Generation Score | 48.78 | 6 |
| Code Generation | HumanEval 2048 sequence length (test) | Score | 49.39 | 6 |
| Mathematical Reasoning | GSM8K CoT LLaDA-8B-Instruct (test) | Score | 80.51 | 6 |
Showing 10 of 22 rows
