
Locally Confident, Globally Stuck: The Quality-Exploration Dilemma in Diffusion Language Models

About

Diffusion large language models (dLLMs) theoretically permit token decoding in arbitrary order, a flexibility that could enable richer exploration of reasoning paths than autoregressive (AR) LLMs. In practice, however, random-order decoding often hurts generation quality. To mitigate this, low-confidence remasking improves single-sample quality (e.g., Pass@1) by prioritizing confident tokens, but it also suppresses exploration and limits multi-sample gains (e.g., Pass@k), creating a fundamental quality–exploration dilemma. In this paper, we provide a unified explanation of this dilemma. We show that low-confidence remasking improves a myopic proxy for quality while provably constraining the entropy of the induced sequence distribution. To overcome this limitation, we characterize the optimal distribution that explicitly balances quality and exploration, and develop a simple Independent Metropolis–Hastings sampler that approximately targets this distribution during decoding. Experiments across a range of reasoning benchmarks, including MATH500, AIME24/25, HumanEval, and MBPP, show that our approach yields a better quality–exploration tradeoff than both random and low-confidence remasking.
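The paper's sampler operates over remasking decisions during dLLM decoding; the details are not given in this abstract. As a rough illustration of the underlying mechanism, the sketch below implements the generic Independent Metropolis–Hastings rule on a toy discrete target: proposals are drawn independently of the current state and accepted with probability min(1, π(x′)q(x)/(π(x)q(x′))). All names, the toy target, and the uniform proposal are illustrative assumptions, not the paper's actual scoring or proposal distribution.

```python
import math
import random

def independent_mh(log_target, log_proposal, sample_proposal, n_steps, rng):
    """Independent Metropolis-Hastings: each proposal x' is drawn from a
    fixed distribution q, independent of the current state x, and accepted
    with probability min(1, pi(x') q(x) / (pi(x) q(x')))."""
    x = sample_proposal(rng)
    samples = []
    for _ in range(n_steps):
        x_new = sample_proposal(rng)
        log_alpha = (log_target(x_new) - log_target(x)
                     + log_proposal(x) - log_proposal(x_new))
        if log_alpha >= 0 or rng.random() < math.exp(log_alpha):
            x = x_new  # accept the proposal; otherwise keep the current state
        samples.append(x)
    return samples

# Toy target over three states; the weights stand in for a combined
# quality/exploration score (purely illustrative).
weights = {0: 0.7, 1: 0.2, 2: 0.1}
log_target = lambda x: math.log(weights[x])
log_proposal = lambda x: math.log(1 / 3)       # uniform independent proposal q
sample_proposal = lambda rng: rng.randrange(3)

rng = random.Random(0)
samples = independent_mh(log_target, log_proposal, sample_proposal, 20000, rng)
freqs = {s: samples.count(s) / len(samples) for s in weights}
```

Because the chain leaves the target invariant, the empirical state frequencies converge to the target weights; the independence of the proposal from the current state is what distinguishes this from random-walk Metropolis.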

Liancheng Fang, Aiwei Liu, Henry Peng Zou, Yankai Chen, Enze Ma, Leyi Pan, Chunyu Miao, Wei-Chieh Huang, Xue Liu, Philip S. Yu • 2026

Related benchmarks

Task             Dataset     Metric   Result   Rank
Code Generation  HumanEval+  Pass@1   69.6     383
Code Generation  MBPP        Pass@1   77.6     41
Math Reasoning   AIME 2025   Pass@1   7.4      19
Math Reasoning   MATH500     Pass@1   54       18
Code Generation  HumanEval   Pass@1   74.5     13
Code Generation  MBPP+       Pass@1   66.4     9
Math Reasoning   AIME 24     Pass@1   9.5      4
