
Improving Variable-Length Generation in Diffusion Language Models via Length Regularization

About

Diffusion Large Language Models (DLLMs) are inherently ill-suited for variable-length generation, as their inference is defined on a fixed-length canvas and implicitly assumes a known target length. When the length is unknown, as in realistic completion and infilling, naively comparing confidence across mask lengths becomes systematically biased, leading to under-generation or redundant continuations. In this paper, we show that this failure arises from an intrinsic length-induced bias in generation confidence estimates, leaving existing DLLMs without a robust way to determine generation length and making variable-length inference unreliable. To address this issue, we propose LR-DLLM, a length-regularized inference framework for DLLMs that treats generation length as an explicit variable and achieves reliable length determination at inference time. It decouples semantic compatibility from length-induced uncertainty through an explicit length regularization that corrects biased confidence estimates. Based on this, LR-DLLM enables dynamic expansion or contraction of the generation span without modifying the underlying DLLM or its training procedure. Experiments show that LR-DLLM achieves 51.3% Pass@1 on HumanEval-Infilling under fully unknown lengths (+13.4% vs. DreamOn) and 51.5% average Pass@1 on four-language McEval (+14.3% vs. DreamOn).
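The core idea, as the abstract describes it, is that raw confidence scores are not comparable across candidate mask lengths, so length selection must be regularized. The sketch below is a minimal illustration of that idea, not the paper's actual method: the scoring functions, the logarithmic penalty form, and the hyperparameter `alpha` are all assumptions made for the example.

```python
import math

def naive_score(token_logprobs):
    # Mean token log-probability. The abstract argues scores like this
    # are systematically biased across different mask lengths.
    return sum(token_logprobs) / len(token_logprobs)

def regularized_score(token_logprobs, alpha=0.1):
    # Hypothetical length regularization: subtract a penalty that grows
    # with length so scores for different lengths become comparable.
    # The log-penalty form and alpha are illustrative assumptions.
    length = len(token_logprobs)
    return naive_score(token_logprobs) - alpha * math.log(length)

def pick_length(candidates, alpha=0.1):
    # candidates: dict mapping a candidate mask length to the list of
    # per-token log-probabilities the model produced for that length.
    # Returns the length with the highest regularized score.
    return max(candidates, key=lambda L: regularized_score(candidates[L], alpha))
```

With `alpha = 0` this reduces to the naive comparison the abstract criticizes; a positive `alpha` trades off per-token confidence against span length, which is one simple way to decouple semantic fit from length-induced uncertainty.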

Zicong Cheng, Ruixuan Jia, Jia Li, Guo-Wei Yang, Meng-Hao Guo, Shi-Min Hu • 2026

Related benchmarks

Task                     Dataset                Result                    Rank
Mathematical Reasoning   GSM8K                  Accuracy 75.1             983
Code Infilling           HumanEval Infilling    Pass@1 Single-line 81.6   12
Code Infilling           McEval Single-line     JavaScript Pass@1 78.8    10
Code Infilling           McEval Multi-line      JavaScript Pass@1 38.8    10
Code Generation          HumanEval dLLM-Var     Pass@1 31.7               2
Mathematical Reasoning   MATH500 dLLM-Var       Accuracy 39.8             2
