Just on Time: Token-Level Early Stopping for Diffusion Language Models

About

Diffusion language models generate text through iterative refinement, a process that is often computationally inefficient because many tokens reach stability long before the final denoising step. We introduce a training-free, token-level early stopping approach that identifies convergence independently at each position. Our method leverages lightweight signals derived from the model's predictions and local context to dynamically determine when individual tokens can be finalized. This yields adaptive per-token freezing without task-specific fine-tuning, substantially reducing the total number of diffusion steps required. Across diverse benchmarks spanning mathematical reasoning, general question answering, and scientific understanding, our approach achieves state-of-the-art efficiency gains while preserving generation quality.

Zahar Kohut, Severyn Shykula, Dmytro Khamula, Mykola Vysotskyi, Taras Rumezhak, Volodymyr Karpiv • 2026
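The abstract does not spell out the exact convergence signals, so the sketch below is only illustrative: it assumes a per-token freeze rule based on (1) the argmax prediction agreeing across `patience` consecutive denoising steps and (2) softmax confidence exceeding `conf_threshold`. The `denoise_step` function is a hypothetical stand-in for one reverse-diffusion refinement step, not the paper's model.

```python
# Minimal sketch of token-level early stopping in a diffusion LM denoising
# loop. Freeze rule (stability patience + confidence threshold) and
# denoise_step are illustrative assumptions, not the paper's method.
import numpy as np

def softmax(x, axis=-1):
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def denoise_step(logits, step, rng):
    """Hypothetical stand-in for one refinement step: sharpens the
    current logits while injecting noise that shrinks over time."""
    noise = rng.normal(scale=1.0 / (step + 1), size=logits.shape)
    return logits * 1.1 + noise

def generate_with_token_early_stop(seq_len=8, vocab=50, max_steps=64,
                                   patience=3, conf_threshold=0.9, seed=0):
    rng = np.random.default_rng(seed)
    logits = rng.normal(size=(seq_len, vocab))   # noisy initial state
    frozen = np.zeros(seq_len, dtype=bool)       # per-token "finalized" flags
    stable_for = np.zeros(seq_len, dtype=int)    # consecutive-agreement counts
    prev_pred = np.full(seq_len, -1)
    saved_steps = 0

    for step in range(max_steps):
        active = ~frozen
        if not active.any():                     # every token converged early
            saved_steps = max_steps - step
            break
        # Refine only the still-active positions; frozen logits stay fixed.
        logits[active] = denoise_step(logits[active], step, rng)

        probs = softmax(logits)
        pred = probs.argmax(axis=-1)
        conf = probs.max(axis=-1)

        # Convergence signal: stable argmax for `patience` steps AND
        # confidence above threshold -> freeze that position.
        agree = pred == prev_pred
        stable_for = np.where(agree, stable_for + 1, 0)
        frozen |= active & (stable_for >= patience) & (conf >= conf_threshold)
        prev_pred = pred

    return probs.argmax(axis=-1), saved_steps

tokens, saved = generate_with_token_early_stop()
print("decoded token ids:", tokens)
print("denoising steps saved:", saved)
```

Because the freeze mask shrinks the set of positions the model must update, the per-step cost drops as tokens converge, and the loop can terminate well before `max_steps`, which is the source of the step reductions the abstract describes.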

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | - | - | 1891
Code Generation | HumanEval | Pass@1 | 58.5 | 1036
Mathematical Reasoning | GSM8K | Speed Up (x) | 5.54 | 246
Multi-task Language Understanding | MMLU | MMLU Score | 66.7 | 14
Language Understanding | MMLU | MMLU Score | 66.7 | 12
Code Generation | HumanEval | Pass@1 | 0.585 | 8
