
Just on Time: Token-Level Early Stopping for Diffusion Language Models

About

Diffusion language models generate text through iterative refinement, a process that is often computationally inefficient because many tokens reach stability long before the final denoising step. We introduce a training-free, token-level early stopping approach that identifies convergence independently at each position. Our method leverages lightweight signals derived from the model's predictions and local context to dynamically determine when individual tokens can be finalized. This yields adaptive per-token freezing without task-specific fine-tuning, substantially reducing the total number of diffusion steps required. Across diverse benchmarks spanning mathematical reasoning, general question answering, and scientific understanding, our approach achieves state-of-the-art efficiency gains while preserving generation quality.
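The core idea described above can be sketched in a toy denoising loop: track a per-token stability signal across refinement steps and freeze any position once it has been stable long enough, stopping entirely when all positions are frozen. This is a minimal illustration, not the paper's method; every name here (`denoise_step`, `STABILITY_WINDOW`, the toy convergence model) is an assumption for demonstration only.

```python
# Hedged sketch of token-level early stopping for a diffusion LM.
# All names and the toy convergence dynamics are illustrative
# assumptions, not the paper's actual algorithm or API.
import random

VOCAB = list(range(50))
SEQ_LEN = 8
MAX_STEPS = 20
STABILITY_WINDOW = 3   # freeze a token once unchanged this many steps

def denoise_step(tokens, frozen, rng):
    """Toy stand-in for one refinement step: each unfrozen position
    drifts toward a fixed target token, mimicking gradual convergence."""
    target = [i % len(VOCAB) for i in range(len(tokens))]
    out = list(tokens)
    for i in range(len(tokens)):
        if not frozen[i] and rng.random() < 0.6:
            out[i] = target[i]          # this position "converges" now
    return out

def generate(rng):
    tokens = [rng.choice(VOCAB) for _ in range(SEQ_LEN)]
    frozen = [False] * SEQ_LEN
    stable_for = [0] * SEQ_LEN          # consecutive steps without change
    steps_used = 0
    for _ in range(MAX_STEPS):
        steps_used += 1
        new_tokens = denoise_step(tokens, frozen, rng)
        for i in range(SEQ_LEN):
            if frozen[i]:
                continue                # finalized positions are skipped
            if new_tokens[i] == tokens[i]:
                stable_for[i] += 1
            else:
                stable_for[i] = 0       # change observed: reset the signal
            if stable_for[i] >= STABILITY_WINDOW:
                frozen[i] = True        # per-token early stop
        tokens = new_tokens
        if all(frozen):
            break                       # every position converged: stop early
    return tokens, steps_used

rng = random.Random(0)
tokens, steps = generate(rng)
print(steps)
```

In this sketch the stability signal is simply "argmax unchanged for k steps"; the paper's lightweight signals are derived from the model's predictions and local context, but the freezing mechanism they drive has the same shape.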

Zahar Kohut, Severyn Shykula, Dmytro Khamula, Mykola Vysotskyi, Taras Rumezhak, Volodymyr Karpiv • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | - | - | 1460 |
| Code Generation | HumanEval | Pass@1 | 58.5 | 850 |
| Mathematical Reasoning | GSM8K | Speed Up (x) | 5.54 | 177 |
| Multi-task Language Understanding | MMLU | MMLU Score | 66.7 | 14 |
| Code Generation | HumanEval | Pass@1 | 0.585 | 8 |
| Language Understanding | MMLU | MMLU Score | 66.7 | 8 |
