
Learn from Your Mistakes: Self-Correcting Masked Diffusion Models

About

Masked diffusion models (MDMs) have emerged as a promising alternative to autoregressive models, enabling parallel token generation while achieving competitive performance. Despite these advantages, MDMs face a fundamental limitation: once tokens are unmasked, they remain fixed, leading to error accumulation and ultimately degrading sample quality. We address this by proposing a framework that trains a model to perform both unmasking and correction. By reusing outputs from the MDM denoising network as inputs for corrector training, we train a model to recover from potential mistakes. During generation, we interleave additional corrective refinement steps between unmasking steps, allowing the model to revise already-decoded tokens and improve its outputs. We name our training and sampling method Progressive Self-Correction (ProSeCo) for its unique ability to iteratively refine an entire sequence, including already generated tokens. We conduct extensive experimental validation across multiple conditional and unconditional tasks, demonstrating that ProSeCo yields better quality-efficiency trade-offs (up to roughly 2-3x faster sampling) and enables inference-time compute scaling to further increase sample quality beyond standard MDMs (up to roughly 1.3x improvement on benchmarks).
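
The abstract describes a sampler that alternates unmasking steps with corrective refinement steps over already-decoded tokens. Below is a minimal PyTorch sketch of such a loop; the function name `proseco_sample`, the `MASK` token id, the confidence-based unmasking rule, and the step counts (`num_unmask_steps`, `correct_steps_per_unmask`) are all assumptions for illustration, not the paper's released implementation.

```python
# Sketch of ProSeCo-style sampling. Everything here (the MASK id, the
# confidence-based unmasking rule, the step counts) is illustrative.
import torch

MASK = 0  # hypothetical mask token id

@torch.no_grad()
def proseco_sample(model, seq_len, num_unmask_steps=8,
                   correct_steps_per_unmask=2, device="cpu"):
    """Interleave unmasking steps with corrective refinement steps."""
    x = torch.full((seq_len,), MASK, dtype=torch.long, device=device)
    per_step = -(-seq_len // num_unmask_steps)  # ceil(seq_len / steps)

    for _ in range(num_unmask_steps):
        # Unmasking step: commit the most confident masked positions.
        logits = model(x)
        logits[:, MASK] = float("-inf")  # never predict the mask token
        conf, pred = logits.softmax(-1).max(-1)
        masked = x == MASK
        if masked.any():
            conf = conf.masked_fill(~masked, float("-inf"))
            idx = conf.topk(min(per_step, int(masked.sum()))).indices
            x[idx] = pred[idx]

        # Corrective refinement: re-predict every already-decoded token
        # and overwrite any the model now disagrees with.
        for _ in range(correct_steps_per_unmask):
            logits = model(x)
            logits[:, MASK] = float("-inf")
            pred = logits.argmax(-1)
            decoded = x != MASK
            x[decoded] = pred[decoded]
    return x

# Toy stand-in for the trained denoiser/corrector network, so the
# sketch runs end to end; a real model would condition on x.
torch.manual_seed(0)
base = torch.randn(16, 32)  # (seq_len, vocab_size) fixed toy logits
model = lambda x: base + 0.1 * torch.randn_like(base)
print(proseco_sample(model, seq_len=16))
```

Note that the same network serves both roles, mirroring the abstract's idea of reusing denoiser outputs for corrector training, and that `correct_steps_per_unmask` is the knob that would enable the inference-time compute scaling the abstract mentions.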

Yair Schiff, Omer Belhasin, Roy Uziel, Guanghan Wang, Marianne Arriola, Gilad Turok, Michael Elad, Volodymyr Kuleshov • 2026

Related benchmarks

Task                           Dataset       Metric    Result   Rank
Mathematical Reasoning         Minerva       Pass@1    35.1     138
Unconditional Text Generation  OpenWebText   Gen. PPL  11.1     56
Coding                         HumanEval     Pass@1    62.2     52
Code                           MBPP          Pass@1    50.2     43
Math                           GSM8K         Pass@1    82.18    9
