Corrective Diffusion Language Models
About
While Diffusion Language Models (DLMs) are theoretically well-suited for iterative refinement due to their non-causal structure, they often fail to reliably revise incorrect tokens in practice. The key challenge lies in the model's inability to distinguish between correct and erroneous tokens in a visible sequence. Standard masked diffusion language model (MDLM) training supervises only the unmasking of masked positions, which undermines confidence-guided refinement. Based on this observation, we study corrective behavior in DLMs, defined as the ability to assign lower confidence to incorrect tokens and iteratively refine them while preserving correct content. We show that this capability is not induced by conventional masked diffusion objectives and propose a correction-oriented post-training principle that explicitly supervises visible incorrect tokens, enabling discriminative confidence and targeted refinement. To evaluate corrective behavior, we introduce the Code Revision Benchmark, a controllable and executable benchmark for assessing error localization and in-place correction. Experiments on code revision tasks and parallel decoding scenarios demonstrate that models trained with our approach substantially outperform standard MDLMs, with gains that are most pronounced when parallel decoding introduces substantial uncertainty and iterative refinement becomes essential. Our code is publicly available at https://github.com/zhangshuibai/CDLM.
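The corrective behavior described above can be sketched as a simple loop: score every visible token with a confidence function, remask the least-confident positions, and re-predict them while leaving high-confidence tokens untouched. The sketch below is a minimal illustration, not the repository's implementation; `confidence_fn`, `proposal_fn`, and the toy demo are hypothetical stand-ins for a trained model's per-token probabilities and denoising step.

```python
def refine(tokens, confidence_fn, proposal_fn, steps=3, remask_frac=0.25):
    """Confidence-guided iterative refinement (illustrative sketch).

    Each step remasks the fraction of visible tokens with the lowest
    confidence and re-predicts them, preserving high-confidence content.
    """
    tokens = list(tokens)
    for _ in range(steps):
        conf = confidence_fn(tokens)               # per-token confidence in [0, 1]
        k = max(1, int(len(tokens) * remask_frac))
        # indices of the k least-confident tokens
        worst = sorted(range(len(tokens)), key=lambda i: conf[i])[:k]
        for i in worst:
            tokens[i] = proposal_fn(tokens, i)     # re-predict the remasked position
    return tokens

# Toy demo: the "reference" sequence stands in for the model's
# high-probability prediction; corrupted positions get low confidence.
reference = list("corrective")
corrupted = list("corrXctiZe")

def toy_confidence(toks):
    return [1.0 if t == r else 0.0 for t, r in zip(toks, reference)]

def toy_proposal(toks, i):
    return reference[i]

print("".join(refine(corrupted, toy_confidence, toy_proposal,
                     steps=2, remask_frac=0.2)))  # prints "corrective"
```

A model with discriminative confidence makes `worst` land on the erroneous positions, which is exactly the behavior the proposed post-training objective supervises; with a standard MDLM, confidence at incorrect visible tokens can be high, so the remasking step misses them.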
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval (test) | Pass@1 | 22 | 444 |
| Code Generation | MBPP+ | Pass@1 | 21.1 | 122 |
| Code Generation | HumanEval+ (test) | Pass@1 | 21 | 81 |
| Code | MBPP | Pass@1 | 17.5 | 43 |
| Coding | MBPP+ | Pass@1 | 22.6 | 37 |
| Parallel Sequence Generation | ParallelBench | Copy Accuracy | 100 | 6 |
| Code Correction | HumanEval | Pass@1 | 25.7 | 3 |
| Code Correction | HumanEval+ | Pass@1 | 24.2 | 3 |