
Where-to-Unmask: Ground-Truth-Guided Unmasking Order Learning for Masked Diffusion Language Models

About

Masked Diffusion Language Models (MDLMs) generate text by iteratively filling masked tokens, requiring two coupled decisions at each step: which positions to unmask (where-to-unmask) and which tokens to place (what-to-unmask). While standard MDLM training directly optimizes token prediction (what-to-unmask), inference-time unmasking orders (where-to-unmask) are typically determined by heuristic confidence measures or trained through reinforcement learning with costly on-policy rollouts. To address this, we introduce Gt-Margin, a position-wise score derived from ground-truth tokens, defined as the probability margin between the correct token and its strongest alternative. Gt-Margin yields an oracle unmasking order that prioritizes easier positions first under each partially masked state. We demonstrate that leveraging this oracle unmasking order significantly enhances final generation quality, particularly on logical reasoning benchmarks. Building on this insight, we train a supervised unmasking planner via learning-to-rank to imitate the oracle ordering from masked contexts. The resulting planner integrates into standard MDLM sampling to select where-to-unmask, improving reasoning accuracy without modifying the token prediction model.
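As a minimal illustration of the idea in the abstract, the sketch below computes a position-wise Gt-Margin (probability of the ground-truth token minus the probability of its strongest alternative) and selects the easiest masked position first. This is not code from the paper; the function names, array shapes, and use of raw probabilities rather than logits are all assumptions.

```python
import numpy as np

def gt_margin(probs, gt_ids, masked_positions):
    """Hypothetical Gt-Margin sketch.

    probs: (L, V) array of model token probabilities under the current
           partially masked state.
    gt_ids: (L,) array of ground-truth token ids.
    Returns a dict mapping each masked position to the margin between the
    ground-truth token's probability and that of its strongest alternative.
    """
    margins = {}
    for i in masked_positions:
        p = probs[i].copy()
        p_gt = p[gt_ids[i]]
        p[gt_ids[i]] = -np.inf  # exclude the ground-truth token itself
        margins[i] = p_gt - p.max()  # positive = model prefers the truth
    return margins

def oracle_next_position(probs, gt_ids, masked_positions):
    """Easier-first oracle: unmask the position with the largest margin."""
    m = gt_margin(probs, gt_ids, masked_positions)
    return max(m, key=m.get)
```

A large margin at a position means the model already favors the correct token there, so unmasking it first defers the harder, more ambiguous positions until more context is revealed.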

Hikaru Asano, Tadashi Kozuno, Kuniaki Saito, Yukino Baba • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | MATH | Accuracy | 42.5 | 535 |
| Logical reasoning | StrategyQA | Accuracy | 68.5 | 58 |
| Logical reasoning | Sudoku | Accuracy | 0.995 | 44 |
| Question Answering | StrategyQA | Accuracy | 84 | 14 |
| Logical reasoning | Sudoku 9x9 | Accuracy | 0.085 | 5 |
