
MAGE: All-[MASK] Block Already Knows Where to Look in Diffusion LLM

About

Block diffusion LLMs are emerging as a promising next paradigm for language generation, but their use of KV caching makes memory access a dominant bottleneck in long-context settings. While dynamic sparse attention has been actively explored, existing methods designed for autoregressive LLMs rely on approximate importance estimation and perform poorly when adapted to block diffusion. This work identifies a key opportunity unique to block diffusion: attention at the first All-[MASK] denoising step reliably predicts important KV entries and budget requirements, enabling MAGE to perform a single exact attention pass per block and reuse it for training-free sparse denoising. Across long-context benchmarks including LongBench and Needle-in-a-Haystack, MAGE achieves near-lossless accuracy with a fraction of the KV budget while delivering up to 3-4x end-to-end speedup, consistently outperforming AR-oriented sparse attention baselines. A lightweight fine-tuning strategy further strengthens [MASK]-guided patterns with minimal cost, requiring only a few hours of training on a single NVIDIA H100 GPU for both 1.5B and 7B models.
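The core idea described above can be illustrated with a minimal sketch: at the first all-[MASK] denoising step, run one exact attention pass over the full KV cache, rank cached entries by the attention they receive, and reuse the selected indices for the block's remaining denoising steps. This is an assumption-laden toy (NumPy, single head, `select_kv_budget` and `sparse_attention` are hypothetical names), not the paper's implementation.

```python
import numpy as np

def select_kv_budget(q_mask, K, keep_ratio=0.25):
    """One exact attention pass at the all-[MASK] step (illustrative
    sketch). Scores every cached key against the block's masked
    queries, then keeps the top fraction as the KV budget."""
    d = K.shape[-1]
    scores = q_mask @ K.T / np.sqrt(d)                 # (block_len, n_kv)
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    importance = probs.max(axis=0)                     # peak attention per KV entry
    budget = max(1, int(keep_ratio * K.shape[0]))
    return np.sort(np.argsort(importance)[-budget:])   # kept KV indices

def sparse_attention(q, K, V, keep):
    """Subsequent denoising steps attend only to the kept entries,
    reusing the indices instead of re-estimating importance."""
    d = K.shape[-1]
    scores = q @ K[keep].T / np.sqrt(d)
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)
    return probs @ V[keep]

rng = np.random.default_rng(0)
K = rng.normal(size=(64, 16))                          # cached keys
V = rng.normal(size=(64, 16))                          # cached values
q_mask = rng.normal(size=(4, 16))                      # block queries at the all-[MASK] step
keep = select_kv_budget(q_mask, K, keep_ratio=0.25)    # 25% KV budget -> 16 entries
out = sparse_attention(q_mask, K, V, keep)             # shape (4, 16)
```

Note that only the one pass at the all-[MASK] step touches the full cache; every later step reads just the kept 25% of entries, which is where the memory-bandwidth savings come from.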

Omin Kwon, Yeonjae Kim, Doyeon Kim, Minseo Kim, Yeonhong Park, Jae W. Lee • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Needle-In-A-Haystack Retrieval | Needle-in-a-Haystack 32K context (test) | Accuracy | 70 | 30
Needle-In-A-Haystack Retrieval | Needle-in-a-Haystack 8K context (test) | Accuracy | 100 | 30
