
Sink-Aware Pruning for Diffusion Language Models

About

Diffusion Language Models (DLMs) incur high inference cost due to iterative denoising, motivating efficient pruning. Existing pruning heuristics, largely inherited from autoregressive (AR) LLMs, typically preserve attention-sink tokens because AR sinks serve as stable global anchors. We show that this assumption does not hold for DLMs: the attention-sink position exhibits substantially higher variance over the full generation trajectory (measured by how the dominant sink locations shift across timesteps), indicating that sinks are often transient and less structurally essential than in AR models. Based on this observation, we propose Sink-Aware Pruning, which automatically identifies and prunes unstable sinks in DLMs, in contrast to prior AR-oriented pruning studies that usually keep sinks. Without retraining, our method achieves a better quality-efficiency trade-off and outperforms strong prior pruning baselines under matched compute. Our code is available at https://github.com/VILA-Lab/Sink-Aware-Pruning.
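The sink-instability observation above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes you can extract one attention map per denoising timestep, takes the key position receiving the most total attention as the dominant sink, and measures how often that position shifts across timesteps (the function name `sink_instability` and the array shapes are hypothetical).

```python
import numpy as np

def sink_instability(attn_maps):
    """Measure how unstable the dominant attention sink is across timesteps.

    attn_maps: list of arrays of shape [num_heads, seq_len, seq_len],
    one attention map per denoising timestep (illustrative shapes).
    Returns (shift_rate, sink_positions): the fraction of consecutive
    timesteps where the dominant sink moves, and the per-step sinks.
    """
    sinks = []
    for A in attn_maps:
        # Average over heads, then sum over queries: total attention
        # mass each key position receives at this timestep.
        col_mass = A.mean(axis=0).sum(axis=0)  # [seq_len]
        sinks.append(int(col_mass.argmax()))   # dominant sink position
    # Higher shift rate = sink position is transient (the DLM case);
    # a rate near 0 matches the stable anchors seen in AR models.
    shifts = sum(a != b for a, b in zip(sinks, sinks[1:]))
    return shifts / max(len(sinks) - 1, 1), sinks
```

Under this sketch, positions whose sink status is transient (high shift rate) would be candidates for pruning, while a consistently dominant position would be kept.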

Aidar Myrzakhan, Tianyi Li, Bowei Guo, Shengkun Tang, Zhiqiang Shen • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | WinoGrande | Accuracy | 50.67 | 776 |
| Language Understanding | MMLU | Accuracy | 33.3 | 756 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 60.34 | 329 |
| Question Answering | GPQA | Accuracy | 25 | 258 |
| Science Question Answering | ARC Challenge | Accuracy | 38.2 | 234 |
| Science Question Answering | ARC Easy | Accuracy | 71.75 | 101 |
| Reading Comprehension | RACE | Accuracy | 28.2 | 34 |
| Language Understanding | LLM Benchmark Suite (MMLU, ARC-C, PIQA, WinoG, GSM8K, HellaSwag, GPQA, RACE) (test) | Overall Accuracy | 57.68 | 13 |
| Zero-shot Language Understanding and Reasoning | LLM Evaluation Suite (MMLU, ARC-C, PIQA, WinoG, GSM8K, HellaSwag, GPQA, RACE) zero-shot LLaDA1.5 | Average Score | 58.47 | 13 |
| Commonsense Inference | HellaSwag | Accuracy | 52.1 | 13 |

(Showing 10 of 13 rows.)

Other info

GitHub: https://github.com/VILA-Lab/Sink-Aware-Pruning
