
Attention Sink Forges Native MoE in Attention Layers: Sink-Aware Training to Address Head Collapse

About

Large Language Models (LLMs) often assign disproportionate attention to the first token, a phenomenon known as the attention sink. Several recent approaches aim to address this issue, including Sink Attention in GPT-OSS and Gated Attention in Qwen3-Next. However, a comprehensive analysis of the relationship among these attention mechanisms is lacking. In this work, we provide both theoretical and empirical evidence demonstrating that the sink in Vanilla Attention and Sink Attention naturally constructs a Mixture-of-Experts (MoE) mechanism within attention layers. This insight explains the head collapse phenomenon observed in prior work, where only a fixed subset of attention heads contributes to generation. To mitigate head collapse, we propose a sink-aware training algorithm with an auxiliary load balancing loss designed for attention layers. Extensive experiments show that our method achieves effective head load balancing and improves model performance across Vanilla Attention, Sink Attention, and Gated Attention. We hope this study offers a new perspective on attention mechanisms and encourages further exploration of the inherent MoE structure within attention layers.
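To make the idea concrete, here is a minimal sketch of what an auxiliary load balancing loss over attention heads could look like, by analogy with the standard MoE load-balancing loss. The abstract does not give the paper's exact formulation, so the function name, the use of non-sink attention mass as a head's "load", and the squared-error penalty against a uniform distribution are all illustrative assumptions.

```python
import numpy as np

def head_load_balancing_loss(attn_weights: np.ndarray) -> float:
    """Illustrative auxiliary loss encouraging balanced load across heads.

    attn_weights: softmax attention maps of shape
        (batch, heads, query_len, key_len), rows summing to 1.

    Treat the attention mass each head spends on non-sink tokens
    (everything except the first key, the attention sink) as that head's
    "load", and penalize deviation from a uniform load across heads --
    analogous to the MoE load-balancing loss. This is a sketch under the
    stated assumptions, not the paper's exact loss.
    """
    # Mass each head places on tokens other than the sink (first key position).
    non_sink_mass = attn_weights[..., 1:].sum(axis=-1)   # (batch, heads, query_len)
    # Average over batch and query positions to get a per-head load.
    load = non_sink_mass.mean(axis=(0, 2))               # (heads,)
    # Normalize loads into a distribution over heads.
    load = load / (load.sum() + 1e-9)
    num_heads = load.shape[0]
    uniform = np.full_like(load, 1.0 / num_heads)
    # Squared-error penalty against uniform load, scaled by head count.
    return float(((load - uniform) ** 2).sum() * num_heads)
```

In training, such a term would typically be added to the task loss with a small coefficient: a head that dumps all of its attention onto the sink token (i.e., a collapsed head) receives zero load and drives the penalty up, while perfectly balanced heads give a loss of zero.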

Zizhuo Fu, Wenxuan Zeng, Runsheng Wang, Meng Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 59.66 | 1460 |
| Code Generation | HumanEval | -- | -- | 850 |
| Multi-task Language Understanding | MMLU | Accuracy | 41.48 | 842 |
| Commonsense Reasoning | PIQA | Accuracy | 77.2 | 647 |
| Mathematical Reasoning | GSM8K | Accuracy (GSM8K) | 10.69 | 358 |
| Commonsense Reasoning | WinoGrande | Accuracy | 60.6 | 231 |
| Long-context Language Understanding | LongBench | M-Avg | 55.38 | 219 |
| Reading Comprehension | BoolQ | Accuracy | 61.04 | 219 |
| Reasoning | ARC | Accuracy | 92.34 | 83 |
| Reasoning | GSM8K | Accuracy | 0.9333 | 83 |

(Showing 10 of 18 rows.)
