
MHLA: Restoring Expressivity of Linear Attention via Token-Level Multi-Head

About

While the Transformer architecture dominates many fields, its quadratic self-attention complexity hinders its use in large-scale applications. Linear attention offers an efficient alternative, but its direct application often degrades performance, with existing fixes typically re-introducing computational overhead through extra modules (e.g., depthwise separable convolution) that defeat the original purpose. In this work, we identify a key failure mode in these methods: global context collapse, where the model loses representational diversity. To address this, we propose Multi-Head Linear Attention (MHLA), which preserves this diversity by computing attention within divided heads along the token dimension. We prove that MHLA maintains linear complexity while recovering much of the expressive power of softmax attention, and verify its effectiveness across multiple domains, achieving a 3.6% improvement on ImageNet classification, a 6.3% gain on NLP, a 12.6% improvement on image generation, and a 41% enhancement on video generation under the same time complexity.

Kewei Zhang, Ye Huang, Yufan Deng, Jincheng Yu, Junsong Chen, Huan Ling, Enze Xie, Daquan Zhou • 2026
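The abstract describes computing linear attention within heads divided along the token dimension rather than the channel dimension. The sketch below is a hypothetical, minimal PyTorch illustration of that idea, assuming an ELU+1 feature map and evenly sized token groups; the function name, shapes, and kernel choice are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the reference MHLA code): tokens are split into
# `num_heads` groups along the sequence dimension and kernelized linear
# attention is computed independently within each group, so per-group
# key/value summaries stay distinct instead of collapsing into one global
# context. Total cost remains linear in sequence length N.
import torch
import torch.nn.functional as F


def feature_map(x: torch.Tensor) -> torch.Tensor:
    # A common positive feature map used in linear attention (ELU + 1); assumed here.
    return F.elu(x) + 1.0


def token_level_multihead_linear_attention(
    q: torch.Tensor,  # (batch, seq_len, dim)
    k: torch.Tensor,  # (batch, seq_len, dim)
    v: torch.Tensor,  # (batch, seq_len, dim)
    num_heads: int = 4,  # number of token groups ("heads" along the token axis)
) -> torch.Tensor:
    b, n, d = q.shape
    assert n % num_heads == 0, "this sketch assumes seq_len divides evenly into groups"
    g = n // num_heads  # tokens per group

    # Reshape the token dimension into groups: (batch, num_heads, group_len, dim).
    q = feature_map(q).reshape(b, num_heads, g, d)
    k = feature_map(k).reshape(b, num_heads, g, d)
    v = v.reshape(b, num_heads, g, d)

    # Per-group linear attention: each token group builds its own KV summary.
    kv = torch.einsum("bhgd,bhge->bhde", k, v)   # (b, H, d, d) per-group summary
    z = k.sum(dim=2)                              # (b, H, d) per-group normalizer
    num = torch.einsum("bhgd,bhde->bhge", q, kv)  # (b, H, g, d)
    den = torch.einsum("bhgd,bhd->bhg", q, z).clamp(min=1e-6).unsqueeze(-1)
    out = num / den

    return out.reshape(b, n, d)


if __name__ == "__main__":
    x = torch.randn(2, 64, 32)
    y = token_level_multihead_linear_attention(x, x, x, num_heads=4)
    print(y.shape)  # torch.Size([2, 64, 32])
```

Compared with a single global KV summary (standard linear attention), keeping one summary per token group preserves more representational diversity at the same asymptotic cost, which is the intuition the abstract gives for avoiding global context collapse.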

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multi-task Language Understanding | MMLU | Accuracy | 23.7 | 842
Image Classification | ImageNet-1K | Top-1 Acc | 84.6 | 836
Language Modeling | WikiText | PPL | 38.31 | 479
Image Generation | ImageNet (val) | FID | 19.164 | 198
Language Modeling | LAMBADA | Perplexity | 71.64 | 99
Long-context Understanding | LongBench (test) | Avg Score | 7.41 | 80
Video Generation | VBench (test) | Semantic Score | 79.59 | 35
Commonsense Reasoning | CSR (Commonsense Reasoning Suite) | Average Accuracy | 47.1 | 10
Text-to-Image Generation | 31M image dataset (test) | FID | 5.9 | 4

Other info

GitHub
