
SoLA-Vision: Fine-grained Layer-wise Linear Softmax Hybrid Attention

About

Standard softmax self-attention excels in vision tasks but incurs quadratic complexity O(N^2), limiting high-resolution deployment. Linear attention reduces the cost to O(N), yet its compressed state representations can impair modeling capacity and accuracy. We present an analytical study that contrasts linear and softmax attention for visual representation learning from a layer-stacking perspective. We further conduct systematic experiments on layer-wise hybridization patterns of linear and softmax attention. Our results show that, compared with rigid intra-block hybrid designs, fine-grained layer-wise hybridization can match or surpass performance while requiring fewer softmax layers. Building on these findings, we propose SoLA-Vision (Softmax-Linear Attention Vision), a flexible layer-wise hybrid attention backbone that enables fine-grained control over how linear and softmax attention are integrated. By strategically inserting a small number of global softmax layers, SoLA-Vision achieves a strong trade-off between accuracy and computational cost. On ImageNet-1K, SoLA-Vision outperforms purely linear and other hybrid attention models. On dense prediction tasks, it consistently surpasses strong baselines by a considerable margin. Code will be released.
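The core idea of the abstract, interleaving a few O(N^2) softmax attention layers among O(N) linear attention layers, can be sketched as follows. This is a minimal NumPy illustration, not the paper's released implementation: the feature map, the hybrid pattern, and the bare residual stack (no projections, heads, or normalization) are all simplifying assumptions.

```python
import numpy as np

def softmax_attention(q, k, v):
    # Standard attention: O(N^2) score matrix over sequence length N.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    w = np.exp(scores)
    w /= w.sum(axis=-1, keepdims=True)
    return w @ v

def linear_attention(q, k, v):
    # Kernelized attention: O(N) by exploiting associativity,
    # phi(Q) (phi(K)^T V) instead of (phi(Q) phi(K)^T) V.
    phi = lambda x: np.maximum(x, 0.0) + 1e-6  # simple positive feature map (assumption)
    q, k = phi(q), phi(k)
    kv = k.T @ v                 # (d, d) compressed state
    z = q @ k.sum(axis=0)        # (N,) normalizer
    return (q @ kv) / z[:, None]

# Layer-wise hybrid: mostly linear layers with a small number of global
# softmax layers interleaved. This particular pattern is illustrative only.
pattern = ["linear", "linear", "softmax", "linear", "linear", "softmax"]

def hybrid_forward(x, pattern):
    for kind in pattern:
        attn = softmax_attention if kind == "softmax" else linear_attention
        x = x + attn(x, x, x)    # residual self-attention block
    return x

rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))   # N=16 tokens, d=8 channels
out = hybrid_forward(x, pattern)
print(out.shape)
```

Because the linear layers carry a fixed-size (d, d) state, cost grows linearly in N; only the two softmax layers pay the quadratic price, which is the accuracy/compute trade-off the abstract describes.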

Ruibang Li, Guan Luo, Yiwei Zhang, Jin Gao, Bing Li, Weiming Hu • 2026

Related benchmarks

Task                    Dataset              Result                  Rank
Semantic Segmentation   ADE20K               mIoU: 50.5              936
Image Classification    ImageNet-1K (test)   Top-1 Accuracy: 84.1    359
Object Detection        COCO 2017            AP (Box): 47.5          279
Instance Segmentation   COCO 2017            APm: 42.3               199
