
Switch Attention: Towards Dynamic and Fine-grained Hybrid Transformers

About

The attention mechanism is the core component of modern transformer architectures. However, standard full attention scales quadratically with sequence length, making it a major bottleneck in long-context language modeling. Sliding window attention restricts the context length for better efficiency, at the cost of a narrower receptive field. While existing efforts attempt to combine the benefits of both by building hybrid models, they often resort to static, heuristically designed alternating patterns that prevent efficient allocation of computation across scenarios. In this paper, we propose Switch Attention (SwiAttn), a novel hybrid transformer that enables dynamic and fine-grained routing between full attention and sliding window attention. For each token at each transformer layer, SwiAttn dynamically routes the computation to either a full-attention branch for global information aggregation or a sliding-window branch for efficient local pattern matching. An adaptive regularization objective encourages the model toward efficiency. Moreover, we adopt continual pretraining to transfer the full-attention architecture to the hybrid one. Extensive experiments on twenty-three benchmark datasets across both regular (4K) and long (32K) context lengths demonstrate the effectiveness of the proposed method.
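To make the routing idea concrete, here is a minimal PyTorch sketch of per-token switching between a full-attention branch and a sliding-window branch. It is based only on the abstract: the linear router, the soft sigmoid gate, the window size, and the mean-usage regularizer are all assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwitchAttentionSketch(nn.Module):
    """Illustrative per-token routing between full and sliding-window attention.

    Sketch only: router design, gating, and regularizer form are assumed,
    not taken from the SwiAttn paper.
    """

    def __init__(self, d_model: int, n_heads: int, window: int = 128):
        super().__init__()
        self.n_heads = n_heads
        self.window = window
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One routing logit per token: high -> full attention, low -> windowed.
        self.router = nn.Linear(d_model, 1)

    def forward(self, x: torch.Tensor):
        B, T, D = x.shape
        H, hd = self.n_heads, D // self.n_heads
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (t.view(B, T, H, hd).transpose(1, 2) for t in (q, k, v))

        idx = torch.arange(T, device=x.device)
        causal = idx[None, :] <= idx[:, None]                        # (T, T)
        local = causal & (idx[:, None] - idx[None, :] < self.window)

        # Compute both branches for clarity; an efficient kernel would run
        # only the branch each token is actually routed to.
        full_out = F.scaled_dot_product_attention(q, k, v, attn_mask=causal)
        local_out = F.scaled_dot_product_attention(q, k, v, attn_mask=local)

        # Soft per-token routing probability; a hard top-1 route with a
        # straight-through estimator is another plausible training choice.
        p_full = torch.sigmoid(self.router(x))                       # (B, T, 1)
        gate = p_full.unsqueeze(1)                                   # (B, 1, T, 1)
        mix = gate * full_out + (1 - gate) * local_out
        y = self.out(mix.transpose(1, 2).reshape(B, T, D))

        # Assumed efficiency regularizer: penalize the fraction of tokens
        # sent to the expensive quadratic branch.
        reg = p_full.mean()
        return y, reg
```

At inference time, a hard route (e.g., tokens with `p_full > 0.5` take the full-attention branch) would let most tokens skip the quadratic computation entirely, which is where the efficiency gain of such a hybrid would come from.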

Yusheng Zhao, Hourun Li, Bohan Wu, Jingyang Yuan, Meng Zhang, Yichun Yin, Lifeng Shang, Ming Zhang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | -- | -- | 1891 |
| Commonsense Reasoning | WinoGrande | -- | -- | 1085 |
| Commonsense Reasoning | PIQA | Accuracy | 74 | 751 |
| Language Modeling | WikiText | PPL | 15 | 732 |
| Commonsense Reasoning | BoolQ | Accuracy | 62 | 212 |
| Commonsense Reasoning | ARC Challenge | Accuracy | 34 | 190 |
| Commonsense Reasoning | SIQA | Accuracy | 42.3 | 106 |
| Commonsense Reasoning | ARC Easy | Accuracy | 63 | 72 |
| In-context Retrieval | Real-world data | SQuAD Accuracy | 49.4 | 18 |
| Commonsense Reasoning | OpenBookQA | Normalized Accuracy | 36.8 | 8 |

(Showing 10 of 12 rows.)
