
Fibottention: Inceptive Visual Representation Learning with Diverse Attention Across Heads

About

Vision Transformers and their variants have achieved remarkable success in diverse visual perception tasks. Despite their effectiveness, they suffer from two significant limitations: first, the quadratic computational complexity of multi-head self-attention (MHSA) restricts scalability to large token counts; second, they depend heavily on large-scale training data to attain competitive performance. In this paper, to address these challenges, we propose a novel sparse self-attention mechanism named Fibottention. Fibottention employs structured sparsity patterns derived from the Wythoff array, enabling $\mathcal{O}(N \log N)$ computational complexity in self-attention. By design, its sparsity patterns vary across attention heads, which provably reduces redundant pairwise interactions while ensuring sufficient and diverse coverage. This leads to an \emph{inception-like functional diversity} in the attention heads and promotes more informative and disentangled representations. We integrate Fibottention into standard Transformer architectures and conduct extensive experiments across multiple domains, including image classification, video understanding, and robot learning. Results demonstrate that models equipped with Fibottention either significantly outperform or achieve on-par performance with their dense MHSA counterparts, while leveraging only $2$–$6\%$ of the pairwise interactions in self-attention heads in typical settings, resulting in substantial computational savings. Moreover, when compared to existing sparse attention mechanisms, Fibottention consistently achieves superior results on a FLOP-equivalent basis. Finally, we provide an in-depth analysis of the enhanced feature diversity resulting from our attention design and discuss its implications for efficient representation learning.
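The core idea described above can be illustrated with a small sketch: each attention head gets its own sparse mask whose allowed offsets follow a Fibonacci-like sequence seeded differently per head, in the spirit of the Wythoff array. This is a minimal illustration, not the paper's implementation; the per-head seeding (`wythoff_offsets`) and the symmetric masking are assumptions, and the paper's exact construction may differ.

```python
import numpy as np

PHI = (1 + 5 ** 0.5) / 2  # golden ratio, which generates the Wythoff array

def wythoff_offsets(head, max_offset):
    """Fibonacci-like offset sequence for one head.

    Seeds the sequence with floor((head+1)*phi) and floor((head+1)*phi^2),
    then extends by summing the previous two terms, mimicking a row of the
    Wythoff array (hypothetical per-head seeding for illustration).
    """
    a = int((head + 1) * PHI)
    b = int((head + 1) * PHI ** 2)
    offsets = []
    while a <= max_offset:
        offsets.append(a)
        a, b = b, a + b
    return offsets

def fibo_mask(n_tokens, head):
    """Boolean attention mask for one head: each token attends to itself
    and to tokens at the head-specific Fibonacci-like offsets, in both
    directions. The offset list has O(log N) entries, so the mask holds
    O(N log N) allowed pairs instead of the dense N^2."""
    mask = np.eye(n_tokens, dtype=bool)
    for off in wythoff_offsets(head, n_tokens - 1):
        idx = np.arange(n_tokens - off)
        mask[idx, idx + off] = True   # attend forward by `off`
        mask[idx + off, idx] = True   # attend backward by `off`
    return mask
```

Because the seeds differ per head, the heads cover different, largely non-overlapping sets of token pairs, which is the mechanism behind the diverse coverage the abstract describes.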

Ali K. Rahimian, Manish K. Govind, Subhajit Maity, Dominick Reilly, Christian Kümmerle, Srijan Das, Aritra Dutta • 2024

Related benchmarks

Task                        | Dataset                       | Metric                   | Result | Rank
Image Classification        | CIFAR-10                      | –                        | –      | 471
Action Recognition          | Toyota SmartHome (TSH) (CV2)  | –                        | –      | 60
Action Recognition          | Toyota Smarthome CS           | Accuracy                 | 57.1   | 58
Image Classification        | CIFAR-100                     | Top-1 Accuracy           | 70.7   | 9
Image Classification        | Tiny-ImageNet                 | Top-1 Accuracy           | 79.1   | 9
Image Classification        | ImageNet-1K                   | Top-1 Accuracy           | 75.5   | 9
Behavioral Cloning          | PushT                         | Task Completion Accuracy | 72     | 4
Video Action Classification | NUCLA (CS)                    | Top-1 Accuracy           | 59.6   | 4
Behavioral Cloning          | Lift                          | Task Completion Accuracy | 100    | 4
Behavioral Cloning          | CAN                           | Task Completion Accuracy | 96     | 4
