Learning Correlation Structures for Vision Transformers

About

We introduce a new attention mechanism, dubbed structural self-attention (StructSA), that leverages rich correlation patterns naturally emerging in key-query interactions of attention. StructSA generates attention maps by recognizing space-time structures of key-query correlations via convolution and uses them to dynamically aggregate local contexts of value features. This effectively leverages rich structural patterns in images and videos such as scene layouts, object motion, and inter-object relations. Using StructSA as a main building block, we develop the structural vision transformer (StructViT) and evaluate its effectiveness on both image and video classification tasks, achieving state-of-the-art results on ImageNet-1K, Kinetics-400, Something-Something V1 & V2, Diving-48, and FineGym.

Manjin Kim, Paul Hongsuck Seo, Cordelia Schmid, Minsu Cho • 2024
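
Below is a minimal, single-head sketch of what a StructSA-style layer could look like, based only on the description above: key-query correlation maps are computed densely, a small convolution recognizes their local structure, and the resulting dynamic kernels aggregate local contexts of the value features. The module name, shapes, hyperparameters, and the 2D (image) simplification are all assumptions for illustration, not the authors' implementation; for videos, the structure-recognizing convolution would extend to 3D over space-time. See the official code linked under "Other info" for the actual method.

```python
# Illustrative sketch only; names and shapes are assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StructuralAttention2D(nn.Module):
    def __init__(self, dim, kernel_size=3):
        super().__init__()
        self.q = nn.Linear(dim, dim)
        self.k = nn.Linear(dim, dim)
        self.v = nn.Linear(dim, dim)
        self.scale = dim ** -0.5
        self.kernel_size = kernel_size
        # Convolution over each query's key-correlation map: it looks at a
        # local neighborhood of correlation values and emits k*k weights
        # used as a dynamic kernel for aggregating local value contexts.
        self.struct_conv = nn.Conv2d(1, kernel_size * kernel_size,
                                     kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        # x: (B, H, W, C) feature map
        B, H, W, C = x.shape
        N = H * W
        q = self.q(x).reshape(B, N, C)
        k = self.k(x).reshape(B, N, C)
        v = self.v(x).reshape(B, N, C)

        # Dense key-query correlation: one (H, W) map per query location.
        corr = torch.einsum('bqc,bkc->bqk', q, k) * self.scale  # (B, N, N)
        corr = corr.softmax(dim=-1)

        # Recognize local correlation structure, producing k*k dynamic
        # kernel weights per (query, key) pair.
        corr_maps = corr.reshape(B * N, 1, H, W)
        kernels = self.struct_conv(corr_maps)        # (B*N, k*k, H, W)
        kernels = kernels.reshape(B, N, self.kernel_size ** 2, N)

        # Unfold values so each key position carries its k*k local context,
        # then aggregate those contexts with the dynamic kernels.
        v_map = v.reshape(B, H, W, C).permute(0, 3, 1, 2)     # (B, C, H, W)
        v_local = F.unfold(v_map, self.kernel_size,
                           padding=self.kernel_size // 2)     # (B, C*k*k, N)
        v_local = v_local.reshape(B, C, self.kernel_size ** 2, N)

        # out[b, q, c] = sum_{t, n} kernels[b, q, t, n] * v_local[b, c, t, n]
        out = torch.einsum('bqtn,bctn->bqc', kernels, v_local)
        return out.reshape(B, H, W, C)
```

As a quick shape check, `StructuralAttention2D(64)(torch.randn(1, 14, 14, 64))` returns a tensor of the same `(1, 14, 14, 64)` shape. Note this naive sketch materializes the full correlation tensor and is quadratic in the number of tokens; it is meant to show the mechanism, not to be efficient.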

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | ADE20K (val) | mIoU | 48.5 | 2731 |
| Object Detection | COCO 2017 (val) | -- | -- | 2454 |
| Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy | 86.7 | 1866 |
| Instance Segmentation | COCO 2017 (val) | APm | 0.437 | 1144 |
| Semantic Segmentation | ADE20K | mIoU | 48.5 | 936 |
| Video Classification | Kinetics 400 (test) | Top-1 Accuracy | 83.4 | 97 |
| Video Classification | Something-Something V2 | Top-1 Accuracy | 71.5 | 56 |
| Video Classification | Something-Something V1 | Top-1 Accuracy | 61.3 | 13 |
| Video Classification | Diving-48 v1 (test) | Top-1 Accuracy | 88.3 | 11 |
| Video Classification | FineGym (test) | Gym288 Score | 54.2 | 8 |

Other info

Code
