
MViTv2: Improved Multiscale Vision Transformers for Classification and Detection

About

In this paper, we study Multiscale Vision Transformers (MViTv2) as a unified architecture for image and video classification, as well as object detection. We present an improved version of MViT that incorporates decomposed relative positional embeddings and residual pooling connections. We instantiate this architecture in five sizes and evaluate it on ImageNet classification, COCO detection, and Kinetics video recognition, where it outperforms prior work. We further compare MViTv2's pooling attention to window attention mechanisms, and find that it outperforms the latter in accuracy/compute trade-offs. Without bells and whistles, MViTv2 achieves state-of-the-art performance in three domains: 88.8% accuracy on ImageNet classification, 58.7 box AP on COCO object detection, and 86.1% on Kinetics-400 video classification. Code and models are available at https://github.com/facebookresearch/mvit.
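The two improvements named in the abstract can be sketched in a few lines: the decomposed relative positional embedding replaces one joint table over 2D offsets with separate per-axis tables (indexed by row and column distance), and the residual pooling connection adds the (pooled) query back onto the attention output. The following is a minimal single-head sketch under assumed shapes and names (`rel_h`, `rel_w`, `grid` are illustrative, not the repo's API), not the paper's actual implementation:

```python
import math

def pooling_attention(q, k, v, rel_h, rel_w, grid):
    """Hypothetical sketch of MViTv2-style attention for one head.

    q, k, v : lists of token vectors (assumed already pooled), one per grid cell.
    rel_h, rel_w : decomposed relative position tables of length 2H-1 and 2W-1,
                   indexed by shifted row/column distance.
    grid : (H, W) spatial layout of the tokens.
    """
    H, W = grid
    d = len(q[0])
    out = []
    for i in range(len(q)):
        logits = []
        for j in range(len(k)):
            # decomposed relative offsets, shifted to be non-negative indices
            dh = (i // W) - (j // W) + H - 1
            dw = (i % W) - (j % W) + W - 1
            dot = sum(a * b for a, b in zip(q[i], k[j])) / math.sqrt(d)
            # decomposed positional bias: q_i . (R^h[dh] + R^w[dw])
            bias = (sum(a * b for a, b in zip(q[i], rel_h[dh]))
                    + sum(a * b for a, b in zip(q[i], rel_w[dw])))
            logits.append(dot + bias)
        # softmax over keys
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        s = sum(exps)
        attn = [e / s for e in exps]
        mix = [sum(attn[j] * v[j][c] for j in range(len(v))) for c in range(d)]
        # residual pooling connection: add the query signal back to the output
        out.append([mix[c] + q[i][c] for c in range(d)])
    return out
```

The per-axis tables shrink the positional parameter count from O(HW) joint offsets to O(H + W), which is part of what makes the scheme cheap at high resolution.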

Yanghao Li, Chao-Yuan Wu, Haoqi Fan, Karttikeya Mangalam, Bo Xiong, Jitendra Malik, Christoph Feichtenhofer• 2021

Related benchmarks

Task                  | Dataset                      | Metric             | Result | Rank
Semantic Segmentation | ADE20K (val)                 | mIoU               | 41.39  | 2888
Object Detection      | COCO 2017 (val)              | --                 | --     | 2643
Image Classification  | ImageNet-1K 1.0 (val)        | Top-1 Accuracy     | 88.8   | 1952
Image Classification  | ImageNet-1K                  | Top-1 Accuracy     | 85.3   | 1239
Instance Segmentation | COCO 2017 (val)              | APm                | 0.488  | 1201
Classification        | ImageNet-1K 1.0 (val)        | Top-1 Accuracy (%) | 88.8   | 1163
Image Classification  | ImageNet 1k (test)           | Top-1 Accuracy     | 88.8   | 848
Object Detection      | COCO (val)                   | mAP                | 55.8   | 633
Action Recognition    | Something-Something v2 (val) | Top-1 Accuracy     | 73.3   | 545
Action Recognition    | Kinetics-400                 | Top-1 Accuracy     | 86.1   | 481

Showing 10 of 74 rows
