Vision Transformer Adapter for Dense Predictions

About

This work investigates a simple yet powerful dense prediction task adapter for the Vision Transformer (ViT). Unlike recently advanced variants that incorporate vision-specific inductive biases into their architectures, the plain ViT suffers from inferior performance on dense predictions due to weak prior assumptions. To address this issue, we propose the ViT-Adapter, which allows a plain ViT to achieve performance comparable to vision-specific transformers. Specifically, the backbone in our framework is a plain ViT that can learn powerful representations from large-scale multi-modal data. When transferring to downstream tasks, a pre-training-free adapter is used to introduce the image-related inductive biases into the model, making it suitable for these tasks. We verify ViT-Adapter on multiple dense prediction tasks, including object detection, instance segmentation, and semantic segmentation. Notably, without using extra detection data, our ViT-Adapter-L yields state-of-the-art 60.9 box AP and 53.0 mask AP on COCO test-dev. We hope that the ViT-Adapter could serve as an alternative for vision-specific transformers and facilitate future research. The code and models will be released at https://github.com/czczup/ViT-Adapter.

Zhe Chen, Yuchen Duan, Wenhai Wang, Junjun He, Tong Lu, Jifeng Dai, Yu Qiao • 2022
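As a rough illustration of the idea summarized above, the sketch below wraps unmodified ViT blocks with an adapter branch: a convolutional spatial prior module produces image-biased tokens, an injector feeds them into the ViT tokens, and an extractor pulls the updated ViT features back out for dense prediction. This is a minimal, hypothetical sketch, not the released implementation: the paper's deformable-attention injector/extractor and multi-scale spatial prior are simplified here to standard multi-head cross-attention and a single-scale conv stem, and the class names (SpatialPriorModule, Injector, Extractor, ViTAdapterSketch) and hyper-parameters are illustrative.

```python
# Minimal sketch of the ViT-Adapter idea (illustrative, not the official code).
import torch
import torch.nn as nn


class SpatialPriorModule(nn.Module):
    """Small conv stem supplying image-related inductive biases (single scale here)."""

    def __init__(self, embed_dim=768):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.GELU(),
            nn.Conv2d(64, embed_dim, 3, stride=2, padding=1),  # 1/8 resolution
        )

    def forward(self, x):                       # x: (B, 3, H, W)
        c = self.stem(x)                        # (B, C, H/8, W/8)
        return c.flatten(2).transpose(1, 2)     # (B, N_spatial, C) spatial tokens


class Injector(nn.Module):
    """Injects spatial-prior tokens into the ViT tokens via cross-attention."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gamma = nn.Parameter(torch.zeros(dim))  # zero-init: adapter starts as identity

    def forward(self, vit_tokens, spatial_tokens):
        out, _ = self.attn(vit_tokens, spatial_tokens, spatial_tokens)
        return vit_tokens + self.gamma * out


class Extractor(nn.Module):
    """Pulls updated ViT features back into the spatial-prior branch."""

    def __init__(self, dim=768, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, spatial_tokens, vit_tokens):
        out, _ = self.attn(spatial_tokens, vit_tokens, vit_tokens)
        return spatial_tokens + out


class ViTAdapterSketch(nn.Module):
    """Plain ViT blocks interleaved with injector/extractor interactions."""

    def __init__(self, vit_blocks, dim=768, num_interactions=4):
        super().__init__()
        self.blocks = nn.ModuleList(vit_blocks)  # unmodified, pre-trained ViT blocks
        self.spm = SpatialPriorModule(dim)
        self.injectors = nn.ModuleList([Injector(dim) for _ in range(num_interactions)])
        self.extractors = nn.ModuleList([Extractor(dim) for _ in range(num_interactions)])
        self.group = len(vit_blocks) // num_interactions

    def forward(self, image, vit_tokens):        # vit_tokens: patch-embedded ViT tokens
        spatial = self.spm(image)                # adapter branch with spatial priors
        for i in range(len(self.injectors)):
            vit_tokens = self.injectors[i](vit_tokens, spatial)
            for blk in self.blocks[i * self.group:(i + 1) * self.group]:
                vit_tokens = blk(vit_tokens)     # plain ViT computation, unchanged
            spatial = self.extractors[i](spatial, vit_tokens)
        return spatial                           # dense features for a detection/segmentation head
```

Because the injector's gating parameter is zero-initialized, the adapter initially leaves the ViT tokens untouched, so the pre-trained backbone weights are preserved at the start of fine-tuning; the adapter itself needs no pre-training of its own.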

Related benchmarks

Task                  | Dataset               | Result    | Rank
Semantic Segmentation | ADE20K (val)          | mIoU 62.8 | 2731
Object Detection      | COCO 2017 (val)       | AP 58.4   | 2454
Object Detection      | COCO (test-dev)       | mAP 60.9  | 1195
Semantic Segmentation | Cityscapes (test)     | mIoU 85.2 | 1145
Instance Segmentation | COCO 2017 (val)       | APm 0.511 | 1144
Semantic Segmentation | ADE20K                | mIoU 58.3 | 936
Object Detection      | COCO (val)            | --        | 613
Object Detection      | COCO v2017 (test-dev) | mAP 60.1  | 499
Instance Segmentation | COCO (val)            | APmk 50.2 | 472
Instance Segmentation | COCO (test-dev)       | APM 53.0  | 380

(Showing 10 of 27 benchmark results.)

Other info

Code: https://github.com/czczup/ViT-Adapter
