Multi-Scale Vision Longformer: A New Vision Transformer for High-Resolution Image Encoding
About
This paper presents a new Vision Transformer (ViT) architecture, Multi-Scale Vision Longformer, which significantly enhances the ViT of \cite{dosovitskiy2020image} for encoding high-resolution images using two techniques. The first is the multi-scale model structure, which provides image encodings at multiple scales with manageable computational cost. The second is the attention mechanism of Vision Longformer, a variant of Longformer \cite{beltagy2020longformer} originally developed for natural language processing, which achieves linear complexity with respect to the number of input tokens. A comprehensive empirical study shows that the new ViT significantly outperforms several strong baselines, including existing ViT models and their ResNet counterparts, as well as the Pyramid Vision Transformer from concurrent work \cite{wang2021pyramid}, on a range of vision tasks including image classification, object detection, and segmentation. The models and source code are released at \url{https://github.com/microsoft/vision-longformer}.
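The linear-complexity attention can be illustrated with a minimal sketch: each token attends only to neighbors within a fixed window of radius `w`, so the cost grows as O(n·w) rather than O(n²) in the number of tokens n. The 1-D simplification and function names below are illustrative assumptions, not the paper's implementation; the actual Vision Longformer uses 2-D local windows plus a small set of global tokens (see the released repository for the real code).

```python
import numpy as np

def window_attention(q, k, v, w):
    """Sliding-window attention sketch (1-D simplification).

    Each of the n query tokens attends only to key/value tokens within
    distance w, so the cost is O(n * w) instead of O(n^2).
    q, k, v: arrays of shape (n, d).
    """
    n, d = q.shape
    out = np.zeros_like(v)
    for i in range(n):
        lo, hi = max(0, i - w), min(n, i + w + 1)  # local window around token i
        scores = q[i] @ k[lo:hi].T / np.sqrt(d)    # scaled dot-product scores
        weights = np.exp(scores - scores.max())    # numerically stable softmax
        weights /= weights.sum()
        out[i] = weights @ v[lo:hi]                # weighted sum of local values
    return out
```

When `w` covers the whole sequence this reduces to ordinary full attention; shrinking `w` is what turns the quadratic cost linear, at the price of a bounded receptive field per layer (stacked layers recover a larger effective field).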
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Object Detection | COCO 2017 (val) | AP | 48.6 | 2454 |
| Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy | 83.7 | 1866 |
| Image Classification | ImageNet (val) | Top-1 Accuracy | 83.2 | 1206 |
| Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy (%) | 86.2 | 1155 |
| Instance Segmentation | COCO 2017 (val) | APm | 0.442 | 1144 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 86.2 | 840 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 82.4 | 706 |
| Object Detection | COCO (val) | mAP | 47.6 | 613 |
| Instance Segmentation | COCO (val) | APmk | 43 | 472 |
| Object Detection | MS-COCO 2017 (val) | -- | -- | 237 |