
CMT: Convolutional Neural Networks Meet Vision Transformers

About

Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image. However, there are still gaps in both performance and computational cost between transformers and existing convolutional neural networks (CNNs). In this paper, we aim to address this issue and develop a network that can outperform not only the canonical transformers, but also the high-performance convolutional models. We propose a new transformer-based hybrid network that takes advantage of transformers to capture long-range dependencies and of CNNs to model local features. Furthermore, we scale it to obtain a family of models, called CMTs, achieving much better accuracy and efficiency than previous convolution- and transformer-based models. In particular, our CMT-S achieves 83.5% top-1 accuracy on ImageNet, while requiring 14x and 2x fewer FLOPs than the existing DeiT and EfficientNet, respectively. The proposed CMT-S also generalizes well on CIFAR-10 (99.2%), CIFAR-100 (91.7%), Flowers (98.7%), and other challenging vision datasets such as COCO (44.3% mAP), with considerably less computational cost.
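The core design the abstract describes, convolution for local feature modeling followed by self-attention for long-range dependencies, can be sketched as a toy hybrid block. This is a minimal illustrative NumPy sketch, not the authors' CMT implementation: the 3x3 depthwise convolution, single-head attention, random weights, and feature-map sizes below are all assumptions for demonstration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def depthwise_conv3x3(x, rng):
    """Local modeling: a 3x3 per-channel (depthwise) convolution with a
    residual connection. x has shape (H, W, C)."""
    H, W, C = x.shape
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)))
    kernel = rng.standard_normal((3, 3, C)) * 0.01  # one 3x3 filter per channel
    out = np.zeros_like(x)
    for i in range(H):
        for j in range(W):
            out[i, j] = (pad[i:i + 3, j:j + 3] * kernel).sum(axis=(0, 1))
    return x + out  # residual: refine the input with local context

def self_attention(x, rng):
    """Global modeling: single-head self-attention over the H*W tokens,
    so every position can attend to every other position."""
    H, W, C = x.shape
    tokens = x.reshape(H * W, C)
    Wq, Wk, Wv = (rng.standard_normal((C, C)) * 0.02 for _ in range(3))
    q, k, v = tokens @ Wq, tokens @ Wk, tokens @ Wv
    attn = softmax(q @ k.T / np.sqrt(C))           # (H*W, H*W) attention map
    return (tokens + attn @ v).reshape(H, W, C)    # residual connection

def hybrid_block(x, seed=0):
    """One toy hybrid block: local convolution first, then global attention."""
    rng = np.random.default_rng(seed)
    return self_attention(depthwise_conv3x3(x, rng), rng)

x = np.random.default_rng(1).standard_normal((8, 8, 16))
y = hybrid_block(x)
print(y.shape)  # (8, 8, 16): spatial size and channel count are preserved
```

Because both stages preserve the (H, W, C) shape and carry residual connections, such blocks can be stacked into a deeper network, which is how one would scale a family of models of different sizes.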

Jianyuan Guo, Kai Han, Han Wu, Yehui Tang, Xinghao Chen, Yunhe Wang, Chang Xu • 2021

Related benchmarks

Task                   Dataset                  Metric           Result   Rank
Image Classification   CIFAR-100 (test)         Accuracy         91.7     3518
Object Detection       COCO 2017 (val)          AP               44.3     2454
Image Classification   ImageNet-1K 1.0 (val)    Top-1 Accuracy   83.5     1866
Instance Segmentation  COCO 2017 (val)          APm              0.407    1144
Image Classification   CIFAR-100                --               --       622
Image Classification   CIFAR-10                 Accuracy         99.2     507
Image Classification   Stanford Cars            Accuracy         94.4     477
Image Classification   Oxford-IIIT Pets         Accuracy         95.2     259
Image Classification   Oxford Flowers           Top-1 Accuracy   98.7     78
Image Classification   ImageNet-1K 1.0 (val)    Top-1 Accuracy   0.845    57

Showing 10 of 12 rows

Other info

Code
