
Going deeper with Image Transformers

About

Transformers have recently been adapted for large-scale image classification, achieving high scores that shake up the long supremacy of convolutional neural networks. However, the optimization of image transformers has been little studied so far. In this work, we build and optimize deeper transformer networks for image classification. In particular, we investigate the interplay between the architecture and the optimization of such dedicated transformers. We make two transformer architecture changes that significantly improve the accuracy of deep transformers. This leads us to produce models whose performance does not saturate early with more depth: for instance, we obtain 86.5% top-1 accuracy on ImageNet when training with no external data, thus matching the current SOTA with fewer FLOPs and parameters. Moreover, our best model establishes a new state of the art on ImageNet with Reassessed labels and on ImageNet-V2 / matched frequency, in the setting with no additional training data. We share our code and models.
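The abstract does not spell out the two architecture changes; in the paper (CaiT), one of them is LayerScale, a learnable per-channel scaling of each residual branch, initialized to a small value so every block starts close to the identity and very deep stacks remain trainable. A minimal numpy sketch of that residual update (function and variable names are illustrative, not from the paper's code):

```python
import numpy as np

def layer_scale_residual(x, branch_out, gamma):
    """Residual update with LayerScale: x + diag(gamma) * branch(x).

    gamma is a learnable per-channel vector, initialized to a small
    constant (e.g. 1e-4), so each block initially contributes almost
    nothing and the network starts near the identity mapping.
    """
    return x + gamma * branch_out  # broadcasts gamma over tokens

# Toy example: 2 tokens with 4 channels each.
x = np.ones((2, 4))
branch_out = np.full((2, 4), 10.0)  # stand-in for an attention/FFN output
gamma = np.full(4, 1e-4)            # small LayerScale initialization

y = layer_scale_residual(x, branch_out, gamma)
# Each entry is 1 + 1e-4 * 10 = 1.001: the branch barely perturbs x at init.
```

During training, gamma is optimized jointly with the rest of the weights, letting each block learn how much of its branch to mix in.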

Hugo Touvron, Matthieu Cord, Alexandre Sablayrolles, Gabriel Synnaeve, Hervé Jégou • 2021

Related benchmarks

Task                   Dataset                 Result                      Rank
Image Classification   CIFAR-100 (test)        --                          3518
Image Classification   CIFAR-10 (test)         --                          3381
Image Classification   ImageNet-1K 1.0 (val)   Top-1 Accuracy 85           1866
Image Classification   ImageNet-1k (val)       Top-1 Accuracy 86.5         1453
Image Classification   ImageNet (val)          Top-1 Acc 86.5              1206
Classification         ImageNet-1K 1.0 (val)   Top-1 Accuracy (%) 86.3     1155
Semantic segmentation  ADE20K                  mIoU 45.3                   936
Image Classification   ImageNet-1k (val)       Top-1 Accuracy 83.3         840
Image Classification   ImageNet 1k (test)      Top-1 Accuracy 84.5         798
Image Classification   ImageNet-1k (val)       Top-1 Acc 86.5              706

Showing 10 of 45 rows

Other info

Code
