
TResNet: High Performance GPU-Dedicated Architecture

About

Many deep learning models developed in recent years reach higher ImageNet accuracy than ResNet50, with a lower or comparable FLOPs count. While FLOPs are often seen as a proxy for network efficiency, vanilla ResNet50 is usually significantly faster than its recent competitors when actual GPU training and inference throughput is measured, offering a better throughput-accuracy trade-off. In this work, we introduce a series of architecture modifications that aim to boost neural networks' accuracy while retaining their GPU training and inference efficiency. We first demonstrate and discuss the bottlenecks induced by FLOPs-oriented optimizations. We then suggest alternative designs that better utilize GPU structure and assets. Finally, we introduce a new family of GPU-dedicated models, called TResNet, which achieve better accuracy and efficiency than previous ConvNets. Using a TResNet model with GPU throughput similar to ResNet50, we reach 80.8% top-1 accuracy on ImageNet. Our TResNet models also transfer well and achieve state-of-the-art accuracy on competitive single-label classification datasets such as Stanford Cars (96.0%), CIFAR-10 (99.0%), CIFAR-100 (91.5%) and Oxford Flowers (99.1%). They also perform well on multi-label classification and object detection tasks. Implementation is available at: https://github.com/mrT23/TResNet.
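One of the GPU-friendly redesigns TResNet is known for is replacing the conventional convolution-plus-maxpool stem with a SpaceToDepth rearrangement followed by a 1x1 convolution. As a rough illustration of the rearrangement itself, here is a minimal NumPy sketch (the function name, block size default, and tensor layout are illustrative assumptions, not the repository's actual code):

```python
import numpy as np

def space_to_depth(x, block=4):
    """Rearrange spatial blocks into channels.

    Illustrative sketch: maps an (N, C, H, W) tensor to
    (N, C * block**2, H // block, W // block), moving each
    block x block spatial patch into the channel dimension.
    H and W are assumed to be divisible by `block`.
    """
    n, c, h, w = x.shape
    # Split each spatial axis into (coarse, within-block) parts.
    x = x.reshape(n, c, h // block, block, w // block, block)
    # Bring the within-block axes next to the channel axis.
    x = x.transpose(0, 3, 5, 1, 2, 4)
    # Merge (block, block, C) into a single channel axis.
    return x.reshape(n, c * block * block, h // block, w // block)

# Example: a 224x224 RGB image becomes a 56x56 map with 48 channels.
img = np.random.rand(1, 3, 224, 224).astype(np.float32)
out = space_to_depth(img, block=4)
print(out.shape)  # (1, 48, 56, 56)
```

The motivation is throughput: a single memory-layout rearrangement plus a pointwise convolution keeps early layers dense and GPU-efficient, avoiding the strided convolutions and pooling that fragment GPU utilization at high spatial resolution.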

Tal Ridnik, Hussam Lawen, Asaf Noy, Emanuel Ben Baruch, Gilad Sharir, Itamar Friedman• 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | -- | 3518 |
| Image Classification | ImageNet (val) | Top-1 Acc: 82 | 1206 |
| Image Classification | CIFAR-100 | Top-1 Accuracy: 91.5 | 622 |
| Image Classification | Stanford Cars | Accuracy: 96 | 477 |
| Image Classification | CIFAR-10 | -- | 471 |
| Fine-grained Image Classification | Stanford Cars (test) | Accuracy: 96 | 348 |
| Image Classification | ILSVRC 2012 (test) | Top-1 Acc: 84.3 | 117 |
| Image Classification | Oxford Flowers | Top-1 Accuracy: 99.1 | 78 |
| Multi-label Image Recognition | MS-COCO 2014 (val) | mAP: 86.4 | 51 |
| Multi-Label Classification | NUS-WIDE | mAP: 63.1 | 21 |

Code: https://github.com/mrT23/TResNet