
Aggregated Residual Transformations for Deep Neural Networks

About

We present a simple, highly modularized network architecture for image classification. Our network is constructed by repeating a building block that aggregates a set of transformations with the same topology. Our simple design results in a homogeneous, multi-branch architecture that has only a few hyper-parameters to set. This strategy exposes a new dimension, which we call "cardinality" (the size of the set of transformations), as an essential factor in addition to the dimensions of depth and width. On the ImageNet-1K dataset, we empirically show that even under the restricted condition of maintaining complexity, increasing cardinality is able to improve classification accuracy. Moreover, increasing cardinality is more effective than going deeper or wider when we increase the capacity. Our models, named ResNeXt, are the foundations of our entry to the ILSVRC 2016 classification task in which we secured 2nd place. We further investigate ResNeXt on an ImageNet-5K set and the COCO detection set, also showing better results than its ResNet counterpart. The code and models are publicly available online.
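The aggregated transformation the abstract describes can be written as y = x + Σᵢ Tᵢ(x), where the sum runs over C parallel paths of identical topology and C is the cardinality. Below is a minimal NumPy sketch of that idea, using plain linear bottleneck maps in place of the paper's 1×1 → 3×3 → 1×1 convolutional paths (the function and weight names are illustrative, not from the authors' code):

```python
import numpy as np

def aggregated_residual_block(x, weights_in, weights_out):
    """Sketch of an aggregated residual transformation: y = x + sum_i T_i(x).

    Each path T_i projects the input to a narrow bottleneck and back.
    The real ResNeXt block uses convolutions; linear maps are used here
    only to make the aggregation-over-cardinality structure explicit.
    """
    out = x.copy()  # identity shortcut
    for w_in, w_out in zip(weights_in, weights_out):
        # one low-dimensional path: embed, transform back, accumulate
        out += (x @ w_in) @ w_out
    return out

# cardinality C = 32 paths, input width 256, bottleneck width 4
rng = np.random.default_rng(0)
C, d, b = 32, 256, 4
w_in = [rng.normal(scale=0.01, size=(d, b)) for _ in range(C)]
w_out = [rng.normal(scale=0.01, size=(b, d)) for _ in range(C)]
x = rng.normal(size=(1, d))
y = aggregated_residual_block(x, w_in, w_out)
```

Because all paths share one topology, the whole sum can be implemented as a single grouped convolution in practice, which is why cardinality can be raised without increasing parameter count.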

Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He · 2016

Related benchmarks

Task                  | Dataset               | Metric             | Result | Rank
Image Classification  | CIFAR-100 (test)      | --                 | --     | 3518
Image Classification  | CIFAR-10 (test)       | --                 | --     | 3381
Semantic Segmentation | ADE20K (val)          | mIoU               | 43.8   | 2731
Object Detection      | COCO 2017 (val)       | AP                 | 48.3   | 2454
Image Classification  | ImageNet-1K 1.0 (val) | Top-1 Accuracy     | 81.5   | 1866
Image Classification  | ImageNet-1k (val)     | Top-1 Accuracy     | 80.9   | 1453
Image Classification  | ImageNet (val)        | Top-1 Accuracy     | 80.9   | 1206
Image Classification  | ImageNet-1K 1.0 (val) | Top-1 Accuracy (%) | 80.9   | 1155
Instance Segmentation | COCO 2017 (val)       | APm                | 38.4   | 1144
Image Classification  | CIFAR-10 (test)       | --                 | --     | 906

Showing 10 of 130 rows
