
ImageNet-21K Pretraining for the Masses

About

ImageNet-1K serves as the primary dataset for pretraining deep learning models for computer vision tasks. The ImageNet-21K dataset, which is bigger and more diverse, is used less frequently for pretraining, mainly due to its complexity, limited accessibility, and an underestimation of its added value. This paper aims to close this gap and make high-quality, efficient pretraining on ImageNet-21K available to everyone. Via a dedicated preprocessing stage, utilization of the WordNet hierarchical structure, and a novel training scheme called semantic softmax, we show that various models significantly benefit from ImageNet-21K pretraining on numerous datasets and tasks, including small mobile-oriented models. We also show that we outperform previous ImageNet-21K pretraining schemes for prominent new models like ViT and Mixer. Our proposed pretraining pipeline is efficient, accessible, and leads to reproducible SoTA results from a publicly available dataset. The training code and pretrained models are available at: https://github.com/Alibaba-MIIL/ImageNet21K
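The abstract names semantic softmax only briefly; the snippet below is a simplified, illustrative sketch of the per-level softmax idea, not the authors' exact implementation. It assumes the 21K classes have already been grouped into WordNet hierarchy levels and that, for each level, a lookup table mapping every class to the local index of its ancestor at that level (or -1 if none) has been precomputed; the class and variable names (SemanticSoftmaxLoss, level_class_ids, ancestor_luts) are hypothetical.

```python
import torch
import torch.nn.functional as F


class SemanticSoftmaxLoss(torch.nn.Module):
    """Cross-entropy applied separately per WordNet hierarchy level, then averaged.

    level_class_ids[i]: LongTensor of the global class ids that live at level i.
    ancestor_luts[i][c]: local index (within level i) of class c's ancestor at
    level i, or -1 if class c has no ancestor at that level.
    """

    def __init__(self, level_class_ids, ancestor_luts):
        super().__init__()
        self.level_class_ids = level_class_ids
        self.ancestor_luts = ancestor_luts

    def forward(self, logits, targets):
        # logits: (batch, num_classes) raw scores; targets: (batch,) global class ids
        losses = []
        for ids, lut in zip(self.level_class_ids, self.ancestor_luts):
            level_logits = logits[:, ids]   # softmax restricted to this level's classes
            local_targets = lut[targets]    # ancestor of each label at this level
            valid = local_targets >= 0      # skip samples with no ancestor here
            if valid.any():
                losses.append(F.cross_entropy(level_logits[valid], local_targets[valid]))
        # Average over the levels that received at least one valid sample.
        return torch.stack(losses).mean() if losses else logits.sum() * 0.0
```

The point of restricting the softmax to each hierarchy level is that coarse labels (e.g., "animal") no longer compete directly against fine-grained leaves (e.g., specific dog breeds) inside a single 21K-way softmax.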

Tal Ridnik, Emanuel Ben-Baruch, Asaf Noy, Lihi Zelnik-Manor • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-100 (test) | -- | -- | 3518
Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy (%) | 83.9 | 1155
Image Classification | ImageNet-1k (val) | Top-1 Acc | 83.9 | 706
Image Classification | CIFAR-100 | Top-1 Accuracy | 90.4 | 622
Image Classification | ImageNet-1K | Top-1 Acc | 81.4 | 524
Image Classification | CIFAR100 (test) | Top-1 Accuracy | 92.5 | 377
Multi-Label Classification | PASCAL VOC 2007 (test) | mAP | 96.7 | 125
Image Retrieval | Holidays | mAP | 82.1 | 115
Multi-Label Classification | Pascal VOC (test) | -- | -- | 112
Multi-Label Classification | MS-COCO 2014 (test) | mAP | 88.4 | 81
(Showing 10 of 26 benchmark results)

Other info

Code

https://github.com/Alibaba-MIIL/ImageNet21K
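Pretrained checkpoints from this work have also been mirrored in the timm library, so a hedged way to try them is the sketch below. The exact model identifier used here ('vit_base_patch16_224_miil_in21k') is an assumption that varies across timm versions; consult the repository's model zoo for the authoritative names.

```python
import timm
import torch

# Assumed model name; may differ by timm version (check the repo's model zoo).
model = timm.create_model('vit_base_patch16_224_miil_in21k', pretrained=True)
model.eval()

# Dummy forward pass on a single 224x224 RGB image.
x = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    logits = model(x)
print(logits.shape)  # (1, number of pretraining classes)
```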
