
Are we done with ImageNet?

About

Yes, and no. We ask whether recent progress on the ImageNet classification benchmark continues to represent meaningful generalization, or whether the community has started to overfit to the idiosyncrasies of its labeling procedure. We therefore develop a significantly more robust procedure for collecting human annotations of the ImageNet validation set. Using these new labels, we reassess the accuracy of recently proposed ImageNet classifiers, and find their gains to be substantially smaller than those reported on the original labels. Furthermore, we find the original ImageNet labels to no longer be the best predictors of this independently-collected set, indicating that their usefulness in evaluating vision models may be nearing an end. Nevertheless, we find our annotation procedure to have largely remedied the errors in the original labels, reinforcing ImageNet as a powerful benchmark for future research in visual recognition.
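The reassessment hinges on a multi-label notion of accuracy: because an image can legitimately depict several classes, a model's top-1 prediction is counted as correct if it falls anywhere in the set of labels the annotators deemed valid. A minimal sketch of that metric (the function name and toy data below are illustrative, not the authors' code):

```python
def multilabel_top1_accuracy(predictions, label_sets):
    """Fraction of images whose top-1 prediction appears in the
    set of valid labels collected for that image."""
    correct = sum(1 for pred, labels in zip(predictions, label_sets)
                  if pred in labels)
    return correct / len(predictions)

# Toy example: three images, the second prediction misses its label set.
preds = [3, 7, 2]
valid = [{3, 5}, {1}, {2, 9}]
print(multilabel_top1_accuracy(preds, valid))  # → 0.666...
```

Under the original single-label protocol the same predictions could score lower, since only one "correct" class per image is accepted even when several are visibly present.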

Lucas Beyer, Olivier J. Hénaff, Alexander Kolesnikov, Xiaohua Zhai, Aäron van den Oord • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | ImageNet (val) | Top-1 Acc | 78.1 | 1206
Image Classification | ImageNet V2 | Top-1 Acc | 79.1 | 487
Image Classification | WebVision-1000 (val) | Top-1 Acc | 72.1 | 21
Multi-label Image Classification | Shankar et al. | Top-1 Accuracy | 85.2 | 4
Multi-label Image Classification | ReaL | Top-1 Acc | 83.6 | 4
