
High-Performance Large-Scale Image Recognition Without Normalization

About

Batch normalization is a key component of most image classification models, but it has many undesirable properties stemming from its dependence on the batch size and interactions between examples. Although recent work has succeeded in training deep ResNets without normalization layers, these models do not match the test accuracies of the best batch-normalized networks, and are often unstable for large learning rates or strong data augmentations. In this work, we develop an adaptive gradient clipping technique which overcomes these instabilities, and design a significantly improved class of Normalizer-Free ResNets. Our smaller models match the test accuracy of an EfficientNet-B7 on ImageNet while being up to 8.7x faster to train, and our largest models attain a new state-of-the-art top-1 accuracy of 86.5%. In addition, Normalizer-Free models attain significantly better performance than their batch-normalized counterparts when finetuning on ImageNet after large-scale pre-training on a dataset of 300 million labeled images, with our best models obtaining an accuracy of 89.2%. Our code is available at https://github.com/deepmind/deepmind-research/tree/master/nfnets
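The adaptive gradient clipping (AGC) technique mentioned in the abstract rescales each unit's gradient whenever its norm grows too large relative to the norm of the corresponding weights, rather than clipping against a fixed global threshold. A minimal NumPy sketch of this idea, with the unit-wise grouping, clipping threshold, and epsilon floor chosen for illustration (the paper's full recipe may differ in detail):

```python
import numpy as np

def adaptive_grad_clip(grad, param, clip=0.01, eps=1e-3):
    """Unit-wise adaptive gradient clipping (AGC), sketched from the abstract.

    Each row ("unit") of the gradient is rescaled so that its norm never
    exceeds `clip` times the norm of the matching parameter row.
    """
    # Per-unit norms; eps floors the parameter norm so that
    # zero-initialized weights are not frozen at zero.
    p_norm = np.maximum(np.linalg.norm(param, axis=-1, keepdims=True), eps)
    g_norm = np.linalg.norm(grad, axis=-1, keepdims=True)
    max_norm = clip * p_norm
    # Rescale only the rows whose gradient norm exceeds the threshold;
    # all other rows pass through unchanged.
    scale = np.where(g_norm > max_norm,
                     max_norm / np.maximum(g_norm, 1e-12),
                     1.0)
    return grad * scale
```

Because the threshold scales with the parameter norm, the clip adapts per layer and per unit as training progresses, which is what lets Normalizer-Free networks tolerate large batch sizes and strong augmentations without batch-norm's implicit regularization.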

Andrew Brock, Soham De, Samuel L. Smith, Karen Simonyan • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Segmentation | ADE20K (val) | mIoU | 38.31 | 2888 |
| Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy | 89.2 | 1952 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 86.5 | 1469 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 77.7 | 1239 |
| Image Classification | ImageNet (val) | Top-1 Accuracy | 86.8 | 1206 |
| Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy (%) | 86.5 | 1163 |
| Commonsense Reasoning | WinoGrande | Accuracy | 57.85 | 1085 |
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 84.7 | 848 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 85.1 | 844 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 89.2 | 706 |
Showing 10 of 38 rows

Other info

Code

https://github.com/deepmind/deepmind-research/tree/master/nfnets