
Large Batch Training of Convolutional Networks

About

A common way to speed up training of large convolutional networks is to add computational units. Training is then performed using data-parallel synchronous Stochastic Gradient Descent (SGD), with the mini-batch divided between computational units. As the number of nodes increases, the batch size grows. However, training with a large batch size often results in lower model accuracy. We argue that the current recipe for large-batch training (linear learning rate scaling with warm-up) is not general enough and training may diverge. To overcome these optimization difficulties we propose a new training algorithm based on Layer-wise Adaptive Rate Scaling (LARS). Using LARS, we scaled AlexNet up to a batch size of 8K, and ResNet-50 to a batch size of 32K without loss in accuracy.
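The abstract only names the LARS update rule; as a rough illustration of the idea, the layer-wise rescaling can be sketched along the following lines. This is a minimal PyTorch sketch: the `lars_step` helper, its default hyperparameters, and the momentum/weight-decay handling are illustrative assumptions, not the paper's exact training recipe.

```python
import torch

def lars_step(params, lr, momentum_buffers, momentum=0.9,
              weight_decay=5e-4, trust_coef=0.001):
    """One LARS-style SGD step with momentum and weight decay (sketch).

    Each layer (parameter tensor) receives a local learning rate proportional
    to ||w|| / (||grad|| + weight_decay * ||w||), so layers whose gradients
    are large relative to their weights take correspondingly smaller steps.
    Hyperparameter defaults here are illustrative assumptions.
    """
    for p, buf in zip(params, momentum_buffers):
        if p.grad is None:
            continue
        w_norm = p.detach().norm()
        g_norm = p.grad.detach().norm()
        if w_norm > 0 and g_norm > 0:
            # Layer-wise "trust ratio" that rescales the global learning rate.
            local_lr = trust_coef * w_norm / (g_norm + weight_decay * w_norm)
        else:
            local_lr = torch.tensor(1.0)
        update = p.grad + weight_decay * p.detach()
        buf.mul_(momentum).add_(update, alpha=float(lr * local_lr))
        p.data.add_(buf, alpha=-1.0)

# Usage sketch (model, loss, and schedule are assumed, not from the paper):
# bufs = [torch.zeros_like(p) for p in model.parameters()]
# loss.backward()
# lars_step(list(model.parameters()), lr=current_lr, momentum_buffers=bufs)
```

The design point is that the effective step size is set per layer from the weight-to-gradient norm ratio rather than by a single global learning rate, which is what lets the batch size, and hence the base learning rate, grow without destabilizing early training.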

Yang You, Igor Gitman, Boris Ginsburg • 2017

Related benchmarks

Task                  | Dataset             | Result                | Rank
Image Classification  | ImageNet 1k (test)  | Top-1 Accuracy 86.77  | 450
Image Classification  | Caltech101 (test)   | --                    | 159
Image Classification  | Caltech-256 (test)  | Top-1 Accuracy 74.29  | 74
Image Classification  | CIFAR100 (test)     | --                    | 43
