
Stochastic Gradient Push for Distributed Deep Learning

About

Distributed data-parallel algorithms aim to accelerate the training of deep neural networks by parallelizing the computation of large mini-batch gradient updates across multiple nodes. Approaches that synchronize nodes using exact distributed averaging (e.g., via AllReduce) are sensitive to stragglers and communication delays. The PushSum gossip algorithm is robust to these issues, but only performs approximate distributed averaging. This paper studies Stochastic Gradient Push (SGP), which combines PushSum with stochastic gradient updates. We prove that SGP converges to a stationary point of smooth, non-convex objectives at the same sub-linear rate as SGD, and that all nodes achieve consensus. We empirically validate the performance of SGP on image classification (ResNet-50, ImageNet) and machine translation (Transformer, WMT'16 En-De) workloads. Our code will be made publicly available.
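To make the abstract's two ingredients concrete, here is a minimal NumPy sketch of PushSum averaging and of Stochastic Gradient Push interleaving a local gradient step with one gossip step. Everything here is an illustrative assumption, not the paper's setup: nodes hold scalar parameters, the local objectives are quadratics f_i(z) = 0.5·(z − t_i)², and the communication topology is a random dense column-stochastic mixing matrix rather than the time-varying directed graphs the paper analyzes.

```python
import numpy as np

def push_sum_average(values, P, iters=100):
    """Approximate distributed averaging via PushSum.

    P must be column-stochastic: column j holds the weights node j
    uses to push its mass to its out-neighbors (no symmetry needed,
    which is why PushSum tolerates directed, lossy communication).
    """
    x = values.astype(float).copy()   # numerators (pushed mass)
    w = np.ones_like(x)               # PushSum de-biasing weights
    for _ in range(iters):
        x, w = P @ x, P @ w           # one gossip round
    return x / w                      # each entry -> mean(values)

def sgp(targets, P, steps=2000):
    """Sketch of Stochastic Gradient Push: each node alternates a
    local gradient step on f_i(z) = 0.5*(z - targets[i])**2 with one
    PushSum gossip step, and reads out the de-biased ratio x/w."""
    n = len(targets)
    x, w = np.zeros(n), np.ones(n)
    for k in range(steps):
        lr = 1.0 / (k + 10)           # diminishing step size (assumed)
        z = x / w                     # de-biased parameter estimates
        x = x - lr * (z - targets)    # gradient step on the numerator
        x, w = P @ x, P @ w           # push mass to out-neighbors
    return x / w                      # ~ consensus at mean(targets)

rng = np.random.default_rng(0)
n = 8
# Strictly positive column-stochastic mixing matrix: a stand-in for a
# strongly connected directed topology.
P = rng.random((n, n)) + 0.1
P /= P.sum(axis=0)

vals = rng.normal(size=n)
avg = push_sum_average(vals, P)       # all entries close to vals.mean()
out = sgp(vals, P)                    # all nodes near the minimizer of
                                      # the average objective, mean(vals)
```

Because P is only column-stochastic, the raw numerators x drift away from the average; dividing by the weight vector w is what corrects this bias, and it is exactly this de-biased ratio that SGP feeds its gradient steps.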

Mahmoud Assran, Nicolas Loizou, Nicolas Ballas, Michael Rabbat • 2018

Related benchmarks

Task                    Dataset                                    Result                              Rank
Image Classification    ImageNet-1k (val)                          -                                   1469
Object Detection        COCO                                       AP (Box): 37.3                      144
Object Detection        Pascal VOC                                 mAP: 80.7                           88
Predictive Maintenance  Maritime PdM Mid Topology (test)           RMSE: 0.0086                        30
Image Classification    Tiny-ImageNet Dirichlet alpha=0.1 (test)   Test Accuracy: 25.29                30
Image Classification    Cifar10 Dirichlet(0.3) (test)              Test Accuracy: 82.81                21
Image Classification    Tiny-ImageNet Dirichlet alpha=0.3 (test)   Test Accuracy: 17.07                10
Image Classification    Cifar-10 17 (test)                         Accuracy (alpha=1.0): 89.5          10
Image Classification    Tiny-ImageNet Pathological c=10 (test)     Test Accuracy: 42.08                10
Image Classification    CIFAR-10 (test)                            Accuracy (Dirichlet α=0.1): 87.39   10

(Showing 10 of 20 rows.)
