
Can Decentralized Algorithms Outperform Centralized Algorithms? A Case Study for Decentralized Parallel Stochastic Gradient Descent

About

Most distributed machine learning systems nowadays, including TensorFlow and CNTK, are built in a centralized fashion. One bottleneck of centralized algorithms lies in the high communication cost on the central node. Motivated by this, we ask: can decentralized algorithms be faster than their centralized counterparts? Although decentralized PSGD (D-PSGD) algorithms have been studied by the control community, existing analysis and theory do not show any advantage over centralized PSGD (C-PSGD) algorithms, because they simply assume an application scenario in which only the decentralized network is available. In this paper, we study a D-PSGD algorithm and provide the first theoretical analysis that indicates a regime in which decentralized algorithms might outperform centralized algorithms for distributed stochastic gradient descent. This is because D-PSGD has total computational complexity comparable to C-PSGD but requires much less communication on the busiest node. We further conduct an empirical study to validate our theoretical analysis across multiple frameworks (CNTK and Torch), different network configurations, and computation platforms with up to 112 GPUs. On network configurations with low bandwidth or high latency, D-PSGD can be up to one order of magnitude faster than its well-optimized centralized counterparts.

Xiangru Lian, Ce Zhang, Huan Zhang, Cho-Jui Hsieh, Wei Zhang, Ji Liu • 2017

Related benchmarks

Task                                | Dataset                          | Metric                | Result | Rank
Image Classification                | ImageNet-1k (val)                | -                     | -      | 1469
Image Classification                | FashionMNIST (test)              | -                     | -      | 260
Object Detection                    | COCO                             | AP (Box)              | 37.2   | 144
Object Detection                    | Pascal VOC                       | mAP                   | 80.3   | 88
Predictive Maintenance              | Maritime PdM Mid Topology (test) | RMSE                  | 0.0086 | 30
Image Classification                | Cifar-10 17 (test)               | Accuracy (alpha=1.0)  | 89.5   | 10
Image Classification                | Fashion-MNIST alpha = 0.5        | Communication Rounds  | 79     | 8
Image Classification                | Fashion-MNIST IID                | Communication Rounds  | 53     | 8
Stochastic Optimization Convergence | Theoretical Analysis             | Convergence Rate Bound | 1     | 7
Image Classification                | Fashion-MNIST alpha = 0.1        | Communication Rounds  | 125    | 6
