SQuARM-SGD: Communication-Efficient Momentum SGD for Decentralized Optimization

About

In this paper, we propose and analyze SQuARM-SGD, a communication-efficient algorithm for decentralized training of large-scale machine learning models over a network. In SQuARM-SGD, each node performs a fixed number of local SGD steps using Nesterov's momentum and then sends sparsified and quantized updates to its neighbors, regulated by a locally computable triggering criterion. We provide convergence guarantees for our algorithm for general (non-convex) and convex smooth objectives, which, to the best of our knowledge, is the first theoretical analysis for compressed decentralized SGD with momentum updates. We show that the convergence rate of SQuARM-SGD matches that of vanilla SGD. We empirically show that including momentum updates in SQuARM-SGD can lead to better test performance than the current state-of-the-art, which does not use momentum updates.
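
The sketch below illustrates, for a single node and one communication round, the update pattern the abstract describes: local Nesterov-momentum SGD steps, top-k sparsification plus quantization of the model change, and an event-triggered decision on whether to transmit. It is a minimal NumPy illustration, not the paper's exact algorithm or notation; the helper names (top_k_sparsify, sign_quantize, squarm_local_round), the 0.5 consensus mixing weight, and all hyperparameter values are assumptions made for the example.

import numpy as np

def top_k_sparsify(v, k):
    # Keep the k largest-magnitude entries of v, zero out the rest.
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def sign_quantize(v):
    # Scaled sign quantization: sign(v) * mean(|v|) (one common choice).
    return np.sign(v) * np.mean(np.abs(v))

def squarm_local_round(x, u, grad_fn, neighbors_avg, lr=0.01, beta=0.9,
                       local_steps=5, k=10, trigger_threshold=1e-3):
    # One communication round at a single node (illustrative sketch).
    # x: model parameters, u: momentum buffer,
    # grad_fn: returns a stochastic gradient at the given parameters,
    # neighbors_avg: averaged estimate of neighbors' models (consensus term).
    x_hat = x.copy()  # last model state that was communicated
    for _ in range(local_steps):
        # Nesterov momentum step on a local stochastic gradient
        g = grad_fn(x + beta * u)
        u = beta * u - lr * g
        x = x + u
    # Mix with the neighbors' estimates (consensus step; 0.5 weight assumed)
    x = x + 0.5 * (neighbors_avg - x)
    # Event-triggered, compressed communication of the model change
    delta = x - x_hat
    if np.linalg.norm(delta) > trigger_threshold:
        msg = sign_quantize(top_k_sparsify(delta, k))  # sparsify, then quantize
    else:
        msg = None  # triggering criterion not met: skip this transmission
    return x, u, msg

In the full decentralized setting, every node would run this routine in parallel, exchange msg with its neighbors over the network graph, and maintain compressed copies of neighbor models to form the consensus term.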

Navjot Singh, Deepesh Data, Jemin George, Suhas Diggavi • 2020

Related benchmarks

Task | Dataset | Result | Rank
Stochastic Optimization Convergence | Theoretical Analysis | Convergence Rate Bound | 1 / 7
Decentralized Distributed Learning | Nonconvex Functions | Communication Cost | 1 / 6
