
On the Convergence of Adam and Beyond

About

Several recently proposed stochastic optimization methods that have been successfully used in training deep networks, such as RMSProp, Adam, Adadelta, and Nadam, are based on gradient updates scaled by square roots of exponential moving averages of squared past gradients. In many applications, e.g. learning with large output spaces, it has been empirically observed that these algorithms fail to converge to an optimal solution (or a critical point in nonconvex settings). We show that one cause for such failures is the exponential moving average used in the algorithms. We provide an explicit example of a simple convex optimization setting where Adam does not converge to the optimal solution, and describe the precise problems with the previous analysis of the Adam algorithm. Our analysis suggests that the convergence issues can be fixed by endowing such algorithms with 'long-term memory' of past gradients, and we propose new variants of the Adam algorithm which not only fix the convergence issues but often also lead to improved empirical performance.

Sashank J. Reddi, Satyen Kale, Sanjiv Kumar • 2019
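
The best known of the proposed variants is AMSGrad, which keeps a running maximum of the second-moment estimate so the effective step size can never grow again after a rare, informative gradient arrives. The sketch below contrasts plain Adam with AMSGrad on an adversarial gradient sequence in the spirit of the paper's synthetic experiment (a large positive gradient once every 101 steps, a small negative one otherwise, over the interval [-1, 1]); the exact constants and hyperparameters here are illustrative assumptions, not necessarily the paper's settings.

```python
import math

def run(optimizer, steps=50_000, alpha=0.1, beta1=0.9, beta2=0.99, eps=1e-8):
    """Online minimization of f_t(x) = 1010*x if t % 101 == 1 else -10*x
    over x in [-1, 1]. The average gradient is positive, so x = -1 is optimal."""
    x, m, v, v_hat = 0.0, 0.0, 0.0, 0.0
    for t in range(1, steps + 1):
        g = 1010.0 if t % 101 == 1 else -10.0  # rare large gradient, frequent small ones
        m = beta1 * m + (1 - beta1) * g        # first-moment EMA
        v = beta2 * v + (1 - beta2) * g * g    # second-moment EMA
        if optimizer == "amsgrad":
            v_hat = max(v_hat, v)              # 'long-term memory': denominator never shrinks
            denom = math.sqrt(v_hat) + eps
        else:                                  # plain Adam (bias correction omitted, as in the paper's analysis)
            denom = math.sqrt(v) + eps
        x -= alpha * m / denom
        x = min(1.0, max(-1.0, x))             # project back onto the feasible set [-1, 1]
    return x

print("Adam    ->", run("adam"))     # drifts toward the worst point, x = +1
print("AMSGrad ->", run("amsgrad"))  # approaches the optimum, x = -1
```

Because the average gradient over one period is positive (1010 - 100 × 10 = +10), the optimum is x = -1. Adam's exponentially decaying denominator forgets the rare large gradient, so the many small steps in the wrong direction dominate and x drifts toward +1; AMSGrad's max-based denominator preserves the 'long-term memory' the abstract refers to and drifts toward the optimum.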

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | CIFAR-100 (test) | -- | 3518
Image Classification | CIFAR-10 (test) | -- | 3381
Image Classification | ImageNet (val) | -- | 1206
Language Modeling | Penn Treebank (test) | -- | 411
Decision Making under Imperfect Recall | Detection Game Benchmarks 1.0 (full set) | Value Score: 12.84 | 11
Decision Making under Imperfect Recall | 61 aggregate benchmark instances (Sim, Det, and Rand) | Utility (% of Best): 60.7 | 8
Decision Making under Imperfect Recall | Random (Rand) Game Benchmarks 1.0 (full set) | Value: 0.5 | 8
Random Problem | Rand-24k | Value: 68 | 7
Simulation Problem | Sim 540k | Value Score: 8.54 | 7
Subgroup Detection | Det 1k | Detection Value: 13 | 7

(Showing 10 of 12 rows.)
