
LaProp: Separating Momentum and Adaptivity in Adam

About

We identify a by-far-unrecognized problem of Adam-style optimizers which results from unnecessary coupling between momentum and adaptivity. The coupling leads to instability and divergence when the momentum and adaptivity parameters are mismatched. In this work, we propose a method, LaProp, which decouples momentum and adaptivity in the Adam-style methods. We show that the decoupling leads to greater flexibility in the hyperparameters and allows for a straightforward interpolation between the signed gradient methods and the adaptive gradient methods. We experimentally show that LaProp has consistently improved speed and stability over Adam on a variety of tasks. We also bound the regret of LaProp on a convex problem and show that our bound differs from that of Adam by a key factor, which demonstrates its advantage.
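The decoupling can be made concrete by contrasting the two update rules. Below is a minimal sketch (not the authors' reference implementation) of a LaProp-style step next to a standard Adam step, assuming Adam-style bias correction; the function names, default hyperparameters, and the NumPy setting are illustrative. The structural difference is that LaProp divides the gradient by the adaptive denominator before accumulating momentum, so the momentum (beta1) and adaptivity (beta2) statistics never mix.

    # Sketch of the decoupled update, assuming Adam-style bias correction.
    # Hyperparameter defaults and function names are illustrative.
    import numpy as np

    def laprop_step(param, grad, state, lr=4e-4, beta1=0.9, beta2=0.999, eps=1e-15, t=1):
        """One LaProp-style step: the gradient is normalized by the adaptive term
        *before* momentum is accumulated, decoupling beta1 from beta2."""
        state["v"] = beta2 * state["v"] + (1 - beta2) * grad**2
        v_hat = state["v"] / (1 - beta2**t)                        # bias-corrected second moment
        state["m"] = beta1 * state["m"] + (1 - beta1) * grad / (np.sqrt(v_hat) + eps)
        m_hat = state["m"] / (1 - beta1**t)                        # bias-corrected momentum
        return param - lr * m_hat

    def adam_step(param, grad, state, lr=4e-4, beta1=0.9, beta2=0.999, eps=1e-8, t=1):
        """Adam, for contrast: momentum is accumulated on the raw gradient and only
        then divided by the adaptive term, coupling the two."""
        state["m"] = beta1 * state["m"] + (1 - beta1) * grad
        state["v"] = beta2 * state["v"] + (1 - beta2) * grad**2
        m_hat = state["m"] / (1 - beta1**t)
        v_hat = state["v"] / (1 - beta2**t)
        return param - lr * m_hat / (np.sqrt(v_hat) + eps)

    # Usage: initialize state = {"m": np.zeros_like(param), "v": np.zeros_like(param)}
    # and call the step function once per iteration with t = 1, 2, 3, ...

In this sketch, setting beta2 = 0 makes the denominator collapse to |grad|, so each normalized gradient reduces to its sign and the update becomes a momentum-accelerated signed-gradient step; this is the interpolation between the signed gradient methods and the adaptive gradient methods referred to in the abstract.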

Liu Ziyin, Zhikang T. Wang, Masahito Ueda • 2020

Related benchmarks

Task                          Dataset                          Result             Rank
Image Classification         CIFAR-100 (test)                 Accuracy 78.4      3518
Language Modeling            WikiText-103 (val)               PPL 66.96          180
Language Model Pre-training  C4 Llama 2 pre-training (val)    Perplexity 16.38   47
Image Classification         CIFAR100 (train)                 --                 8
Image Classification         Mini-ImageNet                    Top-1 Acc 71.73    6
Language Modeling            WikiText-103 (train)             PPL 73.63          4
