
Alada: Alternating Adaptation of Momentum Method for Memory-Efficient Matrix Optimization

About

This work proposes Alada, an adaptive momentum method for stochastic optimization over large-scale matrices. Alada employs a rank-one factorization approach to estimate the second moment of gradients, where the factors are updated alternately to minimize the estimation error. Alada achieves sublinear memory overhead and can be readily extended to optimizing tensor-shaped variables. We also equip Alada with a first-moment estimation rule, which enhances the algorithm's robustness without incurring additional memory overhead. The theoretical performance of Alada aligns with that of traditional methods such as Adam. Numerical studies on several natural language processing tasks demonstrate the reduction in memory overhead and the robustness in training large models relative to Adam and its variants.
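The abstract does not spell out the update rule, but the core idea (a rank-one factorization of the second-moment matrix whose two factors are refreshed alternately) can be illustrated. The following is a minimal sketch, assuming an Adafactor-style fit V ≈ r cᵀ of the squared gradients, with each factor updated by a least-squares step while the other is held fixed; the function and parameter names (alada_second_moment_step, beta2, eps) are hypothetical and do not correspond to the authors' implementation.

    import numpy as np

    def alada_second_moment_step(G, r, c, beta2=0.999, eps=1e-8):
        """Hypothetical second-moment update in the spirit of Alada.

        The full m x n second-moment matrix V is never stored; it is
        approximated by the rank-one product r c^T, and the two factors
        are refreshed one after the other (alternating updates), each as
        a least-squares fit to the squared gradient G2. Optimizer state
        is O(m + n) instead of Adam's O(mn).
        """
        G2 = G * G                               # elementwise squared gradient
        # Update r with c fixed: least-squares fit of G2 ~ r c^T over r
        r_new = (G2 @ c) / (c @ c + eps)
        r = beta2 * r + (1 - beta2) * r_new
        # Update c with the refreshed r fixed: fit of G2 ~ r c^T over c
        c_new = (G2.T @ r) / (r @ r + eps)
        c = beta2 * c + (1 - beta2) * c_new
        # Precondition the gradient with the factored estimate; the outer
        # product is only a transient, same size as the gradient itself
        V_hat = np.outer(r, c)
        return r, c, G / (np.sqrt(np.maximum(V_hat, 0.0)) + eps)

    # Example usage with nonnegative factor initialization
    m, n = 4, 3
    rng = np.random.default_rng(0)
    r, c = np.ones(m), np.ones(n)
    G = rng.normal(size=(m, n))
    r, c, update = alada_second_moment_step(G, r, c)

Note that only the two factor vectors persist between steps, which is where the sublinear memory overhead claimed above would come from; the exponential-moving-average mixing of the factors is an assumption, chosen here to mirror Adam's second-moment smoothing.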

Xiaoyu He, Yu Cai, Jin Jia, Canxi Huang, Wenqing Chen, Zibin Zheng • 2025

Related benchmarks

Task                             Dataset              Result         Rank
Language Modeling                WikiText-2 (test)    PPL 12.61      1541
Natural Language Understanding   GLUE                 SST-2 95.68    452
Machine Translation              WMT 2016 (test)      --             58
