
Practical tradeoffs between memory, compute, and performance in learned optimizers

About

Optimization plays a costly and crucial role in developing machine learning systems. In learned optimizers, the few hyperparameters of commonly used hand-designed optimizers, e.g. Adam or SGD, are replaced with flexible parametric functions. The parameters of these functions are then optimized so that the resulting learned optimizer minimizes a target loss on a chosen class of models. Learned optimizers can both reduce the number of required training steps and improve the final test loss. However, they can be expensive to train, and once trained can be expensive to use due to computational and memory overhead for the optimizer itself. In this work, we identify and quantify the design features governing the memory, compute, and performance trade-offs for many learned and hand-designed optimizers. We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work. Our model and training code are open source.
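To make the idea concrete, below is a minimal, illustrative sketch of a per-parameter learned optimizer in JAX. This is not the paper's released model or API (the actual implementation is in the authors' open-source code); the function names, the tiny-MLP architecture, and the choice of input features (gradient plus a momentum accumulator) are assumptions made purely for illustration.

```python
# Minimal sketch of a per-parameter learned optimizer, assuming a small MLP
# that maps per-parameter features to an update. Names and feature choices
# here are illustrative, not the paper's released API.
import jax
import jax.numpy as jnp

def init_meta_params(key, hidden=4):
    """Meta-parameters: weights of a tiny MLP shared across all parameters.
    Inputs: [gradient, momentum]; output: a scalar update per parameter."""
    k1, k2 = jax.random.split(key)
    return {
        "w1": jax.random.normal(k1, (2, hidden)) * 0.1,
        "b1": jnp.zeros(hidden),
        "w2": jax.random.normal(k2, (hidden, 1)) * 0.1,
        "b2": jnp.zeros(1),
    }

def learned_update(meta, grad, mom, beta=0.9):
    """Replace a hand-designed rule (e.g. SGD's -lr * grad) with an MLP.
    Applied elementwise, so the per-parameter memory overhead is one
    momentum buffer; compute overhead is the MLP forward pass."""
    mom = beta * mom + (1.0 - beta) * grad           # accumulator feature
    feats = jnp.stack([grad, mom], axis=-1)          # (..., 2) features per parameter
    h = jnp.tanh(feats @ meta["w1"] + meta["b1"])
    step = (h @ meta["w2"] + meta["b2"])[..., 0]     # scalar update per parameter
    return step * 0.01, mom                          # small output scale for stability

def inner_step(meta, params, mom, grads):
    """One step of inner-loop training with the learned optimizer."""
    step, mom = learned_update(meta, grads, mom)
    return params - step, mom
```

Meta-training would then unroll inner_step over a training run and optimize the meta-parameters so that the resulting optimizer minimizes the target loss, which is the expensive outer loop the abstract refers to; the per-parameter state (here, a single momentum buffer) is exactly the kind of memory overhead the paper quantifies.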

Luke Metz, C. Daniel Freeman, James Harrison, Niru Maheswaranathan, Jascha Sohl-Dickstein • 2022

Related benchmarks

Task                      Dataset                          Metric                       Result    Rank
Global Optimization       Ackley 2d                        Mean Objective Value         0.169     5
Global Optimization       Ackley 100d                      Mean Final Objective Value   0.0902    5
Global Optimization       Dixon-Price 100d                 Mean Final Objective Value   0.72      5
Global Optimization       Levy 100d                        Mean Final Objective Value   0.0033    5
Global Optimization       Perm Function 0, d, beta 100d    Mean Final Objective Value   4.70e-8   5
Global Optimization       Powell 100d                      Mean Final Objective Value   294       5
Global Optimization       Griewank 100d                    Mean Final Objective Value   1.79      5
Trajectory Optimization   Human3.6M in domain (val)        MPJPE-G                      33        3
Trajectory Optimization   Human3.6M out of domain (val)    MPJPE-G                      29.3      3
