Practical tradeoffs between memory, compute, and performance in learned optimizers
About
Optimization plays a costly and crucial role in developing machine learning systems. In learned optimizers, the few hyperparameters of commonly used hand-designed optimizers, e.g. Adam or SGD, are replaced with flexible parametric functions. The parameters of these functions are then optimized so that the resulting learned optimizer minimizes a target loss on a chosen class of models. Learned optimizers can both reduce the number of required training steps and improve the final test loss. However, they can be expensive to train, and once trained can be expensive to use due to computational and memory overhead for the optimizer itself. In this work, we identify and quantify the design features governing the memory, compute, and performance trade-offs for many learned and hand-designed optimizers. We further leverage our analysis to construct a learned optimizer that is both faster and more memory efficient than previous work. Our model and training code are open source.
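To make the idea concrete, here is a minimal, hypothetical sketch of a per-parameter learned optimizer: a tiny MLP maps per-parameter features (gradient and a momentum accumulator) to an update direction and a log step size. All names (`LearnedOptimizer`, `meta_params`) are illustrative, not taken from the paper's released code, and real learned optimizers use many more input features.

```python
import numpy as np

class LearnedOptimizer:
    """Hypothetical per-parameter learned optimizer (illustrative only)."""

    def __init__(self, meta_params):
        # meta_params: (W1, b1, W2, b2) of a small MLP mapping
        # per-parameter features -> (update direction, log step size).
        # These are the "flexible parametric functions" that meta-training
        # would optimize; here they are just given.
        self.W1, self.b1, self.W2, self.b2 = meta_params
        self.m = None  # momentum accumulator, a common input feature

    def step(self, params, grads, beta=0.9):
        if self.m is None:
            self.m = np.zeros_like(params)
        self.m = beta * self.m + (1.0 - beta) * grads
        # Per-parameter features: raw gradient and momentum.
        feats = np.stack([grads, self.m], axis=-1)   # shape (n, 2)
        h = np.tanh(feats @ self.W1 + self.b1)       # shape (n, hidden)
        out = h @ self.W2 + self.b2                  # shape (n, 2)
        direction, log_step = out[:, 0], out[:, 1]
        # Exponential parameterization keeps the step size positive.
        return params - np.exp(log_step) * direction
```

Note the memory/compute trade-off the paper analyzes is visible even in this toy: each extra accumulator (like `self.m`) costs one buffer the size of the model, and each MLP evaluation adds per-step compute on top of the gradient computation.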
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Global Optimization | Ackley 2d | Mean Objective Value | 0.169 | 5 |
| Global Optimization | Ackley 100d | Mean Final Objective Value | 0.0902 | 5 |
| Global Optimization | Dixon-Price 100d | Mean Final Objective Value | 0.72 | 5 |
| Global Optimization | Levy 100d | Mean Final Objective Value | 0.0033 | 5 |
| Global Optimization | Perm Function 0, d, beta 100d | Mean Final Objective Value | 4.70e-8 | 5 |
| Global Optimization | Powell 100d | Mean Final Objective Value | 294 | 5 |
| Global Optimization | Griewank 100d | Mean Final Objective Value | 1.79 | 5 |
| Trajectory Optimization | Human3.6M in domain (val) | MPJPE-G | 33 | 3 |
| Trajectory Optimization | Human3.6M out of domain (val) | MPJPE-G | 29.3 | 3 |
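The global-optimization rows above use standard synthetic test functions whose global minimum value is 0, so the reported objective values measure distance from the optimum. As a reference point, here is a minimal NumPy implementation of the d-dimensional Ackley function, using the conventional constants a=20, b=0.2, c=2π (assumed here; the benchmark's exact configuration is not stated in this page):

```python
import numpy as np

def ackley(x, a=20.0, b=0.2, c=2.0 * np.pi):
    """d-dimensional Ackley test function; global minimum f(0) = 0."""
    x = np.asarray(x, dtype=float)
    d = x.size
    term1 = -a * np.exp(-b * np.sqrt(np.sum(x ** 2) / d))
    term2 = -np.exp(np.sum(np.cos(c * x)) / d)
    return term1 + term2 + a + np.e
```

Under this definition, the "Ackley 100d" result of 0.0902 means the learned optimizer ends within about 0.09 of the global optimum in 100 dimensions.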