
Optimizing Millions of Hyperparameters by Implicit Differentiation

About

We propose an algorithm for inexpensive gradient-based hyperparameter optimization that combines the implicit function theorem (IFT) with efficient inverse Hessian approximations. We present results on the relationship between the IFT and differentiating through optimization, motivating our algorithm. We use the proposed approach to train modern network architectures with millions of weights and millions of hyperparameters. For example, we learn a data-augmentation network (where every weight is a hyperparameter tuned for validation performance) that outputs augmented training examples. Jointly tuning weights and hyperparameters with our approach is only a few times more costly in memory and compute than standard training.
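The IFT hypergradient has the form dL_val/dλ = ∂L_val/∂λ − (∂L_val/∂w) [∂²L_train/∂w²]⁻¹ ∂²L_train/∂w∂λ, and the inverse training-loss Hessian is the expensive piece; the paper approximates it with a truncated Neumann series. Below is a minimal JAX sketch of that computation, not the authors' released implementation: the names (`neumann_hypergradient`, `train_loss`, `val_loss`) are illustrative, and the step size `alpha` and number of series terms are assumptions that would need tuning per problem.

```python
import jax
import jax.numpy as jnp

def neumann_hypergradient(train_loss, val_loss, w, lam, num_terms=20, alpha=0.1):
    """Sketch: approximate dL_val/dlam via the IFT, with a truncated
    Neumann series standing in for the inverse training-loss Hessian."""
    # Direct term: how the validation loss depends on lam explicitly.
    direct = jax.grad(val_loss, argnums=1)(w, lam)

    # v = dL_val/dw, evaluated at the (approximately) converged weights w.
    v = jax.grad(val_loss, argnums=0)(w, lam)

    # Hessian-vector product with H = d^2 L_train / dw^2 (lam held fixed).
    def hvp(u):
        grad_w = lambda w_: jax.grad(train_loss, argnums=0)(w_, lam)
        return jax.jvp(grad_w, (w,), (u,))[1]

    # Truncated Neumann series: H^{-1} v ~= alpha * sum_j (I - alpha*H)^j v.
    p = v
    acc = v
    for _ in range(num_terms):
        p = p - alpha * hvp(p)
        acc = acc + p
    ihvp = alpha * acc

    # Mixed second-derivative term ihvp^T d^2 L_train / (dw dlam),
    # computed as a gradient of an inner product (a vector-Jacobian product).
    def mixed(u):
        inner = lambda lam_: jnp.vdot(u, jax.grad(train_loss, argnums=0)(w, lam_))
        return jax.grad(inner)(lam)

    # IFT hypergradient: direct term minus the implicit (response) term.
    return direct - mixed(ihvp)

# Toy check (hypothetical losses): the inner problem w* = argmin_w 0.5||w - lam||^2
# gives w* = lam, so the hypergradient of L_val = 0.5||w* - 1||^2 is (lam - 1).
train_loss = lambda w, lam: 0.5 * jnp.sum((w - lam) ** 2)
val_loss = lambda w, lam: 0.5 * jnp.sum((w - 1.0) ** 2)
lam = jnp.array([0.3, 2.0])
w = lam  # exact inner optimum for this toy problem
print(neumann_hypergradient(train_loss, val_loss, w, lam, num_terms=100))
# ~ [-0.7, 1.0], up to Neumann truncation error
```

Only Hessian-vector and vector-Jacobian products appear, never an explicit Hessian, which is what keeps the memory and compute overhead within a small constant factor of standard training even with millions of hyperparameters.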

Jonathan Lorraine, Paul Vicol, David Duvenaud • 2019

Related benchmarks

| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Hyper-data Cleaning | MNIST (test) | Test Accuracy | 0.917 | 31 |
| Protein Function Prediction | PPI 40 tasks (test) | Mean AUC | 67.7 | 13 |
| Protein Function Prediction | Protein Function Prediction 10 held-out tasks (test) | AUC | 0.691 | 11 |
| Data Distillation | Fashion MNIST (test) | Outer Loss | 0.4895 | 8 |
