
Forward and Reverse Gradient-Based Hyperparameter Optimization

About

We study two procedures (reverse-mode and forward-mode) for computing the gradient of the validation error with respect to the hyperparameters of any iterative learning algorithm such as stochastic gradient descent. These procedures mirror two methods of computing gradients for recurrent neural networks and have different trade-offs in terms of running time and space requirements. Our formulation of the reverse-mode procedure is linked to previous work by Maclaurin et al. [2015] but does not require reversible dynamics. The forward-mode procedure is suitable for real-time hyperparameter updates, which may significantly speed up hyperparameter optimization on large datasets. We present experiments on data cleaning and on learning task interactions. We also present one large-scale experiment where the use of previous gradient-based methods would be prohibitive.
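As a rough illustration of the forward-mode procedure the abstract describes, the sketch below propagates the derivative of the weights with respect to a single hyperparameter (here the learning rate) alongside the training dynamics, then applies the chain rule at the validation loss. This is a minimal toy on quadratic losses, not the paper's implementation; the names `train_grad`, `val_grad`, and `hypergrad_forward` are illustrative only.

```python
import numpy as np

a = np.array([1.0, -2.0])   # optimum of the toy training loss
b = np.array([0.5, 0.0])    # optimum of the toy validation loss

def train_grad(w):
    # gradient of the training loss L(w) = 0.5 * ||w - a||^2
    return w - a

def val_grad(w):
    # gradient of the validation loss E(w) = 0.5 * ||w - b||^2
    return w - b

def hypergrad_forward(lr, steps=50):
    """Run gradient descent with learning rate `lr`, propagating
    Z_t = dw_t/d(lr) forward in time alongside the weights."""
    w = np.zeros(2)
    Z = np.zeros_like(w)  # dw_0/d(lr) = 0: the init does not depend on lr
    for _ in range(steps):
        g = train_grad(w)
        # w_{t+1} = w_t - lr * g(w_t); the Hessian of L is the identity
        # here, so dw_{t+1}/d(lr) = (1 - lr) * Z_t - g(w_t)
        Z = (1.0 - lr) * Z - g
        w = w - lr * g
    # chain rule at the final iterate: dE(w_T)/d(lr) = val_grad(w_T) . Z_T
    return val_grad(w) @ Z, w

hypergradient, w_final = hypergrad_forward(lr=0.1)
```

Because the hyperparameter derivative is carried forward step by step, memory cost stays constant in the number of iterations, which is what makes real-time hyperparameter updates feasible; reverse-mode instead replays the trajectory backwards and must store (or recompute) it.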

Luca Franceschi, Michele Donini, Paolo Frasconi, Massimiliano Pontil · 2017

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | MNIST (test) | Accuracy: 87.49 | 882 |
| Image Classification | FashionMNIST (test) | Accuracy: 79.23 | 218 |
| Classification | Diabetes (test) | Accuracy: 72.82 | 32 |
| Hyper-data Cleaning | MNIST (test) | Test Accuracy: 0.8591 | 31 |
| Image Classification | CIFAR-10 (test) | Accuracy: 35.02 | 26 |
| Binary Classification | Heart (test) | -- | 16 |
| Regression | Abalone (test) | L2 Risk: 6.46 | 14 |
| Classification | gisette (test) | Loss: 0.18 | 11 |
| Classification | a1a (test) | Loss: 0.3875 | 11 |
| Classification | ionosphere (test) | Loss: 0.487 | 11 |

Showing 10 of 31 rows.
