
Learning Hyperparameters via a Data-Emphasized Variational Objective

About

When training large models on limited data, avoiding overfitting is paramount. Common grid search or smarter search methods rely on expensive separate runs for each candidate hyperparameter, while carving out a validation set that reduces available training data. In this paper, we study gradient-based learning of hyperparameters via the evidence lower bound (ELBO) objective from Bayesian variational methods. This avoids the need for any validation set. We focus on scenarios where the model is over-parameterized for flexibility and the approximate posterior is chosen to be Gaussian with isotropic covariance for tractability, even though it cannot match the true posterior. In such scenarios, we find the ELBO prioritizes posteriors that match the prior, leading to severe underfitting. Instead, we recommend a data-emphasized ELBO that upweights the likelihood but not the prior. In Bayesian transfer learning of image and text classifiers, our method reduces the 88+ hour grid search of past work to under 3 hours while delivering comparable accuracy. We further demonstrate how our approach enables efficient yet accurate approximations of Gaussian processes with learnable lengthscale kernels.
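To make the objective concrete, here is a minimal sketch of a negative data-emphasized ELBO loss for an isotropic-Gaussian approximate posterior. The function names, arguments, and the scalar emphasis factor `kappa` are illustrative assumptions; the paper's specific choice of the emphasis factor and its prior/posterior parameterization are not reproduced here.

```python
import math

def kl_isotropic_gaussians(mean_sq_norm, s2, sigma2, dim):
    """KL( N(m, s2*I) || N(0, sigma2*I) ) in `dim` dimensions,
    where mean_sq_norm = ||m||^2 (closed form for isotropic Gaussians)."""
    return 0.5 * (dim * s2 / sigma2 + mean_sq_norm / sigma2
                  - dim + dim * math.log(sigma2 / s2))

def neg_de_elbo(avg_nll, n_train, kl, kappa=1.0):
    """Negative data-emphasized ELBO (a loss to minimize):
        kappa * N * E_q[-log p(y|x,w)] + KL(q || p).
    kappa = 1 recovers the standard negative ELBO; kappa > 1
    upweights the likelihood term but NOT the KL-to-prior term,
    counteracting the underfitting described in the abstract."""
    return kappa * n_train * avg_nll + kl
```

Because both the KL term and the prior variance `sigma2` stay inside the objective, hyperparameters like the prior scale can be learned by gradient descent on this single loss, with no validation set.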

Ethan Harvey, Mikhail Petrov, Michael C. Hughes • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-10 (test) | NLL (N=100): 0.36 | 16 |
| Image Classification | Pet-37 (test) | NLL (N=370): 0.32 | 16 |
| Image Classification | Flower-102 (test) | NLL (N=510): 0.58 | 16 |
| Image Classification | CIFAR-10 (test) | Accuracy (N=100): 88.5 | 14 |
| Text Classification | News-4 (test) | NLL: 0.02 | 12 |
| Transfer Learning | CIFAR-10 N=50000 (train) | Avg SGD Run Time (L2-SP): 33 | 3 |
