
Meta-Learning Reliable Priors in the Function Space

About

When data are scarce, meta-learning can improve a learner's accuracy by harnessing previous experience from related learning tasks. However, existing methods produce unreliable uncertainty estimates that are often overconfident. Addressing these shortcomings, we introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in the function space. This allows us to directly steer the probabilistic predictions of the meta-learner towards high epistemic uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates. Finally, we showcase how our approach can be integrated with sequential decision making, where reliable uncertainty quantification is imperative. In our benchmark study on meta-learning for Bayesian Optimization (BO), F-PACOH significantly outperforms all other meta-learners and standard baselines.
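To illustrate the core idea of function-space regularization, here is a minimal NumPy sketch. It is not the paper's implementation: the helper names (`functional_kl_regularizer`, `prior_mean_fn`, `prior_cov_fn`) and the choice of a zero-mean RBF-kernel GP as the hyperprior are assumptions for illustration. The sketch evaluates the meta-learned prior and a GP hyperprior at a set of random measurement points and computes the KL divergence between the two marginal Gaussians; penalizing this quantity pushes the meta-learned prior back towards the broad hyperprior (and hence towards high epistemic uncertainty) wherever meta-training data is insufficient.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between two sets of points."""
    sq_dists = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gaussian_kl(mu_p, cov_p, mu_q, cov_q):
    """KL( N(mu_p, cov_p) || N(mu_q, cov_q) ) between multivariate Gaussians."""
    k = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(cov_q_inv @ cov_p)
                  + diff @ cov_q_inv @ diff
                  - k
                  + logdet_q - logdet_p)

def functional_kl_regularizer(prior_mean_fn, prior_cov_fn, X_measure, jitter=1e-6):
    """Functional KL between a meta-learned prior (given by its marginal mean
    and covariance functions) and a zero-mean GP hyperprior, both evaluated
    at the random measurement points X_measure.  (Hypothetical helper, not
    the authors' API.)"""
    n = X_measure.shape[0]
    mu_p = prior_mean_fn(X_measure)
    cov_p = prior_cov_fn(X_measure) + jitter * np.eye(n)
    mu_q = np.zeros(n)                                    # zero-mean hyperprior
    cov_q = rbf_kernel(X_measure, X_measure) + jitter * np.eye(n)
    return gaussian_kl(mu_p, cov_p, mu_q, cov_q)

# Measurement points sampled uniformly over the domain, as in function-space
# regularization schemes: where the prior already matches the hyperprior, the
# penalty is zero; where it deviates, the penalty grows.
rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(5, 2))
kl_matched = functional_kl_regularizer(
    lambda x: np.zeros(x.shape[0]), lambda x: rbf_kernel(x, x), X)
kl_shrunk = functional_kl_regularizer(
    lambda x: np.zeros(x.shape[0]), lambda x: 0.25 * rbf_kernel(x, x), X)
```

An overconfident prior (here, one whose covariance is shrunk to a quarter of the hyperprior's) incurs a strictly positive penalty, while a prior that matches the hyperprior incurs none; in F-PACOH this penalty enters the meta-training objective and is minimized jointly with the fit to the meta-training tasks.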

Jonas Rothfuss, Dominique Heyn, Jinfan Chen, Andreas Krause • 2021

Related benchmarks

Task            | Dataset                       | Metric                   | Result | Rank
----------------|-------------------------------|--------------------------|--------|-----
Meta-Learning   | Rand. Branin (meta-test)      | Calibration Error        |  0.095 |  6
Meta-Learning   | Camelb. Sin-Noise (meta-test) | Calibration Error        |  0.046 |  6
Meta-Learning   | RPart (meta-test)             | Calibration Error        |  0.125 |  6
Meta-Learning   | XGBoost (meta-test)           | Calibration Error        |  0.077 |  6
Meta-Regression | Rand. Branin (meta-test)      | Test Log-Likelihood      | -1.854 |  6
Meta-Regression | Rand. Hartmann6 (meta-test)   | Avg. Test Log-Likelihood |  1.448 |  6
Meta-Regression | GLMNET (meta-test)            | Avg. Test Log-Likelihood |  1.692 |  6
Meta-Learning   | Rand. Hartmann6 (meta-test)   | Calibration Error        |  0.049 |  6
Meta-Learning   | GLMNET (meta-test)            | Calibration Error        |  0.124 |  6
Meta-Regression | RPart (meta-test)             | Avg. Test Log-Likelihood |  1.596 |  6
(10 of 12 rows shown)

Other info

Code
