Meta-Learning Reliable Priors in the Function Space
About
When data are scarce, meta-learning can improve a learner's accuracy by harnessing previous experience from related learning tasks. However, existing meta-learning methods produce unreliable uncertainty estimates that are often overconfident. To address these shortcomings, we introduce a novel meta-learning framework, called F-PACOH, that treats meta-learned priors as stochastic processes and performs meta-level regularization directly in the function space. This allows us to directly steer the probabilistic predictions of the meta-learner towards high epistemic uncertainty in regions of insufficient meta-training data and, thus, obtain well-calibrated uncertainty estimates. Finally, we showcase how our approach can be integrated with sequential decision-making, where reliable uncertainty quantification is imperative. In our benchmark study on meta-learning for Bayesian Optimization (BO), F-PACOH significantly outperforms all other meta-learners and standard baselines.
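To make the function-space regularization idea concrete, below is a minimal NumPy sketch of a functional KL penalty: the meta-learner's Gaussian predictive marginals are compared against a zero-mean GP hyper-prior at randomly sampled measurement points, so that predictions far from the meta-training data are pulled towards the prior's high uncertainty. This is an illustrative toy, not the authors' implementation; the names `functional_kl_regularizer` and `rbf_kernel`, the RBF hyper-prior, and the uniform sampling of measurement points are assumptions made for the sketch.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel matrix between two sets of points."""
    sq_dists = np.sum((X1[:, None, :] - X2[None, :, :]) ** 2, axis=-1)
    return variance * np.exp(-0.5 * sq_dists / lengthscale ** 2)

def gaussian_kl(mu0, cov0, mu1, cov1, jitter=1e-6):
    """KL( N(mu0, cov0) || N(mu1, cov1) ) between multivariate Gaussians."""
    k = mu0.shape[0]
    cov0 = cov0 + jitter * np.eye(k)
    cov1 = cov1 + jitter * np.eye(k)
    diff = mu1 - mu0
    _, logdet0 = np.linalg.slogdet(cov0)
    _, logdet1 = np.linalg.slogdet(cov1)
    return 0.5 * (np.trace(np.linalg.solve(cov1, cov0))
                  + diff @ np.linalg.solve(cov1, diff)
                  - k + logdet1 - logdet0)

def functional_kl_regularizer(predict_fn, domain_bounds, n_points=16, rng=None):
    """Sample random measurement points in the domain and penalize the KL
    between the meta-learner's marginal predictive distribution and a
    zero-mean GP hyper-prior evaluated at those points."""
    rng = np.random.default_rng() if rng is None else rng
    lo, hi = domain_bounds
    X = rng.uniform(lo, hi, size=(n_points, 1))
    mu_pred, cov_pred = predict_fn(X)      # meta-learner's Gaussian marginals
    mu_prior = np.zeros(n_points)
    cov_prior = rbf_kernel(X, X)
    return gaussian_kl(mu_pred, cov_pred, mu_prior, cov_prior)

# Toy usage: an (over)confident predictor incurs a large penalty, which a
# meta-training objective would trade off against the data fit.
overconfident = lambda X: (np.sin(X).ravel(), 1e-3 * np.eye(len(X)))
print(functional_kl_regularizer(overconfident, domain_bounds=(-3.0, 3.0)))
```

In meta-training, such a penalty would be added to the (negative) meta-learning objective, so the optimizer can only sharpen predictions where meta-training data actually supports it.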
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Meta-Learning | Rand. Branin (meta-test) | Calibration Error | 0.095 | 6 |
| Meta-Learning | Camelb. Sin-Noise (meta-test) | Calibration Error | 0.046 | 6 |
| Meta-Learning | Rand. Hartmann6 (meta-test) | Calibration Error | 0.049 | 6 |
| Meta-Learning | RPart (meta-test) | Calibration Error | 0.125 | 6 |
| Meta-Learning | GLMNET (meta-test) | Calibration Error | 0.124 | 6 |
| Meta-Learning | XGBoost (meta-test) | Calibration Error | 0.077 | 6 |
| Meta-Regression | Rand. Branin (meta-test) | Avg. Test Log-Likelihood | -1.854 | 6 |
| Meta-Regression | Rand. Hartmann6 (meta-test) | Avg. Test Log-Likelihood | 1.448 | 6 |
| Meta-Regression | GLMNET (meta-test) | Avg. Test Log-Likelihood | 1.692 | 6 |
| Meta-Regression | RPart (meta-test) | Avg. Test Log-Likelihood | 1.596 | 6 |
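For regression, calibration error is commonly measured as the average gap between nominal confidence levels and the empirical coverage of the corresponding predictive quantiles. The following sketch shows one standard variant, assuming Gaussian predictive marginals; the exact definition used on this benchmark may differ, and `regression_calibration_error` is a name chosen here for illustration.

```python
import numpy as np
from scipy.stats import norm

def regression_calibration_error(y_true, mu_pred, sigma_pred,
                                 levels=np.linspace(0.05, 0.95, 19)):
    """Average absolute gap between nominal quantile levels and the empirical
    coverage of the predictive quantiles, assuming Gaussian marginals
    N(mu_pred, sigma_pred^2). Zero means perfectly calibrated."""
    # Probability integral transform: where each observation falls in its
    # predictive CDF.
    pit = norm.cdf(y_true, loc=mu_pred, scale=sigma_pred)
    gaps = [abs(np.mean(pit <= q) - q) for q in levels]
    return float(np.mean(gaps))

# Toy usage with well-specified predictions: the error should be near zero.
rng = np.random.default_rng(0)
mu, sigma = np.zeros(5000), np.ones(5000)
y = rng.normal(mu, sigma)
print(regression_calibration_error(y, mu, sigma))
```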