
Maximizing acquisition functions for Bayesian optimization

About

Bayesian optimization is a sample-efficient approach to global optimization that relies on theoretically motivated value heuristics (acquisition functions) to guide its search process. Fully maximizing acquisition functions produces the Bayes' decision rule, but this ideal is difficult to achieve since these functions are frequently non-trivial to optimize. This statement is especially true when evaluating queries in parallel, where acquisition functions are routinely non-convex, high-dimensional, and intractable. We first show that acquisition functions estimated via Monte Carlo integration are consistently amenable to gradient-based optimization. Subsequently, we identify a common family of acquisition functions, including EI and UCB, whose properties not only facilitate but justify use of greedy approaches for their maximization.
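The Monte Carlo idea from the abstract can be illustrated with a small sketch. Under a Gaussian (GP) posterior, expected improvement (EI) estimated by Monte Carlo via the reparameterization trick matches the closed-form value for q = 1, and the same estimator extends to parallel (q > 1) batches by sampling the joint posterior over the batch. All means, variances, and the correlation below are made-up toy values, not numbers from the paper.

```python
import numpy as np
from math import erf, exp, pi, sqrt

rng = np.random.default_rng(0)
n = 200_000                      # Monte Carlo sample count
best = 0.5                       # incumbent (best observed value)

# --- q = 1: toy GP posterior at one candidate point (made-up numbers) ---
mu, sigma = 0.3, 0.8

# Reparameterization trick: y = mu + sigma * eps with eps ~ N(0, 1), so the
# estimator is differentiable w.r.t. (mu, sigma), and hence w.r.t. the query.
eps = rng.standard_normal(n)
ei_mc = np.maximum(mu + sigma * eps - best, 0.0).mean()

# Closed-form EI for a Gaussian posterior (only available for q = 1)
z = (mu - best) / sigma
Phi = 0.5 * (1.0 + erf(z / sqrt(2.0)))
phi = exp(-0.5 * z * z) / sqrt(2.0 * pi)
ei_exact = sigma * (z * Phi + phi)

# --- q = 2: parallel EI by sampling the joint posterior over the batch ---
m = np.array([0.3, 0.2])                     # joint posterior mean (toy)
K = np.array([[0.64, 0.20],                  # joint posterior covariance (toy)
              [0.20, 0.49]])
L = np.linalg.cholesky(K)
Y = m + rng.standard_normal((n, 2)) @ L.T    # joint samples, shape (n, 2)
qei_mc = np.maximum(Y.max(axis=1) - best, 0.0).mean()

print(round(ei_mc, 3), round(ei_exact, 3), round(qei_mc, 3))
```

For q = 1 the Monte Carlo estimate agrees with the analytic value; for q = 2 the batch value is at least as large, since the max over the batch dominates either point alone. In the paper's setting the gradients of such estimators would be taken with automatic differentiation rather than by hand.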

James T. Wilson, Frank Hutter, Marc Peter Deisenroth • 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Bayesian Optimization | noise-free synthetic problems (test) | Normalized Score | 0.681 | 42 |
| Function Optimization | Ackley | Avg Max Reward | 0.973 | 12 |
| Bayesian Optimization | Rastrigin d=50 synthetic (round 10) | Relative Batch Instantaneous Regret | 0.768 | 9 |
| Bayesian Optimization | Ackley d=50 synthetic (round 10) | Relative Batch Instantaneous Regret | 0.874 | 9 |
| Bayesian Optimization | Ackley d=100 synthetic (round 10) | Relative Batch Instantaneous Regret | 0.863 | 9 |
| Bayesian Optimization | Ackley d=2 synthetic (round 10) | Relative Batch Instantaneous Regret | 0.999 | 9 |
| Bayesian Optimization | Levy (d=2) synthetic (round 10) | Relative Batch Instantaneous Regret | 1.046 | 9 |
| Bayesian Optimization | Rastrigin d=2 synthetic (round 10) | Relative Batch Instantaneous Regret | 0.999 | 9 |
| Bayesian Optimization | Rosenbrock (d=2) synthetic (round 10) | Relative Batch Instantaneous Regret | 0.992 | 9 |
| Bayesian Optimization | Styblinski-Tang d=2 synthetic (round 10) | Relative Batch Instantaneous Regret | 1.024 | 9 |
Showing 10 of 42 rows
