
Stopping Bayesian Optimization with Probabilistic Regret Bounds

About

Bayesian optimization is a popular framework for efficiently tackling black-box search problems. As a rule, these algorithms operate by iteratively choosing what to evaluate next until some predefined budget has been exhausted. We investigate replacing this de facto stopping rule with criteria based on the probability that a point satisfies a given set of conditions. We focus on the prototypical example of an $(\epsilon, \delta)$-criterion: stop when a solution has been found whose value is within $\epsilon > 0$ of the optimum with probability at least $1 - \delta$ under the model. For Gaussian process priors, we show that Bayesian optimization satisfies this criterion under mild technical assumptions. Further, we give a practical algorithm for evaluating Monte Carlo stopping rules in a manner that is both sample efficient and robust to estimation error. These findings are accompanied by empirical results which demonstrate the strengths and weaknesses of the proposed approach.
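To make the $(\epsilon, \delta)$-criterion concrete, the following is a minimal Monte Carlo sketch under a Gaussian process posterior: draw joint posterior samples over a candidate grid and stop once the incumbent's value is within $\epsilon$ of the sampled maximum with estimated probability at least $1 - \delta$. This is an illustration only, not the paper's sample-efficient, error-robust estimator; the function `epsilon_delta_stop`, the RBF kernel hyperparameters, and the grid-based incumbent are all assumptions made for the sketch.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2, variance=1.0):
    """Squared-exponential kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def epsilon_delta_stop(X, y, X_grid, eps=0.1, delta=0.05,
                       n_samples=1024, noise=1e-4, seed=0):
    """Monte Carlo check of an (eps, delta)-stopping rule on a candidate grid:
    stop if P[f(x_inc) >= max_grid f - eps] >= 1 - delta under the GP posterior.
    Returns (stop, estimated probability)."""
    rng = np.random.default_rng(seed)

    # Exact GP regression with a zero prior mean and Gaussian noise.
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X_grid, X)
    Kss = rbf_kernel(X_grid, X_grid)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks @ alpha                      # posterior mean on the grid
    V = np.linalg.solve(L, Ks.T)
    cov = Kss - V.T @ V                  # posterior covariance on the grid

    # Joint posterior draws over the grid via an eigendecomposition
    # (robust to the near-singular covariances dense grids produce).
    w, Q = np.linalg.eigh(cov)
    w = np.clip(w, 0.0, None)
    Z = rng.standard_normal((n_samples, len(X_grid)))
    f = mu + (Z * np.sqrt(w)) @ Q.T      # shape: (n_samples, n_grid)

    # Incumbent: grid point with the best posterior mean (a simplification;
    # in practice the incumbent is usually the best evaluated point).
    i_inc = int(np.argmax(mu))
    p = float(np.mean(f[:, i_inc] >= f.max(axis=1) - eps))
    return bool(p >= 1 - delta), p
```

In a Bayesian optimization loop, this check would replace the fixed evaluation budget: after each new observation, refit the model and stop as soon as the rule fires. Note that a plain Monte Carlo estimate like this one is itself noisy near the threshold $1 - \delta$, which is exactly the estimation-error issue the paper's practical algorithm addresses.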

James T. Wilson • 2024

Related benchmarks

| Task | Dataset | Stopping Time | Rank |
|---|---|---|---|
| Global Optimization | GP 10-2 (D=4, T=256) | 86 | 12 |
| Global Optimization | GP 10-6 (D=4, T=128) | 61 | 12 |
| Global Optimization | GP 10-2 (D=2, T=128) | 23 | 6 |
| Global Optimization | Hartmann 3 (D=3, T=64) | 19 | 6 |
| Global Optimization | GP† 10-6 (D=2, T=64) | 17 | 6 |
| Global Optimization | GP 10-6 (D=6, T=256) | 134 | 6 |
| Global Optimization | GP 10-2 (D=6, T=512) | 235 | 6 |
| Global Optimization | Branin (D=2, T=128) | 33 | 6 |
| Global Optimization | Hartmann 6 (D=6, T=64) | 40 | 6 |
| Global Optimization | Rosenbrock (D=4, T=96) | 84 | 6 |

Showing 10 of 12 rows

Other info

Code
