
The Parallel Knowledge Gradient Method for Batch Bayesian Optimization

About

In many applications of black-box optimization, one can evaluate multiple points simultaneously, e.g., when evaluating the performances of several different neural network architectures in a parallel computing environment. In this paper, we develop a novel batch Bayesian optimization algorithm: the parallel knowledge gradient method. By construction, this method provides the one-step Bayes-optimal batch of points to sample. We provide an efficient strategy for computing this Bayes-optimal batch of points, and we demonstrate that the parallel knowledge gradient method finds global optima significantly faster than previous batch Bayesian optimization algorithms, both on synthetic test functions and when tuning hyperparameters of practical machine learning algorithms, especially when function evaluations are noisy.

Jian Wu, Peter I. Frazier • 2016
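In the paper's notation, the parallel (q-point) knowledge gradient values a batch z^{1:q} by the expected improvement in the maximum of the Gaussian-process posterior mean after observing that batch: q-KG(z^{1:q}) = E_n[ max_x mu_{n+q}(x) ] - max_x mu_n(x). As a concrete illustration, the sketch below uses BoTorch's qKnowledgeGradient acquisition function, which implements a one-shot variant of this criterion; the toy objective, dimensions, and optimizer settings are illustrative assumptions, not values from the paper.

```python
import torch
from botorch.models import SingleTaskGP
from botorch.fit import fit_gpytorch_mll
from botorch.acquisition import qKnowledgeGradient
from botorch.optim import optimize_acqf
from gpytorch.mlls import ExactMarginalLogLikelihood

torch.manual_seed(0)

# Hypothetical noisy black-box objective standing in for, e.g., a
# hyperparameter-tuning loss; not from the paper.
def objective(X):
    return -(X**2).sum(dim=-1, keepdim=True) + 0.05 * torch.randn(
        X.shape[0], 1, dtype=X.dtype
    )

d = 2
bounds = torch.tensor([[-1.0] * d, [1.0] * d], dtype=torch.double)

# Small initial design, then fit a GP surrogate.
train_X = bounds[0] + (bounds[1] - bounds[0]) * torch.rand(8, d, dtype=torch.double)
train_Y = objective(train_X)
model = SingleTaskGP(train_X, train_Y)
fit_gpytorch_mll(ExactMarginalLogLikelihood(model.likelihood, model))

# q-KG: expected gain in the maximum of the posterior mean from observing
# a batch of q points, approximated here with fantasy samples.
qkg = qKnowledgeGradient(model, num_fantasies=32)

# Jointly optimize a batch of q = 4 points; optimize_acqf handles the
# one-shot reparameterization of qKG internally and returns only the
# actual candidate batch.
batch, _ = optimize_acqf(
    acq_function=qkg,
    bounds=bounds,
    q=4,
    num_restarts=8,
    raw_samples=128,
)
print(batch)  # next batch of 4 points to evaluate in parallel
```

Because q-KG scores the batch jointly rather than greedily one point at a time, the returned points spread out to maximize the expected one-step improvement, which is what makes the method one-step Bayes-optimal.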

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Bayesian Optimization | 50 optimization problems: COCO, BoTorch, Bayesmark (aggregated) | Mean RP | 1.66 | 26 |
| Bayesian Optimization | Portfolio Optimization | Computation Time (s) | 3.79e+3 | 5 |
| Bayesian Optimization | StybTang-HT synthetic (test) | Computation Time (s) | 3.01e+3 | 5 |
| Bayesian Optimization | HPOBench | Computation Time (s) | 3.70e+3 | 5 |
| Bayesian Optimization | NAS-Bench-201 | Computation Time (s) | 4.20e+3 | 5 |
| Bayesian Optimization | Ackley-HT synthetic (test) | Computation Time (s) | 3.39e+3 | 5 |
| Bayesian Optimization | Ackley-NS synthetic (test) | Computation Time (s) | 3.45e+3 | 5 |
| Bayesian Optimization | Rosenbrock-HT synthetic (test) | Computation Time (s) | 3.31e+3 | 5 |
| Bayesian Optimization | Rosenbrock-NS synthetic (test) | Computation Time (s) | 3.38e+3 | 5 |
| Bayesian Optimization | StybTang-NS synthetic (test) | Computation Time (s) | 3.38e+3 | 5 |
