
Gaussian Process Optimization in the Bandit Setting: No Regret and Experimental Design

About

Many applications require optimizing an unknown, noisy function that is expensive to evaluate. We formalize this task as a multi-armed bandit problem, where the payoff function is either sampled from a Gaussian process (GP) or has low RKHS norm. We resolve the important open problem of deriving regret bounds for this setting, which imply novel convergence rates for GP optimization. We analyze GP-UCB, an intuitive upper-confidence based algorithm, and bound its cumulative regret in terms of maximal information gain, establishing a novel connection between GP optimization and experimental design. Moreover, by bounding the latter in terms of operator spectra, we obtain explicit sublinear regret bounds for many commonly used covariance functions. In some important cases, our bounds have surprisingly weak dependence on the dimensionality. In our experiments on real sensor data, GP-UCB compares favorably with other heuristic GP optimization approaches.
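The GP-UCB rule described in the abstract selects, at each round t, the candidate maximizing the posterior mean plus a confidence-width term, x_t = argmax_x mu_{t-1}(x) + sqrt(beta_t) * sigma_{t-1}(x); for a finite candidate set the paper's analysis uses beta_t = 2 log(|D| t^2 pi^2 / (6 delta)). The sketch below is a minimal illustration of that rule, not the authors' implementation: the RBF kernel, lengthscale, noise level, and discrete candidate grid are all illustrative assumptions.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=0.2):
    # Squared-exponential covariance between rows of A and rows of B.
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X, y, Xs, noise=1e-2, lengthscale=0.2):
    # Standard GP regression posterior (mean and marginal variance) at Xs.
    K = rbf_kernel(X, X, lengthscale) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xs, lengthscale)
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    mu = Ks.T @ alpha
    v = np.linalg.solve(L, Ks)
    var = 1.0 - np.sum(v**2, axis=0)  # RBF prior variance is 1 on the diagonal
    return mu, np.maximum(var, 0.0)

def gp_ucb(f, candidates, T=30, delta=0.1, noise=1e-2):
    # GP-UCB over a finite candidate set: pick argmax of mu + sqrt(beta_t)*sigma.
    rng = np.random.default_rng(0)
    D = len(candidates)
    x0 = candidates[rng.integers(D)]
    X, y = [x0], [f(x0) + noise * rng.standard_normal()]
    for t in range(1, T):
        # Confidence schedule from the paper's finite-set analysis (Theorem 1).
        beta_t = 2 * np.log(D * t**2 * np.pi**2 / (6 * delta))
        mu, var = gp_posterior(np.array(X), np.array(y), candidates, noise)
        x = candidates[np.argmax(mu + np.sqrt(beta_t * var))]
        X.append(x)
        y.append(f(x) + noise * rng.standard_normal())
    return np.array(X), np.array(y)
```

With this choice of beta_t, the paper bounds the cumulative regret as O*(sqrt(T * beta_T * gamma_T)), where gamma_T is the maximal information gain after T rounds; the early iterations above are dominated by the variance term (exploration) and later ones by the posterior mean (exploitation).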

Niranjan Srinivas, Andreas Krause, Sham M. Kakade, Matthias Seeger · 2009

Related benchmarks

Task | Dataset | Metric | Result | Rank
Bayesian Optimization | 50 optimization problems: COCO, BoTorch, Bayesmark (aggregated) | Mean RP | 1.75 | 26
IL-2 Phenotype Optimization | Fusion | Cumulative Top-k Recall | 17.4 | 20
IFN-γ Phenotype Optimization | Fusion | Cumulative Top-k Recall | 10 | 20
IL-2 Phenotype Optimization | Achilles | Cumulative Top-k Recall | 14.3 | 20
IFN-γ Phenotype Optimization | Gene2Vec | Cumulative Top-k Recall | 9.3 | 20
IL-2 Phenotype Optimization | Gene2Vec | Cumulative Top-k Recall | 12.3 | 20
IFN-γ Phenotype Optimization | Achilles | Cumulative Top-k Recall | 7.7 | 20
IL-2 Phenotype Optimization | GenePT | Cumulative Top-k Recall | 11.8 | 20
IFN-γ Phenotype Optimization | GenePT | Cumulative Top-k Recall | 8.6 | 20
Bioactivity-guided Molecule Generation | PMO-1K JNK3 | Top-10 AUC | 0.346 | 13
(10 of 62 rows shown)
