
Nonmyopic Global Optimisation via Approximate Dynamic Programming

About

Global optimisation aims to optimise expensive-to-evaluate black-box functions without gradient information. Bayesian optimisation, one of the best-known techniques, typically employs Gaussian processes as surrogate models, leveraging their probabilistic nature to balance exploration and exploitation. However, these processes become computationally prohibitive in high-dimensional spaces. Recent alternatives, based on inverse distance weighting (IDW) and radial basis functions (RBFs), offer competitive and computationally lighter solutions. Despite their efficiency, both traditional global and Bayesian optimisation strategies suffer from the myopic nature of their acquisition functions, which focus on immediate improvement while neglecting the future implications of the sequential decision-making process. Nonmyopic acquisition functions devised for the Bayesian setting have shown promise in improving long-term performance, yet their combination with deterministic surrogate models remains unexplored. In this work, we introduce novel nonmyopic acquisition strategies tailored to IDW and RBF surrogates, based on approximate dynamic programming paradigms, including rollout and multi-step scenario-based optimisation schemes, to enable lookahead acquisition. These methods optimise a sequence of query points over a horizon by predicting the evolution of the surrogate model, inherently managing the exploration-exploitation trade-off via optimisation techniques. The proposed approach represents a significant advance in extending nonmyopic acquisition principles, previously confined to Bayesian optimisation, to deterministic models. Empirical results on synthetic and hyperparameter tuning benchmark problems, a constrained problem, as well as on a data-driven predictive control application, demonstrate that these nonmyopic methods outperform conventional myopic approaches, leading to faster and more robust convergence.
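To make the rollout idea concrete, the sketch below illustrates a nonmyopic acquisition of the kind described: an RBF surrogate is fitted to the data, a myopic acquisition combines the surrogate prediction with an IDW-style exploration bonus, and a rollout scores a candidate by simulating a few further greedy steps in which the surrogate's own prediction stands in for the unknown observation. All function names, the Gaussian kernel, the arctan-based exploration term, and the additive cost over the horizon are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def rbf_surrogate(X, y, eps=1.0):
    """Fit a Gaussian-kernel RBF interpolant and return a point predictor."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    Phi = np.exp(-(eps * d) ** 2)
    # Small ridge term keeps the kernel matrix invertible (assumed choice).
    w = np.linalg.solve(Phi + 1e-9 * np.eye(len(X)), y)
    def predict(x):
        dx = np.linalg.norm(X - x, axis=-1)
        return np.exp(-(eps * dx) ** 2) @ w
    return predict

def idw_exploration(X, x):
    """IDW-style exploration bonus: large far from sampled points (assumed form)."""
    d2 = np.sum((X - x) ** 2, axis=-1)
    if np.any(d2 == 0.0):
        return 0.0  # no bonus at already-sampled points
    return (2.0 / np.pi) * np.arctan(1.0 / np.sum(1.0 / d2))

def myopic_acq(X, y, x, beta=1.0):
    """Myopic acquisition (to minimise): prediction minus exploration bonus."""
    return rbf_surrogate(X, y)(x) - beta * idw_exploration(X, x)

def rollout_acq(X, y, x, candidates, horizon=3, beta=1.0):
    """Rollout score for candidate x: its myopic cost plus the cost of
    `horizon - 1` simulated greedy steps on the updated surrogate, where each
    fantasised observation is the current surrogate's prediction."""
    total = myopic_acq(X, y, x, beta)
    Xs = np.vstack([X, x])
    ys = np.append(y, rbf_surrogate(X, y)(x))
    for _ in range(horizon - 1):
        scores = [myopic_acq(Xs, ys, c, beta) for c in candidates]
        j = int(np.argmin(scores))
        total += scores[j]
        y_hat = rbf_surrogate(Xs, ys)(candidates[j])  # fantasised observation
        Xs = np.vstack([Xs, candidates[j]])
        ys = np.append(ys, y_hat)
    return total
```

In use, the next query point would be the candidate minimising `rollout_acq` rather than `myopic_acq`, so the search accounts for how each evaluation reshapes the surrogate over the remaining budget.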

Filippo Airaldi, Bart De Schutter, Azita Dabiri• 2024

Related benchmarks

Task                | Dataset                 | Metric                    | Result | Rank
Global Optimization | DropWave                | Mean Objective Value      | 0.63   | 23
Global Optimization | Brochu function 2d      | Mean Final Optimality Gap | 0.744  | 13
Global Optimization | Bukin function          | Mean Final Optimality Gap | 74.7   | 13
Global Optimization | Dixon-Price function 4d | Mean Optimality Gap       | 0.905  | 13
Global Optimization | Ackley function         | Mean Final Optimality Gap | 0.787  | 13
Global Optimization | Adjiman function        | Mean Final Optimality Gap | 0.885  | 13
Global Optimization | Beale function          | Mean Optimality Gap       | 0.818  | 13
Global Optimization | Bohachevsky function    | Mean Final Optimality Gap | 0.966  | 13
Global Optimization | Brochu (4d) function    | Mean Final Optimality Gap | 0.628  | 13
Global Optimization | Camel hump function 3d  | Mean Optimality Gap       | 86.2   | 13

(Showing 10 of 13 rows.)
