
Evolution of Benchmark: Black-Box Optimization Benchmark Design through Large Language Model

About

Benchmark design in Black-Box Optimization (BBO) is a fundamental yet open-ended topic. Early BBO benchmarks are predominantly human-crafted, which introduces expert bias and constrains diversity. Automating the design process can relieve the human-in-the-loop burden while enhancing diversity and objectivity. We propose Evolution of Benchmark (EoB), an automated BBO benchmark designer empowered by a large language model (LLM) and its program-evolution capability. Specifically, we formulate benchmark design as a bi-objective optimization problem that maximizes (i) landscape diversity and (ii) algorithm-differentiation ability across a portfolio of BBO solvers. Under this paradigm, EoB iteratively prompts the LLM to evolve a population of benchmark programs and employs a reflection-based scheme to co-evolve each landscape and its corresponding program. Comprehensive experiments validate that EoB is a competitive candidate for multiple uses: 1) benchmarking BBO algorithms; 2) training and testing learning-assisted BBO algorithms; 3) serving as a proxy for expensive real-world problems.
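The evolutionary loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the LLM mutation step is replaced by a random perturbation stand-in, and the two objectives (landscape diversity, algorithm differentiation) use toy numeric placeholders that are scalarized by a simple sum.

```python
# Hypothetical sketch of an EoB-style generation step.
# All names, scoring functions, and the "signature" field are
# illustrative assumptions, not the paper's actual code.
import random

def landscape_diversity(program, population):
    # Placeholder: mean distance of this program's toy landscape
    # signature from the rest of the population.
    return sum(abs(program["signature"] - p["signature"])
               for p in population) / max(len(population), 1)

def algorithm_differentiation(solver_scores):
    # Placeholder: spread of solver performances on this benchmark;
    # a wider spread means the benchmark separates solvers better.
    return max(solver_scores) - min(solver_scores)

def llm_evolve(program):
    # Stand-in for the LLM mutation/reflection step: in EoB this
    # would prompt the LLM to rewrite the benchmark program.
    child = dict(program)
    child["signature"] = program["signature"] + random.uniform(-1.0, 1.0)
    return child

def eob_step(population, solver_portfolio):
    # One generation: evolve each benchmark program and keep the
    # better of parent/child under the (scalarized) bi-objective score.
    next_pop = []
    for prog in population:
        child = llm_evolve(prog)
        for cand in (prog, child):
            scores = [solve(cand) for solve in solver_portfolio]
            cand["score"] = (landscape_diversity(cand, population)
                             + algorithm_differentiation(scores))
        next_pop.append(max((prog, child), key=lambda p: p["score"]))
    return next_pop

random.seed(0)
population = [{"signature": float(i)} for i in range(4)]
# Toy solver portfolio: each "solver" maps a benchmark to a score.
solvers = [lambda p: p["signature"] * 0.5,
           lambda p: -p["signature"],
           lambda p: 1.0]
population = eob_step(population, solvers)
```

In the actual method the selection would be bi-objective (e.g. Pareto-based) rather than a scalarized sum, and the candidate programs would be real BBO benchmark functions rather than numeric stubs.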

Chen Wang, Sijie Ma, Zeyuan Ma, Yue-Jiao Gong • 2026

Related benchmarks

Task                                             Dataset   Result                           Rank
Black-Box Optimization Performance Consistency   UAV       Performance Consistency: 0.396   3
Black-box Optimization                           UAV       --                               3
Black-box Optimization                           HPO       --                               3
