
POP: Prior-fitted Optimizer Policies

About

Optimization refers to the task of finding extrema of an objective function. Classical gradient-based optimizers are highly sensitive to hyperparameter choices: in highly non-convex settings, their performance relies on carefully tuned learning rates, momentum, and gradient accumulation. To address these limitations, we introduce POP (Prior-fitted Optimizer Policies), a meta-learned optimizer that predicts coordinate-wise step sizes conditioned on contextual information from the optimization trajectory. Our model is trained on millions of synthetic optimization problems sampled from a novel prior spanning both convex and non-convex objectives. We evaluate POP on an established benchmark comprising 47 optimization functions of varying complexity, where it consistently outperforms first-order gradient-based methods, non-convex optimization approaches (e.g., evolutionary strategies), Bayesian optimization, and a recent meta-learned competitor under matched budget constraints. Our evaluation demonstrates strong generalization without task-specific tuning.
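To make the idea concrete, here is a minimal sketch of a coordinate-wise step-size policy. The feature set, network size, and training-free random weights are illustrative assumptions, not details from the paper: a real POP-style policy would be meta-learned on synthetic problems sampled from the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory_features(grad, prev_grad, momentum):
    # Per-coordinate context features; this particular feature set is an
    # assumption for illustration, not taken from the paper.
    return np.stack([grad, prev_grad, momentum, np.abs(grad)], axis=-1)

class StepSizePolicy:
    """Tiny MLP applied independently to each coordinate's feature vector.

    In a meta-learned setting the weights would be fitted over millions of
    synthetic objectives; here they are random placeholders.
    """
    def __init__(self, n_features=4, hidden=8):
        self.w1 = rng.normal(0.0, 0.5, (n_features, hidden))
        self.w2 = rng.normal(0.0, 0.5, (hidden, 1))

    def __call__(self, feats):
        h = np.tanh(feats @ self.w1)
        # softplus keeps predicted step sizes positive; the 1e-3 scale is an
        # arbitrary base step chosen so the toy example stays stable
        return 1e-3 * np.log1p(np.exp(h @ self.w2)).squeeze(-1)

def optimize(grad_f, x0, policy, steps=200, beta=0.9):
    x = x0.copy()
    prev_g = np.zeros_like(x0)
    m = np.zeros_like(x0)
    for _ in range(steps):
        g = grad_f(x)
        m = beta * m + (1 - beta) * g
        eta = policy(trajectory_features(g, prev_g, m))  # one step size per coordinate
        x -= eta * g
        prev_g = g
    return x

# Usage: quadratic with very different per-coordinate curvatures,
# where coordinate-wise step sizes matter.
A = np.array([1.0, 10.0, 100.0])
f = lambda x: 0.5 * np.sum(A * x**2)
grad_f = lambda x: A * x
x = optimize(grad_f, np.ones(3), StepSizePolicy())
```

Even with random policy weights, the positive coordinate-wise step sizes drive the quadratic toward its minimum; the point of meta-learning is to fit the policy so this behavior transfers across objectives.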

Jan Kobiolka, Christian Frey, Gresa Shala, Arlind Kadra, Erind Bedalli, Josif Grabocka • 2026

Related benchmarks

| Task         | Dataset              | Metric           | Result | Rank |
|--------------|----------------------|------------------|--------|------|
| Optimization | Synthetic 8D (test)  | Mean Performance | 0.2433 | 5    |
| Optimization | Synthetic 16D (test) | Mean Performance | 24.67  | 5    |
| Optimization | Synthetic 32D (test) | Mean Performance | 0.2508 | 5    |
