Evolutionary Discovery of Reinforcement Learning Algorithms via Large Language Models
About
Reinforcement learning algorithms are defined by their learning update rules, which are typically hand-designed and fixed. We present an evolutionary framework for discovering reinforcement learning algorithms by searching directly over executable update rules that implement complete training procedures. The approach builds on REvolve, an evolutionary system that uses large language models as generative variation operators, and extends it from reward-function discovery to algorithm discovery. To promote the emergence of nonstandard learning rules, the search excludes canonical mechanisms such as actor–critic structures, temporal-difference losses, and value bootstrapping. Because reinforcement learning algorithms are highly sensitive to internal scalar parameters, we introduce a post-evolution refinement stage in which a large language model proposes feasible hyperparameter ranges for each evolved update rule. Evaluated end-to-end by full training runs on multiple Gymnasium benchmarks, the discovered algorithms achieve competitive performance relative to established baselines, including SAC, PPO, DQN, and A2C.
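The evolutionary search described above can be sketched as a simple generational loop. This is a minimal illustration, not the paper's implementation: `evaluate` and `llm_mutate` are hypothetical stand-ins for the real components, where fitness would come from a full Gymnasium training run of the candidate update rule, and variation would come from an LLM rewriting the rule's code.

```python
import random

def evaluate(rule):
    # Stand-in fitness function. In the paper's setting this would be the
    # average return from a full training run using `rule` as the update rule.
    # Here we score closeness to an arbitrary target to make the loop runnable.
    return -abs(rule["lr"] - 3e-4) - abs(rule["scale"] - 1.0)

def llm_mutate(rule):
    # Stand-in for the LLM variation operator. The real system would prompt a
    # large language model to produce a modified executable update rule; here
    # we simply perturb one scalar parameter multiplicatively.
    child = dict(rule)
    key = random.choice(list(child))
    child[key] *= random.uniform(0.5, 2.0)
    return child

def evolve(pop_size=8, generations=20, seed=0):
    random.seed(seed)
    # Initial population of candidate update rules (parameterized toys here).
    population = [
        {"lr": random.uniform(1e-4, 1e-2), "scale": random.uniform(0.1, 2.0)}
        for _ in range(pop_size)
    ]
    for _ in range(generations):
        scored = sorted(population, key=evaluate, reverse=True)
        elites = scored[: pop_size // 2]  # truncation selection
        # Elites survive unchanged; the rest are LLM-mutated offspring.
        population = elites + [
            llm_mutate(random.choice(elites))
            for _ in range(pop_size - len(elites))
        ]
    return max(population, key=evaluate)

best = evolve()
```

The post-evolution refinement stage would then ask the LLM for feasible ranges of each scalar in `best` and tune within them; that step is omitted here.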
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | HalfCheetah | Average Return | 2.41e+3 | 22 |
| Reinforcement Learning | Swimmer | Average Return | 247.5 | 20 |
| Reinforcement Learning | Pusher | Average Return | 39.88 | 10 |
| Reinforcement Learning | LunarLander | Maximum Return | 260.6 | 5 |
| Reinforcement Learning | Reacher | Maximum Return | 5.43 | 5 |
| Reinforcement Learning | Walker2D | Maximum Return | 1.60e+3 | 5 |
| Reinforcement Learning | CartPole | Maximum Return | 500 | 5 |
| Reinforcement Learning | MountainCar | Maximum Return | 108.7 | 5 |
| Reinforcement Learning | Inverted Pendulum | Maximum Evaluation Return | 1.00e+3 | 5 |
| Reinforcement Learning | Acrobot | -- | -- | 5 |