
Evolution Strategies as a Scalable Alternative to Reinforcement Learning

About

We explore the use of Evolution Strategies (ES), a class of black box optimization algorithms, as an alternative to popular MDP-based RL techniques such as Q-learning and Policy Gradients. Experiments on MuJoCo and Atari show that ES is a viable solution strategy that scales extremely well with the number of CPUs available: By using a novel communication strategy based on common random numbers, our ES implementation only needs to communicate scalars, making it possible to scale to over a thousand parallel workers. This allows us to solve 3D humanoid walking in 10 minutes and obtain competitive results on most Atari games after one hour of training. In addition, we highlight several advantages of ES as a black box optimization technique: it is invariant to action frequency and delayed rewards, tolerant of extremely long horizons, and does not need temporal discounting or value function approximation.

Tim Salimans, Jonathan Ho, Xi Chen, Szymon Sidor, Ilya Sutskever • 2017
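The communication trick described in the abstract works because a Gaussian perturbation is fully determined by its random seed: every worker can regenerate every other worker's perturbation locally, so only scalar returns (plus small integer seeds) ever need to cross the network, and all workers apply the identical parameter update in sync. A minimal single-process sketch of that update rule, on a toy quadratic objective (function names, the objective, and all hyperparameters here are illustrative, not taken from the paper):

```python
import numpy as np

def perturbation(seed, dim):
    # Common random numbers: any worker can reconstruct this exact Gaussian
    # noise vector from the integer seed alone, so the full parameter vector
    # never has to be communicated.
    return np.random.default_rng(seed).standard_normal(dim)

def es_step(f, theta, sigma, alpha, seeds):
    # Each "worker" evaluates one perturbed parameter vector and reports
    # only a scalar return.
    returns = np.array([f(theta + sigma * perturbation(s, theta.size))
                        for s in seeds])
    # Normalize returns so the update is invariant to the reward scale.
    a = (returns - returns.mean()) / (returns.std() + 1e-8)
    # Reconstruct all perturbations locally and apply the return-weighted
    # update (the standard ES gradient estimate).
    grad = sum(w * perturbation(s, theta.size)
               for w, s in zip(a, seeds)) / (len(seeds) * sigma)
    return theta + alpha * grad

# Demo: maximize f(x) = -||x - target||^2 (a stand-in for an RL return).
target = np.array([1.0, -0.5, 2.0])
f = lambda x: -np.sum((x - target) ** 2)
theta = np.zeros(3)
master = np.random.default_rng(0)
for _ in range(300):
    seeds = master.integers(0, 2**31, size=100)
    theta = es_step(f, theta, sigma=0.1, alpha=0.01, seeds=seeds)
```

Because the update uses only ranked/normalized scalar returns, the same sketch works unchanged for non-differentiable or long-horizon objectives, which is the source of the robustness properties listed in the abstract.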

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Reinforcement Learning | LunarLanderContinuous-v2 | Mean Reward | 115 | 59 |
| Reinforcement Learning | Atari 2600 Montezuma's Revenge | Score | 0.0 | 45 |
| Reinforcement Learning | HalfCheetah-v3 | Mean Reward | 2420 | 34 |
| Reinforcement Learning | InvertedPendulum-v2 | Mean Reward | 651.9 | 27 |
| Continuous Control | Humanoid 17-DoF | Final Return | 12500 | 21 |
| Reinforcement Learning | Atari 2600 Qbert | Score | 147.5 | 20 |
| Continuous Control | Hopper 3-DoF | Final Return | 2560 | 18 |
| Reinforcement Learning | Swimmer-v3 | Mean Reward | 318.4 | 15 |
| Global Optimization | F5 benchmark function | Final Error | 0.0012 | 14 |
| Global Optimization | F9 benchmark function | Final Error | 0.018 | 14 |

Showing 10 of 31 rows.
