
Continuous control with deep reinforcement learning

About

We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
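The two ingredients the abstract names — a deterministic policy gradient actor update driven by the critic, and (in the DQN lineage) slowly tracking target parameters — can be sketched in a few lines. The 1-D task, the linear actor/critic parameterisation, and all hyper-parameters below are illustrative assumptions, not the paper's actual networks:

```python
# Minimal sketch of a DDPG-style actor update and soft target update.
# The linear actor/critic and all constants here are assumptions for
# illustration only, not the paper's architecture.
import numpy as np

rng = np.random.default_rng(0)

theta = 0.1                     # actor parameter: mu(s) = theta * s
w = np.array([0.5, 1.0, -0.2])  # critic weights for features [s, a, s*a]

def q_value(s, a):
    """Critic Q(s, a) with hand-picked linear features (an assumption)."""
    return w[0] * s + w[1] * a + w[2] * s * a

def actor_update(theta, states, lr=1e-2):
    """Deterministic policy gradient ascent on Q:
    theta += lr * mean( dQ/da * dmu/dtheta ).
    For this linear critic, dQ/da = w[1] + w[2]*s and dmu/dtheta = s."""
    grad = np.mean((w[1] + w[2] * states) * states)
    return theta + lr * grad

def soft_update(target, main, tau=0.005):
    """Polyak-averaged target parameters: target <- tau*main + (1-tau)*target."""
    return tau * main + (1 - tau) * target

states = rng.normal(size=32)    # a batch of sampled states
theta = actor_update(theta, states)
theta_target = soft_update(0.0, theta)
```

The actor is updated in the direction that increases the critic's value estimate, while the slowly-moving target copy stabilises learning; in the full algorithm the critic is also trained by temporal-difference regression against targets computed with these target parameters.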

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra • 2015

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | LunarLanderContinuous v2 | Mean Reward | 337.2 | 59 |
| Reinforcement Learning | MountainCarContinuous v0 | Average Agent Reward | 93.62 | 48 |
| Reinforcement Learning | Walker2D v5 | Average Return | 200.3 | 45 |
| Reinforcement Learning Control | Pendulum v1 | Mean Score | 942.2 | 40 |
| Reinforcement Learning | Pendulum | Avg Episode Reward | -155.6 | 26 |
| Reinforcement Learning | BipedalWalker | Average Episode Reward | 209.4 | 20 |
| Continuous Control | Walker2D v5 | Avg Return | 200.3 | 17 |
| Goal1 | Safety Gymnasium | Reward | 7.9 | 16 |
| Button2 | Safety Gymnasium | Reward | 9.09 | 16 |
| HalfCheetah | Mujoco | Reward | 9.24 | 16 |

Showing 10 of 135 rows
