
Continuous control with deep reinforcement learning

About

We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
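The core actor-critic idea described above can be sketched in a few lines. The toy example below is illustrative only, not the paper's implementation: it uses a linear actor and a known quadratic Q-function in place of learned networks (and omits the replay buffer and target networks the full algorithm borrows from Deep Q-Learning), to show how the deterministic policy gradient pushes the actor toward actions that maximize Q.

```python
import numpy as np

# Toy setup (illustrative names): 1-D state, 1-D continuous action.
# Actor: a = w_a * s. Critic: Q(s, a) = -(a - s)^2, so the optimal
# action equals the state and the ideal actor weight is w_a = 1.
rng = np.random.default_rng(0)
w_a = 0.0   # actor parameter, initialized away from the optimum
lr = 0.05   # learning rate

for _ in range(200):
    s = rng.normal()                 # sample a state
    a = w_a * s                      # deterministic action from the actor
    dq_da = -2.0 * (a - s)           # gradient of Q w.r.t. the action
    da_dw = s                        # gradient of the action w.r.t. w_a
    w_a += lr * dq_da * da_dw        # deterministic policy gradient ascent

# w_a converges toward 1, the action that maximizes Q in every state.
```

In the full algorithm both the actor and the critic are deep networks, the critic is trained by temporal-difference learning on transitions drawn from a replay buffer, and slowly-updated target networks stabilize the bootstrapped targets.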

Timothy P. Lillicrap, Jonathan J. Hunt, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver, Daan Wierstra • 2015

Related benchmarks

Task | Dataset | Metric | Result | Rank
Goal1 | Safety Gymnasium | Reward | 7.9 | 16
Button2 | Safety Gymnasium | Reward | 9.09 | 16
HalfCheetah | MuJoCo | Reward | 9.24 | 16
Goal2 | Safety Gymnasium | Reward | 9.13 | 16
Button1 | Safety Gymnasium | Reward | 6.2 | 16
FetchReach | Gymnasium Robotics | Reward | 4.74 | 16
Reinforcement Learning | Pendulum | Avg Episode Reward | -155.6 | 15
Reinforcement Learning | Hopper | Avg Episode Reward | 1.68e+3 | 15
Continuous Control | Hopper | Average Reward | 0.676 | 15
Reinforcement Learning | MountainCar | Avg Episode Reward | 0.9536 | 14

Showing 10 of 87 rows
