Continuous control with deep reinforcement learning
About
We adapt the ideas underlying the success of Deep Q-Learning to the continuous action domain. We present an actor-critic, model-free algorithm based on the deterministic policy gradient that can operate over continuous action spaces. Using the same learning algorithm, network architecture and hyper-parameters, our algorithm robustly solves more than 20 simulated physics tasks, including classic problems such as cartpole swing-up, dexterous manipulation, legged locomotion and car driving. Our algorithm is able to find policies whose performance is competitive with those found by a planning algorithm with full access to the dynamics of the domain and its derivatives. We further demonstrate that for many of the tasks the algorithm can learn policies end-to-end: directly from raw pixel inputs.
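The core of the approach described above is an actor-critic update built on the deterministic policy gradient with slowly-updated target networks. The sketch below illustrates that update using linear function approximators so the gradients can be written by hand; the deep networks, replay buffer, and all hyper-parameter values from the paper are replaced by illustrative stand-ins.

```python
import numpy as np

# Hedged sketch of a DDPG-style update with linear actor/critic.
# Dimensions, reward, and hyper-parameters are illustrative assumptions,
# not the paper's settings.
rng = np.random.default_rng(0)
S, A = 3, 1                                   # state / action dims (assumed)
theta = rng.normal(scale=0.1, size=(S, A))    # actor:  mu(s)   = theta.T @ s
w = rng.normal(scale=0.1, size=(S + A,))      # critic: Q(s, a) = w @ [s; a]
theta_t, w_t = theta.copy(), w.copy()         # slowly-tracking target params

gamma, alpha, beta, tau = 0.99, 1e-3, 1e-2, 0.005

def mu(th, s):
    """Deterministic policy."""
    return th.T @ s

def q(wv, s, a):
    """Linear action-value estimate."""
    return wv @ np.concatenate([s, a])

for _ in range(200):                          # toy transitions, no replay buffer
    s = rng.normal(size=S)
    a = mu(theta, s) + 0.1 * rng.normal(size=A)   # exploration noise
    r = -float(a @ a)                             # illustrative reward
    s2 = rng.normal(size=S)

    # Critic: one-step TD error computed against the *target* parameters.
    target = r + gamma * q(w_t, s2, mu(theta_t, s2))
    delta = target - q(w, s, a)
    w += beta * delta * np.concatenate([s, a])

    # Actor: deterministic policy gradient.
    # For the linear critic, grad_a Q is the action slice of w,
    # and grad_theta mu(s) is s, so the chain rule gives an outer product.
    grad_a_q = w[S:]
    theta += alpha * np.outer(s, grad_a_q)

    # Soft ("Polyak") updates keep the targets trailing the learned params.
    theta_t = tau * theta + (1 - tau) * theta_t
    w_t = tau * w + (1 - tau) * w_t
```

The target networks and soft updates are what make bootstrapped TD learning stable here; updating the targets at rate `tau` rather than copying them each step is the design choice the paper's algorithm relies on.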
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Reinforcement Learning | LunarLanderContinuous v2 | Mean Reward: 337.2 | 59 |
| Reinforcement Learning | MountainCarContinuous v0 | Average Agent Reward: 93.62 | 48 |
| Reinforcement Learning | Walker2D v5 | Average Return: 200.3 | 45 |
| Reinforcement Learning Control | Pendulum v1 | Mean Score: 942.2 | 40 |
| Reinforcement Learning | Pendulum | Avg Episode Reward: -155.6 | 26 |
| Reinforcement Learning | BipedalWalker | Average Episode Reward: 209.4 | 20 |
| Continuous Control | Walker2D v5 | Avg Return: 200.3 | 17 |
| Goal1 | Safety Gymnasium | Reward: 7.9 | 16 |
| Button2 | Safety Gymnasium | Reward: 9.09 | 16 |
| HalfCheetah | Mujoco | Reward: 9.24 | 16 |