Efficient Continuous Control with Double Actors and Regularized Critics
About
How to obtain good value estimates is one of the key problems in Reinforcement Learning (RL). Current value estimation methods, such as DDPG and TD3, suffer from unnecessary overestimation or underestimation bias. In this paper, we explore the potential of double actors, a design that has long been neglected, for better value function estimation in continuous control settings. First, we uncover and demonstrate the bias-alleviation property of double actors by building double actors upon a single critic and upon double critics, to handle the overestimation bias in DDPG and the underestimation bias in TD3 respectively. Next, we find, interestingly, that double actors help improve the exploration ability of the agent. Finally, to mitigate the uncertainty of value estimates from double critics, we further propose to regularize the critic networks under the double-actors architecture, which gives rise to the Double Actors Regularized Critics (DARC) algorithm. Extensive experimental results on challenging continuous control tasks show that DARC significantly outperforms state-of-the-art methods with higher sample efficiency.
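The core idea, combining the pessimistic min of double critics (TD3-style) with the optimistic max (DDPG-style) over actions proposed by two actors, can be sketched as follows. This is a minimal illustrative sketch under our own assumptions (function names, the blend weight `lam`, and the plain-Python critics are all hypothetical), not the authors' exact formulation:

```python
def darc_style_target(q1, q2, actions, reward, gamma=0.99, lam=0.7):
    """Hypothetical sketch of a DARC-style target value.

    q1, q2:  the two critics, each a function mapping an action to a value
    actions: candidate next actions proposed by the two actors
    lam:     weight on the pessimistic min estimate (an assumed parameter)
    """
    candidates = []
    for a in actions:
        v_min = min(q1(a), q2(a))  # pessimistic estimate (TD3-style min)
        v_max = max(q1(a), q2(a))  # optimistic estimate (DDPG-style)
        # blend the two to trade off under- vs. overestimation bias
        candidates.append(lam * v_min + (1 - lam) * v_max)
    # keep the better of the two actors' proposals
    return reward + gamma * max(candidates)
```

In this sketch, `lam = 1` recovers a purely pessimistic (clipped double-Q) target, `lam = 0` a purely optimistic one; the critic regularization in DARC would additionally keep `q1` and `q2` close to each other, which this sketch omits.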
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Cheetah Run | DeepMind Control Suite | Average Return | 685.5 | 4 |
| Finger Turn Easy | DeepMind Control Suite | Average Return | 937.7 | 4 |
| Hopper Hop | DeepMind Control Suite | Average Return | 120.3 | 4 |
| Walker Walk | DeepMind Control Suite | Average Return | 852.3 | 4 |
| Ball In Cup Catch | DeepMind Control Suite | Average Return | 980.1 | 4 |
| Continuous Control | MuJoCo Ant v5 (test) | Average Return | 3.93e+3 | 4 |
| Continuous Control | MuJoCo HalfCheetah v5 (test) | Average Return | 4.23e+3 | 4 |
| Continuous Control | MuJoCo Walker2d v5 (test) | Average Return | 3.76e+3 | 4 |
| Continuous Control | MuJoCo Humanoid v5 (test) | Average Return | 4.55e+3 | 4 |
| Finger Turn Hard | DeepMind Control Suite | Average Return | 784.2 | 4 |