Distributional Soft Actor-Critic with Three Refinements
About
Reinforcement learning (RL) has shown remarkable success in solving complex decision-making and control tasks. However, many model-free RL algorithms suffer performance degradation from inaccurate value estimation, particularly overestimation of Q-values, which can lead to suboptimal policies. To address this issue, we previously proposed the Distributional Soft Actor-Critic (DSAC, or DSACv1), an off-policy RL algorithm that improves value estimation accuracy by learning a continuous Gaussian value distribution. Despite its effectiveness, DSACv1 suffers from training instability and sensitivity to reward scaling, caused by the high variance that return randomness induces in the critic gradients.

In this paper, we introduce three key refinements to DSACv1 that overcome these limitations and further improve Q-value estimation accuracy: expected value substitution, twin value distribution learning, and variance-based critic gradient adjustment. The enhanced algorithm, termed DSAC with Three Refinements (DSAC-T, or DSACv2), is systematically evaluated across a diverse set of benchmark tasks. Without task-specific hyperparameter tuning, DSAC-T consistently matches or outperforms leading model-free RL algorithms, including SAC, TD3, DDPG, TRPO, and PPO, in all tested environments. DSAC-T also maintains a stable learning process and robust performance across varying reward scales. Its effectiveness is further demonstrated by controlling a real wheeled robot, highlighting its potential for deployment in practical robotic tasks.
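Two of the refinements named above can be illustrated in miniature. The sketch below is a simplified assumption of how twin value distribution learning and expected value substitution might combine when forming the critic's Gaussian target; the function name, signature, and target form are illustrative, not the paper's exact equations.

```python
import numpy as np

def dsac_t_target(reward, gamma, next_means, next_stds, next_logpi, alpha):
    """Hypothetical sketch of a DSAC-T style soft critic target.

    - Twin value distribution learning: of the two predicted Gaussian
      return distributions for the next state, use the one with the
      smaller mean to curb overestimation (analogous to clipped double-Q).
    - Expected value substitution: build the target around the expected
      return (the Gaussian mean) rather than a sampled return, which
      reduces the variance of the critic gradient.
    """
    i = int(np.argmin(next_means))   # pick the more pessimistic twin
    q_next = next_means[i]           # expected value, not a return sample
    # Soft (entropy-regularized) one-step target mean, as in SAC.
    target_mean = reward + gamma * (q_next - alpha * next_logpi)
    # Propagate the chosen twin's return uncertainty through the discount.
    target_std = gamma * next_stds[i]
    return target_mean, target_std
```

In a full implementation the critic would be trained by maximizing the log-likelihood of such targets under its predicted Gaussian, with the third refinement (variance-based critic gradient adjustment) scaling that loss by the predicted variance.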
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Locomotion | Humanoid-Bench Stand (test) | Return | 4.4 | 11 |
| Locomotion | DMC Dog-run (test) | Average Return | 6.7 | 8 |
| Locomotion | DMC Dog-walk (test) | Average Return | 10.5 | 8 |
| Locomotion | DMC Humanoid-walk (test) | Average Return | 1.1 | 8 |
| Locomotion | DMC Dog-trot (test) | Average Return | 7.7 | 8 |
| Locomotion | DMC Dog-stand (test) | Average Return | 25.1 | 8 |
| Continuous Control | MuJoCo Reacher v4 (test) | Mean Episodic Return | -4 | 6 |
| Continuous Control | MuJoCo InvertedPendulum v4 (test) | Mean Episodic Return | 860 | 6 |
| Continuous Control | MuJoCo HalfCheetah v4 (test) | Mean Episodic Return | 1.17e+4 | 6 |
| Continuous Control | MuJoCo Ant v4 (test) | Mean Episodic Return | 3.50e+3 | 6 |