Diffusion-based Reinforcement Learning via Q-weighted Variational Policy Optimization
About
Diffusion models have garnered widespread attention in Reinforcement Learning (RL) for their powerful expressiveness and multimodality. It has been verified that diffusion policies can significantly improve the performance of RL algorithms in continuous control tasks by overcoming the limitations of unimodal policies, such as Gaussian policies, and by providing the agent with enhanced exploration capabilities. However, existing works mainly focus on the application of diffusion policies in offline RL, while their incorporation into online RL is less investigated. The training objective of the diffusion model, known as the variational lower bound, cannot be optimized directly in online RL due to the unavailability of 'good' actions, which makes diffusion policy improvement difficult. To overcome this, we propose a novel model-free diffusion-based online RL algorithm, Q-weighted Variational Policy Optimization (QVPO). Specifically, we introduce the Q-weighted variational loss, which can be proven to be a tight lower bound of the policy objective in online RL under certain conditions. To fulfill these conditions, we introduce Q-weight transformation functions for general scenarios. Additionally, to further enhance the exploration capability of the diffusion policy, we design a special entropy regularization term. We also develop an efficient behavior policy that improves sample efficiency by reducing the variance of the diffusion policy during online interactions. Consequently, the QVPO algorithm leverages the exploration capabilities and multimodality of diffusion policies, preventing the RL agent from converging to a sub-optimal policy. To verify the effectiveness of QVPO, we conduct comprehensive experiments on MuJoCo benchmarks. The results demonstrate that QVPO achieves state-of-the-art performance in both cumulative reward and sample efficiency.
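The core idea of the Q-weighted variational loss can be sketched in a few lines: a denoising-style reconstruction error per sampled action, weighted by a transformation of that action's Q-value. The sketch below is a minimal illustration under stated assumptions; the transformation (shift-and-normalize to non-negative weights) and the one-step denoising error are simplified stand-ins, not the paper's exact formulation, and the function names are hypothetical.

```python
import numpy as np

def qweight_transform(q_values):
    # Hypothetical Q-weight transformation: shift Q-values so all weights
    # are non-negative, then normalize them to sum to one. If all Q-values
    # are equal, fall back to uniform weights.
    w = q_values - q_values.min()
    s = w.sum()
    return w / s if s > 0 else np.full_like(w, 1.0 / len(w))

def q_weighted_variational_loss(actions, denoised, q_values):
    # Per-action squared denoising error (a stand-in for the diffusion
    # model's variational-lower-bound term), weighted so that actions
    # with higher Q-values contribute more to the policy loss.
    per_action = ((actions - denoised) ** 2).mean(axis=1)
    weights = qweight_transform(q_values)
    return float((weights * per_action).sum())
```

In this toy form, an action with the lowest Q-value in the batch receives zero weight, so the policy is pulled only toward higher-value actions, which is the mechanism that substitutes for the missing 'good' actions in online RL.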
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Online Reinforcement Learning | OpenAI Gym MuJoCo Normalized v4 | Normalized Mean Return | 45 | 50 |
| Reinforcement Learning | MuJoCo Half-Cheetah | Average Return | 8.08e+3 | 28 |
| Reinforcement Learning | MuJoCo Hopper | Average Return | 960 | 24 |
| Reinforcement Learning | MuJoCo Ant | Average Return | 2.12e+3 | 24 |
| Reinforcement Learning | Swimmer | Average Returns | 83 | 20 |
| Reinforcement Learning | MuJoCo Humanoid | Average Return | 1.38e+3 | 12 |
| Locomotion | Humanoid-Bench Stand (test) | Return | 7.6 | 11 |
| Reinforcement Learning | Gym-MuJoCo Walker2D | Average Return | 2.87e+3 | 10 |
| Continuous Control | MuJoCo Hopper v5 | Average Return | 3.67e+3 | 8 |
| Continuous Control | MuJoCo HalfCheetah v5 | Max Return | 1.03e+4 | 8 |