Variational Delayed Policy Optimization
About
In environments with delayed observations, state augmentation, which appends the actions taken within the delay window to the state, is commonly adopted to restore the Markov property and enable reinforcement learning (RL). However, state-of-the-art (SOTA) RL techniques based on Temporal-Difference (TD) learning often suffer from learning inefficiency, because the augmented state space expands significantly with the delay. To improve learning efficiency without sacrificing performance, this work introduces Variational Delayed Policy Optimization (VDPO), a novel framework that reformulates delayed RL as a variational inference problem. This problem is further modelled as a two-step iterative optimization, where the first step is TD learning in the delay-free environment, whose state space is small, and the second step is behaviour cloning, which can be solved far more efficiently than TD learning. We not only provide a theoretical analysis of VDPO in terms of sample complexity and performance, but also empirically demonstrate that VDPO achieves performance consistent with SOTA methods on the MuJoCo benchmark while using approximately 50% fewer samples.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continuous Control | MuJoCo Walker2d v4 | Normalized Performance | 127 | 34 |
| Reinforcement Learning | HalfCheetah v3 | Mean Reward | 4.82e+3 | 34 |
| Continuous Control | MuJoCo Hopper v4 | Normalized Performance | 1.22 | 28 |
| Reinforcement Learning | InvertedPendulum v2 | Mean Reward | 764.5 | 27 |
| Reinforcement Learning | Ant v3 | Average Final Return | 4.37e+3 | 26 |
| Reinforcement Learning | Walker2d v3 | Average Final Return | 3.40e+3 | 26 |
| Reinforcement Learning | Humanoid v3 | Average Final Return | 2.84e+3 | 26 |
| Reinforcement Learning | Hopper v3 | Average Final Return | 1.94e+3 | 26 |
| Continuous Control | MuJoCo Ant v4 | Normalized Return | 1.11 | 24 |
| Continuous Control | MuJoCo Humanoid v4 | Normalized Performance (Ret_nor) | 115 | 18 |