Distributional Reinforcement Learning with Diffusion Bridge Critics
About
Recent advances in diffusion-based reinforcement learning (RL) have demonstrated promising results on a wide range of continuous control tasks. However, existing work in this area focuses on diffusion policies while leaving diffusion critics unexplored. Since policy optimization fundamentally relies on the critic, accurate value estimation is far more important than policy expressiveness. Furthermore, given the stochasticity of most RL tasks, it is well established that the critic is better modeled distributionally. Motivated by these observations, we propose a novel distributional RL method with Diffusion Bridge Critics (DBC). DBC directly models the inverse cumulative distribution function (CDF) of the Q value, which allows it to capture the value distribution accurately and, owing to the strong distribution-matching capability of the diffusion bridge, prevents it from collapsing into a trivial Gaussian. Moreover, we derive an analytic integral formula to address discretization errors in DBC, which is essential for value estimation. To our knowledge, DBC is the first work to employ a diffusion bridge model as the critic. Notably, DBC is a plug-and-play component and can be integrated into most existing RL frameworks. Experimental results on MuJoCo robot control benchmarks demonstrate the superiority of DBC over previous distributional critic models.
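To make the inverse-CDF view concrete: if a distributional critic outputs the quantile function F⁻¹(τ) of the return Z(s, a), the scalar Q value is recovered as Q(s, a) = ∫₀¹ F⁻¹(τ) dτ. The sketch below illustrates this with a midpoint-rule quantile integration, a standard device in quantile-based distributional RL; it is not DBC's analytic integral formula, and the `inverse_cdf` callable stands in for whatever model (e.g. a diffusion bridge) produces the quantiles.

```python
import numpy as np

def expected_q_from_quantiles(inverse_cdf, n_taus=64):
    """Approximate Q = integral_0^1 F^{-1}(tau) d tau by the midpoint rule.

    `inverse_cdf` is a hypothetical stand-in for a learned quantile model:
    it maps an array of quantile levels tau in (0, 1) to return quantiles.
    """
    # Midpoint quantile levels tau_i = (2i + 1) / (2N).
    taus = (np.arange(n_taus) + 0.5) / n_taus
    # Averaging the quantiles approximates the integral over [0, 1].
    return float(np.mean(inverse_cdf(taus)))

# Example with a known distribution: for Z ~ Uniform(2, 5),
# the quantile function is F^{-1}(tau) = 2 + 3 * tau and E[Z] = 3.5.
q = expected_q_from_quantiles(lambda taus: 2.0 + 3.0 * taus)
```

For an affine quantile function the midpoint rule is exact, so `q` equals 3.5 here; for a learned, nonlinear quantile model the approximation error shrinks as `n_taus` grows, which is the discretization error the paper's analytic formula is said to address.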
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Hopper v5 | Average Return | 3.73e+3 | 93 |
| Reinforcement Learning | Ant v5 | Average Return | 6.63e+3 | 49 |
| Reinforcement Learning | Walker2D v5 | Average Return | 6.34e+3 | 43 |
| Reinforcement Learning | Halfcheetah v5 | Average Return | 1.38e+4 | 43 |
| Reinforcement Learning | Humanoid v5 | Performance Score | 5.91e+3 | 11 |
| Continuous Control | Ant v5 | Average Return | 6.50e+3 | 7 |
| Continuous Control | Hopper v5 | Average Return | 3.73e+3 | 7 |
| Continuous Control | Humanoid v5 | Average Return | 5.91e+3 | 7 |
| Continuous Control | Walker2D v5 | Average Return | 6.14e+3 | 7 |
| Continuous Control | Halfcheetah v5 | Average Return | 1.38e+4 | 7 |