
Distributional Reinforcement Learning with Diffusion Bridge Critics

About

Recent advances in diffusion-based reinforcement learning (RL) methods have demonstrated promising results on a wide range of continuous control tasks. However, existing work in this field focuses on diffusion policies while leaving diffusion critics unexplored. Since policy optimization fundamentally relies on the critic, accurate value estimation is far more important than policy expressiveness. Furthermore, given the stochasticity of most reinforcement learning tasks, the critic is more appropriately modeled with a distributional representation. Motivated by these points, we propose a novel distributional RL method with Diffusion Bridge Critics (DBC). DBC directly models the inverse cumulative distribution function (CDF) of the Q value. This allows us to accurately capture the value distribution and, owing to the strong distribution-matching capability of the diffusion bridge, prevents it from collapsing into a trivial Gaussian. Moreover, we derive an analytic integral formula to address discretization errors in DBC, which is essential for accurate value estimation. To our knowledge, DBC is the first work to employ a diffusion bridge model as the critic. Notably, DBC is also a plug-and-play component and can be integrated into most existing RL frameworks. Experimental results on MuJoCo robot control benchmarks demonstrate the superiority of DBC compared with previous distributional critic models.
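The abstract's central idea, modeling the inverse CDF (quantile function) of the Q value and integrating it to obtain a scalar value estimate, can be illustrated with a toy sketch. This is not the paper's implementation: the learned inverse CDF is stood in for by a fixed Gaussian quantile function, and the value is obtained by the midpoint-rule discretization that the paper's analytic integral formula is designed to improve upon. All function names here are illustrative.

```python
# Toy sketch of an inverse-CDF (quantile-function) critic. A diffusion bridge
# critic would learn the mapping (state, action, tau) -> F^{-1}(tau); here we
# hard-code a Gaussian return distribution as a stand-in.
from statistics import NormalDist


def quantile_critic(tau: float, mean: float = 2.0, std: float = 0.5) -> float:
    """Stand-in for a learned inverse CDF of the Q-value distribution."""
    return NormalDist(mean, std).inv_cdf(tau)


def value_estimate(n_quantiles: int = 64) -> float:
    """Scalar value Q(s, a) = integral of F^{-1}(tau) over tau in [0, 1],
    approximated with a midpoint rule over n_quantiles fractions.
    This discretization is the error source an analytic formula would avoid."""
    taus = [(i + 0.5) / n_quantiles for i in range(n_quantiles)]
    return sum(quantile_critic(t) for t in taus) / n_quantiles


print(value_estimate())  # close to 2.0, the mean of the toy distribution
```

Because the expectation of a random variable equals the integral of its quantile function over [0, 1], averaging quantile values recovers the mean; a richer distributional critic additionally exposes the spread and tails of the return distribution to the policy update.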

Shutong Ding, Yimiao Zhou, Ke Hu, Mokai Pan, Shan Zhong, Yanwei Fu, Jingya Wang, Ye Shi • 2026

Related benchmarks

Task                   | Dataset       | Metric            | Result  | Rank
Reinforcement Learning | Hopper v5     | Average Return    | 3.73e+3 | 93
Reinforcement Learning | Ant v5        | Average Return    | 6.63e+3 | 49
Reinforcement Learning | Walker2D v5   | Average Return    | 6.34e+3 | 43
Reinforcement Learning | Halfcheetah v5| Average Return    | 1.38e+4 | 43
Reinforcement Learning | Humanoid v5   | Performance Score | 5.91e+3 | 11
Continuous Control     | Ant v5        | Average Return    | 6.50e+3 | 7
Continuous Control     | Hopper v5     | Average Return    | 3.73e+3 | 7
Continuous Control     | Humanoid v5   | Average Return    | 5.91e+3 | 7
Continuous Control     | Walker2D v5   | Average Return    | 6.14e+3 | 7
Continuous Control     | Halfcheetah v5| Average Return    | 1.38e+4 | 7
