
Efficient Online Reinforcement Learning for Diffusion Policy

About

Diffusion policies have achieved superior performance in imitation learning and offline reinforcement learning (RL) due to their rich expressiveness. However, the conventional diffusion training procedure requires samples from the target distribution, which is unavailable in online RL since we cannot sample from the optimal policy. Backpropagating the policy gradient through the diffusion process incurs large computational cost and instability, making it expensive and not scalable. To enable efficient training of diffusion policies in online RL, we generalize conventional denoising score matching by reweighting the loss function. The resulting Reweighted Score Matching (RSM) preserves the optimal solution and low computational cost of denoising score matching, while eliminating the need to sample from the target distribution and allowing the learned policy to optimize value functions. We introduce two tractable reweighted loss functions to solve two commonly used policy optimization problems, policy mirror descent and max-entropy policy, resulting in two practical algorithms named Diffusion Policy Mirror Descent (DPMD) and Soft Diffusion Actor-Critic (SDAC). We conducted comprehensive comparisons on MuJoCo benchmarks. The empirical results show that the proposed algorithms outperform recent online RL methods with diffusion policies on most tasks, and that DPMD improves over soft actor-critic by more than 120% on Humanoid and Ant.
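The core idea in the abstract, reweighting the denoising score matching loss per sample so its optimum tilts toward high-value actions without sampling from the target distribution, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the single noise level, and the choice of value-based weights (e.g. a softmax over Q-values) are all assumptions for the example.

```python
import numpy as np

def reweighted_dsm_loss(score_fn, actions, weights, sigma=0.1, rng=None):
    """Per-sample reweighted denoising score matching (illustrative sketch).

    Standard DSM perturbs each action with Gaussian noise and regresses the
    score network onto the score of the perturbation kernel. Multiplying each
    sample's loss by a value-derived weight (e.g. softmax(Q(s, a) / alpha))
    shifts the optimum toward a value-tilted target distribution while keeping
    the cheap DSM training loop. All names here are hypothetical.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    noise = rng.standard_normal(actions.shape)
    noised = actions + sigma * noise
    # DSM regression target: score of the Gaussian perturbation kernel.
    target = -noise / sigma
    per_sample = np.sum((score_fn(noised) - target) ** 2, axis=-1)
    return float(np.mean(weights * per_sample))

# Example usage with a trivial score function and uniform weights:
score_fn = lambda x: np.zeros_like(x)
actions = np.zeros((4, 3))
weights = np.ones(4)
loss = reweighted_dsm_loss(score_fn, actions, weights)
```

With uniform weights this reduces to ordinary denoising score matching; non-uniform weights are where the value function enters.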

Haitong Ma, Tianyi Chen, Kai Wang, Na Li, Bo Dai • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Online Reinforcement Learning | OpenAI Gym MuJoCo Normalized v4 | Normalized Mean Return | 84.7 | 50 |
| Locomotion | Humanoid-Bench Stand (test) | Return | 7.7 | 11 |
| Continuous Control | MuJoCo HalfCheetah v5 | Max Return | 1.07e+4 | 8 |
| Robotic Manipulation | FrankaKitchen N=1 | Task Accomplishment | 64 | 8 |
| Robotic Manipulation | FrankaKitchen N=2 | Accomplished Tasks | 1.12 | 8 |
| Robotic Manipulation | FrankaKitchen N=4 | Accomplished Tasks | 1.22 | 8 |
| Robotic Manipulation | FrankaKitchen N=7 | Accomplished Tasks | 1.56 | 8 |
| Continuous Control | MuJoCo Hopper v5 | Average Return | 3.50e+3 | 8 |
| Continuous Control | MuJoCo Walker2d v5 | Max Average Return | 4.91e+3 | 8 |
| Continuous Control | MuJoCo Humanoid v5 | Maximum Average Return | 5.10e+3 | 8 |

Showing 10 of 15 rows.
