
Advantage-Guided Diffusion for Model-Based Reinforcement Learning

About

Model-based reinforcement learning (MBRL) with autoregressive world models suffers from compounding errors, whereas diffusion world models mitigate this by generating trajectory segments jointly. However, existing diffusion guides are either policy-only, discarding value information, or reward-based, which becomes myopic when the diffusion horizon is short. We introduce Advantage-Guided Diffusion for MBRL (AGD-MBRL), which steers the reverse diffusion process using the agent's advantage estimates so that sampling concentrates on trajectories expected to yield higher long-term return beyond the generated window. We develop two guides: (i) Sigmoid Advantage Guidance (SAG) and (ii) Exponential Advantage Guidance (EAG). We prove that a diffusion model guided through SAG or EAG performs reweighted sampling of trajectories with weights increasing in the state-action advantage, implying policy improvement under standard assumptions. Additionally, we show that the trajectories generated by AGD-MBRL follow an improved policy (that is, one with higher value) compared to an unguided diffusion model. AGD integrates seamlessly with PolyGRAD-style architectures by guiding the state components while leaving action generation policy-conditioned, and requires no change to the diffusion training objective. On MuJoCo control tasks (HalfCheetah, Hopper, Walker2d, and Reacher), AGD-MBRL improves sample efficiency and final return over PolyGRAD, an online Diffuser-style reward guide, and model-free baselines (PPO/TRPO), in some cases by a factor of 2. These results show that advantage-aware guidance is a simple, effective remedy for short-horizon myopia in diffusion-model MBRL.
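The abstract describes steering the reverse diffusion process with weights that increase in the advantage. As a minimal illustrative sketch (not the paper's implementation; `beta`, `guided_denoise_step`, and the step-size convention are assumptions), a guided denoising update can add the gradient of the log guidance weight, log sigmoid(beta*A) for SAG or log exp(beta*A) = beta*A for EAG, to the model's score via the chain rule:

```python
import math

def sag_log_weight_grad(adv, beta=1.0):
    # SAG: w(A) = sigmoid(beta * A); d/dA log w(A) = beta * sigmoid(-beta * A)
    return beta / (1.0 + math.exp(beta * adv))

def eag_log_weight_grad(adv, beta=1.0):
    # EAG: w(A) = exp(beta * A); d/dA log w(A) = beta (constant in A)
    return beta

def guided_denoise_step(x, score, adv, adv_grad_x, guide="sag", beta=1.0, step=0.1):
    """One illustrative guided reverse-diffusion update on the state components x.

    score:      model score estimate at x (list, same length as x)
    adv:        scalar advantage estimate of the sampled trajectory
    adv_grad_x: gradient of the advantage w.r.t. x (chain-rule term)
    """
    g = sag_log_weight_grad(adv, beta) if guide == "sag" else eag_log_weight_grad(adv, beta)
    # Unguided step follows the score alone; guidance tilts it toward higher advantage.
    return [xi + step * (si + g * gi) for xi, si, gi in zip(x, score, adv_grad_x)]
```

Because the EAG log-weight gradient is constant while the SAG gradient saturates for large advantages, the two guides trade off guidance strength against robustness to advantage-estimate errors.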

Daniele Foffano, Arvid Eriksson, David Broman, Karl H. Johansson, Alexandre Proutiere • 2026

Related benchmarks

Task               | Dataset            | Metric                 | Result  | Rank
Continuous Control | MuJoCo HalfCheetah | Average Reward         | 4.86e+3 | 25
Continuous Control | MuJoCo Reacher     | Average Reward         | -3.87   | 18
Continuous Control | MuJoCo Hopper      | Maximum Average Return | 3.33e+3 | 13
Continuous Control | MuJoCo Walker2d    | Max Return             | 3.84e+3 | 13
