
Refining Diffusion Planner for Reliable Behavior Synthesis by Automatic Detection of Infeasible Plans

About

Diffusion-based planning has shown promising results in long-horizon, sparse-reward tasks by training trajectory diffusion models and conditioning the sampled trajectories with auxiliary guidance functions. However, as generative models, diffusion models are not guaranteed to produce feasible plans, which leads to failed execution and precludes their use in safety-critical applications. In this work, we propose a novel approach that refines unreliable plans generated by diffusion models by applying refining guidance to error-prone plans. To this end, we introduce a new metric, the restoration gap, for evaluating the quality of individual plans generated by the diffusion model. The restoration gap is estimated by a gap predictor, which produces restoration gap guidance to refine the diffusion planner. We additionally present an attribution map regularizer that prevents adversarial refining guidance arising from a sub-optimal gap predictor, enabling further refinement of infeasible plans. We demonstrate the effectiveness of our approach on three benchmarks in offline control settings that require long-horizon planning. We also show that our approach offers explainability: the attribution maps of the gap predictor highlight error-prone transitions, allowing for a deeper understanding of the generated plans.
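The core refinement idea described above can be illustrated with a minimal sketch: treat a sampled plan as a trajectory array, score it with a gap predictor, and nudge it down the predictor's gradient so the predicted restoration gap shrinks. The sketch below is not the authors' implementation; `toy_gap_predictor`, its analytic gradient, and `refine_plan` are hypothetical stand-ins (a real gap predictor would be a learned network, and the guidance would be injected into the diffusion sampling loop).

```python
import numpy as np

def toy_gap_predictor(plan, feasible_ref):
    # Hypothetical stand-in for the learned gap predictor: predicts a
    # larger "restoration gap" for plans farther from a feasible reference.
    return 0.5 * np.sum((plan - feasible_ref) ** 2)

def gap_gradient(plan, feasible_ref):
    # Analytic gradient of the toy predictor above; in the paper's setting
    # this would come from backpropagating through the gap predictor.
    return plan - feasible_ref

def refine_plan(plan, feasible_ref, step_size=0.1, n_steps=50):
    # Gradient-based refining guidance: repeatedly nudge the sampled plan
    # toward a lower predicted restoration gap.
    for _ in range(n_steps):
        plan = plan - step_size * gap_gradient(plan, feasible_ref)
    return plan

rng = np.random.default_rng(0)
horizon, state_dim = 32, 4          # toy plan: 32 timesteps, 4-dim states
feasible_ref = np.zeros((horizon, state_dim))
noisy_plan = rng.normal(size=(horizon, state_dim))

before = toy_gap_predictor(noisy_plan, feasible_ref)
refined = refine_plan(noisy_plan, feasible_ref)
after = toy_gap_predictor(refined, feasible_ref)
```

In this toy setting the refinement strictly reduces the predicted gap; the paper's attribution map regularizer addresses the case where a sub-optimal predictor would otherwise produce adversarial guidance.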

Kyowoon Lee, Seongun Kim, Jaesik Choi · 2023

Related benchmarks

Task                            Dataset                           Metric            Result   Rank
Offline Reinforcement Learning  D4RL walker2d-medium-expert       Normalized Score  107.8    124
Locomotion                      D4RL walker2d-medium-expert       Normalized Score  107.8    63
Locomotion                      D4RL HalfCheetah Medium-Replay    Normalized Score  0.41     61
Locomotion                      D4RL Walker2d medium              Normalized Score  81.7     60
Locomotion                      D4RL Halfcheetah medium           Normalized Score  44       60
Locomotion                      D4RL halfcheetah-medium-expert    Normalized Score  90.8     53
Offline Reinforcement Learning  D4RL Hopper medium                Reward            84.9     35
Offline Reinforcement Learning  D4RL hopper medium-replay         Reward            95.8     30
Locomotion                      D4RL Hopper medium                Normalized Score  82.5     30
Offline Reinforcement Learning  D4RL Halfcheetah medium           Reward            44.2     28

(Showing 10 of 29 rows)
