
Goal-Conditioned Imitation Learning using Score-based Diffusion Policies

About

We propose a new policy representation based on score-based diffusion models (SDMs). We apply our new policy representation in the domain of Goal-Conditioned Imitation Learning (GCIL) to learn general-purpose goal-specified policies from large uncurated datasets without rewards. Our new goal-conditioned policy architecture "$\textbf{BE}$havior generation with $\textbf{S}$c$\textbf{O}$re-based Diffusion Policies" (BESO) leverages a generative, score-based diffusion model as its policy. BESO decouples the learning of the score model from the inference sampling process and hence allows for fast sampling strategies, generating goal-specified behavior in just 3 denoising steps, compared to the 30+ steps required by other diffusion-based policies. Furthermore, BESO is highly expressive and can effectively capture the multi-modality present in the solution space of the play data. Unlike previous methods such as Latent Plans or C-Bet, BESO does not rely on complex hierarchical policies or additional clustering for effective goal-conditioned behavior learning. Finally, we show how BESO can even be used to learn a goal-independent policy from play data using classifier-free guidance. To the best of our knowledge, this is the first work that a) represents a behavior policy based on such a decoupled SDM, b) learns an SDM-based policy in the domain of GCIL, and c) provides a way to simultaneously learn a goal-dependent and a goal-independent policy from play data. We evaluate BESO in detailed simulation experiments and show that it consistently outperforms several state-of-the-art goal-conditioned imitation learning methods on challenging benchmarks. We additionally provide extensive ablation studies and experiments to demonstrate the effectiveness of our method for goal-conditioned behavior generation. Demonstrations and code are available at https://intuitive-robots.github.io/beso-website/
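The abstract's two key ideas, few-step denoising and classifier-free guidance for goal-independent behavior, can be illustrated with a minimal sketch. This is not the paper's implementation: the score functions below are toy analytic stand-ins for trained networks, and all names (`score_cond`, `cfg_score`, the noise schedule, the guidance weight) are assumptions for illustration only.

```python
import numpy as np

# Toy conditional score: pulls the action toward the goal.
# In BESO this would be a trained goal-conditioned score network.
def score_cond(a, goal, sigma):
    return (goal - a) / (sigma ** 2 + 1.0)

# Toy unconditional score: a stand-in for the goal-independent score model.
def score_uncond(a, sigma):
    return -a / (sigma ** 2 + 1.0)

# Classifier-free guidance: blend the unconditional and conditional scores,
# with w controlling how strongly the goal steers the sample.
def cfg_score(a, goal, sigma, w):
    su = score_uncond(a, sigma)
    sc = score_cond(a, goal, sigma)
    return su + w * (sc - su)

def sample_action(goal, n_steps=3, w=2.0, seed=0):
    # Few-step annealed denoising (3 steps, matching the budget claimed in
    # the abstract). The schedule and step sizes here are illustrative.
    rng = np.random.default_rng(seed)
    a = rng.normal(size=goal.shape)           # start from Gaussian noise
    sigmas = np.linspace(1.0, 0.05, n_steps)  # decreasing noise levels
    for sigma in sigmas:
        a = a + (sigma ** 2) * cfg_score(a, goal, sigma, w)
    return a

goal = np.array([0.5, -0.3])
action = sample_action(goal)
```

Setting `w = 0` recovers purely goal-independent sampling from the unconditional score, which mirrors how a single model can serve both roles.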

Moritz Reuss, Maximilian Li, Xiaogang Jia, Rudolf Lioutikov • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
scene-play | OGBench 100% offline dataset | Success Rate | 81 | 12
antsoccer-medium-navigate | OGBench 100% offline dataset | Success Rate | 12 | 12
antmaze-medium-navigate | OGBench 100% offline dataset | Success Rate | 85 | 12
antsoccer-arena-navigate | OGBench 100% offline dataset | Success Rate | 56 | 12
cube-single-play | OGBench 100% offline dataset | Success Rate | 0.21 | 12
Goal-conditioned policy learning | Block Push state-based | Performance | 96 | 4
Goal-conditioned policy learning | Relay Kitchen state-based | Performance | 3.73 | 4
Imitation Learning | Aligning | Success Rate | 85.417 | 4
