
Diffusion Trajectory-guided Policy for Long-horizon Robot Manipulation

About

Vision-Language-Action (VLA) models have recently advanced robot imitation learning, but high data-collection costs and limited demonstrations hinder generalization, and current imitation-learning methods struggle in out-of-distribution scenarios, especially for long-horizon tasks. A key challenge is mitigating compounding errors in imitation learning, which lead to cascading failures over extended trajectories. To address these challenges, we propose the Diffusion Trajectory-guided Policy (DTP) framework, which generates 2D trajectories with a diffusion model to guide policy learning for long-horizon tasks. By leveraging task-relevant trajectories, DTP provides trajectory-level guidance that reduces error accumulation. Our two-stage approach first trains a generative vision-language model to produce diffusion-based trajectories, then uses them to refine the imitation policy. On the CALVIN benchmark, DTP outperforms state-of-the-art baselines by 25% in success rate when trained from scratch without external pretraining. Moreover, DTP significantly improves real-world robot performance.

Shichao Fan, Quantao Yang, Yajie Liu, Kun Wu, Zhengping Che, Qingjie Liu, Min Wan • 2025
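The abstract's two-stage idea can be illustrated with a deliberately simplified sketch: stage 1 "denoises" random 2D waypoints into a guidance trajectory (a toy stand-in for the learned diffusion model), and stage 2 runs a policy that steers toward the next waypoint, so the rollout stays near the guidance path. All names and the denoising rule here are illustrative assumptions, not the paper's actual architecture or API.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(traj, goal, t, num_steps):
    """One toy 'denoising' step: pull noisy 2D waypoints toward a
    straight line to the goal (stand-in for a learned diffusion model)."""
    n = len(traj)
    target = np.linspace([0.0, 0.0], goal, n)  # idealized trajectory
    alpha = 1.0 / (num_steps - t)              # stronger pull near the end
    return traj + alpha * (target - traj)

def generate_trajectory(goal, n_waypoints=8, num_steps=10):
    """Stage 1: sample a 2D guidance trajectory by iterative denoising,
    starting from pure noise."""
    traj = rng.normal(size=(n_waypoints, 2))
    for t in range(num_steps):
        traj = denoise_step(traj, np.asarray(goal), t, num_steps)
    return traj

def policy_action(obs, traj):
    """Stage 2: a trivial policy that steers toward the next upcoming
    waypoint of the guidance trajectory."""
    dists = np.linalg.norm(traj - obs, axis=1)
    target = traj[min(int(np.argmin(dists)) + 1, len(traj) - 1)]
    step = target - obs
    return step / (np.linalg.norm(step) + 1e-8)  # unit-norm action

# Rollout: trajectory-level guidance keeps the policy near the intended
# path, which is the intuition behind mitigating compounding errors.
goal = np.array([1.0, 1.0])
traj = generate_trajectory(goal)
obs = np.zeros(2)
for _ in range(30):
    obs = obs + 0.1 * policy_action(obs, traj)
print("final position:", np.round(obs, 2))
```

In the actual framework the trajectory generator and the policy are both learned from demonstrations; the point of the sketch is only the interface between the stages: a generated 2D trajectory conditions the low-level policy.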

Related benchmarks

| Task                            | Dataset                                          | Metric                  | Result | Rank |
|---------------------------------|--------------------------------------------------|-------------------------|--------|------|
| Long-horizon robot manipulation | CALVIN ABCD→D                                    | Task 1 Completion Rate  | 89     | 96   |
| Robotic Manipulation            | CALVIN D→D                                       | Success Rate (Length 1) | 92.4   | 12   |
| Robot Manipulation              | CALVIN 10% ABCD→D                                | Success Rate (L=1)      | 81.3   | 11   |
| Robot Manipulation              | Franka Real-world Individual Tasks (real-robot)  | Pick Bread Success Rate | 80     | 4    |
| Long-horizon robot manipulation | Franka Real-world Long-horizon Tasks (real-robot)| Average Length          | 4.6    | 2    |
