Diffusion Trajectory-guided Policy for Long-horizon Robot Manipulation
About
Vision-Language-Action (VLA) models have recently advanced robot imitation learning, but high data-collection costs and limited demonstrations hinder generalization, and current imitation-learning methods struggle in out-of-distribution scenarios, especially for long-horizon tasks. A key challenge is mitigating compounding errors in imitation learning, which lead to cascading failures over extended trajectories. To address these challenges, we propose the Diffusion Trajectory-guided Policy (DTP) framework, which generates 2D trajectories with a diffusion model to guide policy learning on long-horizon tasks. By leveraging task-relevant trajectories, DTP provides trajectory-level guidance that reduces error accumulation. Our two-stage approach first trains a generative vision-language model to produce diffusion-based trajectories, then uses them to refine the imitation policy. On the CALVIN benchmark, DTP outperforms state-of-the-art baselines by 25% in success rate when trained from scratch without external pretraining, and it also significantly improves real-world robot performance.
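The two-stage idea can be sketched in a few lines. This is a minimal, illustrative stub, not the authors' implementation: the function and variable names (`generate_trajectory`, `policy`, `task_embedding`) are assumptions, the "diffusion" step is replaced by a toy iterative-denoising loop, and the policy is a trivial waypoint follower.

```python
import numpy as np

def generate_trajectory(task_embedding, n_points=8, n_denoise_steps=4, rng=None):
    """Stage 1 (toy stand-in): iteratively 'denoise' a random 2D trajectory
    toward a task-conditioned anchor. The real DTP uses a generative
    vision-language diffusion model conditioned on images and language."""
    rng = rng or np.random.default_rng(0)
    anchor = np.tile(task_embedding[:2], (n_points, 1))   # (n_points, 2)
    traj = rng.normal(size=(n_points, 2))                 # start from pure noise
    for _ in range(n_denoise_steps):
        traj = 0.5 * traj + 0.5 * anchor                  # pull noise toward anchor
    return traj

def policy(observation, trajectory):
    """Stage 2 (toy stand-in): an imitation policy conditioned on the generated
    2D trajectory; here the action just steps toward the first waypoint."""
    return trajectory[0] - observation[:2]

obs = np.zeros(4)                      # hypothetical low-dim observation
task_emb = np.array([1.0, -1.0, 0.0])  # hypothetical task embedding
traj = generate_trajectory(task_emb)
action = policy(obs, traj)             # a 2D action vector
```

The point of the structure, mirroring the paper's pipeline, is that the policy never has to infer the full task from scratch at every step: the trajectory supplies step-by-step guidance, which is what limits error accumulation over long horizons.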
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long-horizon robot manipulation | CALVIN ABCD→D | Task 1 Completion Rate | 89 | 96 |
| Robotic Manipulation | CALVIN D→D | Success Rate (Length 1) | 92.4 | 12 |
| Robot Manipulation | CALVIN 10% ABCD→D | Success Rate (L=1) | 81.3 | 11 |
| Robot Manipulation | Franka Real-world Individual Tasks (real-robot) | Pick Bread Success Rate | 80 | 4 |
| Long-horizon robot manipulation | Franka Real-world Long-horizon Tasks (real-robot) | Average Length | 4.6 | 2 |