
Policy Adaptation from Foundation Model Feedback

About

Recent progress on vision-language foundation models has brought significant advances toward building general-purpose robots. By using the pre-trained models to encode the scene and instructions as inputs for decision making, an instruction-conditioned policy can generalize across different objects and tasks. While this is encouraging, the policy still fails in most cases when given an unseen task or environment. In this work, we propose Policy Adaptation from Foundation model Feedback (PAFF). When deploying the trained policy to a new task or environment, we first let the policy play with randomly generated instructions and record the resulting demonstrations. Although the executions may be wrong, we can use the pre-trained foundation models to provide feedback that relabels the demonstrations. This automatically produces new demonstration-instruction pairs for policy fine-tuning. We evaluate our method on a broad range of experiments focused on generalization to unseen objects, unseen tasks, unseen environments, and sim-to-real transfer. We show that PAFF improves baselines by a large margin in all cases. Our project page is available at https://geyuying.github.io/PAFF/
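The play-relabel-finetune loop described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: `policy`, `relabel_model`, and the function name `paff_adapt` are hypothetical stand-ins for the trained policy rollout and the foundation-model relabeling step.

```python
import random

def paff_adapt(policy, relabel_model, instructions, num_rollouts=4):
    """Hypothetical sketch of the PAFF data-collection step.

    policy:        callable mapping an instruction to a recorded demonstration
                   (the execution may fail the commanded task).
    relabel_model: callable mapping a demonstration to an instruction that
                   describes what the robot actually did (foundation-model
                   feedback).
    Returns a list of (demonstration, instruction) pairs for fine-tuning.
    """
    dataset = []
    for _ in range(num_rollouts):
        instr = random.choice(instructions)   # randomly generated instruction
        demo = policy(instr)                  # let the policy play; may be wrong
        relabeled = relabel_model(demo)       # relabel with the correct description
        dataset.append((demo, relabeled))     # new demonstration-instruction pair
    return dataset
```

After collecting such pairs, the policy would be fine-tuned on them with ordinary imitation learning; the feedback loop requires no human annotation.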

Yuying Ge, Annabella Macaluso, Li Erran Li, Ping Luo, Xiaolong Wang• 2022

Related benchmarks

Task | Dataset | Result | Rank
Long-horizon robot manipulation | CALVIN ABCD→D | Task 1 Completion Rate: 72 | 96
pack-unseen-objects | CLIPORT | Success Rate: 72.8 | 8
put-shapes-in-bowls | CLIPORT | Success Rate: 51 | 8
put-shapes-in-bowls | Real-world Sim-to-real | Success Rate: 82 | 4
pack-blocks | Real-world Sim-to-real | Success Rate: 98 | 2
pack-shapes | Real-world Sim-to-real | Success Rate: 92 | 2
put-blocks-in-bowls | Real-world Sim-to-real | Success Rate: 88 | 2
