
Reflective Planning: Vision-Language Models for Multi-Stage Long-Horizon Robotic Manipulation

About

Solving complex long-horizon robotic manipulation problems requires sophisticated high-level planning capabilities, the ability to reason about the physical world, and the ability to reactively select appropriate motor skills. Vision-language models (VLMs) pretrained on Internet data could in principle offer a framework for tackling such problems. However, in their current form, VLMs lack both the nuanced understanding of intricate physics required for robotic manipulation and the ability to reason over long horizons to mitigate compounding errors. In this paper, we introduce a novel test-time computation framework that enhances VLMs' physical reasoning capabilities for multi-stage manipulation tasks. At its core, our approach iteratively improves a pretrained VLM with a "reflection" mechanism: it uses a generative model to imagine future world states, leverages these predictions to guide action selection, and critically reflects on potential suboptimalities to refine its reasoning. Experimental results demonstrate that our method significantly outperforms several state-of-the-art commercial VLMs as well as other post-training approaches such as Monte Carlo Tree Search (MCTS). Videos are available at https://reflect-vlm.github.io.
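The imagine–select–reflect loop described above can be sketched in miniature. The names below (`propose_action`, `imagine`, `reflect`, `reflective_plan`) are illustrative, not the authors' API; a toy one-dimensional task stands in for the VLM policy, the generative world model, and the reflection step.

```python
# Hypothetical sketch of a reflection-style planning loop, assuming a toy
# 1-D state and a perfect world model. In the paper, propose_action would be
# a VLM, imagine a learned generative model, and reflect a VLM critique.

GOAL = 5  # toy goal state (assumption for this sketch)


def propose_action(state):
    """Stand-in for the VLM policy: propose a candidate action."""
    return 1 if state < GOAL else 0


def imagine(state, action):
    """Stand-in for the generative model: predict the next world state."""
    return state + action


def reflect(state, action, imagined):
    """Critique the imagined outcome; veto actions that make no progress."""
    if abs(GOAL - imagined) >= abs(GOAL - state):
        return 0  # imagined state is no closer to the goal: do nothing
    return action


def reflective_plan(state, max_steps=20):
    """Iteratively imagine and reflect before committing to each action."""
    for _ in range(max_steps):
        action = propose_action(state)
        imagined = imagine(state, action)
        action = reflect(state, action, imagined)
        if action == 0:
            break  # reflection found no useful action; stop
        state = imagine(state, action)  # execute (model is exact here)
    return state


print(reflective_plan(0))
```

Starting from state 0, the loop proposes, imagines, and reflects at each step until it reaches the goal state; starting at or past the goal, reflection vetoes the first proposal and the loop terminates immediately.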

Yunhai Feng, Jiaming Han, Zhuoran Yang, Xiangyu Yue, Sergey Levine, Jianlan Luo • 2025

Related benchmarks

Task                          Dataset                        Metric        Result    Rank
Long-horizon household tasks  Behavior-1K                    Fitting       2.12      12
Preparation tasks             Habitat-Matterport 3D (HM3D)   Success Rate  0.00e+0   12
