ImagineNav: Prompting Vision-Language Models as Embodied Navigator through Scene Imagination

About

Visual navigation is an essential skill for home-assistance robots, providing the object-searching ability needed to accomplish long-horizon daily tasks. Many recent approaches use Large Language Models (LLMs) for commonsense inference to improve exploration efficiency. However, the planning process of LLMs is confined to text, and text alone struggles to represent spatial occupancy and geometric layout, both of which are important for making rational navigation decisions. In this work, we seek to unleash the spatial perception and planning ability of Vision-Language Models (VLMs), and explore whether a VLM, given only on-board camera RGB/RGB-D stream inputs, can efficiently complete visual navigation tasks in a mapless manner. We achieve this by developing the imagination-powered navigation framework ImagineNav, which imagines future observation images at valuable robot views and translates the complex navigation planning process into a much simpler best-view image selection problem for the VLM. To generate appropriate candidate robot views for imagination, we introduce the Where2Imagine module, which is distilled to align with human navigation habits. Finally, to reach the VLM-preferred views, an off-the-shelf point-goal navigation policy is utilized. Empirical experiments on challenging open-vocabulary object navigation benchmarks demonstrate the superiority of our proposed system.
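The pipeline described above can be sketched as a simple planning loop: Where2Imagine proposes candidate views, each view is rendered as an imagined observation, the VLM picks the most promising one, and that view becomes a point-goal for the low-level policy. The sketch below is illustrative only; all function bodies are placeholder stubs, not the authors' implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class CandidateView:
    position: Tuple[float, float]  # relative (x, y) waypoint in the robot frame
    heading: float                 # relative heading in radians


def where2imagine(rgb_obs) -> List[CandidateView]:
    """Stub for the Where2Imagine module: proposes candidate robot views
    distilled from human navigation habits (placeholder proposals here)."""
    return [
        CandidateView((1.0, 0.0), 0.0),
        CandidateView((0.7, 0.7), 0.785),
        CandidateView((0.7, -0.7), -0.785),
    ]


def imagine_observation(rgb_obs, view: CandidateView) -> dict:
    """Stub for the imagination step: synthesizes the image the robot
    would observe from `view` (no actual rendering here)."""
    return {"view": view, "image": None}


def vlm_select_best(imagined: List[dict], goal_object: str) -> int:
    """Stub for best-view selection: the real system prompts a VLM with
    the imagined images and the goal object; this placeholder returns 0."""
    return 0


def imaginenav_step(rgb_obs, goal_object: str) -> CandidateView:
    """One planning step: propose views, imagine them, let the VLM choose,
    and hand the chosen view to a point-goal navigation policy as its goal."""
    candidates = where2imagine(rgb_obs)
    imagined = [imagine_observation(rgb_obs, v) for v in candidates]
    best = vlm_select_best(imagined, goal_object)
    return candidates[best]
```

In the full system, the returned view would be passed to the off-the-shelf point-goal policy, and the loop would repeat on the new observation until the goal object is found.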

Xinxin Zhao, Wenzhe Cai, Likun Tang, Teng Wang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Goal Navigation | HM3D | Success Rate: 53 | 67 |
| Object Goal Navigation | HM3D v1 (val) | Success Rate (SR): 53 | 44 |
| Object Goal Navigation | HM3D 0.1 | SR: 53 | 35 |
| Embodied Navigation | HSSD | Success Rate: 51 | 7 |
| Object Goal Navigation | HM3D v0.2 (val) | Success Rate (SR): 53 | 6 |
| Object Goal Navigation | HSSD (val) | SR: 51 | 3 |
