
ReasonNavi: Human-Inspired Global Map Reasoning for Zero-Shot Embodied Navigation

About

Embodied agents often struggle with efficient navigation because they rely primarily on partial egocentric observations, which restrict global foresight and lead to inefficient exploration. In contrast, humans plan using maps: we reason globally first, then act locally. We introduce ReasonNavi, a human-inspired framework that operationalizes this reason-then-act paradigm by coupling Multimodal Large Language Models (MLLMs) with deterministic planners. ReasonNavi converts a top-down map into a discrete reasoning space via room segmentation and candidate target node sampling. An MLLM is then queried in a multi-stage process to identify the candidate most consistent with the instruction (object, image, or text goal), effectively leveraging the model's semantic reasoning ability while sidestepping its weakness in continuous coordinate prediction. The selected waypoint is grounded into executable trajectories by a deterministic action planner over an online-built occupancy map, while pretrained object detectors and segmenters ensure robust recognition at the goal. This yields a unified zero-shot navigation framework that requires no MLLM fine-tuning, circumvents the brittleness of RL-based policies, and scales naturally with foundation model improvements. Across three navigation tasks, ReasonNavi consistently outperforms prior methods that demand extensive training or heavy scene modeling, offering a scalable, interpretable, and globally grounded solution to embodied navigation. Project page: https://reasonnavi.github.io/
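The reason-then-act pipeline described above can be sketched in a few lines. This is a minimal toy illustration, not the authors' implementation: `sample_candidates` stands in for room segmentation plus node sampling, the `score_fn` passed to `pick_waypoint` is a hypothetical stand-in for the multi-stage MLLM query, and a BFS over a toy occupancy grid plays the role of the deterministic action planner.

```python
from collections import deque

def sample_candidates(occupancy, stride=2):
    """Sample candidate target nodes on free cells of a top-down map
    (stand-in for room segmentation + candidate node sampling)."""
    h, w = len(occupancy), len(occupancy[0])
    return [(r, c) for r in range(0, h, stride)
                   for c in range(0, w, stride) if occupancy[r][c] == 0]

def pick_waypoint(candidates, score_fn):
    """Discrete selection step: choose the candidate most consistent
    with the goal. `score_fn` is a hypothetical stand-in for the
    multi-stage MLLM query."""
    return max(candidates, key=score_fn)

def bfs_path(occupancy, start, goal):
    """Deterministic planner: shortest 4-connected path on the
    occupancy map (0 = free, 1 = obstacle)."""
    h, w = len(occupancy), len(occupancy[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:                      # reconstruct path back to start
            path = []
            while cur is not None:
                path.append(cur)
                cur = prev[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r+1, c), (r-1, c), (r, c+1), (r, c-1)):
            nr, nc = nxt
            if 0 <= nr < h and 0 <= nc < w and occupancy[nr][nc] == 0 \
               and nxt not in prev:
                prev[nxt] = cur
                queue.append(nxt)
    return None                              # goal unreachable

# Toy 4x4 map: 0 = free, 1 = wall.
grid = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 1, 0, 0]]
cands = sample_candidates(grid)
# Hypothetical scorer: prefer far-from-start nodes, as a proxy for
# "the MLLM judged this node most goal-consistent".
waypoint = pick_waypoint(cands, lambda n: n[0] + n[1])
path = bfs_path(grid, (0, 0), waypoint)
```

The key design point mirrored here is that the language model only ever makes a discrete choice among sampled nodes; all continuous geometry is handled by the deterministic planner.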

Yuzhuo Ao, Anbang Wang, Yu-Wing Tai, Chi-Keung Tang • 2026

Related benchmarks

Task                    Dataset                  Metric              Result   Rank
Object Navigation       HM3D Challenge (test)    Success Rate (SR)   57.9     14
Image-Goal Navigation   HM3D Challenge           Success Rate (SR)   47.8     7
Text-Goal Navigation    HM3D Challenge           Success Rate (SR)   38.8     4
