VISOR: VIsual Spatial Object Reasoning for Language-driven Object Navigation

About

Language-driven object navigation requires agents to interpret natural language descriptions of target objects, which combine intrinsic and extrinsic attributes for instance recognition and commonsense navigation. Existing methods either (i) use end-to-end trained models with vision-language embeddings, which struggle to generalize beyond training data and lack action-level explainability, or (ii) rely on modular zero-shot pipelines with large language models (LLMs) and open-set object detectors, which suffer from error propagation, high computational cost, and difficulty integrating their reasoning back into the navigation policy. To address these limitations, we propose a compact 3B-parameter Vision-Language-Action (VLA) agent that performs human-like embodied reasoning for both object recognition and action selection, removing the need for stitched multi-model pipelines. Instead of raw embedding matching, our agent employs explicit image-grounded reasoning to directly answer "Is this the target object?" and "Why should I take this action?" The reasoning process unfolds in three stages: "think", "think summary", and "action", yielding improved explainability, stronger generalization, and more efficient navigation. Code and dataset will be made available upon acceptance.
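The staged output ("think", "think summary", "action") suggests a simple structured-decoding interface between the VLA model and the navigation loop. Below is a minimal, hypothetical sketch of parsing such a response; the XML-style tag names and the action vocabulary are assumptions for illustration, not the paper's actual schema.

```python
import re

# Hypothetical three-stage response format; tag names are assumed,
# the paper does not specify its exact output schema.
STAGES = ("think", "think_summary", "action")

def parse_vla_response(text: str) -> dict:
    """Split a structured model response into its three reasoning stages."""
    parsed = {}
    for stage in STAGES:
        match = re.search(rf"<{stage}>(.*?)</{stage}>", text, re.DOTALL)
        parsed[stage] = match.group(1).strip() if match else None
    return parsed

response = (
    "<think>The lamp on the desk has the described metal arm and sits "
    "next to a monitor, so it matches the target instance.</think>"
    "<think_summary>Target confirmed; ahead and slightly left.</think_summary>"
    "<action>turn_left</action>"
)

print(parse_vla_response(response)["action"])  # -> turn_left
```

In such a design, only the "action" field would feed the controller, while "think" and "think summary" remain inspectable, which is one plausible way the approach gains action-level explainability.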

Francesco Taioli, Shiping Yang, Sonia Raychaudhuri, Marco Cristani, Unnat Jain, Angel X. Chang • 2026

Related benchmarks

Task                 Dataset                          Result      Rank
Object Navigation    CoIN-Bench Seen Synonyms (val)   SPL 19.58   13
Object Navigation    OVON unseen (val)                SR 28.48    12
Object Navigation    OVON seen (val)                  SPL 16.68   8
Object Navigation    CoIN-Bench Seen (val)            SPL 10.93   5
Object Navigation    CoIN-Bench Unseen (val)          SPL 8.54    5
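For reference, SR is the success rate and SPL is Success weighted by Path Length, the standard ObjectNav metrics (Anderson et al., 2018):

```latex
\mathrm{SPL} = \frac{1}{N} \sum_{i=1}^{N} S_i \, \frac{\ell_i}{\max(p_i, \ell_i)}
```

where S_i ∈ {0, 1} marks success on episode i, ℓ_i is the shortest-path length from start to goal, and p_i is the length of the path the agent actually took.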
