
osmAG-LLM: Zero-Shot Open-Vocabulary Object Navigation via Semantic Maps and Large Language Models Reasoning

About

Recent open-vocabulary robot mapping methods enrich dense geometric maps with pre-trained visual-language features, achieving a high level of detail and guiding robots to objects specified by open-vocabulary language queries. While the scalability of such approaches has received some attention, another fundamental problem is that highly detailed object maps quickly become outdated, since objects are frequently moved. In this work, we develop a mapping and navigation system for object-goal navigation that, from the ground up, accounts for the possibility that a queried object has moved or was never mapped at all. Instead of striving for high-fidelity mapping detail, we take the view that a map's main purpose is to provide environment grounding and context, which we combine with the semantic priors of LLMs to reason about object locations, deploying an active, online approach to navigate to the objects. In simulated and real-world experiments we find that our approach tends to achieve higher retrieval success at shorter path lengths for static objects and by far outperforms prior approaches on dynamic or unmapped object queries. We provide our code and dataset at: https://github.com/xiexiexiaoxiexie/osmAG-LLM.
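The pipeline sketched in the abstract — grounding an open-vocabulary object query against a coarse, area-level semantic map, using an LLM's commonsense priors to rank likely areas, then actively visiting them in order — can be illustrated roughly as follows. This is a minimal sketch, not the authors' implementation: the function names (`build_prompt`, `rank_areas_with_llm`, `navigate`), the prompt format, and the keyword priors standing in for a real LLM call are all hypothetical.

```python
# Hypothetical sketch: combine an area-level semantic map with an LLM's
# commonsense priors to decide which areas to search for a queried object.
# The LLM call is stubbed with a small prior table so the sketch runs offline.

from typing import List

def build_prompt(query: str, areas: List[str]) -> str:
    """Assemble a prompt asking an LLM to rank map areas for an object query."""
    listing = "\n".join(f"- {a}" for a in areas)
    return (
        f"An object-goal query is: '{query}'.\n"
        f"The semantic map contains these areas:\n{listing}\n"
        "Rank the areas from most to least likely to contain the object."
    )

# Offline stand-in for the LLM: commonsense priors over (object, area) pairs.
_PRIORS = {
    ("mug", "kitchen"): 0.9,
    ("mug", "office"): 0.5,
    ("mug", "bathroom"): 0.1,
}

def rank_areas_with_llm(query: str, areas: List[str]) -> List[str]:
    """Return areas sorted by the (stubbed) LLM's likelihood estimate."""
    _ = build_prompt(query, areas)  # a real system would send this to an LLM
    return sorted(areas, key=lambda a: _PRIORS.get((query, a), 0.0), reverse=True)

def navigate(query: str, areas: List[str], found_in: str) -> List[str]:
    """Visit areas in ranked order until the object is detected (active search)."""
    visited = []
    for area in rank_areas_with_llm(query, areas):
        visited.append(area)
        if area == found_in:  # stand-in for an onboard open-vocabulary detector
            break
    return visited

print(navigate("mug", ["bathroom", "office", "kitchen"], found_in="office"))
# → ['kitchen', 'office']
```

Because the map only provides area-level context, this style of search degrades gracefully when an object has been relocated or was never mapped: the robot simply continues down the ranked list instead of trusting a stale high-detail map entry.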

Fujing Xie, Sören Schwertfeger, Hermann Blum • 2025

Related benchmarks

Task                                 | Dataset             | Result                          | Rank
Object Retrieval & Goal Navigation   | HM3D-SEM (test)     | R-RSR: 83                       | 6
Object Retrieval & Goal Navigation   | Real Data static SO | R-RSR: 1                        | 3
Object Retrieval & Goal Navigation   | Real Data relocated RO | R-RSR: 1                     | 3
Object Retrieval & Goal Navigation   | Real Data unmapped UO | R-RSR Success Rate: 1         | 3
Map Representation                   | HM3D-SEM 8 scenes   | Representation Size (MB): 3.2   | 2
Map Representation                   | real dataset        | Representation Size (MB): 0.62  | 2
