
VLG-Loc: Vision-Language Global Localization from Labeled Footprint Maps

About

This paper presents Vision-Language Global Localization (VLG-Loc), a novel global localization method that uses human-readable labeled footprint maps containing only the names and areas of distinctive visual landmarks in an environment. While humans naturally localize themselves using such maps, translating this capability to robotic systems remains highly challenging: without geometric or appearance details, it is difficult to establish correspondences between observed landmarks and those in the map. To address this challenge, VLG-Loc leverages a vision-language model (VLM) to search the robot's multi-directional image observations for the landmarks noted in the map. The method then estimates the robot's pose within a Monte Carlo localization framework, using the found landmarks to evaluate the likelihood of each pose hypothesis. Experimental validation in simulated and real-world retail environments demonstrates superior robustness over existing scan-based methods, particularly under environmental changes. Further improvements are achieved through probabilistic fusion of visual and scan-based localization.
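
The following is a minimal sketch of the particle-weighting idea described above, not the authors' implementation: a Monte Carlo localization measurement update in which each pose hypothesis is scored by how well the landmarks a VLM reports seeing agree with a labeled footprint map. All names here (Particle, FOOTPRINT_MAP, detect_landmarks, the bearing-based likelihood) are illustrative assumptions, and the VLM query is stubbed out.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Particle:
    x: float      # position in the map frame (units assumed to be meters)
    y: float
    theta: float  # heading [rad]
    weight: float = 1.0

# Assumed map format: landmark name -> centroid of its labeled area.
FOOTPRINT_MAP = {
    "bakery": (2.0, 5.0),
    "pharmacy": (8.0, 1.5),
    "checkout": (5.0, 9.0),
}

def detect_landmarks(images):
    """Placeholder for the VLM query: return the names of map landmarks the
    model finds in the robot's multi-directional images, together with the
    bearing (in the robot frame) at which each was seen."""
    # A real system would prompt a vision-language model with the landmark
    # names from the map; here we return a canned observation.
    return [("bakery", 0.0), ("checkout", math.pi / 2)]

def landmark_likelihood(p, name, observed_bearing, sigma=0.5):
    """Score one detection: compare the bearing at which the landmark was
    seen with the bearing predicted from the particle's pose."""
    lx, ly = FOOTPRINT_MAP[name]
    predicted = math.atan2(ly - p.y, lx - p.x) - p.theta
    err = math.atan2(math.sin(predicted - observed_bearing),
                     math.cos(predicted - observed_bearing))  # wrap to [-pi, pi]
    return math.exp(-0.5 * (err / sigma) ** 2)

def update_weights(particles, detections):
    """Monte Carlo localization measurement update using VLM detections."""
    for p in particles:
        p.weight = 1.0
        for name, bearing in detections:
            p.weight *= landmark_likelihood(p, name, bearing)
    total = sum(p.weight for p in particles) or 1.0
    for p in particles:
        p.weight /= total  # normalize so the weights form a distribution

# Toy run: scatter pose hypotheses and weight them against one observation.
particles = [Particle(random.uniform(0, 10), random.uniform(0, 10),
                      random.uniform(-math.pi, math.pi))
             for _ in range(500)]
update_weights(particles, detect_landmarks(images=None))
best = max(particles, key=lambda p: p.weight)
print(f"most likely pose: ({best.x:.2f}, {best.y:.2f}, {best.theta:.2f})")
```

The sketch omits the motion update and resampling steps of a full particle filter and assumes a bearing-only measurement model; the paper's fusion with scan-based localization would presumably contribute an additional likelihood factor per particle.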

Mizuho Aoki, Kohei Honda, Yasuhiro Yoshimura, Takeshi Ishita, Ryo Yonetani • 2025

Related benchmarks

Task                | Dataset                      | Metric              | Result | Rank
Global Localization | Simulated Environment UG/UA  | Translational Error | 0.67   | 8
Global Localization | Simulated Environment UG/DA  | Translational Error | 0.21   | 8
Global Localization | Retail Env.                  | Translation Error   | 0.52   | 4
Global Localization | Retail Env. Subst. Appear.   | Translation Error   | 0.18   | 4
