
Affordance RAG: Hierarchical Multimodal Retrieval with Affordance-Aware Embodied Memory for Mobile Manipulation

About

In this study, we address the problem of open-vocabulary mobile manipulation, where a robot is required to carry a wide range of objects to receptacles based on free-form natural language instructions. This task is challenging, as it involves understanding both visual semantics and the affordances of manipulation actions. To tackle these challenges, we propose Affordance RAG, a zero-shot hierarchical multimodal retrieval framework that constructs Affordance-Aware Embodied Memory from pre-explored images. The model retrieves candidate targets based on regional and visual semantics and reranks them with affordance scores, allowing the robot to identify manipulation options that are likely to be executable in real-world environments. Our method outperformed existing approaches in retrieval performance for mobile manipulation instructions in large-scale indoor environments. Furthermore, in real-world experiments where the robot performed mobile manipulation in indoor environments based on free-form instructions, the proposed method achieved a task success rate of 85%, outperforming existing methods in both retrieval performance and overall task success.
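The two-stage pipeline described above — retrieve candidates by visual/semantic similarity, then rerank them with affordance scores — can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the embedding format, the affordance-score table, and the blending weight `alpha` are all assumptions introduced here for clarity.

```python
# Hypothetical sketch of retrieve-then-rerank with affordance scores.
# Embeddings, affordance values, and the linear blend are illustrative
# assumptions, not the Affordance RAG implementation.
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two 1-D vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def retrieve_and_rerank(query_emb, memory, affordance, k=5, alpha=0.5):
    """
    memory:     list of (object_id, embedding) pairs from pre-explored images
    affordance: dict mapping object_id -> affordance score in [0, 1]
    k:          number of candidates kept after the first retrieval stage
    alpha:      weight blending similarity with the affordance score
    """
    # Stage 1: retrieve the top-k candidates by embedding similarity.
    candidates = sorted(
        ((oid, cosine_sim(query_emb, emb)) for oid, emb in memory),
        key=lambda x: x[1], reverse=True,
    )[:k]
    # Stage 2: rerank candidates by blending in the affordance score,
    # favoring options more likely to be physically executable.
    return sorted(
        ((oid, (1 - alpha) * sim + alpha * affordance[oid])
         for oid, sim in candidates),
        key=lambda x: x[1], reverse=True,
    )

# Toy usage: a highly similar but low-affordance object can be
# overtaken after reranking by a slightly less similar, graspable one.
memory = [
    ("cup",  np.array([1.0, 0.0])),
    ("book", np.array([0.9, 0.1])),
    ("sofa", np.array([0.0, 1.0])),
]
affordance = {"cup": 0.1, "book": 0.9, "sofa": 0.5}
query = np.array([1.0, 0.0])
ranked = retrieve_and_rerank(query, memory, affordance, k=2, alpha=0.5)
```

In this toy example, "cup" wins the similarity stage, but "book" ranks first after the affordance-aware rerank because its higher affordance score outweighs the small similarity gap.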

Ryosuke Korekata, Quanting Xie, Yonatan Bisk, Komei Sugiura • 2025

Related benchmarks

Task | Dataset | Result | Rank
Multimodal Retrieval | WholeHouse-MM (test) | Target Object Recall@5: 32.8 | 9
Open Vocabulary Mobile Manipulation | Real-world indoor environment, office and kitchen areas (40 trials) | R@5: 94 | 3
