
ReMEmbR: Building and Reasoning Over Long-Horizon Spatio-Temporal Memory for Robot Navigation

About

Navigating and understanding complex environments over extended periods of time is a significant challenge for robots. People interacting with a robot may want to ask questions like where something happened, when it occurred, or how long ago it took place, which requires the robot to reason over the long history of its deployment. To address this problem, we introduce Retrieval-augmented Memory for Embodied Robots (ReMEmbR), a system designed for long-horizon video question answering for robot navigation. To evaluate ReMEmbR, we introduce the NaVQA dataset, in which we annotate long-horizon robot navigation videos with spatial, temporal, and descriptive questions. ReMEmbR employs a structured approach with two phases, memory building and querying, leveraging temporal information, spatial information, and images to efficiently handle continuously growing robot histories. Our experiments demonstrate that ReMEmbR outperforms LLM and VLM baselines, achieving effective long-horizon reasoning with low latency. Additionally, we deploy ReMEmbR on a robot and show that our approach can handle diverse queries. The dataset, code, videos, and other material can be found at the following link: https://nvidia-ai-iot.github.io/remembr
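The two-phase design described above (build a growing memory during deployment, then retrieve relevant entries at question time) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `MemoryEntry` fields follow the abstract's description (timestamp, position, caption), while the bag-of-words `embed` function and cosine retrieval are hypothetical stand-ins for the real caption and embedding models.

```python
# Sketch of a retrieval-augmented robot memory, assuming toy embeddings.
from dataclasses import dataclass
from collections import Counter
import math

@dataclass
class MemoryEntry:
    timestamp: float   # seconds since deployment start
    position: tuple    # (x, y) robot pose when the observation was made
    caption: str       # text description of what the robot saw

def embed(text: str) -> Counter:
    # Hypothetical stand-in for a real text embedder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class Memory:
    """Phase 1: append entries as the robot navigates.
    Phase 2: retrieve the entries most relevant to a question."""
    def __init__(self):
        self.entries = []

    def add(self, entry: MemoryEntry):
        self.entries.append(entry)

    def query(self, question: str, k: int = 2):
        q = embed(question)
        return sorted(self.entries,
                      key=lambda e: cosine(q, embed(e.caption)),
                      reverse=True)[:k]

mem = Memory()
mem.add(MemoryEntry(12.0, (0.0, 1.0), "a red fire extinguisher on the wall"))
mem.add(MemoryEntry(95.0, (4.5, 2.0), "an open office door near the kitchen"))
hits = mem.query("where did you see the fire extinguisher", k=1)
print(hits[0].position, hits[0].timestamp)
```

Because each entry carries both a timestamp and a position, the retrieved entries can answer "where", "when", and "how long ago" questions alike, which is the property the memory structure is designed for.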

Abrar Anwar, John Welsh, Joydeep Biswas, Soha Pouya, Yan Chang • 2024

Related benchmarks

Task                  | Dataset                       | Result | Rank
Temporal              | NaVQA, short memory horizon   | SR 100 | 3
Temporal              | NaVQA, long memory horizon    | SR 88  | 3
Multi-modal Reasoning | WH-VQA                        | SR 31  | 3
Spatial               | NaVQA, short memory horizon   | SR 84  | 3
Spatial               | NaVQA, long memory horizon    | SR 49  | 3
Spatial Reasoning     | WH-VQA                        | SR 39  | 3
Textual               | NaVQA, short memory horizon   | SR 62  | 3
Textual               | NaVQA, medium memory horizon  | SR 65  | 3
Textual               | NaVQA, long memory horizon    | SR 50  | 3
Textual Reasoning     | WH-VQA                        | SR 34  | 3

(Showing 10 of 13 rows)
