
Improving MLLMs in Embodied Exploration and Question Answering with Human-Inspired Memory Modeling

About

Deploying Multimodal Large Language Models (MLLMs) as the brain of embodied agents remains challenging, particularly under long-horizon observations and limited context budgets. Existing memory-assisted methods often rely on textual summaries, which discard rich visual and spatial details and remain brittle in non-stationary environments. In this work, we propose a non-parametric memory framework that explicitly disentangles episodic and semantic memory for embodied exploration and question answering. Our retrieval-first, reasoning-assisted paradigm recalls episodic experiences via semantic similarity and verifies them through visual reasoning, enabling robust reuse of past observations without rigid geometric alignment. In parallel, we introduce a program-style rule-extraction mechanism that converts experiences into structured, reusable semantic memory, facilitating cross-environment generalization. Extensive experiments demonstrate state-of-the-art performance on embodied question answering and exploration benchmarks, yielding a 7.3% gain in LLM-Match and an 11.4% gain in LLM-Match×SPL on A-EQA, as well as +7.7% success rate and +6.8% SPL on GOAT-Bench. Analyses reveal that our episodic memory primarily improves exploration efficiency, while semantic memory strengthens the complex reasoning of embodied agents.
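To make the retrieval-first, reasoning-assisted loop concrete, here is a minimal sketch. All names in it are illustrative assumptions, not the authors' implementation: `Memory`, `EpisodicEntry`, and the `vlm.verify`/`vlm.answer` calls are hypothetical stand-ins for the episodic recall, visual verification, and program-style semantic rules the abstract describes.

```python
# Hypothetical sketch of a retrieval-first, reasoning-assisted memory.
# Assumed components: an embedding per observation, cosine-similarity recall,
# and a multimodal model object `vlm` exposing verify()/answer() placeholders.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class EpisodicEntry:
    """One stored experience: an embedding plus the raw observation it indexes."""
    embedding: np.ndarray  # semantic embedding of the observation
    observation: dict      # e.g. {"image_path": ..., "pose": ..., "caption": ...}


@dataclass
class Memory:
    episodic: list[EpisodicEntry] = field(default_factory=list)
    semantic_rules: list[str] = field(default_factory=list)  # program-style rules

    def write_episodic(self, embedding: np.ndarray, observation: dict) -> None:
        self.episodic.append(EpisodicEntry(embedding, observation))

    def recall(self, query_emb: np.ndarray, k: int = 5) -> list[EpisodicEntry]:
        """Retrieval-first step: rank stored episodes by cosine similarity."""
        def cos(a: np.ndarray, b: np.ndarray) -> float:
            return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
        ranked = sorted(self.episodic,
                        key=lambda e: cos(query_emb, e.embedding),
                        reverse=True)
        return ranked[:k]


def answer(query: str, query_emb: np.ndarray, mem: Memory, vlm) -> str:
    """Reasoning-assisted step: verify recalled episodes with visual reasoning
    before reusing them, instead of relying on rigid geometric alignment."""
    for entry in mem.recall(query_emb):
        # `vlm.verify` is a placeholder for a multimodal LLM call that checks
        # whether the recalled observation actually supports the query.
        if vlm.verify(query, entry.observation):
            return vlm.answer(query, entry.observation, rules=mem.semantic_rules)
    # No verified episode: fall back to answering from semantic rules alone.
    return vlm.answer(query, observation=None, rules=mem.semantic_rules)
```

One design point this sketch illustrates: because the memory is non-parametric, reusing experience in a new environment amounts to appending entries and rules rather than updating model weights.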

Ji Li, Jing Xia, Mingyi Li, Shiyan Hu • 2026

Related benchmarks

Task                         | Dataset                                    | Result                       | Rank
Embodied Question Answering  | A-EQA                                      | Object Rec. (LLM-Match): 62  | 15
Lifelong Visual Navigation   | GOAT-Bench 1/10-scale subset (val-unseen)  | Success Rate: 72.8           | 13
