
MemCtrl: Using MLLMs as Active Memory Controllers on Embodied Agents

About

Foundation models rely on in-context learning for personalized decision making, but the limited size of the context window necessitates memory compression and retrieval systems such as RAG. These systems, however, often treat memory as a large offline store, which is ill-suited to embodied agents that must operate online under strict memory and compute constraints. In this work, we propose MemCtrl, a novel framework that uses Multimodal Large Language Models (MLLMs) to prune memory online. MemCtrl augments MLLMs with a trainable memory head \mu that acts as a gate, determining which observations or reflections to retain, update, or discard during exploration. We evaluate two ways of training \mu, 1) via an offline expert, and 2) via online RL, and observe significant improvement in overall embodied task completion for \mu-augmented MLLMs. In particular, when augmenting two low-performing MLLMs with MemCtrl on multiple subsets of the EmbodiedBench benchmark, we observe an average improvement of around 16%, with over 20% on specific instruction subsets. Finally, we present a qualitative analysis of the memory fragments collected by \mu, noting the superior performance of \mu-augmented MLLMs on long and complex instruction types.
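The gating behavior described above can be sketched as follows. This is a hypothetical illustration, not the paper's implementation: the names (MemoryHead, Fragment), the fixed capacity, and the score-based eviction rule are all assumptions, and the actual \mu is a trainable head learned via an offline expert or online RL rather than a hand-written heuristic.

```python
# Toy sketch of a memory gate in the spirit of MemCtrl's memory head mu.
# All names and the scoring heuristic are illustrative assumptions; the
# real mu is trained (offline expert or online RL), not hand-coded.
from dataclasses import dataclass


@dataclass
class Fragment:
    text: str    # an observation or reflection produced during exploration
    score: float # gate score standing in for mu's learned utility estimate


class MemoryHead:
    """Keeps at most `capacity` fragments under a strict memory budget,
    returning one of the gate decisions: retain, update, or discard."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.memory: list[Fragment] = []

    def step(self, obs: str, score: float) -> str:
        frag = Fragment(obs, score)
        if len(self.memory) < self.capacity:
            self.memory.append(frag)      # budget not exhausted: keep it
            return "retain"
        worst = min(self.memory, key=lambda f: f.score)
        if score > worst.score:
            # overwrite the least useful fragment with the new one
            self.memory[self.memory.index(worst)] = frag
            return "update"
        return "discard"                  # new fragment not worth storing
```

A trained \mu would replace the scalar `score` comparison with a learned decision over the MLLM's observations and reflections, but the retain/update/discard interface is the same.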

Vishnu Sashank Dorbala, Dinesh Manocha • 2026

Related benchmarks

Task                      Dataset     Metric            Result  Rank
Embodied Task Completion  EB-Habitat  Avg Success Rate  33.8    32
Embodied Task Completion  ALFRED EB   Avg Score         32.2    8
