MemER: Scaling Up Memory for Robot Control via Experience Retrieval
About
Humans routinely rely on memory to perform tasks, yet most robot policies lack this capability; our goal is to endow robot policies with the same ability. Naively conditioning on long observation histories is computationally expensive and brittle under covariate shift, while indiscriminately subsampling the history retains irrelevant or redundant frames. We propose a hierarchical policy framework in which the high-level policy is trained to select and track relevant keyframes from its past experience. The high-level policy then uses the selected keyframes, together with the most recent frames, to generate text instructions for a low-level policy to execute. This design is compatible with existing vision-language-action (VLA) models and enables the system to efficiently reason over long-horizon dependencies. In our experiments, we finetune Qwen2.5-VL-7B-Instruct and $\pi_{0.5}$ as the high-level and low-level policies respectively, using demonstrations supplemented with minimal language annotations. Our approach, MemER, outperforms prior methods on three real-world long-horizon robotic manipulation tasks that require minutes of memory. Videos and code can be found at https://jen-pan.github.io/memer/.
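To make the division of labor concrete, below is a minimal, hypothetical sketch of the control loop the abstract describes: a high-level step that maintains a small set of keyframes over an experience buffer and emits a text instruction, and a low-level step that consumes that instruction. All names here (`ExperienceBuffer`, `high_level_step`, `low_level_step`) and the placeholder outputs are illustrative assumptions, not the released MemER API; in the paper the high-level policy is a finetuned Qwen2.5-VL-7B-Instruct and the low-level policy is $\pi_{0.5}$, neither of which is invoked here.

```python
"""Minimal sketch of a MemER-style hierarchical control loop.

All classes and functions are illustrative stand-ins, not the MemER API.
"""
from dataclasses import dataclass, field


@dataclass
class Frame:
    """A single observation; `image` stands in for camera pixels."""
    step: int
    image: object  # e.g. an np.ndarray in a real system


@dataclass
class ExperienceBuffer:
    """All frames seen so far, plus indices of currently tracked keyframes."""
    frames: list = field(default_factory=list)
    keyframes: list = field(default_factory=list)

    def add(self, frame: Frame) -> None:
        self.frames.append(frame)

    def recent(self, n: int) -> list:
        return self.frames[-n:]

    def selected(self) -> list:
        return [self.frames[i] for i in self.keyframes]


def high_level_step(buffer: ExperienceBuffer, task: str, n_recent: int = 4):
    """Hypothetical high-level policy call.

    A VLM would consume the tracked keyframes plus the most recent frames
    and return (updated keyframe indices, a text instruction). Both outputs
    are faked here to keep the sketch runnable.
    """
    context = buffer.selected() + buffer.recent(n_recent)
    new_keyframes = buffer.keyframes      # the VLM may add/drop keyframes here
    instruction = f"continue task: {task}"
    return new_keyframes, instruction


def low_level_step(frame: Frame, instruction: str):
    """Hypothetical low-level VLA call: maps (observation, text) -> action."""
    return {"action": "noop", "instruction": instruction}


def run_episode(task: str, horizon: int = 10, high_level_every: int = 5):
    buffer = ExperienceBuffer()
    instruction = task
    for t in range(horizon):
        frame = Frame(step=t, image=None)  # stand-in for a camera read
        buffer.add(frame)
        if t % high_level_every == 0:
            buffer.keyframes, instruction = high_level_step(buffer, task)
        action = low_level_step(frame, instruction)
        # a real system would execute `action` on the robot here


if __name__ == "__main__":
    run_episode("put the mug that was on the left shelf into the bin")
```

The structural point is that the high-level policy's context stays small regardless of episode length: a handful of tracked keyframes plus a short recent window, rather than the full observation history.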
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Reference | RoboMME Reference | Pick HighL Success Rate: 70.67 | 23 |
| Permanence | RoboMME Permanence | Video Umsk Success Rate: 81.33 | 23 |
| Robotic Memory Manipulation | RoboMME Overall | Average Success Rate: 42.38 | 23 |
| Counting | RoboMME Counting | Bin Fill Success Rate: 56.67 | 23 |
| Imitation | RoboMME Imitation | Move Cube Success Rate: 82.67 | 23 |
| Robotic Generalist Policy Execution | MME-VLA 1.0 (test) | Counting Score: 48.83 | 21 |