RAP: Retrieval-Augmented Planning with Contextual Memory for Multimodal LLM Agents

About

Owing to recent advancements, Large Language Models (LLMs) can now be deployed as agents for increasingly complex decision-making applications in areas including robotics, gaming, and API integration. However, reflecting past experiences in current decision-making processes, an innate human behavior, continues to pose significant challenges. Addressing this, we propose the Retrieval-Augmented Planning (RAP) framework, designed to dynamically leverage past experiences that correspond to the current situation and context, thereby enhancing agents' planning capabilities. RAP distinguishes itself by its versatility: it excels in both text-only and multimodal environments, making it suitable for a wide range of tasks. Empirical evaluations demonstrate RAP's effectiveness: it achieves state-of-the-art (SOTA) performance in textual scenarios and notably enhances multimodal LLM agents' performance on embodied tasks. These results highlight RAP's potential for advancing the functionality and applicability of LLM agents in complex, real-world applications.
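To make the retrieval idea concrete, below is a minimal Python sketch of retrieval-augmented planning as the abstract describes it: the agent stores past experiences in a contextual memory, retrieves the ones most similar to the current situation via embedding similarity, and supplies them to the planner as in-context examples. All names here (`Experience`, `ContextualMemory`, `embed`, `llm_plan`) are illustrative placeholders, not the paper's actual implementation.

```python
# A minimal sketch of retrieval-augmented planning with contextual memory.
# Assumptions: `embed` maps a string to a 1-D numpy vector, and `llm_plan`
# maps a prompt string to a plan string; both are hypothetical callables.
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Experience:
    observation: str   # situation the agent faced
    plan: str          # plan it executed
    outcome: str       # what happened as a result


@dataclass
class ContextualMemory:
    experiences: list[Experience] = field(default_factory=list)
    _vecs: list[np.ndarray] = field(default_factory=list)

    def add(self, exp: Experience, embed) -> None:
        # Index each experience by an embedding of its observation.
        self.experiences.append(exp)
        self._vecs.append(embed(exp.observation))

    def retrieve(self, query: str, embed, k: int = 3) -> list[Experience]:
        # Return the k stored experiences most similar to the query,
        # ranked by cosine similarity of observation embeddings.
        if not self.experiences:
            return []
        q = embed(query)
        sims = [
            float(q @ v / (np.linalg.norm(q) * np.linalg.norm(v)))
            for v in self._vecs
        ]
        top = np.argsort(sims)[::-1][:k]
        return [self.experiences[i] for i in top]


def plan_with_memory(observation, memory, embed, llm_plan) -> str:
    # Recall experiences whose situations resemble the current one and
    # expose them to the planner as in-context examples.
    recalled = memory.retrieve(observation, embed)
    context = "\n".join(
        f"Past situation: {e.observation}\nPlan: {e.plan}\nOutcome: {e.outcome}"
        for e in recalled
    )
    prompt = f"{context}\n\nCurrent situation: {observation}\nPlan:"
    return llm_plan(prompt)
```

In this sketch, after each episode the agent would call `memory.add(...)` with the executed plan and its outcome, so later retrievals reflect accumulated experience; the paper's framework additionally handles multimodal observations, which this text-only placeholder omits.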

Tomoyuki Kagaya, Thong Jing Yuan, Yuxuan Lou, Jayashree Karlekar, Sugiri Pranata, Akira Kinose, Koki Oguri, Felix Wick, Yang You • 2024

Related benchmarks

Task             Dataset                  Result           Rank
Code Generation  APPS Intermediate        Pass Rate 36.32  32
Code Generation  APPS Introductory        --               21
Code Generation  APPS Competition         --               20
Code Generation  CodeContest (Basic)      Pass Rate 29.54  11
Code Generation  CodeContest (Advanced)   Pass Rate 20.56  11
