
Contextual Experience Replay for Self-Improvement of Language Agents

About

Large language model (LLM) agents have been applied to sequential decision-making tasks such as web navigation, but without any environment-specific experience they often fail in these complex tasks. Moreover, current LLM agents are not designed to continually learn from past experiences at inference time, which could be crucial for acquiring such environment-specific experience. To address this, we propose Contextual Experience Replay (CER), a training-free framework that enables efficient self-improvement for language agents within their context window. Specifically, CER accumulates and synthesizes past experiences into a dynamic memory buffer. These experiences encompass environment dynamics and common decision-making patterns, allowing agents to retrieve relevant knowledge and augment themselves with it on new tasks, enhancing their adaptability in complex environments. We evaluate CER on the challenging WebArena and VisualWebArena benchmarks. On VisualWebArena, CER achieves a competitive success rate of 31.9%. On WebArena, CER likewise reaches a competitive average success rate of 36.7%, a 51.0% relative improvement over the GPT-4o agent baseline. We also conduct comprehensive analyses to validate CER's efficiency and effectiveness and to better understand its behavior.
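The abstract describes CER's core loop: distilled experiences (environment dynamics, decision-making patterns) accumulate in a dynamic memory buffer, and relevant entries are retrieved and prepended to the agent's context for new tasks. The sketch below is a minimal, hypothetical illustration of that idea, not the paper's implementation; the class names, the word-overlap retrieval heuristic, and the prompt layout are all assumptions for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Experience:
    """A distilled piece of knowledge from a past trajectory (hypothetical schema)."""
    task: str     # the task the experience was drawn from
    insight: str  # e.g. an environment dynamic or a decision-making pattern

@dataclass
class CERMemory:
    """Toy dynamic memory buffer: accumulate experiences, retrieve relevant ones."""
    buffer: list = field(default_factory=list)

    def add(self, task: str, insight: str) -> None:
        self.buffer.append(Experience(task, insight))

    def retrieve(self, new_task: str, k: int = 2) -> list:
        # Stand-in retrieval: rank stored experiences by word overlap with the
        # new task description. A real system would use embeddings or an LLM.
        words = set(new_task.lower().split())
        scored = sorted(
            self.buffer,
            key=lambda e: len(words & set(e.task.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def augment_prompt(self, new_task: str) -> str:
        # Prepend retrieved insights to the task as in-context guidance.
        insights = [e.insight for e in self.retrieve(new_task)]
        return "\n".join(["Relevant past experience:"] + insights
                         + ["Task: " + new_task])

# Example usage with made-up web-navigation experiences:
mem = CERMemory()
mem.add("buy a red backpack on the shopping site",
        "use the search bar before filtering by color")
mem.add("post a comment on the forum",
        "you must log in before the comment box appears")
prompt = mem.augment_prompt("buy a blue backpack on the shopping site")
```

Because CER is training-free, all adaptation happens through this kind of context augmentation: the buffer grows during inference and no model weights are updated.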

Yitao Liu, Chenglei Si, Karthik Narasimhan, Shunyu Yao • 2025

Related benchmarks

Task | Dataset | Result | Rank
Web navigation and task completion | WebArena (test) | Average Task Completion: 36.7 | 42
Autonomous Web Navigation | VisualWebArena latest (test) | Success Rate (Classifieds): 27 | 8
