ReasoningBank: Scaling Agent Self-Evolving with Reasoning Memory

About

With the growing adoption of large language model agents in persistent real-world roles, they naturally encounter continuous streams of tasks. A key limitation, however, is their failure to learn from accumulated interaction history, forcing them to discard valuable insights and repeat past errors. We propose ReasoningBank, a novel memory framework that distills generalizable reasoning strategies from an agent's self-judged successful and failed experiences. At test time, an agent retrieves relevant memories from ReasoningBank to inform its interaction and then integrates new learnings back, enabling it to become more capable over time. Building on this powerful experience learner, we further introduce memory-aware test-time scaling (MaTTS), which accelerates and diversifies this learning process by scaling up the agent's interaction experience. By allocating more compute to each task, the agent generates abundant, diverse experiences that provide rich contrastive signals for synthesizing higher-quality memory. Better memory, in turn, guides more effective scaling, establishing a powerful synergy between memory and test-time scaling. Across web browsing and software engineering benchmarks, ReasoningBank consistently outperforms existing memory mechanisms that store raw trajectories or only successful task routines, improving both effectiveness and efficiency; MaTTS further amplifies these gains. These findings establish memory-driven experience scaling as a new scaling dimension, enabling agents to self-evolve, with emergent behaviors arising naturally. Our code can be found at https://github.com/google-research/reasoning-bank.
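To make the loop concrete, below is a minimal, self-contained Python sketch of the cycle the abstract describes: retrieve relevant memories, act, self-judge the outcome, and distill new strategies back into the bank, with MaTTS sketched here as best-of-N parallel rollouts. All names (MemoryItem, ReasoningBank, distill_strategy, run_agent, solve_with_matts) and the stubbed-out LLM calls are hypothetical illustrations, not the authors' API; the actual implementation is in the linked repository.

```python
# Hypothetical sketch of the ReasoningBank + MaTTS loop; names and stubs
# are illustrative stand-ins, not the authors' implementation.
from dataclasses import dataclass


@dataclass
class MemoryItem:
    strategy: str        # distilled, generalizable reasoning strategy
    from_success: bool   # distilled from a self-judged success or failure


def distill_strategy(trajectory: str, success: bool) -> str:
    """Stub: a real system would prompt an LLM to abstract a reusable
    strategy from the trajectory; failures yield 'what to avoid' lessons."""
    tag = "do" if success else "avoid"
    return f"[{tag}] lesson from: {trajectory[:60]}"


def run_agent(task: str, memories: list[MemoryItem]) -> tuple[str, bool]:
    """Stub: a real agent would condition its policy on retrieved memories
    and self-judge the result. Returns (trajectory, judged_success)."""
    hints = "; ".join(m.strategy for m in memories)
    return f"solve({task}) with hints [{hints}]", True


class ReasoningBank:
    def __init__(self) -> None:
        self.items: list[MemoryItem] = []

    def retrieve(self, task: str, k: int = 3) -> list[MemoryItem]:
        # Toy relevance score (word overlap with the task); a real system
        # would rank by embedding similarity.
        words = set(task.lower().split())
        scored = sorted(
            self.items,
            key=lambda m: len(words & set(m.strategy.lower().split())),
            reverse=True,
        )
        return scored[:k]

    def integrate(self, trajectory: str, judged_success: bool) -> None:
        # Both successes and failures are distilled into memory.
        self.items.append(
            MemoryItem(distill_strategy(trajectory, judged_success), judged_success)
        )


def solve_with_matts(task: str, bank: ReasoningBank, n_rollouts: int = 4) -> str:
    """Memory-aware test-time scaling, sketched as best-of-N rollouts whose
    contrasting outcomes all feed back into memory distillation."""
    memories = bank.retrieve(task)
    rollouts = [run_agent(task, memories) for _ in range(n_rollouts)]
    for trajectory, judged_ok in rollouts:
        bank.integrate(trajectory, judged_ok)  # contrastive signal for memory
    trajectory, _ = max(rollouts, key=lambda r: r[1])  # best self-judged run
    return trajectory


if __name__ == "__main__":
    bank = ReasoningBank()
    print(solve_with_matts("book a flight on a travel site", bank))
```

Note the design point the abstract emphasizes: unlike memory mechanisms that store raw trajectories or only successful routines, failed rollouts are also distilled (here as "avoid" lessons), which is what supplies the contrastive signal that makes scaled-up rollouts useful.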

Siru Ouyang, Jun Yan, I-Hung Hsu, Yanfei Chen, Ke Jiang, Zifeng Wang, Rujun Han, Long T. Le, Samira Daruki, Xiangru Tang, Vishy Tirumalashetty, George Lee, Mahsan Rofouei, Hangfei Lin, Jiawei Han, Chen-Yu Lee, Tomas Pfister • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | AIME | AIME Accuracy: 58.7 | 288 |
| Graduate-level Question Answering | GPQA | Accuracy: 65.9 | 184 |
| Visual Question Answering | LiveVQA | Accuracy: 34.2 | 108 |
| Visual Question Answering | SimpleVQA | Accuracy: 0.604 | 99 |
| Visual Question Answering | InfoSeek | Accuracy: 59.5 | 64 |
| Question Answering | MMLU-Pro | Accuracy: 89.1 | 62 |
| Web agent tasks | Mind2Web Cross-Task | Element Accuracy: 53.6 | 57 |
| Multimodal Search | MMSearch | Accuracy: 57.3 | 52 |
| Visual Question Answering | FVQA (test) | Accuracy: 64.7 | 36 |
| Embodied Task Completion | EB-Habitat | Avg Success Rate: 46.4 | 32 |

Showing 10 of 54 rows.
