
Agent KB: Leveraging Cross-Domain Experience for Agentic Problem Solving

About

AI agent frameworks operate in isolation, forcing agents to rediscover solutions and repeat mistakes across different systems. Despite the valuable problem-solving experience accumulated by frameworks like smolagents, OpenHands, and OWL, this knowledge remains trapped within individual systems, preventing the emergence of collective intelligence. Current memory systems focus on individual agents or framework-specific demonstrations, failing to enable cross-architecture knowledge transfer. We introduce AGENT KB, a universal memory infrastructure enabling seamless experience sharing across heterogeneous agent frameworks without retraining. AGENT KB aggregates trajectories into a structured knowledge base and serves it through lightweight APIs. At inference time, hybrid retrieval operates in two stages: planning seeds agents with cross-domain workflows, while feedback applies targeted diagnostic fixes. A disagreement gate ensures retrieved knowledge enhances rather than disrupts reasoning, addressing knowledge interference in cross-framework transfer. We validate AGENT KB across major frameworks on GAIA, Humanity's Last Exam, GPQA, and SWE-bench. Results show substantial improvements across diverse model families: compared to baseline pass@1, smolagents with AGENT KB achieves up to 18.7pp gains at pass@3 (55.2% -> 73.9%), while OpenHands improves 4.0pp on SWE-bench pass@1 (24.3% -> 28.3%). Similar improvements are observed across all base model families. Ablations confirm that the hybrid retrieval and feedback stages are essential, and that automatically generated experiences match manual curation. This establishes a foundation for collective agent intelligence through shared memory infrastructures.

Xiangru Tang, Tianrui Qin, Tianhao Peng, Ziyang Zhou, Daniel Shao, Tingting Du, Xinming Wei, Peng Xia, Fang Wu, He Zhu, Ge Zhang, Jiaheng Liu, Xingyao Wang, Sirui Hong, Chenglin Wu, Hao Cheng, Chi Wang, Wangchunshu Zhou • 2025
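The two-stage inference-time pipeline described in the abstract (planning retrieval that seeds the agent with workflows, feedback retrieval that applies diagnostic fixes, and a disagreement gate that filters low-confidence knowledge) can be sketched as follows. This is a minimal illustrative sketch based only on the abstract; the class names, scoring scheme, and gate threshold are assumptions, not the paper's actual API.

```python
# Hypothetical sketch of AGENT KB's two-stage retrieval; all names,
# scores, and thresholds here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class Experience:
    kind: str     # "workflow" (planning stage) or "fix" (feedback stage)
    text: str
    score: float  # hybrid-retrieval relevance score in [0, 1]


class AgentKB:
    def __init__(self, experiences, gate_threshold=0.5):
        self.experiences = experiences
        self.gate_threshold = gate_threshold

    def retrieve(self, kind, top_k=2):
        # Return the top-k experiences of the requested kind by score.
        hits = [e for e in self.experiences if e.kind == kind]
        return sorted(hits, key=lambda e: e.score, reverse=True)[:top_k]

    def gate(self, exp):
        # Disagreement gate: discard retrieved knowledge whose relevance
        # is too low to safely override the agent's own reasoning.
        return exp.score >= self.gate_threshold


def solve(kb, task, run_agent):
    # Stage 1 (planning): seed the agent with cross-domain workflows.
    seeds = [e.text for e in kb.retrieve("workflow") if kb.gate(e)]
    draft = run_agent(task, seeds)
    # Stage 2 (feedback): retry with targeted diagnostic fixes, if any.
    fixes = [e.text for e in kb.retrieve("fix") if kb.gate(e)]
    return run_agent(task, seeds + fixes) if fixes else draft
```

In this sketch the gate is a simple relevance cutoff; the paper's gate is described only as a mechanism that keeps retrieved knowledge from disrupting the agent's reasoning, so any concrete scoring rule shown here is a stand-in.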

Related benchmarks

Task                    | Dataset                | Metric    | Result | Rank
Multimodal Agent Task   | TIR-Bench              | Average@4 | 36.62  | 24
Multimodal Agent Task   | VisualToolBench        | Average@4 | 41.75  | 24
Multimodal Agent Task   | MMSearch+              | Average@4 | 39.81  | 24
Multimodal Agent Task   | AgentVista             | Average@4 | 21.33  | 24
Software Engineering    | SWE-bench Verified     | --        | --     | 18
Code Generation         | ReplicationBench       | Pass@3    | 20     | 13
Visual Tool Reasoning   | VisualToolBench (test) | Average@4 | 12.85  | 12
Multimodal Search       | MMSearch-Plus (test)   | Average@4 | 11.37  | 12
Coding                  | LiveCodeBench v6       | Pass@3    | 92     | 4
