
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models

About

Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation: modifying inputs with instructions, strategies, or evidence, rather than weight updates. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights in favor of concise summaries, and from context collapse, where iterative rewriting erodes details over time. We introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation. ACE prevents collapse with structured, incremental updates that preserve detailed knowledge and scale with long-context models. Across agent and domain-specific benchmarks, ACE optimizes contexts both offline (e.g., system prompts) and online (e.g., agent memory), consistently outperforming strong baselines: +10.6% on agents and +8.6% on finance, while significantly reducing adaptation latency and rollout cost. Notably, ACE adapts effectively without labeled supervision, instead leveraging natural execution feedback. On the AppWorld leaderboard, ACE matches the top-ranked production-level agent on the overall average and surpasses it on the harder test-challenge split, despite using a smaller open-source model. These results show that comprehensive, evolving contexts enable scalable, efficient, and self-improving LLM systems with low overhead.
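The generation–reflection–curation loop described above can be sketched in a few lines. This is a minimal, illustrative skeleton, not the paper's implementation: all class and function names are assumptions, and the LLM calls are replaced with stubs. The key idea it demonstrates is that the playbook grows by incremental, itemized deltas rather than wholesale rewrites, which is how ACE is said to avoid context collapse.

```python
# Minimal ACE-style context-evolution sketch (names are illustrative;
# LLM calls are stubbed out with plain Python functions).

from dataclasses import dataclass, field

@dataclass
class Playbook:
    # The evolving context: a set of itemized bullets that are added to or
    # refined individually, never rewritten as a whole.
    bullets: dict = field(default_factory=dict)
    next_id: int = 0

    def apply(self, delta):
        # Curation step: merge an incremental delta into the playbook.
        for op, bullet_id, text in delta:
            if op == "add":
                self.bullets[self.next_id] = text
                self.next_id += 1
            elif op == "refine":
                self.bullets[bullet_id] = text

    def render(self):
        # Serialize the playbook for inclusion in an LLM prompt.
        return "\n".join(f"- {t}" for t in self.bullets.values())

def generate(task, playbook):
    # Generator: would prompt an LLM with the playbook as context; stubbed.
    return f"trajectory for {task!r} using {len(playbook.bullets)} bullets"

def reflect(trajectory, feedback):
    # Reflector: would distill lessons from execution feedback; stubbed.
    # Note: no labeled supervision is used, only the feedback signal.
    return [("add", None, f"lesson: {feedback}")]

def ace_step(task, feedback, playbook):
    trajectory = generate(task, playbook)
    delta = reflect(trajectory, feedback)
    playbook.apply(delta)  # old bullets are preserved, not overwritten
    return trajectory

pb = Playbook()
ace_step("book a flight", "API needs ISO dates", pb)
ace_step("book a hotel", "retry on HTTP 429", pb)
print(pb.render())
```

After two steps, both lessons coexist in the playbook; a naive "rewrite the whole prompt" strategy could instead summarize one away, which is the brevity-bias failure mode the abstract describes.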

Qizheng Zhang, Changran Hu, Shubhangi Upasani, Boyuan Ma, Fenglu Hong, Vamsidhar Kamanuru, Jay Rainton, Chen Wu, Mengmeng Ji, Hanchen Li, Urmish Thakker, James Zou, Kunle Olukotun • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Agentic task solving | AppWorld | TGC | 44.6 | 28 |
| Financial Analysis | Financial Analysis Benchmark | FiNER Accuracy | 81 | 22 |
| Chemical Reaction Prediction | USPTO50k | Accuracy (%) | 18 | 21 |
| Content Moderation | Aegis 2.0 | F1 Score | 68 | 21 |
| Agentic Tool-use | AppWorld (Challenge) | TGC | 80.7 | 20 |
| Web Browsing | BrowseComp+ (test) | Accuracy | 50 | 20 |
| Agentic Tool-use | AppWorld Normal | Task Goal Completion (TGC) | 86.9 | 20 |
| Reward Modeling Evaluation | RewardBench2 (test) | Accuracy | 79.2 | 20 |
| Task and Scenario Goal Completion | AppWorld normal (test) | Task Goal Completion | 61.3 | 18 |
| Interactive environment task execution | AppWorld normal (test) | Avg@8 Success | 65.8 | 15 |

Showing 10 of 46 rows.
