
Agentic Context Engineering: Evolving Contexts for Self-Improving Language Models

About

Large language model (LLM) applications such as agents and domain-specific reasoning increasingly rely on context adaptation -- modifying inputs with instructions, strategies, or evidence, rather than weight updates. Prior approaches improve usability but often suffer from brevity bias, which drops domain insights for concise summaries, and from context collapse, where iterative rewriting erodes details over time. Building on the adaptive memory introduced by Dynamic Cheatsheet, we introduce ACE (Agentic Context Engineering), a framework that treats contexts as evolving playbooks that accumulate, refine, and organize strategies through a modular process of generation, reflection, and curation. ACE prevents collapse with structured, incremental updates that preserve detailed knowledge and scale with long-context models. Across agent and domain-specific benchmarks, ACE optimizes contexts both offline (e.g., system prompts) and online (e.g., agent memory), consistently outperforming strong baselines: +10.6% on agents and +8.6% on finance, while significantly reducing adaptation latency and rollout cost. Notably, ACE can adapt effectively without labeled supervision, instead leveraging natural execution feedback. On the AppWorld leaderboard, ACE matches the top-ranked production-level agent on the overall average and surpasses it on the harder test-challenge split, despite using a smaller open-source model. These results show that comprehensive, evolving contexts enable scalable, efficient, and self-improving LLM systems with low overhead.

Qizheng Zhang, Changran Hu, Shubhangi Upasani, Boyuan Ma, Fenglu Hong, Vamsidhar Kamanuru, Jay Rainton, Chen Wu, Mengmeng Ji, Hanchen Li, Urmish Thakker, James Zou, Kunle Olukotun • 2025
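
The abstract describes ACE's core loop -- generation, reflection, and curation over an evolving playbook, applied as structured, incremental updates rather than full rewrites. Below is a minimal sketch of what such a loop might look like; every name here (Playbook, ace_step, the llm() stub, apply_delta) is a hypothetical illustration under assumed interfaces, not the paper's actual implementation.

```python
# Hypothetical sketch of an ACE-style generation/reflection/curation loop.
# The class and function names are illustrative assumptions, not the paper's API.
from dataclasses import dataclass, field


def llm(prompt: str) -> str:
    """Placeholder for a call to any chat/completions model."""
    raise NotImplementedError("plug in your model client here")


@dataclass
class Playbook:
    """Evolving context: an ordered collection of strategy 'bullets'."""
    bullets: list[str] = field(default_factory=list)

    def render(self) -> str:
        return "\n".join(f"- {b}" for b in self.bullets)

    def apply_delta(self, additions: list[str], removals: list[int]) -> None:
        # Structured, incremental update: edit individual entries instead of
        # rewriting the whole context, so accumulated detail is preserved.
        for i in sorted(removals, reverse=True):
            if 0 <= i < len(self.bullets):
                del self.bullets[i]
        self.bullets.extend(a for a in additions if a.strip())


def ace_step(playbook: Playbook, task: str) -> str:
    # 1. Generation: attempt the task with the current playbook as context.
    trajectory = llm(f"Playbook:\n{playbook.render()}\n\nTask:\n{task}")

    # 2. Reflection: distill lessons from execution feedback
    #    (no labeled supervision required).
    lessons = llm(f"Trajectory:\n{trajectory}\n\nList concrete, reusable lessons.")

    # 3. Curation: fold the lessons into the playbook as a delta.
    #    This naive version only appends; a real curator would also
    #    deduplicate, merge, and prune entries.
    playbook.apply_delta(additions=lessons.splitlines(), removals=[])
    return trajectory
```

The design point the sketch tries to capture is the delta update: the curator edits individual playbook entries rather than regenerating the whole context, which is the mechanism the abstract credits with preventing context collapse.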

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Agentic task solving | AppWorld | TGC | 44.6 | 28 |
| Text-to-SQL | KaggleDBQA (test) | EA (%) | 54 | 14 |
| Mathematical Reasoning | AIME 2024 | Pass@32 | 80 | 12 |
| Mathematical Reasoning | AIME 2025 | Pass@32 | 67 | 12 |
| Named Entity Recognition | FiNER | Accuracy | 0.71 | 10 |
| Chemical Reaction Prediction | USPTO50k | Accuracy (%) | 18 | 10 |
| Content Moderation | Aegis 2.0 | F1 Score | 68 | 10 |
| Medical Text Classification | Symptom2Disease | Accuracy | 79.2 | 10 |
| Legal Reasoning | LawBench | Micro-F1 | 65 | 10 |
| Legal Question Answering | Japanese Bar Examination 2024 (Reiwa 6) | Overall Accuracy | 40.26 | 9 |

Showing 10 of 11 rows.
