
FoldAct: Efficient and Stable Context Folding for Long-Horizon Search Agents

About

Long-horizon reinforcement learning (RL) for large language models faces critical scalability challenges from unbounded context growth, motivating context-folding methods that compress interaction history during task execution. However, existing approaches treat summary actions as standard actions, overlooking that summaries fundamentally modify the agent's future observation space: they create a policy-dependent, non-stationary observation distribution that violates core RL assumptions. This introduces three fundamental challenges: (1) gradient dilution, where summary tokens receive insufficient training signal; (2) self-conditioning, where policy updates shift the summary distribution, creating a feedback loop that can collapse training; and (3) computational cost, from processing a unique context at each turn. We introduce FoldAct (code: https://github.com/SHAO-Jiaqi757/FoldAct), a framework that explicitly addresses these challenges through three key innovations: separated loss computation, which gives summary and action tokens independent gradient signals; a full-context consistency loss, which reduces distribution shift; and selective segment training, which reduces computational cost. Our method enables stable training of long-horizon search agents with context folding, addressing the non-stationary observation problem while improving training efficiency with a 5.19× speedup.
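To make the gradient-dilution point concrete, here is a minimal sketch of what "separated loss computation" could look like in practice. This is an illustrative assumption, not the FoldAct implementation: the function names, weights, and the idea of averaging summary-token and action-token losses independently (so the few summary tokens are not drowned out by the much longer action stream) are hypothetical.

```python
def separated_loss(token_losses, is_summary, w_summary=1.0, w_action=1.0):
    """Illustrative sketch (not FoldAct's actual code): average the
    per-token losses of summary tokens and action tokens independently,
    then combine them with separate weights. A single global average
    would dilute the summary signal whenever action tokens dominate."""
    summary_losses = [l for l, s in zip(token_losses, is_summary) if s]
    action_losses = [l for l, s in zip(token_losses, is_summary) if not s]

    def mean(xs):
        # Guard against an empty group (e.g. a turn with no summary action).
        return sum(xs) / len(xs) if xs else 0.0

    return w_summary * mean(summary_losses) + w_action * mean(action_losses)
```

Under a naive global average, 2 summary tokens among 1,000 action tokens contribute ~0.2% of the gradient; the separated form above keeps each group's contribution fixed regardless of how the token counts are skewed.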

Jiaqi Shao, Yufeng Miao, Wei Zhang, Bing Luo • 2025

Related benchmarks

| Task         | Dataset           | Metric   | Result | Rank |
|--------------|-------------------|----------|--------|------|
| Web Research | BrowseComp-EN 200 | Pass@1   | 8.4    | 19   |
| Web Research | BrowseComp-ZH     | Pass@1   | 15.2   | 19   |
| Web Research | xbench DeepSearch | Pass@1   | 35.4   | 18   |
| Local RAG    | PopQA             | F1 Score | 33.3   | 8    |
| Local RAG    | HotpotQA          | F1 Score | 38.5   | 8    |
| Web Search   | GAIA              | Pass@1   | 46.3   | 7    |
| Web Search   | WebWalker         | Pass@1   | 46.1   | 6    |
