
AutoForge: Automated Environment Synthesis for Agentic Reinforcement Learning

About

Conducting reinforcement learning (RL) in simulated environments offers a cost-effective and highly scalable way to enhance language-based agents. However, previous work has been limited to semi-automated environment synthesis or to tasks lacking sufficient difficulty, offering little breadth or depth. In addition, the instability of simulated users integrated into these environments, along with the heterogeneity across simulated environments, poses further challenges for agentic RL. In this work, we propose: (1) a unified pipeline for automated and scalable synthesis of simulated environments associated with high-difficulty but easily verifiable tasks; and (2) an environment-level RL algorithm that not only effectively mitigates user instability but also performs advantage estimation at the environment level, thereby improving training efficiency and stability. Comprehensive evaluations on agentic benchmarks, including tau-bench, tau2-Bench, and VitaBench, validate the effectiveness of our proposed method. Further in-depth analyses underscore its out-of-domain generalization.
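The abstract describes advantage estimation performed at the environment level rather than per individual rollout. As a rough illustration only (the paper's exact algorithm is not given on this page), one common way to realize this idea is to group rollout rewards by the environment they came from and normalize each reward against its own environment's statistics, so that reward-scale heterogeneity across environments does not dominate the advantage signal. A minimal sketch, assuming GRPO-style per-group normalization and a hypothetical `(env_id, reward)` rollout representation:

```python
# Hedged sketch of environment-level advantage estimation.
# Assumption (not from the source): rollouts are (env_id, reward) pairs,
# and advantages are rewards standardized within each environment group.
from collections import defaultdict

def environment_level_advantages(rollouts, eps=1e-8):
    """rollouts: list of (env_id, reward) pairs.
    Returns advantages aligned with the input order."""
    by_env = defaultdict(list)
    for env_id, reward in rollouts:
        by_env[env_id].append(reward)

    # Per-environment mean and standard deviation.
    stats = {}
    for env_id, rewards in by_env.items():
        mean = sum(rewards) / len(rewards)
        var = sum((r - mean) ** 2 for r in rewards) / len(rewards)
        stats[env_id] = (mean, var ** 0.5)

    # Standardize each rollout's reward within its own environment,
    # so environments with different reward scales contribute comparably.
    return [(r - stats[e][0]) / (stats[e][1] + eps) for e, r in rollouts]
```

Grouping by environment means an easy environment with uniformly high rewards yields near-zero advantages rather than drowning out the learning signal from harder environments; whether the paper uses exactly this normalization is an open question from this listing alone.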

Shihao Cai, Runnan Fang, Jialong Wu, Baixuan Li, Xinyu Wang, Yong Jiang, Liangcai Su, Liwen Zhang, Wenbiao Yin, Zhen Zhang, Fuli Feng, Pengjun Xie, Xiaobin Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Interactive Tool-Use Agent Performance | tau2-Bench | Retail Performance Score | 74.8 | 84 |
| Agent Performance | Tau-Bench | Retail Accuracy | 73.1 | 55 |
| Interactive Tool-Use Agent Performance | VitaBench | Cross Score | 17.5 | 30 |
| Environment Synthesis | Programming-based Environments | Environment Count | 10 | 6 |
