Simulating Environments with Reasoning Models for Agent Training
About
LLM agents excel in compact environments requiring deep reasoning but remain brittle when operating in broader, more complex contexts that demand robustness across diverse tools and schemas. Building bespoke environments for training is heavy, brittle, and limits progress. In this paper, we demonstrate that LLMs can simulate realistic environment feedback without access to actual testbed data or APIs. Building on this capability, we propose two frameworks: Simia-SFT, a pipeline that synthesizes SFT data by amplifying small seed sets into diverse trajectories in an environment-agnostic manner, and Simia-RL, a framework that enables RL training without real environment implementations through LLM-simulated feedback. Fine-tuning open models yields consistent improvements across multiple benchmarks, surpassing GPT-4o and approaching o4-mini on τ²-Bench. Together, Simia-SFT and Simia-RL enable scalable agent training without environment engineering, replacing heavy and brittle implementations with flexible LLM-based simulation.
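The core Simia-RL idea, replacing a real environment implementation with an LLM that plays the environment, can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the prompt format, the `llm` callable, and the JSON observation/reward protocol are all assumptions made for the example.

```python
import json
from typing import Callable, Dict, List

def simulate_step(llm: Callable[[str], str],
                  tool_schema: Dict,
                  history: List[Dict],
                  action: Dict) -> Dict:
    """Ask an LLM to act as the environment: given a tool schema, the
    trajectory so far, and the agent's tool call, it returns a plausible
    observation plus a scalar reward (illustrative JSON protocol)."""
    prompt = (
        "You are simulating an API environment.\n"
        f"Tool schema: {json.dumps(tool_schema)}\n"
        f"History so far: {json.dumps(history)}\n"
        f"Agent action: {json.dumps(action)}\n"
        'Reply with JSON: {"observation": ..., "reward": <float>}'
    )
    reply = json.loads(llm(prompt))
    return {"observation": reply["observation"],
            "reward": float(reply["reward"])}

# Usage with a stub in place of a real reasoning-model call:
stub_llm = lambda prompt: '{"observation": {"status": "booked"}, "reward": 1.0}'
step = simulate_step(
    stub_llm,
    tool_schema={"name": "book_flight", "args": ["origin", "dest"]},
    history=[],
    action={"tool": "book_flight", "args": {"origin": "SFO", "dest": "JFK"}},
)
print(step["reward"])  # -> 1.0
```

Because the "environment" is just a prompted model behind a callable, the same loop can serve any tool schema without per-domain engineering, which is the point of the framework.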
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Function Calling | BFCL V3 | Overall Accuracy | 67.68 | 88 |
| Tool Use | BFCL Multi-turn | Accuracy | 23.22 | 24 |
| Tool Use | Tau-Bench | TAU-AIR Score | 52 | 14 |
| Agentic Workflow Success | τ2-bench | Airline Success Rate | 34 | 13 |
| Agentic Task Success | MCP-Universe | Location Success Score | 5.71 | 11 |
| Coding Agent | RebenchT | OH-p@1 | 21.39 | 5 |
| Coding Agent | CodeCI | Avg@2 | 30.86 | 5 |
| Coding Agent | Bird | Pass@1 | 31.16 | 5 |
| Coding Agent | Aggregated (RebenchT, CodeCI, Bird) | Overall Average Score | 22.81 | 5 |