
LLM-in-Sandbox Elicits General Agentic Intelligence

About

We introduce LLM-in-Sandbox, which enables LLMs to explore within a code sandbox (i.e., a virtual computer) to elicit general intelligence in non-code domains. We first demonstrate that strong LLMs, without additional training, generalize to leveraging the code sandbox for non-code tasks: for example, they spontaneously access external resources to acquire new knowledge, use the file system to handle long contexts, and execute scripts to satisfy formatting requirements. We further show that these agentic capabilities can be strengthened through LLM-in-Sandbox Reinforcement Learning (LLM-in-Sandbox-RL), which uses only non-agentic data to train models for sandbox exploration. Experiments demonstrate that LLM-in-Sandbox, in both training-free and post-trained settings, generalizes robustly across mathematics, physics, chemistry, biomedicine, long-context understanding, and instruction following. Finally, we analyze LLM-in-Sandbox's efficiency from computational and system perspectives and open-source it as a Python package to facilitate real-world deployment.
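The interaction pattern described above can be sketched as a simple loop: the model either proposes code to run in the sandbox or emits a final answer, and execution output is fed back as the next observation. This is a minimal illustration, not the package's actual API; the `Sandbox` class, `ask_model` callback, and `agent_loop` function are hypothetical names invented for this sketch.

```python
import subprocess
import sys
import tempfile

class Sandbox:
    """Minimal stand-in for a code sandbox: runs a Python snippet in a
    subprocess inside a scratch directory and returns its combined output.
    (Hypothetical interface; the released package may differ.)"""

    def __init__(self):
        self.workdir = tempfile.mkdtemp()

    def run(self, code: str) -> str:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, cwd=self.workdir, timeout=30,
        )
        return proc.stdout + proc.stderr

def agent_loop(ask_model, task: str, max_turns: int = 4) -> str:
    """Alternate between the model and the sandbox until the model answers.
    `ask_model` stands in for an LLM call: given the current observation,
    it returns ("code", snippet) to execute or ("answer", text) to finish."""
    sandbox = Sandbox()
    observation = task
    for _ in range(max_turns):
        kind, payload = ask_model(observation)
        if kind == "answer":
            return payload
        observation = sandbox.run(payload)  # feed execution output back
    return observation  # turn budget exhausted; return last observation

def scripted_model(observation: str):
    """Toy deterministic 'model' used only to exercise the loop."""
    if "Compute" in observation:
        return ("code", "print(sum(range(1, 101)))")
    return ("answer", observation.strip())

result = agent_loop(scripted_model, "Compute the sum of 1..100")
```

The design point the sketch captures is that the model never holds state itself: persistent state (files, downloaded resources, intermediate results) lives in the sandbox's working directory, which is what lets the same loop serve long-context and formatting tasks as well as computation.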

Daixuan Cheng, Shaohan Huang, Yuxian Gu, Huatong Song, Guoxin Chen, Li Dong, Wayne Xin Zhao, Ji-Rong Wen, Furu Wei • 2026

Related benchmarks

Task                          Dataset                         Score   Rank
Mathematical Reasoning        MATH                            53.3    50
BioMedicine                   Biomedicine                     55.8    14
Chemistry                     Chemistry tasks                 84.4    14
Instruction Following         Instruction Following tasks     78.3    14
Long-context Understanding    Long-Context Understanding      66.8    14
Mathematics                   Mathematics tasks               97.9    14
Physics                       Physics tasks                   52.5    14

Other info

GitHub
