
Computer Environments Elicit General Agentic Intelligence in LLMs

About

Agentic intelligence in large language models (LLMs) requires not only model-intrinsic capabilities but also interactions with external environments. Equipping LLMs with computer access is now a prevailing trend. However, the intrinsic value of the computer environment itself has not been systematically investigated, particularly its potential to elicit general capabilities. Here we introduce LLM-in-Sandbox, which virtualizes the computer as a code sandbox with only basic functionalities, and demonstrate that this minimal setting elicits computer-based meta-capabilities for general task solving: external resource access, file management, and code execution. Without additional training, strong models achieve substantial gains (up to 15.5%) across mathematics, physics, chemistry, biomedicine, long-context understanding, and instruction following, while reducing token consumption by up to 8 times. Furthermore, we develop LLM-in-Sandbox-RL to train models exclusively on non-agentic data within the sandbox, enabling weaker models to harness the environment and internalize these interactions. Our results demonstrate that computer environments elicit general intelligence, yield efficiency gains, and can be harnessed through training, serving as a promising foundation for generalist agents.
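The sandbox described above exposes only generic computer primitives rather than task-specific tools. As a rough illustration of what "file management plus code execution" can look like (a hypothetical sketch, not the authors' implementation; all names here are invented), such an environment can be reduced to a working directory and an executor whose output would be fed back to the model:

```python
import contextlib
import io
import pathlib
import tempfile


class MiniSandbox:
    """Minimal stand-in for a computer environment: a scratch
    directory for file management plus a Python code executor.
    Hypothetical sketch only, not the paper's implementation."""

    def __init__(self) -> None:
        # Isolated working directory for the episode.
        self.root = pathlib.Path(tempfile.mkdtemp(prefix="llm_sandbox_"))

    def write_file(self, name: str, text: str) -> None:
        (self.root / name).write_text(text)

    def read_file(self, name: str) -> str:
        return (self.root / name).read_text()

    def run_code(self, code: str) -> str:
        """Execute model-emitted Python and return captured stdout."""
        buf = io.StringIO()
        with contextlib.redirect_stdout(buf):
            exec(code, {"SANDBOX_ROOT": str(self.root)})
        return buf.getvalue()


# An agent loop would alternate model turns with run_code observations;
# here we only exercise the primitives directly.
sb = MiniSandbox()
sb.write_file("data.txt", "3 4 5")
out = sb.run_code(
    "import pathlib\n"
    "nums = [int(x) for x in "
    "pathlib.Path(SANDBOX_ROOT, 'data.txt').read_text().split()]\n"
    "print(sum(nums))\n"
)
print(out.strip())  # → 12
```

In-process `exec` keeps the sketch short; a real deployment would isolate execution (e.g. a subprocess or container) and would also wire in external resource access, the third meta-capability named in the abstract.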

Daixuan Cheng, Shaohan Huang, Yuxian Gu, Huatong Song, Guoxin Chen, Li Dong, Wayne Xin Zhao, Ji-Rong Wen, Furu Wei · 2026

Related benchmarks

Task                        Dataset                       Score   Rank
Mathematical Reasoning      MATH                          53.3    50
BioMedicine                 Biomedicine                   55.8    14
Chemistry                   Chemistry tasks               84.4    14
Instruction Following       Instruction Following tasks   78.3    14
Long-context Understanding  Long-Context Understanding    66.8    14
Mathematics                 Mathematics tasks             97.9    14
Physics                     Physics tasks                 52.5    14

Other info

GitHub
