
SWE-smith: Scaling Data for Software Engineering Agents

About

Despite recent progress in Language Models (LMs) for software engineering, collecting training data remains a significant pain point. Existing datasets are small, with at most 1,000s of training instances from 11 or fewer GitHub repositories. The procedures to curate such datasets are often complex, necessitating hundreds of hours of human labor; companion execution environments also take up several terabytes of storage, severely limiting their scalability and usability. To address this pain point, we introduce SWE-smith, a novel pipeline for generating software engineering training data at scale. Given any Python codebase, SWE-smith constructs a corresponding execution environment, then automatically synthesizes 100s to 1,000s of task instances that break existing test(s) in the codebase. Using SWE-smith, we create a dataset of 50k instances sourced from 128 GitHub repositories, an order of magnitude larger than all previous works. We train SWE-agent-LM-32B, achieving a 40.2% Pass@1 resolve rate on the SWE-bench Verified benchmark, state of the art among open-source models. We open source SWE-smith (collection procedure, task instances, trajectories, models) to lower the barrier of entry for research in LM systems for automated software engineering. All assets are available at https://swesmith.com.
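The core idea of the pipeline — mutate a working codebase and keep only mutations that break at least one currently passing test — can be illustrated with a minimal sketch. This is not the authors' implementation: the toy codebase, the `SwapMinMax` mutation, and the helper names (`run_tests`, `synthesize_instance`) are illustrative assumptions; the real system operates on full repositories inside containerized execution environments.

```python
import ast

# Toy "codebase": one function (a stand-in for a real repository).
SOURCE = "def clamp(x, lo, hi):\n    return max(lo, min(x, hi))\n"

def run_tests(source: str) -> bool:
    """Return True if the (toy) test suite passes against `source`.

    A real pipeline would run the repository's test suite in an
    isolated execution environment instead.
    """
    ns: dict = {}
    exec(source, ns)
    clamp = ns["clamp"]
    try:
        assert clamp(5, 0, 10) == 5
        assert clamp(-1, 0, 10) == 0
        assert clamp(99, 0, 10) == 10
        return True
    except AssertionError:
        return False

class SwapMinMax(ast.NodeTransformer):
    """Candidate mutation: swap min/max calls (a classic synthetic bug)."""
    def visit_Name(self, node: ast.Name) -> ast.Name:
        if node.id == "min":
            return ast.copy_location(ast.Name("max", ast.Load()), node)
        if node.id == "max":
            return ast.copy_location(ast.Name("min", ast.Load()), node)
        return node

def synthesize_instance(source: str):
    """Apply a mutation; keep it only as a task instance if it turns
    a passing test suite into a failing one."""
    mutated = ast.unparse(SwapMinMax().visit(ast.parse(source)))
    if run_tests(source) and not run_tests(mutated):
        return {"buggy_code": mutated, "gold_patch": source}
    return None

instance = synthesize_instance(SOURCE)
```

Here the mutated code fails the tests while the original passes, so the pair (buggy code, original code as gold patch) becomes one training instance; scaling this filter over many candidate mutations and many repositories is what yields 100s to 1,000s of instances per codebase.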

John Yang, Kilian Lieret, Carlos E. Jimenez, Alexander Wettig, Kabir Khandpur, Yanzhe Zhang, Binyuan Hui, Ofir Press, Ludwig Schmidt, Diyi Yang • 2025

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Automated Software Engineering | SWE-bench Verified | Resolved Rate | 32.6 | 39 |
| Issue Resolution | SWE-bench Verified (test) | Pass Rate | 40.2 | 36 |
| Automated Software Engineering | SWE-Bench Lite | Resolve Rate | 30.7 | 19 |
| Software Engineering | SWE-bench Verified | Resolution Rate | 0.402 | 9 |
| Code-Intensive Task Generation | GitHub Repositories | Instances Count | 5.01e+4 | 5 |
