
Self-Edit: Fault-Aware Code Editor for Code Generation

About

Large language models (LLMs) have demonstrated an impressive ability to generate code for competitive programming tasks. However, with a limited sample budget, LLMs still suffer from poor accuracy. Inspired by how human programmers work, we propose a generate-and-edit approach named Self-Edit that uses execution results of LLM-generated code to improve code quality on competitive programming tasks. We execute the generated code on the example test case provided in the question and wrap the execution results into a supplementary comment. Guided by this comment, our fault-aware code editor corrects errors in the generated code. We perform extensive evaluations across two competitive programming datasets with nine different LLMs. Compared to generating directly from LLMs, our approach improves average pass@1 by 89% on APPS-dev, 31% on APPS-test, and 48% on HumanEval over nine popular code generation LLMs with parameter sizes ranging from 110M to 175B. Compared to other post-processing methods, our method demonstrates superior accuracy and efficiency.

Kechi Zhang, Zhuo Li, Jia Li, Ge Li, Zhi Jin • 2023
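The core loop described in the abstract, running generated code on the example test and wrapping the outcome into a supplementary comment for the editor, can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, comment format, and error categories are assumptions.

```python
import subprocess
import sys

def execution_feedback_comment(code: str, test_input: str, expected_output: str) -> str:
    """Run candidate code on an example test case and append the result
    as a supplementary comment, as guidance for a fault-aware editor model.
    A sketch of the generate-and-edit idea; details are illustrative."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            input=test_input,
            capture_output=True,
            text=True,
            timeout=5,
        )
        actual = proc.stdout.strip()
        if proc.returncode != 0:
            feedback = "Runtime error:\n" + proc.stderr.strip()
        elif actual == expected_output.strip():
            feedback = "Passed the example test case."
        else:
            feedback = (
                f"Wrong answer: expected {expected_output.strip()!r}, got {actual!r}"
            )
    except subprocess.TimeoutExpired:
        feedback = "Time limit exceeded on the example test case."
    # The annotated code (original code + execution comment) would then be
    # fed to the fault-aware code editor to produce a corrected version.
    return code + "\n# Execution result:\n# " + feedback.replace("\n", "\n# ")

# Example: a buggy solution whose sum is off by one
buggy = "a, b = map(int, input().split())\nprint(a + b + 1)"
annotated = execution_feedback_comment(buggy, "1 2", "3")
```

Here the wrong-answer message would be attached as a comment, giving the editor model concrete evidence of the fault rather than asking it to guess blindly.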

Related benchmarks

Task             | Dataset      | Result         | Rank
Code Generation  | HumanEval    | Pass@1: 62.2   | 850
Code Generation  | MBPP         | Pass@1: 56.4   | 113
Code Generation  | HumanEval-ET | Pass@1: 54.3   | 75
Code Generation  | MBPP-ET      | Pass@1: 45.9   | 75
Code Debugging   | HumanEval    | Accuracy: 86   | 42
Code Debugging   | MBPP         | Accuracy: 75.8 | 30
