
Planning with Large Language Models for Code Generation

About

Existing large language model-based code generation pipelines typically use beam search or sampling algorithms during the decoding process. Although the programs they generate achieve high token-matching-based scores, they often fail to compile or produce incorrect outputs. The main reason is that conventional Transformer decoding algorithms may not be the best choice for code generation. In this work, we propose a novel Transformer decoding algorithm, Planning-Guided Transformer Decoding (PG-TD), that uses a planning algorithm to perform lookahead search and guide the Transformer to generate better programs. Specifically, instead of simply optimizing the likelihood of the generated sequences, the Transformer makes use of a planner to generate candidate programs and test them on public test cases. The Transformer can therefore make more informed decisions and generate tokens that will eventually lead to higher-quality programs. We also design a mechanism that shares information between the Transformer and the planner to make our algorithm computationally efficient. We empirically evaluate our framework with several large language models as backbones on public coding challenge benchmarks, showing that 1) it can generate programs that consistently achieve higher performance compared with competing baseline methods; and 2) it enables controllable code generation, such as concise code and highly commented code, by optimizing a modified objective.
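The core control flow described above can be illustrated with a toy sketch. Everything here is a hypothetical stand-in, not the paper's implementation: `toy_lm_next_tokens` replaces the Transformer's top-k proposals, `run_public_tests` replaces executing candidate programs against public test cases (here it just scores character overlap with a pretend target program), and the rollout is a greedy default policy rather than the paper's MCTS-style planner. The sketch only shows the idea of committing to each next token by the reward of a completed lookahead rollout instead of by likelihood.

```python
# Toy sketch of planning-guided decoding (hypothetical stand-ins throughout).
VOCAB = ["a", "b", "c", "<eos>"]
MAX_LEN = 4

def toy_lm_next_tokens(prefix):
    """Stand-in for the Transformer: a real LM would return top-k tokens by likelihood."""
    return VOCAB

def run_public_tests(program):
    """Stand-in reward: fraction of positions matching a pretend target program,
    normalized by the longer length so overlong programs are penalized.
    In PG-TD this would be the pass rate on real public test cases."""
    target = "abc"
    matches = sum(1 for i, ch in enumerate(target)
                  if i < len(program) and program[i] == ch)
    return matches / max(len(program), len(target))

def rollout(prefix):
    """Greedy completion of a partial program (the planner's default policy)."""
    prog = list(prefix)
    while "<eos>" not in prog and len(prog) < MAX_LEN:
        prog.append(toy_lm_next_tokens(prog)[0])
    return [t for t in prog if t != "<eos>"]

def planning_guided_decode():
    """At each step, score every candidate next token by the test-case reward
    of a completed rollout, then commit to the best-scoring token."""
    prefix = []
    while "<eos>" not in prefix and len(prefix) < MAX_LEN:
        scored = []
        for tok in toy_lm_next_tokens(prefix):
            completion = rollout(prefix + [tok])
            scored.append((run_public_tests("".join(completion)), tok))
        best_reward, best_tok = max(scored)
        prefix.append(best_tok)
    return "".join(t for t in prefix if t != "<eos>")
```

With this reward, the decoder recovers the pretend target `"abc"` even though plain greedy decoding under `toy_lm_next_tokens` would emit `"aaaa"` — the lookahead reward, not the proposal order, drives token selection.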

Shun Zhang, Zhenfang Chen, Yikang Shen, Mingyu Ding, Joshua B. Tenenbaum, Chuang Gan• 2023

Related benchmarks

Task                         Dataset                        Metric            Result   Rank
Mathematical Reasoning       GSM8K                          Accuracy          95       1362
Scientific Reasoning         GPQA                           Accuracy          65.1     75
Troop placement prediction   Risk                           EMD               0.76     66
Code Generation              CodeContests (test)            Pass@1            67       48
Mathematical Reasoning       MATH 500                       Accuracy (avg@4)  82.2     30
Code Generation              HumanEval 2021 (test)          Accuracy          89.02    21
CAD Generation               CADPrompt                      HD                0.13     18
Code Generation              OJBench ICPC 2025 (test)       Accuracy          10.59    18
Code Generation              MBPP 2021 (test)               Accuracy          83.72    18
Code Generation              LiveCodeBench lite v5 (test)   Accuracy          38.12    18
