
Selective Prompt Anchoring for Code Generation

About

Recent advances in large language models (LLMs) have transformed software development by automatically generating code from natural language. Yet challenges remain in generating fully correct code that aligns with user intent. Our study reveals that LLMs tend to pay less attention to user prompts as more code tokens are generated. We hypothesize that this attention dilution issue is an important reason for code generation errors. To mitigate this issue, we propose Selective Prompt Anchoring (SPA) to guide code LLMs to pay more attention to user intent when generating code. We evaluate SPA using six base LLMs across six benchmarks. Our results demonstrate that SPA enhances Pass@1 by up to 12.9%, consistently outperforming SOTA code generation methods in all settings. Our code is available at https://github.com/magic-YuanTian/Selective-Prompt-Anchoring.
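Mechanically, anchoring of this kind can be pictured as contrasting two forward passes: one with the full prompt visible and one with the anchored prompt tokens masked, then amplifying the difference. The sketch below illustrates that idea only; the toy vocabulary, hand-made logits, and the anchoring strength `omega` are illustrative assumptions, not the paper's released implementation.

```python
# Sketch of selective-prompt-anchoring-style logit adjustment.
# Assumptions (illustrative, not the authors' code): a toy 5-token
# vocabulary, hand-made logits standing in for two real model forward
# passes, and an anchoring strength omega chosen for demonstration.

def anchored_logits(logits_full, logits_masked, omega):
    """Amplify the prompt's influence on next-token logits.

    logits_full   -- logits when the model sees the full prompt
    logits_masked -- logits when the anchored prompt tokens are masked
    omega         -- anchoring strength (omega > 1 boosts the prompt signal;
                     omega = 1 recovers the original logits)
    """
    return [
        lm + omega * (lf - lm)
        for lf, lm in zip(logits_full, logits_masked)
    ]

# Toy example: token 2 is favored only when the prompt is visible,
# so anchoring widens its margin over the other tokens.
full = [1.0, 0.5, 3.0, 0.2, 0.1]    # logits with the prompt
masked = [1.0, 0.5, 1.0, 0.2, 0.1]  # logits with the prompt masked

adjusted = anchored_logits(full, masked, omega=2.0)
print(adjusted)  # token 2's logit rises from 3.0 to 5.0; others are unchanged
```

Tokens whose likelihood does not depend on the prompt are left untouched, since their full and masked logits cancel in the difference term.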

Yuan Tian, Tianyi Zhang • 2024

Related benchmarks

Task                     Dataset            Result                         Rank
Knowledge Editing        CounterFact        Efficacy: 93.9                 301
Instruction Following    Pronoun Changing   P. Score: 91.6                 40
Bias Mitigation          Bias-in-Bios       Accuracy: 68                   40
Gender bias evaluation   Pronoun Change     Performance Score (P): 91.6    35
Bias classification      BiasBios           Accuracy: 68                   35
