
Uncovering Systematic Failures of LLMs in Verifying Code Against Natural Language Specifications

About

Large language models (LLMs) have become essential tools in software development, widely used for requirements engineering, code generation, and review tasks. Software engineers often rely on LLMs to assess whether code implementations satisfy task requirements, thereby enhancing code robustness and accuracy. However, it remains unclear whether LLMs can reliably determine whether code fully complies with the given task descriptions, which are usually natural language specifications. In this paper, we uncover a systematic failure of LLMs in evaluating whether code aligns with natural language requirements. Specifically, using widely adopted benchmarks, we employ unified prompts to judge code correctness. Our results reveal that LLMs frequently misclassify correct code implementations as either "not satisfying requirements" or containing potential defects. Surprisingly, more complex prompting, especially prompt engineering techniques that elicit explanations and proposed corrections, leads to higher misjudgment rates, highlighting critical reliability issues in using LLMs as code review assistants. We further analyze the root causes of these misjudgments and propose two improved prompting strategies for mitigation. For the first time, our findings reveal previously unrecognized limitations of LLMs in matching code against requirements. We also offer novel insights and practical guidance for the effective use of LLMs in automated code review and task-oriented agent scenarios.
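The evaluation setup described above can be illustrated with a minimal sketch: a unified judgment prompt asking an LLM whether code satisfies a task description, and the F1 score over its binary verdicts (positive class = "code is correct"). The prompt template and the example judgments below are illustrative assumptions, not the paper's exact prompt or data.

```python
# Hedged sketch of prompt-based code-correctness judging and F1 scoring.
# JUDGE_PROMPT is a hypothetical template, not the paper's actual prompt.

JUDGE_PROMPT = """You are a code reviewer.

Task description:
{task}

Candidate implementation:
{code}

Does the implementation fully satisfy the task description? Answer YES or NO."""


def f1_score(predictions, labels):
    """F1 over binary judgments; True = 'code judged/actually correct'."""
    tp = sum(p and l for p, l in zip(predictions, labels))
    fp = sum(p and not l for p, l in zip(predictions, labels))
    fn = sum((not p) and l for p, l in zip(predictions, labels))
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)


# Illustrative example: four actually-correct solutions, but the LLM judge
# misclassifies two of them as defective (the failure mode the paper studies).
labels = [True, True, True, True, False, False]
predictions = [True, True, False, False, False, True]
print(round(f1_score(predictions, labels), 3))  # → 0.571
```

Misjudging correct code as defective lowers recall (and hence F1) even when the judge is accurate on genuinely broken code, which is why F1 is a natural summary metric for the benchmark tables below.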

Haolin Jin, Huaming Chen • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Code Correctness Evaluation | APPS | F1 61.1 | 25 |
| Code Correctness Evaluation | HE-Go | F1 54.2 | 25 |
| Code Correctness Evaluation | HE-PY | F1 55.2 | 25 |
| Code Correctness Evaluation | HE-JS | F1 46.2 | 25 |
| Code Correctness Evaluation | HE-JA | F1 55.9 | 25 |
| Code Correctness Evaluation | HE-CPP | F1 46.5 | 25 |
| Code Correctness Evaluation | BCB | F1 55.2 | 25 |
| Code Correctness Evaluation | VP | F1 60.9 | 24 |
| Code Correctness Evaluation | DB | F1 57.2 | 24 |
| Code Refinement | HE-PY | PR 2.19 | 15 |
