
NL2SQL-BUGs: A Benchmark for Detecting Semantic Errors in NL2SQL Translation

About

Natural Language to SQL (NL2SQL) translation is crucial for democratizing database access, but even state-of-the-art models frequently generate semantically incorrect SQL queries, hindering the widespread adoption of these techniques by database vendors. While existing NL2SQL benchmarks primarily focus on correct query translation, we argue that a benchmark dedicated to identifying common errors in NL2SQL translations is equally important, as accurately detecting these errors is a prerequisite for any subsequent correction, whether performed by humans or models. To address this gap, we propose NL2SQL-BUGs, the first benchmark dedicated to detecting and categorizing semantic errors in NL2SQL translation. NL2SQL-BUGs adopts a two-level taxonomy to systematically classify semantic errors, covering 9 main categories and 31 subcategories. The benchmark consists of 2,018 expert-annotated instances, each containing a natural language query, a database schema, and a SQL query, with detailed error annotations for the semantically incorrect queries. Through comprehensive experiments, we demonstrate that current large language models exhibit significant limitations in semantic error detection, achieving an average detection accuracy of only 75.16%. Notably, our method detected 106 previously undetected annotation errors (accounting for 6.91%) in BIRD, a widely used NL2SQL dataset. This highlights the importance of semantic error detection in NL2SQL systems. The benchmark is publicly available at https://nl2sql-bugs.github.io/.
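To make the benchmark's structure concrete, here is a minimal sketch of what a single annotated instance and the detection target could look like. The field names and the example query are illustrative assumptions, not the benchmark's actual file format; consult the project site for the real schema.

```python
# Hypothetical sketch of one NL2SQL-BUGs-style instance.
# Field names ("question", "schema", "sql", ...) are assumptions,
# not the benchmark's actual format.
instance = {
    "question": "How many singers are older than 30?",
    "schema": "CREATE TABLE singer (id INT, name TEXT, age INT);",
    "sql": "SELECT COUNT(*) FROM singer WHERE age < 30;",  # semantically wrong: flipped operator
    "is_correct": False,
    "error": {
        "main_category": "Condition-related",       # illustrative category name
        "subcategory": "Wrong comparison operator",  # illustrative subcategory name
    },
}

def detection_target(inst):
    """Label a detection model should produce: 'correct', or the error's main category."""
    return "correct" if inst["is_correct"] else inst["error"]["main_category"]
```

A detector is thus evaluated on whether it can flag the query as semantically incorrect given only the question, schema, and SQL; the two-level error annotation supports the finer-grained categorization task.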

Xinyu Liu, Shuyu Shen, Boyan Li, Nan Tang, Yuyu Luo • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Dataset-level accuracy estimation | SParC to CoSQL | MAE 6.1 | 54 |
| Dataset-level accuracy estimation | WikiSQL to Spider 2.0 | MAE 17.3 | 54 |
| Dataset-level accuracy estimation | Spider to BIRD | MAE 14.8 | 54 |
| Dataset-level accuracy estimation | WikiSQL to Spider | MAE 13.6 | 54 |
| Dataset-level accuracy estimation | Spider to SynSQL 2.5M | MAE 12.4 | 54 |
| Label-free performance estimation | Spider | MAE (ATHENA) 12 | 5 |
| Label-free performance estimation | Spider 2.0 | MAE (ATHENA) 12.8 | 5 |
| Label-free performance estimation | SynSQL 2.5M | MAE (ATHENA) 13 | 5 |
| Label-free performance estimation | CoSQL | MAE (ATHENA) 11.5 | 5 |
| Label-free performance estimation | BIRD | MAE (ATHENA) 13.2 | 5 |
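The MAE figures above measure how far an estimated dataset-level accuracy deviates from the true accuracy, in percentage points. A minimal sketch of the metric (the function name and the example numbers are illustrative, not taken from the benchmark):

```python
def mean_absolute_error(predicted, actual):
    """Mean absolute difference between predicted and true accuracies,
    in percentage points. Lower is better."""
    assert len(predicted) == len(actual) and predicted, "need equal-length, non-empty lists"
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Hypothetical estimated vs. true accuracies on three datasets:
print(mean_absolute_error([70.0, 60.0, 80.0], [74.0, 58.0, 80.0]))  # → 2.0
```

So an MAE of 6.1 means the estimator's accuracy predictions were off by about 6 percentage points on average for that dataset pair.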
