
An Efficient and Effective Evaluator for Text2SQL Models on Unseen and Unlabeled Data

About

Recent advances in large language models have strengthened Text2SQL systems, which translate natural language questions into database queries. A persistent deployment challenge is assessing a newly trained Text2SQL system on an unseen, unlabeled dataset when no verified answers are available. This situation arises frequently: database content and structure evolve, privacy policies slow manual review, and carefully written SQL labels are costly and time-consuming to produce. Without timely evaluation, organizations cannot approve releases or detect failures early. FusionSQL addresses this gap: it works with any Text2SQL model and estimates accuracy without reference labels, allowing teams to measure quality on unseen and unlabeled datasets. It analyzes patterns in the system's own outputs to characterize how the target dataset differs from the material used during training. FusionSQL supports pre-release checks, continuous monitoring of new databases, and detection of quality decline. Experiments across diverse application settings and question types show that FusionSQL closely tracks actual accuracy and reliably signals emerging issues. Our code is available at https://github.com/phkhanhtrinh23/FusionSQL.
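The abstract does not spell out FusionSQL's algorithm, but the idea of estimating accuracy from a system's own outputs can be illustrated with a minimal sketch from the same family of label-free estimators: aggregate a per-query confidence signal (e.g., agreement among sampled SQL outputs) into a dataset-level accuracy estimate. The function name and scores below are illustrative assumptions, not the paper's API.

```python
# Hedged sketch: estimate dataset-level accuracy without reference labels by
# averaging a per-query confidence score in [0, 1]. In a Text2SQL setting, a
# query's confidence could come from agreement among several sampled SQL
# candidates executed against the database; here we just assume the scores.

def estimate_accuracy(confidences):
    """Average per-query confidence as a proxy for execution accuracy."""
    if not confidences:
        raise ValueError("need at least one query")
    return sum(confidences) / len(confidences)

# Example: five queries; the model's sampled outputs agree strongly on three.
scores = [0.9, 0.8, 0.95, 0.4, 0.3]
print(round(estimate_accuracy(scores), 2))  # 0.67
```

The estimate is only as good as the confidence signal; methods like FusionSQL additionally characterize how the target dataset shifts away from the training distribution before trusting such a proxy.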

Trinh Pham, Thanh Tam Nguyen, Viet Huynh, Hongzhi Yin, Quoc Viet Hung Nguyen • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Dataset-level accuracy estimation | Spider to BIRD | MAE | 3.1 | 54
Dataset-level accuracy estimation | WikiSQL to Spider | MAE | 3.2 | 54
Dataset-level accuracy estimation | SParC to CoSQL | MAE | 1.5 | 54
Dataset-level accuracy estimation | Spider to SynSQL 2.5M | MAE | 2.8 | 54
Dataset-level accuracy estimation | WikiSQL to Spider 2.0 | MAE | 4.2 | 54
Label-free Performance Estimation | Spider | MAE (ATHENA) | 8.3 | 5
Label-free Performance Estimation | Spider 2.0 | MAE (ATHENA) | 9.0 | 5
Label-free Performance Estimation | SynSQL 2.5M | MAE (ATHENA) | 9.1 | 5
Label-free Performance Estimation | CoSQL | MAE (ATHENA) | 7.9 | 5
Label-free Performance Estimation | BIRD | MAE (ATHENA) | 9.2 | 5
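The MAE values in the table measure how far the estimator's predicted accuracy is from the true (labeled) accuracy, averaged over evaluation sets and reported in percentage points. A minimal sketch of the metric, with purely illustrative numbers:

```python
# Hedged sketch of the MAE metric reported in the benchmark table: mean
# absolute error between predicted accuracy and ground-truth accuracy.
# The predicted/true values below are made up for illustration.

def mae(predicted, actual):
    """Mean absolute error over paired predictions and ground truth."""
    if len(predicted) != len(actual) or not predicted:
        raise ValueError("inputs must be non-empty and equal length")
    return sum(abs(p - a) for p, a in zip(predicted, actual)) / len(predicted)

# Illustrative: predicted vs. true accuracy (%) on three hypothetical datasets.
pred = [62.0, 48.5, 71.0]
true = [65.0, 50.0, 70.0]
print(round(mae(pred, true), 2))  # 1.83
```

A lower MAE means the label-free estimate stays closer to the accuracy one would measure with full gold labels.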
