
ACT-SQL: In-Context Learning for Text-to-SQL with Automatically-Generated Chain-of-Thought

About

Recently, Large Language Models (LLMs) have been shown to possess strong abilities across various domains and tasks. We study prompt design for the text-to-SQL task and attempt to improve LLMs' reasoning ability when generating SQL queries. Beyond the standard few-shot in-context learning setting, we design our chain-of-thought (CoT) prompt with a method similar to schema linking. We propose ACT-SQL, a method that automatically generates auto-CoT exemplars, so the whole process requires no manual labeling. Our approach is cost-saving since we make only one LLM API call per generated SQL query. Furthermore, we extend our in-context learning method to the multi-turn text-to-SQL task. Experimental results show that LLM performance benefits from our ACT-SQL approach, which achieves state-of-the-art performance on the Spider dev set among existing in-context learning approaches.
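To make the abstract's idea concrete, here is a minimal sketch of how auto-CoT exemplars for text-to-SQL might be assembled: a schema-linking heuristic matches columns used in a gold SQL query back to the question, producing a step-by-step rationale without manual labeling, and the resulting few-shot prompt needs only one LLM call per query. All function and field names below are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of ACT-SQL-style prompt assembly.
# auto_cot and build_prompt are illustrative names, not the paper's code.

def auto_cot(sql: str, schema_columns: list[str]) -> str:
    """Auto-generate a chain-of-thought via a schema-linking heuristic:
    keep the columns that actually appear in the gold SQL query."""
    linked = [col for col in schema_columns
              if col.split(".")[-1] in sql.lower()]
    steps = [f'The question refers to the column "{col}".' for col in linked]
    steps.append(f"So the SQL query is: {sql}")
    return " ".join(steps)

def build_prompt(exemplars: list[dict], schema: str, question: str) -> str:
    """Assemble a few-shot CoT prompt; a single LLM call on this string
    then yields the SQL for the target question."""
    parts = []
    for ex in exemplars:  # each exemplar gets an auto-generated rationale
        cot = auto_cot(ex["sql"], ex["columns"])
        parts.append(f"Schema: {ex['schema']}\n"
                     f"Question: {ex['question']}\n"
                     f"Let's think step by step. {cot}")
    # The target question ends the prompt, inviting the model to continue.
    parts.append(f"Schema: {schema}\nQuestion: {question}\n"
                 f"Let's think step by step.")
    return "\n\n".join(parts)
```

The key cost property from the abstract is visible here: the rationale is derived offline from the exemplar's gold SQL, so no extra API calls are spent generating the chain-of-thought itself.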

Hanchong Zhang, Ruisheng Cao, Lu Chen, Hongshen Xu, Kai Yu • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-SQL | Spider (test) | Execution Accuracy | 75 | 162
Text-to-SQL | Spider (dev) | EX (All) | 83.9 | 100
Text-to-SQL | Spider-Realistic | Execution Accuracy (EX) | 81.3 | 33
Context-dependent Text-to-SQL | CoSQL (dev) | Question Match | 46 | 33
Text-to-SQL | Spider-DK | -- | -- | 32
Text-to-SQL | Spider-Syn | -- | -- | 32
Multi-turn Text-to-SQL | SParC (dev) | QM EM | 51 | 11
