
Table-GPT: Table-tuned GPT for Diverse Table Tasks

About

Language models, such as GPT-3.5 and ChatGPT, demonstrate remarkable abilities to follow diverse human instructions and perform a wide range of tasks. However, when probing language models using a range of basic table-understanding tasks, we observe that today's language models are still sub-optimal in many table-related tasks, likely because they are pre-trained predominantly on one-dimensional natural-language texts, whereas relational tables are two-dimensional objects. In this work, we propose a new "table-tuning" paradigm, where we continue to train/fine-tune language models like GPT-3.5 and ChatGPT, using diverse table tasks synthesized from real tables as training data, with the goal of enhancing language models' ability to understand tables and perform table tasks. We show that our resulting Table-GPT models demonstrate (1) better table-understanding capabilities, consistently outperforming the vanilla GPT-3.5 and ChatGPT on a wide range of table tasks, including held-out unseen tasks, and (2) strong generalizability, in their ability to respond to diverse human instructions to perform new table tasks, in a manner similar to GPT-3.5 and ChatGPT.

Peng Li, Yeye He, Dror Yashar, Weiwei Cui, Song Ge, Haidong Zhang, Danielle Rifinski Fainman, Dongmei Zhang, Surajit Chaudhuri • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Table Question Answering | WTQ | Accuracy | 9.13 | 101 |
| Table Question Answering | HiTab | Accuracy | 24.26 | 67 |
| Table Question Answering | TabMWP | Accuracy | 16.13 | 53 |
| Table Question Answering | AIT-QA | Accuracy | 47.52 | 41 |
| Table-based Fact Verification | TabFact | Accuracy | 25.29 | 33 |
| Table Summarization | QTSumm | Accuracy | 47.23 | 24 |
| Table Reasoning | InfoTabs | Accuracy | 46.03 | 24 |
| Table-to-text Generation | FeTaQA | Accuracy | 36.64 | 24 |
| Table Question Answering | TabMCQ | Accuracy | 19.7 | 24 |
| Tabular Understanding | TableGPT | Accuracy | 25.21 | 24 |
