
TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data

About

Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider. Implementation of the model will be available at http://fburl.com/TaBERT.
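As a rough illustration of the joint text/table input the abstract describes, TaBERT-style models linearize table rows into per-column segments (column name, type, cell value) and pair them with the NL utterance before feeding them to a BERT-style encoder. The helper below is a simplified sketch of that linearization, not the authors' implementation; the segment format and separator tokens are assumptions for illustration only.

```python
# Simplified sketch of TaBERT-style input linearization (NOT the official
# implementation). Each table row becomes a sequence of per-column segments
# "column name | type | cell value", concatenated with the NL question.

def linearize_row(header, types, row):
    """Render one table row as a linearized segment sequence."""
    segments = [
        f"{name} | {typ} | {cell}"
        for name, typ, cell in zip(header, types, row)
    ]
    return " [SEP] ".join(segments)

def build_input(utterance, header, types, row):
    """Pair the NL utterance with one linearized table row."""
    return f"[CLS] {utterance} [SEP] {linearize_row(header, types, row)}"

# Hypothetical example table and question:
header = ["Year", "City"]
types = ["real", "text"]
row = ["2005", "Helsinki"]
print(build_input("Which city hosted in 2005?", header, types, row))
# → [CLS] Which city hosted in 2005? [SEP] Year | real | 2005 [SEP] City | text | Helsinki
```

In the full model, only a small "content snapshot" of rows most relevant to the utterance is encoded this way, which keeps the input length manageable for large tables.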

Pengcheng Yin, Graham Neubig, Wen-tau Yih, Sebastian Riedel • 2020

Related benchmarks

Task                           | Dataset                          | Result                          | Rank
-------------------------------|----------------------------------|---------------------------------|-----
Text-to-SQL                    | Spider (dev)                     | --                              | 100
Table Question Answering       | WikiTQ (test)                    | Accuracy: 52.3                  | 92
Table Question Answering       | WikiTableQuestions (test)        | Accuracy: 52.3                  | 86
Table-based Question Answering | WIKITABLEQUESTIONS (dev)         | Accuracy: 52.2                  | 25
Table Question Answering       | WikiTQ (dev)                     | Denotation Acc: 53              | 18
Semantic Parsing               | WikiTableQuestions (test)        | Execution Accuracy (Best): 52.3 | 17
Semantic Parsing               | WIKITABLEQUESTIONS (dev)         | Execution Accuracy (Best): 53   | 16
Column Population              | Wikipedia tables (test)          | MAP: 53.3                       | 15
Row Population                 | Wikipedia Tables Row Population  | MAP: 43.8                       | 12
Column Type Prediction         | VizNet                           | Support-weighted F1: 97.2       | 11
