TaBERT: Pretraining for Joint Understanding of Textual and Tabular Data
About
Recent years have witnessed the burgeoning of pretrained language models (LMs) for text-based natural language (NL) understanding tasks. Such models are typically trained on free-form NL text, and hence may not be suitable for tasks like semantic parsing over structured data, which require reasoning over both free-form NL questions and structured tabular data (e.g., database tables). In this paper we present TaBERT, a pretrained LM that jointly learns representations for NL sentences and (semi-)structured tables. TaBERT is trained on a large corpus of 26 million tables and their English contexts. In experiments, neural semantic parsers using TaBERT as feature representation layers achieve new best results on the challenging weakly-supervised semantic parsing benchmark WikiTableQuestions, while performing competitively on the text-to-SQL dataset Spider. An implementation of the model will be available at http://fburl.com/TaBERT.
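To jointly encode an NL utterance and a table, TaBERT linearizes table rows into BERT-style input sequences, representing each cell as a (column name, column type, cell value) triple. The sketch below illustrates this linearization scheme in plain Python; the helper name and exact delimiter format are illustrative assumptions, not the released implementation's API.

```python
# Sketch of TaBERT-style row linearization (illustrative helper, not the
# released implementation): each cell is rendered as
# "column_name | type | value", and the cells are concatenated with the
# NL utterance into a single BERT-style input sequence.

def linearize_row(utterance, columns, row):
    """Build a BERT-style input string for one table row.

    utterance: the NL question
    columns:   list of (name, type) pairs, e.g. ("Year", "real")
    row:       list of cell values aligned with `columns`
    """
    cells = [
        f"{name} | {col_type} | {value}"
        for (name, col_type), value in zip(columns, row)
    ]
    # [CLS] utterance [SEP] cell_1 [SEP] cell_2 [SEP] ...
    return "[CLS] " + utterance + " [SEP] " + " [SEP] ".join(cells) + " [SEP]"


columns = [("Year", "real"), ("Venue", "text")]
row = ["2004", "Athens"]
print(linearize_row("In which city did the 2004 event take place?", columns, row))
```

In the full model, each linearized row is fed through BERT, and the per-row column encodings are pooled across rows to produce a single representation per column for the downstream semantic parser.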
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-SQL | Spider (dev) | -- | -- | 100 |
| Table Question Answering | WikiTQ (test) | Accuracy | 52.3 | 92 |
| Table Question Answering | WikiTableQuestions (test) | Accuracy | 52.3 | 86 |
| Table-based Question Answering | WIKITABLEQUESTIONS (dev) | Accuracy | 52.2 | 25 |
| Table Question Answering | WikiTQ (dev) | Denotation Acc | 53 | 18 |
| Semantic Parsing | WikiTableQuestions (test) | Execution Accuracy (Best) | 52.3 | 17 |
| Semantic Parsing | WIKITABLEQUESTIONS (dev) | Execution Accuracy (Best) | 53 | 16 |
| Column Population | Wikipedia tables (test) | MAP | 53.3 | 15 |
| Row Population | Wikipedia Tables Row Population | MAP | 43.8 | 12 |
| Column Type Prediction | VizNet | Support-weighted F1 | 97.2 | 11 |