TaCube: Pre-computing Data Cubes for Answering Numerical-Reasoning Questions over Tabular Data
About
Existing auto-regressive pre-trained language models (PLMs) such as T5 and BART have been successfully applied to table question answering by UNIFIEDSKG and TAPEX, respectively, and have demonstrated state-of-the-art results on multiple benchmarks. However, auto-regressive PLMs are challenged by recently emerging numerical reasoning datasets, such as TAT-QA, because implicit calculation is error-prone. In this paper, we present TaCube, which pre-computes aggregation/arithmetic results for the table in advance, so that they are handy and readily available for PLMs to answer numerical reasoning questions. TaCube systematically and comprehensively covers a collection of computational operations over table segments. Simply concatenating TaCube to the input sequence of PLMs yields significant experimental gains: TaCube improves the F1 score from 49.6% to 66.2% on TAT-QA and achieves new state-of-the-art results on WikiTQ (59.6% denotation accuracy). TaCube's improvements on numerical reasoning cases are even more notable: on TAT-QA, TaCube improves the exact match accuracy of BART-large by 39.6% on sum, 52.5% on average, 36.6% on subtraction, and 22.2% on division. We believe that TaCube is a general and portable pre-computation solution that can potentially be integrated into various numerical reasoning frameworks.
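The idea described above can be sketched in a few lines. This is a minimal, hypothetical illustration (not the paper's actual implementation): it pre-computes a few aggregation and pairwise arithmetic results over a toy table's numeric columns and linearizes them into a string that could be concatenated to a PLM's input sequence; all function and operation names here are illustrative.

```python
from itertools import combinations


def precompute_cube(columns):
    """Pre-compute aggregation/arithmetic results over numeric columns.

    `columns` maps a column name to a list of numbers. The returned dict
    maps a readable operation label to its pre-computed value.
    """
    cube = {}
    for name, values in columns.items():
        cube[f"sum({name})"] = sum(values)
        cube[f"average({name})"] = sum(values) / len(values)
    # Pairwise arithmetic between column totals (difference and ratio),
    # so subtraction/division answers are also available verbatim.
    for a, b in combinations(columns, 2):
        cube[f"diff({a},{b})"] = sum(columns[a]) - sum(columns[b])
        if sum(columns[b]) != 0:
            cube[f"ratio({a},{b})"] = sum(columns[a]) / sum(columns[b])
    return cube


def linearize(cube):
    """Serialize pre-computed results for concatenation to a PLM input."""
    return " | ".join(f"{op} : {val:g}" for op, val in cube.items())


# Toy example: two numeric columns of a flattened table.
table = {"revenue": [10.0, 20.0, 30.0], "cost": [5.0, 5.0, 10.0]}
print(linearize(precompute_cube(table)))
```

With such a string appended to the serialized table and question, the model can copy a pre-computed value instead of performing the arithmetic implicitly during generation.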
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Table Question Answering | WikiTQ (test) | Accuracy: 60.8 | 92 |
| Table Question Answering | WikiTableQuestions (test) | Accuracy: 60.8 | 86 |
| Table-based Question Answering | WikiTableQuestions (dev) | Accuracy: 60.9 | 25 |
| Table Question Answering | WikiTQ (dev) | -- | 18 |
| Table-based Question Answering | WikiTableQuestions (official) | Test Accuracy: 60.8 | 15 |
| Language-to-Code Generation | WikiTQ official (test) | Execution Accuracy: 59.6 | 12 |
| Language-to-Code Generation | WikiTQ official (dev) | Execution Accuracy: 59.7 | 11 |