Texts or Images? A Fine-grained Analysis on the Effectiveness of Input Representations and Models for Table Question Answering
About
In table question answering (TQA), tables are encoded as either texts or images. Prior work suggests that passing images of tables to multi-modal large language models (MLLMs) performs comparably to, or even better than, using textual input with large language models (LLMs). However, the lack of controlled setups limits fine-grained distinctions between these approaches. In this paper, we conduct the first controlled study on the effectiveness of several combinations of table representations and models from two perspectives: question complexity and table size. We build a new benchmark based on existing TQA datasets. In a systematic analysis of seven pairs of MLLMs and LLMs, we find that the best combination of table representation and model varies across setups. We propose FRES, a method that dynamically selects the table representation per input, and observe a 10% average performance improvement over using either representation indiscriminately.
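The core idea of FRES, choosing between a textual and an image representation per input based on table size and question complexity, can be sketched as below. This is an illustrative sketch only: the abstract does not specify the decision rule, so the features, thresholds, and keyword heuristic here are hypothetical assumptions, not the paper's actual method.

```python
def estimate_table_size(table: list[list[str]]) -> int:
    """Number of cells, a simple proxy for table size (assumed feature)."""
    return sum(len(row) for row in table)


def estimate_question_complexity(question: str) -> int:
    """Toy proxy: count of reasoning keywords (hypothetical heuristic)."""
    keywords = ("sum", "average", "difference", "most", "least", "compare")
    q = question.lower()
    return sum(q.count(k) for k in keywords)


def select_representation(table: list[list[str]], question: str,
                          size_threshold: int = 200,
                          complexity_threshold: int = 1) -> str:
    """Pick 'text' (LLM input) or 'image' (MLLM input) for one example.

    Mirrors the paper's idea of selecting dynamically by table size and
    question complexity; the concrete rule below is an assumption.
    """
    if estimate_table_size(table) > size_threshold:
        # Assumed rule: very large tables go to an MLLM as an image.
        return "image"
    if estimate_question_complexity(question) >= complexity_threshold:
        # Assumed rule: multi-step reasoning favors textual input to an LLM.
        return "text"
    return "text"
```

In practice such a selector would be tuned (or learned) on held-out TQA data rather than hand-set as here.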
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Table Question Answering | WikiSQL (test) | -- | -- | 55 |
| Table Question Answering | WTQ (test) | Denotation Accuracy | 54.4 | 45 |
| Table Question Answering | HiTab (test) | Exact Match | 64.0 | 8 |
| Table Question Answering | TabFact small (test) | Exact Match | 75.4 | 8 |