
Texts or Images? A Fine-grained Analysis on the Effectiveness of Input Representations and Models for Table Question Answering

About

In table question answering (TQA), tables are encoded as either text or images. Prior work suggests that passing images of tables to multi-modal large language models (MLLMs) performs comparably to or even better than using textual input with large language models (LLMs). However, the lack of controlled setups limits fine-grained distinctions between these approaches. In this paper, we conduct the first controlled study on the effectiveness of several combinations of table representations and models from two perspectives: question complexity and table size. We build a new benchmark based on existing TQA datasets. In a systematic analysis of seven pairs of MLLMs and LLMs, we find that the best combination of table representation and model varies across setups. We propose FRES, a method that selects table representations dynamically, and observe a 10% average performance improvement compared to using both representations indiscriminately.
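The abstract's core idea — routing each TQA instance to a text-based LLM or an image-based MLLM depending on question complexity and table size — can be sketched as a small dispatcher. The features, thresholds, and routing rules below are illustrative assumptions only, not the paper's actual FRES implementation.

```python
# Hypothetical sketch of dynamic table-representation selection in the
# spirit of FRES. The complexity cues and size threshold are assumed
# placeholders; the paper's real criteria may differ.

def select_representation(question: str, n_rows: int, n_cols: int) -> str:
    """Route one TQA instance to a 'text' (LLM) or 'image' (MLLM) pipeline."""
    # Assumed proxy for question complexity: cue words suggesting
    # multi-step reasoning (aggregation, comparison, ranking).
    complex_cues = ("difference", "average", "compare", "total", "rank")
    is_complex = any(cue in question.lower() for cue in complex_cues)

    # Assumed proxy for table size: total cell count over a fixed threshold.
    is_large = n_rows * n_cols > 200

    # Illustrative routing rule: render large tables as images; send
    # complex questions over small tables to the text pipeline.
    if is_large:
        return "image"
    return "text" if is_complex else "image"
```

Any learned or validation-tuned selector could replace these hand-written heuristics; the point is only that the representation is chosen per instance rather than fixed globally.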

Wei Zhou, Mohsen Mesgar, Heike Adel, Annemarie Friedrich • 2025

Related benchmarks

Task                      Dataset                Metric               Score  Rank
Table Question Answering  WikiSQL (test)         –                    –      55
Table Question Answering  WTQ (test)             Denotation Accuracy  54.4   45
Table Question Answering  HiTab (test)           Exact Match          64     8
Table Question Answering  TabFact small (test)   Exact Match          75.4   8

Other info

Code
