
Efficient Table Retrieval and Understanding with Multimodal Large Language Models

About

Tabular data is frequently captured in image form across a wide range of real-world scenarios such as financial reports, handwritten records, and document scans. These visual representations pose unique challenges for machine understanding, as they combine both structural and visual complexities. While recent advances in Multimodal Large Language Models (MLLMs) show promising results in table understanding, they typically assume the relevant table is readily available. However, a more practical scenario involves identifying and reasoning over relevant tables from large-scale collections to answer user queries. To address this gap, we propose TabRAG, a framework that enables MLLMs to answer queries over large collections of table images. Our approach first retrieves candidate tables using jointly trained visual-text foundation models, then leverages MLLMs to perform fine-grained reranking of these candidates, and finally employs MLLMs to reason over the selected tables for answer generation. Through extensive experiments on a newly constructed dataset comprising 88,161 training and 9,819 testing samples across 8 benchmarks with 48,504 unique tables, we demonstrate that our framework significantly outperforms existing methods by 7.0% in retrieval recall and 6.1% in answer accuracy, offering a practical solution for real-world table understanding tasks.
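The three-stage pipeline described above (coarse retrieval, fine-grained reranking, answer generation) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the real system uses a jointly trained visual-text encoder for retrieval and an MLLM for reranking and generation, while here each stage is a toy stand-in (dot-product similarity, keyword overlap, and a template answer) so the control flow is runnable.

```python
# Hypothetical sketch of a TabRAG-style retrieve -> rerank -> generate pipeline.
# All names and scoring functions are illustrative stand-ins, not the paper's code.

from dataclasses import dataclass


@dataclass
class TableImage:
    table_id: str
    embedding: tuple  # stand-in for a visual-text foundation model embedding
    content: str      # stand-in for what an MLLM would read off the table image


def dot(a, b):
    return sum(x * y for x, y in zip(a, b))


def retrieve(query_emb, corpus, k=3):
    """Stage 1: coarse candidate retrieval by embedding similarity."""
    return sorted(corpus, key=lambda t: dot(query_emb, t.embedding), reverse=True)[:k]


def rerank(query, candidates):
    """Stage 2: fine-grained reranking of candidates.
    A real system would score each table image with an MLLM; here we
    approximate that with keyword overlap against the table content."""
    def score(t):
        return len(set(query.lower().split()) & set(t.content.lower().split()))
    return sorted(candidates, key=score, reverse=True)


def generate_answer(query, table):
    """Stage 3: answer generation over the selected table (MLLM stand-in)."""
    return f"Answer to '{query}' grounded in table {table.table_id}"


corpus = [
    TableImage("t1", (0.9, 0.1), "quarterly revenue by region"),
    TableImage("t2", (0.2, 0.8), "player statistics season 2023"),
    TableImage("t3", (0.7, 0.3), "annual revenue growth by product"),
]

query = "revenue by region"
query_emb = (1.0, 0.0)            # toy query embedding
candidates = retrieve(query_emb, corpus, k=2)
best = rerank(query, candidates)[0]
print(generate_answer(query, best))  # grounded in table t1
```

The key design point mirrored here is the division of labor: cheap embedding similarity narrows a large collection to a few candidates, and the expensive per-candidate scoring is only applied to that short list before a single table is handed to the generator.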

Zhuoyan Xu, Haoyang Fang, Boran Han, Bonan Min, Bernie Wang, Cuixiong Hu, Shuai Zhang • 2026

Related benchmarks

Task                     | Dataset    | Metric   | Result | Rank
Fact Verification        | TabFact    | Accuracy | 56.67  | 73
Table Question Answering | HiTab      | Accuracy | 19.49  | 67
Data-to-Text Generation  | ToTTo      | BLEU     | 52.28  | 18
Text Generation          | HiTab T2T  | BLEU     | 16.96  | 11
Question Answering       | WTQ        | Accuracy | 19.19  | 11
Text Generation          | Rotowire   | BLEU     | 8.15   | 11
Text Generation          | WikiBio    | BLEU     | 4.79   | 11
Question Answering       | FeTaQA     | BLEU     | 23.14  | 11
