
TableRAG: Million-Token Table Understanding with Language Models

About

Recent advancements in language models (LMs) have notably enhanced their ability to reason with tabular data, primarily through program-aided mechanisms that manipulate and analyze tables. However, these methods often require the entire table as input, leading to scalability challenges due to positional bias or context length constraints. In response to these challenges, we introduce TableRAG, a Retrieval-Augmented Generation (RAG) framework specifically designed for LM-based table understanding. TableRAG leverages query expansion combined with schema and cell retrieval to pinpoint crucial information before providing it to the LMs. This enables more efficient data encoding and precise retrieval, significantly reducing prompt lengths and mitigating information loss. We have developed two new million-token benchmarks from the Arcade and BIRD-SQL datasets to thoroughly evaluate TableRAG's effectiveness at scale. Our results demonstrate that TableRAG's retrieval design achieves the highest retrieval quality, leading to new state-of-the-art performance on large-scale table understanding.
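The abstract's pipeline (query expansion, then schema retrieval and cell retrieval, then a compact prompt) can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function names are hypothetical, and relevance is scored by simple token overlap where the actual system would use embedding-based retrieval.

```python
# Hypothetical sketch of a TableRAG-style retrieval pipeline.
# Relevance here is token overlap; the real system uses embeddings.

def expand_query(question):
    """Query expansion: derive schema- and cell-oriented sub-queries."""
    tokens = question.lower().replace("?", "").split()
    return {"schema_queries": tokens, "cell_queries": tokens}

def score(query_tokens, text):
    """Stand-in relevance score: count of shared tokens."""
    return len(set(query_tokens) & set(str(text).lower().split()))

def retrieve_schema(table, queries, k=2):
    """Schema retrieval: rank column names, return each with an example value."""
    ranked = sorted(table, key=lambda col: -score(queries, col))
    return [(col, table[col][0]) for col in ranked[:k]]

def retrieve_cells(table, queries, k=3):
    """Cell retrieval: rank (column, value) pairs instead of whole rows."""
    pairs = [(col, v) for col, vals in table.items() for v in vals]
    return sorted(pairs, key=lambda p: -score(queries, f"{p[0]} {p[1]}"))[:k]

def build_prompt(question, table):
    """Combine only the retrieved schema entries and cells into the LM prompt,
    rather than serializing the entire table."""
    q = expand_query(question)
    lines = [f"Question: {question}", "Relevant columns:"]
    lines += [f"  {col} (e.g. {ex})"
              for col, ex in retrieve_schema(table, q["schema_queries"])]
    lines += ["Relevant cells:"]
    lines += [f"  {col} = {val}"
              for col, val in retrieve_cells(table, q["cell_queries"])]
    return "\n".join(lines)

table = {
    "country": ["France", "Japan", "Brazil"],
    "capital": ["Paris", "Tokyo", "Brasilia"],
    "population_millions": ["68", "125", "214"],
}
print(build_prompt("What is the capital of Japan?", table))
```

The point of retrieving schema entries and individual cells separately is that the prompt stays small even when the table has millions of tokens: only the top-k columns and cells are serialized for the LM.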

Si-An Chen, Lesly Miculicich, Julian Martin Eisenschlos, Zifeng Wang, Zilong Wang, Yanfei Chen, Yasuhisa Fujii, Hsuan-Tien Lin, Chen-Yu Lee, Tomas Pfister • 2024

Related benchmarks

Task                      Dataset                     Metric                  Result   Rank
Table Question Answering  WikiTableQuestions (test)   Accuracy                57.03    86
Table Question Answering  ArcadeQA                    Exact Match Accuracy    49.2     15
Table Question Answering  BirdQA                      Exact Match Accuracy    45.5     15

Other info

Code
