
PubTables-v2: A new large-scale dataset for full-page and multi-page table extraction

About

Table extraction (TE) is a key challenge in visual document understanding. Traditional approaches first detect tables and then recognize their structure. Recently, interest has surged in methods, such as vision-language models (VLMs), that extract tables directly in their full-page or document context. However, progress has been difficult to demonstrate due to a lack of annotated data. To address this, we create PubTables-v2, a new large-scale dataset that supports a number of currently challenging table extraction tasks. Notably, it is the first large-scale benchmark for multi-page table structure recognition. We demonstrate its usefulness by evaluating domain-specialized VLMs on these tasks and highlighting current progress. Finally, we use PubTables-v2 to create the Page-Object Table Transformer (POTATR), an image-to-graph model that extends the Table Transformer to comprehensive page-level TE. Data, code, and trained models will be released.
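To give a concrete sense of what the multi-page setting adds over single-table recognition, here is a minimal sketch of stitching per-page table fragments into logical tables using a cross-page continuation decision. All names here (`TableFragment`, `merge_fragments`, the toy classifier) are hypothetical illustrations, not the dataset's or POTATR's actual interfaces.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical sketch; none of these names come from PubTables-v2 or POTATR.
@dataclass
class TableFragment:
    page: int                  # page the fragment was detected on
    rows: List[List[str]]      # recognized grid of cell text
    has_header: bool           # whether the first row is a (possibly repeated) header

def same_width_next_page(prev: TableFragment, frag: TableFragment) -> bool:
    """Toy continuation classifier: same column count, same or next page.
    A learned model would use visual and textual context instead."""
    return (len(prev.rows[0]) == len(frag.rows[0])
            and 0 <= frag.page - prev.page <= 1)

def merge_fragments(fragments: List[TableFragment],
                    is_continuation: Callable[[TableFragment, TableFragment], bool]
                    ) -> List[List[List[str]]]:
    """Group per-page fragments into logical tables, dropping headers
    repeated on continuation pages; return one row grid per table."""
    tables: List[List[TableFragment]] = []
    for frag in fragments:
        if tables and is_continuation(tables[-1][-1], frag):
            # Continuation: strip the repeated header before concatenating rows.
            rows = frag.rows[1:] if frag.has_header else frag.rows
            tables[-1].append(TableFragment(frag.page, rows, False))
        else:
            tables.append([frag])
    return [[row for f in table for row in f.rows] for table in tables]
```

The point of the sketch is that the merge step is only as good as the continuation classifier, which is exactly what the cross-page table continuation classification task isolates for evaluation.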

Brandon Smock, Valerie Faucon-Morin, Max Sokolov, Libin Liang, Tayyibah Khanam, Maury Courtland • 2025

Related benchmarks

Task | Dataset | Result | Rank
Table Structure Recognition | PubTables cropped tables collection v2 | GriTS (Top) 98.03 | 6
Page-level Table Extraction | PubTables page-level table extraction v2 | GriTS (Top) 96.04 | 5
Cross-page table continuation classification | PubTables v2 | -- | 2
