How to Utilize Complementary Vision-Text Information for 2D Structure Understanding

About

LLMs typically linearize 2D tables into 1D sequences to fit their autoregressive architecture, which weakens row-column adjacency and other layout cues. In contrast, purely visual encoders can capture spatial cues, yet often struggle to preserve exact cell text. Our analysis reveals that these two modalities provide highly distinct information to LLMs and exhibit strong complementarity. However, direct concatenation and other fusion methods yield limited gains and frequently introduce cross-modal interference. To address this issue, we propose DiVA-Former, a lightweight architecture designed to effectively integrate vision and text information. DiVA-Former leverages visual tokens as dynamic queries to distill long textual sequences into digest vectors, thereby effectively exploiting complementary vision-text information. Evaluated across 13 table benchmarks, DiVA-Former improves upon the pure-text baseline by 23.9% and achieves consistent gains over existing baselines using visual inputs, textual inputs, or a combination of both.
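The core mechanism described above can be sketched as cross-attention in which visual tokens act as queries over a long linearized-text sequence, compressing it into one "digest" vector per visual token. This is an illustrative sketch only: the function name, dimensions, and random projection matrices are stand-ins for DiVA-Former's learned parameters, whose actual design is not detailed here.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distill_text_with_visual_queries(vis_tokens, text_tokens, d_model=64, seed=0):
    """Cross-attention where visual tokens are queries and text tokens
    supply keys/values, yielding one digest vector per visual token.
    The projection weights here are random stand-ins for learned ones."""
    rng = np.random.default_rng(seed)
    Wq = rng.standard_normal((vis_tokens.shape[-1], d_model)) / np.sqrt(vis_tokens.shape[-1])
    Wk = rng.standard_normal((text_tokens.shape[-1], d_model)) / np.sqrt(text_tokens.shape[-1])
    Wv = rng.standard_normal((text_tokens.shape[-1], d_model)) / np.sqrt(text_tokens.shape[-1])
    Q = vis_tokens @ Wq            # (n_vis, d_model)
    K = text_tokens @ Wk           # (n_text, d_model)
    V = text_tokens @ Wv           # (n_text, d_model)
    attn = softmax(Q @ K.T / np.sqrt(d_model))  # (n_vis, n_text)
    return attn @ V                # digest vectors: (n_vis, d_model)

# Example: a long linearized table (512 text tokens) compressed by 16 visual queries.
vis = np.random.default_rng(1).standard_normal((16, 32))
txt = np.random.default_rng(2).standard_normal((512, 48))
digests = distill_text_with_visual_queries(vis, txt)
print(digests.shape)  # (16, 64)
```

The key point the sketch illustrates is that the number of digest vectors is fixed by the number of visual queries, so the LLM's input length no longer grows with the length of the linearized table text.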

Jiancheng Dong, Pengyue Jia, Derong Xu, Jiawei Cheng, Jingyu Peng, Chao Zhang, Bowen Liu, Xin Sun, Lixin Su, Shuaiqiang Wang, Dawei Yin, Xiangyu Zhao • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Fact Verification | TabFact | Accuracy | 79.1 | 83 |
| Table Question Answering | TabMWP | Accuracy | 92 | 79 |
| Question Answering | WTQ | Accuracy | 60.9 | 21 |
| Question Answering | FeTaQA | ROUGE-L | 50.4 | 10 |
| Question Answering | TAT | Accuracy | 78 | 10 |
| Question Answering | HiTab | Accuracy | 70.5 | 10 |
| Structure Understanding | TSR | Accuracy | 79.1 | 10 |
| Structure Understanding | TCE | Accuracy | 85.3 | 10 |
| Structure Understanding | RCE | Accuracy | 76.2 | 10 |
| Structure Understanding | TCR | Accuracy | 71.6 | 10 |

Showing 10 of 13 rows.
