
Decoupling Skeleton and Flesh: Efficient Multimodal Table Reasoning with Disentangled Alignment and Structure-aware Guidance

About

Reasoning over table images remains challenging for Large Vision-Language Models (LVLMs) due to complex layouts and tightly coupled structure-content information. Existing solutions often depend on expensive supervised training, reinforcement learning, or external tools, limiting efficiency and scalability. This work addresses a key question: how can LVLMs be adapted to table reasoning with minimal annotation and no external tools? Specifically, we first introduce DiSCo, a Disentangled Structure-Content alignment framework that explicitly separates structural abstraction from semantic grounding during multimodal alignment, efficiently adapting LVLMs to table structures. Building on DiSCo, we further present Table-GLS, a Global-to-Local Structure-guided reasoning framework that performs table reasoning via structured exploration and evidence-grounded inference. Extensive experiments across diverse benchmarks demonstrate that our framework efficiently enhances LVLMs' table understanding and reasoning capabilities, and generalizes particularly well to unseen table structures.
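The global-to-local idea behind Table-GLS can be illustrated with a minimal sketch: a global stage first narrows attention to structurally relevant columns, and a local stage then grounds the answer in specific cells. This is purely illustrative; the names `global_stage` and `local_stage` are hypothetical, and the actual framework operates on table images with an LVLM rather than on structured data.

```python
# Hypothetical two-stage sketch of global-to-local table reasoning.
# The real Table-GLS pipeline uses an LVLM over table images; here a
# plain list-of-dicts table stands in to make the control flow concrete.

def global_stage(table, question):
    """Global structure exploration: keep columns whose header occurs in the question."""
    headers = table[0].keys()
    cols = [h for h in headers if h.lower() in question.lower()]
    return cols or list(headers)  # fall back to all columns if none match

def local_stage(table, cols, key_col, key_val):
    """Local evidence grounding: return only the cells of the matching row."""
    for row in table:
        if row.get(key_col) == key_val:
            return {c: row[c] for c in cols if c in row}
    return {}

table = [
    {"Dataset": "TabFact", "Accuracy": 75.41},
    {"Dataset": "WTQ", "Accuracy": 57.11},
]
cols = global_stage(table, "What is the Accuracy on WTQ?")
evidence = local_stage(table, cols, "Dataset", "WTQ")
print(evidence)  # {'Accuracy': 57.11}
```

The two stages are deliberately separable: the global stage only inspects structure (headers), while the local stage only touches content (cell values), mirroring the structure-content disentanglement described above.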

Yingjie Zhu, Xuefeng Bai, Kehai Chen, Yang Xiang, Youcheng Pan, Xiaoqiang Zhou, Min Zhang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-based Visual Question Answering | TextVQA | Accuracy | 80.83 | 496 |
| Science Question Answering | ScienceQA | Accuracy | 95.09 | 229 |
| Table Fact Verification | TabFact (test) | Accuracy | 75.41 | 98 |
| Hallucination Evaluation | CRPE relation | Accuracy | 77.92 | 23 |
| Table Structure Detection | MMTab In-domain | Row Score | 64.2 | 19 |
| Visual Hallucination Evaluation | HallusionBench | -- | -- | 19 |
| Table Question Answering | TAT-QA (test) | Accuracy | 40.54 | 15 |
| Question Answering | WTQ (test) | Accuracy | 57.11 | 11 |
| Fact Verification | InfoTabs (test) | Accuracy | 72.67 | 11 |
| Question Answering | HiTab (test) | Accuracy | 35.47 | 11 |

Showing 10 of 16 rows.
