
Internalized Reasoning for Long-Context Visual Document Understanding

About

Visual long-document understanding is critical for enterprise, legal, and scientific applications, yet the best-performing open recipes have not explored reasoning, a capability that has driven leaps in math and code performance. We introduce a synthetic data pipeline for reasoning in long-document understanding that generates thinking traces by scoring each page for question relevance, extracting textual evidence, and ordering it from most to least relevant. We apply SFT to the resulting traces within <think> tags, gated by a <cot> control token, and the resulting reasoning capability is internalized via low-strength model merging. We study Qwen3 VL 32B and Mistral Small 3.1 24B. With Qwen3 VL, we achieve 58.3 on MMLongBenchDoc, surpassing the 7x larger Qwen3 VL 235B A22B (57.0). With Mistral, we show that synthetic reasoning outperforms distillation from the Thinking version's traces by 3.8 points on MMLBD-C, and internalized reasoning produces 12.4x fewer mean output tokens than explicit reasoning. We release our pipeline for reproducibility and further exploration.
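The abstract's pipeline can be illustrated with a minimal sketch: score each page for question relevance, extract textual evidence, order pages from most to least relevant, and emit a trace inside <think> tags gated by a <cot> control token. Everything below (the function names, the word-overlap scoring heuristic, the trace layout) is an illustrative assumption, not the authors' actual implementation.

```python
def score_page_relevance(page_text: str, question: str) -> float:
    """Toy relevance score: fraction of question words that appear on the page.
    (Assumption -- the paper does not specify its scoring function.)"""
    q_words = set(question.lower().split())
    p_words = set(page_text.lower().split())
    return len(q_words & p_words) / max(len(q_words), 1)


def extract_evidence(page_text: str, question: str, max_sentences: int = 2) -> list[str]:
    """Keep the sentences that share the most words with the question."""
    q_words = set(question.lower().split())
    sentences = [s.strip() for s in page_text.split(".") if s.strip()]
    ranked = sorted(
        sentences,
        key=lambda s: len(q_words & set(s.lower().split())),
        reverse=True,
    )
    return ranked[:max_sentences]


def build_thinking_trace(pages: list[str], question: str) -> str:
    """Score pages, order them most-to-least relevant, and emit a <think>
    trace gated by a <cot> control token, as the abstract describes."""
    scored = sorted(
        enumerate(pages),
        key=lambda p: score_page_relevance(p[1], question),
        reverse=True,
    )
    lines = []
    for idx, page in scored:
        score = score_page_relevance(page, question)
        if score == 0:
            continue  # irrelevant pages contribute no evidence
        evidence = "; ".join(extract_evidence(page, question))
        lines.append(f"Page {idx + 1} (relevance {score:.2f}): {evidence}")
    return "<cot><think>\n" + "\n".join(lines) + "\n</think>"
```

Traces like these would then serve as SFT targets, with the <cot> token letting the model toggle the behavior on or off at inference time.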

Austin Veselka • 2026

Related benchmarks

Task | Dataset | Result | Rank
Long-context Understanding | LongBench v2 | - | 109
Long-context Document Understanding | MMLongBench-Doc | Accuracy 55.8 | 58
Visual Question Answering | SlideVQA | - | 46
Document Understanding | DUDE | Accuracy 55.1 | 17
Long-context Understanding | HELMET | Accuracy 68.5 | 15
Visual Document Understanding | MMLongBenchDoc-C | Accuracy 58.2 | 11
Long-context Visual Question Answering | MMLongBench 128K | Accuracy 75.7 | 11
Visual Document Understanding | VA | Accuracy 95 | 11
Visual Document Understanding | LCA | Accuracy 94.4 | 11
Long-context Visual Question Answering | MMLongBench 32K | Accuracy 78.6 | 11

(Showing 10 of 12 rows.)
