DocVAL: Validated Chain-of-Thought Distillation for Grounded Document VQA
About
Document visual question answering requires models not only to answer questions correctly, but also to precisely localize answers within complex document layouts. While large vision-language models (VLMs) achieve strong spatial grounding, their inference cost and latency limit real-world deployment. Compact VLMs are more efficient, but they often suffer substantial localization degradation under standard fine-tuning or distillation. To address this gap, we propose DocVAL, a validated chain-of-thought (CoT) distillation framework that transfers explicit spatial reasoning from large teacher models to compact, deployable student VLMs. DocVAL combines (1) teacher-generated spatial CoT supervision, (2) a rule-based dual-mode validator that filters low-quality training signals and provides fine-grained, pixel-level corrective feedback, and (3) a validation-driven two-stage training procedure with iterative refinement. Text detection is used only as training-time scaffolding for supervision and validation, enabling the final student to operate as a pure VLM without OCR or detection at inference. Across multiple document understanding benchmarks, DocVAL yields consistent improvements of up to 6-7 ANLS points over comparable compact VLMs. We further introduce mean Average Precision (mAP) as a localization metric for document question answering and report strong spatial grounding performance under this new evaluation. We release 95K validator-verified CoT traces and show that high-quality, validated supervision is more effective than scaling unfiltered data, enabling efficient and trustworthy document grounding. Dataset and implementation: https://github.com/ahmad-shirazi/DocVAL
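Evaluating spatial grounding with mAP requires matching predicted answer boxes to ground-truth boxes by overlap. The paper's exact mAP protocol is not spelled out here, but its core building block is the standard intersection-over-union (IoU) between axis-aligned pixel boxes; a minimal sketch, assuming `(x1, y1, x2, y2)` box coordinates (a hypothetical representation, not necessarily the repo's):

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; width/height clamp to 0 when boxes are disjoint.
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is then typically counted as a true positive when its IoU with a ground-truth box exceeds a threshold (e.g. 0.5), and average precision is computed over the ranked predictions.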
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Document Visual Question Answering | DocVQA | ANLS 91.4 | 263 |
| Document Visual Question Answering | VisualMRC | ANLS 73.7 | 12 |
| Document Visual Question Answering | FUNSD | ANLS 92.2 | 12 |
| Document Visual Question Answering | CORD | ANLS 88.8 | 12 |
| Document Visual Question Answering | SROIE | ANLS 95.2 | 12 |
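The results above are reported in ANLS (Average Normalized Levenshtein Similarity), the standard DocVQA answer metric: each prediction is scored by its best normalized edit similarity against the accepted ground-truth answers, with scores below a threshold (0.5 in the standard definition) zeroed out. A minimal sketch of that computation; the case-folding here is an illustrative normalization choice, not necessarily the benchmarks' exact preprocessing:

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    if len(a) < len(b):
        a, b = b, a
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,          # deletion
                           cur[j - 1] + 1,       # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def anls(predictions, references, tau=0.5):
    """ANLS over a dataset.

    predictions: list of predicted answer strings.
    references:  list of lists of acceptable ground-truth strings.
    """
    total = 0.0
    for pred, golds in zip(predictions, references):
        best = 0.0
        for gold in golds:
            p, g = pred.strip().lower(), gold.strip().lower()
            denom = max(len(p), len(g)) or 1
            best = max(best, 1.0 - levenshtein(p, g) / denom)
        total += best if best >= tau else 0.0  # threshold out weak matches
    return total / max(len(predictions), 1)
```

For example, a one-character typo against a seven-character answer scores 6/7 ≈ 0.857, while an unrelated answer falls below the 0.5 threshold and contributes 0.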