
ViDoRAG: Visual Document Retrieval-Augmented Generation via Dynamic Iterative Reasoning Agents

About

Understanding information from visually rich documents remains a significant challenge for traditional Retrieval-Augmented Generation (RAG) methods. Existing benchmarks predominantly focus on image-based question answering (QA), overlooking the fundamental challenges of efficient retrieval, comprehension, and reasoning within dense visual documents. To bridge this gap, we introduce ViDoSeek, a novel dataset designed to evaluate RAG performance on visually rich documents requiring complex reasoning. Based on it, we identify key limitations in current RAG approaches: (i) purely visual retrieval methods struggle to effectively integrate both textual and visual features, and (ii) previous approaches often allocate insufficient reasoning tokens, limiting their effectiveness. To address these challenges, we propose ViDoRAG, a novel multi-agent RAG framework tailored for complex reasoning across visual documents. ViDoRAG employs a Gaussian Mixture Model (GMM)-based hybrid strategy to effectively handle multi-modal retrieval. To further elicit the model's reasoning capabilities, we introduce an iterative agent workflow incorporating exploration, summarization, and reflection, providing a framework for investigating test-time scaling in RAG domains. Extensive experiments on ViDoSeek validate the effectiveness and generalization of our approach. Notably, ViDoRAG outperforms existing methods by over 10% on the competitive ViDoSeek benchmark. The code is available at https://github.com/Alibaba-NLP/ViDoRAG.
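The abstract names a GMM-based hybrid strategy for multi-modal retrieval but gives no formula. The sketch below is one plausible reading, assuming scores from the text and visual retrievers are min-max fused and a two-component 1-D Gaussian mixture separates high-scoring documents from the background, so the retrieval cut-off adapts to the score distribution instead of a fixed top-k. The function names (`fuse_scores`, `gmm_select`), the fusion weight `alpha`, and the EM details are illustrative assumptions, not the paper's implementation.

```python
import math

def fuse_scores(text_scores, visual_scores, alpha=0.5):
    """Min-max normalize each modality's scores, then mix them.
    `alpha` weights the textual modality (an assumed fusion rule)."""
    def norm(xs):
        lo, hi = min(xs), max(xs)
        span = (hi - lo) or 1.0
        return [(x - lo) / span for x in xs]
    t, v = norm(text_scores), norm(visual_scores)
    return [alpha * a + (1 - alpha) * b for a, b in zip(t, v)]

def gmm_select(scores, iters=50):
    """Fit a two-component 1-D Gaussian mixture by EM and return the
    indices assigned to the higher-mean component, i.e. a dynamic
    retrieval cut-off rather than a fixed k."""
    mu = [min(scores), max(scores)]
    var = [max((mu[1] - mu[0]) ** 2 / 12.0, 1e-6)] * 2
    pi = [0.5, 0.5]
    resp = []
    for _ in range(iters):
        # E-step: responsibility of each component for each score.
        resp = []
        for s in scores:
            p = [pi[k] * math.exp(-(s - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            z = sum(p) or 1e-12
            resp.append([pk / z for pk in p])
        # M-step: re-estimate means, variances, and mixing weights.
        for k in range(2):
            nk = sum(r[k] for r in resp) or 1e-12
            mu[k] = sum(r[k] * s for r, s in zip(resp, scores)) / nk
            var[k] = sum(r[k] * (s - mu[k]) ** 2
                         for r, s in zip(resp, scores)) / nk + 1e-6
            pi[k] = nk / len(scores)
    high = 0 if mu[0] >= mu[1] else 1
    return [i for i, r in enumerate(resp) if r[high] > 0.5]
```

On a clearly bimodal score list such as `[0.95, 0.9, 0.88, 0.3, 0.25, 0.2, 0.15]`, `gmm_select` keeps only the first three indices; with a flatter distribution it would keep more, which is the point of modeling the distribution rather than fixing k.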

Qiuchen Wang, Ruixue Ding, Zehui Chen, Weiqi Wu, Shihang Wang, Pengjun Xie, Feng Zhao • 2025

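The iterative exploration-summarization-reflection workflow described in the abstract can be sketched as a round-based control loop. Everything below is an assumption for illustration: the role signatures (`summarize_fn`, `reflect_fn`), the page batching, and the stop condition are hypothetical stand-ins, not ViDoRAG's actual agents or prompts.

```python
def iterative_workflow(question, pages, summarize_fn, reflect_fn, max_rounds=3):
    """Round-based explore/summarize/reflect loop (illustrative sketch)."""
    notes = []  # running summary of evidence gathered so far
    for round_ in range(max_rounds):
        # Exploration: inspect the next batch of retrieved pages.
        batch = pages[round_ * 2 : round_ * 2 + 2]
        # Summarization: condense evidence relevant to the question.
        notes.append(summarize_fn(question, batch, notes))
        # Reflection: decide whether the evidence suffices to answer.
        done, answer = reflect_fn(question, notes)
        if done:
            return answer
    return notes[-1] if notes else None

# Toy stand-ins for the LLM-backed agents (hypothetical, for demonstration).
def summarize_fn(question, batch, notes):
    return " ".join(batch)

def reflect_fn(question, notes):
    return (len(notes) >= 2, notes[-1])

result = iterative_workflow("q", ["p1", "p2", "p3", "p4"],
                            summarize_fn, reflect_fn)
```

Because reflection can end the loop early or request another round, the number of reasoning steps (and hence tokens) scales with question difficulty, which is how a loop of this shape supports the test-time-scaling investigation the abstract mentions.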
Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multimodal Document QA | VisDoMBench SlideVQA (full) | Accuracy | 71.71 | 11 |
| Multimodal Document QA | VisDoMBench SPIQA (full) | Accuracy | 68.18 | 11 |
| Multimodal Document QA | VisDoMBench PaperTab (full) | Accuracy | 43.67 | 11 |
| Multimodal Document QA | VisDoMBench FetaTab (full) | Accuracy | 58.74 | 11 |
| Multimodal Document QA | VisDoMBench SciGraphQA (full) | Accuracy | 37.86 | 11 |
| Multimodal Document Question Answering | DocBench (test) | Accuracy (Academic) | 29 | 6 |
| Multimodal Long-document Understanding | MMLongBench-Doc 1.0 (test) | Accuracy | 29.8 | 6 |
