
MLDocRAG: Multimodal Long-Context Document Retrieval Augmented Generation

About

Understanding multimodal long-context documents composed of heterogeneous chunks such as paragraphs, figures, and tables is challenging due to (1) cross-modal heterogeneity, which requires localizing relevant information across modalities, and (2) cross-page reasoning, which requires aggregating evidence dispersed across pages. To address these challenges, we adopt a query-centric formulation that projects cross-modal and cross-page information into a unified query representation space, with queries acting as abstract semantic surrogates for heterogeneous multimodal content. In this paper, we propose a Multimodal Long-Context Document Retrieval Augmented Generation (MLDocRAG) framework that leverages a Multimodal Chunk-Query Graph (MCQG) to organize multimodal document content around semantically rich, answerable queries. MCQG is constructed via a multimodal document expansion process that generates fine-grained queries from heterogeneous document chunks and links them to their corresponding content across modalities and pages. This graph-based structure enables selective, query-centric retrieval and structured evidence aggregation, thereby enhancing grounding and coherence in multimodal long-context question answering. Experiments on the MMLongBench-Doc and LongDocURL datasets show that MLDocRAG consistently improves retrieval quality and answer accuracy, demonstrating its effectiveness for multimodal long-context understanding.
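The core idea above can be sketched as a small toy example: generated queries serve as semantic surrogates for chunks, and retrieval matches the user question against those queries before aggregating the linked chunks across pages and modalities. All names, the graph API, and the bag-of-words similarity below are illustrative assumptions, not the paper's actual implementation.

```python
from collections import Counter
from math import sqrt

def embed(text):
    """Toy bag-of-words vector (a stand-in for a real text encoder)."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MCQG:
    """Bipartite graph linking generated queries to the chunks they cover.

    Hypothetical sketch of a Multimodal Chunk-Query Graph: each edge ties
    a fine-grained generated query to a (page, modality, chunk_id) tuple.
    """

    def __init__(self):
        self.edges = {}  # query text -> list of (page, modality, chunk_id)

    def add(self, query, chunk):
        self.edges.setdefault(query, []).append(chunk)

    def retrieve(self, question, k=2):
        # Query-centric retrieval: rank generated queries against the user
        # question, then aggregate chunks linked to the top-k queries,
        # which may span different pages and modalities.
        ranked = sorted(
            self.edges,
            key=lambda q: cosine(embed(question), embed(q)),
            reverse=True,
        )
        evidence = []
        for q in ranked[:k]:
            evidence.extend(self.edges[q])
        return evidence

graph = MCQG()
graph.add("what revenue growth does table 3 report", (5, "table", "t3"))
graph.add("which architecture does figure 2 depict", (2, "figure", "f2"))
graph.add("how is revenue growth explained in the text", (6, "text", "p14"))

evidence = graph.retrieve("what was the revenue growth", k=2)
# Aggregates the table chunk (page 5) and text chunk (page 6), skipping
# the unrelated figure: cross-page, cross-modal evidence via queries.
```

The point of the surrogate layer is that the question is only ever compared to homogeneous text queries, sidestepping direct cross-modal matching; the graph edges then recover the heterogeneous chunks.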

Yongyue Zhang, Yaxiong Wu • 2026

Related benchmarks

Task | Dataset | Result | Rank
Multimodal Document Question Answering | MMLongBench-Doc | Acc (TXT Evidence): 47.2 | 30
Multimodal Document Question Answering | LongDocURL | Overall Acc: 50.8 | 21
