
UniversalRAG: Retrieval-Augmented Generation over Corpora of Diverse Modalities and Granularities

About

Retrieval-Augmented Generation (RAG) has shown substantial promise in improving factual accuracy by grounding model responses in external knowledge relevant to queries. However, most existing approaches are limited to a text-only corpus, and while recent efforts have extended RAG to other modalities such as images and videos, they typically operate over a single modality-specific corpus. In contrast, real-world queries vary widely in the type of knowledge they require, which no single type of knowledge source can address. To address this, we introduce UniversalRAG, designed to retrieve and integrate knowledge from heterogeneous sources with diverse modalities and granularities. Specifically, we observe that forcing all modalities into a unified representation space derived from a single aggregated corpus causes a modality gap, in which retrieval tends to favor items of the same modality as the query. Motivated by this, we propose modality-aware routing, which dynamically identifies the most appropriate modality-specific corpus and performs targeted retrieval within it; we further justify its effectiveness with a theoretical analysis. Moreover, beyond modality, we organize each corpus into multiple granularity levels, enabling retrieval tailored to the complexity and scope of the query. We validate UniversalRAG on 10 benchmarks spanning multiple modalities, showing its superiority over various modality-specific and unified baselines.
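The routing idea described in the abstract can be illustrated with a minimal sketch. Everything below is hypothetical: the keyword-based router and word-overlap scorer are stand-ins for the trained (or prompted) router and dense retriever the paper describes; the route names are assumed for illustration.

```python
# Minimal sketch of modality-aware routing with granularity levels.
# NOT the authors' implementation: a toy keyword router and a word-overlap
# scorer stand in for a learned router and a dense embedding retriever.
from dataclasses import dataclass

# Routing targets: each pairs a modality with a granularity level.
ROUTES = ["paragraph", "document", "image", "clip", "video"]

@dataclass
class Item:
    route: str    # which modality/granularity corpus the item belongs to
    content: str

def route_query(query: str) -> str:
    """Toy router: pick one corpus per query instead of searching all of them."""
    q = query.lower()
    if any(w in q for w in ("image", "photo", "diagram")):
        return "image"
    if any(w in q for w in ("video", "clip", "scene")):
        # finer granularity (a clip) for moment-level questions
        return "clip" if "scene" in q or "moment" in q else "video"
    # longer, multi-hop questions get document-level text retrieval
    return "document" if len(q.split()) > 12 else "paragraph"

def retrieve(query: str, corpus: list[Item], k: int = 2) -> list[Item]:
    """Retrieve only within the corpus the router selected."""
    route = route_query(query)
    candidates = [it for it in corpus if it.route == route]
    # Stand-in relevance score: shared-word count, not cosine similarity.
    def score(it: Item) -> int:
        return len(set(query.lower().split()) & set(it.content.lower().split()))
    return sorted(candidates, key=score, reverse=True)[:k]
```

Because candidates are filtered to one corpus before scoring, a text query can never crowd out image or video results (or vice versa), which is the modality-gap failure mode the abstract describes.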

Woongyeong Yeo, Kangsan Kim, Soyeong Jeong, Jinheon Baek, Sung Ju Hwang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Video Question Answering | LVBench | Accuracy | 19.1 | 108 |
| Document Visual Question Answering | SlideVQA | -- | -- | 32 |
| Multimodal Question Answering | Open-WikiTable | F1-Recall | 31.12 | 22 |
| Multimodal Question Answering | WebQA | F1-Recall | 79.48 | 22 |
| Multimodal Question Answering | 2WikiMQA | F1-Recall | 47.3 | 22 |
| Visual Question Answering | InfoSeek | F1-Recall | 37.25 | 22 |
| Multimodal Question Answering | Aggregate (Open-WikiTable, 2WikiMQA, InfoSeek, Dyn-VQA, TabFact, WebQA) | Average Score | 38.84 | 22 |
| Multimodal Question Answering | TabFact | F1-Recall | 26 | 22 |
| Multimodal Question Answering | Dyn-VQA | F1-Recall | 11.91 | 22 |
| Multimodal Document Question Answering | MMLongBench | Accuracy | 6.6 | 19 |

Showing 10 of 17 rows.
