Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs

About

Multimodal LLMs are the natural evolution of LLMs, extending their capabilities to work beyond the purely textual modality. While much current research focuses on designing novel architectures and vision-and-language adapters, in this paper we concentrate on endowing such models with the capability of answering questions that require external knowledge. Our approach, termed Wiki-LLaVA, integrates an external knowledge source of multimodal documents, accessed through a hierarchical retrieval pipeline: relevant passages are retrieved from the external source and employed as additional context for the LLM, improving the effectiveness and precision of the generated dialogues. We conduct extensive experiments on datasets tailored for visual question answering with external data and demonstrate the appropriateness of our approach.

Davide Caffagni, Federico Cocchi, Nicholas Moratelli, Sara Sarto, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara • 2024
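
To make the hierarchical pipeline concrete, here is a minimal sketch of two-stage retrieval-augmented prompt construction: a first stage retrieves whole documents from the knowledge base given the query image, and a second stage ranks passages within those documents against the question text, with the top passages prepended to the prompt. This is an illustration under stated assumptions, not the authors' code: the random vectors stand in for real embeddings (e.g., a CLIP-style image encoder and a text retriever), and the knowledge-base layout, `hierarchical_retrieve` helper, and prompt template are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 512  # embedding size; illustrative

def cosine(q, m):
    """Cosine similarity between query vector q and each row of matrix m."""
    q = q / np.linalg.norm(q)
    m = m / np.linalg.norm(m, axis=1, keepdims=True)
    return m @ q

# Toy knowledge base: each document carries one embedding used for
# image-to-document retrieval, plus per-passage text embeddings.
knowledge_base = [
    {
        "title": f"Entity {i}",
        "doc_emb": rng.normal(size=DIM),
        "passages": [f"Passage {j} of entity {i}" for j in range(5)],
        "passage_embs": rng.normal(size=(5, DIM)),
    }
    for i in range(100)
]

def hierarchical_retrieve(image_emb, question_emb, kb, k_docs=1, k_passages=3):
    # Stage 1: match the query image against whole documents.
    doc_scores = cosine(image_emb, np.stack([d["doc_emb"] for d in kb]))
    top_docs = [kb[i] for i in np.argsort(doc_scores)[::-1][:k_docs]]

    # Stage 2: within the retrieved documents only, rank individual
    # passages against the question and keep the best ones.
    candidates = []
    for doc in top_docs:
        p_scores = cosine(question_emb, doc["passage_embs"])
        for j in np.argsort(p_scores)[::-1][:k_passages]:
            candidates.append((p_scores[j], doc["passages"][j]))
    candidates.sort(key=lambda c: c[0], reverse=True)
    return [text for _, text in candidates[:k_passages]]

# Retrieved passages become additional context for the multimodal LLM
# (the prompt template below is an assumption, not the paper's).
image_emb = rng.normal(size=DIM)     # stand-in for a CLIP image embedding
question_emb = rng.normal(size=DIM)  # stand-in for a text-encoder embedding
context = "\n".join(hierarchical_retrieve(image_emb, question_emb, knowledge_base))
prompt = f"Context:\n{context}\n\nQuestion: <question here>\nAnswer:"
print(prompt)
```

The point of the two-stage design is efficiency and focus: the image narrows the search to a handful of entity documents first, so passage ranking only runs over a few candidates instead of the entire knowledge base.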

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Visual Question Answering | E-VQA (test) | Accuracy | 21.8 | 85 |
| Visual Question Answering | Enc-VQA (test) | Single-Hop Accuracy | 18.3 | 84 |
| Visual Question Answering | InfoSeek (test) | Accuracy | 28.9 | 81 |
| Knowledge-Intensive Visual Question Answering | InfoSeek (val) | Accuracy (All) | 28.9 | 50 |
| Visual Question Answering | InfoSeek (val) | Overall Accuracy | 28.9 | 38 |
| Knowledge-Intensive Visual Question Answering | E-VQA (test) | Accuracy (All) | 27.1 | 34 |
| Visual Question Answering | InfoSeek | Overall Score | 27.1 | 30 |
| Knowledge-based Visual Question Answering | E-VQA Single-Hop | Accuracy | 21.8 | 27 |
| Knowledge-based Visual Question Answering | INFOSEEK Unseen Question | Accuracy | 30.1 | 19 |
| Knowledge-based Visual Question Answering | INFOSEEK (Unseen Entity) | Accuracy | 27.8 | 19 |

Showing 10 of 24 rows.
