
Wiki-LLaVA: Hierarchical Retrieval-Augmented Generation for Multimodal LLMs

About

Multimodal LLMs are the natural evolution of LLMs, extending their capabilities beyond the purely textual modality. While much ongoing research targets novel architectures and vision-and-language adapters, in this paper we concentrate on endowing such models with the capability of answering questions that require external knowledge. Our approach, termed Wiki-LLaVA, integrates an external knowledge source of multimodal documents, accessed through a hierarchical retrieval pipeline. Relevant passages retrieved from this source are employed as additional context for the LLM, augmenting the effectiveness and precision of the generated dialogues. We conduct extensive experiments on datasets tailored for visual question answering with external data and demonstrate the appropriateness of our approach.
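The hierarchical pipeline described above can be pictured as two retrieval stages followed by prompt augmentation: the query image first selects candidate documents from the knowledge base, the question text then ranks passages within those documents, and the top passages are prepended to the LLM prompt as extra context. The sketch below illustrates this flow under simplifying assumptions; the names (Document, hierarchical_retrieve, build_prompt) and the cosine-similarity ranking are illustrative placeholders, not the authors' actual implementation or API.

```python
# Minimal sketch of a two-stage (hierarchical) retrieval pipeline in the spirit of
# Wiki-LLaVA. Embeddings are assumed to be precomputed (e.g., by a contrastive
# vision-language encoder); all identifiers here are hypothetical.
from dataclasses import dataclass
import numpy as np


@dataclass
class Document:
    title: str
    passages: list[str]
    image_emb: np.ndarray      # visual embedding of the document's reference image, shape (d,)
    passage_embs: np.ndarray   # one text embedding per passage, shape (P, d)


def cosine(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    # Cosine similarity between rows of a and rows of b.
    a = a / np.linalg.norm(a, axis=-1, keepdims=True)
    b = b / np.linalg.norm(b, axis=-1, keepdims=True)
    return a @ b.T


def hierarchical_retrieve(img_emb: np.ndarray, question_emb: np.ndarray,
                          kb: list[Document], k_docs: int = 2, k_pass: int = 3) -> list[str]:
    # Stage 1: retrieve the most relevant documents by visual similarity to the query image.
    doc_scores = cosine(img_emb[None], np.stack([d.image_emb for d in kb]))[0]
    top_docs = [kb[i] for i in np.argsort(doc_scores)[::-1][:k_docs]]

    # Stage 2: rank passages inside the retrieved documents against the question embedding.
    scored: list[tuple[float, str]] = []
    for doc in top_docs:
        sims = cosine(question_emb[None], doc.passage_embs)[0]
        scored.extend(zip(sims.tolist(), doc.passages))
    scored.sort(key=lambda s: s[0], reverse=True)
    return [p for _, p in scored[:k_pass]]


def build_prompt(question: str, passages: list[str]) -> str:
    # Retrieved passages are injected as additional context for the multimodal LLM.
    context = "\n".join(f"- {p}" for p in passages)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
```

In the full model, the image and question embeddings would presumably come from the vision-and-language encoders already used by the architecture, and the assembled prompt would be passed to the multimodal LLM together with the query image.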

Davide Caffagni, Federico Cocchi, Nicholas Moratelli, Sara Sarto, Marcella Cornia, Lorenzo Baraldi, Rita Cucchiara • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | Enc-VQA (test) | Single-Hop Accuracy | 17.7 | 69
Visual Question Answering | InfoSeek (test) | Accuracy | 28.9 | 60
Visual Question Answering | E-VQA (test) | Accuracy | 21.8 | 56
Knowledge-Intensive Visual Question Answering | InfoSeek (val) | Accuracy (All) | 28.9 | 30
Visual Question Answering | InfoSeek (val) | Unseen-Q Accuracy | 30.1 | 28
Knowledge-Intensive Visual Question Answering | E-VQA (test) | BEM (Single-Hop) | 17.7 | 15
Entity Retrieval | InfoSeek (val) | R@1 | 36.9 | 9
Entity Retrieval | E-VQA (test) | Recall@1 | 0.033 | 7
Visual Question Answering | ViQuAE | F1 Score | 12.7 | 6
Visual Question Answering | S3VQA | Model Score (GPT-4) | 22.7 | 6

(10 of 13 rows shown)
