
EchoSight: Advancing Visual-Language Models with Wiki Knowledge

About

Knowledge-based Visual Question Answering (KVQA) tasks require answering questions about images using extensive background knowledge. Despite significant advancements, generative models often struggle with these tasks due to the limited integration of external knowledge. In this paper, we introduce EchoSight, a novel multimodal Retrieval-Augmented Generation (RAG) framework that enables large language models (LLMs) to answer visual questions requiring fine-grained encyclopedic knowledge. To achieve high-performing retrieval, EchoSight first searches wiki articles using visual-only information; these candidate articles are then reranked according to their relevance to the combined text-image query. This approach significantly improves the integration of multimodal knowledge, leading to better retrieval outcomes and more accurate VQA responses. Our experimental results on the Encyclopedic VQA and InfoSeek datasets demonstrate that EchoSight establishes new state-of-the-art results in knowledge-based VQA, achieving an accuracy of 41.8% on Encyclopedic VQA and 31.3% on InfoSeek.
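
To make the retrieve-then-rerank pipeline concrete, below is a minimal Python sketch of the two-stage flow the abstract describes. It is an illustration, not the authors' released implementation: encode_image, encode_article, and rerank_score are hypothetical stand-ins for a real vision backbone (e.g. a CLIP-style encoder) and a multimodal reranker, and the final LLM call is stubbed out.

```python
import numpy as np

rng = np.random.default_rng(0)

# Placeholder encoders: a real system would use a vision backbone and a
# multimodal reranker. These return random unit vectors / scores purely
# to keep the sketch runnable end to end.
def encode_image(image) -> np.ndarray:
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def encode_article(article: str) -> np.ndarray:
    v = rng.normal(size=512)
    return v / np.linalg.norm(v)

def rerank_score(question: str, image, article: str) -> float:
    # Stand-in for a cross-modal relevance model scoring the combined
    # (question, image) query against a candidate article.
    return float(rng.random())

def answer_kvqa(question: str, image, wiki_articles: list[str],
                top_k: int = 20, keep: int = 3) -> str:
    # Stage 1: visual-only retrieval over the wiki knowledge base.
    q_vec = encode_image(image)
    doc_vecs = np.stack([encode_article(a) for a in wiki_articles])
    sims = doc_vecs @ q_vec  # cosine similarity (vectors are unit-norm)
    candidates = [wiki_articles[i] for i in np.argsort(-sims)[:top_k]]

    # Stage 2: rerank the candidates by their relevance to the combined
    # text-image query, keeping the best few as context for generation.
    reranked = sorted(candidates,
                      key=lambda a: rerank_score(question, image, a),
                      reverse=True)[:keep]

    # Stage 3: the LLM answers conditioned on the retrieved evidence
    # (generation call stubbed out in this sketch).
    prompt = "Context:\n" + "\n".join(reranked) + f"\n\nQuestion: {question}"
    return prompt  # in a real system: llm.generate(prompt)
```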

Yibin Yan, Weidi Xie • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Visual Question Answering | Enc-VQA (test) | Single-Hop Accuracy | 41.8 | 69
Visual Question Answering | InfoSeek (test) | Accuracy | 27.7 | 60
Visual Question Answering | E-VQA (test) | Accuracy | 41.8 | 56
Knowledge-Intensive Visual Question Answering | InfoSeek (val) | Accuracy (All) | 27.7 | 30
Visual Question Answering | InfoSeek (val) | Unseen-Q Accuracy | 30 | 28
Visual Question Answering | E-VQA | Accuracy | 41.8 | 15
Visual Question Answering | InfoSeek | Overall Score | 31.3 | 15
Knowledge-Intensive Visual Question Answering | E-VQA (test) | BEM (Single-Hop) | 19.4 | 15
Multi-hop Question Answering | MMhops Bridging (test) | String Success Rate | 19.14 | 13
Multi-hop Question Answering | MMhops Comparison (test) | Accuracy | 4.81 | 13

(Showing 10 of 13 rows.)
