
Cross-modal Retrieval for Knowledge-based Visual Question Answering

About

Knowledge-based Visual Question Answering about named entities is a challenging task that requires retrieving information from a multimodal Knowledge Base. Named entities have diverse visual representations and are therefore difficult to recognize. We argue that cross-modal retrieval may help bridge the semantic gap between an entity and its depictions, and is, above all, complementary to mono-modal retrieval. We provide empirical evidence through experiments with a multimodal dual encoder, namely CLIP, on the recent ViQuAE, InfoSeek, and Encyclopedic-VQA datasets. Additionally, we study three strategies for fine-tuning such a model: mono-modal, cross-modal, and joint training. Our method, which combines mono- and cross-modal retrieval, is competitive with billion-parameter models on all three datasets, while being conceptually simpler and computationally cheaper.
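The combination of mono- and cross-modal retrieval described in the abstract can be viewed as a late fusion of similarity scores computed in a shared dual-encoder embedding space. The sketch below is only illustrative: it assumes precomputed, CLIP-style embeddings, and the function names and linear weighting scheme are hypothetical, not the paper's exact method.

```python
import numpy as np

def l2_normalize(x):
    """Normalize vectors to unit length along the last axis."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def hybrid_scores(query_text_emb, query_image_emb, kb_text_embs, alpha=0.5):
    """Fuse mono-modal (text -> text) and cross-modal (image -> text)
    cosine similarities computed in a shared CLIP-style embedding space.
    `alpha` weights the mono-modal term; this weighting is an assumption,
    not the paper's reported fusion strategy."""
    q_t = l2_normalize(query_text_emb)     # question text embedding
    q_i = l2_normalize(query_image_emb)    # question image embedding
    kb = l2_normalize(kb_text_embs)        # KB passage embeddings, shape (N, d)
    mono = kb @ q_t                        # text query vs. KB passages
    cross = kb @ q_i                       # image query vs. KB passages
    return alpha * mono + (1.0 - alpha) * cross
```

The KB entry with the highest fused score is then passed to a reader to answer the question; because both terms are cosine similarities in the same space, a single weight suffices to trade them off.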

Paul Lerner, Olivier Ferret, Camille Guinaudeau • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | Enc-VQA (test) | Single-Hop Accuracy | 29.1 | 69 |
| Knowledge-Intensive Visual Question Answering | InfoSeek (val) | Accuracy (All) | 12.4 | 30 |
| Visual Question Answering | InfoSeek (val) | -- | -- | 28 |
| Knowledge-Intensive Visual Question Answering | E-VQA (test) | BEM (Single-Hop) | 29.1 | 15 |
