Cross-modal Retrieval for Knowledge-based Visual Question Answering
About
Knowledge-based Visual Question Answering about Named Entities is a challenging task that requires retrieving information from a multimodal Knowledge Base. Named entities have diverse visual representations and are therefore difficult to recognize. We argue that cross-modal retrieval may help bridge the semantic gap between an entity and its depictions, and that it is, above all, complementary to mono-modal retrieval. We provide empirical evidence through experiments with a multimodal dual encoder, namely CLIP, on the recent ViQuAE, InfoSeek, and Encyclopedic-VQA datasets. Additionally, we study three different strategies to fine-tune such a model: mono-modal, cross-modal, or joint training. Our method, which combines mono- and cross-modal retrieval, is competitive with billion-parameter models on the three datasets, while being conceptually simpler and computationally cheaper.
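The combination of mono- and cross-modal retrieval can be sketched as a simple late fusion of dual-encoder similarities. The sketch below assumes precomputed, CLIP-style image and text embeddings for the question and for each Knowledge Base entry; the function names and the unweighted sum are illustrative assumptions, not the exact formulation from the paper's code.

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    """Normalize rows so that dot products become cosine similarities."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def hybrid_scores(q_img: np.ndarray, q_txt: np.ndarray,
                  kb_img: np.ndarray, kb_txt: np.ndarray) -> np.ndarray:
    """Score each KB entry against a (question image, question text) pair.

    Mono-modal terms compare like with like (image-image, text-text);
    cross-modal terms compare across modalities (image-text, text-image).
    An unweighted sum is assumed here for simplicity.
    """
    q_img, q_txt = l2_normalize(q_img), l2_normalize(q_txt)
    kb_img, kb_txt = l2_normalize(kb_img), l2_normalize(kb_txt)
    mono = kb_img @ q_img + kb_txt @ q_txt    # image-image + text-text
    cross = kb_txt @ q_img + kb_img @ q_txt   # image-text + text-image
    return mono + cross

def retrieve(q_img, q_txt, kb_img, kb_txt, k=5):
    """Return the indices of the top-k KB entries by hybrid score."""
    scores = hybrid_scores(q_img, q_txt, kb_img, kb_txt)
    return np.argsort(-scores)[:k]
```

In practice the embeddings would come from a (possibly fine-tuned) CLIP image and text encoder, and the mono- and cross-modal terms could be weighted rather than summed equally.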
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | Enc-VQA (test) | Single-Hop Accuracy | 29.1 | 84 |
| Knowledge-Intensive Visual Question Answering | InfoSeek (val) | Accuracy (All) | 12.4 | 50 |
| Visual Question Answering | InfoSeek (val) | Overall Accuracy | 12.4 | 38 |
| Knowledge-Intensive Visual Question Answering | E-VQA (test) | -- | -- | 34 |
| Visual Question Answering | InfoSeek | Overall Score | 12.4 | 30 |
| Knowledge-based Visual Question Answering | E-VQA Single-Hop | Accuracy | 29.1 | 27 |
| Knowledge-based Visual Question Answering | InfoSeek All | Accuracy | 12.4 | 16 |