PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers
About
Large Multimodal Models (LMMs) excel in natural language and visual understanding but are challenged by exacting tasks such as Knowledge-based Visual Question Answering (KB-VQA), which involve retrieving relevant information from document collections to use in shaping answers to questions. We present an extensive training and evaluation framework, M2KR, for KB-VQA. M2KR contains a collection of vision and language tasks which we have incorporated into a single suite of benchmark tasks for training and evaluating general-purpose multi-modal retrievers. We use M2KR to develop PreFLMR, a pre-trained version of the recently developed Fine-grained Late-interaction Multi-modal Retriever (FLMR) approach to KB-VQA, and we report new state-of-the-art results across a range of tasks. We also present investigations into the scaling behaviors of PreFLMR, intended to be useful for future development of general-purpose multi-modal retrievers.
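The "late-interaction" in FLMR/PreFLMR refers to ColBERT-style token-level retrieval scoring. The sketch below illustrates that general principle only; the tensor shapes, function names, and the assumption that query embeddings already include projected visual tokens are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of ColBERT-style late-interaction (MaxSim) scoring,
# the retrieval principle underlying FLMR/PreFLMR.
# NOTE: shapes and names are illustrative assumptions, not the paper's code.
import torch

def late_interaction_score(query_embs: torch.Tensor,
                           doc_embs: torch.Tensor) -> torch.Tensor:
    """
    query_embs: (num_queries, Lq, dim) per-token query embeddings
                (for a multi-modal retriever these would mix text and
                 projected visual tokens)
    doc_embs:   (num_docs, Ld, dim) per-token document embeddings
    returns:    (num_queries, num_docs) relevance scores
    """
    # Token-level similarities: (num_queries, num_docs, Lq, Ld)
    sim = torch.einsum("qld,kmd->qklm", query_embs, doc_embs)
    # MaxSim: take the best-matching document token for each query token,
    # then sum over query tokens to get a query-document score.
    return sim.max(dim=-1).values.sum(dim=-1)

if __name__ == "__main__":
    torch.manual_seed(0)
    q = torch.nn.functional.normalize(torch.randn(2, 32, 128), dim=-1)
    d = torch.nn.functional.normalize(torch.randn(5, 180, 128), dim=-1)
    print(late_interaction_score(q, d).shape)  # torch.Size([2, 5])
```

Compared with single-vector dense retrieval, keeping per-token embeddings lets each query token (textual or visual) match the most relevant part of a document, which is what makes the interaction "fine-grained".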
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Question Answering | OK-VQA | VQA Score: 61.88 | 18 |
| Knowledge-based Visual Retrieval | OKVQA Google Search (test) | PR@5: 76.83 | 16 |
| Knowledge-based Visual Retrieval | ReMuQ 1.0 (test) | MRR@5: 52.27 | 8 |
| Knowledge-based Visual Question Answering | OKVQA M2KR | VQA Score: 0.6188 | 6 |
| Knowledge-based Visual Retrieval | OKVQA WK11M (test) | MRR@5: 45.68 | 6 |
| Knowledge-based Visual Retrieval | E-VQA 1.0 (test) | MRR@5: 0.3092 | 6 |
| Retrieval | OKVQA (test) | PR@5: 70.9 | 5 |
| Retrieval | InfoSeek (test) | P@5: 62.1 | 5 |
| Retrieval | E-VQA (test) | PR@5: 0.737 | 5 |
| Knowledge-based Visual Question Answering | Infoseek M2KR | Accuracy: 30.65 | 3 |
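As a reading aid for the retrieval metrics in the table above, the sketch below shows one common way such metrics are computed. It assumes PR@K is pseudo-relevance recall (a query counts as a hit if any of its top-K retrieved passages contains a gold answer string) and that MRR@K is the mean reciprocal rank of the first relevant passage within the top K; this is a generic illustration, not the official M2KR evaluation code.

```python
# Generic sketch of PR@K (pseudo-relevance recall) and MRR@K.
# Assumption: a passage is "relevant" if it contains a gold answer string.
from typing import List

def pr_at_k(retrieved: List[List[str]], answers: List[List[str]], k: int = 5) -> float:
    """Fraction of queries with at least one answer-bearing passage in the top k."""
    hits = 0
    for passages, golds in zip(retrieved, answers):
        if any(any(g.lower() in p.lower() for g in golds) for p in passages[:k]):
            hits += 1
    return hits / len(retrieved)

def mrr_at_k(retrieved: List[List[str]], answers: List[List[str]], k: int = 5) -> float:
    """Mean reciprocal rank of the first answer-bearing passage within the top k."""
    total = 0.0
    for passages, golds in zip(retrieved, answers):
        for rank, p in enumerate(passages[:k], start=1):
            if any(g.lower() in p.lower() for g in golds):
                total += 1.0 / rank
                break
    return total / len(retrieved)

if __name__ == "__main__":
    retrieved = [["The Eiffel Tower is in Paris.", "Lions live in Africa."]]
    answers = [["Paris"]]
    print(pr_at_k(retrieved, answers))   # 1.0
    print(mrr_at_k(retrieved, answers))  # 1.0
```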