UniIR: Training and Benchmarking Universal Multimodal Information Retrievers
About
Existing information retrieval (IR) models often assume a homogeneous query and corpus format, limiting their applicability to diverse user needs, such as searching for images with text descriptions, searching for a news article with a headline image, or finding a similar photo with a query image. To address these diverse information-seeking demands, we introduce UniIR, a unified instruction-guided multimodal retriever capable of handling eight distinct retrieval tasks across modalities. UniIR, a single retrieval system jointly trained on ten diverse multimodal-IR datasets, interprets user instructions to execute various retrieval tasks, demonstrating robust performance on existing datasets and zero-shot generalization to new tasks. Our experiments highlight that multi-task training and instruction tuning are key to UniIR's generalization ability. Additionally, we construct M-BEIR, a multimodal retrieval benchmark with comprehensive results, to standardize the evaluation of universal multimodal information retrieval.
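As a rough illustration of the instruction-guided, score-fusion retrieval idea (a minimal sketch, not the authors' released implementation), the snippet below prepends a task instruction to the query text and embeds queries and candidates of any modality with a shared CLIP encoder, fusing text and image features by simple addition. The model checkpoint, the instruction wording, and the image paths are assumptions for illustration.

```python
# Minimal sketch of instruction-guided multimodal retrieval with score-level
# feature fusion, in the spirit of UniIR's CLIP-based variant. The checkpoint
# and additive fusion are assumptions, not the authors' exact code.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed(text=None, image=None):
    """Fuse text and image features into one unit-norm embedding (either input may be None)."""
    feats = []
    if text is not None:
        inputs = processor(text=[text], return_tensors="pt", padding=True, truncation=True)
        feats.append(model.get_text_features(**inputs))
    if image is not None:
        inputs = processor(images=image, return_tensors="pt")
        feats.append(model.get_image_features(**inputs))
    fused = torch.stack(feats).sum(dim=0)  # additive fusion of available modalities
    return torch.nn.functional.normalize(fused, dim=-1)

# The instruction tells the retriever which task to perform; wording is hypothetical.
instruction = "Retrieve an image that matches the following caption."
query = embed(text=f"{instruction} A dog chasing a frisbee on the beach.")

# Hypothetical candidate pool of images; candidates could equally be text or image-text pairs.
candidates = torch.cat([embed(image=Image.open(p)) for p in ["a.jpg", "b.jpg"]])
scores = query @ candidates.T              # cosine similarity (embeddings are unit-norm)
print(scores.argsort(descending=True))     # candidate indices ranked by relevance
```

Because the instruction is part of the query embedding, the same frozen retriever can be steered toward different tasks (text-to-image, image-to-text, composed retrieval) purely by changing the instruction string.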
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Image-to-Text Retrieval | MSCOCO | -- | -- | 124 |
| Text-to-Image Retrieval | MSCOCO | -- | -- | 118 |
| Text-to-Image Retrieval | MS-COCO | R@5 | 81.1 | 79 |
| Composed Image Retrieval (Image-Text to Image) | CIRR | -- | -- | 75 |
| Image-to-Text Retrieval | MS-COCO | R@5 | 92.3 | 65 |
| Composed Image Retrieval | CIRCO | mAP@5 | 12.5 | 63 |
| Multi-modal Retrieval | M-BEIR (test) | Average Recall | 50.6 | 36 |
| Text-to-Image Retrieval | Flickr | R@1 | 84.1 | 35 |
| Text-to-Image Retrieval | ShareGPT4V | R@1 | 85.8 | 35 |
| Image-to-Text Retrieval | Urban-1K | R@1 | 78.4 | 34 |
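For reference, the R@k values in the table count a query as a hit if a relevant candidate appears among the top k ranked results, averaged over queries. The sketch below computes it under the common assumption of one gold candidate per query (as in standard COCO/Flickr evaluation); the toy rankings are made up for illustration.

```python
# Minimal sketch of the Recall@k metric reported above, assuming exactly one
# relevant candidate per query.
def recall_at_k(ranked_ids, relevant_id, k):
    """1.0 if the relevant candidate appears in the top k results, else 0.0."""
    return float(relevant_id in ranked_ids[:k])

# Hypothetical rankings: each inner list is candidate ids sorted by retrieval score.
rankings = [[3, 7, 1], [5, 2, 9], [8, 4, 6]]
relevant = [7, 9, 0]  # gold candidate id per query

r_at_1 = sum(recall_at_k(r, g, 1) for r, g in zip(rankings, relevant)) / len(relevant)
r_at_5 = sum(recall_at_k(r, g, 5) for r, g in zip(rankings, relevant)) / len(relevant)
print(r_at_1, r_at_5)  # 0.0 and ~0.667 for these toy rankings
```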