
UniIR: Training and Benchmarking Universal Multimodal Information Retrievers

About

Existing information retrieval (IR) models often assume a homogeneous query and corpus format, limiting their applicability to diverse user needs, such as searching for images with text descriptions, searching for a news article with a headline image, or finding a similar photo with a query image. To address these diverse information-seeking demands, we introduce UniIR, a unified instruction-guided multimodal retriever capable of handling eight distinct retrieval tasks across modalities. UniIR, a single retrieval system jointly trained on ten diverse multimodal-IR datasets, interprets user instructions to execute various retrieval tasks, demonstrating robust performance across existing datasets and zero-shot generalization to new tasks. Our experiments highlight that multi-task training and instruction tuning are key to UniIR's generalization ability. Additionally, we construct M-BEIR, a multimodal retrieval benchmark with comprehensive results, to standardize the evaluation of universal multimodal information retrieval.

Cong Wei, Yang Chen, Haonan Chen, Hexiang Hu, Ge Zhang, Jie Fu, Alan Ritter, Wenhu Chen • 2023

Related benchmarks

Task                                             Dataset           Result                         Rank
Text-to-Image Retrieval                          MS-COCO           --                             151
Image-to-Text Retrieval                          MS-COCO           --                             132
Image-to-Text Retrieval                          MSCOCO            --                             129
Text-to-Image Retrieval                          MSCOCO            --                             123
Composed Image Retrieval (Image-Text to Image)   CIRR              Recall@5: 52.2                 93
Composed Image Retrieval                         CIRCO             mAP@5: 12.5                    76
Image Embedding                                  MMEB v1 (test)    Classification: 44.3           70
Multimodal Embedding                             MMEB              Classification Accuracy: 44.3  56
Multi-modal Embedding                            MMEB 1.0 (test)   Classification Accuracy: 44.3  52
Image-to-Text Retrieval                          Flickr            R@1: 94.2                      45

Showing 10 of 59 rows.
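The table reports ranking metrics such as Recall@5, R@1, and mAP@5. For reference, here is a minimal sketch of both metrics, assuming each query comes with a ranked list of candidate ids and a set of relevant ids (function names are ours, not from the benchmark code):

```python
# Illustrative implementations of Recall@k and mAP@k; helper names are ours.

def recall_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Fraction of the relevant items that appear in the top-k results."""
    return len(set(ranked[:k]) & relevant) / len(relevant)

def map_at_k(ranked: list[str], relevant: set[str], k: int) -> float:
    """Average precision truncated at rank k: mean of the precision values
    at each rank where a relevant item occurs, normalized by the smaller of
    |relevant| and k."""
    hits, ap = 0, 0.0
    for rank, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            ap += hits / rank
    return ap / min(len(relevant), k)
```

Benchmark numbers in the table are these values averaged over all queries and expressed as percentages.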
