
UniIR: Training and Benchmarking Universal Multimodal Information Retrievers

About

Existing information retrieval (IR) models often assume a homogeneous format, limiting their applicability to diverse user needs, such as searching for images with text descriptions, searching for a news article with a headline image, or finding a similar photo with a query image. To approach such different information-seeking demands, we introduce UniIR, a unified instruction-guided multimodal retriever capable of handling eight distinct retrieval tasks across modalities. UniIR, a single retrieval system jointly trained on ten diverse multimodal-IR datasets, interprets user instructions to execute various retrieval tasks, demonstrating robust performance across existing datasets and zero-shot generalization to new tasks. Our experiments highlight that multi-task training and instruction tuning are keys to UniIR's generalization ability. Additionally, we construct the M-BEIR, a multimodal retrieval benchmark with comprehensive results, to standardize the evaluation of universal multimodal information retrieval.
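To make the idea of instruction-guided retrieval concrete, here is a minimal sketch of the pipeline the abstract describes: a task instruction is combined with the query, query and candidates are embedded, modality embeddings are fused, and candidates are ranked by cosine similarity. The encoders, the fusion weight, and all function names below are toy stand-ins for illustration only; the actual UniIR models are built on CLIP and BLIP backbones, which this sketch does not reproduce.

```python
import numpy as np

DIM = 16  # toy embedding dimension

def embed_text(text: str) -> np.ndarray:
    """Toy deterministic text encoder (character histogram).
    Stand-in for a real text tower such as CLIP's."""
    v = np.zeros(DIM)
    for i, ch in enumerate(text.lower()):
        v[(i + ord(ch)) % DIM] += 1.0
    n = np.linalg.norm(v)
    return v / n if n else v

def embed_image(pixels) -> np.ndarray:
    """Toy image encoder: flatten and tile pixel values to DIM.
    Stand-in for a real vision tower such as CLIP's."""
    flat = np.asarray(pixels, dtype=float).ravel()
    v = np.resize(flat, DIM)
    n = np.linalg.norm(v)
    return v / n if n else v

def fuse(text_emb: np.ndarray, image_emb: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Fuse modality embeddings into one query vector (weighted sum,
    re-normalized) so one retriever can serve mixed-modality queries."""
    v = w * text_emb + (1.0 - w) * image_emb
    return v / np.linalg.norm(v)

def retrieve(query_emb: np.ndarray, pool_embs: np.ndarray, k: int = 3) -> np.ndarray:
    """Rank a candidate pool by cosine similarity (embeddings are unit
    norm, so a dot product suffices) and return the top-k indices."""
    scores = pool_embs @ query_emb
    return np.argsort(-scores)[:k]

# The instruction is prepended to the query text, so the same retriever
# can be steered toward different tasks (text-to-image, composed
# retrieval, etc.) without changing the model.
instruction = "Retrieve an image that matches the caption."
query = fuse(embed_text(instruction + " red car"),
             embed_image([[0.1, 0.5], [0.3, 0.2]]))
```

In the paper's terms, this corresponds to score-level fusion of a dual-encoder model; the key point carried over from the abstract is that the instruction is part of the query input, not a separate model per task.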

Cong Wei, Yang Chen, Haonan Chen, Hexiang Hu, Ge Zhang, Jie Fu, Alan Ritter, Wenhu Chen • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image-to-Text Retrieval | MS-COCO | -- | 124 |
| Text-to-Image Retrieval | MS-COCO | -- | 118 |
| Text-to-Image Retrieval | MS-COCO | R@5 = 81.1 | 79 |
| Composed Image Retrieval (Image-Text to Image) | CIRR | -- | 75 |
| Image-to-Text Retrieval | MS-COCO | R@5 = 92.3 | 65 |
| Composed Image Retrieval | CIRCO | mAP@5 = 12.5 | 63 |
| Multimodal Retrieval | M-BEIR (test) | Average Recall = 50.6 | 36 |
| Text-to-Image Retrieval | Flickr | R@1 = 84.1 | 35 |
| Text-to-Image Retrieval | ShareGPT4V | R@1 = 85.8 | 35 |
| Image-to-Text Retrieval | Urban-1K | R@1 = 78.4 | 34 |

Showing 10 of 56 rows.
