Look, Imagine and Match: Improving Textual-Visual Cross-Modal Retrieval with Generative Models

About

Textual-visual cross-modal retrieval has been a hot research topic in both the computer vision and natural language processing communities. Learning appropriate representations for multi-modal data is crucial for cross-modal retrieval performance. Unlike existing image-text retrieval approaches that embed image-text pairs as single feature vectors in a common representational space, we propose to incorporate generative processes into the cross-modal feature embedding, through which we are able to learn not only global abstract features but also local grounded features. Extensive experiments show that our framework can well match images and sentences with complex content, and achieves state-of-the-art cross-modal retrieval results on the MSCOCO dataset.

Jiuxiang Gu, Jianfei Cai, Shafiq Joty, Li Niu, Gang Wang• 2017
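The shared embedding space described in the abstract is typically trained with a bidirectional ranking objective: matched image-text pairs should score higher than mismatched ones in both retrieval directions. The sketch below is a minimal numpy version of that standard hinge-based ranking loss; it is not the paper's full objective, which additionally couples the embedding with generative (image- and caption-generation) processes that are omitted here. The function names and the margin value are illustrative assumptions.

```python
import numpy as np

def l2norm(x):
    """Normalize each row to unit length so dot products act as cosine similarity."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def bidirectional_ranking_loss(img_emb, txt_emb, margin=0.2):
    """Hinge ranking loss over a batch of aligned image/text embeddings.

    Row i of img_emb and row i of txt_emb are assumed to be a matched pair;
    every other row is treated as a negative for both retrieval directions.
    """
    img, txt = l2norm(img_emb), l2norm(txt_emb)
    sim = img @ txt.T                  # sim[i, j]: image i vs. caption j
    pos = np.diag(sim)                 # scores of the matched pairs
    # Penalize wrong captions ranked above the true caption for each image...
    cost_txt = np.maximum(0.0, margin + sim - pos[:, None])
    # ...and wrong images ranked above the true image for each caption.
    cost_img = np.maximum(0.0, margin + sim - pos[None, :])
    diag = np.eye(sim.shape[0], dtype=bool)
    cost_txt[diag] = 0.0
    cost_img[diag] = 0.0
    return cost_txt.sum() + cost_img.sum()
```

With perfectly aligned embeddings the loss is zero; shuffling the caption rows against the image rows makes it positive, which is what drives the two modalities into a common space during training.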

Related benchmarks

| Task                    | Dataset                    | R@1  | Rank |
|-------------------------|----------------------------|------|------|
| Text-to-Image Retrieval | Flickr30k (test)           | 41.5 | 445  |
| Image-to-Text Retrieval | Flickr30k (test)           | 56.8 | 392  |
| Image-to-Text Retrieval | MS-COCO 5K (test)          | 42   | 320  |
| Text-to-Image Retrieval | MSCOCO 5K (test)           | 31.7 | 308  |
| Text-to-Image Retrieval | MS-COCO                    | 56.6 | 151  |
| Image Retrieval         | MS-COCO 1K (test)          | 56.6 | 128  |
| Text-to-Image Retrieval | MSCOCO (1K test)           | 56.6 | 118  |
| Image-to-Text Retrieval | MSCOCO (1K test)           | 68.5 | 96   |
| Caption Retrieval       | MS COCO Karpathy 1k (test) | 68.5 | 62   |
| Caption Retrieval       | MS COCO Karpathy 5k (test) | 42   | 26   |

Showing 10 of 11 rows
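All results above are Recall@1 (R@1): the percentage of queries for which the correct match is the single top-ranked candidate. Given a query-by-candidate similarity matrix where the ground-truth match for query i is candidate i, R@K can be computed as in this small sketch (the function name is illustrative; simplified to one ground-truth candidate per query, whereas MSCOCO pairs each image with five captions):

```python
import numpy as np

def recall_at_k(sim, k=1):
    """Recall@K in percent. sim[i, j] scores query i against candidate j;
    the correct candidate for query i is assumed to be j == i."""
    # For each query, count candidates scored strictly above the true match.
    ranks = (sim > np.diag(sim)[:, None]).sum(axis=1)
    return 100.0 * np.mean(ranks < k)

sim = np.array([[0.9, 0.1],
                [0.2, 0.8]])
print(recall_at_k(sim, k=1))  # both true matches ranked first -> 100.0
```

The "Rank" column is the paper's position on each leaderboard as listed by this site, not a retrieval metric.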
