
Deep Fragment Embeddings for Bidirectional Image Sentence Mapping

About

We introduce a model for bidirectional retrieval of images and sentences through a multi-modal embedding of visual and natural language data. Unlike previous models that directly map images or sentences into a common embedding space, our model works on a finer level and embeds fragments of images (objects) and fragments of sentences (typed dependency tree relations) into a common space. In addition to a ranking objective seen in previous work, this allows us to add a new fragment alignment objective that learns to directly associate these fragments across modalities. Extensive experimental evaluation shows that reasoning on both the global level of images and sentences and the finer level of their respective fragments significantly improves performance on image-sentence retrieval tasks. Additionally, our model provides interpretable predictions since the inferred inter-modal fragment alignment is explicit.
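To make the idea concrete, here is a minimal NumPy sketch of the fragment-level scoring described above: image fragments (object CNN codes) and sentence fragments (dependency-relation vectors) are projected into a shared space, and an image-sentence score is accumulated from fragment-pair similarities. All dimensions, weight matrices, and the max-pooled aggregation are illustrative assumptions, not the paper's exact formulation, which learns a latent fragment alignment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: raw fragment features -> shared embedding space.
D_IMG, D_SENT, D_EMBED = 4096, 600, 1000  # illustrative sizes only

# Stand-ins for learned projection matrices (random here, trained in practice).
W_img = rng.standard_normal((D_EMBED, D_IMG)) * 0.01
W_sent = rng.standard_normal((D_EMBED, D_SENT)) * 0.01

def embed_image_fragments(cnn_codes):
    """Map object-detection CNN codes (n_frag x D_IMG) into the shared space."""
    return cnn_codes @ W_img.T

def embed_sentence_fragments(rel_vecs):
    """Map dependency-relation vectors (m_frag x D_SENT) into the shared space."""
    return rel_vecs @ W_sent.T

def image_sentence_score(img_frags, sent_frags):
    """Global image-sentence score: for each sentence fragment, take its
    best-matching image fragment and sum (a simplified stand-in for the
    paper's learned latent alignment)."""
    sims = embed_sentence_fragments(sent_frags) @ embed_image_fragments(img_frags).T
    return float(sims.max(axis=1).sum())

# Toy example: 5 detected objects, 3 dependency relations.
objects = rng.standard_normal((5, D_IMG))
relations = rng.standard_normal((3, D_SENT))
print(image_sentence_score(objects, relations))
```

A ranking objective would then push this score higher for matching image-sentence pairs than for mismatched ones, while the fragment alignment objective operates directly on the entries of `sims`.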

Andrej Karpathy, Armand Joulin, Li Fei-Fei • 2014

Related benchmarks

| Task                    | Dataset          | Metric | Result   | Rank |
|-------------------------|------------------|--------|----------|------|
| Text-to-Image Retrieval | Flickr30k (test) | R@1    | 12.9     | 423  |
| Image-to-Text Retrieval | Flickr30k (test) | R@1    | 19.2     | 370  |
| Image Retrieval         | Flickr30k (test) | R@1    | 10.3     | 195  |
| Image Retrieval         | Flickr30K        | R@1    | 1.02e+3  | 144  |
| Image Search            | Flickr8K         | R@1    | 1.26e+3  | 74   |
| Image Annotation        | Flickr30k (test) | R@1    | 16       | 39   |
| Caption Retrieval       | Flickr30k (test) | R@1    | 16.4     | 36   |
| Sentence Retrieval      | Flickr30K        | R@1    | 1.42e+3  | 32   |
| Image Annotation        | Flickr8K         | R@1    | 13.8     | 18   |
| Image Search            | Flickr8k (test)  | R@1    | 10       | 11   |

(Showing 10 of 12 rows.)
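The table above reports Recall@1 (R@1): the percentage of queries whose ground-truth match is ranked first among all candidates. As a reference, here is a small self-contained sketch of how that metric is computed from a query-by-candidate score matrix; the toy scores and the diagonal ground-truth convention are assumptions for illustration.

```python
import numpy as np

def recall_at_k(score_matrix, k=1):
    """Recall@K for retrieval: fraction of queries (rows) whose ground-truth
    item (assumed to be the matching column index) ranks in the top K by score.
    Returned as a percentage."""
    ranks = (-score_matrix).argsort(axis=1)  # columns sorted by descending score
    hits = (ranks[:, :k] == np.arange(len(score_matrix))[:, None]).any(axis=1)
    return 100.0 * hits.mean()

# Toy 4x4 score matrix; ground truth is the diagonal.
scores = np.array([
    [0.9, 0.1, 0.2, 0.3],   # correct item ranked 1st
    [0.4, 0.8, 0.1, 0.0],   # correct item ranked 1st
    [0.7, 0.2, 0.3, 0.1],   # correct item ranked 2nd
    [0.1, 0.2, 0.3, 0.9],   # correct item ranked 1st
])
print(recall_at_k(scores, k=1))  # → 75.0
```

Recall@5 and Recall@10 (common companions to R@1 on Flickr8K/30K) follow by passing a larger `k`.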
