
Deep Visual-Semantic Alignments for Generating Image Descriptions

About

We present a model that generates natural language descriptions of images and their regions. Our approach leverages datasets of images and their sentence descriptions to learn about the inter-modal correspondences between language and visual data. Our alignment model is based on a novel combination of Convolutional Neural Networks over image regions, bidirectional Recurrent Neural Networks over sentences, and a structured objective that aligns the two modalities through a multimodal embedding. We then describe a Multimodal Recurrent Neural Network architecture that uses the inferred alignments to learn to generate novel descriptions of image regions. We demonstrate that our alignment model produces state-of-the-art results in retrieval experiments on the Flickr8K, Flickr30K and MSCOCO datasets. We then show that the generated descriptions significantly outperform retrieval baselines both on full images and on a new dataset of region-level annotations.
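The core of the structured objective is scoring how well a sentence matches an image by letting each word embedding pick its best-matching region embedding in the shared multimodal space. Below is a minimal NumPy sketch of such an alignment score; the function name `alignment_score` and the toy random embeddings are illustrative (in the paper, region vectors come from a CNN over image regions and word vectors from a bidirectional RNN over the sentence).

```python
import numpy as np

def alignment_score(region_vecs, word_vecs):
    """Image-sentence alignment score: each word embedding is matched
    to its highest-scoring image region (dot product in the shared
    embedding space), and the per-word matches are summed.
    region_vecs: (n_regions, d), word_vecs: (n_words, d)."""
    sims = word_vecs @ region_vecs.T      # (n_words, n_regions) dot products
    return float(sims.max(axis=1).sum())  # best region per word, summed

# Toy example with random embeddings standing in for CNN/RNN outputs.
rng = np.random.default_rng(0)
regions = rng.normal(size=(19, 4))  # e.g. 19 region embeddings of dim 4
words = rng.normal(size=(7, 4))     # 7 word embeddings of dim 4
score = alignment_score(regions, words)
```

A higher score for the correct image-sentence pair than for mismatched pairs is what the structured objective encourages during training.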

Andrej Karpathy, Li Fei-Fei • 2014

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Captioning | MS COCO Karpathy (test) | CIDEr | 0.66 | 682 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 22.2 | 491 |
| Text-to-Image Retrieval | Flickr30k (test) | R@1 | 15.2 | 445 |
| Image-to-Text Retrieval | Flickr30k (test) | R@1 | 22.2 | 392 |
| Image-to-Text Retrieval | MS-COCO 5K (test) | R@1 | 16.5 | 320 |
| Text-to-Image Retrieval | MSCOCO 5K (test) | R@1 | 16.5 | 308 |
| Image Retrieval | Flickr30k (test) | R@1 | 15.2 | 210 |
| Image Retrieval | Flickr30K | R@1 | 15.2 | 144 |
| Image Retrieval | MS-COCO 1K (test) | R@1 | 27.4 | 128 |
| Image Captioning | MS-COCO (test) | CIDEr | 69 | 120 |

Showing 10 of 48 rows
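Most rows above report Recall@K (R@1): the fraction of queries whose ground-truth item appears among the top K retrieved results. A generic sketch of the metric, assuming a similarity matrix whose ground-truth match for query `i` sits at column `i` (the function name `recall_at_k` and this diagonal convention are illustrative, not from the paper):

```python
import numpy as np

def recall_at_k(sim, k=1):
    """Recall@K for cross-modal retrieval.
    sim[i, j]: similarity of query i to candidate j; the ground-truth
    candidate for query i is assumed to be at index i."""
    order = np.argsort(-sim, axis=1)  # candidates sorted best-first per query
    # rank of the ground-truth candidate for each query (0 = ranked first)
    ranks = np.array([int(np.where(order[i] == i)[0][0])
                      for i in range(sim.shape[0])])
    return float(np.mean(ranks < k))

sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.1],
                [0.3, 0.6, 0.5]])
# Queries 0 and 1 rank their ground truth first; query 2 ranks it second,
# so Recall@1 = 2/3 and Recall@2 = 1.0.
```

Reported values such as "R@1 22.2" are this fraction expressed as a percentage.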

Other info

Code
