
Learning a Recurrent Visual Representation for Image Caption Generation

About

In this paper we explore the bi-directional mapping between images and their sentence-based descriptions. We propose learning this mapping using a recurrent neural network. Unlike previous approaches that map both sentences and images to a common embedding, we enable the generation of novel sentences given an image. Using the same model, we can also reconstruct the visual features associated with an image given its visual description. We use a novel recurrent visual memory that automatically learns to remember long-term visual concepts to aid in both sentence generation and visual feature reconstruction. We evaluate our approach on several tasks. These include sentence generation, sentence retrieval and image retrieval. State-of-the-art results are shown for the task of generating novel image descriptions. When compared to human generated captions, our automatically generated captions are preferred by humans over 19.8% of the time. Results are better than or comparable to state-of-the-art results on the image and sentence retrieval tasks for methods using similar visual features.
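To make the idea concrete, here is a minimal numpy sketch of a recurrent model that jointly maintains a hidden state and a "visual memory" vector: at each step it consumes a word, updates both states, predicts the next word, and exposes the visual memory as a reconstructed visual feature. This is an illustrative toy, not the paper's architecture; all dimensions and weight names (`Wxh`, `Whv`, etc.) are assumptions, and the weights are random rather than learned.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, EMBED, HIDDEN, VISUAL = 10, 8, 16, 12

# Randomly initialized parameters (illustrative only; a real model would
# learn these jointly via backpropagation through time).
Wxh = rng.normal(0, 0.1, (EMBED, HIDDEN))   # word embedding -> hidden
Whh = rng.normal(0, 0.1, (HIDDEN, HIDDEN))  # hidden-to-hidden recurrence
Wvh = rng.normal(0, 0.1, (VISUAL, HIDDEN))  # visual memory -> hidden
Whv = rng.normal(0, 0.1, (HIDDEN, VISUAL))  # hidden -> visual memory update
Why = rng.normal(0, 0.1, (HIDDEN, VOCAB))   # hidden -> next-word logits
embed = rng.normal(0, 0.1, (VOCAB, EMBED))  # word embedding table

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def step(word_id, h, v):
    """One time step: update hidden state and visual memory, predict next word."""
    x = embed[word_id]
    h = np.tanh(x @ Wxh + h @ Whh + v @ Wvh)
    v = np.tanh(h @ Whv)   # updated/reconstructed visual memory
    p = softmax(h @ Why)   # distribution over the next word
    return h, v, p

def generate(v0, max_len=5, start=0):
    """Greedy caption generation seeded by an initial visual feature v0."""
    h = np.zeros(HIDDEN)
    v = v0
    words = [start]
    for _ in range(max_len):
        h, v, p = step(words[-1], h, v)
        words.append(int(p.argmax()))
    return words, v

caption, v_final = generate(rng.normal(0, 1, VISUAL))
print(caption)        # token ids of the greedily generated caption
print(v_final.shape)  # shape of the final visual memory vector
```

The same recurrence can be driven in the other direction: feed in a ground-truth sentence word by word and read out `v` at the end as the sentence's reconstructed visual feature, which is the bi-directional use the abstract describes.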

Xinlei Chen, C. Lawrence Zitnick · 2014

Related benchmarks

| Task               | Dataset                           | Result          | Rank |
|--------------------|-----------------------------------|-----------------|------|
| Image Retrieval    | Flickr30K                         | R@1: 1.28e+3    | 144  |
| Image Search       | Flickr8K                          | R@1: 1.17e+3    | 74   |
| Sentence Retrieval | Flickr30K                         | R@1: 1.21e+3    | 32   |
| Image Captioning   | MSCOCO 1,000 images 2014 (test)   | BLEU-4: 19      | 5    |
| Image Captioning   | Flickr30K 1,000 images (test)     | --              | 4    |
| Image Captioning   | Flickr8K 1,000 images (test)      | --              | 3    |
