
From Captions to Visual Concepts and Back

About

This paper presents a novel approach for automatically generating image descriptions: visual detectors, language models, and multimodal similarity models learnt directly from a dataset of image captions. We use multiple instance learning to train visual detectors for words that commonly occur in captions, including many different parts of speech such as nouns, verbs, and adjectives. The word detector outputs serve as conditional inputs to a maximum-entropy language model. The language model learns from a set of over 400,000 image descriptions to capture the statistics of word usage. We capture global semantics by re-ranking caption candidates using sentence-level features and a deep multimodal similarity model. Our system is state-of-the-art on the official Microsoft COCO benchmark, producing a BLEU-4 score of 29.1%. When human judges compare the system captions to ones written by other people on our held-out test set, the system captions have equal or better quality 34% of the time.
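
As a rough, hypothetical sketch of the pipeline the abstract describes (not the authors' released implementation), the Python below illustrates two of its stages: pooling per-region word probabilities into image-level word detections with the noisy-OR rule commonly used in multiple instance learning, and re-ranking candidate captions by a multimodal similarity score as a toy stand-in for the paper's deep multimodal similarity model (DMSM). All function names, the random embeddings, and the 0.5 detection threshold are illustrative assumptions; the maximum-entropy language model that generates the candidate captions is omitted.

```python
import numpy as np

def noisy_or_word_scores(region_probs):
    """Noisy-OR pooling from multiple instance learning:
    p(word | image) = 1 - prod_j (1 - p(word | region_j)),
    where region_probs has shape (num_regions, vocab_size)."""
    return 1.0 - np.prod(1.0 - region_probs, axis=0)

def cosine(u, v):
    """Cosine similarity with a small epsilon for numerical safety."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def rerank(candidates, image_vec, embed_sentence):
    """Order candidate captions by similarity to the image embedding,
    a toy stand-in for the deep multimodal similarity model (DMSM)."""
    return sorted(candidates, key=lambda c: -cosine(embed_sentence(c), image_vec))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    vocab = ["dog", "running", "grassy", "cat"]
    # 5 region proposals x 4 vocabulary words (random stand-in scores)
    region_probs = rng.uniform(size=(5, len(vocab)))
    scores = noisy_or_word_scores(region_probs)
    detected = [w for w, s in zip(vocab, scores) if s > 0.5]  # illustrative threshold
    print("detected words:", detected)

    # Candidate captions would come from a language model conditioned on
    # the detected words; here they are hard-coded for the demo.
    image_vec = rng.normal(size=16)
    def embed_sentence(text):  # hypothetical text embedding, not the DMSM
        seed = abs(hash(text)) % (2**32)
        return np.random.default_rng(seed).normal(size=16)
    candidates = ["a dog running through grass", "a cat sitting on a sofa"]
    print("ranked captions:", rerank(candidates, image_vec, embed_sentence))
```

In the actual system, candidate sentences come from search over the maximum-entropy language model conditioned on the detected words; only the pooling and re-ranking steps are sketched here.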

Hao Fang, Saurabh Gupta, Forrest Iandola, Rupesh Srivastava, Li Deng, Piotr Dollár, Jianfeng Gao, Xiaodong He, Margaret Mitchell, John C. Platt, C. Lawrence Zitnick, Geoffrey Zweig • 2014

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Captioning | MS COCO Karpathy (test) | -- | 682 |
| Image Captioning | MS-COCO (test) | CIDEr: 93 | 117 |
| Image Captioning | COCO 2014 (test) | CIDEr: 0.925 | 44 |
| Phrase Localization | VisualGenome (VG) (test) | Pointing Accuracy: 14.03 | 29 |
| Relationship Phrase Detection | VRD | Recall@50: 1.47 | 20 |
| Phrase Grounding | Flickr30K | -- | 20 |
| Phrase Grounding | ReferIt (test) | Pointing Accuracy: 33.52 | 18 |
| Visual Grounding | ReferIt | Pointing Game Accuracy: 33.52 | 16 |
| Image Captioning | MS COCO 40,775 images (test) | CIDEr: 0.925 | 15 |
| Weakly Supervised Grounding | Visual Genome (VG) (test) | Accuracy (Pointing Game): 14.03 | 15 |

Showing 10 of 23 rows.
