
Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models

About

The Flickr30k dataset has become a standard benchmark for sentence-based image description. This paper presents Flickr30k Entities, which augments the 158k captions from Flickr30k with 244k coreference chains, linking mentions of the same entities across different captions for the same image, and associating them with 276k manually annotated bounding boxes. Such annotations are essential for continued progress in automatic image description and grounded language understanding. They enable us to define a new benchmark for localization of textual entity mentions in an image. We present a strong baseline for this task that combines an image-text embedding, detectors for common objects, a color classifier, and a bias towards selecting larger objects. While our baseline rivals in accuracy more complex state-of-the-art models, we show that its gains cannot be easily parlayed into improvements on such tasks as image-sentence retrieval, thus underlining the limitations of current methods and the need for further research.

Bryan A. Plummer, Liwei Wang, Chris M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, Svetlana Lazebnik • 2015
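The baseline described in the abstract ranks candidate boxes for a phrase by combining several cues: an image-text embedding score, object-detector scores, a color classifier, and a bias toward larger boxes. A minimal sketch of that score-combination idea (not the authors' code; the cue functions and weights below are illustrative assumptions) might look like:

```python
def size_prior(box):
    # Bias toward larger objects: area of an (x1, y1, x2, y2) box.
    x1, y1, x2, y2 = box
    return (x2 - x1) * (y2 - y1)

def localize(phrase, candidates, embed_sim, det_score, color_score,
             w=(1.0, 0.5, 0.5, 1e-5)):
    """Return the highest-scoring candidate box for a phrase.

    embed_sim / det_score / color_score are callables (phrase, box) -> float
    standing in for the image-text embedding, common-object detectors, and
    color classifier mentioned in the abstract; w holds hypothetical weights.
    """
    def score(box):
        return (w[0] * embed_sim(phrase, box)
                + w[1] * det_score(phrase, box)
                + w[2] * color_score(phrase, box)
                + w[3] * size_prior(box))
    return max(candidates, key=score)

# Toy usage with dummy cues: the embedding strongly prefers the first box,
# so it wins despite the size prior favoring the second.
boxes = [(0, 0, 10, 10), (0, 0, 100, 100)]
best = localize("a red car", boxes,
                embed_sim=lambda p, b: 1.0 if b == boxes[0] else 0.0,
                det_score=lambda p, b: 0.0,
                color_score=lambda p, b: 0.0)
```

The linear combination keeps each cue independently replaceable, which is why such baselines are easy to ablate cue by cue.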

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image-to-Text Retrieval | Flickr30k (test) | R@1 | 36.5 | 370 |
| Image Retrieval | Flickr30k (test) | R@1 | 26 | 195 |
| Image Retrieval | Flickr30K | R@1 | 2.60e+3 | 144 |
| Image Annotation | Flickr30k (test) | R@1 | 37.4 | 39 |
| Phrase Localization | Flickr30K Entities (test) | Accuracy | 27.42 | 35 |
| Sentence Retrieval | Flickr30K | R@1 | 3.74e+3 | 32 |
| Visual Grounding | Flickr30K Entities (test) | Accuracy | 50.89 | 29 |
| Phrase Grounding | Flickr30K | Accuracy | 55.49 | 20 |
| Natural Language Object Retrieval | Flickr30K Entities (test) | R@1 | 25.3 | 3 |
