
Learning Visual N-Grams from Web Data

About

Real-world image recognition systems need to recognize tens of thousands of classes that constitute a plethora of visual concepts. The traditional approach of annotating thousands of images per class for training is infeasible in such a scenario, prompting the use of webly supervised data. This paper explores the training of image-recognition systems on large numbers of images and associated user comments. In particular, we develop visual n-gram models that can predict arbitrary phrases that are relevant to the content of an image. Our visual n-gram models are feed-forward convolutional networks trained using new loss functions that are inspired by n-gram models commonly used in language modeling. We demonstrate the merits of our models in phrase prediction, phrase-based image retrieval, relating images and captions, and zero-shot transfer.
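The core idea of the loss can be sketched as follows. This is a minimal illustrative version, not the paper's implementation: it assumes a small in-memory n-gram dictionary with one embedding per n-gram, scores each n-gram by its dot product with the CNN image feature, and sums the negative log-softmax over the n-grams present in a comment (the actual models use a very large dictionary, smoothing, and approximations to this softmax).

```python
import numpy as np

def ngrams(tokens, max_n):
    """All n-grams (as tuples) of length 1..max_n in a token list."""
    return [tuple(tokens[i:i + n])
            for n in range(1, max_n + 1)
            for i in range(len(tokens) - n + 1)]

def naive_ngram_loss(image_feat, caption_tokens, embeddings, max_n=3):
    """Negative log-likelihood of a caption's n-grams given an image.

    `image_feat` stands in for a CNN feature vector; `embeddings` is a
    hypothetical dict mapping n-gram tuples to vectors of the same
    dimensionality. An n-gram's probability is a softmax over dot-product
    scores against the whole dictionary.
    """
    vocab = list(embeddings.keys())
    scores = np.array([image_feat @ embeddings[g] for g in vocab])
    # log-softmax over the dictionary
    log_probs = scores - np.log(np.sum(np.exp(scores)))
    index = {g: i for i, g in enumerate(vocab)}
    loss = 0.0
    for g in ngrams(caption_tokens, max_n):
        if g in index:  # out-of-dictionary n-grams are simply skipped here
            loss -= log_probs[index[g]]
    return loss
```

Minimizing this loss pushes up the scores of n-grams that actually occur in an image's comments, which is what lets the trained network assign probabilities to arbitrary phrases at test time.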

Ang Li, Allan Jabri, Armand Joulin, Laurens van der Maaten · 2016

Related benchmarks

Task                  | Dataset             | Metric         | Result | Rank
----------------------|---------------------|----------------|--------|-----
Image Classification  | ImageNet-1k (val)   | -              | -      | 1453
Image Classification  | ImageNet 1k (test)  | Top-1 Accuracy | 11.5   | 798
Image Classification  | ImageNet            | -              | -      | 429
Image Retrieval       | MS-COCO 5K (test)   | R@1            | 5      | 217
Image Retrieval       | Flickr30k (test)    | R@1            | 8.8    | 195
Text Retrieval        | MS-COCO 5K (test)   | R@1            | 8.7    | 182
Text Retrieval        | Flickr30k (test)    | R@1            | 15.4   | 89
Image Classification  | SUN                 | Accuracy       | 23     | 27
Image Classification  | aYahoo              | Accuracy       | 72.4   | 2
