
What value do explicit high level concepts have in vision to language problems?

About

Much of the recent progress in Vision-to-Language (V2L) problems has been achieved through a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs). This approach does not explicitly represent high-level semantic concepts, but rather seeks to progress directly from image features to text. We propose here a method of incorporating high-level concepts into the very successful CNN-RNN approach, and show that it significantly improves on the state of the art in both image captioning and visual question answering. We also show that the same mechanism can be used to introduce external semantic information, and that doing so further improves performance. In doing so, we provide an analysis of the value of high-level semantic information in V2L problems.
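The core idea of the abstract can be sketched in code: rather than feeding raw CNN features directly into the RNN caption decoder, an intermediate layer first predicts probabilities for a vocabulary of high-level semantic concepts (attributes), and that probability vector conditions the decoder. The sketch below is a minimal illustration of this pipeline, not the paper's implementation; all dimensions, weight initialisations, and the stubbed CNN features are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not taken from the paper).
n_attributes = 256   # vocabulary of high-level semantic concepts
hidden_dim = 128
vocab_size = 1000

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Step 1: a CNN (stubbed here as random features) describes the image;
# a linear layer plus sigmoid predicts per-attribute probabilities,
# e.g. P("dog" in image), P("grass" in image), ...
cnn_features = rng.standard_normal(2048)
W_att = rng.standard_normal((n_attributes, 2048)) * 0.01
attribute_probs = sigmoid(W_att @ cnn_features)   # shape (n_attributes,)

# Step 2: the attribute probability vector, rather than the raw CNN
# features, initialises the RNN decoder's hidden state.
W_ih = rng.standard_normal((hidden_dim, n_attributes)) * 0.01
h0 = np.tanh(W_ih @ attribute_probs)

# One decoding step: hidden state -> softmax distribution over words.
W_out = rng.standard_normal((vocab_size, hidden_dim)) * 0.01
logits = W_out @ h0
word_probs = np.exp(logits - logits.max())
word_probs /= word_probs.sum()
```

Because the intermediate representation is an explicit list of concept probabilities, external semantic information (e.g. from a knowledge base) can be injected at the same interface, which is how the abstract's second claim fits into the same mechanism.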

Qi Wu, Chunhua Shen, Lingqiao Liu, Anthony Dick, Anton van den Hengel · 2015

Related benchmarks

Task                                  Dataset             Metric            Result  Rank
Image Retrieval                       MS-COCO 5K (test)   R@1               39      217
Text Retrieval                        MS-COCO 5K (test)   R@1               50.1    182
Image Retrieval                       MS-COCO 1K (test)   R@1               58.5    128
Open-Ended Visual Question Answering  VQA 1.0 (test-dev)  Overall Accuracy  59.17   100
Image Captioning                      MS-COCO             CIDEr             0.94    61
Text Retrieval                        MS-COCO 1K (test)   R@1               72.3    53
Visual Question Answering             COCO-QA (test)      --                --      51
Caption Generation                    COCO 2014 (test)    BLEU-1 (c5)       72.5    7
