
Neural Self Talk: Image Understanding via Continuous Questioning and Answering

About

In this paper we consider the problem of continuously discovering image contents by actively asking image-based questions and subsequently answering them. The key components are a Visual Question Generation (VQG) module and a Visual Question Answering (VQA) module, both built from Recurrent Neural Networks (RNNs) and Convolutional Neural Networks (CNNs). Given a dataset containing images, questions, and their answers, the two modules are trained simultaneously; the difference is that VQG takes images as input and produces the corresponding questions as output, while VQA takes both images and questions as input and produces the corresponding answers as output. We evaluate the self-talk process subjectively using Amazon Mechanical Turk, and the results show the effectiveness of the proposed method.
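The alternating VQG/VQA process described above can be sketched as a simple loop. The real modules are CNN+RNN networks trained on image–question–answer triples; here they are stand-in functions so the control flow is runnable, and all names (`vqg`, `vqa`, `self_talk`) are illustrative, not from the paper's code.

```python
# Minimal sketch of the "neural self talk" loop: VQG proposes a question
# about the image, VQA answers it, and the pair is appended to a dialog.
# The stand-in vqg/vqa functions below replace the trained CNN+RNN models.

def vqg(image, history):
    """Stand-in Visual Question Generation: image (+ history) -> question."""
    templates = ["what is in the image?",
                 "what color is the object?",
                 "where is the scene?"]
    # Cycle through template questions; a trained VQG would decode a
    # question from CNN image features with an RNN language model.
    return templates[len(history) % len(templates)]

def vqa(image, question):
    """Stand-in Visual Question Answering: (image, question) -> answer."""
    return f"answer to '{question}' for {image}"

def self_talk(image, num_rounds=3):
    """Alternate question generation and answering to describe the image."""
    history = []
    for _ in range(num_rounds):
        q = vqg(image, history)   # VQG: image -> question
        a = vqa(image, q)         # VQA: (image, question) -> answer
        history.append((q, a))
    return history

dialog = self_talk("img_001.jpg")
for q, a in dialog:
    print("Q:", q, "| A:", a)
```

The loop makes the paper's key design point concrete: both modules condition on the same image, but VQG maps image to question while VQA maps image plus question to answer, so they can be trained jointly on the same dataset.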

Yezhou Yang, Yi Li, Cornelia Fermuller, Yiannis Aloimonos · 2015

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Question Generation | MSCOCO-VQA (test) | METEOR | 0.178 | 12 |
| Visual Question Generation | VQA 1.0 | BLEU-1 | 59.4 | 8 |
| Question Generation | DAQUAR (test) | CIDEr | 0.512 | 2 |
