Learning to Answer Questions From Image Using Convolutional Neural Network
About
In this paper, we propose to employ the convolutional neural network (CNN) for image question answering (QA). Our proposed CNN provides an end-to-end framework with convolutional architectures for learning not only the image and question representations, but also their inter-modal interactions, to produce the answer. More specifically, our model consists of three CNNs: one image CNN to encode the image content, one sentence CNN to compose the words of the question, and one multimodal convolution layer to learn their joint representation for classification in the space of candidate answer words. We demonstrate the efficacy of our proposed model on DAQUAR and COCO-QA, two benchmark datasets for image QA, with performance significantly outperforming the state of the art.
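The three-component pipeline described above (sentence CNN over word embeddings, a given image representation, and a joint multimodal layer feeding an answer classifier) can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the filter sizes, the use of plain concatenation plus a projection in place of the paper's multimodal convolution, and all dimensions are assumptions chosen for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sentence_cnn(word_vecs, filters, width=3):
    # 1-D convolution over sliding word windows followed by max pooling,
    # a simplified stand-in for the paper's sentence CNN
    n, d = word_vecs.shape
    windows = np.stack([word_vecs[i:i + width].ravel()
                        for i in range(n - width + 1)])
    feats = relu(windows @ filters)   # (n - width + 1, num_filters)
    return feats.max(axis=0)          # max-pool over window positions

def multimodal_layer(sent_vec, img_vec, W):
    # joint representation of the two modalities (assumption: simple
    # concatenation + linear projection, approximating the paper's
    # multimodal convolution layer)
    joint = np.concatenate([sent_vec, img_vec])
    return relu(W @ joint)

# Toy dimensions (assumptions): 8-d word embeddings, 16 sentence filters,
# 10-d image feature, 5 candidate answer words
d, k, img_dim, n_answers = 8, 16, 10, 5
words = rng.normal(size=(7, d))       # embeddings of a 7-word question
img = rng.normal(size=img_dim)        # image CNN output, assumed precomputed
F = rng.normal(size=(3 * d, k))       # sentence-CNN filters (width 3)
W = rng.normal(size=(32, k + img_dim))
Wc = rng.normal(size=(n_answers, 32)) # answer classifier weights

s = sentence_cnn(words, F)
joint = multimodal_layer(s, img, W)
logits = Wc @ joint
answer = int(np.argmax(logits))       # index of the predicted answer word
```

In the paper all three components are trained end-to-end; the random weights here only demonstrate how the representations flow from question and image to the answer-word scores.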
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | COCO-QA (test) | WUPS (IoU=0.9) | 68.5 | 51 |
| Image Question Answering | DAQUAR reduced (test) | Accuracy | 39.7 | 33 |
| Visual Question Answering | DAQUAR-ALL full (test) | Accuracy | 23.4 | 22 |
| Visual Question Answering | COCO-QA | -- | -- | 7 |
| Visual Question Answering | DAQUAR reduced (single answer) | Accuracy | 39.66 | 6 |
| Visual Question Answering | DAQUAR all (multiple answers) | Accuracy | 20.69 | 5 |
| Visual Question Answering | DAQUAR reduced (multiple answers) | Accuracy | 38.72 | 4 |
| Visual Question Answering | DAQUAR all (single answer) | Accuracy | 23.4 | 3 |