
Speech-Based Visual Question Answering

About

This paper introduces speech-based visual question answering (VQA), the task of generating an answer given an image and a spoken question. Two methods are studied: an end-to-end deep neural network that takes the audio waveform directly as input, versus a pipelined approach that first performs ASR (Automatic Speech Recognition) on the question and then applies text-based visual question answering. Furthermore, we investigate the robustness of both methods by injecting various levels of noise into the spoken question, and find that both methods tolerate noise to a similar degree.
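The robustness experiment described above can be sketched as mixing noise into the spoken question at a target signal-to-noise ratio. The paper's exact noise type and mixing procedure are not specified here, so white Gaussian noise and SNR-based scaling are assumptions in this minimal sketch:

```python
import numpy as np

def add_noise_at_snr(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Mix white Gaussian noise into an audio waveform at a target SNR in dB.

    Assumes white Gaussian noise; the paper's actual noise source may differ.
    """
    rng = np.random.default_rng(rng)
    signal_power = np.mean(signal ** 2)
    # Scale noise power so that 10*log10(signal_power / noise_power) == snr_db
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

# Example: a 1-second 440 Hz tone at 16 kHz, corrupted at 10 dB SNR
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
clean = np.sin(2 * np.pi * 440 * t)
noisy = add_noise_at_snr(clean, snr_db=10.0, rng=0)
```

Sweeping `snr_db` over a range of values would reproduce the "various levels of noise" setup for either the end-to-end or the ASR-pipeline model.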

Ted Zhang, Dengxin Dai, Tinne Tuytelaars, Marie-Francine Moens, Luc Van Gool • 2017

Related benchmarks

Task | Dataset | Result | Rank
Visual Question Answering | VQA v1 | Accuracy: 56.7 | 4
