
TGIF-QA: Toward Spatio-Temporal Reasoning in Visual Question Answering

About

Vision and language understanding has emerged as a subject of intense study in Artificial Intelligence. Among the many tasks in this line of research, visual question answering (VQA) has been one of the most successful, where the goal is to learn a model that understands visual content at region-level detail and finds its associations with pairs of questions and answers in natural language. Despite rapid progress in the past few years, most existing work in VQA has focused primarily on images. In this paper, we focus on extending VQA to the video domain and contribute to the literature in three important ways. First, we propose three new tasks designed specifically for video VQA, which require spatio-temporal reasoning from videos to answer questions correctly. Next, we introduce a new large-scale dataset for video VQA named TGIF-QA that extends existing VQA work with our new tasks. Finally, we propose a dual-LSTM based approach with both spatial and temporal attention, and show its effectiveness over conventional VQA techniques through empirical evaluations.
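The core of the proposed model is attention over the video's temporal axis conditioned on the question. As an illustrative sketch only (not the authors' implementation, which learns projections end to end inside a dual-LSTM network), temporal attention can be reduced to scoring each frame against the question encoding, normalizing with a softmax, and pooling:

```python
import numpy as np

def temporal_attention(frame_feats, question_vec):
    """Soft temporal attention: weight each frame feature by its
    relevance to the question, then pool over time.

    frame_feats:  (T, D) array of per-frame video features
    question_vec: (D,)   encoded question vector

    Illustrative sketch; the paper's model learns the scoring
    projections jointly with dual LSTM encoders.
    """
    scores = frame_feats @ question_vec              # (T,) relevance per frame
    scores = scores - scores.max()                   # subtract max for stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over time steps
    pooled = weights @ frame_feats                   # (D,) attended video feature
    return weights, pooled

# toy example: 4 frames with 3-dim features, question aligned with frame 0
frames = np.array([[1., 0., 0.],
                   [0., 1., 0.],
                   [0., 0., 1.],
                   [1., 1., 0.]])
q = np.array([1., 0., 0.])
weights, pooled = temporal_attention(frames, q)
```

Spatial attention in the paper works analogously, but over region features within each frame rather than over frames.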

Yunseok Jang, Yale Song, Youngjae Yu, Youngjin Kim, Gunhee Kim • 2017

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Video Question Answering | MSRVTT-QA | Accuracy 31.3 | 481 |
| Video Question Answering | MSRVTT-QA (test) | Accuracy 66.1 | 371 |
| Video Question Answering | MSVD-QA | Accuracy 31.3 | 340 |
| Video Question Answering | MSVD-QA (test) | -- | 274 |
| Video Question Answering | NExT-QA (test) | Accuracy 47.64 | 204 |
| Video Question Answering | NExT-QA (val) | Overall Acc 47.94 | 176 |
| Video Question Answering | TGIF-QA | -- | 147 |
| Video Question Answering | TGIF-QA (test) | Accuracy 69.4 | 89 |
| Video Question Answering | TGIF-QA original (test) | Repetition Count Loss (Mean L2) 4.2825 | 13 |
| Video Question Answering | TGIF-QA v2 (test) | Action Acc 62.9 | 12 |

Showing 10 of 14 rows.
