
Motion-Appearance Co-Memory Networks for Video Question Answering

About

Video Question Answering (QA) is an important task in understanding video temporal structure. We observe that there are three unique attributes of video QA compared with image QA: (1) it deals with long sequences of images containing richer information not only in quantity but also in variety; (2) motion and appearance information are usually correlated with each other and each can provide useful attention cues to the other; (3) different questions require different numbers of frames to infer the answer. Based on these observations, we propose a motion-appearance co-memory network for video QA. Our network builds on concepts from the Dynamic Memory Network (DMN) and introduces new mechanisms for video QA. Specifically, there are three salient aspects: (1) a co-memory attention mechanism that utilizes cues from both motion and appearance to generate attention; (2) a temporal conv-deconv network to generate multi-level contextual facts; (3) a dynamic fact ensemble method to construct temporal representations dynamically for different questions. We evaluate our method on the TGIF-QA dataset, and the results outperform the state of the art significantly on all four tasks of TGIF-QA.
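The core idea of the co-memory attention mechanism can be sketched in a few lines: each stream's attention over its own facts is driven by the other stream's current memory, so motion cues guide appearance attention and vice versa. The following NumPy sketch is illustrative only; the attention form, the convex-blend memory update, and all names are assumptions standing in for the paper's exact (GRU-based) equations.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over the last axis."""
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def co_memory_step(app_facts, mot_facts, app_mem, mot_mem):
    """One co-memory update (hypothetical simplification).

    app_facts, mot_facts: (T, d) appearance / motion fact sequences
    app_mem, mot_mem:     (d,)   current memory vectors
    Returns the updated (app_mem, mot_mem).
    """
    # Cross-stream cues: motion memory scores the appearance facts,
    # appearance memory scores the motion facts.
    app_attn = softmax(app_facts @ mot_mem)   # (T,) attention from motion cues
    mot_attn = softmax(mot_facts @ app_mem)   # (T,) attention from appearance cues
    app_ctx = app_attn @ app_facts            # attended appearance context (d,)
    mot_ctx = mot_attn @ mot_facts            # attended motion context (d,)
    # A simple convex blend stands in for the paper's learned memory update.
    new_app_mem = 0.5 * app_mem + 0.5 * app_ctx
    new_mot_mem = 0.5 * mot_mem + 0.5 * mot_ctx
    return new_app_mem, new_mot_mem

# Toy usage with random features
rng = np.random.default_rng(0)
T, d = 8, 16
app = rng.standard_normal((T, d))
mot = rng.standard_normal((T, d))
a_mem, m_mem = co_memory_step(app, mot, np.zeros(d), np.zeros(d))
```

In practice the step would be iterated for several memory "hops", with the question embedding folded into the scoring function as well.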

Jiyang Gao, Runzhou Ge, Kan Chen, Ram Nevatia • 2018

Related benchmarks

Task | Dataset | Result | Rank
---- | ------- | ------ | ----
Video Question Answering | MSRVTT-QA | Accuracy: 32 | 481
Video Question Answering | MSRVTT-QA (test) | Accuracy: 32 | 371
Video Question Answering | MSVD-QA | Accuracy: 32 | 340
Video Question Answering | MSVD-QA (test) | -- | 274
Video Question Answering | NExT-QA (test) | Accuracy: 48.54 | 204
Video Question Answering | NExT-QA (val) | Overall Acc: 48.04 | 176
Video Question Answering | TGIF-QA | -- | 147
Video Question Answering | TGIF-QA (test) | Accuracy: 74.3 | 89
Video Question Answering | TGIF-QA v2 (test) | Action Acc: 68.2 | 12
Repetition Count | TGIF-QA (test) | MSE: 4.1 | 5
