
Revealing Single Frame Bias for Video-and-Language Learning

About

Training an effective video-and-language model intuitively requires multiple frames as model inputs. However, it is unclear whether using multiple frames is beneficial to downstream tasks, and if yes, whether the performance gain is worth the drastically-increased computation and memory costs resulting from using more frames. In this work, we explore single-frame models for video-and-language learning. On a diverse set of video-and-language tasks (including text-to-video retrieval and video question answering), we show the surprising result that, with large-scale pre-training and a proper frame ensemble strategy at inference time, a single-frame trained model that does not consider temporal information can achieve better performance than existing methods that use multiple frames for training. This result reveals the existence of a strong "static appearance bias" in popular video-and-language datasets. Therefore, to allow for a more comprehensive evaluation of video-and-language models, we propose two new retrieval tasks based on existing fine-grained action recognition datasets that encourage temporal modeling. Our code is available at https://github.com/jayleicn/singularity
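The "frame ensemble strategy at inference time" mentioned above can be illustrated with a minimal sketch: a single-frame model scores each uniformly sampled frame independently, and the per-frame text-video similarity scores are then pooled into one clip-level score. The function name, the pooling choices, and the score inputs below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np


def ensemble_frame_scores(frame_scores, strategy="mean"):
    """Pool per-frame similarity scores into a single clip-level score.

    frame_scores: scores a single-frame model produced independently for
    uniformly sampled frames of one video (hypothetical input format).
    strategy: "mean" averages the frames; "max" keeps the best-matching
    frame. Both are common pooling choices; the paper's exact ensemble
    procedure may differ.
    """
    scores = np.asarray(frame_scores, dtype=float)
    if strategy == "mean":
        return float(scores.mean())
    if strategy == "max":
        return float(scores.max())
    raise ValueError(f"unknown strategy: {strategy!r}")
```

For retrieval, such a pooled score would be computed per text-video pair and videos ranked by it; the single-frame model itself never sees more than one frame during training.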

Jie Lei, Tamara L. Berg, Mohit Bansal • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Question Answering | VQA v2 (test-dev) | Overall Accuracy | 70.3 | 664 |
| Video Question Answering | MSRVTT-QA | Accuracy | 43.9 | 481 |
| Visual Question Answering | VQA v2 (test-std) | Accuracy | 73.27 | 466 |
| Image-to-Text Retrieval | Flickr30K 1K (test) | R@1 | 84.7 | 439 |
| Text-to-Video Retrieval | DiDeMo (test) | R@1 | 53.9 | 376 |
| Video Question Answering | MSRVTT-QA (test) | Accuracy | 43.9 | 371 |
| Text-to-Video Retrieval | DiDeMo | R@1 | 0.539 | 360 |
| Visual Question Answering | VQA 2.0 (test-dev) | Accuracy | 73.13 | 337 |
| Video Question Answering | ActivityNet-QA | Accuracy | 44.1 | 319 |
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 41.5 | 313 |

Showing 10 of 51 rows

Other info

Code: https://github.com/jayleicn/singularity