
Self-Chained Image-Language Model for Video Localization and Question Answering

About

Recent studies have shown promising results in utilizing large pre-trained image-language models for video question answering. While these image-language models can efficiently bootstrap the representation learning of video-language models, they typically concatenate uniformly sampled video frames as visual inputs without explicit language-aware, temporal modeling. When only a portion of a video input is relevant to the language query, such uniform frame sampling can often miss important visual cues. Although humans often find a video moment to focus on and rewind that moment to answer questions, training a query-aware video moment localizer often requires expensive annotations and high computational costs. To address this issue, we propose Self-Chained Video Localization-Answering (SeViLA), a novel framework that leverages a single image-language model (BLIP-2) to tackle both temporal keyframe localization and QA on videos. The SeViLA framework consists of two modules, a Localizer and an Answerer, both parameter-efficiently fine-tuned from BLIP-2. We propose two ways of chaining these modules, for cascaded inference and for self-refinement. First, in the forward chain, the Localizer finds multiple language-aware keyframes in a video, which the Answerer uses to predict the answer. Second, in the reverse chain, the Answerer generates keyframe pseudo-labels to refine the Localizer, alleviating the need for expensive video moment localization annotations. Our SeViLA framework outperforms several strong baselines on 5 challenging video QA and event prediction benchmarks, and achieves state-of-the-art results in both fine-tuning (NExT-QA, STAR) and zero-shot (NExT-QA, STAR, How2QA, VLEP) settings. We also analyze the impact of the Localizer, compare it with other temporal localization models, and study the effects of its pre-training and self-refinement as well as of varying the number of keyframes.
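The forward and reverse chains can be pictured with a short sketch. The code below is a minimal illustration only, not the released implementation: localizer_score and answerer are hypothetical stand-ins for the two BLIP-2-based modules, and the matching rule in reverse_chain is a simplified reading of the keyframe pseudo-labeling step.

```python
# Minimal sketch of SeViLA's two chains (illustrative only, not the official code).
from typing import Callable, List, Sequence


def forward_chain(frames: Sequence, question: str, options: Sequence[str],
                  localizer_score: Callable, answerer: Callable, k: int = 4) -> str:
    """Forward chain: the Localizer scores each frame against the question,
    and the top-k language-aware keyframes (kept in temporal order) go to the Answerer."""
    scores = [localizer_score(f, question) for f in frames]
    top_k = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)[:k]
    keyframes = [frames[i] for i in sorted(top_k)]
    return answerer(keyframes, question, options)


def reverse_chain(frames: Sequence, question: str, options: Sequence[str],
                  gold_answer: str, answerer: Callable) -> List[int]:
    """Reverse chain (simplified): frames whose single-frame prediction matches the
    ground-truth answer become keyframe pseudo-labels used to refine the Localizer."""
    return [i for i, f in enumerate(frames)
            if answerer([f], question, options) == gold_answer]


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end; real modules are BLIP-2 based.
    frames = ["frame0", "frame1", "frame2", "frame3", "frame4", "frame5"]
    dummy_score = lambda frame, q: int(frame[-1]) % 3             # fake relevance score
    dummy_answer = lambda fs, q, opts: opts[len(fs) % len(opts)]  # fake prediction
    options = ["A", "B", "C"]

    print(forward_chain(frames, "What happens?", options, dummy_score, dummy_answer))
    print(reverse_chain(frames, "What happens?", options, "B", dummy_answer))
```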

Shoubin Yu, Jaemin Cho, Prateek Yadav, Mohit Bansal • 2023

Related benchmarks

Task                      | Dataset                       | Metric           | Result | Rank
Video Question Answering  | NExT-QA (test)                | Accuracy         | 73.8   | 204
Video Question Answering  | EgoSchema (Full)              | Accuracy         | 22.7   | 193
Video Question Answering  | NExT-QA (val)                 | Overall Acc      | 73.8   | 176
Moment Retrieval          | QVHighlights (test)           | R@1 (IoU=0.5)    | 54.5   | 170
Temporal Video Grounding  | Charades-STA (test)           | Recall@IoU=0.5   | 15     | 117
Video Question Answering  | NEXT-QA                       | Overall Accuracy | 73.8   | 105
Video Question Answering  | EgoSchema (test)              | Accuracy         | 22.7   | 80
Video Question Answering  | EgoSchema subset              | Accuracy         | 25.7   | 73
Video Question Answering  | Perception (test)             | Test Accuracy    | 46.2   | 59
Video Question Answering  | EgoSchema 500-question subset | Accuracy         | 25.7   | 50
(Showing 10 of 49 rows)

Other info

Code
