
ViLA: Efficient Video-Language Alignment for Video Question Answering

About

In this work, we propose an efficient Video-Language Alignment (ViLA) network. Our ViLA model addresses both efficient frame sampling and effective cross-modal alignment in a unified way. In our ViLA network, we design a new learnable text-guided Frame-Prompter together with a new cross-modal distillation (QFormer-Distiller) module. Pre-trained large image-language models have shown promising results on problems such as visual question answering (VQA). However, how to efficiently and effectively sample video frames when adapting a pre-trained large image-language model to video-language alignment remains a major challenge. Compared with prior work, our ViLA model demonstrates the capability of selecting key frames with critical content, improving video-language alignment accuracy while reducing inference latency (+3.3% on NExT-QA Temporal with a 3.0X speed-up). Overall, our ViLA network outperforms state-of-the-art methods on video question-answering benchmarks: +4.6% on STAR Interaction and +2.2% on STAR average with a 3.0X speed-up, and our 2-frame model outperforms SeViLA with 4 frames on the VLEP dataset with a 4.2X speed-up. The code will be available at https://github.com/xijun-cs/ViLA.
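The abstract does not spell out how text-guided frame selection works. As a rough, hypothetical illustration of the general idea (not the paper's actual Frame-Prompter, which is a learnable, end-to-end trained module), one could score candidate frames by their similarity to the text query embedding and keep only the top-k for the language model. All names and the toy embeddings below are made up for the sketch.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def select_frames(frame_embeddings, text_embedding, k=2):
    """Pick the k frames whose embeddings best match the text query,
    returned in temporal order (a non-learnable stand-in for a
    text-guided frame sampler)."""
    scores = [cosine(f, text_embedding) for f in frame_embeddings]
    ranked = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)
    return sorted(ranked[:k])  # restore temporal order

# Toy example: 4 frames with 3-dim embeddings, query aligned with frames 0 and 2.
frames = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.9, 0.1, 0.0], [0.0, 0.0, 1.0]]
query = [1.0, 0.0, 0.0]
print(select_frames(frames, query, k=2))  # -> [0, 2]
```

In the paper's setting the selection is learned jointly with the alignment objective and distilled via the QFormer-Distiller; this sketch only conveys why conditioning frame choice on the question can cut the number of frames (and hence latency) the language model must process.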

Xijun Wang, Junbang Liang, Chun-Kai Wang, Kenan Deng, Yu Lou, Ming Lin, Shan Yang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Video Question Answering | NExT-QA (test) | – | – | 204 |
| Video Question Answering | How2QA | Acc | 83.9 | 47 |
| Video Question Answering | TVQA | Accuracy | 63.4 | 40 |
| Video Question Answering | TVQA (test) | Accuracy | 63.4 | 35 |
| Video Reasoning | STAR | Score | 67.1 | 19 |
| Video Question Answering | VideoMME (long split) | Accuracy | 46.2 | 18 |
| Video Question Answering | NextQA (val) | Accuracy | 74.4 | 11 |
| Video Question Answering | VLEP | Total Accuracy | 69.6 | 8 |
| Video QA | NEXT-QA | Accuracy | 75.6 | 7 |
