
RTQ: Rethinking Video-language Understanding Based on Image-text Model

About

Recent advancements in video-language understanding have been established on the foundation of image-text models, resulting in promising outcomes due to the shared knowledge between images and videos. However, video-language understanding presents unique challenges due to the inclusion of highly complex semantic details, which result in information redundancy, temporal dependency, and scene complexity. Current techniques have only partially tackled these issues, and our quantitative analysis indicates that some of these methods are complementary. In light of this, we propose a novel framework called RTQ (Refine, Temporal model, and Query), which addresses these challenges simultaneously. The approach involves refining redundant information within frames, modeling temporal relations among frames, and querying task-specific information from the videos. Remarkably, our model demonstrates outstanding performance even in the absence of video-language pre-training, and the results are comparable with or superior to those achieved by state-of-the-art pre-training methods. Code is available at https://github.com/SCZwangxiao/RTQ-MM2023.
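The three stages named above (Refine, Temporal model, Query) can be illustrated with a toy pipeline. This is a minimal sketch under strong simplifying assumptions: the function names and heuristics are hypothetical, and scalar "patch features" stand in for the learned patch embeddings and attention modules the actual model uses.

```python
def refine(frame_patches, keep_ratio=0.5):
    """Refine: drop redundant patches within each frame.

    As a stand-in for redundancy reduction, keep only the patches
    that differ most from the frame's mean feature.
    """
    refined = []
    for patches in frame_patches:
        mean = sum(patches) / len(patches)
        # Rank patches by how distinctive they are within the frame.
        ranked = sorted(patches, key=lambda p: abs(p - mean), reverse=True)
        k = max(1, int(len(patches) * keep_ratio))
        refined.append(ranked[:k])
    return refined


def temporal_model(refined_frames):
    """Temporal model: mix each frame's summary with its neighbors,
    a crude proxy for modeling temporal dependency across frames."""
    summaries = [sum(f) / len(f) for f in refined_frames]
    mixed = []
    for i, s in enumerate(summaries):
        prev = summaries[i - 1] if i > 0 else s
        nxt = summaries[i + 1] if i < len(summaries) - 1 else s
        mixed.append(0.25 * prev + 0.5 * s + 0.25 * nxt)
    return mixed


def query(video_features, text_query):
    """Query: select the frame feature most relevant to a (scalar)
    task-specific query, mimicking task-conditioned retrieval."""
    return min(video_features, key=lambda v: abs(v - text_query))


# Usage: two frames of three patch features each, queried with 3.0.
frames = [[1.0, 1.1, 5.0], [2.0, 2.1, 6.0]]
result = query(temporal_model(refine(frames)), text_query=3.0)
```

The point of the sketch is the composition order: per-frame refinement first, then cross-frame temporal mixing, then task-specific querying, matching how the abstract describes the three challenges being addressed simultaneously.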

Xiao Wang, Yaoyu Li, Tian Gan, Zheng Zhang, Jingjing Lv, Liqiang Nie • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Video Retrieval | DiDeMo | R@1 | 0.576 | 360 |
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 53.4 | 313 |
| Video Captioning | MSVD | CIDEr | 123.4 | 128 |
| Video Question Answering | NExT-QA | Overall Accuracy | 63.2 | 105 |
| Video Captioning | MSRVTT | CIDEr | 69.3 | 101 |
| Text-to-Video Retrieval | ActivityNet Captions | R@1 | 53.5 | 56 |
| Video Question Answering | MSR-VTT | Accuracy | 42.1 | 42 |

Other info

Code: https://github.com/SCZwangxiao/RTQ-MM2023