
Test of Time: Instilling Video-Language Models with a Sense of Time

About

Modelling and understanding time remains a challenge in contemporary video understanding models. With language emerging as a key driver towards powerful generalization, it is imperative for foundational video-language models to have a sense of time. In this paper, we consider a specific aspect of temporal understanding: consistency of time order as elicited by before/after relations. We establish that seven existing video-language models struggle to understand even such simple temporal relations. We then question whether it is feasible to equip these foundational models with temporal awareness without re-training them from scratch. Towards this, we propose a temporal adaptation recipe on top of one such model, VideoCLIP, based on post-pretraining on a small amount of video-text data. We conduct a zero-shot evaluation of the adapted models on six datasets for three downstream tasks which require varying degrees of time awareness. We observe encouraging performance gains, especially when the task needs higher time awareness. Our work serves as a first step towards probing and instilling a sense of time in existing video-language models without the need for data- and compute-intensive training from scratch.

Piyush Bagad, Makarand Tapaswi, Cees G. M. Snoek • 2023
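The abstract describes adapting a model by penalizing confusions between before/after time orderings. The paper's exact objective is not given on this page; below is a minimal, hypothetical sketch of one such loss, assuming a batch of precomputed similarity scores between each video and (a) its correctly ordered caption (e.g. "pouring milk before stirring") and (b) the time-reversed caption ("stirring before pouring milk"). Function and parameter names are illustrative, not from the paper.

```python
import numpy as np

def time_order_contrastive_loss(sim_pos, sim_neg, temperature=0.07):
    """Cross-entropy over [correct-order, time-reversed] caption similarities.

    sim_pos: 1-D array, similarity of each video to its correctly
             ordered caption.
    sim_neg: 1-D array, similarity of each video to the time-reversed
             caption (the hard negative).
    Returns the mean loss over the batch; it is ~0 when the model
    strongly prefers the correct order, and log(2) when indifferent.
    """
    # Stack into per-sample logits: column 0 is the positive.
    logits = np.stack([sim_pos, sim_neg], axis=1) / temperature
    # Numerically stable log-softmax.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Negative log-likelihood of the correctly ordered caption.
    return float(-log_probs[:, 0].mean())
```

In a post-pretraining loop, such a term would be added to the model's usual video-text contrastive loss, with the time-reversed captions generated by swapping the order of events in the text.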

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 13.8 | 313 |
| Text-to-Video Retrieval | YouCook2 | Recall@10 | 62.4 | 117 |
| Video Question Answering | NExT-QA ATPhard | Overall Accuracy | 27.6 | 27 |
| Action-to-Video Retrieval | SS v2 | mAP | 4.3 | 10 |
| Action-to-Video Retrieval | SSv2 events | mAP | 7.8 | 10 |
| Action-to-Video Retrieval | Temporal | mAP | 16.2 | 10 |
| Temporal VQA | AGQA | Accuracy | 58.4 | 10 |
| Temporal VQA | Next-QA ATP | Accuracy | 27.6 | 10 |
| Text-to-Video Retrieval | SSv2-Events zero-shot | mAP | 7.8 | 8 |
| Text-to-Video Retrieval | Synthetic Control 1.0 (test) | -- | -- | 8 |
(10 of 21 rows shown)

Other info

Code
