
TARA: Simple and Efficient Time Aware Retrieval Adaptation of MLLMs for Video Understanding

About

Our objective is to build a general time-aware video-text embedding model for retrieval. To that end, we propose a simple and efficient recipe, dubbed TARA (Time Aware Retrieval Adaptation), that adapts Multimodal LLMs (MLLMs) into a time-aware video-text embedding model without using any video data at all. To evaluate time-awareness in retrieval, we propose a new benchmark with temporally opposite (chiral) actions as hard negatives and curated splits for chiral and non-chiral actions. We show that TARA outperforms all existing video-text models on this chiral benchmark while also achieving strong results on standard benchmarks. Furthermore, we discover additional benefits of TARA beyond time-awareness: (i) TARA embeddings are negation-aware, as shown on the NegBench benchmark that evaluates negation in video retrieval, and (ii) TARA achieves state-of-the-art performance on verb and adverb understanding in videos. Overall, TARA yields a strong, versatile, time-aware video-text embedding model with state-of-the-art zero-shot performance.
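To make the chiral evaluation concrete, here is a minimal sketch of how retrieval accuracy with temporally opposite captions as hard negatives could be scored. This assumes embeddings have already been computed; the function names and the pairing of each video with exactly one chiral caption are illustrative assumptions, not the paper's actual evaluation code.

```python
import numpy as np

def chiral_r_at_1(video_emb, pos_text_emb, chiral_text_emb):
    """Fraction of videos whose correct caption outranks the caption of the
    temporally opposite (chiral) action under cosine similarity.

    video_emb:       (N, D) video embeddings
    pos_text_emb:    (N, D) embeddings of the correct captions
    chiral_text_emb: (N, D) embeddings of the chiral hard-negative captions
    """
    def normalize(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    v = normalize(np.asarray(video_emb, dtype=np.float64))
    p = normalize(np.asarray(pos_text_emb, dtype=np.float64))
    c = normalize(np.asarray(chiral_text_emb, dtype=np.float64))

    sim_pos = np.sum(v * p, axis=-1)  # similarity to the correct caption
    sim_neg = np.sum(v * c, axis=-1)  # similarity to the chiral negative
    return float(np.mean(sim_pos > sim_neg))
```

A time-blind model scores near chance (0.5) on this metric, since the chiral caption differs from the correct one only in temporal direction (e.g. "opening a door" vs. "closing a door").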

Piyush Bagad, Andrew Zisserman • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-Video Retrieval | MSR-VTT | – | – | 313
Text-to-Image Retrieval | COCO | – | – | 130
Composed Video Retrieval | WebVid-CoVR (test) | R@1 | 53.1 | 45
Verb Recognition | Epic-Kitchens (EK) | Top-1 Acc | 6.1 | 22
Text-to-Video Retrieval | Something-Something CiA-Retrieval v2 | mAP (Chiral) | 85.1 | 16
Video-to-Text Retrieval | Something-Something CiA-Retrieval v2 | R@1 (Chiral) | 84 | 16
Text-to-Video Retrieval | ReversedInTime | Binary Accuracy | 71.6 | 11
Video-to-Text Retrieval | ReversedInTime | Binary Accuracy | 71.3 | 11
Chiral Action Recognition | CiA | SSv2 Accuracy | 90.8 | 9
Video Classification | MMEB Video Classification (Kinetics-700, SSv2, HMDB, UCF, Breakfast) v2 (test) | Classification Accuracy | 63.7 | 8

Showing 10 of 18 rows.
