TARA: Simple and Efficient Time Aware Retrieval Adaptation of MLLMs for Video Understanding
About
Our objective is to build a general time-aware video-text embedding model for retrieval. To that end, we propose a simple and efficient recipe, dubbed TARA (Time Aware Retrieval Adaptation), for adapting Multimodal LLMs (MLLMs) into a time-aware video-text embedding model without using any video data at all. To evaluate time-awareness in retrieval, we propose a new benchmark that uses temporally opposite (chiral) actions as hard negatives, with curated splits for chiral and non-chiral actions. We show that TARA outperforms all existing video-text models on this chiral benchmark while also achieving strong results on standard benchmarks. Furthermore, we discover additional benefits of TARA beyond time-awareness: (i) TARA embeddings are negation-aware, as shown on the NegBench benchmark, which evaluates negation in video retrieval; (ii) TARA achieves state-of-the-art performance on verb and adverb understanding in videos. Overall, TARA yields a strong, versatile, time-aware video-text embedding model with state-of-the-art zero-shot performance.
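The chiral evaluation described above can be sketched with a toy example: each video embedding is scored against its true caption and the temporally opposite (chiral) caption, and the model is credited when the true caption scores higher. The embeddings below are random stand-ins, not real TARA outputs, and `chiral_accuracy` is a hypothetical helper for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalise(x):
    # L2-normalise so the dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

# Stand-in embeddings: 4 videos, their true captions (correlated with the
# video), and chiral captions such as "closing a door" vs "opening a door"
# (uncorrelated here, purely for illustration).
video_emb = normalise(rng.normal(size=(4, 512)))
true_caption_emb = normalise(video_emb + 0.1 * rng.normal(size=(4, 512)))
chiral_caption_emb = normalise(rng.normal(size=(4, 512)))

def chiral_accuracy(v, pos, neg):
    """Fraction of videos whose true caption outscores its chiral negative."""
    pos_sim = np.sum(v * pos, axis=-1)  # cosine similarity (unit-norm inputs)
    neg_sim = np.sum(v * neg, axis=-1)
    return float(np.mean(pos_sim > neg_sim))

print(chiral_accuracy(video_emb, true_caption_emb, chiral_caption_emb))
```

Because the chiral caption differs from the true one only in temporal direction, a model without time-awareness scores near chance on this pairwise comparison.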
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Retrieval | MSR-VTT | -- | -- | 313 |
| Text-to-Image Retrieval | COCO | -- | -- | 130 |
| Composed Video Retrieval | WebVid-CoVR (test) | R@1 | 53.1 | 45 |
| Verb Recognition | Epic-Kitchens (EK) | Top-1 Acc | 6.1 | 22 |
| Text-to-Video Retrieval | Something-Something CiA-Retrieval v2 | mAP (Chiral) | 85.1 | 16 |
| Video-to-Text Retrieval | Something-Something CiA-Retrieval v2 | R@1 (Chiral) | 84 | 16 |
| Text-to-Video Retrieval | ReversedInTime | Binary Accuracy | 71.6 | 11 |
| Video-to-Text Retrieval | ReversedInTime | Binary Accuracy | 71.3 | 11 |
| Chiral Action Recognition | CiA | SSv2 Accuracy | 90.8 | 9 |
| Video Classification | MMEB Video Classification (Kinetics-700, SSv2, HMDB, UCF, Breakfast) v2 (test) | Classification Accuracy | 63.7 | 8 |