VidVec: Unlocking Video MLLM Embeddings for Video-Text Retrieval
About
Recent studies have adapted generative Multimodal Large Language Models (MLLMs) into embedding extractors for vision tasks, typically through fine-tuning to produce universal representations. However, their performance on video remains inferior to Video Foundation Models (VFMs). In this paper, we focus on leveraging MLLMs for video-text embedding and retrieval. We first conduct a systematic layer-wise analysis, showing that intermediate (pre-trained) MLLM layers already encode substantial task-relevant information. Leveraging this insight, we demonstrate that combining intermediate-layer embeddings with a calibrated MLLM head yields strong zero-shot retrieval performance without any training. Building on these findings, we introduce a lightweight text-based alignment strategy that maps dense video captions to short summaries and enables task-relevant video-text embedding learning without visual supervision. Remarkably, without any fine-tuning beyond text, our method outperforms current methods, often by a substantial margin, achieving state-of-the-art results across common video retrieval benchmarks.
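The core zero-shot recipe above (pool an intermediate layer's token embeddings, then rank candidates by cosine similarity) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the array shapes, the mean-pooling choice, and the function names are all assumptions.

```python
import numpy as np

def pool_layer(hidden_states, layer):
    """Mean-pool token embeddings from one transformer layer.

    hidden_states: (num_layers, num_tokens, dim) array of per-layer token
    representations (shape and pooling are illustrative assumptions).
    """
    return hidden_states[layer].mean(axis=0)

def retrieve(text_emb, video_embs):
    """Rank videos by cosine similarity to a text query embedding.

    Returns video indices ordered from most to least similar.
    """
    t = text_emb / np.linalg.norm(text_emb)
    v = video_embs / np.linalg.norm(video_embs, axis=1, keepdims=True)
    sims = v @ t
    return np.argsort(-sims)

# Toy usage: the query embedding is a slightly perturbed copy of video 3,
# so video 3 should rank first.
rng = np.random.default_rng(0)
video_embs = rng.normal(size=(5, 8))
text_emb = video_embs[3] + 0.01 * rng.normal(size=8)
order = retrieve(text_emb, video_embs)
```

In practice the two embeddings would come from the same MLLM layer for text and video inputs; the layer index itself is a hyperparameter, which is what the paper's layer-wise analysis probes.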
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Video Retrieval | DiDeMo (test) | R@1 | 55.7 | 376 |
| Text-to-Video Retrieval | DiDeMo | R@1 | 0.618 | 360 |
| Text-to-Video Retrieval | MSR-VTT | Recall@1 | 56.2 | 313 |
| Text-to-Video Retrieval | MSR-VTT (test) | R@1 | 52.5 | 234 |
| Text-to-Video Retrieval | MSVD | R@1 | 60.9 | 218 |
| Text-to-Video Retrieval | MSVD (test) | R@1 | 60.8 | 204 |
| Video-to-Text Retrieval | MSR-VTT | Recall@1 | 54.9 | 157 |
| Text-to-Video Retrieval | ActivityNet (test) | R@1 | 79.2 | 108 |
| Video-to-Text Retrieval | DiDeMo | R@1 | 56.5 | 108 |
| Text-to-Video Retrieval | VATEX | R@1 | 70 | 95 |
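The R@1 / Recall@1 numbers above follow the standard retrieval protocol: for each text query, the true video ranks in the top K of the similarity-sorted candidate list. A minimal sketch of that metric, assuming the usual benchmark convention that query i's ground-truth video is video i:

```python
import numpy as np

def recall_at_k(sim_matrix, k):
    """Compute Recall@K for text-to-video retrieval.

    sim_matrix[i, j] = similarity of text query i to video j, with the
    ground truth on the diagonal (the common benchmark setup).
    Returns the fraction of queries whose true video ranks in the top K.
    """
    ranks = (-sim_matrix).argsort(axis=1)          # best-first video indices per query
    truth = np.arange(len(sim_matrix))[:, None]    # ground-truth index for each query
    hits = (ranks[:, :k] == truth).any(axis=1)     # true video within top K?
    return hits.mean()

# Toy usage: query 0 retrieves its video first; query 1 ranks its video second.
sim = np.array([[0.9, 0.1],
                [0.8, 0.2]])
r1 = recall_at_k(sim, 1)   # 0.5
r2 = recall_at_k(sim, 2)   # 1.0
```

R@10 in the table is the same computation with k=10 over the full candidate set.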