
VidVec: Unlocking Video MLLM Embeddings for Video-Text Retrieval

About

Recent studies have adapted generative Multimodal Large Language Models (MLLMs) into embedding extractors for vision tasks, typically through fine-tuning to produce universal representations. However, their performance on video remains inferior to Video Foundation Models (VFMs). In this paper, we focus on leveraging MLLMs for video-text embedding and retrieval. We first conduct a systematic layer-wise analysis, showing that intermediate (pre-trained) MLLM layers already encode substantial task-relevant information. Leveraging this insight, we demonstrate that combining intermediate-layer embeddings with a calibrated MLLM head yields strong zero-shot retrieval performance without any training. Building on these findings, we introduce a lightweight text-based alignment strategy which maps dense video captions to short summaries and enables task-related video-text embedding learning without visual supervision. Remarkably, without any fine-tuning beyond text, our method outperforms current methods, often by a substantial margin, achieving state-of-the-art results across common video retrieval benchmarks.

Issar Tzachor, Dvir Samuel, Rami Ben-Ari • 2026
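
The abstract describes the approach only at a high level: pool embeddings from an intermediate MLLM layer and rank videos against text queries by similarity. The sketch below is an illustrative, self-contained approximation of that idea, not the paper's exact method; the layer index, mean pooling, and the random tensors standing in for real MLLM hidden states are all assumptions, and the calibrated head and text-based alignment steps are not modeled.

```python
# Illustrative sketch only. Hypothetical tensors stand in for the per-layer
# hidden states a video-capable MLLM would return (e.g. with hidden-state
# outputs enabled); the paper's exact pooling, layer choice, and calibration
# are not reproduced here.
import torch
import torch.nn.functional as F

def pool_layer(hidden_states, layer_idx):
    """Mean-pool the token dimension of one (intermediate) layer and L2-normalize.

    hidden_states: list of [batch, seq_len, dim] tensors, one per layer.
    layer_idx:     which layer to use; intermediate layers, not only the last,
                   are reported to carry strong task-relevant signal.
    """
    emb = hidden_states[layer_idx].mean(dim=1)   # [batch, dim]
    return F.normalize(emb, dim=-1)

# Stand-in activations: 4 text queries and 6 candidate videos, 25 layers, dim 4096.
num_layers = 25
text_hs  = [torch.randn(4, 12, 4096) for _ in range(num_layers)]
video_hs = [torch.randn(6, 256, 4096) for _ in range(num_layers)]

layer_idx = 16                                   # an intermediate layer (assumed)
text_emb  = pool_layer(text_hs, layer_idx)       # [4, 4096]
video_emb = pool_layer(video_hs, layer_idx)      # [6, 4096]

# Text-to-video retrieval: rank candidate videos by cosine similarity to each query.
sim = text_emb @ video_emb.T                     # [4, 6]
ranking = sim.argsort(dim=-1, descending=True)   # best-matching videos first
print(ranking[0])                                # candidates ordered for query 0
```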

Related benchmarks

Task                    | Dataset            | Metric   | Result | Rank
Text-to-Video Retrieval | DiDeMo (test)      | R@1      | 55.7   | 376
Text-to-Video Retrieval | DiDeMo             | R@1      | 0.618  | 360
Text-to-Video Retrieval | MSR-VTT            | Recall@1 | 56.2   | 313
Text-to-Video Retrieval | MSR-VTT (test)     | R@1      | 52.5   | 234
Text-to-Video Retrieval | MSVD               | R@1      | 60.9   | 218
Text-to-Video Retrieval | MSVD (test)        | R@1      | 60.8   | 204
Video-to-Text Retrieval | MSR-VTT            | Recall@1 | 54.9   | 157
Text-to-Video Retrieval | ActivityNet (test) | R@1      | 79.2   | 108
Video-to-Text Retrieval | DiDeMo             | R@1      | 56.5   | 108
Text-to-Video Retrieval | VATEX              | R@1      | 70     | 95

Showing 10 of 14 rows.
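
The R@1 / Recall@1 figures above follow the standard retrieval metric: the fraction of queries whose ground-truth match appears in the top-K ranked candidates. A minimal sketch of that computation is below, assuming the conventional setup where query i is paired with video i (the diagonal of the similarity matrix); the random scores are placeholders.

```python
# Sketch of Recall@K as commonly reported for text-to-video retrieval.
import numpy as np

def recall_at_k(sim, k=1):
    """sim: [num_queries, num_videos] similarity matrix, query i paired with video i."""
    ranking = np.argsort(-sim, axis=1)                             # video indices, best first
    hits = ranking[:, :k] == np.arange(sim.shape[0])[:, None]      # ground truth in top-k?
    return 100.0 * hits.any(axis=1).mean()                         # percentage of queries

sim = np.random.randn(1000, 1000)                                  # stand-in similarity scores
print(recall_at_k(sim, k=1), recall_at_k(sim, k=5))
```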
