
MMMORRF: Multimodal Multilingual Modularized Reciprocal Rank Fusion

About

Videos inherently contain multiple modalities, including visual events, text overlays, sounds, and speech, all of which are important for retrieval. However, state-of-the-art multimodal language models like VAST and LanguageBind are built on vision-language models (VLMs) and thus overly prioritize visual signals. Retrieval benchmarks further reinforce this bias by focusing on visual queries and neglecting other modalities. We present MMMORRF, a search system that extracts text and features from both visual and audio modalities and integrates them with a novel modality-aware weighted reciprocal rank fusion. MMMORRF is both effective and efficient, making it practical for searching videos based on users' information needs rather than visual descriptive queries. We evaluate MMMORRF on MultiVENT 2.0 and TVR, two multimodal benchmarks designed for more targeted information needs, and find that it improves nDCG@20 by 81% over leading multimodal encoders and 37% over single-modality retrieval, demonstrating the value of integrating diverse modalities.
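The core fusion step described above can be sketched in a few lines. The snippet below is a minimal illustration of weighted reciprocal rank fusion, not the paper's actual implementation: the modality names, weights, and the smoothing constant k=60 (a common RRF default) are assumptions for the example.

```python
from collections import defaultdict

def weighted_rrf(ranked_lists, weights, k=60):
    """Fuse per-modality rankings with weighted reciprocal rank fusion.

    ranked_lists: dict mapping modality name -> list of video ids, best first.
    weights: dict mapping modality name -> fusion weight (hypothetical values).
    k: smoothing constant; 60 is the value commonly used for RRF.
    """
    scores = defaultdict(float)
    for modality, docs in ranked_lists.items():
        w = weights.get(modality, 1.0)
        for rank, doc in enumerate(docs, start=1):
            # Each modality contributes w / (k + rank) to a video's score.
            scores[doc] += w / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Toy example with two hypothetical modality rankings (OCR text and speech).
fused = weighted_rrf(
    {"ocr": ["v1", "v2", "v3"], "asr": ["v2", "v3", "v1"]},
    {"ocr": 1.0, "asr": 0.5},
)
```

Because fusion operates only on ranks, not raw scores, each modality's retriever can use its own scoring scale; the per-modality weights let the system emphasize whichever signals matter for a given query type.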

Saron Samuel, Dan DeGenaro, Jimena Guallar-Blasco, Kate Sanders, Oluwaseun Eisape, Tanner Spendlove, Arun Reddy, Alexander Martin, Andrew Yates, Eugene Yang, Cameron Carpenter, David Etter, Efsun Kayi, Matthew Wiesner, Kenton Murray, Reno Kriz • 2025

Related benchmarks

Task                  Dataset               Metric       Result  Rank
Video Retrieval       MultiVENT 2.0 (test)  Recall@10    61.1    12
Article Generation    WikiVideo (test)      InfoP Score  94.4    10
Multimodal Retrieval  WikiVideo (test)      Alpha-nDCG   54      10
