
T*: Re-thinking Temporal Search for Long-Form Video Understanding

About

Efficiently understanding long-form videos remains a significant challenge in computer vision. In this work, we revisit temporal search paradigms for long-form video understanding and address a fundamental issue affecting all state-of-the-art (SOTA) long-context vision-language models (VLMs). Our contributions are twofold. First, we frame temporal search as a Long Video Haystack problem: finding a minimal set of relevant frames (e.g., one to five) from tens of thousands based on specific queries. Upon this formulation, we introduce LV-Haystack, the first dataset of its kind, comprising 480 hours of video and 15,092 human-annotated instances for both training and evaluation, aiming to improve temporal search quality and efficiency. Results on LV-Haystack highlight a significant research gap in temporal search capabilities, with current SOTA search methods achieving only 2.1% temporal F1 score on the LongVideoBench subset. Second, inspired by visual search in images, we propose a lightweight temporal search framework, T*, which reframes costly temporal search as spatial search. T* leverages powerful visual localization techniques commonly used in images and introduces an adaptive zooming-in mechanism that operates across both temporal and spatial dimensions. Extensive experiments show that integrating T* with existing methods significantly improves SOTA long-form video understanding. Under an inference budget of 32 frames, T* improves GPT-4o's performance from 50.5% to 53.1% and LLaVA-OneVision-OV-72B's performance from 56.5% to 62.4% on the LongVideoBench XL subset. Our code, benchmark, and models are provided in the Supplementary material.
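The temporal F1 score mentioned above measures how well a search method's selected frames overlap with human-annotated keyframes. The abstract does not spell out the exact matching rule, so the sketch below is a minimal set-based interpretation with an assumed frame-index tolerance window; the `tolerance` parameter and function name are illustrative, not from the paper.

```python
def temporal_f1(pred_frames, gt_frames, tolerance=0):
    """Set-based temporal F1 between predicted and ground-truth frame indices.

    Assumption (not specified in the abstract): a predicted frame counts as
    a hit if it lies within `tolerance` frames of some ground-truth keyframe;
    tolerance=0 requires an exact index match.
    """
    pred, gt = set(pred_frames), set(gt_frames)
    if not pred or not gt:
        return 0.0
    # Predicted frames that land near any annotated keyframe.
    hit_pred = {p for p in pred if any(abs(p - g) <= tolerance for g in gt)}
    # Annotated keyframes that are covered by some prediction.
    hit_gt = {g for g in gt if any(abs(p - g) <= tolerance for p in pred)}
    precision = len(hit_pred) / len(pred)
    recall = len(hit_gt) / len(gt)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, predicting only one of two annotated keyframes yields precision 1.0 and recall 0.5, for an F1 of about 0.67; the very low 2.1% figure reported above reflects how rarely current methods land on the handful of relevant frames.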

Jinhui Ye, Zihan Wang, Haosen Sun, Keshigeyan Chandrasegaran, Zane Durante, Cristobal Eyzaguirre, Yonatan Bisk, Juan Carlos Niebles, Ehsan Adeli, Li Fei-Fei, Jiajun Wu, Manling Li • 2025

Related benchmarks

Task                      | Dataset              | Result              | Rank
Long Video Understanding  | LongVideoBench (val) | Accuracy 57.49      | 210
Video Question Answering  | VideoMME             | --                  | 210
Video Question Answering  | NExT-QA (test)       | Accuracy 80.4       | 204
Video Question Answering  | LongVideoBench       | Accuracy 47.3       | 180
Video Question Answering  | EgoSchema subset     | Accuracy 66.6       | 114
Video Understanding       | Video-MME            | Overall Score 69.77 | 96
Video Understanding       | LongVideoBench       | --                  | 92
Video Question Answering  | NextQA               | Accuracy 80.4       | 78
Long Video Understanding  | VideoMME             | --                  | 40
Video Question Answering  | Video-MME Long       | Accuracy 55.2       | 36

(Showing 10 of 27 rows)
