T*: Re-thinking Temporal Search for Long-Form Video Understanding
About
Efficiently understanding long-form videos remains a significant challenge in computer vision. In this work, we revisit temporal search paradigms for long-form video understanding and address a fundamental issue pertaining to all state-of-the-art (SOTA) long-context vision-language models (VLMs). Our contributions are twofold. First, we frame temporal search as a Long Video Haystack problem: finding a minimal set of relevant frames (e.g., one to five) from tens of thousands, given a specific query. Building on this formulation, we introduce LV-Haystack, the first dataset of its kind, with 480 hours of video and 15,092 human-annotated instances for training and evaluation, aimed at improving temporal search quality and efficiency. Results on LV-Haystack highlight a significant research gap in temporal search capabilities: current SOTA search methods achieve only 2.1% temporal F1 score on the LongVideoBench subset. Second, inspired by visual search in images, we propose a lightweight temporal search framework, T*, which reframes costly temporal search as spatial search. T* leverages powerful visual localization techniques commonly used on images and introduces an adaptive zooming-in mechanism that operates across both temporal and spatial dimensions. Extensive experiments show that integrating T* with existing methods significantly improves SOTA long-form video understanding. Under an inference budget of 32 frames, T* improves GPT-4o's performance from 50.5% to 53.1% and LLaVA-OneVision-OV-72B's performance from 56.5% to 62.4% on the LongVideoBench XL subset. Our code, benchmark, and models are provided in the Supplementary material.
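The adaptive zooming-in idea can be sketched as a coarse-to-fine search loop: sample the video sparsely, score each sampled frame for query relevance with an image-level localizer, then resample more densely around the highest-scoring frames until the frame budget is spent. The sketch below is a minimal illustration of this pattern, not the paper's actual implementation; `score_fn` stands in for whatever per-frame relevance scorer (e.g., a visual localization model) is plugged in, and all parameter names are assumptions.

```python
def temporal_search(num_frames, score_fn, budget=32, top_k=5):
    """Coarse-to-fine temporal search sketch (illustrative, not T*'s code).

    num_frames : total frames in the video (possibly tens of thousands)
    score_fn   : callable(frame_index) -> float query-relevance score,
                 assumed to come from an image-level visual localizer
    budget     : maximum number of frames we may score (inference budget)
    top_k      : how many relevant frames to return (the "needles")
    """
    scored = {}
    step = max(1, num_frames // 8)          # coarse initial stride
    frontier = set(range(0, num_frames, step))
    while frontier and len(scored) < budget:
        # Score the current candidates (spatial search on each frame).
        for f in sorted(frontier):
            if len(scored) >= budget:
                break
            if f not in scored:
                scored[f] = score_fn(f)
        # Zoom in: halve the stride and resample around the best frames.
        step = max(1, step // 2)
        best = sorted(scored, key=scored.get, reverse=True)[:top_k]
        frontier = ({max(0, b - step) for b in best}
                    | {min(num_frames - 1, b + step) for b in best})
        if step == 1 and frontier <= scored.keys():
            break                            # fully refined; nothing new to score
    return sorted(scored, key=scored.get, reverse=True)[:top_k]
```

With a synthetic scorer peaked at one frame (e.g., `lambda f: -abs(f - target)`), the loop homes in on the neighborhood of `target` while scoring only `budget` frames out of `num_frames`, which is the efficiency argument behind reframing temporal search this way.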
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Long Video Understanding | LongVideoBench (val) | Accuracy | 57.49 | 210 |
| Video Question Answering | VideoMME | -- | -- | 210 |
| Video Question Answering | NExT-QA (test) | Accuracy | 80.4 | 204 |
| Video Question Answering | LongVideoBench | Accuracy | 47.3 | 180 |
| Video Question Answering | EgoSchema subset | Accuracy | 66.6 | 114 |
| Video Understanding | Video-MME | Overall Score | 69.77 | 96 |
| Video Understanding | LongVideoBench | -- | -- | 92 |
| Video Question Answering | NextQA | Accuracy | 80.4 | 78 |
| Long Video Understanding | VideoMME | -- | -- | 40 |
| Video Question Answering | Video-MME Long | Accuracy | 55.2 | 36 |