
VideoStir: Understanding Long Videos via Spatio-Temporally Structured and Intent-Aware RAG

About

Scaling multimodal large language models (MLLMs) to long videos is constrained by limited context windows. Retrieval-augmented generation (RAG) offers a promising remedy by organizing query-relevant visual evidence into a compact context, but most existing methods (i) flatten videos into independent segments, breaking their inherent spatio-temporal structure, and (ii) depend on explicit semantic matching, which can miss cues that are only implicitly relevant to the query's intent. To overcome these limitations, we propose VideoStir, a structured and intent-aware long-video RAG framework. It first structures a video as a clip-level spatio-temporal graph, then performs multi-hop retrieval to aggregate evidence across distant yet contextually related events. It further introduces an MLLM-backed intent-relevance scorer that retrieves frames based on their alignment with the query's reasoning intent. To support this capability, we curate IR-600K, a large-scale dataset tailored for learning frame-query intent alignment. Experiments show that VideoStir is competitive with state-of-the-art baselines without relying on auxiliary information, highlighting the promise of shifting long-video RAG from flattened semantic matching to structured, intent-aware reasoning. Code and checkpoints are available at https://github.com/RomGai/VideoStir.
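To make the multi-hop retrieval idea concrete, the sketch below models clips as nodes in a spatio-temporal graph, seeds retrieval with the clips that score highest against the query, and then expands along graph edges so that distant but connected events can be pulled in. All names, scores, and the expansion scheme are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of graph-based multi-hop retrieval over video clips.
# The relevance scores stand in for a query-clip similarity model; edges
# stand in for the paper's spatio-temporal links between clips.
from dataclasses import dataclass, field

@dataclass
class ClipNode:
    clip_id: int
    relevance: float                      # assumed query-clip similarity
    neighbors: list = field(default_factory=list)  # spatio-temporal edges

def multi_hop_retrieve(nodes, seed_k=2, hops=2, budget=4):
    """Seed with the top-k clips by relevance, expand along graph edges
    for a fixed number of hops, then keep the `budget` best clips."""
    ranked = sorted(nodes, key=lambda n: n.relevance, reverse=True)
    frontier = ranked[:seed_k]
    retrieved = {n.clip_id: n for n in frontier}
    for _ in range(hops):
        next_frontier = []
        for node in frontier:
            for nb in node.neighbors:
                if nb.clip_id not in retrieved:
                    retrieved[nb.clip_id] = nb
                    next_frontier.append(nb)
        frontier = next_frontier
    best = sorted(retrieved.values(),
                  key=lambda n: n.relevance, reverse=True)[:budget]
    return sorted(n.clip_id for n in best)

# Toy graph: clips 0-4 in temporal order, plus one long-range edge 0<->4
# linking two distant but contextually related events.
clips = [ClipNode(i, r) for i, r in enumerate([0.9, 0.2, 0.1, 0.3, 0.8])]
for a, b in [(0, 1), (1, 2), (2, 3), (3, 4), (0, 4)]:
    clips[a].neighbors.append(clips[b])
    clips[b].neighbors.append(clips[a])

print(multi_hop_retrieve(clips))  # → [0, 1, 3, 4]
```

Here the long-range edge lets the expansion reach clip 3 through clip 4, evidence that pure flat top-k similarity over independent segments would miss.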

Honghao Fu, Miao Xu, Yiwei Wang, Dailing Zhang, Jun Liu, Yujun Cai • 2026

Related benchmarks

Task                          | Dataset                          | Result                | Rank
Video Question Answering      | EgoSchema (test)                 | Accuracy 67.2         | 90
Long Video Question Answering | LV-Bench (val)                   | Overall Accuracy 66   | 20
Long Video Question Answering | Video-MME Long without Subtitles | Overall Accuracy 62.1 | 16
