Bridging Time and Space: Decoupled Spatio-Temporal Alignment for Video Grounding

About

Spatio-Temporal Video Grounding requires jointly localizing target objects across both temporal and spatial dimensions based on natural language queries, posing fundamental challenges for existing Multimodal Large Language Models (MLLMs). We identify two core challenges: entangled spatio-temporal alignment, arising from coupling two heterogeneous sub-tasks within the same autoregressive output space, and dual-domain visual token redundancy, where target objects exhibit simultaneous temporal and spatial sparsity, rendering the overwhelming majority of visual tokens irrelevant to the grounding query. To address these, we propose Bridge-STG, an end-to-end framework that decouples temporal and spatial localization while maintaining semantic coherence. Although decoupling is the natural remedy for this entanglement, it risks opening a semantic gap between the temporal MLLM and the spatial decoder. Bridge-STG closes this gap through two pivotal designs: a Spatio-Temporal Semantic Bridging (STSB) mechanism with Explicit Temporal Alignment (ETA), which distills the MLLM's temporal reasoning context into enriched bridging queries that serve as a robust semantic interface, and a Query-Guided Spatial Localization (QGSL) module, which leverages these queries to drive a purpose-built spatial decoder with multi-layer interactive queries and positive/negative frame sampling, jointly eliminating dual-domain visual token redundancy. Extensive experiments across multiple benchmarks demonstrate that Bridge-STG achieves state-of-the-art performance among MLLM-based methods: it improves average m_vIoU on VidSTG from 26.4 to 34.3 and shows strong cross-task transfer across various fine-grained video understanding tasks under a unified multi-task training regime.
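For reference, m_vIoU is the standard metric on VidSTG: for each query, vIoU averages the per-frame box IoU over the temporal intersection of the predicted and ground-truth tubes, normalized by the size of their temporal union, and m_vIoU is the mean of vIoU over all queries. The sketch below is a minimal illustration of that computation, not code from the paper; the function names and the frame-indexed tube representation are our own assumptions.

def box_iou(box_a, box_b):
    """IoU between two boxes in (x1, y1, x2, y2) pixel coordinates."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def viou(pred_tube, gt_tube):
    """vIoU for one query.

    pred_tube / gt_tube: dict mapping frame index -> (x1, y1, x2, y2),
    defined over the predicted / ground-truth temporal segment.
    """
    t_inter = set(pred_tube) & set(gt_tube)  # frames where both tubes exist
    t_union = set(pred_tube) | set(gt_tube)  # frames where either tube exists
    if not t_union:
        return 0.0
    return sum(box_iou(pred_tube[t], gt_tube[t]) for t in t_inter) / len(t_union)

def m_viou(pairs):
    """Mean vIoU over an iterable of (pred_tube, gt_tube) pairs."""
    pairs = list(pairs)
    return sum(viou(p, g) for p, g in pairs) / len(pairs)

Because the sum runs over the temporal intersection but is divided by the temporal union, a prediction is penalized both for missing ground-truth frames and for predicting spurious ones, which is why gains on this metric reflect improvements in temporal and spatial localization jointly.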

Xuezhen Tu, Jingyu Wu, Fangyu Kang, Qingpeng Nong, Kaijin Zhang, Chaoyue Niu, Fan Wu • 2026

Related benchmarks

Task                                 Dataset             Metric                    Result   Rank
Visual Object Tracking               GOT-10k (test)      Average Overlap           79.3     408
Referring Expression Comprehension   RefCOCO+ (val)      --                        --       354
Referring Expression Comprehension   RefCOCO (val)       --                        --       344
Referring Expression Comprehension   RefCOCO (testA)     --                        --       342
Referring Expression Comprehension   RefCOCOg (val)      --                        --       300
Referring Expression Comprehension   RefCOCOg (test)     --                        --       300
Referring Expression Comprehension   RefCOCO+ (test-A)   --                        --       172
Referring Expression Comprehension   RefCOCO+ (test-B)   --                        --       167
Referring Expression Comprehension   RefCOCO (test-B)    --                        --       160
Temporal Video Grounding             Charades-STA        Rank-1 Recall (IoU=0.5)   70.3     47

Showing 10 of 14 rows.
