
Track and Caption Any Motion: Query-Free Motion Discovery and Description in Videos

About

We propose Track and Caption Any Motion (TCAM), a motion-centric framework for automatic video understanding that discovers and describes motion patterns without user queries. Understanding videos in challenging conditions such as occlusion, camouflage, or rapid movement often depends more on motion dynamics than on static appearance. TCAM autonomously observes a video, identifies multiple motion activities, and spatially grounds each natural language description to its corresponding trajectory through a motion-field attention mechanism. Our key insight is that motion patterns, when aligned with contrastive vision-language representations, provide powerful semantic signals for recognizing and describing actions. Through unified training that combines global video-text alignment with fine-grained spatial correspondence, TCAM enables query-free discovery of multiple motion expressions via multi-head cross-attention. On the MeViS benchmark, TCAM achieves 58.4% R@1 for video-to-text retrieval and 64.9 J&F for spatial grounding, and it discovers 4.8 relevant expressions per video with 84.7% precision, demonstrating strong cross-task generalization.
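The abstract describes query-free discovery via multi-head cross-attention, where a fixed set of learned motion queries attends over per-frame video features so each query can latch onto a distinct motion activity. The paper's actual architecture is not reproduced here; the sketch below is a minimal NumPy illustration of that attention pattern, with all names (`motion queries`, feature dimensions, head count) chosen for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_cross_attention(queries, video_feats, num_heads=4):
    """Let each motion query attend over per-frame video features.

    queries:     (Q, D) learned motion-query embeddings (hypothetical)
    video_feats: (T, D) per-frame features from a video encoder
    Returns:     (Q, D) motion-conditioned descriptors, one per query.
    """
    Q, D = queries.shape
    T, _ = video_feats.shape
    assert D % num_heads == 0, "feature dim must split evenly across heads"
    d = D // num_heads

    # Split the feature dimension into heads: (H, Q, d) and (H, T, d).
    q = queries.reshape(Q, num_heads, d).transpose(1, 0, 2)
    k = video_feats.reshape(T, num_heads, d).transpose(1, 0, 2)
    v = k  # keys and values share the frame features in this sketch

    # Scaled dot-product attention per head: (H, Q, T).
    attn = softmax(q @ k.transpose(0, 2, 1) / np.sqrt(d), axis=-1)

    # Weighted sum of frame features, then merge heads back to (Q, D).
    out = attn @ v
    return out.transpose(1, 0, 2).reshape(Q, D)
```

In a full system, each of the Q output descriptors would be decoded into a natural language expression and matched to a trajectory; here the sketch only shows how multiple queries can read the same video features in parallel without any user-supplied text query.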

Bishoy Galoaa, Sarah Ostadabbas · 2025

Related benchmarks

Task                             Dataset        Metric      Result  Rank
Spatio-Temporal Video Grounding  HC-STVG (val)  Mean vIoU   42.3    19
Text-to-Video Retrieval          MeViS          Recall@1    55.6    6
Video Object Grounding           MeViS          J Score     62.3    6
Video-to-Text Retrieval          MeViS          R@1 (V2T)   58.4    6
Text-to-Video Retrieval          MeViS (test)   R@1         0.568   5
Video-to-Text Retrieval          MeViS (test)   R@1         59.2    5
Spatial Grounding                MeViS (val)    J Score     62.3    3
