
VideoMolmo: Spatio-Temporal Grounding Meets Pointing

About

Spatio-temporal localization is vital for precise interactions across diverse domains, from biological research to autonomous navigation and interactive interfaces. Current video-based approaches, while proficient in tracking, lack the sophisticated reasoning capabilities of large language models, limiting their contextual understanding and generalization. We introduce VideoMolmo, a large multimodal model tailored for fine-grained spatio-temporal pointing conditioned on textual descriptions. Building upon the Molmo architecture, VideoMolmo incorporates a temporal module utilizing an attention mechanism to condition each frame on preceding frames, ensuring temporal consistency. Additionally, our novel temporal mask fusion pipeline employs SAM2 for bidirectional point propagation, significantly enhancing coherence across video sequences. This two-step decomposition, i.e., first using the LLM to generate precise pointing coordinates, then relying on a sequential mask-fusion module to produce coherent segmentation, not only simplifies the task for the language model but also enhances interpretability. Due to the lack of suitable datasets, we curate a comprehensive dataset comprising 72k video-caption pairs annotated with 100k object points. To evaluate the generalization of VideoMolmo, we introduce VPoS-Bench, a challenging out-of-distribution benchmark spanning five real-world scenarios: Cell Tracking, Egocentric Vision, Autonomous Driving, Video-GUI Interaction, and Robotics. We also evaluate our model on Referring Video Object Segmentation (Refer-VOS) and Reasoning VOS tasks. In comparison to existing models, VideoMolmo substantially improves spatio-temporal pointing accuracy and reasoning capability. Our code and models are publicly available at https://github.com/mbzuai-oryx/VideoMolmo.
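The two-step decomposition described above can be sketched in code. The function names, the stub point predictor, and the mask-propagation stand-in below are all hypothetical placeholders (the real implementation, including the SAM2 integration, is in the linked repository); only the control flow mirrors the paper's description: first per-frame pointing coordinates, then bidirectional propagation fused into per-frame masks.

```python
from typing import List, Tuple

Point = Tuple[float, float]   # (x, y) in normalized frame coordinates
Mask = List[List[int]]        # binary mask as a 2D grid

def predict_points(frames: List[str], query: str) -> List[Point]:
    """Step 1 (hypothetical stub): the language model emits one pointing
    coordinate per frame, each frame conditioned on its predecessors.
    Here we just return the frame center for every frame."""
    return [(0.5, 0.5) for _ in frames]

def propagate_mask(point: Point, direction: int, size: int = 4) -> Mask:
    """Hypothetical stand-in for SAM2 point-prompted propagation.
    `direction` (+1 forward, -1 backward) is ignored by this stub."""
    y, x = int(point[1] * size), int(point[0] * size)
    return [[1 if (r, c) == (y, x) else 0 for c in range(size)]
            for r in range(size)]

def fuse_bidirectional(fwd: Mask, bwd: Mask) -> Mask:
    """Merge forward- and backward-propagated masks (here: pixel-wise OR)."""
    return [[a | b for a, b in zip(rf, rb)] for rf, rb in zip(fwd, bwd)]

def video_molmo_pipeline(frames: List[str], query: str) -> List[Mask]:
    points = predict_points(frames, query)               # step 1: pointing
    return [fuse_bidirectional(propagate_mask(p, +1),    # step 2: fusion
                               propagate_mask(p, -1))
            for p in points]

masks = video_molmo_pipeline(["f0", "f1", "f2"], "the moving cell")
```

Keeping the pointing step separate from segmentation, as above, is what lets the language model reason only about coordinates while the propagation module handles temporal coherence.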

Ghazi Shazan Ahmad, Ahmed Heakl, Hanan Gani, Abdelrahman Shaker, Zhiqiang Shen, Fahad Shahbaz Khan, Salman Khan• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Referring Video Object Segmentation | Ref-YouTube-VOS (val) | J&F | 67.3 | 244 |
| Referring Video Object Segmentation | MeViS (val) | J&F | 53.9 | 161 |
| Spatio-Temporal Video Grounding | VidSTG Interrogative Sentences (test) | m_vIoU | 11.7 | 40 |
| Reasoning Video Object Segmentation | ReasonVOS (test) | J&F | 51.1 | 39 |
| Referring Video Object Segmentation | Ref-DAVIS (val) | J&F | 72.5 | 33 |
| Spatio-Temporal Video Grounding | VidSTG Declarative Sentences (test) | m_vIoU | 15.6 | 24 |
| Video Referring Expression Segmentation | MeViS (val-u) | J&F | 57.0 | 18 |
| Referring Video Object Segmentation | MeViS (val-u) | J&F | 57.0 | 17 |
| Tracking | Molmo2 Track | Animals J&F | 68.4 | 17 |
| Spatio-Temporal Video Grounding | HCSTVG | m_tIoU | 44.6 | 7 |

All J&F scores are reported on a 0–100 scale.
