VideoGLaMM: A Large Multimodal Model for Pixel-Level Visual Grounding in Videos
About
Fine-grained alignment between videos and text is challenging due to the complex spatial and temporal dynamics in videos. Existing video-based Large Multimodal Models (LMMs) handle basic conversations but struggle with precise pixel-level grounding in videos. To address this, we introduce VideoGLaMM, an LMM designed for fine-grained pixel-level grounding in videos based on user-provided textual inputs. Our design seamlessly connects three key components: a Large Language Model, a dual vision encoder that emphasizes both spatial and temporal details, and a spatio-temporal decoder for accurate mask generation. This connection is facilitated via tunable V→L and L→V adapters that enable close Vision-Language (VL) alignment. The architecture is trained to synchronize both the spatial and temporal elements of video content with textual instructions. To enable fine-grained grounding, we curate a multimodal dataset featuring detailed visually-grounded conversations using a semi-automatic annotation pipeline, resulting in a diverse set of 38k video-QA triplets along with 83k objects and 671k masks. We evaluate VideoGLaMM on three challenging tasks: Grounded Conversation Generation, Visual Grounding, and Referring Video Segmentation. Experimental results show that our model consistently outperforms existing approaches across all three tasks.
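The data flow described above (dual vision encoder → V→L adapter → LLM → L→V adapter → spatio-temporal decoder) can be sketched as follows. This is a minimal illustrative skeleton, not the paper's implementation: all dimensions are made up, the encoders and the LLM are replaced by random stand-ins, and the adapters are plain linear projections.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes for illustration only (not the paper's actual configuration)
T, H, W = 8, 14, 14          # video frames and spatial token grid
d_vis, d_llm = 256, 512      # vision-side and LLM-side hidden sizes

def adapter(x, w):
    """A tunable adapter, reduced here to a single linear projection."""
    return x @ w

# Dual vision encoder (stand-in): per-pixel features for each frame
video_feats = rng.standard_normal((T, H * W, d_vis))
spatial_tokens = video_feats.mean(axis=0)   # (H*W, d_vis): spatially detailed tokens
temporal_tokens = video_feats.mean(axis=1)  # (T, d_vis):   per-frame temporal tokens

# V->L adapter: project both token streams into the LLM embedding space
W_vl = rng.standard_normal((d_vis, d_llm)) * 0.02
vis_tokens = adapter(np.concatenate([spatial_tokens, temporal_tokens]), W_vl)

# The LLM would consume [visual tokens; text tokens] and produce a hidden
# state for a segmentation query; here we fake that hidden state.
seg_hidden = rng.standard_normal(d_llm)

# L->V adapter: map the LLM's segmentation hidden state back to vision space
W_lv = rng.standard_normal((d_llm, d_vis)) * 0.02
seg_query = adapter(seg_hidden, W_lv)

# Spatio-temporal decoder (stand-in): similarity between the query and the
# per-pixel features yields one coarse mask-logit map per frame
mask_logits = np.einsum('d,tpd->tp', seg_query, video_feats).reshape(T, H, W)
print(mask_logits.shape)  # one (H, W) logit map per frame
```

In the actual model the stand-ins above are a real dual encoder, an LLM emitting grounded responses, and a learned spatio-temporal mask decoder; the sketch only shows where the two adapters sit in the pipeline.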
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Referring Video Object Segmentation | Ref-YouTube-VOS (val) | J&F Score 66.8 | 200 |
| Referring Video Object Segmentation | MeViS (val) | J&F Score 0.452 | 122 |
| Referring Video Segmentation | MeViS | J&F Score 45.15 | 50 |
| Spatio-Temporal Video Grounding | VidSTG Interrogative Sentences (test) | -- | 33 |
| Referring Video Segmentation | MeViS (test) | J&F Score 45.2 | 18 |
| Referring Video Segmentation | Ref-DAVIS 2017 (test) | Jaccard Index 73.3 | 6 |
| Grounded Conversation Generation | GCG (test) | mIoU 62.34 | 3 |