
Sa2VA: Marrying SAM2 with LLaVA for Dense Grounded Understanding of Images and Videos

About

This work presents Sa2VA, the first unified model for dense grounded understanding of both images and videos. Unlike existing multi-modal large language models, which are often limited to specific modalities and tasks, Sa2VA supports a wide range of image and video tasks, including referring segmentation and conversation, with minimal one-shot instruction tuning. Sa2VA combines SAM-2, a foundation video segmentation model, with LLaVA, an advanced vision-language model, and unifies text, image, and video into a shared LLM token space. Using the LLM, Sa2VA generates instruction tokens that guide SAM-2 in producing precise masks, enabling grounded, multi-modal understanding of both static and dynamic visual content. Additionally, we introduce Ref-SAV, an auto-labeled dataset containing over 72k object expressions in complex video scenes, designed to boost model performance. We also manually validate 2k video objects in the Ref-SAV dataset to benchmark referring video object segmentation in complex environments. Experiments show that Sa2VA achieves strong performance across multiple tasks, particularly in referring video object segmentation, highlighting its potential for complex real-world applications. In addition, Sa2VA can be easily extended to various VLMs, including Qwen-VL and Intern-VL, keeping pace with the rapid progress of open-source VLMs. Code and models have been provided to the community.
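The grounding flow described above (an LLM instruction token prompting SAM-2's mask decoder) can be sketched in miniature. This is an illustrative toy, not the released API: the function names, dimensions, and the linear projection are assumptions, and real SAM-2 uses a transformer decoder rather than a per-pixel dot product.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN_DIM = 4096   # assumed LLM hidden size
PROMPT_DIM = 256    # assumed SAM-2 prompt-embedding size

def llm_seg_token_state(hidden_dim: int) -> np.ndarray:
    """Stand-in for the MLLM: the hidden state of the special [SEG] token
    emitted at the end of the grounded answer."""
    return rng.standard_normal(hidden_dim)

def project_to_prompt(h_seg: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Linear projection from the LLM token space into SAM-2's prompt space."""
    return W @ h_seg

def toy_sam2_decode(prompt: np.ndarray, feat: np.ndarray) -> np.ndarray:
    """Toy mask decoder: score each spatial feature against the prompt
    embedding, then threshold to a binary mask."""
    logits = np.einsum("hwc,c->hw", feat, prompt)
    return (logits > 0).astype(np.uint8)

W = rng.standard_normal((PROMPT_DIM, HIDDEN_DIM)) / np.sqrt(HIDDEN_DIM)
image_feat = rng.standard_normal((64, 64, PROMPT_DIM))  # per-frame features

h_seg = llm_seg_token_state(HIDDEN_DIM)
mask = toy_sam2_decode(project_to_prompt(h_seg, W), image_feat)
print(mask.shape)  # a 64x64 binary mask for the referred object
```

For video, the same prompt embedding would condition the decoder on every frame, with SAM-2's memory handling temporal propagation.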

Haobo Yuan, Xiangtai Li, Tao Zhang, Yueyi Sun, Zilong Huang, Shilin Xu, Shunping Ji, Yunhai Tong, Lu Qi, Jiashi Feng, Ming-Hsuan Yang • 2025

Related benchmarks

Task                                 Dataset                Metric      Result   Rank
Referring Expression Segmentation    RefCOCO (testA)        cIoU        84.2     217
Referring Expression Segmentation    RefCOCO+ (val)         cIoU        77.6     201
Referring Video Object Segmentation  Ref-YouTube-VOS (val)  J&F Score   70.7     200
Referring Expression Segmentation    RefCOCO (testB)        cIoU        79.5     191
Referring Expression Segmentation    RefCOCO+ (testA)       cIoU        81.2     190
Referring Expression Segmentation    RefCOCO (val)          cIoU        82.4     190
Referring Expression Segmentation    RefCOCO+ (testB)       cIoU        73.1     188
Referring Video Object Segmentation  MeViS (val)            J&F Score   0.469    122
Referring Expression Segmentation    RefCOCOg (val)         cIoU        79.7     107
Referring Expression Segmentation    RefCOCOg (val (U))     --          --       89

Showing 10 of 41 rows
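For readers unfamiliar with the cIoU column: cumulative IoU pools intersection and union pixel counts over the whole evaluation split before dividing, rather than averaging per-image IoUs. A minimal sketch (the helper name and the toy masks are illustrative):

```python
import numpy as np

def ciou(preds, gts):
    """Cumulative IoU as a percentage: total intersection pixels over all
    mask pairs, divided by total union pixels."""
    inter = sum(np.logical_and(p, g).sum() for p, g in zip(preds, gts))
    union = sum(np.logical_or(p, g).sum() for p, g in zip(preds, gts))
    return 100.0 * inter / union

# Tiny illustration with two 4x4 binary mask pairs: one perfect match
# (inter=4, union=4) and one total miss (inter=0, union=16).
pred = [np.eye(4, dtype=bool), np.ones((4, 4), dtype=bool)]
gt   = [np.eye(4, dtype=bool), np.zeros((4, 4), dtype=bool)]
print(ciou(pred, gt))  # 4 / 20 -> 20.0
```

Pooling counts this way weights large objects more heavily than a per-image mean IoU would, which is why the two metrics can diverge on the same split.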
