
One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos

About

We introduce VideoLISA, a video-based multimodal large language model designed to tackle the problem of language-instructed reasoning segmentation in videos. Leveraging the reasoning capabilities and world knowledge of large language models, and augmented by the Segment Anything Model, VideoLISA generates temporally consistent segmentation masks in videos based on language instructions. Existing image-based methods, such as LISA, struggle with video tasks due to the additional temporal dimension, which requires temporal dynamic understanding and consistent segmentation across frames. VideoLISA addresses these challenges by integrating a Sparse Dense Sampling strategy into the video-LLM, which balances temporal context and spatial detail within computational constraints. Additionally, we propose a One-Token-Seg-All approach using a specially designed <TRK> token, enabling the model to segment and track objects across multiple frames. Extensive evaluations on diverse benchmarks, including our newly introduced ReasonVOS benchmark, demonstrate VideoLISA's superior performance in video object segmentation tasks involving complex reasoning, temporal understanding, and object tracking. While optimized for videos, VideoLISA also shows promising generalization to image segmentation, revealing its potential as a unified foundation model for language-instructed object segmentation. Code and model will be available at: https://github.com/showlab/VideoLISA.
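To make the two key ideas concrete, here is a minimal, hedged sketch of what Sparse Dense Sampling and One-Token-Seg-All could look like. All function names, thresholds, and frame counts below are illustrative assumptions, not the paper's actual implementation: dense indices stand in for frames encoded with coarse spatial detail (temporal context), a sparse subset stands in for frames kept at full resolution, and a single token embedding is matched against per-frame features to yield a mask for every frame.

```python
import numpy as np

def sparse_dense_sample(num_frames, n_dense=32, n_sparse=4):
    """Illustrative sampling: a dense set of frame indices (temporal
    context, low spatial detail) plus a sparse subset of those indices
    (full spatial detail). Counts are assumptions, not the paper's."""
    dense = np.linspace(0, num_frames - 1,
                        num=min(n_dense, num_frames)).astype(int)
    picks = np.linspace(0, len(dense) - 1,
                        num=min(n_sparse, len(dense))).astype(int)
    sparse = dense[picks]
    return dense, sparse

def one_token_seg_all(trk_token, frame_features, thresh=0.5):
    """Toy One-Token-Seg-All: one <TRK>-style embedding produces a mask
    for every frame via cosine similarity against per-pixel features,
    thresholded. The real model would feed the token to a mask decoder."""
    t = trk_token / np.linalg.norm(trk_token)
    masks = []
    for feat in frame_features:                      # feat: (H, W, C)
        f = feat / (np.linalg.norm(feat, axis=-1, keepdims=True) + 1e-8)
        masks.append((f @ t) > thresh)               # boolean (H, W) mask
    return masks

# Usage: sample a 100-frame clip, then segment 3 frames with one token.
dense, sparse = sparse_dense_sample(100)
rng = np.random.default_rng(0)
feats = [rng.random((4, 4, 8)) for _ in range(3)]
masks = one_token_seg_all(np.ones(8), feats)
```

The point of the single shared token is that the same query embedding is applied to every frame, which is what encourages temporally consistent masks instead of per-frame re-identification.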

Zechen Bai, Tong He, Haiyang Mei, Pichao Wang, Ziteng Gao, Joya Chen, Lei Liu, Zheng Zhang, Mike Zheng Shou • 2024

Related benchmarks

Task | Dataset | Result | Rank
--- | --- | --- | ---
Referring Expression Segmentation | RefCOCO (testA) | -- | 217
Referring Expression Segmentation | RefCOCO+ (val) | -- | 201
Referring Video Object Segmentation | Ref-YouTube-VOS (val) | J&F 63.7 | 200
Referring Image Segmentation | RefCOCO+ (test-B) | -- | 200
Referring Image Segmentation | RefCOCO (val) | -- | 197
Referring Expression Segmentation | RefCOCO (testB) | -- | 191
Referring Expression Segmentation | RefCOCO (val) | -- | 190
Referring Expression Segmentation | RefCOCO+ (testA) | -- | 190
Referring Expression Segmentation | RefCOCO+ (testB) | -- | 188
Referring Video Object Segmentation | Ref-DAVIS 2017 (val) | J&F 68.8 | 178

Showing 10 of 38 rows.
