
InstructSeg: Unifying Instructed Visual Segmentation with Multi-modal Large Language Models

About

Boosted by Multi-modal Large Language Models (MLLMs), text-guided universal segmentation models for the image and video domains have made rapid progress recently. However, these methods are often developed separately for specific domains, overlooking the similarities in task settings and solutions across these two areas. In this paper, we define the union of referring segmentation and reasoning segmentation at both the image and video levels as Instructed Visual Segmentation (IVS). Correspondingly, we propose InstructSeg, an end-to-end segmentation pipeline equipped with MLLMs for IVS. Specifically, we employ an object-aware video perceiver to extract temporal and object information from reference frames, facilitating comprehensive video understanding. Additionally, we introduce vision-guided multi-granularity text fusion to better integrate global and detailed text information with fine-grained visual guidance. By leveraging multi-task and end-to-end training, InstructSeg demonstrates superior performance across diverse image and video segmentation tasks, surpassing both segmentation specialists and MLLM-based methods with a single model. Our code is available at https://github.com/congvvc/InstructSeg.
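The pipeline described above can be sketched at a high level: an object-aware perceiver pools reference-frame features into a compact set of temporal/object tokens, and a vision-guided fusion step weights word-level text tokens by visual similarity before combining them with the sentence-level embedding. The following is a minimal NumPy sketch of that data flow only; the function names, shapes, and pooling/attention stand-ins are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def video_perceiver(frames, num_queries=8):
    """Object-aware video perceiver (sketch, hypothetical):
    compress reference-frame patch features into a fixed set of
    temporal/object tokens. Real model would use cross-attention;
    here we stand in with chunked average pooling."""
    # frames: (T, N, C) patch features from T reference frames
    T, N, C = frames.shape
    flat = frames.reshape(T * N, C)
    chunks = np.array_split(flat, num_queries, axis=0)
    return np.stack([c.mean(axis=0) for c in chunks])  # (num_queries, C)

def text_fusion(global_text, word_tokens, visual_guide):
    """Vision-guided multi-granularity text fusion (sketch, hypothetical):
    weight detailed word-level tokens by similarity to a visual
    guidance vector, then combine with the global sentence embedding."""
    # global_text: (C,), word_tokens: (L, C), visual_guide: (C,)
    scores = word_tokens @ visual_guide        # (L,) similarity logits
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over words
    detailed = weights @ word_tokens           # (C,) weighted word summary
    return global_text + detailed              # fused text condition

rng = np.random.default_rng(0)
C = 16
frames = rng.standard_normal((4, 32, C))       # 4 frames x 32 patches
obj_tokens = video_perceiver(frames)           # (8, C) object/temporal tokens
fused = text_fusion(rng.standard_normal(C),
                    rng.standard_normal((6, C)),
                    obj_tokens.mean(axis=0))   # (C,) fused condition
print(obj_tokens.shape, fused.shape)           # (8, 16) (16,)
```

In the actual model, both outputs would condition a mask decoder; this sketch only illustrates how temporal/object pooling and multi-granularity text fusion fit together.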

Cong Wei, Yujie Zhong, Haoxian Tan, Yingsen Zeng, Yong Liu, Zheng Zhao, Yujiu Yang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Video Referring Segmentation | ReVOS Referring | J Score 54.8 | 19 |
| Reasoning Video Object Segmentation | ReVOS Overall (Entire Dataset) | J&F Score 54.5 | 14 |
| Reasoning Video Object Segmentation | ReVOS Reasoning | Jaccard (J) 49.2 | 12 |
| Referring Video Object Segmentation | ReVOS Reasoning | J&F Score 51.9 | 10 |
| Video Object Segmentation | ReVOS Overall | J&F Score 54.5 | 10 |
| Video Object Segmentation | ReVOS | J&F Score 57.0 | 7 |
| Text-to-mask | GroundingSuite | Stuff Score 56.2 | 5 |
