
Segment Everything Everywhere All at Once

About

In this work, we present SEEM, a promptable and interactive model for segmenting everything everywhere all at once in an image, as shown in Fig. 1. In SEEM, we propose a novel decoding mechanism that enables diverse prompting for all types of segmentation tasks, aiming at a universal segmentation interface that behaves like large language models (LLMs). More specifically, SEEM is designed with four desiderata: i) Versatility. We introduce a new visual prompt to unify different spatial queries including points, boxes, scribbles, and masks, which can further generalize to a different referring image; ii) Compositionality. We learn a joint visual-semantic space between text and visual prompts, which facilitates the dynamic composition of the two prompt types required for various segmentation tasks; iii) Interactivity. We further incorporate learnable memory prompts into the decoder to retain segmentation history through mask-guided cross-attention from the decoder to image features; and iv) Semantic-awareness. We use a text encoder to encode text queries and mask labels into the same semantic space for open-vocabulary segmentation. We conduct a comprehensive empirical study to validate the effectiveness of SEEM across diverse segmentation tasks. Notably, our single SEEM model achieves competitive performance across interactive segmentation, generic segmentation, referring segmentation, and video object segmentation on 9 datasets with a minimum of 1/100 supervision. Furthermore, SEEM showcases a remarkable capacity for generalization to novel prompts or their combinations, making it readily usable as a universal image segmentation interface.

Xueyan Zou, Jianwei Yang, Hao Zhang, Feng Li, Linjie Li, Jianfeng Wang, Lijuan Wang, Jianfeng Gao, Yong Jae Lee • 2023

Related benchmarks

Task                               Dataset            Metric    Result   Rank
Video Object Segmentation          DAVIS 2017 (val)   J mean    59.5     1193
Semantic Segmentation              Cityscapes         mIoU      49.3     658
Instance Segmentation              COCO (val)         --        --       475
Salient Object Detection           DUTS (test)        M (MAE)   0.326    325
Instance Segmentation              COCO               APmask    46.8     291
Referring Expression Segmentation  RefCOCO+ (testA)   cIoU      65.7     230
Panoptic Segmentation              COCO (val)         PQ        56.1     219
Reasoning Segmentation             ReasonSeg (val)    gIoU      25.5     193
Semantic Segmentation              COCO (val)         mIoU      66.3     150
Reasoning Segmentation             ReasonSeg (test)   gIoU      24.3     145
Showing 10 of 100 rows.
