
Scaling Open-Vocabulary Image Segmentation with Image-Level Labels

About

We design an open-vocabulary image segmentation model that organizes an image into meaningful regions indicated by arbitrary texts. Recent works (CLIP and ALIGN), despite attaining impressive open-vocabulary classification accuracy from image-level caption labels, cannot segment visual concepts at the pixel level. We argue that these models miss an important step of visual grouping, which organizes pixels into groups before learning visual-semantic alignments. We propose OpenSeg to address this issue while still making use of scalable image-level caption supervision. First, it learns to propose segmentation masks for possible organizations. Then it learns visual-semantic alignments by aligning each word in a caption to one or a few predicted masks. We find that mask representations are key to supporting the learning of image segmentation from captions, making it possible to scale up the dataset and vocabulary sizes. Thanks to its scalability, OpenSeg significantly outperforms the recent open-vocabulary method LSeg by +19.9 mIoU on the PASCAL dataset.
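The two-stage idea above (propose masks, then align caption words to masks) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function name, the cosine-similarity scoring, and the softmax soft assignment are all assumptions chosen to make the grouping-then-alignment step concrete.

```python
import numpy as np

def word_mask_alignment(word_embs, mask_embs):
    """Align each caption word to predicted segmentation masks.

    word_embs: (num_words, dim) embeddings of caption words.
    mask_embs: (num_masks, dim) pooled visual features, one per
               proposed mask region.
    Illustrative sketch only; not OpenSeg's exact loss.
    """
    # L2-normalize both sets of embeddings so dot products are cosines.
    w = word_embs / np.linalg.norm(word_embs, axis=1, keepdims=True)
    m = mask_embs / np.linalg.norm(mask_embs, axis=1, keepdims=True)
    sim = w @ m.T  # (num_words, num_masks) cosine similarities

    # Soft assignment: each word distributes its attention over masks,
    # so a word can align to one or a few predicted regions.
    e = np.exp(sim - sim.max(axis=1, keepdims=True))
    assign = e / e.sum(axis=1, keepdims=True)

    # Per-word alignment score: similarity weighted by the assignment.
    # A training loss would push these scores up for matching pairs.
    scores = (assign * sim).sum(axis=1)
    return assign, scores
```

In this sketch, pooling image features inside each predicted mask (rather than per pixel) is what gives the region-level representation that the abstract argues is key for learning from caption-only supervision.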

Golnaz Ghiasi, Xiuye Gu, Yin Cui, Tsung-Yi Lin • 2021

Related benchmarks

Task                   Dataset                        Result     Rank
Semantic segmentation  ADE20K                         24.8 mIoU  936
Semantic segmentation  PASCAL VOC (val)               72.2 mIoU  338
Semantic segmentation  Pascal VOC (test)              63.8 mIoU  236
Semantic segmentation  ADE20K A-150                   28.6 mIoU  188
Semantic segmentation  Pascal Context 59              48.2 mIoU  164
Semantic segmentation  PASCAL-Context 59 class (val)  48.2 mIoU  125
Semantic segmentation  Pascal Context                 48.2 mIoU  111
Semantic segmentation  Pascal VOC 20                  72.2 mIoU  105
Semantic segmentation  COCO                           38.0 mIoU  96
Semantic segmentation  ADE20K 847                     8.10 mIoU  83
Showing 10 of 65 rows
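All results above are reported as mean Intersection-over-Union (mIoU), the standard semantic segmentation metric: per-class IoU between the predicted and ground-truth label maps, averaged over classes present in the union. A minimal sketch of that computation (function name and the flat-array input are illustrative, not tied to any benchmark's evaluation code):

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean Intersection-over-Union over classes.

    pred, gt: integer label arrays of the same shape (class id per pixel).
    Classes absent from both prediction and ground truth are skipped,
    so they neither help nor hurt the average.
    """
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, gt == c).sum()
        union = np.logical_or(pred == c, gt == c).sum()
        if union > 0:
            ious.append(inter / union)
    return float(np.mean(ious))
```

Benchmark suites typically accumulate the intersection and union counts over the whole validation set before dividing, rather than averaging per-image scores, but the per-class ratio is the same.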
