
Scaling Open-Vocabulary Image Segmentation with Image-Level Labels

About

We design an open-vocabulary image segmentation model that organizes an image into meaningful regions indicated by arbitrary texts. Recent works (CLIP and ALIGN), despite attaining impressive open-vocabulary classification accuracy with image-level caption labels, are unable to segment visual concepts at the pixel level. We argue that these models miss an important step of visual grouping, which organizes pixels into groups before learning visual-semantic alignments. We propose OpenSeg to address this issue while still making use of scalable image-level supervision from captions. First, it learns to propose segmentation masks for possible organizations of the image. Then it learns visual-semantic alignments by aligning each word in a caption to one or a few predicted masks. We find that mask representations are key to supporting the learning of image segmentation from captions, making it possible to scale up the dataset and vocabulary sizes. Thanks to its scalability, OpenSeg significantly outperforms the recent open-vocabulary method LSeg by +19.9 mIoU on the PASCAL dataset.
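The word-to-mask alignment described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the function names, the temperature value, and the scoring of a caption by averaging per-word alignments are assumptions made here for clarity; the actual OpenSeg training objective and mask features differ in detail.

```python
import numpy as np

def l2norm(x, axis=-1):
    """Normalize embeddings to unit length along the last axis."""
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def grounding_score(word_emb, mask_emb, tau=0.07):
    """Illustrative region-word grounding: softly align each caption word
    to the predicted masks, then score the caption-image pair.

    word_emb: (n_words, d) embeddings of caption words (hypothetical inputs)
    mask_emb: (n_masks, d) pooled features of predicted segmentation masks
    tau: softmax temperature (value chosen arbitrarily for this sketch)
    """
    w = l2norm(word_emb)
    r = l2norm(mask_emb)
    sim = w @ r.T                             # (n_words, n_masks) cosine similarities
    attn = np.exp(sim / tau)
    attn /= attn.sum(axis=1, keepdims=True)   # softmax over masks: each word attends
                                              # to one or a few best-matching masks
    per_word = (attn * sim).sum(axis=1)       # alignment of each word to its masks
    return float(per_word.mean())             # caption-level score in [-1, 1]
```

With a low temperature, the softmax concentrates on the best-matching mask per word, which mirrors the "one or a few predicted masks" behavior described in the abstract.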

Golnaz Ghiasi, Xiuye Gu, Yin Cui, Tsung-Yi Lin • 2021

Related benchmarks

Task                   Dataset               Result (mIoU)  Rank
Semantic segmentation  ADE20K                24.8           1024
Semantic segmentation  PASCAL VOC (val)      72.2           362
Semantic segmentation  PASCAL Context (val)  42.1           360
Semantic segmentation  Pascal VOC (test)     63.8           236
Semantic segmentation  Pascal Context        48.2           217
Semantic segmentation  ADE20K A-150          28.6           217
Semantic segmentation  Pascal Context 59     48.2           204
Semantic segmentation  COCO (val)            36.1           150
Semantic segmentation  PC-59                 48.2           148
Semantic segmentation  Pascal VOC 20         72.2           130

Showing 10 of 72 rows
