
OTSeg: Multi-prompt Sinkhorn Attention for Zero-Shot Semantic Segmentation

About

The recent success of CLIP has demonstrated promising results in zero-shot semantic segmentation by transferring multimodal knowledge to pixel-level classification. However, existing approaches that leverage pre-trained CLIP knowledge still have limitations in closely aligning text embeddings with pixel embeddings. To address this issue, we propose OTSeg, a novel multimodal attention mechanism aimed at enhancing the potential of multiple text prompts for matching associated pixel embeddings. We first propose Multi-Prompts Sinkhorn (MPS), based on the Optimal Transport (OT) algorithm, which leads multiple text prompts to selectively focus on various semantic features within image pixels. Moreover, inspired by the success of Sinkformers in unimodal settings, we introduce an extension of MPS, called Multi-Prompts Sinkhorn Attention (MPSA), which effectively replaces cross-attention mechanisms within the Transformer framework in multimodal settings. Through extensive experiments, we demonstrate that OTSeg achieves state-of-the-art (SOTA) performance with significant gains on Zero-Shot Semantic Segmentation (ZS3) tasks across three benchmark datasets.
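The core of MPS is the entropy-regularized Sinkhorn iteration from Optimal Transport, which turns a prompt-to-pixel similarity matrix into a transport plan with prescribed marginals, so that each prompt attends to a distinct share of the pixels. The sketch below is a generic NumPy implementation of Sinkhorn matching between hypothetical prompt and pixel embeddings, not the authors' code; the shapes, the uniform marginals, and the `eps` value are illustrative assumptions.

```python
import numpy as np

def sinkhorn(cost, n_iters=200, eps=0.1):
    """Entropy-regularized Sinkhorn iteration (generic sketch, not the
    OTSeg implementation). `cost` is a (num_prompts, num_pixels) cost
    matrix, e.g. 1 - cosine similarity between text-prompt and pixel
    embeddings. Returns a transport plan whose row/column sums
    approximately match uniform marginals."""
    K = np.exp(-cost / eps)                      # Gibbs kernel
    a = np.full(cost.shape[0], 1.0 / cost.shape[0])  # prompt marginal
    b = np.full(cost.shape[1], 1.0 / cost.shape[1])  # pixel marginal
    u = np.ones_like(a)
    for _ in range(n_iters):
        u = a / (K @ (b / (K.T @ u)))            # alternating scaling
    v = b / (K.T @ u)
    return np.diag(u) @ K @ np.diag(v)           # transport plan T

# Toy example: 4 hypothetical text prompts vs. 16 pixel embeddings
rng = np.random.default_rng(0)
prompts = rng.normal(size=(4, 8))
pixels = rng.normal(size=(16, 8))
cost = 1 - (prompts / np.linalg.norm(prompts, axis=1, keepdims=True)) @ \
           (pixels / np.linalg.norm(pixels, axis=1, keepdims=True)).T
T = sinkhorn(cost)
print(T.shape)  # one transport plan row per prompt
```

Because the marginal over prompts is enforced, no single prompt can dominate the matching; this is the mechanism by which multiple prompts are driven to cover different semantic features of the image.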

Kwanyoung Kim, Yujin Oh, Jong Chul Ye • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Semantic segmentation | ADE20K (val) | mIoU | 21.9 | 2731
Semantic segmentation | PASCAL VOC 2012 (val) | Mean IoU | 94.4 | 2040
Semantic segmentation | PASCAL Context (val) | mIoU | 53.4 | 323
Semantic segmentation | PASCAL Context (test) | -- | -- | 176
Semantic segmentation | PASCAL-Context 59 class (val) | mIoU | 53.4 | 125
Semantic segmentation | COCO-Stuff 164K (test) | mIoU (Mean Scale) | 41.8 | 43
Semantic segmentation | COCO-Stuff 164K (val) | mIoU | 18.9 | 41
Semantic segmentation | VOC (val) | mIoU | 94.4 | 25
Semantic segmentation | VOC 2012 | mIoU (Smoothed) | 94.3 | 23
Semantic segmentation | Efficiency benchmark (NVIDIA 3090 GPU) | GFLOPS | 61.9 | 5

Other info

Code
