ReSTR: Convolution-free Referring Image Segmentation Using Transformers

About

Referring image segmentation is an advanced semantic segmentation task in which the target is not a predefined class but is described in natural language. Most existing methods for this task rely heavily on convolutional neural networks, which, however, have trouble capturing long-range dependencies between entities in the language expression and are not flexible enough to model interactions between the two different modalities. To address these issues, we present the first convolution-free model for referring image segmentation using transformers, dubbed ReSTR. Since it extracts features of both modalities through transformer encoders, it can capture long-range dependencies between entities within each modality. ReSTR then fuses the features of the two modalities with a self-attention encoder, which enables flexible and adaptive interactions between the modalities during fusion. The fused features are fed to a segmentation module, which works adaptively according to the image and language expression at hand. ReSTR is evaluated and compared with previous work on all public benchmarks, where it outperforms all existing models.
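The key idea in the abstract, fusing the two modalities by joint self-attention over one token sequence, can be illustrated with a minimal sketch. This is not the authors' implementation: the token counts and dimensions are made up, and a real model would use learned Q/K/V projections and multiple heads; here identity projections keep the example short.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, d_k):
    # Single-head scaled dot-product self-attention with identity
    # Q/K/V projections (a real encoder learns these projections).
    q = k = v = tokens
    scores = q @ k.T / np.sqrt(d_k)
    return softmax(scores, axis=-1) @ v

rng = np.random.default_rng(0)
d = 8
patch_tokens = rng.normal(size=(16, d))  # hypothetical visual patch tokens
word_tokens = rng.normal(size=(5, d))    # hypothetical language tokens

# Concatenating both modalities and running self-attention over the joint
# sequence lets every visual token attend to every word and vice versa.
fused = self_attention(np.concatenate([patch_tokens, word_tokens]), d)
print(fused.shape)  # (21, 8): one fused vector per input token
```

Because attention weights are computed per input pair, the cross-modal interaction adapts to each image and expression rather than being fixed, which is the flexibility the abstract contrasts with convolutional fusion.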

Namyup Kim, Dongwon Kim, Cuiling Lan, Wenjun Zeng, Suha Kwak• 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Referring Image Segmentation | RefCOCO (val) | mIoU | 67.22 | 259
Referring Expression Segmentation | RefCOCO (testA) | cIoU | 69.3 | 257
Referring Image Segmentation | RefCOCO+ (testB) | mIoU | 48.3 | 252
Referring Image Segmentation | RefCOCO (testA) | mIoU | 69.3 | 230
Referring Expression Segmentation | RefCOCO+ (testA) | cIoU | 60.44 | 230
Referring Expression Segmentation | RefCOCO+ (val) | cIoU | 55.78 | 223
Referring Expression Segmentation | RefCOCO (testB) | cIoU | 64.45 | 213
Referring Expression Segmentation | RefCOCO (val) | cIoU | 67.22 | 212
Referring Expression Segmentation | RefCOCO+ (testB) | cIoU | 48.27 | 210
Referring Image Segmentation | RefCOCO+ (val) | -- | -- | 179

Showing 10 of 28 rows
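The table reports two IoU-style metrics. As commonly defined in this literature (hedged: the exact definitions are not given on this page), mIoU averages the per-example intersection-over-union, while cIoU (cumulative/overall IoU) divides the total intersection by the total union across the whole test set. A minimal sketch of the per-mask IoU both metrics build on, using made-up 4x4 masks:

```python
import numpy as np

def mask_iou(pred, gt):
    """Intersection-over-union of two boolean segmentation masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union else 1.0  # both empty: perfect match

pred = np.zeros((4, 4), bool); pred[:2, :2] = True  # 4-pixel prediction
gt = np.zeros((4, 4), bool); gt[:2, :3] = True      # 6-pixel ground truth
print(mask_iou(pred, gt))  # intersection 4, union 6 -> 0.666...
```

mIoU would average this value over examples; cIoU would instead sum `inter` and `union` over the dataset before dividing, which weights large objects more heavily.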
