
Encoder Fusion Network with Co-Attention Embedding for Referring Image Segmentation

About

Referring image segmentation has recently attracted widespread interest. Previous methods perform multi-modal fusion between language and vision at the decoding side of the network, where linguistic features interact with the visual features of each scale separately; this ignores the continuous guidance of language over multi-scale visual features. In this work, we propose an encoder fusion network (EFN), which transforms the visual encoder into a multi-modal feature learning network and uses language to progressively refine the multi-modal features. Moreover, a co-attention mechanism is embedded in the EFN to realize the parallel update of multi-modal features, which promotes the consistency of the cross-modal information representation in the semantic space. Finally, we propose a boundary enhancement module (BEM) that makes the network pay more attention to fine structures. Experimental results on four benchmark datasets demonstrate that the proposed approach achieves state-of-the-art performance under different evaluation metrics without any post-processing.
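The abstract describes a co-attention mechanism that updates visual and linguistic features in parallel. The following is a minimal numpy sketch of a generic parallel co-attention update, not the authors' exact formulation: the affinity matrix, the residual update, and the feature shapes are all assumptions made for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(V, L):
    """Parallel co-attention update (illustrative sketch).

    V: visual features, shape (N, d)   -- N spatial positions
    L: linguistic features, shape (T, d) -- T words
    Both modalities are updated simultaneously from a shared affinity matrix.
    """
    A = V @ L.T                            # affinity between positions and words, (N, T)
    V_new = V + softmax(A, axis=1) @ L     # each position aggregates word features
    L_new = L + softmax(A.T, axis=1) @ V   # each word aggregates visual features
    return V_new, L_new

# Example: 16 spatial positions and a 5-word expression, 8-dim features.
rng = np.random.default_rng(0)
V = rng.standard_normal((16, 8))
L = rng.standard_normal((5, 8))
V_new, L_new = co_attention(V, L)
```

Because both updates are driven by the same affinity matrix, the two modalities are refined symmetrically in one step, which is one plausible reading of the "parallel update" the abstract refers to.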

Guang Feng, Zhiwei Hu, Lihe Zhang, Huchuan Lu• 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Referring Image Segmentation | RefCOCO (val) | mIoU 62.76 | 259 |
| Referring Expression Segmentation | RefCOCO (testA) | cIoU 65.69 | 257 |
| Referring Image Segmentation | RefCOCO+ (testB) | mIoU 43.01 | 252 |
| Referring Image Segmentation | RefCOCO (testA) | mIoU 65.69 | 230 |
| Referring Expression Segmentation | RefCOCO+ (testA) | cIoU 55.24 | 230 |
| Referring Expression Segmentation | RefCOCO+ (val) | cIoU 51.5 | 223 |
| Referring Expression Segmentation | RefCOCO (testB) | cIoU 59.67 | 213 |
| Referring Expression Segmentation | RefCOCO (val) | cIoU 62.76 | 212 |
| Referring Expression Segmentation | RefCOCO+ (testB) | cIoU 43.01 | 210 |
| Referring Image Segmentation | RefCOCO+ (val) | -- | 179 |
Showing 10 of 28 rows
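The table reports results under two IoU-based metrics. As a point of reference, mIoU averages the per-sample IoU while cIoU (also called overall IoU) divides the total intersection by the total union across the dataset, so large objects weigh more. A small sketch of both, assuming binary prediction and ground-truth masks:

```python
import numpy as np

def iou_metrics(preds, gts):
    """Compute (mIoU, cIoU) over paired lists of binary masks.

    mIoU: mean of per-sample intersection-over-union.
    cIoU: cumulative intersection divided by cumulative union.
    """
    total_inter, total_union, per_sample = 0, 0, []
    for p, g in zip(preds, gts):
        inter = np.logical_and(p, g).sum()
        union = np.logical_or(p, g).sum()
        per_sample.append(inter / union if union > 0 else 1.0)
        total_inter += inter
        total_union += union
    return float(np.mean(per_sample)), float(total_inter / total_union)

# Example: one half-correct mask and one perfect mask.
pred1, gt1 = np.array([[1, 1], [0, 0]]), np.array([[1, 0], [0, 0]])  # IoU 0.5
pred2, gt2 = np.ones((2, 2)), np.ones((2, 2))                        # IoU 1.0
miou, ciou = iou_metrics([pred1, pred2], [gt1, gt2])
# miou = 0.75, ciou = 5/6
```

The example shows why the two metrics can diverge: the perfect large mask pulls cIoU above mIoU because it contributes more pixels to the cumulative totals.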
