
Comprehensive Multi-Modal Interactions for Referring Image Segmentation

About

We investigate Referring Image Segmentation (RIS), which outputs a segmentation map corresponding to a natural language description. Addressing RIS efficiently requires considering the interactions across the visual and linguistic modalities as well as the interactions within each modality. Existing methods are limited because they either compute the different forms of interaction sequentially (leading to error propagation) or ignore intra-modal interactions. We address this limitation by performing all three interactions simultaneously through a Synchronous Multi-Modal Fusion Module (SFM). Moreover, to produce refined segmentation masks, we propose a novel Hierarchical Cross-Modal Aggregation Module (HCAM), where linguistic features facilitate the exchange of contextual information across the visual hierarchy. We present thorough ablation studies and validate our approach on four benchmark datasets, showing considerable gains over existing state-of-the-art (SOTA) methods.
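The core idea of computing all three interactions (visual–visual, linguistic–linguistic, and cross-modal) in a single step can be illustrated with self-attention over the concatenation of visual and linguistic tokens: every token attends to every other token regardless of modality, so no interaction type has to wait on another. This is only a minimal sketch of the general principle, not the paper's SFM implementation; the token counts, feature dimension, and single unprojected attention matrix are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def joint_fusion(visual, linguistic):
    """Attention over concatenated visual + linguistic tokens.

    Because every token attends to every other token, intra-visual,
    intra-linguistic, and cross-modal interactions are all computed
    simultaneously in one attention step rather than sequentially.
    (Illustrative sketch only; not the paper's SFM.)
    """
    tokens = np.concatenate([visual, linguistic], axis=0)  # (Nv+Nl, d)
    d = tokens.shape[1]
    attn = softmax(tokens @ tokens.T / np.sqrt(d))         # (N, N) weights
    fused = attn @ tokens                                  # contextualized tokens
    n_vis = visual.shape[0]
    return fused[:n_vis], fused[n_vis:]                    # split back per modality

# toy example: 6 visual tokens (e.g. a flattened feature map), 4 word tokens
rng = np.random.default_rng(0)
v = rng.normal(size=(6, 8))
l = rng.normal(size=(4, 8))
fv, fl = joint_fusion(v, l)
```

A sequential alternative (e.g. cross-modal attention applied after separate intra-modal passes) would let errors from the first stage propagate into the second, which is the failure mode the simultaneous formulation avoids.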

Kanishk Jain, Vineet Gandhi • 2021

Related benchmarks

Task                           Dataset                    Metric  Result  Rank
Referring Image Segmentation   RefCOCO+ (test-B)          mIoU    44.12   200
Referring Image Segmentation   RefCOCO (val)              mIoU    65.32   197
Referring Image Segmentation   RefCOCO (test-A)           mIoU    68.56   178
Referring Image Segmentation   RefCOCO (test-B)           --      --      119
Referring Image Segmentation   RefCOCO+ (val)             --      --      117
Referring Image Segmentation   G-Ref (val)                mIoU    49.9    95
Referring Image Segmentation   RefCOCO+ (test-A)          --      --      89
Referring Image Segmentation   ReferIt (test)             IoU     69.19   59
Referring Image Segmentation   G-Ref Google split (val)   IoU     48.95   58
Referring Image Segmentation   UNC (val)                  IoU     65.32   44

Showing 10 of 15 rows
