
ISNet: Integrate Image-Level and Semantic-Level Context for Semantic Segmentation

About

Co-occurring visual patterns make aggregating contextual information a common paradigm for enhancing pixel representations in semantic image segmentation. Existing approaches model context from the perspective of the whole image, i.e., they aggregate image-level contextual information. Despite being impressive, these methods weaken the significance of pixel representations of the same category, i.e., the semantic-level contextual information. To address this, this paper proposes to augment pixel representations by aggregating both image-level and semantic-level contextual information. First, an image-level context module is designed to capture the contextual information for each pixel across the whole image. Second, we aggregate the representations of the same category for each pixel, where the category regions are learned under the supervision of the ground-truth segmentation. Third, we compute the similarities between each pixel representation and the image-level and semantic-level contextual information, respectively. Finally, each pixel representation is augmented by a weighted aggregation of the image-level and semantic-level contextual information, with the similarities as the weights. Integrating image-level and semantic-level context allows this paper to report state-of-the-art accuracy on four benchmarks, i.e., ADE20K, LIP, COCO-Stuff and Cityscapes.
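The aggregation described in the abstract can be sketched in a few lines. The snippet below is a simplified NumPy illustration, not the paper's actual module: the image-level context is stood in for by a global mean (the paper uses a learned image-level context module), the semantic-level context is the per-class mean of same-category pixels under ground-truth labels, and the two contexts are fused per pixel with softmax-normalized dot-product similarities as weights. All function and variable names are ours.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def augment_pixels(feats, labels, num_classes):
    """feats: (N, C) pixel features; labels: (N,) ground-truth class ids.
    Returns (N, 2C): each pixel feature concatenated with its aggregated context."""
    # Image-level context: mean over all pixels (stand-in for the
    # learned image-level context module in the paper).
    img_ctx = feats.mean(axis=0)                        # (C,)
    # Semantic-level context: mean of same-category pixel representations.
    sem_ctx = np.stack([
        feats[labels == k].mean(axis=0) if (labels == k).any()
        else np.zeros(feats.shape[1])
        for k in range(num_classes)
    ])                                                  # (num_classes, C)
    sem_for_pixel = sem_ctx[labels]                     # (N, C)
    # Similarity of each pixel to the two kinds of context.
    sim_img = (feats * img_ctx).sum(axis=1)             # (N,)
    sim_sem = (feats * sem_for_pixel).sum(axis=1)       # (N,)
    w = softmax(np.stack([sim_img, sim_sem], axis=1))   # (N, 2)
    # Weighted aggregation of image-level and semantic-level context.
    ctx = w[:, :1] * img_ctx + w[:, 1:] * sem_for_pixel # (N, C)
    return np.concatenate([feats, ctx], axis=1)
```

At inference time the paper predicts the category regions rather than reading them from ground truth; here the labels are used directly to keep the sketch minimal.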

Zhenchao Jin, Bin Liu, Qi Chu, Nenghai Yu • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Semantic segmentation | ADE20K (val) | mIoU 47.31 | 2731 |
| Semantic segmentation | COCO-Stuff (test) | mIoU 41.6 | 184 |
| Human parsing | LIP (val) | mIoU 56.96 | 111 |
| Semantic segmentation | COCO-Stuff-10K (test) | mIoU 42.1 | 47 |
| Semantic segmentation | LIP (val) | mIoU 55.41 | 24 |
