
Transformer Scale Gate for Semantic Segmentation

About

Effectively encoding multi-scale contextual information is crucial for accurate semantic segmentation. Existing transformer-based segmentation models combine features across scales without any selection, so features at sub-optimal scales may degrade segmentation outcomes. Leveraging the inherent properties of Vision Transformers, we propose a simple yet effective module, Transformer Scale Gate (TSG), to optimally combine multi-scale features. TSG exploits cues in the self- and cross-attentions of Vision Transformers for scale selection. TSG is a highly flexible plug-and-play module, and can easily be incorporated into any encoder-decoder-based hierarchical vision Transformer architecture. Extensive experiments on the Pascal Context and ADE20K datasets demonstrate that our feature selection strategy achieves consistent gains.
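The abstract describes gated combination of multi-scale features, with selection weights driven by attention cues. The sketch below is purely illustrative (it is not the paper's implementation): it assumes per-scale token features already aligned to a common grid, and a hypothetical `gate_logits` input standing in for the attention-derived scale cues; the gate is then a per-token softmax over scales.

```python
import numpy as np

def scale_gate_combine(features, gate_logits):
    """Illustrative sketch of gated multi-scale fusion (not the
    authors' TSG implementation).

    features:    list of S arrays, each (N, C) -- token features from
                 S scales, assumed resampled to a shared token grid.
    gate_logits: (N, S) array -- hypothetical scale-selection scores,
                 standing in for cues extracted from self-/cross-attention.
    Returns:     (N, C) array -- per-token weighted sum over scales.
    """
    stacked = np.stack(features, axis=1)              # (N, S, C)
    # Numerically stable softmax over the scale axis.
    z = gate_logits - gate_logits.max(axis=1, keepdims=True)
    w = np.exp(z)
    w = w / w.sum(axis=1, keepdims=True)              # (N, S) gate weights
    # Broadcast weights over channels and sum out the scale axis.
    return (w[..., None] * stacked).sum(axis=1)       # (N, C)
```

With uniform logits this reduces to plain averaging across scales, while strongly peaked logits approach hard selection of a single scale, which is the behavior a learned scale gate interpolates between.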

Hengcan Shi, Munawar Hayat, Jianfei Cai • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Semantic segmentation | ADE20K (val) | mIoU 54.2 | 2731 |
| Semantic segmentation | Cityscapes (val) | -- | 572 |
| Semantic segmentation | PASCAL Context (val) | mIoU 63.3 | 323 |
| Semantic segmentation | Pascal Context (test) | -- | 176 |
