Squeeze-and-Attention Networks for Semantic Segmentation
About
The recent integration of attention mechanisms into segmentation networks improves their representational capabilities by placing greater emphasis on informative features. However, these attention mechanisms ignore an implicit sub-task of semantic segmentation and are constrained by the grid structure of convolution kernels. In this paper, we propose a novel squeeze-and-attention network (SANet) architecture that leverages an effective squeeze-and-attention (SA) module to account for two distinctive characteristics of segmentation: i) pixel-group attention, and ii) pixel-wise prediction. Specifically, the proposed SA modules impose pixel-group attention on conventional convolution by introducing an 'attention' convolutional channel, thus taking spatial-channel inter-dependencies into account in an efficient manner. The final segmentation results are produced by merging the outputs of four hierarchical stages of a SANet, integrating multi-scale contexts for enhanced pixel-wise prediction. Empirical experiments on two challenging public datasets validate the effectiveness of the proposed SANets, which achieve 83.2% mIoU (without COCO pre-training) on PASCAL VOC and a state-of-the-art mIoU of 54.4% on PASCAL Context.
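The SA module described above can be sketched in PyTorch as a standard convolution path plus a down-sampled "attention" convolution channel whose up-sampled output re-weights (and is added back to) the main features. This is a minimal sketch assuming a simple average-pool squeeze and a 2x bilinear up-sample; the layer names, pooling factor, and exact combination rule are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class SqueezeAttention(nn.Module):
    """Sketch of a squeeze-and-attention (SA) module (assumed structure)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        # main path: conventional pixel-wise convolution
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        # attention path: average pooling "squeezes" spatial detail so the
        # attention convolution operates on pixel groups rather than pixels
        self.pool = nn.AvgPool2d(2)
        self.attn = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1),
            nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True))
        self.up = nn.Upsample(scale_factor=2, mode='bilinear',
                              align_corners=False)

    def forward(self, x):
        feat = self.conv(x)                       # pixel-wise features
        attn = self.up(self.attn(self.pool(x)))   # pixel-group attention map
        # re-weight the main features and keep a residual attention term
        return feat * attn + attn

x = torch.randn(1, 64, 32, 32)
out = SqueezeAttention(64, 128)(x)
print(out.shape)  # torch.Size([1, 128, 32, 32])
```

Because the attention branch is computed at reduced resolution, the extra cost over a plain convolution block stays modest, which is what makes stacking SA modules across four hierarchical stages practical.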
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | PASCAL VOC 2012 (test) | mIoU | 83.2 | 1342 |
| Image Generation | CIFAR-10 (test) | FID | 14.498 | 471 |
| Semantic segmentation | PASCAL Context (val) | mIoU | 53 | 323 |
| Semantic segmentation | Pascal VOC (test) | mIoU | 86.1 | 236 |
| Semantic segmentation | Pascal Context 60 | mIoU | 54.4 | 81 |
| Image Generation | Tiny-ImageNet | Inception Score | 8.342 | 34 |
| Semantic Segmentation Efficiency | Pascal VOC (test) | mIoU | 83.2 | 5 |
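Most rows above report mIoU (mean intersection-over-union): for each class, the intersection of predicted and ground-truth pixels is divided by their union, and the per-class scores are averaged. A minimal NumPy sketch of this computation (the function name and the convention of skipping classes absent from both maps are assumptions):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean IoU over classes; pred/target are integer label maps of equal shape."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes present in neither map
            ious.append(inter / union)
    return float(np.mean(ious))

pred   = np.array([[0, 0, 1],
                   [1, 1, 0]])
target = np.array([[0, 1, 1],
                   [1, 1, 0]])
# class 0: inter 2 / union 3; class 1: inter 3 / union 4 -> mean ~0.708
print(round(mean_iou(pred, target, 2), 3))  # 0.708
```

Benchmark implementations typically accumulate per-class intersections and unions over the whole test set before dividing, rather than averaging per-image scores.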