Learning to Fuse Things and Stuff
About
We propose an end-to-end learning approach for panoptic segmentation, a novel task unifying instance (things) and semantic (stuff) segmentation. Our model, TASCNet, uses feature maps from a shared backbone network to predict both things and stuff segmentations in a single feed-forward pass. We explicitly constrain these two output distributions through a global things-and-stuff binary mask to enforce cross-task consistency. Our proposed unified network is competitive with the state of the art on several benchmarks for panoptic segmentation, as well as on the individual semantic and instance segmentation tasks.
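The cross-task constraint lends itself to a compact illustration. Below is a minimal PyTorch sketch of one way such a consistency term could be written, not the paper's exact TASC formulation: the function name `tasc_consistency_loss`, its arguments, and the choice of an L2 penalty are assumptions for illustration. The idea from the abstract is that both heads induce a soft binary "things" mask over the image, and the loss penalizes disagreement between them.

```python
import torch
import torch.nn.functional as F

def tasc_consistency_loss(semantic_logits, instance_mask_probs, thing_class_ids):
    """Illustrative sketch (not the paper's exact loss) of a things/stuff
    consistency penalty between the two output heads.

    semantic_logits:     (B, C, H, W) logits from the semantic (stuff) head.
    instance_mask_probs: (B, H, W) per-pixel foreground probability aggregated
                         from the instance (things) head, pasted into the full
                         image frame.
    thing_class_ids:     indices of the "thing" classes in the semantic head.
    """
    # Per-pixel probability that the semantic head assigns to any thing class:
    # this is one soft binary "things" mask.
    semantic_probs = F.softmax(semantic_logits, dim=1)
    things_from_semantic = semantic_probs[:, thing_class_ids].sum(dim=1)  # (B, H, W)

    # Penalize disagreement between the semantic-derived mask and the
    # instance-derived mask (L2 residual, assumed here for simplicity).
    return F.mse_loss(things_from_semantic, instance_mask_probs)
```

Because both masks are differentiable functions of the shared backbone's features, minimizing such a term pushes the two heads toward a consistent things/stuff partition of the image.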
Jie Li, Allan Raventos, Arjun Bhargava, Takaaki Tagawa, Adrien Gaidon • 2018
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Semantic Segmentation | Cityscapes (val) | mIoU 78.7 | 572 |
| Panoptic Segmentation | Cityscapes (val) | PQ 60.4 | 276 |
| Instance Segmentation | Cityscapes (val) | AP 39.1 | 239 |
| Panoptic Segmentation | COCO (test-dev) | PQ 40.7 | 162 |
| Panoptic Segmentation | Mapillary Vistas (val) | PQ 34.3 | 82 |
| Panoptic Segmentation | Cityscapes (test) | PQ 60.7 | 51 |
| Instance Segmentation | Mapillary Vistas (val) | AP 20.4 | 19 |