
Unifying Training and Inference for Panoptic Segmentation

About

We present an end-to-end network to bridge the gap between the training and inference pipelines for panoptic segmentation, a task that seeks to partition an image into semantic regions for "stuff" and object instances for "things". In contrast to recent works, our network exploits a parametrised yet lightweight panoptic segmentation submodule, powered by an end-to-end learnt dense instance affinity, to capture the probability that any two pixels belong to the same instance. This panoptic submodule gives rise to a novel propagation mechanism for panoptic logits and enables the network to output a coherent panoptic segmentation map for both "stuff" and "thing" classes, without any post-processing. Reaping the benefits of end-to-end training, our full system sets new records on the popular street-scene dataset Cityscapes, achieving 61.4 PQ with a ResNet-50 backbone using only the fine annotations. On the challenging COCO dataset, our ResNet-50-based network also delivers state-of-the-art accuracy of 43.4 PQ. Moreover, our network flexibly works with and without object mask cues, performing competitively under both settings, which is of interest for applications with constrained computation budgets.
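The propagation mechanism described above can be illustrated with a toy sketch (not the authors' implementation; the affinity values and shapes below are invented for illustration): given per-pixel class logits and a dense affinity matrix whose entry (i, j) approximates the probability that pixels i and j belong to the same instance, multiplying the logits by the row-normalised affinity averages each pixel's logits over its likely instance mates, letting confident pixels resolve ambiguous ones.

```python
import numpy as np

def propagate_logits(logits, affinity):
    """Toy panoptic-logit propagation.

    logits:   (N, C) per-pixel class logits.
    affinity: (N, N) values in [0, 1]; affinity[i, j] approximates
              P(pixel i and pixel j belong to the same instance).
    """
    # Row-normalise so each pixel's output is a weighted average.
    weights = affinity / affinity.sum(axis=1, keepdims=True)
    return weights @ logits  # (N, C) propagated logits

# 4 pixels, 2 classes; pixels {0, 1} and {2, 3} form two instances.
# Pixels 1 and 3 have uninformative (all-zero) logits on their own.
logits = np.array([[2.0, 0.0],
                   [0.0, 0.0],   # ambiguous pixel
                   [0.0, 2.0],
                   [0.0, 0.0]])  # ambiguous pixel
affinity = np.array([[1.0, 0.9, 0.0, 0.0],
                     [0.9, 1.0, 0.0, 0.0],
                     [0.0, 0.0, 1.0, 0.9],
                     [0.0, 0.0, 0.9, 1.0]])

propagated = propagate_logits(logits, affinity)
# After propagation, each ambiguous pixel inherits the class evidence
# of its instance mate.
```

In the real network the affinity is predicted by a learnt submodule and the propagation is applied to panoptic logits end-to-end; this sketch only shows why instance affinity can turn per-pixel evidence into instance-coherent predictions.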

Qizhu Li, Xiaojuan Qi, Philip H.S. Torr · 2020

Related benchmarks

Task                   Dataset                Metric  Result  Rank
Semantic segmentation  Cityscapes (val)       mIoU    79.5    572
Panoptic segmentation  Cityscapes (val)       PQ      61.4    276
Instance segmentation  Cityscapes (val)       AP      33.7    239
Panoptic segmentation  COCO (val)             PQ      43.4    219
Panoptic segmentation  COCO 2017 (val)        PQ      43.4    172
Panoptic segmentation  COCO (test-dev)        PQ      47.2    162
Panoptic segmentation  Cityscapes (test)      PQ      63.3    51
Panoptic segmentation  COCO 2017 (test-dev)   PQ      47.2    41
Panoptic segmentation  COCO (test)            PQ      43.6    23
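Panoptic Quality (PQ), the metric reported in most rows above, matches predicted and ground-truth segments at IoU > 0.5 and scores PQ = Σ IoU(TP) / (|TP| + ½|FP| + ½|FN|). A minimal sketch of that formula (the helper name and example numbers are illustrative, not from the paper):

```python
def panoptic_quality(matched_ious, num_pred, num_gt):
    """Compute PQ from the IoUs of matched segment pairs.

    matched_ious: IoU of each matched (prediction, ground-truth) pair;
                  under the PQ definition every match has IoU > 0.5.
    num_pred:     total predicted segments.
    num_gt:       total ground-truth segments.
    """
    tp = len(matched_ious)          # true positives = matched pairs
    fp = num_pred - tp              # unmatched predictions
    fn = num_gt - tp                # unmatched ground-truth segments
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(matched_ious) / denom if denom else 0.0

# Example: 2 matches with IoU 0.8 and 0.6, out of 3 predictions
# and 3 ground-truth segments -> PQ = 1.4 / (2 + 0.5 + 0.5)
pq = panoptic_quality([0.8, 0.6], num_pred=3, num_gt=3)
```

PQ factors into segmentation quality (mean IoU of matches) times recognition quality (an F1-style match score), which is why it is the standard single number for panoptic benchmarks.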
