
Learning 3D Semantic Segmentation with only 2D Image Supervision

About

With the recent growth of urban mapping and autonomous driving efforts, there has been an explosion of raw 3D data collected from terrestrial platforms with lidar scanners and color cameras. However, due to high labeling costs, ground-truth 3D semantic segmentation annotations are limited in both quantity and geographic diversity, and are difficult to transfer across sensors. In contrast, large image collections with ground-truth semantic segmentations are readily available for diverse sets of scenes. In this paper, we investigate how to use only those labeled 2D image collections to supervise the training of 3D semantic segmentation models. Our approach is to train a 3D model from pseudo-labels derived from 2D semantic image segmentations using multiview fusion. We address several novel issues with this approach, including how to select trusted pseudo-labels, how to sample 3D scenes with rare object categories, and how to decouple input features derived from 2D images from the pseudo-labels during training. The proposed network architecture, 2D3DNet, achieves significantly better performance (+6.2 to 11.4 mIoU) than baselines in experiments on a new urban dataset with lidar and images captured in 20 cities across 5 continents.
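The abstract does not include code, but the multiview-fusion step it describes can be sketched roughly as follows: project each lidar point into every calibrated image, collect the 2D semantic labels it lands on, and keep a majority-vote label only when enough views agree. This is a minimal illustration, not the authors' implementation; the function name, the `min_views` / `min_agreement` thresholds, and the occlusion-free projection are all assumptions made for the sketch.

```python
import numpy as np

def fuse_2d_labels(points, label_maps, intrinsics, extrinsics,
                   num_classes, min_views=2, min_agreement=0.75):
    """Hypothetical multiview-fusion sketch: assign a pseudo-label to each
    3D point by projecting it into every image and majority-voting over
    the 2D semantic labels it hits.

    points:     (N, 3) lidar points in world coordinates
    label_maps: list of (H, W) integer arrays of 2D semantic labels
    intrinsics: list of (3, 3) camera intrinsic matrices
    extrinsics: list of (4, 4) world-to-camera transforms
    """
    n = points.shape[0]
    votes = np.zeros((n, num_classes), dtype=np.int32)
    homog = np.hstack([points, np.ones((n, 1))])  # (N, 4) homogeneous

    for labels, K, T in zip(label_maps, intrinsics, extrinsics):
        cam = (T @ homog.T).T[:, :3]              # world -> camera frame
        in_front = cam[:, 2] > 0                  # drop points behind camera
        pix = (K @ cam.T).T
        pix = pix[:, :2] / np.maximum(pix[:, 2:3], 1e-8)  # perspective divide
        u = pix[:, 0].astype(np.int64)
        v = pix[:, 1].astype(np.int64)
        h, w = labels.shape
        valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
        # NOTE: no occlusion reasoning (z-buffering) here; a real pipeline
        # must avoid voting with labels from surfaces that hide the point.
        idx = np.flatnonzero(valid)
        votes[idx, labels[v[idx], u[idx]]] += 1

    total = votes.sum(axis=1)
    best = votes.argmax(axis=1)
    agreement = votes[np.arange(n), best] / np.maximum(total, 1)
    # Keep only "trusted" pseudo-labels: observed from enough views with
    # enough cross-view agreement; everything else stays unlabeled (-1).
    return np.where((total >= min_views) & (agreement >= min_agreement),
                    best, -1)
```

The `min_views` / `min_agreement` gate is one plausible way to realize the "trusted pseudo-label" selection the abstract mentions; points that fail it would simply be excluded from the training loss.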

Kyle Genova, Xiaoqi Yin, Abhijit Kundu, Caroline Pantofaru, Forrester Cole, Avneesh Sud, Brian Brewington, Brian Shucker, Thomas Funkhouser • 2021

Related benchmarks

Task                         | Dataset                  | Result                 | Rank
LiDAR Semantic Segmentation | nuScenes (val)           | 79 mIoU                | 169
LiDAR Semantic Segmentation | nuScenes official (test) | 80 mIoU                | 132
Semantic Segmentation       | nuScenes (test)          | 80 mIoU                | 75
3D Semantic Segmentation    | nuScenes (test)          | 80 mIoU                | 36
Semantic Segmentation       | nuScenes 1.0 (val)       | 79 mIoU                | 29
3D Semantic Segmentation    | nuScenes Lidar-Seg (val) | 79 mIoU                | 28
3D Semantic Segmentation    | ScanNet                  | 55.23 mIoU (Semantics) | 11
