
Virtual Multi-view Fusion for 3D Semantic Segmentation

About

Semantic segmentation of 3D meshes is an important problem for 3D scene understanding. In this paper we revisit the classic multi-view representation of 3D meshes and study several techniques that make it effective for 3D semantic segmentation. Given a 3D mesh reconstructed from RGBD sensors, our method chooses a set of virtual views of the mesh and renders multiple 2D channels for training an effective 2D semantic segmentation model. Features from the per-view predictions are then fused on the 3D mesh vertices to predict mesh semantic segmentation labels. Using the large-scale indoor 3D semantic segmentation benchmark ScanNet, we show that our virtual views enable more effective training of 2D semantic segmentation networks than previous multi-view approaches. When the 2D per-pixel predictions are aggregated on 3D surfaces, our virtual multi-view fusion method achieves significantly better 3D semantic segmentation results than all prior multi-view approaches and is competitive with recent 3D convolution approaches.
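The fusion step described above can be sketched in simplified form: each rendered virtual view yields per-pixel class scores that project back onto the mesh vertices visible in that view, and the per-view scores for each vertex are combined before taking an argmax. The sketch below uses plain score averaging as the fusion rule and hypothetical names (`fuse_multiview_predictions`, `view_predictions`); the paper's actual pipeline fuses learned features and renders richer 2D channels, so this is only an illustrative stand-in, not the authors' implementation.

```python
from collections import defaultdict

def fuse_multiview_predictions(view_predictions, num_classes):
    """Fuse per-view class scores onto mesh vertices by averaging.

    view_predictions: one dict per rendered virtual view, mapping
        vertex_id -> list of class scores; only vertices visible in
        that view appear (visibility comes from the rendering step).
    Returns a dict mapping vertex_id -> predicted class id.
    """
    sums = defaultdict(lambda: [0.0] * num_classes)   # running score sums
    counts = defaultdict(int)                          # views seeing each vertex
    for view in view_predictions:
        for v, scores in view.items():
            acc = sums[v]
            for c in range(num_classes):
                acc[c] += scores[c]
            counts[v] += 1
    # Average over the views that observed each vertex, then argmax.
    labels = {}
    for v, acc in sums.items():
        avg = [s / counts[v] for s in acc]
        labels[v] = max(range(num_classes), key=lambda c: avg[c])
    return labels

# Toy usage: two virtual views over a 2-class problem; vertex 1 is
# visible only in the first view.
view_a = {0: [0.9, 0.1], 1: [0.2, 0.8]}
view_b = {0: [0.6, 0.4]}
print(fuse_multiview_predictions([view_a, view_b], num_classes=2))
# → {0: 0, 1: 1}
```

Averaging makes the per-vertex decision robust to occlusions and view-dependent errors: a vertex mislabeled in one view can be corrected by the other views that observe it.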

Abhijit Kundu, Xiaoqi Yin, Alireza Fathi, David Ross, Brian Brewington, Thomas Funkhouser, Caroline Pantofaru • 2020

Related benchmarks

Task                      Dataset                             Result      Rank
Semantic segmentation     ScanNet V2 (val)                    mIoU 76.4   288
Semantic segmentation     ScanNet v2 (test)                   mIoU 76.4   248
3D Semantic Segmentation  ScanNet V2 (val)                    mIoU 76.4   171
3D Semantic Segmentation  ScanNet v2 (test)                   mIoU 74.6   110
3D Semantic Segmentation  ScanNet (test)                      mIoU 74.6   105
3D Semantic Segmentation  ScanNet (val)                       mIoU 76.4   100
Semantic segmentation     ScanNet (test)                      mIoU 74.6   59
Semantic segmentation     S3DIS (test)                        mIoU 65.4   47
3D Semantic Segmentation  S3DIS (Area 5 test (Fold #1))       mIoU 65.38  19
Semantic segmentation     S3DIS (5th fold)                    mIoU 65.4   19
