
SAM3D: Segment Anything in 3D Scenes

About

In this work, we propose SAM3D, a novel framework that predicts masks in 3D point clouds by leveraging the Segment Anything Model (SAM) on RGB images, without any further training or fine-tuning. Given a point cloud of a 3D scene with posed RGB images, we first predict segmentation masks on the RGB images with SAM and then project the 2D masks onto the 3D points. We then merge the resulting 3D masks iteratively in a bottom-up fashion: at each step, the point-cloud masks of two adjacent frames are fused with a bidirectional merging strategy, so that the 3D masks predicted from individual frames gradually merge into 3D masks covering the whole scene. Finally, we can optionally ensemble the SAM3D result with over-segmentation results derived from the geometric information of the 3D scene. We evaluate our approach on the ScanNet dataset, and qualitative results demonstrate that SAM3D achieves reasonable and fine-grained 3D segmentation without any training or fine-tuning of SAM.
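The two core steps of the pipeline above (back-projecting a 2D mask to 3D points, and bidirectionally merging the per-frame 3D masks) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper names, the voxel quantization, and the overlap threshold are assumptions, and masks are represented simply as arrays of 3D points.

```python
# Hedged sketch of SAM3D's two core steps (hypothetical helpers/shapes):
# (1) lift a binary 2D SAM mask into world-space 3D points via depth + pose,
# (2) bidirectionally merge two frames' 3D masks by mutual point overlap.
import numpy as np

def mask_to_points(mask, depth, K, cam_to_world):
    """Back-project pixels inside a binary 2D mask to world-space 3D points."""
    v, u = np.nonzero(mask)                      # pixel coords inside the mask
    z = depth[v, u]
    valid = z > 0                                # drop pixels with no depth
    u, v, z = u[valid], v[valid], z[valid]
    x = (u - K[0, 2]) * z / K[0, 0]              # pinhole back-projection
    y = (v - K[1, 2]) * z / K[1, 1]
    pts_cam = np.stack([x, y, z, np.ones_like(z)], axis=1)
    return (pts_cam @ cam_to_world.T)[:, :3]     # camera frame -> world frame

def bidirectional_merge(masks_a, masks_b, thresh=0.5, voxel=0.05):
    """Merge two frames' 3D masks: fuse a pair only when their point sets
    overlap strongly in *both* directions; keep unmatched masks as-is."""
    def keys(pts):
        # quantize points to voxel ids so overlap can be computed as a set op
        return {tuple(p) for p in np.floor(pts / voxel).astype(int)}
    sets_a = [keys(m) for m in masks_a]
    sets_b = [keys(m) for m in masks_b]
    merged, used_b = [], set()
    for i, sa in enumerate(sets_a):
        best, best_j = 0.0, -1
        for j, sb in enumerate(sets_b):
            if j in used_b or not sa or not sb:
                continue
            inter = len(sa & sb)
            # bidirectional criterion: overlap ratio in both directions
            score = min(inter / len(sa), inter / len(sb))
            if score > best:
                best, best_j = score, j
        if best >= thresh:
            merged.append(np.concatenate([masks_a[i], masks_b[best_j]]))
            used_b.add(best_j)
        else:
            merged.append(masks_a[i])
    merged += [masks_b[j] for j in range(len(masks_b)) if j not in used_b]
    return merged
```

Applied pairwise over adjacent frames and then repeated bottom-up, this merging gradually produces scene-level masks from frame-level ones, as described in the abstract.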

Yunhan Yang, Xiaoyang Wu, Tong He, Hengshuang Zhao, Xihui Liu · 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| 3D Instance Segmentation | ScanNet V2 (val) | Average AP50 | 17.9 | 195 |
| 3D Instance Segmentation | ScanNet200 (val) | mAP | 9.6 | 52 |
| 3D Instance Segmentation | ScanNet200 | mAP@0.5 | 35.7 | 29 |
| 3D Instance Segmentation | ScanNet (val) | mAP@0.25 | 47.6 | 19 |
| Class-agnostic 3D instance segmentation | ScanNet200 (val) | AP | 20.2 | 12 |
| 3D Instance Segmentation | ScanNet++ V1 (val) | AP50 | 7.9 | 12 |
| 3D Instance Segmentation | ScanNet200 v2 (val) | mAP (%) | 12.1 | 10 |
| 3D Instance Segmentation | SceneNN | AP | 9.1 | 10 |
| Class-agnostic 3D instance segmentation | ScanNet V2 | AP | 20.2 | 8 |
| 3D Instance Segmentation | ScanNet200→SceneNN transfer (test) | AP | 15.1 | 8 |

(Showing 10 of 16 rows.)
