
SAM3D: Segment Anything in 3D Scenes

About

In this work, we propose SAM3D, a novel framework that predicts masks in 3D point clouds by leveraging the Segment Anything Model (SAM) on RGB images, without any further training or fine-tuning. Given a point cloud of a 3D scene with posed RGB images, we first predict segmentation masks on the RGB images with SAM, and then project the 2D masks onto the 3D points. We then merge the 3D masks iteratively in a bottom-up fashion: at each step, the point-cloud masks of two adjacent frames are fused with a bidirectional merging strategy, so that masks predicted from different frames are gradually merged into masks of the whole 3D scene. Finally, the SAM3D result can optionally be ensembled with over-segmentation results derived from the geometric information of the 3D scene. We evaluate our approach on the ScanNet dataset, and qualitative results demonstrate that SAM3D achieves reasonable and fine-grained 3D segmentation without any training or fine-tuning of SAM.
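The two core steps of the pipeline — projecting 2D SAM masks onto the 3D points via the camera pose, and bidirectionally merging the per-frame point masks — can be sketched as follows. This is a minimal illustration, not the authors' code: the function names, the pinhole projection setup, and the symmetric overlap threshold used as the merging criterion are all assumptions.

```python
import numpy as np

def project_mask_to_points(points, mask, intrinsics, world_to_cam, image_hw):
    """Assign each 3D point the 2D mask id of the pixel it projects to.

    points: (N, 3) world coordinates; mask: (H, W) integer ids from SAM
    (0 = background); intrinsics: (3, 3); world_to_cam: (4, 4) pose.
    Hypothetical helper illustrating the 2D-to-3D projection step.
    """
    H, W = image_hw
    # Transform world points into the camera frame (homogeneous coords).
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    cam = (world_to_cam @ pts_h.T).T[:, :3]
    labels = np.zeros(len(points), dtype=int)
    in_front = cam[:, 2] > 1e-6            # keep points in front of the camera
    uv = (intrinsics @ cam[in_front].T).T  # pinhole projection
    u = (uv[:, 0] / uv[:, 2]).astype(int)
    v = (uv[:, 1] / uv[:, 2]).astype(int)
    valid = (u >= 0) & (u < W) & (v >= 0) & (v < H)
    idx = np.flatnonzero(in_front)[valid]
    labels[idx] = mask[v[valid], u[valid]]
    return labels

def merge_two_frames(labels_a, labels_b, overlap_thresh=0.5):
    """Bidirectional merge: fuse a mask from frame B into a mask from frame A
    only when their overlap dominates BOTH masks (checked in both directions).
    Otherwise the frame-B mask is kept as a new instance."""
    merged = labels_a.copy()
    next_id = merged.max() + 1
    for b_id in np.unique(labels_b):
        if b_id == 0:
            continue
        b_pts = labels_b == b_id
        counts = np.bincount(labels_a[b_pts])
        a_id = counts.argmax()
        if a_id != 0:
            overlap = counts[a_id]
            a_size = (labels_a == a_id).sum()
            # Bidirectional criterion: overlap ratio must pass in both frames.
            if (overlap / b_pts.sum() > overlap_thresh
                    and overlap / a_size > overlap_thresh):
                merged[b_pts] = a_id
                continue
        merged[b_pts & (merged == 0)] = next_id
        next_id += 1
    return merged
```

Applying `merge_two_frames` pairwise up a binary tree over the frame sequence yields the bottom-up merging described above, where per-frame masks gradually coalesce into scene-level masks.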

Yunhan Yang, Xiaoyang Wu, Tong He, Hengshuang Zhao, Xihui Liu • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Instance Segmentation | ScanNet V2 (val) | Average AP50 | 17.9 | 198 |
| 3D Instance Segmentation | ScanNet200 | mAP@0.5 | 35.7 | 63 |
| 3D Instance Segmentation | ScanNet200 (val) | mAP | 9.6 | 55 |
| 3D Instance Segmentation | Replica | AP25 | 28.3 | 24 |
| 3D Instance Segmentation | ScanNet V2 | AP@50% | 8 | 24 |
| 3D Instance Segmentation | ScanNet (val) | mAP@0.25 | 47.6 | 19 |
| Class-agnostic 3D instance segmentation | ScanNet++ | AP | 7.2 | 17 |
| Class-agnostic 3D instance segmentation | ScanNet V2 (val) | AP | 20.2 | 17 |
| Class-agnostic 3D instance segmentation | ScanNet200 (val) | AP | 20.2 | 12 |
| 3D Instance Segmentation | ScanNet++ V1 (val) | AP50 | 7.9 | 12 |

Showing 10 of 22 rows.
