
OpenScene: 3D Scene Understanding with Open Vocabularies

About

Traditional 3D scene understanding approaches rely on labeled 3D datasets to train a model for a single task under supervision. We propose OpenScene, an alternative approach in which a model predicts dense features for 3D scene points that are co-embedded with text and image pixels in CLIP feature space. This zero-shot approach enables task-agnostic training and open-vocabulary queries. For example, to perform state-of-the-art zero-shot 3D semantic segmentation, the model first infers CLIP features for every 3D point and then classifies each point by its similarity to the embeddings of arbitrary class labels. More interestingly, it enables a suite of open-vocabulary scene understanding applications that have not been demonstrated before. For example, a user can enter an arbitrary text query and see a heat map indicating which parts of a scene match. Our approach is effective at identifying objects, materials, affordances, activities, and room types in complex 3D scenes, all using a single model trained without any labeled 3D data.
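The classification step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the arrays stand in for per-point features and label embeddings that would come from the actual OpenScene model and CLIP text encoder, and the function names (`classify_points`, `query_heatmap`) are hypothetical.

```python
import numpy as np

def classify_points(point_feats, label_embeds):
    """Zero-shot segmentation step: assign each 3D point the class label
    whose text embedding has the highest cosine similarity with the
    point's predicted CLIP-space feature."""
    # Normalize rows so the dot product equals cosine similarity.
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    t = label_embeds / np.linalg.norm(label_embeds, axis=1, keepdims=True)
    sim = p @ t.T                     # shape: (num_points, num_labels)
    return sim.argmax(axis=1), sim    # per-point label index, full scores

def query_heatmap(point_feats, query_embed):
    """Open-vocabulary query: per-point similarity to a single text
    embedding, which can be rendered as a heat map over the scene."""
    p = point_feats / np.linalg.norm(point_feats, axis=1, keepdims=True)
    q = query_embed / np.linalg.norm(query_embed)
    return p @ q                      # shape: (num_points,)
```

Because the label set only enters through its text embeddings, the same trained model can be queried with arbitrary vocabularies at test time.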

Songyou Peng, Kyle Genova, Chiyu "Max" Jiang, Andrea Tagliasacchi, Marc Pollefeys, Thomas Funkhouser • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | ScanNet V2 (val) | mIoU | 54.2 | 288 |
| Semantic segmentation | ScanNet v2 (test) | mIoU | 54.2 | 248 |
| Semantic segmentation | nuScenes (val) | mIoU (Segmentation) | 0.146 | 212 |
| 3D Semantic Segmentation | ScanNet V2 (val) | mIoU | 62.8 | 171 |
| LiDAR Semantic Segmentation | nuScenes (val) | mIoU | 42.1 | 169 |
| 3D Visual Grounding | ScanRefer (val) | Overall Accuracy @ IoU 0.5 | 6.5 | 155 |
| 3D Semantic Segmentation | ScanNet (val) | mIoU | 47 | 100 |
| Semantic segmentation | ScanNet V2 | mIoU | 47.5 | 54 |
| Instance Segmentation | ScanNet200 (val) | mAP@50 | 15.2 | 53 |
| 3D Instance Segmentation | ScanNet200 (val) | mAP | 11.7 | 52 |

Showing 10 of 60 rows.

Other info

Code
