
Lowis3D: Language-Driven Open-World Instance-Level 3D Scene Understanding

About

Open-world instance-level scene understanding aims to locate and recognize unseen object categories that are not present in the annotated dataset. This task is challenging because the model needs to both localize novel 3D objects and infer their semantic categories. A key factor in the recent progress of 2D open-world perception is the availability of large-scale image-text pairs from the Internet, which cover a wide range of vocabulary concepts. However, this success is hard to replicate in 3D scenarios due to the scarcity of 3D-text pairs. To address this challenge, we propose to harness pre-trained vision-language (VL) foundation models, which encode extensive knowledge from image-text pairs, to generate captions for multi-view images of 3D scenes. This allows us to establish explicit associations between 3D shapes and semantic-rich captions. Moreover, to enhance fine-grained visual-semantic representation learning from captions for object-level categorization, we design hierarchical point-caption association methods that learn semantic-aware embeddings by exploiting the geometric relationships between 3D points and multi-view images. In addition, to tackle the localization challenge for novel classes in the open-world setting, we develop debiased instance localization, which trains object grouping modules on unlabeled data using instance-level pseudo supervision. This significantly improves the generalization capability of instance grouping and thus the ability to accurately locate novel objects. We conduct extensive experiments on 3D semantic, instance, and panoptic segmentation tasks, covering indoor and outdoor scenes across three datasets. Our method outperforms baseline methods by a significant margin in semantic segmentation (e.g., 34.5%–65.3%), instance segmentation (e.g., 21.8%–54.0%), and panoptic segmentation (e.g., 14.7%–43.3%). Code will be available.
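The coarsest level of the point-caption association described above can be sketched as follows: each 3D point is projected into every captioned view, and a caption is associated with the set of points visible in its image. This is a minimal illustrative sketch, not the paper's implementation; the function names (`project_points`, `view_level_association`), the pinhole-camera setup, and the assumption that the principal point sits at the image center are all our own simplifications.

```python
import numpy as np

def project_points(points, K, world_to_cam):
    """Project Nx3 world points into one view; return pixel coords and a
    visibility mask (point in front of the camera and inside the image).
    Assumes a pinhole camera with the principal point at the image center."""
    n = points.shape[0]
    homo = np.hstack([points, np.ones((n, 1))])       # Nx4 homogeneous coords
    cam = (world_to_cam @ homo.T).T[:, :3]            # Nx3 camera-frame coords
    in_front = cam[:, 2] > 1e-6                       # positive depth only
    uv = (K @ cam.T).T
    uv = uv[:, :2] / np.maximum(uv[:, 2:3], 1e-6)     # perspective divide
    h, w = int(2 * K[1, 2]), int(2 * K[0, 2])         # image size (assumption)
    inside = (uv[:, 0] >= 0) & (uv[:, 0] < w) & (uv[:, 1] >= 0) & (uv[:, 1] < h)
    return uv, in_front & inside

def view_level_association(points, views):
    """Associate each view's caption with the indices of the 3D points
    visible in that view (view-level point-caption association)."""
    assoc = []
    for K, pose, caption in views:
        _, visible = project_points(points, K, pose)
        assoc.append((caption, np.flatnonzero(visible)))
    return assoc
```

Finer levels of the hierarchy would restrict the associated point set further, e.g., to points falling inside a detected region of the view rather than the whole image.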

Runyu Ding, Jihan Yang, Chuhui Xue, Wenqing Zhang, Song Bai, Xiaojuan Qi • 2023

Related benchmarks

Task | Dataset | Result | Rank
3D Semantic Segmentation | ScanNet (B10/N9) | hIoU 53.1 | 20
3D Semantic Segmentation | ScanNet (B12/N7) | hIoU 55.3 | 20
3D Semantic Segmentation | S3DIS (B6/N6) | hIoU 38.5 | 19
3D Semantic Segmentation | S3DIS (B8/N4) | hIoU 34.6 | 19
3D Instance Segmentation | S3DIS (B6/N6) | mAP50 (Base) 51.8 | 13
3D Semantic Segmentation | ScanNet (B15/N4) | hIoU 65.3 | 13
3D Instance Segmentation | S3DIS (B8/N4) | mAP50 (Base) 58.7 | 13
3D Open-vocabulary Instance Segmentation | ScanNetv2 (8/9 base/novel split) | AP@0.50 38.1 | 11
3D Open-vocabulary Instance Segmentation | ScanNetv2 (10/7 base/novel split) | AP@0.50 31.2 | 11
3D Open-vocabulary Instance Segmentation | S3DIS (8/4 base/novel split) | AP@50 13.8 | 11

Showing 10 of 14 rows.
