3D-STMN: Dependency-Driven Superpoint-Text Matching Network for End-to-End 3D Referring Expression Segmentation

About

In 3D Referring Expression Segmentation (3D-RES), earlier approaches adopt a two-stage paradigm: extracting segmentation proposals and then matching them with referring expressions. This conventional paradigm faces significant challenges, most notably lackluster initial proposals and slow inference. Recognizing these limitations, we introduce an end-to-end Superpoint-Text Matching Network (3D-STMN) enriched by dependency-driven insights. One keystone of our model is the Superpoint-Text Matching (STM) mechanism. Unlike traditional methods that navigate through instance proposals, STM directly correlates linguistic cues with their respective superpoints, clusters of semantically related points. This architectural decision lets our model efficiently harness cross-modal semantic relationships, primarily leveraging densely annotated superpoint-text pairs rather than the sparser instance-text pairs. To strengthen the role of text in guiding the segmentation process, we further incorporate a Dependency-Driven Interaction (DDI) module that deepens the network's semantic comprehension of referring expressions. Using dependency trees as a beacon, this module discerns the intricate relationships between primary terms and their associated descriptors in expressions, thereby improving both the localization and segmentation capacities of our model. Comprehensive experiments on the ScanRefer benchmark show that our model not only sets a new performance standard, registering an mIoU gain of 11.7 points, but also achieves a staggering improvement in inference speed, surpassing traditional methods by a factor of 95.7. The code and models are available at https://github.com/sosppxo/3D-STMN.
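To make the STM idea concrete, here is a minimal sketch, assuming PyTorch and hypothetical shapes and names (`sp_feats`, `point2sp`, and `superpoint_text_mask` are illustrative, not the repository's API): each superpoint is scored against a pooled text embedding, and the superpoint-level decision is broadcast back to the raw points.

```python
# Minimal sketch of superpoint-text matching (hypothetical shapes/names;
# not the authors' implementation). Score each superpoint against the
# text query, then scatter the decision to the full point cloud.
import torch

def superpoint_text_mask(sp_feats, text_feat, point2sp, threshold=0.5):
    """
    sp_feats : (M, D) features for M superpoints
    text_feat: (D,)   pooled embedding of the referring expression
    point2sp : (N,)   superpoint index for each of N points
    Returns a boolean mask over the N points.
    """
    # Dot-product similarity between each superpoint and the text query.
    logits = sp_feats @ text_feat                 # (M,)
    sp_mask = torch.sigmoid(logits) > threshold   # (M,) per-superpoint decision
    # Broadcast each superpoint's decision to every point it contains.
    return sp_mask[point2sp]                      # (N,)

# Toy usage with random tensors.
M, N, D = 64, 10_000, 256
mask = superpoint_text_mask(torch.randn(M, D), torch.randn(D),
                            torch.randint(0, M, (N,)))
print(mask.shape, mask.dtype)  # torch.Size([10000]) torch.bool
```

Because decisions are made over M superpoints rather than N raw points (with M far smaller than N), matching at the superpoint level is both densely supervised and cheap at inference, which is consistent with the speedup the abstract reports. The dependency-tree idea behind DDI can likewise be previewed with an off-the-shelf parser; the sketch below, assuming spaCy with the `en_core_web_sm` model installed, separates the primary term of a referring expression from its descriptors (the paper's module consumes such trees inside the network, not as a post-hoc parse).

```python
# Sketch of the dependency-tree signal behind DDI: parse the referring
# expression and split the primary (head) term from its descriptors.
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("the brown chair next to the wooden table")

# The syntactic root of the phrase is the primary term being referred to.
root = next(tok for tok in doc if tok.dep_ == "ROOT")
print("primary term:", root.text)

# Its children in the dependency tree are the descriptors that constrain
# the target (attributes, spatial relations, determiners, ...).
for child in root.children:
    print(f"  {child.dep_:>6} -> {child.text}")
```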

Changli Wu, Yiwei Ma, Qi Chen, Haowei Wang, Gen Luo, Jiayi Ji, Xiaoshuai Sun • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Referring 3D Instance Segmentation | ScanRefer (val) | mIoU | 74.5 | 37 |
| 3D Referring Expression Segmentation | ScanRefer | mIoU | 39.5 | 16 |
| Referring Expression Segmentation | ScanRefer | mIoU | 39.5 | 9 |
| 3D Referring Expression Segmentation | ScanRefer Multiple | Acc@25 | 0.462 | 7 |
| Referring Expression Segmentation | ReferIt3D Nr3D | mIoU | 27.6 | 7 |
| 3D Referring Expression Segmentation (3DRES) | ScanRefer Multiple subset (val) | Overall Accuracy @0.25 | 46.2 | 7 |
| 3D Grounded Referring Expression Segmentation | Multi3DRefer v1 (test) | Acc@0.25 (ZT, with distractor) | 42.6 | 6 |
| 3D Referring Expression Segmentation (3DRES) | ScanRefer Implicit (val) | Overall Accuracy (IoU=0.25) | 57.07 | 5 |
| 3D Referring Expression Segmentation | DetailRefer Long (val) | Accuracy @0.25 | 63.8 | 3 |
| 3D Referring Expression Segmentation | DetailRefer Complex (val) | Accuracy @IoU=0.25 | 65.4 | 3 |

Showing 10 of 15 rows.
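For readers unfamiliar with the metrics above, mIoU and Acc@t have standard definitions in 3D-RES: mean intersection-over-union across samples, and the share of samples whose IoU meets a threshold t. The sketch below illustrates those standard definitions; it is not the benchmarks' official evaluation script.

```python
# Conventional 3D-RES metric definitions (illustrative sketch only).
import numpy as np

def iou(pred, gt):
    """IoU between two boolean point masks of equal length."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union if union > 0 else 0.0

def evaluate(preds, gts, thresholds=(0.25, 0.5)):
    """mIoU plus Acc@t: the fraction of samples with IoU >= t."""
    ious = np.array([iou(p, g) for p, g in zip(preds, gts)])
    metrics = {"mIoU": ious.mean()}
    for t in thresholds:
        metrics[f"Acc@{t}"] = (ious >= t).mean()
    return metrics

# Toy check: one perfect and one fully wrong prediction.
gt = np.zeros(100, dtype=bool); gt[:40] = True
good, poor = gt.copy(), ~gt
print(evaluate([good, poor], [gt, gt]))
# {'mIoU': 0.5, 'Acc@0.25': 0.5, 'Acc@0.5': 0.5}
```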
