
D3Net: A Unified Speaker-Listener Architecture for 3D Dense Captioning and Visual Grounding

About

Recent studies on dense captioning and visual grounding in 3D have achieved impressive results. Despite developments in both areas, the limited amount of available 3D vision-language data causes overfitting issues for 3D visual grounding and 3D dense captioning methods. In addition, how to discriminatively describe objects in complex 3D environments has not been fully studied. To address these challenges, we present D3Net, an end-to-end neural speaker-listener architecture that can detect, describe, and discriminate. D3Net unifies dense captioning and visual grounding in 3D in a self-critical manner: the speaker generates object descriptions, and the listener's grounding accuracy serves as a reward for caption generation. This self-critical property introduces discriminability during object caption generation and enables semi-supervised training on ScanNet data with partially annotated descriptions. Our method outperforms state-of-the-art methods in both tasks on the ScanRefer dataset, surpassing the previous best 3D dense captioning method by a significant margin.
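The self-critical speaker-listener idea above can be sketched as a reward-weighting scheme: sampled captions are rewarded when the listener grounds them to the correct object, with the greedy caption's reward as a baseline. This is a minimal illustrative sketch, not the D3Net implementation; all function names and the 0/1 reward are assumptions.

```python
def listener_reward(scores, target_idx):
    """Reward is 1.0 if the listener's highest-scoring object is the
    one the caption was generated for, else 0.0 (illustrative choice)."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return 1.0 if best == target_idx else 0.0

def self_critical_weights(sampled_rewards, greedy_reward):
    """Self-critical baseline: weight each sampled caption's gradient
    by its reward minus the reward of the greedily decoded caption."""
    return [r - greedy_reward for r in sampled_rewards]

# Listener scores for 3 candidate objects; the caption targets object 1.
r = listener_reward([0.2, 0.9, 0.1], target_idx=1)   # -> 1.0
# Two sampled captions earned rewards [1.0, 0.0]; greedy caption earned 1.0,
# so only the failing sample is pushed down.
w = self_critical_weights([1.0, 0.0], greedy_reward=1.0)  # -> [0.0, -1.0]
```

Captions that ground better than the greedy baseline get positive weights, discriminability is rewarded directly, and no ground-truth description is needed for the reward, which is what permits semi-supervised training on partially annotated scenes.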

Dave Zhenyu Chen, Qirui Wu, Matthias Nießner, Angel X. Chang • 2021

Related benchmarks

Task                 | Dataset               | Metric                       | Result | Rank
3D Visual Grounding  | ScanRefer (val)       | Overall Accuracy @ IoU 0.50  | 37.87  | 155
3D Dense Captioning  | ScanRefer (val)       | CIDEr                        | 46.07  | 91
3D Dense Captioning  | Scan2Cap (val)        | CIDEr (@0.5)                 | 0.461  | 33
3D Dense Captioning  | ScanRefer (test)      | CIDEr                        | 62.64  | 30
Visual Grounding     | ScanRefer v1 (val)    | Acc@0.5 (All)                | 37.9   | 30
3D Dense Captioning  | Nr3D (val)            | CIDEr (IoU=0.5)              | 38.42  | 22
3D Dense Captioning  | ScanRefer             | CIDEr@0.5IoU                 | 47.32  | 16
3D Visual Grounding  | ScanRefer v1 (test)   | Unique Acc@0.5IoU            | 68.43  | 15
3D Dense Captioning  | ReferIt3D Nr3D (test) | C Score (0.5 IoU)            | 38.42  | 13
3D Dense Captioning  | Nr3D (test)           | C Score @ 0.5 IoU            | 38.42  | 13

(Showing 10 of 15 rows)

Other info

Code
