
Scan2Cap: Context-aware Dense Captioning in RGB-D Scans

About

We introduce the task of dense captioning in 3D scans from commodity RGB-D sensors. As input, we assume a point cloud of a 3D scene; the expected output is the bounding boxes along with the descriptions for the underlying objects. To address the 3D object detection and description problems, we propose Scan2Cap, an end-to-end trained method that detects objects in the input scene and describes them in natural language. We use an attention mechanism that generates descriptive tokens while referring to the related components in the local context. To reflect object relations (i.e. relative spatial relations) in the generated captions, we use a message passing graph module to facilitate learning object relation features. Our method can effectively localize and describe 3D objects in scenes from the ScanRefer dataset, outperforming 2D baseline methods by a significant margin (27.61% CIDEr@0.5IoU improvement).
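To illustrate the message passing idea described above, here is a minimal NumPy sketch (this is not the authors' implementation; the function name, weight matrices, and dimensions are all placeholders). Each object proposal's feature vector is updated by aggregating transformed features from its neighbors in a relation graph, yielding relation-aware object features:

```python
import numpy as np

def message_passing_step(features, adjacency, W_msg, W_self):
    """One round of message passing over object proposals (toy sketch).

    features:  (N, D) per-object feature vectors
    adjacency: (N, N) binary relation graph (1 = objects are related)
    W_msg, W_self: (D, D) toy weight matrices standing in for learned ones
    """
    # aggregate transformed neighbor features along graph edges
    messages = adjacency @ (features @ W_msg)
    # normalize each node's message by its neighbor count
    degree = adjacency.sum(axis=1, keepdims=True)
    messages = messages / np.maximum(degree, 1)
    # combine self features with aggregated messages
    return np.tanh(features @ W_self + messages)

# toy example: 4 object proposals with 8-dim features, fully connected graph
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
adj = np.ones((4, 4)) - np.eye(4)  # every object related to every other
out = message_passing_step(feats, adj,
                           rng.normal(size=(8, 8)) * 0.1,
                           rng.normal(size=(8, 8)) * 0.1)
print(out.shape)  # (4, 8)
```

In the actual model, the updated features would condition the caption decoder so that generated tokens can refer to spatially related objects; here a single step with random weights just demonstrates the data flow.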

Dave Zhenyu Chen, Ali Gholami, Matthias Nießner, Angel X. Chang • 2020

Related benchmarks

| Task                  | Dataset               | Metric       | Result | Rank |
|-----------------------|-----------------------|--------------|--------|------|
| 3D Question Answering | ScanQA (val)          | CIDEr        | 64.9   | 133  |
| 3D Dense Captioning   | ScanRefer (val)       | CIDEr        | 70.04  | 91   |
| 3D Question Answering | SQA3D (test)          | EM@1         | 41     | 55   |
| 3D Dense Captioning   | Scan2Cap (val)        | CIDEr@0.5IoU | 39.08  | 33   |
| 3D Dense Captioning   | ScanRefer (test)      | CIDEr        | 61.83  | 30   |
| 3D Dense Captioning   | Scan2Cap              | BLEU-4@0.5IoU| 22.4   | 23   |
| 3D Dense Captioning   | Nr3D (val)            | CIDEr@0.5IoU | 27.47  | 22   |
| 3D Dense Captioning   | ScanRefer             | CIDEr@0.5IoU | 39.08  | 16   |
| 3D Dense Captioning   | ReferIt3D Nr3D (test) | CIDEr@0.5IoU | 27.47  | 13   |
| 3D Dense Captioning   | Nr3D (test)           | CIDEr@0.5IoU | 27.47  | 13   |

Showing 10 of 19 rows.
