
AugRefer: Advancing 3D Visual Grounding via Cross-Modal Augmentation and Spatial Relation-based Referring

About

3D visual grounding (3DVG), which aims to correlate a natural language description with the target object within a 3D scene, is a significant yet challenging task. Despite recent advancements in this domain, existing approaches share a common limitation: the amount and diversity of text-3D pairs available for training are limited. Moreover, they fall short in effectively leveraging different contextual clues (e.g., rich spatial relations within the 3D visual space) for grounding. To address these limitations, we propose AugRefer, a novel approach for advancing 3D visual grounding. AugRefer introduces cross-modal augmentation designed to generate diverse text-3D pairs at scale by placing objects into 3D scenes and creating accurate, semantically rich descriptions using foundation models. Notably, the resulting pairs can be utilized by any existing 3DVG method to enrich its training data. Additionally, AugRefer presents a language-spatial adaptive decoder that effectively adapts the potential referring objects based on the language description and various 3D spatial relations. Extensive experiments on three benchmark datasets clearly validate the effectiveness of AugRefer.
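The cross-modal augmentation described above can be sketched in miniature. The toy code below is an illustrative assumption, not the paper's pipeline: it samples a collision-free floor position for a new object's 3D box in a scene and emits a templated spatial-relation description, where the paper instead uses foundation models to generate descriptions.

```python
import random

def boxes_overlap(a, b):
    """Axis-aligned 3D boxes as (cx, cy, cz, dx, dy, dz); True if they intersect."""
    return all(abs(a[i] - b[i]) * 2 < (a[i + 3] + b[i + 3]) for i in range(3))

def place_object(scene_boxes, new_size, extent=5.0, tries=100, rng=None):
    """Sample a collision-free floor placement for a new object, or None on failure."""
    rng = rng or random.Random(0)
    for _ in range(tries):
        # Object rests on the floor: its center height is half its z-extent.
        cand = (rng.uniform(0, extent), rng.uniform(0, extent),
                new_size[2] / 2, *new_size)
        if not any(boxes_overlap(cand, b) for b in scene_boxes):
            return cand
    return None

def describe(new_name, new_box, anchor_name, anchor_box):
    """Templated spatial-relation description (a stand-in for a foundation model)."""
    rel = "left of" if new_box[0] < anchor_box[0] else "right of"
    return f"The {new_name} is to the {rel} the {anchor_name}."

# Example: insert a hypothetical "lamp" into a scene containing one "chair".
scene = [(1.0, 1.0, 0.5, 1.0, 1.0, 1.0)]
lamp_box = place_object(scene, (0.5, 0.5, 1.0))
if lamp_box is not None:
    print(describe("lamp", lamp_box, "chair", scene[0]))
```

Each synthesized (scene-with-new-object, description) pair of this kind is an extra text-3D training sample of the sort any 3DVG model could consume.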

Xinyi Wang, Na Zhao, Zhiyuan Han, Dan Guo, Xun Yang• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D referring expression comprehension | ScanRefer | Overall Accuracy @0.25 IoU | 55.68 | 21 |
| 3D referring expression segmentation | Sr3D | Acc@0.25 | 60.22 | 11 |
| 3D referring expression comprehension (3DREC) | Nr3D | Accuracy @0.25 IoU | 48.41 | 9 |
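The Acc@0.25 figures above count a prediction as correct when the 3D IoU between its box and the ground-truth box is at least 0.25. A minimal sketch of that metric, assuming axis-aligned boxes in (cx, cy, cz, dx, dy, dz) format:

```python
def iou_3d(a, b):
    """3D IoU of two axis-aligned boxes given as (cx, cy, cz, dx, dy, dz)."""
    inter = 1.0
    for i in range(3):
        lo = max(a[i] - a[i + 3] / 2, b[i] - b[i + 3] / 2)
        hi = min(a[i] + a[i + 3] / 2, b[i] + b[i + 3] / 2)
        inter *= max(0.0, hi - lo)  # overlap length along this axis
    vol = lambda x: x[3] * x[4] * x[5]
    union = vol(a) + vol(b) - inter
    return inter / union if union > 0 else 0.0

def acc_at_iou(preds, gts, thresh=0.25):
    """Fraction of predictions whose IoU with the ground truth meets the threshold."""
    hits = sum(iou_3d(p, g) >= thresh for p, g in zip(preds, gts))
    return hits / len(gts)
```

For example, two unit-offset 2x2x2 boxes have IoU 1/3, so a prediction like that clears the 0.25 threshold; benchmarks also report stricter variants such as Acc@0.5.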
