
MVGGT: Multimodal Visual Geometry Grounded Transformer for Multiview 3D Referring Expression Segmentation

About

Most existing 3D referring expression segmentation (3DRES) methods rely on dense, high-quality point clouds, while real-world agents such as robots and mobile phones operate with only a few sparse RGB views and strict latency constraints. We introduce Multi-view 3D Referring Expression Segmentation (MV-3DRES), where the model must recover scene structure and segment the referred object directly from sparse multi-view images. Traditional two-stage pipelines, which first reconstruct a point cloud and then perform segmentation, often yield low-quality geometry, produce coarse or degraded target regions, and run slowly. We propose the Multimodal Visual Geometry Grounded Transformer (MVGGT), an efficient end-to-end framework that integrates language information into sparse-view geometric reasoning through a dual-branch design. Training in this setting exposes a critical optimization barrier, termed Foreground Gradient Dilution (FGD), where sparse 3D signals lead to weak supervision. To resolve this, we introduce Per-view No-target Suppression Optimization (PVSO), which provides stronger and more balanced gradients across views, enabling stable and efficient learning. To support consistent evaluation, we build MVRefer, a benchmark that defines standardized settings and metrics for MV-3DRES. Experiments show that MVGGT establishes the first strong baseline and achieves both high accuracy and fast inference, outperforming existing alternatives. The code is available at https://mvggt.github.io/.
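The abstract does not give the exact form of the PVSO objective, but the stated idea (per-view supervision with explicit suppression on views where the referred object is absent, so that sparse foreground pixels are not diluted by global pooling) can be illustrated with a minimal, hypothetical PyTorch sketch. The function name `pvso_loss` and the tensor shapes are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pvso_loss(logits, gt_masks):
    """Hypothetical per-view loss sketch (not the paper's code).

    logits:   [V, H, W] raw per-view segmentation logits
    gt_masks: [V, H, W] binary per-view ground-truth masks
    """
    per_view_losses = []
    for v in range(logits.shape[0]):
        target = gt_masks[v].float()
        if target.sum() == 0:
            # No-target view: suppress all foreground predictions,
            # giving this view its own full-strength gradient signal.
            loss_v = F.binary_cross_entropy_with_logits(
                logits[v], torch.zeros_like(logits[v]))
        else:
            # Target view: standard per-pixel BCE against the mask.
            loss_v = F.binary_cross_entropy_with_logits(logits[v], target)
        per_view_losses.append(loss_v)
    # Averaging per view (rather than per pixel over all views pooled
    # together) keeps views with sparse foreground from being diluted.
    return torch.stack(per_view_losses).mean()
```

Under this reading, each view contributes equally to the total loss, which is one plausible way to obtain the "stronger and more balanced gradients across views" the abstract describes.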

Changli Wu, Haodong Wang, Jiayi Ji, Yutian Yao, Chunsai Du, Jihua Kang, Yanwei Fu, Liujuan Cao • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Referring 3D Instance Segmentation | ScanRefer (val) | mIoU 65.2 | 37 |
| 3D Referring Expression Segmentation | ScanRefer Multiple | Acc@25 0.492 | 7 |
| 3D Referring Expression Segmentation | MVRefer (Hard) | mIoU (global) 24.4 | 3 |
| 3D Referring Expression Segmentation | MVRefer Easy (~60%) | mIoU (global) 50.1 | 3 |
| 3D Referring Expression Segmentation | MVRefer Unique (~19%) | mIoU (global) 65.2 | 3 |
| 3D Referring Expression Segmentation | MVRefer Multiple (~81%) | mIoU (global) 33.8 | 3 |
| 3D Referring Expression Segmentation | MVRefer (Overall) | mIoU (global) 39.9 | 3 |
