Cross-Attentive Multiview Fusion of Vision-Language Embeddings

About

Vision-language models have been key to the development of open-vocabulary 2D semantic segmentation. Lifting these models from 2D images to 3D scenes, however, remains a challenging problem. Existing approaches typically back-project and average 2D descriptors across views, or heuristically select a single representative descriptor, often resulting in suboptimal 3D representations. In this work, we introduce a novel multiview transformer architecture that cross-attends over vision-language descriptors from multiple viewpoints and fuses them into a unified per-3D-instance embedding. As a second contribution, we leverage multiview consistency as a self-supervision signal for this fusion, which significantly improves performance when added to a standard supervised target-class loss. Our Cross-Attentive Multiview Fusion (CAMFusion) not only consistently outperforms naive averaging and single-view descriptor selection, but also achieves state-of-the-art results on 3D semantic and instance classification benchmarks, including zero-shot evaluations on out-of-domain datasets.
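
To make the two contributions concrete, below is a minimal PyTorch sketch of (a) a learnable query that cross-attends over per-view vision-language descriptors to produce one per-instance embedding, and (b) a cosine-based multiview-consistency term usable as self-supervision. Everything here is an assumption for illustration: the names `MultiviewFusion` and `multiview_consistency_loss`, the single-query single-layer attention, and all hyperparameters are not from the paper, which may use a deeper transformer and a different consistency formulation.

```python
# Minimal sketch, not the paper's implementation. Assumes per-view
# vision-language descriptors (e.g., CLIP embeddings) are already
# extracted and grouped per 3D instance.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiviewFusion(nn.Module):
    """Fuses V per-view descriptors into one per-instance embedding
    via a learnable query that cross-attends over the views."""

    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        # One learnable query token that will aggregate the views.
        self.query = nn.Parameter(torch.randn(1, 1, dim) * 0.02)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, view_feats: torch.Tensor) -> torch.Tensor:
        # view_feats: (B, V, D) -- one descriptor per view per instance.
        q = self.query.expand(view_feats.size(0), -1, -1)  # (B, 1, D)
        fused, _ = self.attn(q, view_feats, view_feats)    # cross-attention
        return self.norm(fused.squeeze(1))                 # (B, D)


def multiview_consistency_loss(fused: torch.Tensor,
                               view_feats: torch.Tensor) -> torch.Tensor:
    # One plausible self-supervision signal: pull the fused embedding
    # toward each of its source views (cosine similarity), label-free.
    fused = F.normalize(fused, dim=-1).unsqueeze(1)  # (B, 1, D)
    views = F.normalize(view_feats, dim=-1)          # (B, V, D)
    return (1.0 - (fused * views).sum(-1)).mean()


if __name__ == "__main__":
    fusion = MultiviewFusion(dim=512)
    feats = torch.randn(16, 8, 512)   # 16 instances, 8 views each
    emb = fusion(feats)               # (16, 512)
    loss = multiview_consistency_loss(emb, feats)
```

In training, such a consistency term would typically be added to the supervised target-class loss with a weighting coefficient, e.g. `loss = loss_cls + lambda_cons * multiview_consistency_loss(emb, feats)`, mirroring the abstract's combination of the self-supervision signal with a standard supervised objective.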

Tomas Berriel Martins, Martin R. Oswald, Javier Civera • 2026

Related benchmarks

Task                                      Dataset         Metric    Result  Rank
3D Instance Segmentation                  ScanNet200      mAP@0.5   39.3    63
3D Instance Segmentation                  Replica         AP25      46.6    24
3D Instance Segmentation                  3RScan (test)   mAP       42.9    10
Open-Vocabulary 3D Semantic Segmentation  Replica (test)  All IoU   38.3    7
3D Semantic Segmentation                  ScanNet200      mIoU      17.7    5
3D Semantic Segmentation                  ScanNet20       mIoU      35.7    5
