Cross-Attentive Multiview Fusion of Vision-Language Embeddings
About
Vision-language models have been key to the development of open-vocabulary 2D semantic segmentation. Lifting these models from 2D images to 3D scenes, however, remains a challenging problem. Existing approaches typically back-project and average 2D descriptors across views, or heuristically select a single representative view, often resulting in suboptimal 3D representations. In this work, we introduce a novel multiview transformer architecture that cross-attends across vision-language descriptors from multiple viewpoints and fuses them into a unified per-3D-instance embedding. As a second contribution, we leverage multiview consistency as a self-supervision signal for this fusion, which significantly improves performance when added to a standard supervised target-class loss. Our method, Cross-Attentive Multiview Fusion (CAMFusion), not only consistently outperforms naive averaging and single-view descriptor selection, but also achieves state-of-the-art results on 3D semantic and instance classification benchmarks, including zero-shot evaluations on out-of-domain datasets.
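The fusion idea described above can be sketched in simplified form: a query token cross-attends over the per-view vision-language descriptors of one 3D instance, and the attention-weighted combination becomes the fused embedding. The sketch below is a minimal, dependency-free illustration under assumed simplifications (single head, keys equal to values, a plain cosine-based consistency term); the function names and the exact form of the consistency loss are assumptions, not the paper's actual architecture or objective.

```python
import math

def dot(a, b):
    # Inner product of two equal-length vectors.
    return sum(x * y for x, y in zip(a, b))

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attend(query, view_descriptors):
    """Single-head cross-attention sketch: one query token attends over
    the per-view descriptors of a 3D instance (keys = values here) and
    returns their attention-weighted fusion plus the attention weights."""
    d = len(query)
    scale = 1.0 / math.sqrt(d)
    scores = [dot(query, v) * scale for v in view_descriptors]
    weights = softmax(scores)
    fused = [sum(w * v[i] for w, v in zip(weights, view_descriptors))
             for i in range(d)]
    return fused, weights

def cosine(a, b):
    # Cosine similarity with a small epsilon for numerical safety.
    na = math.sqrt(dot(a, a))
    nb = math.sqrt(dot(b, b))
    return dot(a, b) / (na * nb + 1e-8)

def multiview_consistency_loss(fused, view_descriptors):
    """Assumed self-supervision signal: penalize the fused embedding for
    drifting away from the individual view descriptors
    (1 - mean cosine similarity; lower is more consistent)."""
    sims = [cosine(fused, v) for v in view_descriptors]
    return 1.0 - sum(sims) / len(sims)

# Toy example: three 4-d descriptors of the same instance from different views.
views = [[1.0, 0.0, 0.0, 0.0],
         [0.0, 1.0, 0.0, 0.0],
         [0.5, 0.5, 0.0, 0.0]]
query = [1.0, 1.0, 0.0, 0.0]  # hypothetical learnable query token
fused, weights = cross_attend(query, views)
loss = multiview_consistency_loss(fused, views)
```

In a real implementation the query would be a learned parameter, the descriptors would come from a frozen vision-language encoder (e.g. CLIP-style features back-projected per view), and attention would be multi-head with separate key/value projections; the sketch keeps only the structural idea of attention-weighted multiview fusion.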
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Instance Segmentation | ScanNet200 | mAP@0.5 | 39.3 | 63 |
| 3D Instance Segmentation | Replica | AP25 | 46.6 | 24 |
| 3D Instance Segmentation | 3RScan (test) | mAP | 42.9 | 10 |
| Open-Vocabulary 3D Semantic Segmentation | Replica (test) | All IoU | 38.3 | 7 |
| 3D Semantic Segmentation | ScanNet200 | mIoU | 17.7 | 5 |
| 3D Semantic Segmentation | ScanNet20 | mIoU | 35.7 | 5 |