Contrastive Language-Colored Pointmap Pretraining for Unified 3D Scene Understanding
About
Pretraining 3D encoders by aligning them with Contrastive Language Image Pretraining (CLIP) has emerged as a promising direction for learning generalizable representations for 3D scene understanding. In this paper, we propose UniScene3D, a transformer-based encoder that learns unified scene representations from multi-view colored pointmaps, jointly modeling image appearance and geometry. For robust colored pointmap representation learning, we introduce novel cross-view geometric alignment and grounded view alignment objectives that enforce cross-view geometric and semantic consistency. Extensive low-shot and task-specific fine-tuning evaluations on viewpoint grounding, scene retrieval, scene type classification, and 3D VQA demonstrate state-of-the-art performance. These results highlight the effectiveness of our approach for unified 3D scene understanding. https://yebulabula.github.io/UniScene3D/
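The CLIP-style alignment described above is typically trained with a symmetric contrastive (InfoNCE) objective between the 3D encoder's scene embeddings and the frozen CLIP embeddings of the paired views. The sketch below is illustrative only, assuming a standard symmetric InfoNCE loss with a fixed temperature; the function name, temperature value, and NumPy formulation are not taken from the paper.

```python
import numpy as np

def symmetric_info_nce(scene_emb, clip_emb, temperature=0.07):
    """Symmetric InfoNCE between N scene embeddings and N CLIP embeddings.

    Row i of each matrix is assumed to be a matched pair; all other
    rows in the batch serve as negatives.
    """
    # L2-normalize both embedding sets so dot products are cosine similarities
    s = scene_emb / np.linalg.norm(scene_emb, axis=1, keepdims=True)
    c = clip_emb / np.linalg.norm(clip_emb, axis=1, keepdims=True)
    logits = s @ c.T / temperature  # (N, N) similarity matrix

    idx = np.arange(len(s))

    def cross_entropy(l):
        # Numerically stable log-softmax over each row
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        # Matched pairs sit on the diagonal
        return -log_probs[idx, idx].mean()

    # Average the scene-to-CLIP and CLIP-to-scene directions
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

With matched pairs on the diagonal, the loss is small when each scene embedding is closest to its own CLIP embedding and grows as pairs are shuffled.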
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Visual Question Answering | SQA3D | EM@1 | 52.5 | 8 |
| Scene Retrieval | ScanRefer (n=5) | Recall@1 | 22.4 | 8 |
| Scene Retrieval | ScanRefer (n=10) | Recall@1 | 33.4 | 8 |
| Scene Retrieval | Nr3D (n=5) | R@1 | 19.7 | 8 |
| Scene Retrieval | Nr3D (n=10) | R@1 | 30.7 | 8 |
| Scene Retrieval | Sr3D (n=5) | R@1 | 3 | 8 |
| Scene Retrieval | Sr3D (n=10) | R@1 | 4.6 | 8 |
| Scene Type Classification | ScanNet v2 (test) | Accuracy (0-shot) | 70.7 | 8 |
| Viewpoint Grounding | Locate-3D (ScanNet++) | Performance on Room Type Prompt | 69.13 | 8 |
| Viewpoint Grounding | ScanRefer | R@1 | 38.6 | 6 |