TrianguLang: Geometry-Aware Semantic Consensus for Pose-Free 3D Localization
About
Localizing objects and parts in 3D from natural language is essential for robotics, AR, and embodied AI, yet existing methods trade off between the accuracy and geometric consistency of per-scene optimization and the efficiency of feed-forward inference. We present TrianguLang, a feed-forward framework for 3D localization that requires no camera calibration at inference time. Unlike prior methods that treat views independently, we introduce Geometry-Aware Semantic Attention (GASA), which uses predicted geometry to gate cross-view feature correspondence, suppressing semantically plausible but geometrically inconsistent matches without requiring ground-truth poses. Validated on five benchmarks, including ScanNet++ and uCO3D, TrianguLang achieves state-of-the-art feed-forward text-guided segmentation and localization, reducing user effort from $O(N)$ clicks to a single text query. The model processes each frame at 1008×1008 resolution in $\sim$57 ms ($\sim$18 FPS) without per-scene optimization, enabling practical deployment in interactive robotics and AR applications. Code and checkpoints are available at https://cwru-aism.github.io/triangulang/.
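The core idea behind GASA, as described above, is to modulate cross-view attention by a geometric-consistency gate derived from predicted 3D geometry. The following is a minimal NumPy sketch of that gating pattern; the function name, the Gaussian gate, the `sigma` bandwidth, and the per-token 3D points are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def geometry_gated_attention(q, k, v, pts_q, pts_k, sigma=1.0):
    """Cross-view attention gated by geometric consistency (illustrative).

    q: (Nq, d) query-view features   k, v: (Nk, d) key-view features/values
    pts_q: (Nq, 3), pts_k: (Nk, 3)   predicted 3D points per token
    sigma: hypothetical gate bandwidth in world units
    """
    d = q.shape[-1]
    # Semantic similarity logits, as in standard scaled dot-product attention.
    logits = (q @ k.T) / np.sqrt(d)
    # Pairwise distances between predicted 3D points across the two views.
    dist = np.linalg.norm(pts_q[:, None, :] - pts_k[None, :, :], axis=-1)
    # Gaussian gate: semantically similar but geometrically distant pairs
    # receive a large negative bias and are suppressed.
    gate = np.exp(-(dist / sigma) ** 2)
    attn = softmax(logits + np.log(gate + 1e-9), axis=-1)
    return attn @ v, attn
```

A pair whose predicted 3D points are far apart gets a near-zero gate, so its attention weight collapses even when the semantic logit is high, which is the "suppressing semantically plausible but geometrically inconsistent matches" behavior in the abstract.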
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Semantic Segmentation | ScanNet++ | mIoU (20 classes) | 62.4 | 31 |
| Semantic Segmentation | ScanNet++ | Mean IoU (mIoU) | 62.4 | 15 |
| Open-Vocabulary Segmentation | SPIn-NeRF | mIoU | 91.4 | 8 |
| Open-Vocabulary Segmentation | NVOS | mIoU | 93.5 | 7 |
| 3D Semantic Segmentation | uCO3D | mIoU | 94.6 | 6 |
| Semantic Segmentation | uCO3D | mIoU | 94.6 | 5 |
| Language Grounding | LERF-OVS | mIoU | 58.1 | 4 |