
Point Linguist Model: Segment Any Object via Bridged Large 3D-Language Model

About

3D object segmentation with Large Language Models (LLMs) has become a prevailing paradigm due to its broad semantics, task flexibility, and strong generalization. However, this paradigm is hindered by representation misalignment: LLMs process high-level semantic tokens, whereas 3D point clouds convey only dense geometric structures. In prior methods, this misalignment limits both input and output. At the input stage, dense point patches require heavy pre-alignment, weakening object-level semantics and confusing similar distractors. At the output stage, predictions depend only on dense features without explicit geometric cues, leading to a loss of fine-grained accuracy. To address these limitations, we present the Point Linguist Model (PLM), a general framework that bridges the representation gap between LLMs and dense 3D point clouds without requiring large-scale 3D-text or 3D-image pre-alignment. Specifically, we introduce the Object-centric Discriminative Representation (OcDR), which learns object-centric tokens that capture target semantics and scene relations under a hard-negative-aware training objective. This mitigates the misalignment between LLM tokens and 3D points, enhances resilience to distractors, and facilitates semantic-level reasoning within LLMs. For accurate segmentation, we introduce the Geometric Reactivation Decoder (GRD), which predicts masks by combining OcDR tokens carrying LLM-inferred geometry with the corresponding dense features, preserving comprehensive dense features throughout the pipeline. Extensive experiments show that PLM achieves significant improvements of +7.3 mIoU on ScanNetv2 and +6.0 mIoU on Multi3DRefer for 3D referring segmentation, with consistent gains across 7 benchmarks spanning 4 different tasks, demonstrating the effectiveness of comprehensive object-centric reasoning for robust 3D understanding.
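The two ideas above can be illustrated with a minimal sketch: mask prediction by dotting object-centric query tokens with per-point dense features (a Mask2Former-style decoder, used here as a stand-in for GRD), and an InfoNCE-style contrastive loss with hard negatives as a stand-in for the hard-negative-aware objective. All function names, shapes, and the temperature value are illustrative assumptions, not PLM's actual implementation.

```python
import torch
import torch.nn.functional as F


def predict_masks(object_tokens, dense_features):
    """Reactivate geometry from dense features: each object token is dotted
    with every per-point feature, yielding per-object mask logits.
    object_tokens: (B, Q, C); dense_features: (B, N, C) -> logits (B, Q, N)."""
    return torch.einsum("bqc,bnc->bqn", object_tokens, dense_features)


def hard_negative_contrastive_loss(anchor, positive, negatives, tau=0.07):
    """InfoNCE-style objective where the negatives are hard distractors
    (e.g. same-category objects in the scene).
    anchor: (B, C); positive: (B, C); negatives: (B, K, C)."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(-1, keepdim=True) / tau       # (B, 1)
    neg = torch.einsum("bc,bkc->bk", anchor, negatives) / tau   # (B, K)
    logits = torch.cat([pos, neg], dim=-1)                      # (B, 1+K)
    labels = torch.zeros(anchor.size(0), dtype=torch.long)      # positive is index 0
    return F.cross_entropy(logits, labels)
```

Pushing the anchor token away from hard distractors is what makes the token discriminative at the object level; the dot-product decoding then grounds that token back onto the dense point features to produce a mask.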

Zhuoxu Huang, Mingqi Gao, Jungong Han • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic segmentation | ScanNet V2 | mIoU | 66 | 54 |
| Semantic segmentation | ScanNet200 | mIoU | 43.5 | 12 |
| 3D Open-vocabulary Instance Segmentation | S3DIS (8/4 base/novel categories split) | AP@50 | 38.7 | 11 |
| 3D Open-vocabulary Instance Segmentation | S3DIS (6/6 base/novel categories split) | AP@50 | 34 | 11 |
| 3D Open-vocabulary Instance Segmentation | ScanNetv2 (10/7 base/novel categories split) | AP@50 | 54.1 | 11 |
| 3D Open-vocabulary Instance Segmentation | ScanNetv2 (8/9 base/novel categories split) | AP@50 | 60.5 | 11 |
| Referring Expression Segmentation | ScanRefer | mIoU | 43.1 | 9 |
| Open Vocabulary Semantic Segmentation | ScanNet V2 (N7) | mIoU | 66.1 | 7 |
| Open Vocabulary Semantic Segmentation | ScanNet V2 (N9) | mIoU | 66.5 | 7 |
| Open Vocabulary Semantic Segmentation | ScanNet200 (N30) | mIoU | 42.5 | 7 |

Showing 10 of 19 rows.
