
SUGAR: Pre-training 3D Visual Representations for Robotics

About

Learning generalizable visual representations from Internet data has yielded promising results for robotics. Yet, prevailing approaches focus on pre-training 2D representations, which are sub-optimal for handling occlusions and for accurately localizing objects in complex 3D scenes. Meanwhile, 3D representation learning has been limited to single-object understanding. To address these limitations, we introduce a novel 3D pre-training framework for robotics named SUGAR that captures semantic, geometric, and affordance properties of objects through 3D point clouds. We underscore the importance of cluttered scenes in 3D representation learning and automatically construct a multi-object dataset benefiting from cost-free supervision in simulation. SUGAR employs a versatile transformer-based model to jointly address five pre-training tasks: cross-modal knowledge distillation for semantic learning, masked point modeling to understand geometric structure, grasping pose synthesis for object affordance, and 3D instance segmentation and referring expression grounding to analyze cluttered scenes. We evaluate the learned representation on three robotics-related tasks: zero-shot 3D object recognition, referring expression grounding, and language-driven robotic manipulation. Experimental results show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
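The abstract states that a single transformer-based model is trained jointly on five pre-training objectives. A common way to realize such joint training is a weighted sum of per-task losses; the sketch below illustrates that pattern only. The function name, task keys, weights, and loss values are all assumptions for illustration, not the authors' code or hyperparameters.

```python
# Illustrative sketch (not the SUGAR implementation): combining the five
# pre-training objectives named in the abstract into one multi-task loss.
# All names, weights, and values here are hypothetical.

def multi_task_loss(losses: dict, weights: dict = None) -> float:
    """Return the weighted sum of per-task losses (uniform weights by default)."""
    if weights is None:
        weights = {name: 1.0 for name in losses}
    return sum(weights[name] * value for name, value in losses.items())

# The five tasks from the abstract, with made-up loss values:
losses = {
    "distillation": 0.8,   # cross-modal knowledge distillation (semantics)
    "masked_points": 1.2,  # masked point modeling (geometry)
    "grasping": 0.5,       # grasping pose synthesis (affordance)
    "segmentation": 0.9,   # 3D instance segmentation (cluttered scenes)
    "grounding": 0.7,      # referring expression grounding (cluttered scenes)
}
total = multi_task_loss(losses)
```

In practice the per-task weights would be tuned (or learned) to balance gradients across objectives; the abstract does not specify how SUGAR weighs its tasks.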

Shizhe Chen, Ricardo Garcia, Ivan Laptev, Cordelia Schmid • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D Object Classification | ModelNet40 (test) | – | – | 302 |
| 3D Object Classification | Objaverse-LVIS (test) | Top-1 Accuracy | 42.1 | 95 |
| 3D Object Classification | ScanObjectNN OBJ-ONLY (test) | Accuracy | 65.3 | 49 |
| 3D Object Recognition | ScanObjectNN OBJ_BG (test) | Top-1 Accuracy | 68 | 35 |
| Object Classification | ScanObjectNN | – | – | 29 |
| Object Recognition | Objaverse LVIS | Top-1 Accuracy | 49.5 | 25 |
| 3D Object Recognition | ScanObjectNN PB_T50_RS (test) | Top-1 Accuracy | 49.3 | 14 |
| Multi-task Robotic Manipulation | RLBench 100 demonstrations (test) | Average Success Rate | 93 | 11 |
| Recognition | ModelNet40 | Top-1 Accuracy | 84.6 | 10 |
| Referring Expression Detection | OCID-Ref (test) | Acc@0.25 (Total) | 97.74 | 5 |
