
3D sans 3D Scans: Scalable Pre-training from Video-Generated Point Clouds

About

Despite recent progress in 3D self-supervised learning, collecting large-scale 3D scene scans remains expensive and labor-intensive. In this work, we investigate whether 3D representations can be learned from unlabeled videos recorded without any real 3D sensors. We present Laplacian-Aware Multi-level 3D Clustering with Sinkhorn-Knopp (LAM3C), a self-supervised framework that learns from video-generated point clouds reconstructed from unlabeled videos. We first introduce RoomTours, a video-generated point cloud dataset constructed by collecting room-walkthrough videos from the web (e.g., real-estate tours) and generating 49,219 scenes using an off-the-shelf feed-forward reconstruction model. We also propose a noise-regularized loss that stabilizes representation learning by enforcing local geometric smoothness and ensuring feature stability under noisy point clouds. Remarkably, without using any real 3D scans, LAM3C achieves better performance than previous self-supervised methods on indoor semantic and instance segmentation. These results suggest that unlabeled videos represent an abundant source of data for 3D self-supervised learning. Our source code is available at https://ryosuke-yamada.github.io/lam3c/.
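The method's name references the Sinkhorn-Knopp algorithm, which is commonly used in self-supervised clustering to turn feature-to-prototype similarity scores into balanced soft cluster assignments (so no prototype collapses to zero usage). The abstract does not give implementation details, so the following is only a generic sketch of Sinkhorn-Knopp assignment, not the paper's actual loss; the function name and parameters are illustrative.

```python
import numpy as np

def sinkhorn_knopp(scores, n_iters=3, eps=0.05):
    """Balanced soft cluster assignment via Sinkhorn-Knopp (generic sketch).

    scores: (N, K) similarity logits between N point features and K prototypes.
    Returns an (N, K) matrix whose rows sum to 1 (a soft assignment per point)
    and whose columns each receive roughly N/K total mass (balance constraint).
    """
    Q = np.exp(scores / eps)              # positive matrix; eps is a temperature
    Q /= Q.sum()                          # normalize to a joint distribution
    N, K = Q.shape
    for _ in range(n_iters):
        Q /= Q.sum(axis=0, keepdims=True)  # each column sums to 1 ...
        Q /= K                             # ... then to 1/K (uniform over clusters)
        Q /= Q.sum(axis=1, keepdims=True)  # each row sums to 1 ...
        Q /= N                             # ... then to 1/N (uniform over points)
    return Q * N                           # rescale so each row sums to 1

rng = np.random.default_rng(0)
assignments = sinkhorn_knopp(rng.normal(size=(8, 4)))
```

In practice the balanced assignments serve as pseudo-labels for a cross-entropy objective on the model's predicted cluster probabilities; a few Sinkhorn iterations are usually sufficient.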

Ryosuke Yamada, Kohsuke Ide, Yoshihiro Fukuhara, Hirokatsu Kataoka, Gilles Puy, Andrei Bursuc, Yuki M. Asano • 2025

Related benchmarks

Task                      Dataset            Metric        Result   Rank
3D Instance Segmentation  S3DIS (Area 5)     mAP@50% IoU   47.2     106
Instance Segmentation     ScanNet200 (val)   --            --       72
Instance Segmentation     ScanNet (val)      mAP           41.7     62
Instance Segmentation     ScanNet++ (val)    mAP           0.223    24
