
3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions

About

Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-the-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor is not only able to match local geometry in new scenes for reconstruction, but also to generalize to different tasks and spatial scales (e.g. instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http://3dmatch.cs.princeton.edu
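The core use of a learned descriptor like 3DMatch is to establish correspondences between two partial scans by comparing descriptor vectors in feature space. The sketch below illustrates this with a mutual nearest-neighbor check over descriptor arrays; the function name and the mutual-check heuristic are illustrative assumptions, not the paper's exact matching pipeline.

```python
import numpy as np

def match_descriptors(desc_a, desc_b):
    """Mutual nearest-neighbor matching in descriptor space (illustrative sketch).

    desc_a: (N, D) array of local patch descriptors from scan A
    desc_b: (M, D) array of local patch descriptors from scan B
    Returns a list of (i, j) index pairs that are mutual nearest neighbors.
    """
    # Pairwise squared L2 distances between all descriptor pairs, shape (N, M)
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=-1)
    nn_ab = d2.argmin(axis=1)  # best match in B for each descriptor in A
    nn_ba = d2.argmin(axis=0)  # best match in A for each descriptor in B
    # Keep only mutual matches to suppress ambiguous correspondences
    return [(i, j) for i, j in enumerate(nn_ab) if nn_ba[j] == i]
```

Correspondences produced this way are typically fed to a robust pose estimator (e.g. RANSAC) to align the two fragments.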

Andy Zeng, Shuran Song, Matthias Nießner, Matthew Fisher, Jianxiong Xiao, Thomas Funkhouser · 2016

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Point cloud registration | 3DMatch (test) | – | – | 339 |
| Point cloud registration | ETH | Success Rate | 91.7 | 38 |
| Feature Matching | 3DMatch (Origin) | STD | 8.8 | 33 |
| Feature Matching | ETH dataset (test) | FMR (Gazebo Summer) | 22.8 | 23 |
| Reconstruction | Replica (average over 8 scenes) | Accuracy (Dist) | 1.56 | 21 |
| Descriptor matching | 3DMatch Rotated | STD | 1.2 | 18 |
| Local Descriptor Matching | 3DMatch 1.0 (test) | Kitchen Scene Performance | 57.51 | 18 |
| Geometric Registration | KITTI | RTE | 0.283 | 16 |
| 3D local descriptor matching | 3DMatch | Average Recall | 57.3 | 16 |
| Feature Matching | 3DMatch | FMR (τ₂ = 0.05) | 59.6 | 15 |
Showing 10 of 28 rows
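The FMR (feature match recall) entries above are commonly defined as the fraction of fragment pairs for which the inlier ratio of putative correspondences, evaluated under the ground-truth pose, exceeds a threshold τ₂. The sketch below assumes that convention (with an inlier distance threshold τ₁ in meters); the function name and data layout are illustrative, not taken from the benchmark code.

```python
import numpy as np

def feature_match_recall(pairs, tau1=0.1, tau2=0.05):
    """Feature match recall over a set of fragment pairs (illustrative sketch).

    pairs: list of (pts_a, pts_b, T_gt), where pts_a and pts_b are (K, 3)
    arrays of putatively corresponding points and T_gt is the 4x4
    ground-truth transform mapping fragment A into fragment B's frame.

    A pair counts as matched when the fraction of correspondences whose
    residual under T_gt is below tau1 exceeds tau2.
    """
    matched = 0
    for pts_a, pts_b, T_gt in pairs:
        pts_a_h = np.c_[pts_a, np.ones(len(pts_a))]   # homogeneous coordinates
        pts_a_in_b = (T_gt @ pts_a_h.T).T[:, :3]      # apply ground-truth pose
        residual = np.linalg.norm(pts_a_in_b - pts_b, axis=1)
        inlier_ratio = (residual < tau1).mean()
        matched += inlier_ratio > tau2
    return matched / len(pairs)
```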
