3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions
About
Matching local geometric features on real-world depth images is a challenging task due to the noisy, low-resolution, and incomplete nature of 3D scan data. These difficulties limit the performance of current state-of-the-art methods, which are typically based on histograms over geometric properties. In this paper, we present 3DMatch, a data-driven model that learns a local volumetric patch descriptor for establishing correspondences between partial 3D data. To amass training data for our model, we propose a self-supervised feature learning method that leverages the millions of correspondence labels found in existing RGB-D reconstructions. Experiments show that our descriptor is not only able to match local geometry in new scenes for reconstruction, but also generalizes to different tasks and spatial scales (e.g., instance-level object model alignment for the Amazon Picking Challenge, and mesh surface correspondence). Results show that 3DMatch consistently outperforms other state-of-the-art approaches by a significant margin. Code, data, benchmarks, and pre-trained models are available online at http://3dmatch.cs.princeton.edu
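Once a descriptor like 3DMatch has been computed for local patches in two partial scans, correspondences are typically established by nearest-neighbor search in descriptor space. The sketch below (an illustrative assumption, not the paper's released code) shows mutual nearest-neighbor matching between two sets of descriptor vectors using only NumPy; the array shapes and the 512-dimensional feature size are hypothetical placeholders.

```python
import numpy as np

def mutual_nn_matches(desc_a, desc_b):
    """Mutual nearest-neighbor correspondences between two descriptor sets.

    desc_a: (n_a, d) array, one feature vector per local patch in scan A.
    desc_b: (n_b, d) array, same for scan B.
    Returns a list of (i, j) index pairs that are each other's nearest
    neighbor in Euclidean descriptor space.
    """
    # Pairwise squared Euclidean distances via the expansion
    # ||a - b||^2 = ||a||^2 + ||b||^2 - 2 a.b
    d2 = (np.sum(desc_a**2, axis=1)[:, None]
          + np.sum(desc_b**2, axis=1)[None, :]
          - 2.0 * desc_a @ desc_b.T)
    nn_ab = np.argmin(d2, axis=1)  # best match in B for each patch in A
    nn_ba = np.argmin(d2, axis=0)  # best match in A for each patch in B
    # Keep only pairs that agree in both directions (mutual check
    # filters out many false matches before geometric verification).
    return [(i, int(j)) for i, j in enumerate(nn_ab) if nn_ba[j] == i]

# Toy example: descriptors for scan B are noisy copies of scan A's,
# so each patch should match its counterpart.
rng = np.random.default_rng(0)
a = rng.standard_normal((5, 512))
b = a + 0.01 * rng.standard_normal((5, 512))
print(mutual_nn_matches(a, b))  # [(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]
```

In a full registration pipeline, such putative matches would then be pruned with a geometric verification step such as RANSAC before estimating the rigid transform.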
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Point cloud registration | 3DMatch (test) | -- | 339 |
| Point cloud registration | ETH | Success Rate: 91.7 | 38 |
| Feature Matching | 3DMatch (Origin) | STD: 8.8 | 33 |
| Feature Matching | ETH dataset (test) | FMR (Gazebo Summer): 22.8 | 23 |
| Reconstruction | Replica average over 8 scenes | Accuracy (Dist): 1.56 | 21 |
| Descriptor matching | 3DMatch Rotated | STD: 1.2 | 18 |
| Local Descriptor Matching | 3DMatch 1.0 (test) | Kitchen Scene Performance: 57.51 | 18 |
| Geometric Registration | KITTI | RTE: 0.283 | 16 |
| 3D local descriptor matching | 3DMatch | Average Recall: 57.3 | 16 |
| Feature Matching | 3DMatch | FMR (tau_2=0.05): 59.6 | 15 |