3DP3: 3D Scene Perception via Probabilistic Programming
About
We present 3DP3, a framework for inverse graphics that uses inference in a structured generative model of objects, scenes, and images. 3DP3 uses (i) voxel models to represent the 3D shape of objects, (ii) hierarchical scene graphs to decompose scenes into objects and the contacts between them, and (iii) depth image likelihoods based on real-time graphics. Given an observed RGB-D image, 3DP3's inference algorithm infers the underlying latent 3D scene, including the object poses and a parsimonious joint parametrization of these poses, using fast bottom-up pose proposals, novel involutive MCMC updates of the scene graph structure, and, optionally, neural object detectors and pose estimators. We show that 3DP3 enables scene understanding that is aware of 3D shape, occlusion, and contact structure. Our results demonstrate that 3DP3 is more accurate at 6DoF object pose estimation from real images than deep learning baselines and shows better generalization to challenging scenes with novel viewpoints, contact, and partial observability.
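To make the model structure concrete, here is a minimal sketch of the two ingredients the abstract names: a scene graph in which each object is either free-floating or in contact with a parent, and a per-pixel depth-image likelihood. All names and distributions here are illustrative assumptions, not the actual 3DP3 implementation or its API (3DP3 is built on a probabilistic programming system with a real-time renderer; this sketch substitutes toy stand-ins).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scene_graph(object_ids):
    """Sample a parsimonious scene structure (illustrative, not 3DP3's actual
    prior): each object is either 'floating' with a free pose, or in
    'contact' with an earlier object, parametrized by a low-DoF offset."""
    graph = {}
    for i, obj in enumerate(object_ids):
        if i == 0 or rng.random() < 0.5:
            # floating: position sampled over the whole workspace
            graph[obj] = ("floating", rng.uniform(-1, 1, size=3))
        else:
            # contact: constrained to a parent's support surface,
            # parametrized by a 2D offset (rotation omitted for brevity)
            parent = object_ids[rng.integers(i)]
            graph[obj] = ("contact", parent, rng.uniform(-0.1, 0.1, size=2))
    return graph

def depth_log_likelihood(observed, rendered, sigma=0.01, outlier_prob=0.05):
    """Per-pixel mixture likelihood: a Gaussian around the rendered depth,
    plus a uniform outlier component for sensor noise and unmodeled geometry.
    The mixture form is a common choice for depth likelihoods; the exact
    3DP3 likelihood may differ."""
    gauss = np.exp(-0.5 * ((observed - rendered) / sigma) ** 2) \
            / (sigma * np.sqrt(2 * np.pi))
    uniform = 1.0 / 5.0  # assumes depths lie in a 5 m range
    return np.sum(np.log((1 - outlier_prob) * gauss + outlier_prob * uniform))

graph = sample_scene_graph(["mug", "box", "plate"])
obs = rng.uniform(0.5, 2.0, size=(4, 4))       # stand-in observed depth image
rend = obs + rng.normal(0, 0.01, size=(4, 4))  # stand-in rendered depth
print(len(graph), np.isfinite(depth_log_likelihood(obs, rend)))
```

Inference would then score candidate scene graphs and poses by rendering each hypothesis and evaluating this likelihood against the observed depth image.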
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 6DoF Pose Estimation | YCB-Video (test) | 2D Error < 2cm Rate | 100 | 72 |
| 6DoF Pose Estimation | Synthetic YCB Challenging Single Object | Acc @ 0.5cm Thresh | 99 | 12 |
| 6DoF Pose Estimation | Synthetic YCB-Challenging (Stacked) | Acc @ 0.5cm Thresh | 86 | 2 |
| 6DoF Pose Estimation | Synthetic YCB-Challenging (Partially Occluded) | Acc @ 0.5cm Thresh | 70 | 2 |
| 6DoF Pose Estimation | Synthetic YCB-Challenging Partial View | Acc @ 0.5cm Thresh | 18 | 1 |
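The accuracy-at-threshold metrics above can be illustrated with a small sketch. This assumes an ADD-style error (mean distance between model points under the ground-truth and predicted poses), which is a standard choice for 6DoF pose benchmarks; the exact definition used by each leaderboard may differ.

```python
import numpy as np

def add_error(points, R_gt, t_gt, R_pred, t_pred):
    """Mean distance between model points transformed by the ground-truth
    and predicted rigid poses (the ADD error, in the same units as points)."""
    gt = points @ R_gt.T + t_gt
    pred = points @ R_pred.T + t_pred
    return np.mean(np.linalg.norm(gt - pred, axis=1))

def accuracy_at_threshold(errors, thresh=0.005):
    """Fraction of predictions whose error falls below the threshold
    (0.005 m = 0.5 cm, matching the table's threshold)."""
    return np.mean(np.asarray(errors) < thresh)

# Toy example: a point cloud and three translation-only pose errors.
pts = np.random.default_rng(1).uniform(-0.05, 0.05, size=(100, 3))
I = np.eye(3)
errs = [add_error(pts, I, np.zeros(3), I, np.array([dx, 0.0, 0.0]))
        for dx in (0.001, 0.004, 0.02)]
print(accuracy_at_threshold(errs))  # two of the three errors are under 0.5 cm
```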