
VConv-DAE: Deep Volumetric Shape Learning Without Object Labels

About

With the advent of affordable depth sensors, 3D capture is becoming increasingly ubiquitous and has already made its way into commercial products. Yet capturing the geometry or complete shapes of everyday objects with scanning devices (e.g., Kinect) still poses several challenges that result in noisy or even incomplete shapes. Recent successes in deep learning have shown how to learn complex shape distributions in a data-driven way from large-scale 3D CAD model collections and to utilize them for 3D processing on volumetric representations, thereby circumventing problems of topology and tessellation. Prior work has shown encouraging results on problems ranging from shape completion to recognition. We provide an analysis of such approaches and discover that both the training and the resulting representation are strongly, and unnecessarily, tied to the notion of object labels. We therefore propose a fully convolutional volumetric autoencoder that learns a volumetric representation from noisy data by estimating voxel occupancy grids. The proposed method outperforms prior work on challenging tasks such as denoising and shape completion. We also show that the learned deep embedding gives competitive performance when used for classification, and promising results for shape interpolation.
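To make the input representation concrete, here is a minimal sketch of voxelizing a point cloud into a binary occupancy grid and corrupting it with random voxel flips, the kind of noisy input/clean target pair a denoising autoencoder trains on. The function names, the 32³ resolution, and the flip-noise model are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def voxelize(points, grid_size=32):
    """Rasterize a point cloud of shape (N, 3) into a binary occupancy grid.

    Points are normalized into the unit cube first; grid_size=32 is an
    illustrative choice, not necessarily the resolution used in the paper.
    """
    pts = points - points.min(axis=0)
    pts = pts / (pts.max() + 1e-9)                      # scale into [0, 1]
    idx = np.clip((pts * grid_size).astype(int), 0, grid_size - 1)
    grid = np.zeros((grid_size,) * 3, dtype=np.float32)
    grid[idx[:, 0], idx[:, 1], idx[:, 2]] = 1.0         # mark occupied voxels
    return grid

def corrupt(grid, flip_prob=0.05, seed=None):
    """Randomly flip voxels to simulate sensor noise for denoising training."""
    rng = np.random.default_rng(seed)
    mask = rng.random(grid.shape) < flip_prob
    return np.where(mask, 1.0 - grid, grid)

# Toy example: a random blob of points stands in for a scanned object.
rng = np.random.default_rng(0)
points = rng.normal(size=(500, 3))
clean = voxelize(points)          # training target
noisy = corrupt(clean, seed=0)    # corrupted network input
```

An autoencoder for this data would map `noisy` back to `clean`; the paper's contribution is doing so with a fully convolutional volumetric architecture that needs no object labels.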

Abhishek Sharma, Oliver Grau, Mario Fritz • 2016

Related benchmarks

Task                                 | Dataset           | Metric              | Result | Rank
3D Shape Classification              | ModelNet40 (test) | Accuracy            | 75.5   | 227
Object Classification                | ModelNet40 (test) | Accuracy            | 75.5   | 180
Classification                       | ModelNet40 (test) | Accuracy            | 75.5   | 99
3D Shape Recognition                 | ModelNet10 (test) | Accuracy            | 80.5   | 64
3D Object Classification             | ModelNet10 (test) | Mean Class Accuracy | 80.5   | 57
Object Classification                | ModelNet10 (test) | Accuracy            | 80.5   | 46
Unsupervised Representation Learning | ModelNet40 (test) | Accuracy            | 75.5   | 13
Unsupervised Representation Learning | ModelNet10 (test) | Accuracy            | 80.5   | 10
