
3D Hand Pose Estimation using Simulation and Partial-Supervision with a Shared Latent Space

About

Tremendous amounts of expensive annotated data are a vital ingredient for state-of-the-art 3D hand pose estimation. Therefore, synthetic data has gained popularity, as annotations are automatically available. However, models trained only on synthetic samples do not generalize to real data, mainly due to the gap between the distributions of synthetic and real data. In this paper, we propose a novel method that predicts the 3D position of the hand using both synthetic and partially-labeled real data. Accordingly, we form a shared latent space between three modalities: synthetic depth image, real depth image, and pose. We demonstrate that by carefully learning the shared latent space, we can find a regression model that generalizes to real data. As such, we show that our method produces accurate predictions in both semi-supervised and unsupervised settings. Additionally, the proposed model is capable of generating novel, meaningful, and consistent samples from all three domains. We evaluate our method qualitatively and quantitatively on two highly competitive benchmarks (i.e., NYU and ICVL) and demonstrate its superiority over state-of-the-art methods. The source code will be made available at https://github.com/masabdi/LSPS.
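The central idea above is that three modalities (synthetic depth image, real depth image, pose) are all encoded into one shared latent space, from which a single regressor predicts the pose. The following is a minimal structural sketch of that idea, not the paper's actual architecture: the encoders are plain linear maps, and the dimensions, joint count, and variable names are all hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM = 8      # shared latent code size (hypothetical)
DEPTH_DIM = 64      # flattened depth-image size (hypothetical)
POSE_DIM = 14 * 3   # 14 joints x (x, y, z) coordinates (hypothetical)

# One encoder per modality, all mapping into the SAME latent space.
W_syn = rng.normal(scale=0.1, size=(LATENT_DIM, DEPTH_DIM))   # synthetic depth -> z
W_real = rng.normal(scale=0.1, size=(LATENT_DIM, DEPTH_DIM))  # real depth -> z
W_pose = rng.normal(scale=0.1, size=(LATENT_DIM, POSE_DIM))   # pose -> z

# A single decoder regresses the 3D pose from the shared latent code, so a
# regressor trained with fully-labeled synthetic pairs can also be applied to
# latent codes obtained from real depth images.
W_dec = rng.normal(scale=0.1, size=(POSE_DIM, LATENT_DIM))

def encode(x, W):
    return W @ x

def decode_pose(z):
    return W_dec @ z

syn_img = rng.normal(size=DEPTH_DIM)
real_img = rng.normal(size=DEPTH_DIM)
pose = rng.normal(size=POSE_DIM)

z_syn = encode(syn_img, W_syn)
z_real = encode(real_img, W_real)
z_pose = encode(pose, W_pose)

# All three modalities land in a common latent space...
assert z_syn.shape == z_real.shape == z_pose.shape == (LATENT_DIM,)
# ...from which one shared regressor predicts the 3D pose.
pred_pose = decode_pose(z_real)
assert pred_pose.shape == (POSE_DIM,)
```

In the paper the encoders/decoders are learned (e.g., with reconstruction and alignment objectives) so that synthetic and real codes become interchangeable; this sketch only shows the shared-space wiring.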

Masoud Abdi, Ehsan Abbasnejad, Chee Peng Lim, Saeid Nahavandi • 2018

Related benchmarks

Task                     | Dataset     | Metric          | Result | Rank
3D Hand Pose Estimation  | NYU (test)  | Mean Error (mm) | 15.4   | 100
3D Hand Pose Estimation  | ICVL (test) | Mean Error (mm) | 7      | 91
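The "Mean Error (mm)" metric in the benchmarks above is conventionally the mean Euclidean distance between predicted and ground-truth 3D joint positions, averaged over joints and frames. A sketch of that computation (function name and toy shapes are illustrative, not from the paper):

```python
import numpy as np

def mean_joint_error_mm(pred, gt):
    """Mean per-joint 3D error in millimetres.

    pred, gt: arrays of shape (n_frames, n_joints, 3), coordinates in mm.
    Returns the Euclidean distance per joint, averaged over all joints
    and frames.
    """
    return float(np.linalg.norm(pred - gt, axis=-1).mean())

# Toy example: 2 frames, 3 joints, every prediction off by 3 mm along x.
gt = np.zeros((2, 3, 3))
pred = gt.copy()
pred[..., 0] += 3.0
err = mean_joint_error_mm(pred, gt)  # every joint is exactly 3 mm off
```

With this metric, lower is better, so the 7 mm ICVL result is a stronger score than the 15.4 mm NYU result (the datasets differ in difficulty, so the numbers are not directly comparable).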
