
VirtualPose: Learning Generalizable 3D Human Pose Models from Virtual Data

About

While monocular 3D pose estimation appears to achieve very accurate results on public datasets, its generalization ability is largely overlooked. In this work, we perform a systematic evaluation of existing methods and find that they incur notably larger errors when tested on different cameras, human poses, and appearances. To address this problem, we introduce VirtualPose, a two-stage learning framework that exploits the hidden "free lunch" specific to this task, i.e., generating an infinite number of poses and cameras for training at no cost. The first stage transforms images into abstract geometry representations (AGR), and the second maps them to 3D poses. This addresses the generalization issue from two aspects: (1) the first stage can be trained on diverse 2D datasets to reduce the risk of over-fitting to limited appearances; (2) the second stage can be trained on diverse AGR synthesized from a large number of virtual cameras and poses. VirtualPose outperforms SOTA methods without using any paired images and 3D poses from the benchmarks, which paves the way for practical applications. Code is available at https://github.com/wkom/VirtualPose.
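The "free lunch" idea above is that stage-two training data can be synthesized without any real images: sample a 3D pose, sample a virtual camera, and project the pose to obtain an AGR. Below is a minimal sketch of that synthesis step under a standard pinhole camera model. The function names and the AGR format used here (2D joint locations plus root-joint depth) are illustrative assumptions, not the paper's exact representation.

```python
import numpy as np

def project_points(joints_3d, K, R, t):
    """Project 3D joints (N, 3) in world coordinates into the image plane
    of a pinhole camera with intrinsics K and extrinsics (R, t)."""
    cam = (R @ joints_3d.T + t.reshape(3, 1)).T   # world -> camera frame
    uv = (K @ cam.T).T                            # apply intrinsics
    return uv[:, :2] / uv[:, 2:3]                 # perspective divide

def synthesize_agr(joints_3d, K, R, t):
    """Synthesize one abstract geometry representation (AGR) training
    sample from a virtual camera -- no real image required.
    Here the AGR is assumed to be (2D joints, root-joint depth)."""
    joints_2d = project_points(joints_3d, K, R, t)
    root_depth = (R @ joints_3d[0] + t)[2]        # depth of the root joint
    return joints_2d, root_depth

# Example: one pose seen from a virtual camera 5 m in front of it.
joints = np.array([[0.0, 0.0, 0.0],               # root joint
                   [0.1, 0.0, 0.0]])              # a second joint, 10 cm away
K = np.array([[1000.0, 0.0, 500.0],
              [0.0, 1000.0, 500.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.array([0.0, 0.0, 5.0])
joints_2d, root_depth = synthesize_agr(joints, K, R, t)
```

Repeating this with randomly sampled poses, rotations, translations, and intrinsics yields an effectively unlimited (AGR, 3D pose) training set for the second stage, which is what decouples it from the appearance statistics of any one benchmark.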

Jiajun Su, Chunyu Wang, Xiaoxuan Ma, Wenjun Zeng, Yizhou Wang • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-person 3D Pose Estimation | MuPoTS-3D (test) | -- | 41 |
| 3D Multi-person Pose Estimation | MuPoTS-3D, all people | PCK (Absolute): 44 | 24 |
| Multi-person 3D Human Pose Estimation | CMU Panoptic (test) | MPJPE (Average): 58.9 mm | 22 |
| 3D Multi-person Pose Estimation | MuPoTS-3D, matched people | -- | 22 |
| 3D Human Pose Estimation | CMU Panoptic | Haggling MPJPE: 54.1 mm | 14 |
| 3D Multi-person Pose Estimation | MuPoTS-3D, occlusion | PCK (Rel): 69 | 5 |
