
Lifting Motion to the 3D World via 2D Diffusion

About

Estimating 3D motion from 2D observations is a long-standing research challenge. Prior methods typically require training on datasets containing ground-truth 3D motions, limiting their applicability to activities well represented in existing motion capture data. This dependency particularly hinders generalization to out-of-distribution scenarios or subjects where collecting 3D ground truth is challenging, such as complex athletic movements or animal motion. We introduce MVLift, a novel approach to predict global 3D motion -- including both joint rotations and root trajectories in the world coordinate system -- using only 2D pose sequences for training. Our multi-stage framework leverages 2D motion diffusion models to progressively generate consistent 2D pose sequences across multiple views, a key step in recovering accurate global 3D motion. MVLift generalizes across various domains, including human poses, human-object interactions, and animal poses. Despite not requiring 3D supervision, it outperforms prior work on five datasets, including methods that do require 3D supervision.
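The key step the abstract describes is recovering 3D structure from 2D poses that are consistent across multiple views. Once per-view 2D joint positions and camera projection matrices are available, a 3D joint can be recovered by standard linear (DLT) triangulation. The sketch below illustrates that geometric step only; it is not MVLift's implementation, and the camera setup in the usage example is hypothetical.

```python
import numpy as np

def triangulate_point(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D joint from N views.

    proj_mats: list of 3x4 camera projection matrices.
    points_2d: list of (x, y) observations of the same joint, one per view.
    Returns the 3D point in world coordinates.
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the homogeneous point.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest singular value.
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # de-homogenize

# Illustrative two-camera setup (assumed, not from the paper):
# camera 1 at the origin, camera 2 shifted one unit along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return (x[0] / x[2], x[1] / x[2])

X_true = np.array([0.2, -0.1, 5.0])
obs = [project(P1, X_true), project(P2, X_true)]
X_hat = triangulate_point([P1, P2], obs)
```

In MVLift the multi-view 2D sequences come from the diffusion model rather than real cameras, which is why cross-view consistency of the generated poses is critical: inconsistent 2D observations would make the linear system above have no clean solution.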

Jiaman Li, C. Karen Liu, Jiajun Wu • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Human Pose Lifting | NicoleMove | J2D Error | 26.2 | 6
Human Pose Lifting | AIST++ | MPJPE | 110.7 | 6
Human Pose Lifting | Steezy | J2D Accuracy | 11.7 | 6
Animal Pose Lifting | CatPlay | J2D | 57 | 3
Human-Object Interaction Lifting | OMOMO | Root Joint Error (T_root) | 54.9 | 2
