Lifting Motion to the 3D World via 2D Diffusion
About
Estimating 3D motion from 2D observations is a long-standing research challenge. Prior work typically requires training on datasets containing ground-truth 3D motions, limiting its applicability to activities well-represented in existing motion capture data. This dependency particularly hinders generalization to out-of-distribution scenarios or subjects for which collecting 3D ground truth is challenging, such as complex athletic movements or animal motion. We introduce MVLift, a novel approach that predicts global 3D motion -- including both joint rotations and root trajectories in the world coordinate system -- using only 2D pose sequences for training. Our multi-stage framework leverages 2D motion diffusion models to progressively generate consistent 2D pose sequences across multiple views, a key step in recovering accurate global 3D motion. MVLift generalizes across diverse domains, including human poses, human-object interactions, and animal poses. Despite not requiring 3D supervision, it outperforms prior work on five datasets, including methods that do require 3D supervision.
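To illustrate why multi-view consistency matters, here is a minimal sketch (not MVLift's actual pipeline) of how a 3D point can be recovered from consistent 2D observations in multiple calibrated views via standard linear (DLT) triangulation. All camera matrices and point values below are synthetic examples:

```python
import numpy as np

def triangulate(proj_mats, points_2d):
    """Linear (DLT) triangulation of one 3D point from multiple views.

    proj_mats: list of 3x4 camera projection matrices
    points_2d: list of (x, y) observations, one per view
    """
    rows = []
    for P, (x, y) in zip(proj_mats, points_2d):
        # Each view contributes two linear constraints on the
        # homogeneous 3D point X: x * (P[2] @ X) = P[0] @ X, etc.
        rows.append(x * P[2] - P[0])
        rows.append(y * P[2] - P[1])
    A = np.stack(rows)
    # The solution is the right singular vector with the smallest
    # singular value (the approximate null space of A).
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]  # dehomogenize

# Two synthetic cameras observing the 3D point (1, 2, 5).
X_true = np.array([1.0, 2.0, 5.0, 1.0])
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                  # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])  # shifted camera
pts = []
for P in (P1, P2):
    p = P @ X_true
    pts.append((p[0] / p[2], p[1] / p[2]))  # perspective projection

X_hat = triangulate([P1, P2], pts)
print(np.round(X_hat, 3))  # recovers approximately [1. 2. 5.]
```

Applied per joint and per frame, this recovers a 3D pose sequence from multi-view 2D poses; the hard part, which MVLift addresses with 2D diffusion models, is obtaining 2D sequences that are actually consistent across views when only one view is observed.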
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Human Pose Lifting | NicoleMove | J2D Error | 26.2 | 6 |
| Human Pose Lifting | AIST++ | MPJPE | 110.7 | 6 |
| Human Pose Lifting | Steezy | J2D Accuracy | 11.7 | 6 |
| Animal Pose Lifting | CatPlay | J2D | 57 | 3 |
| Human-Object Interaction Lifting | OMOMO | Root Joint Error (T_root) | 54.9 | 2 |