
DuoMo: Dual Motion Diffusion for World-Space Human Reconstruction

About

We present DuoMo, a generative method that recovers human motion in world-space coordinates from unconstrained videos with noisy or incomplete observations. Reconstructing such motion requires solving a fundamental trade-off: generalizing from diverse and noisy video inputs while maintaining global motion consistency. Our approach addresses this problem by factorizing motion learning into two diffusion models. The camera-space model first estimates motion from videos in camera coordinates. The world-space model then lifts this initial estimate into world coordinates and refines it to be globally consistent. Together, the two models can reconstruct motion across diverse scenes and trajectories, even from highly noisy or incomplete observations. Moreover, our formulation is general, generating the motion of mesh vertices directly and bypassing parametric models. DuoMo achieves state-of-the-art performance. On EMDB, our method obtains a 16% reduction in world-space reconstruction error while maintaining low foot skating. On RICH, it obtains a 30% reduction in world-space error. Project page: https://yufu-wang.github.io/duomo/
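To make the two-stage factorization concrete, here is a toy NumPy sketch of the sampling pipeline: a camera-space stage that iteratively denoises motion conditioned on video features, followed by a world-space stage that lifts and refines that estimate. All function names, shapes, and update rules below are illustrative stand-ins, not the paper's actual models or training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical shapes: T frames, J joints, 3D coordinates.
T, J = 16, 24

def camera_space_model(video_feats, steps=10):
    """Stage 1 (sketch): iteratively 'denoise' motion in camera coordinates.
    Stands in for the learned camera-space diffusion model conditioned on
    per-frame video features."""
    x = rng.normal(size=(T, J, 3))        # start from Gaussian noise
    for _ in range(steps):
        # Toy update: pull the sample toward the conditioning signal.
        x = 0.8 * x + 0.2 * video_feats
    return x

def world_space_model(cam_motion, steps=10):
    """Stage 2 (sketch): lift the camera-space estimate to world coordinates
    and refine it toward global consistency."""
    x = cam_motion + rng.normal(scale=0.1, size=cam_motion.shape)
    for _ in range(steps):
        # Toy refinement: temporal smoothing as a stand-in for enforcing a
        # globally consistent trajectory.
        x[1:-1] = 0.5 * x[1:-1] + 0.25 * (x[:-2] + x[2:])
    return x

video_feats = rng.normal(size=(T, J, 3))  # stand-in for video features
cam_motion = camera_space_model(video_feats)
world_motion = world_space_model(cam_motion)
print(cam_motion.shape, world_motion.shape)
```

The point of the factorization is visible even in this toy: stage 1 only has to match the video in camera coordinates, while stage 2 only has to produce a globally coherent world trajectory given a reasonable initial estimate.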

Yufu Wang, Evonne Ng, Soyong Shin, Rawal Khirodkar, Yuan Dong, Zhaoen Su, Jinhyung Park, Kris Kitani, Alexander Richard, Fabian Prada, Michael Zollhöfer • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Camera-space reconstruction | EMDB 24 (test) | PA-MPJPE | 41.7 | 11 |
| World-space reconstruction | EMDB 24 (test) | WA-MPJPE | 66 | 9 |
| Camera-space reconstruction | RICH 24 (test) | PA-MPJPE | 34.8 | 9 |
| World-space reconstruction | RICH 24 (test) | WA-MPJPE | 53.5 | 8 |
| World-space human motion reconstruction | Egobody (Visible segment) | W-MPJPE | 90.4 | 5 |
| World-space human motion reconstruction | Egobody (Full segment) | W-MPJPE | 101.3 | 5 |
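The benchmark table reports per-joint position errors: PA-MPJPE is Procrustes-aligned (rigid alignment plus scale before measuring), while WA-MPJPE and W-MPJPE measure error in world coordinates with and without trajectory alignment. As a sketch of what the aligned metric computes (function names are my own, not from the paper), here is MPJPE and its Procrustes-aligned variant in NumPy:

```python
import numpy as np

def mpjpe(pred, gt):
    """Mean per-joint position error between (J, 3) joint arrays
    (millimeters if the inputs are in millimeters)."""
    return np.linalg.norm(pred - gt, axis=-1).mean()

def pa_mpjpe(pred, gt):
    """Procrustes-aligned MPJPE: find the similarity transform
    (scale, rotation, translation) that best maps pred onto gt,
    then measure MPJPE on the aligned prediction."""
    mu_p, mu_g = pred.mean(0), gt.mean(0)
    p, g = pred - mu_p, gt - mu_g
    # Optimal rotation via SVD of the cross-covariance (Kabsch/Umeyama).
    U, S, Vt = np.linalg.svd(p.T @ g)
    R = (U @ Vt).T
    if np.linalg.det(R) < 0:              # avoid reflections
        Vt[-1] *= -1
        S = S.copy()
        S[-1] *= -1
        R = (U @ Vt).T
    scale = S.sum() / (p ** 2).sum()
    aligned = scale * p @ R.T + mu_g
    return mpjpe(aligned, gt)
```

Because PA-MPJPE factors out global pose and scale, it isolates local body-shape accuracy; the world-space metrics are the ones sensitive to the global trajectory that DuoMo's second stage refines.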
