HumanRAM: Feed-forward Human Reconstruction and Animation Model using Transformers
About
3D human reconstruction and animation are long-standing topics in computer graphics and vision. However, existing methods typically rely on sophisticated dense-view capture and/or time-consuming per-subject optimization procedures. To address these limitations, we propose HumanRAM, a novel feed-forward approach for generalizable human reconstruction and animation from monocular or sparse human images. Our approach integrates human reconstruction and animation into a unified framework by introducing explicit pose conditions, parameterized by a shared SMPL-X neural texture, into transformer-based large reconstruction models (LRM). Given monocular or sparse input images with associated camera parameters and SMPL-X poses, our model employs scalable transformers and a DPT-based decoder to synthesize realistic human renderings under novel viewpoints and novel poses. By leveraging the explicit pose conditions, our model simultaneously enables high-quality human reconstruction and high-fidelity pose-controlled animation. Experiments show that HumanRAM significantly surpasses previous methods in terms of reconstruction accuracy, animation fidelity, and generalization performance on real-world datasets. Video results are available at https://zju3dv.github.io/humanram/.
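The pipeline described above lends itself to a compact sketch: source images and a target-pose condition (rendered from the shared SMPL-X neural texture) are patch-embedded into one token sequence, processed by a transformer backbone, and decoded into a target rendering. The PyTorch snippet below illustrates this data flow only; every module name, shape, and the simplified upsampling head standing in for the DPT-based decoder are assumptions for illustration, not the authors' implementation (camera-parameter conditioning, in particular, is omitted for brevity).

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the feed-forward interface described above.
# All names and shapes are assumptions, not the HumanRAM code.

class FeedForwardHumanModel(nn.Module):
    def __init__(self, img_size=256, patch=16, dim=512, depth=6, heads=8):
        super().__init__()
        n_tokens = (img_size // patch) ** 2
        # Source image(s) and the SMPL-X neural-texture pose condition are
        # patch-embedded into a shared token sequence (3-channel inputs assumed).
        self.embed_src = nn.Conv2d(3, dim, patch, patch)
        self.embed_pose = nn.Conv2d(3, dim, patch, patch)
        self.pos = nn.Parameter(torch.zeros(1, 2 * n_tokens, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, depth)
        # Simple upsampling head standing in for the DPT-based decoder.
        self.decode = nn.Sequential(
            nn.ConvTranspose2d(dim, dim // 2, 4, 4),
            nn.GELU(),
            nn.ConvTranspose2d(dim // 2, 3, 4, 4),
        )
        self.grid = img_size // patch

    def forward(self, src_imgs, pose_cond):
        # src_imgs:  (B, 3, H, W) source view(s); camera conditioning omitted here
        # pose_cond: (B, 3, H, W) target pose rendered from the shared SMPL-X neural texture
        s = self.embed_src(src_imgs).flatten(2).transpose(1, 2)
        p = self.embed_pose(pose_cond).flatten(2).transpose(1, 2)
        tokens = self.backbone(torch.cat([s, p], dim=1) + self.pos)
        # Decode only the pose-condition tokens into the target rendering.
        tgt = tokens[:, s.shape[1]:].transpose(1, 2)
        tgt = tgt.reshape(-1, tgt.shape[1], self.grid, self.grid)
        return self.decode(tgt)  # (B, 3, H, W) novel-view / novel-pose rendering

model = FeedForwardHumanModel()
out = model(torch.randn(1, 3, 256, 256), torch.randn(1, 3, 256, 256))
print(out.shape)  # torch.Size([1, 3, 256, 256])
```

Because the pose condition enters as explicit tokens rather than a learned latent, the same forward pass serves both reconstruction (target pose = input pose) and animation (target pose = novel pose), which is the unification the abstract describes.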
Related benchmarks
| Task | Dataset | Result (PSNR) | Rank |
|---|---|---|---|
| 3D human reconstruction | THuman 2.1 (test) | 33.16 | 16 |
| Reconstruction | THuman 4.0 | 28.98 | 4 |
| Reconstruction | AvatarReX (test) | 27.76 | 4 |
| Human animation | THuman 2.1 (AvatarReX animation setting) | 24.58 | 3 |
| Reconstruction + animation | THuman 4.0 | 25.1 | 2 |