
HumanVid: Demystifying Training Data for Camera-controllable Human Image Animation

About

Human image animation generates videos from a character photo under user control, unlocking potential for video and movie production. While recent approaches yield impressive results using high-quality training data, the inaccessibility of these datasets hampers fair and transparent benchmarking. Moreover, these approaches prioritize 2D human motion and overlook the significance of camera motions in videos, leading to limited control and unstable video generation. To demystify the training data, we present HumanVid, the first large-scale high-quality dataset tailored for human image animation, which combines crafted real-world and synthetic data. For the real-world data, we compile a vast collection of videos from the internet and apply carefully designed filtering rules to ensure quality, resulting in a curated collection of 20K high-resolution (1080p) human-centric videos. Human and camera motions are annotated with a 2D pose estimator and a SLAM-based method, respectively. For the synthetic data, we collect 10K 3D avatar assets and leverage existing assets of body shapes, skin textures, and clothing. Notably, we introduce a rule-based camera trajectory generation method, enabling the synthetic pipeline to incorporate diverse and precise camera motion annotations, which are rarely found in real-world data. To verify the effectiveness of HumanVid, we establish a baseline model named CamAnimate, short for Camera-controllable Human Animation, which takes both human and camera motions as conditions. Through extensive experiments, we demonstrate that this simple baseline, trained on HumanVid, achieves state-of-the-art performance in controlling both human pose and camera motions, setting a new benchmark. The demo, data, and code can be found on the project website: https://humanvid.github.io/.
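The abstract mentions a rule-based camera trajectory generation method for the synthetic pipeline but does not detail it. As an illustration only, a minimal sketch of such a rule-based generator might sample a motion primitive (static, pan, tilt, or zoom) and interpolate it smoothly across a clip; the primitive names, parameter ranges, and trajectory parameterization below are invented for illustration and are not taken from the paper:

```python
import numpy as np

def sample_camera_trajectory(num_frames=64, seed=0):
    """Illustrative rule-based camera trajectory sketch (hypothetical):
    choose a random motion primitive and ease it in over the clip.
    Returns the primitive name and a (num_frames, 4) array whose
    columns are camera translation (tx, ty, tz) and a focal scale."""
    rng = np.random.default_rng(seed)
    primitive = rng.choice(["static", "pan", "tilt", "zoom"])
    t = np.linspace(0.0, 1.0, num_frames)
    ease = 3 * t**2 - 2 * t**3              # smooth-step easing in [0, 1]
    traj = np.zeros((num_frames, 4))
    traj[:, 3] = 1.0                        # neutral focal scale
    magnitude = rng.uniform(0.1, 0.5)       # illustrative motion range
    if primitive == "pan":
        traj[:, 0] = magnitude * ease       # translate sideways
    elif primitive == "tilt":
        traj[:, 1] = magnitude * ease       # translate vertically
    elif primitive == "zoom":
        traj[:, 3] = 1.0 + magnitude * ease # change focal length
    return primitive, traj

primitive, traj = sample_camera_trajectory()
print(primitive, traj.shape)
```

Because each frame's camera pose is generated by a known rule rather than estimated, the synthetic annotations are exact, which is the advantage the abstract attributes to the rule-based pipeline over SLAM-derived annotations of real footage.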

Zhenzhi Wang, Yixuan Li, Yanhong Zeng, Youqing Fang, Yuwei Guo, Wenran Liu, Jing Tan, Kai Chen, Tianfan Xue, Bo Dai, Dahua Lin • 2024

Related benchmarks

Task                      Dataset                            Result        Rank
Fashion video synthesis   UBC fashion video dataset (test)   SSIM 0.929    11
Video Generation          TikTok (test)                      SSIM 0.778    11
Character Animation       DualDynamics                       FVD 174.6     8
2D Character Animation    TED-talks dataset                  FVD 138.9     6
Human Video Generation    HumanVid Landscape                 SSIM 0.672    5
Human Video Generation    HumanVid Portrait                  SSIM 67.8     5
Human Video Generation    HumanVid                           SSIM 67.2     5
Human Video Generation    Our General scenarios (test)       FVD 1.37e+3   5
Human Image Animation     TikTok (test)                      --            5
