StableAnimator: High-Quality Identity-Preserving Human Image Animation
About
Current diffusion models for human image animation struggle to ensure identity (ID) consistency. This paper presents StableAnimator, the first end-to-end ID-preserving video diffusion framework, which synthesizes high-quality videos without any post-processing, conditioned on a reference image and a sequence of poses. Building on a video diffusion model, StableAnimator contains carefully designed modules for both training and inference that strive for identity consistency. In particular, StableAnimator begins by computing image and face embeddings with off-the-shelf extractors; the face embeddings are then refined through interaction with the image embeddings via a global content-aware Face Encoder. StableAnimator then introduces a novel distribution-aware ID Adapter that prevents the interference caused by temporal layers while preserving ID via alignment. During inference, we propose a novel Hamilton-Jacobi-Bellman (HJB) equation-based optimization to further enhance face quality. We demonstrate that solving the HJB equation can be integrated into the diffusion denoising process; the resulting solution constrains the denoising path and thus benefits ID preservation. Experiments on multiple benchmarks demonstrate the effectiveness of StableAnimator both qualitatively and quantitatively.
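The global content-aware refinement described above (face embeddings attending to image embeddings) can be sketched as a single cross-attention layer. This is a minimal NumPy illustration, not the paper's implementation: the projection matrices here are random stand-ins for learned weights, and `refine_face_embeddings` is a hypothetical name.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def refine_face_embeddings(face_emb, image_emb, d_k=64, seed=0):
    """Cross-attention: face embeddings query global image context.

    face_emb:  (n_face, d) from an off-the-shelf face extractor
    image_emb: (n_img, d)  from an off-the-shelf image encoder
    The weights below are random placeholders; in the real model
    they would be learned.
    """
    rng = np.random.default_rng(seed)
    d = face_emb.shape[1]
    Wq = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wk = rng.standard_normal((d, d_k)) / np.sqrt(d)
    Wv = rng.standard_normal((d, d)) / np.sqrt(d)

    Q = face_emb @ Wq                # (n_face, d_k)
    K = image_emb @ Wk               # (n_img,  d_k)
    V = image_emb @ Wv               # (n_img,  d)
    attn = softmax(Q @ K.T / np.sqrt(d_k))   # (n_face, n_img)
    # Residual connection keeps the original identity signal intact.
    return face_emb + attn @ V
```

The residual form means the refined embedding never discards the raw face identity; attention only injects global image context on top of it.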
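The HJB-based inference optimization constrains the denoising path toward better ID preservation. One way to picture this is a per-step correction of the predicted clean sample along the gradient of a face-distance cost. The sketch below is an illustrative stand-in under strong assumptions: `face_proj` is a hypothetical linear face-embedding map and the quadratic cost is a placeholder, not the paper's actual extractor or objective.

```python
import numpy as np

def id_guided_denoise_step(x_t, eps_pred, ref_face, face_proj,
                           alpha_bar_t, guide_lr=0.01):
    """Estimate the clean sample, then nudge it toward the reference ID.

    x_t:         current noisy sample
    eps_pred:    the network's noise prediction at this step
    ref_face:    reference face embedding to preserve
    face_proj:   placeholder linear map into face-embedding space
    alpha_bar_t: cumulative noise schedule value at step t
    """
    # Standard estimate of the clean sample from the noise prediction.
    x0 = (x_t - np.sqrt(1.0 - alpha_bar_t) * eps_pred) / np.sqrt(alpha_bar_t)

    # Gradient of the cost 0.5 * ||face_proj @ x0 - ref_face||^2 w.r.t. x0.
    grad = face_proj.T @ (face_proj @ x0 - ref_face)

    # Control-style correction: step downhill on the face-distance cost
    # before the sampler continues to the next timestep.
    return x0 - guide_lr * grad
```

A small `guide_lr` keeps the correction gentle, so the denoising trajectory stays close to the model's own prediction while drifting toward the reference identity.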
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Human Image Animation | TikTok | FVD | 140.6 | 15 |
| Character Image Animation | Follow-Your-Pose V2 | LPIPS | 0.214 | 15 |
| Human Image Animation | Unseen100 | L1 Loss | 2.71e+4 | 9 |
| Character Animation | User Study (20 identities, 20 driving videos; test) | Video Quality | 0.38 | 9 |
| Character Image Animation | CoDanceBench (test) | LPIPS | 0.604 | 9 |
| Character Animation | DualDynamics | FVD | 262.5 | 8 |
| Human Image Animation | User Study (30 selected videos) | M-A | 95.6 | 7 |
| Human Video Generation | Our General scenarios (test) | FVD | 1.33e+3 | 5 |