
AniTalker: Animate Vivid and Diverse Talking Faces through Identity-Decoupled Facial Motion Encoding

About

The paper introduces AniTalker, a framework for generating lifelike talking faces from a single portrait. Whereas existing models focus primarily on verbal cues such as lip synchronization and fail to capture the complex dynamics of facial expressions and nonverbal cues, AniTalker employs a universal motion representation that captures a wide range of facial dynamics, including subtle expressions and head movements. AniTalker strengthens this representation through two self-supervised learning strategies: the first reconstructs target video frames from source frames of the same identity to learn fine-grained motion; the second trains an identity encoder with metric learning while actively minimizing the mutual information between the identity and motion encoders. This ensures the motion representation stays dynamic and free of identity-specific details, substantially reducing the need for labeled data. In addition, a diffusion model combined with a variance adapter enables the generation of diverse and controllable facial animations. Experiments demonstrate AniTalker's ability to produce detailed, realistic facial movements and underscore its potential for crafting dynamic avatars in real-world applications. Synthetic results can be viewed at https://github.com/X-LANCE/AniTalker.
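The decoupling idea above — reconstruct frames within one identity while pushing the motion and identity embeddings apart — can be sketched as a toy training objective. This is a minimal illustration, not the paper's implementation: the `decoupling_penalty` below uses a squared-cosine-similarity proxy in place of the paper's actual mutual-information estimator, and all tensor shapes, weights, and helper names are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1, eps=1e-8):
    # Scale each embedding to unit length so cosine similarity is a dot product.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def reconstruction_loss(pred_frame, target_frame):
    # Pixel-level L1 between the reconstructed target frame and ground truth
    # (a common choice for self-supervised frame reconstruction).
    return float(np.abs(pred_frame - target_frame).mean())

def decoupling_penalty(motion_emb, identity_emb):
    # Hypothetical stand-in for mutual-information minimization: penalize the
    # mean squared cosine similarity between motion and identity embeddings,
    # driving the two representations toward orthogonality.
    m = l2_normalize(motion_emb)
    i = l2_normalize(identity_emb)
    return float(((m * i).sum(axis=-1) ** 2).mean())

# Toy tensors standing in for a rendered frame and encoder outputs.
pred = rng.random((64, 64, 3))          # reconstructed target frame
target = rng.random((64, 64, 3))        # ground-truth target frame
motion = rng.normal(size=(8, 128))      # motion embeddings for 8 frames
identity = rng.normal(size=(8, 128))    # identity embedding, repeated per frame

lambda_mi = 0.1  # hypothetical weight on the decoupling term
total = reconstruction_loss(pred, target) + lambda_mi * decoupling_penalty(motion, identity)
print(f"total loss: {total:.4f}")
```

Minimizing the second term alone does not guarantee statistical independence (zero cosine similarity is weaker than zero mutual information), which is why the paper relies on a proper mutual-information objective; the sketch only conveys the shape of the combined loss.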

Tao Liu, Feilong Chen, Shuai Fan, Chenpeng Du, Qi Chen, Xie Chen, Kai Yu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Talking Head Generation | HDTF | FID | 34.644 | 33 |
| Talking Face Generation | HDTF (test) | SSIM | 0.69 | 16 |
| Audio Driven Talking Head Generation | HDTF 51 (test) | SSIM | 0.62 | 9 |
| Audio-driven portrait animation | Hallo3 (test) | LSE-C | 5.873 | 8 |
| Audio-driven portrait animation | TH-1KH (test) | LSE-C | 5.419 | 8 |
| Talking Face Generation | CREMA-D (test) | SSIM | 0.726 | 8 |
| Audio-visual Synchronization | HDTF cross-driven | Sync-C (Cross-Gender) | 6.719 | 8 |
| Speech-driven talking face generation | VoxCeleb (test) | LSE-D | 14.76 | 8 |
| Talking head synthesis | Curated 5-Identity Audio-Visual Dataset (Macron, Paul, Obama, May, Stabenow) (test) | PSNR | 17.115 | 8 |
| Speech-driven talking face generation | Wild Dataset | LSE-D | 19.5 | 8 |

Showing 10 of 15 benchmark rows.
