
Follow-Your-Emoji: Fine-Controllable and Expressive Freestyle Portrait Animation

About

We present Follow-Your-Emoji, a diffusion-based framework for portrait animation that animates a reference portrait with target landmark sequences. The main challenge of portrait animation is to preserve the identity of the reference portrait and transfer the target expression to it while maintaining temporal consistency and fidelity. To address these challenges, Follow-Your-Emoji equips the powerful Stable Diffusion model with two well-designed technologies. First, we adopt a new explicit motion signal, the expression-aware landmark, to guide the animation process. We find that this landmark not only ensures accurate motion alignment between the reference portrait and the target motion during inference, but also increases the ability to portray exaggerated expressions (e.g., large pupil movements) and avoids identity leakage. Second, we propose a facial fine-grained loss that improves the model's ability to perceive subtle expressions and reconstruct the reference portrait's appearance by using both expression and facial masks. As a result, our method demonstrates strong performance in controlling the expressions of freestyle portraits, including real humans, cartoons, sculptures, and even animals. By leveraging a simple and effective progressive generation strategy, we extend our model to stable long-term animation, increasing its potential application value. To address the lack of a benchmark in this field, we introduce EmojiBench, a comprehensive benchmark comprising diverse portrait images, driving videos, and landmarks. Extensive evaluations on EmojiBench verify the superiority of Follow-Your-Emoji.
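The facial fine-grained loss described in the abstract can be pictured as a mask-weighted addition to the standard diffusion noise-prediction objective. The sketch below is an illustration only, not the paper's implementation: the function name, the mask inputs (a facial-region mask and an expression-keypoint mask), and the weighting hyperparameters `lambda_face` and `lambda_expr` are all assumptions made for clarity.

```python
import torch
import torch.nn.functional as F

def fine_grained_loss(pred_noise, target_noise, face_mask, expr_mask,
                      lambda_face=1.0, lambda_expr=2.0):
    """Hypothetical mask-weighted diffusion loss.

    On top of the usual noise-prediction MSE, facial and expression
    regions (given as binary masks broadcastable to the noise tensors)
    receive extra weight, so subtle expression details and portrait
    appearance contribute more strongly to the gradient.
    """
    base = F.mse_loss(pred_noise, target_noise)                      # global term
    face = F.mse_loss(pred_noise * face_mask, target_noise * face_mask)  # face region
    expr = F.mse_loss(pred_noise * expr_mask, target_noise * expr_mask)  # expression region
    return base + lambda_face * face + lambda_expr * expr

# Example: latent-sized tensors with full-image masks.
pred = torch.zeros(1, 4, 8, 8)
target = torch.ones(1, 4, 8, 8)
mask = torch.ones(1, 1, 8, 8)
loss = fine_grained_loss(pred, target, mask, mask)  # 1 + 1*1 + 2*1 = 4
```

With all-zero masks the extra terms vanish and the loss reduces to the plain MSE, so the masked terms act purely as region-specific upweighting.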

Yue Ma, Hongyu Liu, Hongfa Wang, Heng Pan, Yingqing He, Junkun Yuan, Ailing Zeng, Chengfei Cai, Heung-Yeung Shum, Wei Liu, Qifeng Chen • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Talking head video generation | TalkingHead-1KH | FID 19.37 | 8 |
| Talking head video generation | HDTF | FID 16.09 | 8 |
| Cross-Reenactment | TalkingHead-1KH and LV100 (test) | ID-SIM 0.773 | 7 |
| Self-Reenactment | TalkingHead-1KH and LV100 (test) | L1 Loss 0.045 | 7 |
| Self-Reenactment | RAVDESS | PSNR 25.6872 | 6 |
| Talking head synthesis | VFHQ (first 100 frames) | FID 32.4 | 6 |
| Talking head synthesis | Self-Collected Dataset (50 identities) | FID 34.94 | 6 |
| Cross-identity reenactment | HDTF | FVD 154.1 | 6 |
| Cross-Reenactment | NeRSemble | AED 0.2207 | 6 |
| Talking head synthesis | HDTF | PSNR 24.16 | 5 |

Showing 10 of 13 rows.
