
FaceFormer: Speech-Driven 3D Facial Animation with Transformers

About

Speech-driven 3D facial animation is challenging due to the complex geometry of human faces and the limited availability of 3D audio-visual data. Prior works typically focus on learning phoneme-level features of short audio windows with limited context, occasionally resulting in inaccurate lip movements. To tackle this limitation, we propose a Transformer-based autoregressive model, FaceFormer, which encodes the long-term audio context and autoregressively predicts a sequence of animated 3D face meshes. To cope with the data scarcity issue, we integrate self-supervised pre-trained speech representations. We also devise two biased attention mechanisms well suited to this task: a biased cross-modal multi-head (MH) attention and a biased causal MH self-attention with a periodic positional encoding strategy. The former effectively aligns the audio and motion modalities, whereas the latter enables generalization to longer audio sequences. Extensive experiments and a perceptual user study show that our approach outperforms existing state-of-the-art methods. The code will be made available.
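The two ingredients named above for length generalization can be illustrated with a minimal numpy sketch. This is an assumption-laden simplification of the paper's description, not the released implementation: the period value of 25 is hypothetical, and the bias function is a simplified ALiBi-style step bias rather than the paper's exact formulation.

```python
import numpy as np

def periodic_positional_encoding(seq_len, d_model, period=25):
    """Sinusoidal positional encoding whose position index wraps every
    `period` frames, so encodings repeat and the model can handle
    sequences longer than those seen in training."""
    pos = np.arange(seq_len) % period              # periodic position index
    i = np.arange(d_model // 2)
    angles = pos[:, None] / np.power(10000.0, 2 * i[None, :] / d_model)
    ppe = np.zeros((seq_len, d_model))
    ppe[:, 0::2] = np.sin(angles)                  # even dims: sine
    ppe[:, 1::2] = np.cos(angles)                  # odd dims: cosine
    return ppe

def biased_causal_mask(seq_len, period=25):
    """Additive bias for causal self-attention logits: each full period
    into the past costs one extra unit of negative bias (a simplified,
    ALiBi-style stand-in for the paper's biased causal MH attention);
    future positions are masked with -inf."""
    i = np.arange(seq_len)[:, None]                # query index
    j = np.arange(seq_len)[None, :]                # key index
    bias = -np.floor((i - j) / period)             # stepped distance penalty
    bias[j > i] = -np.inf                          # causal mask
    return bias
```

Adding `biased_causal_mask(T)` to the pre-softmax attention scores penalizes distant frames in discrete steps, while the wrapped positional index keeps encodings in-distribution at test-time lengths.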

Yingruo Fan, Zhaojiang Lin, Jun Saito, Wenping Wang, Taku Komura • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| 3D talking head generation | DualTalk (test) | FD (Expression) | 34.9 | 34 |
| Talking Face Generation | LRW (test) | SSIM | 0.856 | 28 |
| Co-speech 3D Gesture Synthesis | BEAT2 (test) | -- | -- | 27 |
| 3D talking head generation | DualTalk OOD set | FD (EXP) | 35.92 | 26 |
| Talking Face Generation | LRS2 (test) | SSIM | 0.84 | 18 |
| 3D Talking Face Generation | BIWI A (test) | LVE | 5.3077 | 16 |
| Speech-driven gesture generation | BEAT-X | -- | -- | 11 |
| 3D facial animation generation | BIWI (test) | Mean Vertex Error | 5.95 | 10 |
| 3D talking head animation | VOCASET (test) | LVE (x10^-5 mm) | 4.109 | 10 |
| Speech-Driven Facial Animation | BIWI B (test) | Lip Sync | 34.4 | 10 |

Showing 10 of 29 rows

Other info

Code
