
DiffSpeaker: Speech-Driven 3D Facial Animation with Diffusion Transformer

About

Speech-driven 3D facial animation is important for many multimedia applications. Recent work has shown promise in using either Diffusion models or Transformer architectures for this task. However, merely aggregating the two does not lead to improved performance. We suspect this is due to a shortage of paired audio-4D data, which is crucial for the Transformer to perform effectively as a denoiser within the Diffusion framework. To tackle this issue, we present DiffSpeaker, a Transformer-based network equipped with novel biased conditional attention modules. These modules replace the traditional self/cross-attention in standard Transformers, incorporating carefully designed biases that steer the attention mechanisms to concentrate on both the relevant task-specific and diffusion-related conditions. We also explore the trade-off between accurate lip synchronization and non-verbal facial expressions within the Diffusion paradigm. Experiments show that our model not only achieves state-of-the-art performance on existing benchmarks, but also attains fast inference speed owing to its ability to generate facial motions in parallel.
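The core idea of biasing attention toward condition tokens can be illustrated with a minimal sketch. The snippet below shows generic scaled dot-product attention with an additive bias on the attention logits; the specific bias design in DiffSpeaker (and the names `biased_attention`, `bias`) are illustrative stand-ins, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def biased_attention(q, k, v, bias):
    # Scaled dot-product attention with an additive bias on the logits.
    # A positive bias toward certain key positions (e.g. condition tokens
    # carrying the diffusion step or speaking style) raises their attention
    # weight before the softmax; this is a generic stand-in for the paper's
    # biased conditional attention modules.
    d = q.shape[-1]
    logits = (q @ k.T) / np.sqrt(d) + bias
    return softmax(logits, axis=-1) @ v

# Toy example: 4 query frames attending over 6 keys, where the first 2 keys
# play the role of condition tokens (hypothetical setup for illustration).
rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((6, 8))
v = rng.standard_normal((6, 8))
bias = np.zeros((4, 6))
bias[:, :2] = 2.0  # favor the two condition tokens
out = biased_attention(q, k, v, bias)
print(out.shape)  # (4, 8)
```

Because the bias is added before the softmax, it reshapes where each query attends without altering the attention mechanism itself, which is why such modules can drop in as replacements for standard self/cross-attention.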

Zhiyuan Ma, Xiangyu Zhu, Guojun Qi, Chen Qian, Zhaoxiang Zhang, Zhen Lei • 2024

Related benchmarks

Task                         Dataset          Metric            Result   Rank
3D Talking Face Generation   BIWI A (test)    LVE               4.2829   16
3D talking head animation    VOCASET (test)   LVE (x10^-5 mm)   3.1478   10
3D talking head generation   VOCASET          LVE               3.2879   7
