
LatentSync: Taming Audio-Conditioned Latent Diffusion Models for Lip Sync with SyncNet Supervision

About

End-to-end audio-conditioned latent diffusion models (LDMs) have been widely adopted for audio-driven portrait animation, demonstrating their effectiveness in generating lifelike and high-resolution talking videos. However, direct application of audio-conditioned LDMs to lip-synchronization (lip-sync) tasks results in suboptimal lip-sync accuracy. Through an in-depth analysis, we identified the underlying cause as the "shortcut learning problem", wherein the model predominantly learns visual-visual shortcuts while neglecting the critical audio-visual correlations. To address this issue, we explored different approaches for integrating SyncNet supervision into audio-conditioned LDMs to explicitly enforce the learning of audio-visual correlations. Since the performance of SyncNet directly influences the lip-sync accuracy of the supervised model, the training of a well-converged SyncNet becomes crucial. We conducted the first comprehensive empirical studies to identify key factors affecting SyncNet convergence. Based on our analysis, we introduce StableSyncNet, with an architecture designed for stable convergence. Our StableSyncNet achieved a significant improvement in accuracy, increasing from 91% to 94% on the HDTF test set. Additionally, we introduce a novel Temporal Representation Alignment (TREPA) mechanism to enhance temporal consistency in the generated videos. Experimental results show that our method surpasses state-of-the-art lip-sync approaches across various evaluation metrics on the HDTF and VoxCeleb2 datasets.
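The abstract describes two training-time ingredients: a SyncNet-based supervision term that explicitly ties lip motion to audio, and a TREPA term that aligns temporal features of generated and ground-truth frames. The sketch below illustrates the general shape of such losses in plain NumPy; the function names (`sync_loss`, `trepa_loss`), the BCE-on-cosine-similarity formulation, and the assumed pre-extracted embeddings are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cosine_sim(a, v):
    # a, v: (batch, dim) audio / visual embeddings from a SyncNet-style encoder
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    v = v / np.linalg.norm(v, axis=1, keepdims=True)
    return np.sum(a * v, axis=1)

def sync_loss(audio_emb, visual_emb, eps=1e-7):
    # Wav2Lip-style sync supervision (illustrative): push the cosine
    # similarity of matched audio/visual pairs toward 1 via a
    # binary-cross-entropy-like penalty.
    p = (cosine_sim(audio_emb, visual_emb) + 1.0) / 2.0  # map [-1, 1] -> [0, 1]
    return float(np.mean(-np.log(np.clip(p, eps, 1.0))))

def trepa_loss(gen_feats, ref_feats):
    # TREPA-style temporal alignment (illustrative): distance between
    # temporal representations of generated and ground-truth frame
    # sequences; the video feature extractor itself is assumed.
    return float(np.mean((gen_feats - ref_feats) ** 2))
```

In such a setup, the total training objective would be a weighted sum of the usual diffusion loss plus these two terms; perfectly matched pairs drive both losses to zero.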

Chunyu Li, Chao Zhang, Weikai Xu, Jingyu Lin, Jinghui Xie, Weiguo Feng, Bingyue Peng, Cunjian Chen, Weiwei Xing • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Dubbing | ContextDubBench 1.0 (test) | FID 13.602 | 18 |
| Visual Dubbing | HDTF (test) | PSNR 31.325 | 9 |
| Video-to-Video Lip-Syncing | TalkVid Self-Reenactment | FID 46.22 | 9 |
| Visual Dubbing | User Study | Realism 2.91 | 9 |
| Talking Head Reconstruction | HDTF, CelebV-HQ, and CelebV-Text (100 randomly sampled reconstruction videos) | FID 5.3 | 8 |
| Cross-Audio Talking Head Generation | HDTF, CelebV-HQ, and CelebV-Text (100 cross-audio pairs) | FID 7.69 | 8 |
| Lip-Audio Synchronization | HDTF, CelebV-HQ, and CelebV-Text | FPS 5.7 | 8 |
| Lip-Syncing | HDTF | FID 8.78 | 7 |
| Lip-Syncing | VFHQ | FID 9.56 | 7 |
| Visual Dubbing | 3-second video, 25 fps, 512x512 resolution | Inference Time (s) 30 | 4 |
(Showing 10 of 11 benchmark rows.)
