
LPIPS-AttnWav2Lip: Generic Audio-Driven Lip Synchronization for Talking Head Generation in the Wild

About

Researchers have shown growing interest in audio-driven talking head generation. The primary challenge in talking head generation is achieving audio-visual coherence between the lips and the audio, known as lip synchronization. This paper proposes a generic method, LPIPS-AttnWav2Lip, for reconstructing face images of any speaker from audio. We use a U-Net architecture with residual CBAM blocks to better encode and fuse audio and visual modal information. Additionally, a semantic alignment module extends the receptive field of the generator network to capture the spatial and channel information of the visual features efficiently, and matches the statistics of the visual features with the audio latent vector, thereby adjusting and injecting audio content information into the visual information. To achieve exact lip synchronization and generate realistic, high-quality images, our approach adopts an LPIPS loss, which simulates human judgment of image quality and reduces the likelihood of instability during training. The proposed method achieves outstanding lip synchronization accuracy and visual quality, as demonstrated by subjective and objective evaluation results. The code for the paper is available at: https://github.com/FelixChan9527/LPIPS-AttnWav2Lip
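The LPIPS loss mentioned above is the learned perceptual metric of Zhang et al.: deep features of the two images are unit-normalized along the channel axis, differenced, weighted per channel, and averaged spatially. In practice one would use the `lpips` PyTorch package; the snippet below is only a dependency-free numpy sketch of that underlying computation, with hypothetical feature maps and weights standing in for a real backbone's activations.

```python
import numpy as np

def lpips_distance(feats_a, feats_b, layer_weights):
    """Simplified LPIPS-style perceptual distance (numpy sketch).

    feats_a, feats_b: lists of (C, H, W) feature maps, one per network layer
                      (in real LPIPS these come from a pretrained CNN).
    layer_weights:    list of (C,) learned per-channel weights, one per layer.
    """
    total = 0.0
    for fa, fb, w in zip(feats_a, feats_b, layer_weights):
        # Unit-normalize each spatial position's feature vector along channels.
        na = fa / (np.linalg.norm(fa, axis=0, keepdims=True) + 1e-10)
        nb = fb / (np.linalg.norm(fb, axis=0, keepdims=True) + 1e-10)
        # Channel-weighted squared difference, averaged over spatial positions.
        diff = w[:, None, None] * (na - nb) ** 2
        total += diff.sum(axis=0).mean()
    return total
```

Identical inputs give a distance of zero, and the value grows as the deep features diverge, which is what lets the loss approximate human judgments of image similarity during generator training.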

Zhipeng Chen, Xinheng Wang, Lun Xie, Haijie Yuan, Hang Pan • 2026

Related benchmarks

Task | Dataset | Result | Rank
Talking Face Generation | LRS2 | - | 8
Talking Head Generation | LRS2 | LSE-C 7.287 | 6
Talking Head Generation | LRS3 | LSE-C 7.513 | 6
Talking Head Generation | LRW | LSE-C 6.86 | 6
