
Exploring Talking Head Models With Adjacent Frame Prior for Speech-Preserving Facial Expression Manipulation

About

Speech-Preserving Facial Expression Manipulation (SPFEM) is an innovative technique aimed at altering facial expressions in images and videos while retaining the original mouth movements. Despite advancements, SPFEM still struggles with accurate lip synchronization due to the complex interplay between facial expressions and mouth shapes. Capitalizing on the advanced capabilities of audio-driven talking head generation (AD-THG) models in synthesizing precise lip movements, our research introduces a novel integration of these models with SPFEM. We present a new framework, Talking Head Facial Expression Manipulation (THFEM), which utilizes AD-THG models to generate frames with accurately synchronized lip movements from audio inputs and SPFEM-altered images. However, increasing the number of frames generated by AD-THG models tends to compromise the realism and expression fidelity of the images. To counter this, we develop an adjacent frame learning strategy that finetunes AD-THG models to predict sequences of consecutive frames. This strategy enables the models to incorporate information from neighboring frames, significantly improving image quality during testing. Our extensive experimental evaluations demonstrate that this framework effectively preserves mouth shapes during expression manipulations, highlighting the substantial benefits of integrating AD-THG with SPFEM.
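The adjacent frame learning strategy finetunes the AD-THG model to predict a short window of consecutive frames rather than a single frame, so each prediction can draw on information from its neighbors. As a minimal, hypothetical sketch (the paper does not specify the objective; the function name `adjacent_frame_loss` and the L1 reconstruction loss here are assumptions for illustration), a per-window training objective might look like:

```python
import numpy as np

def adjacent_frame_loss(pred_frames: np.ndarray, gt_frames: np.ndarray) -> float:
    """Hypothetical L1 reconstruction loss averaged over a window of
    consecutive frames.

    pred_frames, gt_frames: arrays of shape (k, H, W, C), where k is the
    number of adjacent frames the finetuned model predicts in one step.
    """
    assert pred_frames.shape == gt_frames.shape
    return float(np.mean(np.abs(pred_frames - gt_frames)))

# Toy usage: a window of k=3 adjacent 64x64 RGB frames.
rng = np.random.default_rng(0)
gt = rng.random((3, 64, 64, 3))
pred = gt + 0.1  # a prediction that is uniformly off by 0.1
loss = adjacent_frame_loss(pred, gt)
```

Because the loss is averaged across all k frames in the window, gradients from each frame's error reach the shared prediction, which is one way neighboring-frame information can regularize per-frame quality.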

Zhenxuan Lu, Zhihua Xu, Zhijing Yang, Feng Gao, Yongyi Lu, Keze Wang, Tianshui Chen • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Talking Head Generation | RAVDESS intra-identity 1.0 | FAD 0.833 | 48 |
| Audio Driven Talking Head Generation | RAVDESS (cross-identity) | FAD 1.885 | 48 |
| Speech-Preserving Facial Expression Manipulation | MEAD (intra-identity) | FAD 0.857 | 48 |
| Speech-Preserving Facial Expression Manipulation | MEAD (cross-identity) | FAD 2.058 | 38 |
| Facial Expression Manipulation | Intra-identity (Intra-ID) | FAD 1.241 | 32 |
| Speech-Preserving Facial Expression Manipulation | User Study (test) | Realism 65 | 8 |
