MEMO: Memory-Guided Diffusion for Expressive Talking Video Generation

About

Recent advances in video diffusion models have unlocked new potential for realistic audio-driven talking video generation. However, achieving seamless audio-lip synchronization, maintaining long-term identity consistency, and producing natural, audio-aligned expressions in generated talking videos remain significant challenges. To address these challenges, we propose Memory-guided EMOtion-aware diffusion (MEMO), an end-to-end audio-driven portrait animation approach that generates identity-consistent and expressive talking videos. Our approach is built around two key modules: (1) a memory-guided temporal module, which enhances long-term identity consistency and motion smoothness by developing memory states that store information from a longer past context to guide temporal modeling via linear attention; and (2) an emotion-aware audio module, which replaces traditional cross-attention with multi-modal attention to enhance audio-video interaction, while detecting emotions from audio to refine facial expressions via emotion-adaptive layer norm. Extensive quantitative and qualitative results demonstrate that MEMO generates more realistic talking videos across diverse image and audio types, outperforming state-of-the-art methods in overall quality, audio-lip synchronization, identity consistency, and expression-emotion alignment.
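The page does not include code, so the sketch below is a minimal PyTorch illustration of the two mechanisms the abstract names: a linear-attention memory state accumulated over past frames, and a layer norm modulated by a detected emotion. All class names (MemoryLinearAttention, EmotionAdaLN), the ELU+1 feature map, the number of emotion classes, and the update rule are assumptions for illustration, not the authors' implementation.

```python
# Illustrative sketch only; shapes, names, and update rule are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MemoryLinearAttention(nn.Module):
    """Accumulates a memory state over past frames and queries it with
    linear attention, so cost per clip stays constant as context grows."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.to_k = nn.Linear(dim, dim)
        self.to_v = nn.Linear(dim, dim)

    def forward(self, x, memory=None):
        # x: (batch, tokens, dim) features of the current clip.
        q = F.elu(self.to_q(x)) + 1  # positive feature map phi(q)
        k = F.elu(self.to_k(x)) + 1  # positive feature map phi(k)
        v = self.to_v(x)
        if memory is None:
            kv = torch.zeros(x.size(0), x.size(-1), x.size(-1), device=x.device)
            z = torch.zeros(x.size(0), x.size(-1), device=x.device)
        else:
            kv, z = memory
        # Fold the current clip into the running memory state.
        kv = kv + torch.einsum('btd,bte->bde', k, v)  # sum of phi(k) v^T
        z = z + k.sum(dim=1)                          # normalizer sum of phi(k)
        # Query the accumulated memory (normalized linear attention).
        out = torch.einsum('btd,bde->bte', q, kv)
        denom = torch.einsum('btd,bd->bt', q, z).clamp(min=1e-6).unsqueeze(-1)
        return out / denom, (kv, z)

class EmotionAdaLN(nn.Module):
    """Layer norm whose scale and shift are predicted from an emotion label,
    letting a detected emotion modulate facial-expression features."""
    def __init__(self, dim, num_emotions=8):
        super().__init__()
        self.norm = nn.LayerNorm(dim, elementwise_affine=False)
        self.emb = nn.Embedding(num_emotions, dim * 2)

    def forward(self, x, emotion_id):
        # x: (batch, tokens, dim); emotion_id: (batch,) class indices.
        scale, shift = self.emb(emotion_id).chunk(2, dim=-1)
        return self.norm(x) * (1 + scale.unsqueeze(1)) + shift.unsqueeze(1)
```

The memory tuple (kv, z) would be carried across generated clips, so identity cues from long-past frames keep guiding temporal modeling without reattending to all previous tokens.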

Longtao Zheng, Yifan Zhang, Hanzhong Guo, Jiachun Pan, Zhenxiong Tan, Jiahao Lu, Chuanxin Tang, Bo An, Shuicheng Yan • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Audio Driven Talking Head Generation | CREMA | Sync | 6.0922 | 14
Audio Driven Talking Head Generation | Mead | Sync | 6.9885 | 14
Talking Head Generation | Internal talking-head dataset, Out-domain (test) | Lip Sync Score | 4.45 | 6
Talking Head Generation | Internal talking-head dataset, In-domain (test) | Lip Sync Score | 3.53 | 6
