
MANGO: Natural Multi-speaker 3D Talking Head Generation via 2D-Lifted Enhancement

About

Current audio-driven 3D head generation methods mainly focus on single-speaker scenarios and lack natural, bidirectional listen-and-speak interaction. Achieving seamless conversational behavior, where speaking and listening states transition fluidly, remains a key challenge. Existing 3D conversational avatar approaches rely on error-prone pseudo-3D labels that fail to capture fine-grained facial dynamics. To address these limitations, we introduce MANGO, a novel two-stage framework that leverages pure image-level supervision through alternate training to mitigate the noise introduced by pseudo-3D labels, thereby achieving better alignment with real-world conversational behavior. In the first stage, a diffusion-based transformer with a dual-audio interaction module models natural 3D motion from multi-speaker audio. In the second stage, a fast 3D Gaussian renderer generates high-fidelity images and provides 2D photometric supervision for the 3D motions via alternate training. Additionally, we introduce MANGO-Dialog, a high-quality dataset with over 50 hours of aligned 2D-3D conversational data across 500+ identities. Extensive experiments demonstrate that our method achieves high accuracy and realism in modeling two-person 3D dialogue motion, significantly advancing the fidelity and controllability of audio-driven talking heads.
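The core idea of the second stage — alternating a 3D loss against noisy pseudo-3D labels with a 2D photometric loss backpropagated through a differentiable renderer — can be illustrated with a deliberately tiny numerical sketch. Everything below (the scalar "motion" model, the tanh stand-in for the Gaussian renderer, all names and constants) is illustrative only and is not the paper's implementation:

```python
import math
import random

random.seed(0)

def render(m):
    # Toy differentiable "renderer": maps a motion scalar to a pixel value.
    # Stands in for the 3D Gaussian renderer; chosen so gradients are simple.
    return math.tanh(0.5 * m)

# Synthetic data: per-frame audio feature -> ground-truth 3D motion (w_true * a).
audio = [random.gauss(0.0, 1.0) for _ in range(64)]
w_true = 1.5
pseudo_3d = [w_true * a + random.gauss(0.0, 0.3) for a in audio]  # noisy pseudo-3D labels
frames = [render(w_true * a) for a in audio]                      # clean 2D target frames

w, lr = 0.0, 0.05  # w is the toy audio-to-motion model's single parameter
for step in range(400):
    if step % 2 == 0:
        # Branch A: mean-squared 3D loss against the (noisy) pseudo labels.
        grad = sum(2 * a * (w * a - y) for a, y in zip(audio, pseudo_3d)) / len(audio)
    else:
        # Branch B: 2D photometric loss, chain rule through the renderer
        # (d tanh(0.5*m)/dm = 0.5 * (1 - tanh^2)).
        grad = 0.0
        for a, f in zip(audio, frames):
            img = render(w * a)
            grad += 2 * (img - f) * (1 - img * img) * 0.5 * a
        grad /= len(audio)
    w -= lr * grad

print(round(w, 3))  # w converges near w_true: the image-level branch pulls the
                    # fit toward the clean frames despite the noisy 3D labels
```

The photometric branch has its minimum at the true motion (the frames were rendered from it), so alternating it with the pseudo-label loss counteracts the label noise — the mechanism the abstract attributes to 2D-lifted supervision, reduced here to one parameter.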

Lei Zhu, Lijian Lin, Ye Zhu, Jiahao Wu, Xuehan Hou, Yu Li, Yunfei Liu, Jie Chen • 2026

Related benchmarks

Task                                   | Dataset             | Metric             | Result | Rank
3D mesh modeling                       | MANGO-Dialog (test) | LVE                | 1.741  | 6
3D mesh modeling                       | DualTalk (test)     | LVE (Error)        | 1.894  | 6
2D image generation                    | MANGO-Dialog (test) | PSNR               | 26.36  | 4
3D talking head generation             | User Study          | Visual Quality (L) | 3.9    | 4
Conversational talking head generation | MANGO-Dialog (test) | S-FD (Exp)         | 22.37  | 3
