
Real-time Neural Radiance Talking Portrait Synthesis via Audio-spatial Decomposition

About

While dynamic Neural Radiance Fields (NeRF) have shown success in high-fidelity 3D modeling of talking portraits, their slow training and inference speed severely obstructs practical use. In this paper, we propose an efficient NeRF-based framework that enables real-time synthesis of talking portraits and faster convergence by leveraging the recent success of grid-based NeRF. Our key insight is to decompose the inherently high-dimensional talking portrait representation into three low-dimensional feature grids. Specifically, a Decomposed Audio-spatial Encoding Module models the dynamic head with a 3D spatial grid and a 2D audio grid. The torso is handled with another 2D grid in a lightweight Pseudo-3D Deformable Module. Both modules prioritize efficiency while preserving good rendering quality. Extensive experiments demonstrate that our method generates realistic, audio-lip-synchronized talking portrait videos while being highly efficient compared to previous methods.
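The core idea of the decomposition can be sketched as follows. Instead of querying one high-dimensional grid over the joint (position, audio) space, the head is encoded by two separate low-dimensional lookups whose features are concatenated. The sketch below is a minimal illustration with hypothetical names and resolutions (`spatial_grid`, `audio_grid`, nearest-neighbor lookup in place of the interpolation a real implementation would use); it is not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical resolutions and feature dimension for illustration only.
SPATIAL_RES, AUDIO_RES, FDIM = 32, 16, 4

# A 3D grid over spatial coordinates and a separate 2D grid over
# low-dimensional audio coordinates, replacing one joint high-D grid.
spatial_grid = rng.standard_normal((SPATIAL_RES,) * 3 + (FDIM,))
audio_grid = rng.standard_normal((AUDIO_RES,) * 2 + (FDIM,))

def query(grid, coords):
    """Nearest-neighbor lookup in [0, 1)^d (real grids interpolate)."""
    res = grid.shape[0]
    idx = np.clip((np.asarray(coords) * res).astype(int), 0, res - 1)
    return grid[tuple(idx)]

def encode(xyz, audio_uv):
    """Concatenate spatial and audio features for one sample point."""
    return np.concatenate([query(spatial_grid, xyz),
                           query(audio_grid, audio_uv)])

feat = encode([0.5, 0.2, 0.8], [0.3, 0.7])
print(feat.shape)  # (8,): two 4-dim features concatenated
```

The concatenated feature would then be fed to a small MLP to predict density and color, as in other grid-based NeRF variants; the savings come from the grids growing with the individual dimensions rather than their product.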

Jiaxiang Tang, Kaisiyuan Wang, Hang Zhou, Xiaokang Chen, Dongliang He, Tianshu Hu, Jingtuo Liu, Gang Zeng, Jingdong Wang • 2022

Related benchmarks

Task                         Dataset                                       Metric          Result    Rank
Head Reconstruction          Video sequences (test)                        PSNR            31.7754   11
Talking Head Reconstruction  Talking Head Reconstruction (test)            PSNR            31.78     9
Lip Synchronization          Cross-subject Lip Synchronization (Audio A)   LSE-D           11.639    8
Lip Synchronization          Cross-subject Lip Synchronization (Audio B)   LSE-D           11.082    8
Lip Synchronization          SynObama Audio B cross-driven (test)          Macron Sync-E   7.875     6
Lip Synchronization          SynObama Audio A cross-driven (test)          Macron Sync-E   7.999     6
Talking Head Generation      Obama dataset (test)                          CSIM            0.825     5
Talking Head Generation      Self-reconstruction setting                   PSNR            26.794    5
