
UniTalking: A Unified Audio-Video Framework for Talking Portrait Generation

About

While state-of-the-art audio-video generation models like Veo3 and Sora2 demonstrate remarkable capabilities, their closed-source nature makes their architectures and training paradigms inaccessible. To bridge this gap in accessibility and performance, we introduce UniTalking, a unified, end-to-end diffusion framework for generating high-fidelity speech and lip-synchronized video. At its core, our framework employs Multi-Modal Transformer Blocks to explicitly model the fine-grained temporal correspondence between audio and video latent tokens via a shared self-attention mechanism. By leveraging powerful priors from a pre-trained video generation model, our framework ensures state-of-the-art visual fidelity while enabling efficient training. Furthermore, UniTalking incorporates a personalized voice cloning capability, allowing the generation of speech in a target style from a brief audio reference. Qualitative and quantitative results demonstrate that our method produces highly realistic talking portraits, achieving superior performance over existing open-source approaches in lip-sync accuracy, audio naturalness, and overall perceptual quality.
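The shared self-attention described above can be illustrated with a minimal, single-head sketch in numpy: audio and video latent tokens are concatenated along the sequence axis so every token attends across both modalities, then the output is split back per modality. The function name and projection-matrix arguments are illustrative assumptions, not the authors' implementation, which uses multi-head attention inside full transformer blocks.

```python
import numpy as np

def joint_self_attention(audio_tokens, video_tokens, W_q, W_k, W_v):
    """Single-head self-attention over concatenated audio+video tokens.

    audio_tokens: (T_a, d), video_tokens: (T_v, d)
    W_q, W_k, W_v: (d, d) projection matrices (hypothetical names).
    Returns the attended audio and video tokens, same shapes as the inputs.
    """
    # Concatenate along the sequence axis so attention spans both modalities,
    # letting each video token attend to every audio token and vice versa.
    x = np.concatenate([audio_tokens, video_tokens], axis=0)  # (T_a + T_v, d)
    q, k, v = x @ W_q, x @ W_k, x @ W_v

    # Scaled dot-product attention with a numerically stable softmax.
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    out = weights @ v

    # Split the jointly attended sequence back into per-modality streams.
    t_a = audio_tokens.shape[0]
    return out[:t_a], out[t_a:]
```

Because the softmax is taken over the full concatenated sequence, the fine-grained audio-video correspondence is learned directly by attention rather than through a separate cross-modal alignment module.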

Hebeizi Li, Zihao Liang, Benyuan Sun, Zihao Yin, Xiao Sha, Chenliang Wang, Yi Yang • 2026

Related benchmarks

Task                 Dataset             Metric                        Result   Rank
Text-to-Speech       Seed-TTS en (test)  WER                           3.8      90
Lip-sync evaluation  T2AV                Sync-C                        4.87     4
Speaker Similarity   TR2AV               Speaker Similarity (English)  0.703    4
