
StreamAvatar: Streaming Diffusion Models for Real-Time Interactive Human Avatars

About

Real-time, streaming interactive avatars represent a critical yet challenging goal in digital human research. Although diffusion-based human avatar generation methods achieve remarkable success, their non-causal architectures and high computational costs make them unsuitable for streaming. Moreover, existing interactive approaches are typically restricted to the head-and-shoulder region, limiting their ability to produce gestures and body motions. To address these challenges, we propose a two-stage autoregressive adaptation and acceleration framework that applies autoregressive distillation and adversarial refinement to adapt a high-fidelity human video diffusion model for real-time, interactive streaming. To ensure long-term stability and consistency, we introduce three key components: a Reference Sink, a Reference-Anchored Positional Re-encoding (RAPR) strategy, and a Consistency-Aware Discriminator. Building on this framework, we develop a one-shot, interactive human avatar model capable of generating both natural talking and listening behaviors with coherent gestures. Extensive experiments demonstrate that our method achieves state-of-the-art performance, surpassing existing approaches in generation quality, real-time efficiency, and interaction naturalness. Project page: https://streamavatar.github.io.
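The abstract names a Reference Sink and Reference-Anchored Positional Re-encoding (RAPR) as the mechanisms for long-term stability, but does not publish implementation details. The sketch below is a hypothetical illustration of the general idea, assuming a setup similar to attention sinks: reference-frame tokens are pinned at the front of a streaming KV cache and never evicted, while generated frames pass through a sliding window whose position indices are re-anchored to the reference tokens each step, so absolute positions stay bounded no matter how long the stream runs. All class and method names here are invented for illustration.

```python
from collections import deque


class ReferenceSinkCache:
    """Toy sketch (not the paper's implementation) of a streaming
    token cache with a pinned reference "sink" plus a sliding window
    of recent frames whose positions are re-anchored every step."""

    def __init__(self, ref_tokens, window=4):
        self.ref = list(ref_tokens)          # reference tokens, never evicted
        self.recent = deque(maxlen=window)   # sliding window of recent frames

    def append(self, frame_tokens):
        """Add one generated frame; oldest frame is evicted when full."""
        self.recent.append(list(frame_tokens))

    def tokens(self):
        """Reference tokens always lead, followed by the windowed frames."""
        out = list(self.ref)
        for frame in self.recent:
            out.extend(frame)
        return out

    def positions(self):
        """Re-anchored position indices: reference tokens keep fixed
        positions 0..R-1, and windowed tokens are re-indexed from R
        each step, so indices stay bounded for arbitrarily long
        streams (the RAPR-style idea, as we interpret it)."""
        R = len(self.ref)
        pos = list(range(R))
        p = R
        for frame in self.recent:
            for _ in frame:
                pos.append(p)
                p += 1
        return pos
```

With a 2-token reference and a 4-frame window of 2 tokens each, even after 100 appended frames the largest position index is 9, whereas naive absolute positions would have grown past 200; this bounded re-indexing is what keeps a positional encoding from drifting out of its trained range during unbounded streaming.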

Zhiyao Sun, Ziqiao Peng, Yifeng Ma, Yi Chen, Zhengguang Zhou, Zixiang Zhou, Guozhen Zhang, Youliang Zhang, Yuan Zhou, Qinglin Lu, Yong-Jin Liu• 2025

Related benchmarks

Task | Dataset | Result | Rank
Talking avatar video generation | EMTD (test) | FID 59.87 | 10
Talking avatar video generation | Short dataset (real avatar images, 5s audio) 1.0 | FID 74.21 | 10
Talking avatar video generation | Long dataset (25 synthesized avatar images, 20s audio clips) 1.0 | ASE 4.01 | 10
