FLOAT: Generative Motion Latent Flow Matching for Audio-driven Talking Portrait
About
With the rapid advancement of diffusion-based generative models, portrait image animation has achieved remarkable results. However, it still faces challenges in temporally consistent video generation and fast sampling due to the iterative nature of diffusion sampling. This paper presents FLOAT, an audio-driven talking portrait video generation method based on a flow matching generative model. Instead of a pixel-based latent space, we take advantage of a learned orthogonal motion latent space, enabling efficient generation and editing of temporally consistent motion. To achieve this, we introduce a transformer-based vector field predictor with an effective frame-wise conditioning mechanism. Additionally, our method supports speech-driven emotion enhancement, enabling the natural incorporation of expressive motions. Extensive experiments demonstrate that our method outperforms state-of-the-art audio-driven talking portrait methods in terms of visual quality, motion fidelity, and efficiency.
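To make the flow matching idea concrete, the sketch below shows the standard conditional flow-matching training objective that a vector field predictor is regressed against: sample a point on the straight-line path between noise and a motion latent, and predict the constant velocity along that path. This is a minimal NumPy illustration of the general technique, not the paper's implementation; the linear `predict_v` stand-in for the transformer, the shapes, and all variable names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def flow_matching_loss(predict_v, x0, x1, t):
    """Conditional flow-matching loss on a linear probability path.

    x0: noise samples, x1: data (motion latent) samples, t: times in [0, 1].
    The regression target is the constant velocity x1 - x0 along the path.
    """
    t = t[:, None]                   # broadcast time over the feature dim
    x_t = (1.0 - t) * x0 + t * x1    # point on the straight-line path
    v_target = x1 - x0               # target vector field at (x_t, t)
    v_pred = predict_v(x_t, t)
    return float(np.mean((v_pred - v_target) ** 2))

# Toy "predictor": a fixed linear map standing in for the transformer
# vector field predictor (illustrative only).
W = rng.normal(size=(8, 8)) * 0.1
predict_v = lambda x_t, t: x_t @ W

x0 = rng.normal(size=(4, 8))   # noise in the motion latent space (assumed dim 8)
x1 = rng.normal(size=(4, 8))   # target motion latents
t = rng.uniform(size=4)        # one time value per sample

loss = flow_matching_loss(predict_v, x0, x1, t)
print(loss)
```

In training, this scalar loss would be minimized over the predictor's parameters; at inference, the learned vector field is integrated from noise to a motion latent with a few ODE solver steps, which is what enables fast sampling compared to iterative diffusion.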
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Talking Face Emotion Editing | User Study Extended Emotion | Emotional Accuracy | 11.8 | 12 |
| Talking Face Emotion Editing | User Study Basic Emotion | Emotional Expression | 14.9 | 12 |
| Audio-visual Synchronization | HDTF cross-driven | Sync-C (Cross-Gender) | 6.444 | 8 |
| Audio-driven portrait animation | TH-1KH (test) | LSE-C | 5.482 | 8 |
| Audio-driven portrait animation | Hallo3 (test) | LSE-C | 6.287 | 8 |
| Talking head synthesis | Curated 5-Identity Audio-Visual Dataset (Macron, Paul, Obama, May, Stabenow) (test) | PSNR | 17.999 | 8 |
| Audio-driven talking face generation | HDTF (randomly sampled 50 videos) | FID | 9.164 | 6 |
| Audio-driven talking face generation | CelebV (randomly sampled 50 videos) | FID | 18.272 | 6 |
| Emotion Editing | CREMA-D | AITV | 0.846 | 5 |
| Emotion Editing | Mead | AITV | 1.434 | 5 |