
V2SFlow: Video-to-Speech Generation with Speech Decomposition and Rectified Flow

About

In this paper, we introduce V2SFlow, a novel Video-to-Speech (V2S) framework designed to generate natural and intelligible speech directly from silent talking face videos. While recent V2S systems have shown promising results on constrained datasets with limited speakers and vocabularies, their performance often degrades on real-world, unconstrained datasets due to the inherent variability and complexity of speech signals. To address these challenges, we decompose the speech signal into manageable subspaces (content, pitch, and speaker information), each representing distinct speech attributes, and predict them directly from the visual input. To generate coherent and realistic speech from these predicted attributes, we employ a rectified flow matching decoder built on a Transformer architecture, which models efficient probabilistic pathways from random noise to the target speech distribution. Extensive experiments demonstrate that V2SFlow significantly outperforms state-of-the-art methods, even surpassing the naturalness of ground truth utterances. Code and models are available at: https://github.com/kaistmm/V2SFlow
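The abstract's decoder is based on rectified flow matching, which learns a velocity field along a straight path from noise to data. As a minimal illustrative sketch (not the authors' code; the array shapes and variable names here are assumptions), the training pair for one example looks like this:

```python
import numpy as np

# Rectified flow interpolates linearly between a noise sample x0 and a
# data sample x1: x_t = (1 - t) * x0 + t * x1. The model is trained to
# predict the constant velocity along this straight path, v = x1 - x0.

def rectified_flow_pair(x0, x1, t):
    """Return the interpolated point x_t and the target velocity."""
    x_t = (1.0 - t) * x0 + t * x1
    v_target = x1 - x0
    return x_t, v_target

rng = np.random.default_rng(0)
x1 = rng.normal(size=(80,))   # stand-in for a target speech feature frame
x0 = rng.normal(size=(80,))   # Gaussian noise sample
t = 0.3                       # a time step drawn from [0, 1] during training
x_t, v = rectified_flow_pair(x0, x1, t)
```

At inference time, the learned velocity field is integrated from noise toward the speech distribution; because the target paths are straight, few integration steps are needed, which is the efficiency the abstract refers to.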

Jeongsoo Choi, Ji-Hoon Kim, Jinyu Li, Joon Son Chung, Shujie Liu• 2024

Related benchmarks

Task                     Dataset          Result        Rank
Lip-to-Speech Synthesis  LRS3-TED (test)  UTMOS 3.6939  7
Lip-to-Speech Synthesis  LRS2-BBC (test)  UTMOS 3.4556  7
