LipSody: Lip-to-Speech Synthesis with Enhanced Prosody Consistency
About
Lip-to-speech synthesis aims to generate speech audio directly from silent facial video by reconstructing linguistic content from lip movements, which is valuable when the audio signal is unavailable or degraded. While recent diffusion-based models such as LipVoicer reconstruct linguistic content with impressive accuracy, they often lack prosodic consistency. In this work, we propose LipSody, a lip-to-speech framework enhanced for prosody consistency. LipSody introduces a prosody-guiding strategy that leverages three complementary cues: speaker identity extracted from facial images, linguistic content derived from lip movements, and emotional context inferred from facial video. Experimental results demonstrate that, compared to prior approaches, LipSody substantially improves prosody-related metrics, including global and local pitch deviation, energy consistency, and speaker similarity.
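The abstract does not spell out how the three cues are combined, so the PyTorch sketch below is only a rough illustration of one plausible fusion scheme: the module name `ProsodyConditioner`, the embedding dimensions, and the concatenate-then-project design are all assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class ProsodyConditioner(nn.Module):
    """Illustrative fusion of three prosody cues into one conditioning signal.

    Hypothetical interface: the paper does not specify these names,
    dimensions, or the concat-then-project design.
    """

    def __init__(self, d_speaker=256, d_content=512, d_emotion=128, d_cond=512):
        super().__init__()
        self.proj = nn.Linear(d_speaker + d_content + d_emotion, d_cond)

    def forward(self, speaker_emb, content_emb, emotion_emb):
        # speaker_emb: (B, d_speaker)     identity cue from facial images
        # content_emb: (B, T, d_content)  linguistic cue from lip movements
        # emotion_emb: (B, d_emotion)     emotional cue from facial video
        T = content_emb.size(1)
        # Broadcast the two utterance-level cues across the time axis.
        spk = speaker_emb.unsqueeze(1).expand(-1, T, -1)
        emo = emotion_emb.unsqueeze(1).expand(-1, T, -1)
        fused = torch.cat([spk, content_emb, emo], dim=-1)
        # Per-frame conditioning vector for the downstream synthesis model.
        return self.proj(fused)  # (B, T, d_cond)

if __name__ == "__main__":
    cond = ProsodyConditioner()
    out = cond(torch.randn(2, 256), torch.randn(2, 40, 512), torch.randn(2, 128))
    print(out.shape)  # torch.Size([2, 40, 512])
```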
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Lip-to-Speech Synthesis | Lip-to-speech synthesis Subjective (test) | Naturalness | 3.47 | 3 |
| Lip-to-Speech Synthesis | LRS3 unseen speaker protocol (test) | WER | 22.5 | 3 |
| Lip-to-Speech Synthesis | LRS3 (test) | GF0 | 25.15 | 2 |
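The GF0 entry above scores global pitch deviation. The exact metric definitions are not given here, so the sketch below is only an approximation: it assumes time-aligned reference and generated waveforms, takes global pitch deviation as the difference in utterance-level mean F0, local deviation as frame-wise F0 RMSE, and energy consistency as the correlation of frame-level RMS curves, using librosa's pYIN pitch tracker.

```python
import numpy as np
import librosa

def prosody_metrics(ref_wav, gen_wav, sr=16000):
    """Approximate prosody comparison between reference and generated speech.

    Assumes both signals are time-aligned; the paper's exact definitions
    of GF0, local pitch deviation, and energy consistency may differ.
    """
    # Frame-wise F0 via pYIN; unvoiced frames come back as NaN.
    f0_ref, _, _ = librosa.pyin(ref_wav, fmin=65, fmax=400, sr=sr)
    f0_gen, _, _ = librosa.pyin(gen_wav, fmin=65, fmax=400, sr=sr)

    n = min(len(f0_ref), len(f0_gen))
    f0_ref, f0_gen = f0_ref[:n], f0_gen[:n]
    voiced = ~np.isnan(f0_ref) & ~np.isnan(f0_gen)

    # Global pitch deviation: gap between utterance-level mean F0 values (Hz).
    gf0 = abs(np.nanmean(f0_ref) - np.nanmean(f0_gen))
    # Local pitch deviation: RMSE over frames voiced in both signals (Hz).
    lf0 = np.sqrt(np.mean((f0_ref[voiced] - f0_gen[voiced]) ** 2))

    # Energy consistency: Pearson correlation of frame-level RMS curves.
    rms_ref = librosa.feature.rms(y=ref_wav)[0]
    rms_gen = librosa.feature.rms(y=gen_wav)[0]
    m = min(len(rms_ref), len(rms_gen))
    energy_corr = np.corrcoef(rms_ref[:m], rms_gen[:m])[0, 1]

    return {"GF0": gf0, "LF0": lf0, "energy_corr": energy_corr}
```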