
Audio-Visual Speech Codecs: Rethinking Audio-Visual Speech Enhancement by Re-Synthesis

About

Since facial actions such as lip movements contain significant information about speech content, it is not surprising that audio-visual speech enhancement methods are more accurate than their audio-only counterparts. Yet, state-of-the-art approaches still struggle to generate clean, realistic speech without noise artifacts and unnatural distortions in challenging acoustic environments. In this paper, we propose a novel audio-visual speech enhancement framework for high-fidelity telecommunications in AR/VR. Our approach leverages audio-visual speech cues to generate the codes of a neural speech codec, enabling efficient synthesis of clean, realistic speech from noisy signals. Given the importance of speaker-specific cues in speech, we focus on developing personalized models that work well for individual speakers. We demonstrate the efficacy of our approach on a new audio-visual speech dataset collected in an unconstrained, large vocabulary setting, as well as existing audio-visual datasets, outperforming speech enhancement baselines on both quantitative metrics and human evaluation studies. Please see the supplemental video for qualitative results at https://github.com/facebookresearch/facestar/releases/download/paper_materials/video.mp4.
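The core idea in the abstract — fusing noisy audio and visual speech cues into a joint representation, predicting the discrete codes of a neural speech codec, and letting the codec's decoder re-synthesize clean speech — can be sketched in a few lines. The sketch below is illustrative only: the dimensions, random "encoders," and nearest-neighbour code lookup are stand-ins for the trained networks and codec described in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (illustrative, not from the paper):
# T frames, audio/visual feature dims, joint latent dim, codebook size.
T, D_A, D_V, D, K = 50, 80, 64, 32, 256

# Toy inputs: noisy audio features and time-aligned visual (lip) features.
audio_feats = rng.standard_normal((T, D_A))
visual_feats = rng.standard_normal((T, D_V))

# Random projections standing in for trained audio/visual encoders.
W_a = rng.standard_normal((D_A, D)) / np.sqrt(D_A)
W_v = rng.standard_normal((D_V, D)) / np.sqrt(D_V)

# Fuse the two modalities into one joint latent vector per frame.
latent = audio_feats @ W_a + visual_feats @ W_v  # shape (T, D)

# Codebook of a (hypothetical) neural speech codec. Enhancement here
# amounts to predicting one clean code index per frame; we use a
# nearest-neighbour lookup in place of a learned code predictor.
codebook = rng.standard_normal((K, D))
dists = ((latent[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (T, K)
codes = dists.argmin(axis=1)  # discrete codec codes, one per frame

# The codec's decoder would re-synthesize a clean waveform from these
# codes; we stop at retrieving the quantized clean latents.
clean_latent = codebook[codes]  # shape (T, D)
print(codes.shape, clean_latent.shape)
```

Because the output is a sequence of discrete codec codes rather than a denoised waveform, the decoder always synthesizes speech from valid codebook entries, which is why this formulation can avoid the noise artifacts and distortions of direct-mapping enhancement methods.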

Karren Yang, Dejan Markovic, Steven Krenn, Vasu Agrawal, Alexander Richard · 2022

Related benchmarks

Task                                           | Dataset  | Metric            | Result | Rank
Audio-visual speech separation and enhancement | Facestar | PESQ              | 1.354  | 4
Audio-visual speech separation and enhancement | Lip2Wav  | PESQ              | 1.482  | 4
Speech Naturalness Evaluation                  | Facestar | Preference (Ours) | 78.5   | 3

Other info

Code
