
Time Domain Audio Visual Speech Separation

About

Audio-visual multi-modal modeling has been demonstrated to be effective in many speech-related tasks, such as speech recognition and speech enhancement. This paper introduces a new time-domain audio-visual architecture for target speaker extraction from monaural mixtures. The architecture generalizes the previous TasNet (time-domain speech separation network) to enable multi-modal learning and, at the same time, extends classical audio-visual speech separation from the frequency domain to the time domain. The main components of the proposed architecture are an audio encoder, a video encoder that extracts lip embeddings from video streams, a multi-modal separation network, and an audio decoder. Experiments on simulated mixtures based on the recently released LRS2 dataset show that our method brings 3 dB+ and 4 dB+ Si-SNR improvements on two- and three-speaker cases respectively, compared to audio-only TasNet and frequency-domain audio-visual networks.
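The four components described above (audio encoder, video encoder, multi-modal separation network, audio decoder) can be sketched end to end. The following is a minimal, untrained NumPy illustration of the data flow only: all dimensions, the random basis matrices, the naive additive fusion, and the placeholder lip embedding are assumptions for illustration, not the paper's actual model.

```python
import numpy as np

# Hypothetical sizes; the paper's real hyperparameters are not reproduced here.
WIN, HOP, BASIS = 40, 20, 64        # encoder window/hop (samples), number of basis filters

rng = np.random.default_rng(0)
enc_W = rng.standard_normal((BASIS, WIN)) * 0.1   # stands in for the learned 1-D conv encoder
dec_W = rng.standard_normal((WIN, BASIS)) * 0.1   # stands in for the learned decoder

def encode(x):
    """Audio encoder: frame the waveform and project each frame onto the basis."""
    n_frames = (len(x) - WIN) // HOP + 1
    frames = np.stack([x[i * HOP: i * HOP + WIN] for i in range(n_frames)])
    return np.maximum(frames @ enc_W.T, 0.0)      # (n_frames, BASIS), ReLU nonlinearity

def separate(feats, lip_emb):
    """Stand-in for the multi-modal separation network: fuse the lip embedding
    with the audio features and predict a mask for the target speaker."""
    fused = feats + lip_emb                        # naive additive fusion, illustration only
    return 1.0 / (1.0 + np.exp(-fused))            # sigmoid mask in [0, 1]

def decode(feats):
    """Audio decoder: map masked features back to frames and overlap-add."""
    frames = feats @ dec_W.T                       # (n_frames, WIN)
    out = np.zeros((len(frames) - 1) * HOP + WIN)
    for i, f in enumerate(frames):
        out[i * HOP: i * HOP + WIN] += f
    return out

mixture = rng.standard_normal(16000)               # 1 s mono mixture at 16 kHz
feats = encode(mixture)
lip_emb = rng.standard_normal(BASIS)               # placeholder for the video encoder's output
mask = separate(feats, lip_emb)
target = decode(feats * mask)                      # extracted target-speaker waveform
print(target.shape)
```

In the actual system the encoder/decoder weights and the separation network are trained jointly, and the lip embedding comes from a video encoder over the target speaker's mouth region; here they are random stubs so the pipeline runs standalone.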

Jian Wu, Yong Xu, Shi-Xiong Zhang, Lian-Wu Chen, Meng Yu, Lei Xie, Dong Yu • 2019

Related benchmarks

Task                              Dataset                 Metric    Result   Rank
Audio-visual speech separation    LRS2-2Mix (test)        SI-SNRi   12.5     33
Audio-visual speech separation    LRS3 (test)             SDRi      11.7     20
Automatic Speech Recognition      LRS2-2Mix (test)        WER       31.43    18
Speech Separation                 VoxCeleb2-2Mix (test)   SDRi      9.8      12
Speech Separation                 LRS3-2Mix (test)        SDRi      11.7     11
Audio-visual speech separation    LRS2-3Mix (test)        SI-SNRi   10       8
Audio-Visual Speaker Separation   LRS3-2Mix (test)        SI-SNRi   11.2     8
Audio-visual speech separation    VoxCeleb2 (test)        SI-SNRi   9.2      7
Audio-Visual Speaker Separation   VoxCeleb2-2Mix (test)   SI-SNRi   9.2      7
Speech Separation                 GRID (test)             SDR       -13.99   5

Showing 10 of 11 rows
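SI-SNRi, the metric in several rows above, is the improvement in scale-invariant signal-to-noise ratio over the unprocessed mixture. A small self-contained sketch of the underlying SI-SNR computation (the `eps` guard and the synthetic signals are illustrative choices, not from the paper):

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant SNR in dB between an estimated and a reference signal."""
    est = est - est.mean()
    ref = ref - ref.mean()
    # Project the estimate onto the reference; scaling the estimate cannot change the score.
    s_target = (np.dot(est, ref) / (np.dot(ref, ref) + eps)) * ref
    e_noise = est - s_target
    return 10.0 * np.log10(np.dot(s_target, s_target) / (np.dot(e_noise, e_noise) + eps))

rng = np.random.default_rng(0)
ref = rng.standard_normal(16000)                 # 1 s reference at 16 kHz
noisy = ref + 0.1 * rng.standard_normal(16000)   # ~20 dB SI-SNR expected for 10% noise
print(si_snr(noisy, ref))
```

SI-SNRi for a separation system is then `si_snr(estimate, ref) - si_snr(mixture, ref)`, averaged over the test set.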
