
Exploring the Best Loss Function for DNN-Based Low-latency Speech Enhancement with Temporal Convolutional Networks

About

Recently, deep neural networks (DNNs) have been successfully applied to speech enhancement, making DNN-based speech enhancement an attractive research area. While time-frequency masking based on the short-time Fourier transform (STFT) has been widely used for DNN-based speech enhancement in recent years, time-domain methods such as the time-domain audio separation network (TasNet) have also been proposed. The most suitable method depends on the scale of the dataset and the type of task. In this paper, we explore the best speech enhancement algorithm on two different datasets. We propose an STFT-based method and a loss function using problem-agnostic speech encoder (PASE) features to improve subjective quality on the smaller dataset. Our proposed methods are effective on the Voice Bank + DEMAND dataset and compare favorably to other state-of-the-art methods. We also implement a low-latency version of TasNet, which we submitted to the DNS Challenge and have open-sourced. Our model achieves excellent performance on the DNS Challenge dataset.
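The STFT-based approach described above trains a network to predict a time-frequency mask that is applied to the noisy spectrogram, with the loss computed against the clean target. The paper's actual architecture and loss (including the PASE feature term) are not reproduced here; the following is only a minimal numpy sketch of the masking-loss idea, with a toy all-ones mask standing in for the network's prediction:

```python
import numpy as np

def stft(x, n_fft=512, hop=128):
    """Simple STFT: Hann-windowed frames followed by a real FFT."""
    win = np.hanning(n_fft)
    n_frames = 1 + (len(x) - n_fft) // hop
    frames = np.stack([x[i * hop:i * hop + n_fft] * win
                       for i in range(n_frames)])
    return np.fft.rfft(frames, axis=-1)  # shape: (frames, n_fft // 2 + 1)

def masking_loss(noisy, clean, mask):
    """MSE between the masked noisy magnitude and the clean magnitude.
    In training, `mask` would come from the enhancement network."""
    noisy_mag = np.abs(stft(noisy))
    clean_mag = np.abs(stft(clean))
    return np.mean((mask * noisy_mag - clean_mag) ** 2)

# Toy example: a sine tone corrupted by Gaussian noise.
rng = np.random.default_rng(0)
t = np.arange(4096) / 16000.0
clean = np.sin(2 * np.pi * 440 * t)
noisy = clean + 0.1 * rng.standard_normal(len(t))

# An all-ones mask is the identity baseline; the loss then measures
# the spectral distance between the unprocessed noisy and clean signals.
mask = np.ones_like(np.abs(stft(noisy)))
loss = masking_loss(noisy, clean, mask)
```

A perceptual term such as the paper's PASE feature loss would be added on top of a spectral loss like this one, comparing deep features of the enhanced and clean signals rather than raw magnitudes.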

Yuichiro Koyama, Tyler Vuong, Stefan Uhlich, Bhiksha Raj · 2020

Related benchmarks

Task               | Dataset                                                  | Metric  | Result | Rank
Speech Enhancement | VoiceBank + DEMAND (VB-DMD) (test)                       | PESQ    | 2.89   | 105
Speech Enhancement | DNS Challenge synthetic, with reverb                     | PESQ    | 2.19   | 8
Speech Enhancement | DNS Challenge synthetic, no reverb                       | PESQ    | 2.24   | 8
Speech Enhancement | DNS Challenge 2020                                       | PESQ    | 2.73   | 8
Speech Enhancement | DNS Challenge with reverb 2020 (test)                    | WB-PESQ | 2.75   | 7
Speech Enhancement | DNS Challenge INTERSPEECH without reverb 2020 (test)     | WB-PESQ | 2.73   | 7
