
Speech Enhancement using Self-Adaptation and Multi-Head Self-Attention

About

This paper investigates a self-adaptation method for speech enhancement using auxiliary speaker-aware features; the speaker representation used for adaptation is extracted directly from the test utterance. Conventional studies of deep neural network (DNN)-based speech enhancement mainly focus on building a speaker-independent model, whereas in speech applications such as recognition and synthesis, adapting the model to the target speaker is known to improve accuracy. Our research question is whether a DNN for speech enhancement can be adapted to unknown speakers without any auxiliary guidance signal at test time. To achieve this, we adopt multi-task learning of speech enhancement and speaker identification, and use the output of the final hidden layer of the speaker-identification branch as an auxiliary feature. In addition, we use multi-head self-attention to capture long-term dependencies in the speech and noise. Experimental results on a public dataset show that our strategy achieves state-of-the-art performance and also outperforms conventional methods in terms of subjective quality.
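The abstract describes two key ideas: concatenating a speaker embedding, taken from the final hidden layer of a speaker-identification branch and computed from the test utterance itself, onto the enhancement features (self-adaptation), and applying multi-head self-attention over time to capture long-term dependencies. A minimal NumPy sketch of the data flow is below; all layer sizes, the stub speaker-ID branch, and the random projection weights are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_self_attention(x, num_heads, rng):
    """x: (T, d) feature sequence; returns (T, d).
    Random weights stand in for learned parameters."""
    T, d = x.shape
    assert d % num_heads == 0
    dh = d // num_heads
    Wq, Wk, Wv, Wo = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(4))
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    heads = []
    for h in range(num_heads):
        s = slice(h * dh, (h + 1) * dh)
        # Full (T, T) attention map: every frame can attend to every other
        # frame, which is how long-term dependencies are captured.
        att = softmax(q[:, s] @ k[:, s].T / np.sqrt(dh))
        heads.append(att @ v[:, s])
    return np.concatenate(heads, axis=-1) @ Wo

def speaker_embedding(x, emb_dim, rng):
    """Stub speaker-ID branch: mean-pool a nonlinear projection of the
    utterance, mimicking the 'final hidden layer' output used for adaptation."""
    W = rng.standard_normal((x.shape[1], emb_dim)) / np.sqrt(x.shape[1])
    return np.tanh(x @ W).mean(axis=0)  # (emb_dim,)

rng = np.random.default_rng(0)
T, d, emb_dim = 50, 32, 16
noisy_feats = rng.standard_normal((T, d))  # e.g. T spectral frames

# Self-adaptation: the embedding comes from the test utterance itself,
# so no external speaker label or enrollment signal is needed.
spk = speaker_embedding(noisy_feats, emb_dim, rng)
adapted = np.concatenate([noisy_feats, np.tile(spk, (T, 1))], axis=-1)  # (T, d+emb_dim)

# Project back to d before attention (illustrative choice).
Wp = rng.standard_normal((d + emb_dim, d)) / np.sqrt(d + emb_dim)
out = multi_head_self_attention(adapted @ Wp, num_heads=4, rng=rng)
print(out.shape)  # (50, 32)
```

The sketch only shows shapes and data flow; in the paper the two branches are trained jointly via multi-task learning, so the speaker embedding is shaped by both the identification and enhancement losses.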

Yuma Koizumi, Kohei Yatabe, Marc Delcroix, Yoshiki Masuyama, Daiki Takeuchi • 2020

Related benchmarks

Task               | Dataset             | Result     | Rank
Speech Denoising   | VCTK-DEMAND (test)  | PESQ: 2.99 | 8
Speech Enhancement | Voice Bank + DEMAND | PESQ: 2.99 | 6
