
T-GSA: Transformer with Gaussian-weighted self-attention for speech enhancement

About

Transformer neural networks (TNNs) have demonstrated state-of-the-art performance on many natural language processing (NLP) tasks, replacing recurrent neural networks (RNNs) such as LSTMs and GRUs. However, TNNs have not performed well in speech enhancement, whose contextual nature differs from that of NLP tasks such as machine translation. Self-attention is a core building block of the Transformer: it not only enables parallelization of sequence computation but also provides the constant path length between symbols that is essential to learning long-range dependencies. In this paper, we propose a Transformer with Gaussian-weighted self-attention (T-GSA), whose attention weights are attenuated according to the distance between target and context symbols. The experimental results show that the proposed T-GSA significantly improves speech-enhancement performance compared to the Transformer and RNNs.
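The core idea of distance-based attenuation can be sketched as follows. This is a rough illustrative sketch, not the paper's exact formulation: the choice of a fixed scalar `sigma`, the element-wise multiplication of the score matrix before the softmax, and the numpy single-head layout are all assumptions made for illustration.

```python
import numpy as np

def gaussian_weighted_attention(Q, K, V, sigma=2.0):
    """Single-head scaled dot-product attention whose scores are
    attenuated by a Gaussian of the position distance |i - j|
    between the target symbol i and each context symbol j.
    sigma is an assumed fixed bandwidth; in practice it could be learned."""
    T, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)                # (T, T) raw attention scores
    idx = np.arange(T)
    dist = np.abs(idx[:, None] - idx[None, :])   # |i - j| position distances
    gauss = np.exp(-dist.astype(float) ** 2 / (2 * sigma ** 2))
    weighted = scores * gauss                    # down-weight distant context
    # Row-wise softmax over the attenuated scores.
    e = np.exp(weighted - weighted.max(axis=-1, keepdims=True))
    probs = e / e.sum(axis=-1, keepdims=True)
    return probs @ V
```

Because the Gaussian factor decays with `|i - j|`, each output frame attends mostly to nearby context, which matches the local temporal structure of speech better than unconstrained attention.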

Jaeyoung Kim, Mostafa El-Khamy, Jungwon Lee • 2019

Related benchmarks

Task                 Dataset                              Result       Rank
Speech Enhancement   VoiceBank + DEMAND (VB-DMD) (test)   PESQ: 3.06   105
Speech Denoising     VCTK-DEMAND (test)                   PESQ: 3.06   8
