
Dual-Path Transformer Network: Direct Context-Aware Modeling for End-to-End Monaural Speech Separation

About

The dominant speech separation models are based on complex recurrent or convolutional neural networks that model speech sequences indirectly while conditioning on context, e.g., by passing information through many intermediate states in a recurrent neural network, which leads to suboptimal separation performance. In this paper, we propose a dual-path transformer network (DPTNet) for end-to-end speech separation, which introduces direct context-awareness into the modeling of speech sequences. By introducing an improved transformer, elements in a speech sequence can interact directly, which enables DPTNet to model speech sequences with direct context-awareness. The improved transformer in our approach learns the order information of the speech sequences without positional encodings by incorporating a recurrent neural network into the original transformer. In addition, the dual-path structure makes our model efficient for modeling extremely long speech sequences. Extensive experiments on benchmark datasets show that our approach outperforms the current state of the art (20.6 dB SDR on the public WSJ0-2mix corpus).
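The two core ideas above can be sketched in code: a transformer layer whose feed-forward part incorporates an RNN (so order is learned without positional encodings), and a dual-path step that chunks a long sequence and alternates intra-chunk and inter-chunk processing. This is a minimal PyTorch sketch, not the authors' implementation; the layer sizes, the LSTM-based feed-forward variant, and the `dual_path_step` helper are illustrative assumptions.

```python
import torch
import torch.nn as nn

class ImprovedTransformerLayer(nn.Module):
    """Transformer layer where an RNN (here a BiLSTM) sits inside the
    feed-forward sub-layer, so the layer can learn sequence order
    without positional encodings. Sizes are illustrative."""
    def __init__(self, d_model=64, n_heads=4, d_ff=128):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(d_model)
        # RNN in place of the first linear layer of the usual FFN.
        self.rnn = nn.LSTM(d_model, d_ff, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * d_ff, d_model)
        self.norm2 = nn.LayerNorm(d_model)

    def forward(self, x):  # x: (batch, seq_len, d_model)
        a, _ = self.attn(x, x, x)
        x = self.norm1(x + a)
        h, _ = self.rnn(x)
        return self.norm2(x + self.proj(torch.relu(h)))

def dual_path_step(x, intra, inter, chunk_len=50):
    """One dual-path block: split a long sequence (B, T, D) into chunks,
    run an intra-chunk layer over each chunk, then an inter-chunk layer
    across chunks at each within-chunk position."""
    B, T, D = x.shape
    pad = (-T) % chunk_len
    x = torch.nn.functional.pad(x, (0, 0, 0, pad))
    K = x.shape[1] // chunk_len              # number of chunks
    x = x.view(B, K, chunk_len, D)
    # Intra-chunk: each chunk is a short sequence -> cheap attention.
    x = intra(x.reshape(B * K, chunk_len, D)).view(B, K, chunk_len, D)
    # Inter-chunk: sequences of length K across chunks.
    x = x.transpose(1, 2).reshape(B * chunk_len, K, D)
    x = inter(x).view(B, chunk_len, K, D).transpose(1, 2)
    return x.reshape(B, K * chunk_len, D)[:, :T]
```

Because each sub-sequence has length about `chunk_len` or `T / chunk_len`, attention cost grows far more slowly with `T` than a single full-length transformer, which is what makes the dual-path structure practical for very long waveform-level sequences.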

Jingjing Chen, Qirong Mao, Dong Liu • 2020

Related benchmarks

Task                Dataset                   Metric                   Result    Rank
Speech Separation   WSJ0-2Mix (test)          SDRi (dB)                20.6      141
Speech Separation   WSJ0-2Mix                 SI-SNRi (dB)             20.2      65
Speech Separation   WHAM! (test)              SI-SNRi (dB)             14.9      58
Speech Separation   WHAMR! (test)             ΔSI-SNR (dB)             12.1      57
Speech Separation   Libri2Mix (test)          SI-SNRi (dB)             16.7      45
Source Separation   WSJ0-2Mix (test)          SI-SNRi (dB)             20.6      17
Speech Separation   WSJ0-2mix 8 kHz (test)    SI-SNRi (dB)             20.2      12
Speech Separation   WSJ0-3mix (clean)         ΔSI-SNR (dB)             16.2      12
Speech Separation   LRS2-2Mix (test)          GPU RTF (s) (Forward)    0.1033    10
Speech Recognition  Libri2Mix max mode (test) WER (%)                  22.4      8

(Showing 10 of 12 rows.)
