
Separate and Reconstruct: Asymmetric Encoder-Decoder for Speech Separation

About

In speech separation, time-domain approaches have successfully replaced the time-frequency domain with latent sequence features from a learnable encoder. Conventionally, these features are separated into speaker-specific ones only at the final stage of the network. Instead, we propose a more intuitive strategy that separates features earlier by expanding the feature sequence with an extra dimension for the number of speakers. To achieve this, we present an asymmetric design in which the encoder and decoder perform distinct roles in the separation task. The encoder analyzes features, and its output is split into as many streams as there are speakers to be separated. The separated sequences are then reconstructed by a weight-shared decoder, which also performs cross-speaker processing. Without relying on speaker information, the weight-shared network in the decoder learns to discriminate features directly from the separation objective. In addition, traditional methods have improved performance by extending the sequence length, leading to the adoption of dual-path models that handle the much longer sequence by segmenting it into chunks. To address this cost, we introduce global and local Transformer blocks that handle long sequences directly and more efficiently, without chunking or dual-path processing. Experimental results demonstrate that the asymmetric structure is effective and that the combination of the proposed global and local Transformer blocks can fully replace the inter- and intra-chunk processing of the dual-path structure. Finally, the model combining both achieves state-of-the-art performance with far less computation on various benchmark datasets.
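The core idea above can be sketched in a few lines of plain Python: analyze the mixture once, split the encoder output early into one stream per speaker, then reconstruct every stream with a single weight-shared decoder. This is a toy, stdlib-only illustration of the data flow, not the paper's actual model; the names `encode`, `split_speakers`, and `SharedDecoder`, and all the arithmetic inside them, are illustrative stand-ins for learned modules.

```python
from typing import List

Feature = List[float]  # one frame's latent feature vector

def encode(mixture: List[float], dim: int = 4) -> List[Feature]:
    """Toy 'encoder': map a waveform to a sequence of feature vectors.
    Each sample is simply repeated dim times, standing in for a learned
    latent representation of shape (T, D)."""
    return [[s] * dim for s in mixture]

def split_speakers(features: List[Feature], num_spk: int) -> List[List[Feature]]:
    """Early separation: expand the (T, D) sequence into num_spk streams,
    i.e. an extra speaker dimension (num_spk, T, D). A real model applies
    learned projections; here each stream is a scaled copy so only the
    shapes matter."""
    return [[[x / num_spk for x in frame] for frame in features]
            for _ in range(num_spk)]

class SharedDecoder:
    """Weight-shared decoder: the SAME parameters process every speaker
    stream, so the network must learn to discriminate speakers from the
    separation objective rather than from speaker labels."""
    def __init__(self, gain: float = 1.0):
        self.gain = gain  # the single shared 'weight'

    def __call__(self, stream: List[Feature]) -> List[float]:
        # Toy reconstruction: collapse each frame back to one sample.
        return [self.gain * sum(frame) / len(frame) for frame in stream]

mixture = [0.5, -0.25, 1.0]
feats = encode(mixture)                      # analysis features, (T, D)
streams = split_speakers(feats, num_spk=2)   # early split, (num_spk, T, D)
decoder = SharedDecoder()                    # one instance, shared weights
outputs = [decoder(s) for s in streams]      # per-speaker reconstructions
assert len(outputs) == 2 and len(outputs[0]) == len(mixture)
```

Note that only one `SharedDecoder` instance exists: applying it in a loop over speaker streams is what "weight-shared" means in practice, and it is the separation loss, not any speaker label, that would push the shared parameters to keep the streams distinct.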

Ui-Hyeop Shin, Sangyoun Lee, Taehan Kim, Hyung-Min Park • 2024

Related benchmarks

Task                          | Dataset          | Metric        | Result | Rank
Speech Separation             | WSJ0-2Mix (test) | SDRi (dB)     | 24.4   | 141
Speech Separation             | WSJ0-2Mix        | SI-SNRi (dB)  | 25.1   | 65
Speech Separation             | WHAM! (test)     | SI-SNRi (dB)  | 17.8   | 58
Speech Separation             | Libri2Mix (test) | SI-SNRi (dB)  | 22.0   | 45
Speech Separation             | WHAMR!           | SI-SNRi (dB)  | 17.2   | 20
Speech Separation             | WHAM!            | SI-SNRi (dB)  | 18.4   | 15
Overlapped Speech Recognition | LibriCSS (test)  | WER @ 0dB (S) | 9.6    | 5

Other info

Code
