
Sequence to Multi-Sequence Learning via Conditional Chain Mapping for Mixture Signals

About

Neural sequence-to-sequence models are well established for applications that can be cast as mapping a single input sequence into a single output sequence. In this work, we focus on one-to-many sequence transduction problems, such as extracting multiple sequential sources from a mixture sequence. We extend the standard sequence-to-sequence model to a conditional multi-sequence model, which explicitly models the relevance between multiple output sequences with the probabilistic chain rule. Based on this extension, our model can conditionally infer output sequences one by one by making use of both the input and the previously estimated contextual output sequences. The model additionally has a simple and efficient stop criterion for the end of the transduction, allowing it to infer a variable number of output sequences. We take speech data as a primary test field to evaluate our methods, since observed speech is often composed of multiple sources due to the superposition principle of sound waves. Experiments on several different tasks, including speech separation and multi-speaker speech recognition, show that our conditional multi-sequence models lead to consistent improvements over conventional non-conditional models.
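The inference procedure sketched in the abstract can be illustrated with a toy, dependency-free loop: estimate one output at a time, conditioning each step on the input mixture and all previously estimated outputs, and stop when a simple criterion fires. The names here (`chain_infer`, `toy_decode_step`, `CANDIDATES`) and the residual-matching "decoder" are illustrative assumptions, not the paper's learned model.

```python
# Toy sketch of conditional-chain inference for mixture signals.
# A hypothetical candidate pool stands in for the learned decoder's
# output space; the real model would decode sequences neurally.
CANDIDATES = [[4.0, 0.0, 4.0], [1.0, 2.0, 1.0], [0.5, 0.5, 0.5]]


def toy_decode_step(mixture, prev_outputs):
    """One chain step: condition on the mixture AND previous outputs
    (via the residual), return (estimated_source, stop_flag)."""
    residual = [m - sum(p[i] for p in prev_outputs)
                for i, m in enumerate(mixture)]
    energy = sum(r * r for r in residual)
    if energy < 1e-6:  # stop criterion: nothing left to explain
        return None, True
    # Pick the candidate closest to the residual (toy stand-in for decoding).
    best = min(CANDIDATES,
               key=lambda c: sum((r - ci) ** 2 for r, ci in zip(residual, c)))
    return best, False


def chain_infer(mixture, decode_step, max_sources=10):
    """Infer a variable number of sources, one per chain step."""
    outputs = []
    while len(outputs) < max_sources:
        estimate, stop = decode_step(mixture, outputs)
        if stop:
            break
        outputs.append(estimate)
    return outputs


# Mixture of the first two candidate "sources".
mixture = [5.0, 2.0, 5.0]
sources = chain_infer(mixture, toy_decode_step)
print(sources)  # → [[4.0, 0.0, 4.0], [1.0, 2.0, 1.0]]
```

Because the stop decision is made inside the loop rather than fixed in advance, the same procedure handles 2-, 3-, 4-, or 5-source mixtures, which is what the variable-speaker experiments below exercise.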

Jing Shi, Xuankai Chang, Pengcheng Guo, Shinji Watanabe, Yusuke Fujita, Jiaming Xu, Bo Xu, Lei Xie • 2020

Related benchmarks

Task                              Dataset                                   Result        Rank
Speech Separation                 WSJ0-2mix (test)                          --            141
Speech Separation                 WSJ0-3mix (test)                          SI-SNRi 14.2  29
Speech Separation                 WSJ0-4mix (test)                          SI-SNRi 12.5  10
Multi-speaker speech recognition  WSJ0-2mix 16 kHz (test)                   WER 14.9      8
Speech Denoising                  DNS-challenge non-reverberant 2020 (test) SDR 18        5
Speech Separation                 WSJ0-5mix (test)                          SI-SNRi 11.7  4
Multi-speaker speech recognition  WSJ0-3mix 16 kHz (test)                   WER 37.9      1

Other info

Code
