
Permutation Invariant Training of Deep Models for Speaker-Independent Multi-talker Speech Separation

About

We propose a novel deep learning model, which supports permutation invariant training (PIT), for speaker-independent multi-talker speech separation, commonly known as the cocktail-party problem. Unlike most prior art, which treats speech separation as a multi-class regression problem, and unlike the deep clustering technique, which treats it as a segmentation (or clustering) problem, our model optimizes the separation regression error while ignoring the order of the mixing sources. This strategy elegantly solves the long-standing label permutation problem that has prevented progress on deep-learning-based techniques for speech separation. Experiments on the equal-energy mixing setup of a Danish corpus confirm the effectiveness of PIT. We believe improvements built upon PIT can eventually solve the cocktail-party problem and enable real-world adoption of, e.g., automatic meeting transcription and multi-party human-computer interaction, where overlapping speech is common.
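The core idea of PIT, training against the best output-to-source assignment rather than a fixed label order, can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's implementation: the function name `pit_mse`, the array shapes, and the use of plain MSE over magnitude spectra are assumptions for clarity.

```python
import itertools
import numpy as np

def pit_mse(estimates, references):
    """Permutation invariant MSE, a minimal sketch of the PIT idea.

    estimates, references: arrays of shape (num_sources, num_frames, num_bins).
    The loss is the MSE under the loss-minimizing assignment of estimated
    sources to reference sources, so the order in which the network emits
    its output streams no longer matters during training.
    """
    num_sources = references.shape[0]
    best = np.inf
    for perm in itertools.permutations(range(num_sources)):
        # Mean squared error for this particular output-to-source assignment.
        err = np.mean((estimates[list(perm)] - references) ** 2)
        best = min(best, err)
    return best
```

In practice the minimum over permutations is taken per mixture (or per utterance, as in utterance-level PIT), and the gradient is backpropagated only through the winning assignment; for small source counts (2-3 talkers) the factorial number of permutations is negligible.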

Dong Yu, Morten Kolbæk, Zheng-Hua Tan, Jesper Jensen • 2016

Related benchmarks

Task               Dataset                 Result      Rank
Speech Separation  VoxCeleb2-2Mix (test)   SDRi 2.8    12
Speech Separation  LRS3-2Mix (test)        SDRi 6.4    11
