
Learning Problem-agnostic Speech Representations from Multiple Self-supervised Tasks

About

Learning good representations without supervision remains an open issue in machine learning, and is particularly challenging for speech signals, which are often characterized by long sequences with a complex hierarchical structure. Some recent works, however, have shown that useful speech representations can be derived with a self-supervised encoder-discriminator approach. This paper proposes an improved self-supervised method in which a single neural encoder is followed by multiple workers that jointly solve different self-supervised tasks. The consensus required across the different tasks naturally imposes meaningful constraints on the encoder, helping it discover general representations and minimizing the risk of learning superficial ones. Experiments show that the proposed approach learns transferable, robust, and problem-agnostic features that carry relevant information from the speech signal, such as speaker identity, phonemes, and even higher-level features such as emotional cues. In addition, a number of design choices make the encoder easily exportable, facilitating its direct usage or adaptation to different problems.
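The core idea above, one shared encoder whose training signal is the sum of several self-supervised worker losses, can be sketched in a few lines. Everything below is illustrative: the linear "encoder", the two toy workers (frame reconstruction and log-energy prediction), and all names and shapes are assumptions for the sketch, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def encoder(wave, W):
    """Toy linear 'encoder': project each raw frame to a latent vector."""
    return np.tanh(wave @ W)

def waveform_worker(z, V, target):
    """Worker 1: regress back to the input frame (reconstruction MSE)."""
    return np.mean((z @ V - target) ** 2)

def energy_worker(z, v, target):
    """Worker 2: predict per-frame log energy (a prosody-like feature)."""
    return np.mean((z @ v - target) ** 2)

# Fake batch: 16 frames of 80 samples each.
wave = rng.standard_normal((16, 80))
W = rng.standard_normal((80, 32)) * 0.1   # shared encoder weights
V = rng.standard_normal((32, 80)) * 0.1   # reconstruction head
v = rng.standard_normal(32) * 0.1         # energy head

z = encoder(wave, W)
log_energy = np.log(np.mean(wave ** 2, axis=1))

# The training signal is the sum of all worker losses, so gradients from
# every task flow into the same encoder: it must produce features that
# are useful for all tasks at once, which is the "consensus" constraint.
total_loss = waveform_worker(z, V, wave) + energy_worker(z, v, log_energy)
```

In the actual method the workers are small networks solving tasks such as waveform regression, MFCC/prosody prediction, and contrastive objectives, but the aggregation principle is the same: only the shared encoder is kept for downstream use.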

Santiago Pascual, Mirco Ravanelli, Joan Serrà, Antonio Bonafonte, Yoshua Bengio • 2019

Related benchmarks

Task                                | Dataset          | Metric   | Result | Rank
Audio Classification                | AudioSet 20K     | mAP      | 31.9   | 128
Audio Classification                | AudioSet 2M      | mAP      | 44.4   | 79
Environmental Sound Classification  | FSD50K           | mAP      | 55.4   | 60
Discrete Emotion Recognition        | CREMA-D 18 (test)| Accuracy | 47.8   | 19
Discrete Emotion Recognition        | Ravdess 19 (test)| Accuracy | 31.76  | 19
Automatic Speech Recognition        | TIMIT (test)     | Accuracy | 85.3   | 10
Emotion Recognition                 | INTERFACE (test) | Accuracy | 97.7   | 10
Speaker-ID                          | VCTK (test)      | Accuracy | 99.3   | 10
Automatic Speech Recognition        | DIRHA (test)     | WER      | 0.298  | 8
