
Towards Learning a Universal Non-Semantic Representation of Speech

About

The ultimate goal of transfer learning is to reduce labeled-data requirements by exploiting a pre-existing embedding model trained on different datasets or tasks. The visual and language communities have established benchmarks for comparing embeddings, but the speech community has yet to do so. This paper proposes a benchmark for comparing speech representations on non-semantic tasks, along with a representation based on an unsupervised triplet-loss objective. The proposed representation outperforms other representations on the benchmark, and even exceeds state-of-the-art performance on a number of transfer-learning tasks. The embedding is trained on a publicly available dataset and tested on a variety of low-resource downstream tasks, including personalization tasks and tasks in the medical domain. The benchmark, models, and evaluation code are publicly released.
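The training objective is not detailed on this page, but the sketch below shows one common form of unsupervised triplet loss of the kind the abstract mentions: anchor and positive embeddings are pulled together while a negative is pushed at least a margin farther away. In a self-supervised speech setup, anchor/positive pairs might be segments sampled close together in time from the same unlabeled clip, with negatives drawn from elsewhere. This is a generic PyTorch illustration, not the paper's code; the margin value, embedding dimension, and sampling scheme are all assumptions.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Hinge-style triplet loss over a batch of embeddings.

    anchor/positive/negative: (batch, dim) tensors.
    `margin` is a placeholder hyperparameter, not the paper's value.
    """
    d_pos = (anchor - positive).pow(2).sum(dim=1)  # squared L2 distances
    d_neg = (anchor - negative).pow(2).sum(dim=1)
    # Penalize triplets where the negative is not at least `margin`
    # farther from the anchor than the positive is.
    return F.relu(d_pos - d_neg + margin).mean()

# Smoke test with random embeddings standing in for model outputs.
a, p, n = (torch.randn(8, 512) for _ in range(3))
print(triplet_loss(a, p, n).item())
```

The squared-L2 hinge form above is one standard choice; implementations also vary in how negatives are mined within a batch.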

Joel Shor, Aren Jansen, Ronnie Maor, Oran Lang, Omry Tuval, Felix de Chaumont Quitry, Marco Tagliasacchi, Ira Shavitt, Dotan Emanuel, Yinnon Haviv • 2020

Related benchmarks

Task                      Dataset                            Result (Accuracy)   Rank
Keyword Spotting          Google Speech Commands v1 (test)   74                  68
Speaker Identification    VoxCeleb1                          17.9                58
Speaker Identification    VOX1 (test)                        0.179               14
Language Identification   Language Id. (test)                88.1                2
Speaker Identification    VoxCeleb (test)                    0.177               2
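Benchmark results like those above are typically obtained with a transfer-learning protocol: the pre-trained embedding is frozen, and a small model is fit on top of it for each downstream task. The sketch below illustrates that protocol with a scikit-learn logistic-regression probe on dummy features; the embedding dimension, data, and classifier are placeholders, not the evaluation setup behind the numbers in the table.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Dummy stand-ins for precomputed speech embeddings and task labels.
# In practice, X would come from the frozen pre-trained model applied
# to each utterance in a downstream dataset (e.g., keyword spotting).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 512))    # (num_utterances, embedding_dim)
y = rng.integers(0, 10, size=1000)  # e.g., 10 keyword classes

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# A lightweight probe on frozen embeddings: only the classifier is trained.
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"downstream accuracy: {probe.score(X_te, y_te):.3f}")
```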
