
Towards Learning Universal Audio Representations

About

The ability to learn universal audio representations that can solve diverse speech, music, and environment tasks can spur many applications that require general sound content understanding. In this work, we introduce a holistic audio representation evaluation suite (HARES) spanning 12 downstream tasks across audio domains and provide a thorough empirical study of recent sound representation learning systems on that benchmark. We discover that previous sound event classification or speech models do not generalize outside of their domains. We observe that more robust audio representations can be learned with the SimCLR objective; however, the model's transferability depends heavily on the model architecture. We find the Slowfast architecture is good at learning rich representations required by different domains, but its performance is affected by the normalization scheme. Based on these findings, we propose a novel normalizer-free Slowfast NFNet and achieve state-of-the-art performance across all domains.
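The abstract refers to the SimCLR objective, a contrastive loss in which two augmented views of the same clip are pulled together while all other clips in the batch act as negatives. As a rough illustration only (not the paper's implementation, and ignoring the Slowfast/NFNet encoder entirely), a minimal NumPy sketch of the standard NT-Xent loss used by SimCLR might look like this; the function name and temperature value are assumptions:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.1):
    """NT-Xent (SimCLR) contrastive loss for a batch of paired embeddings.

    z1, z2: (N, D) arrays holding embeddings of two augmented views
    of the same N audio clips (row i of z1 pairs with row i of z2).
    """
    # L2-normalise so dot products become cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)        # (2N, D)
    sim = z @ z.T / temperature                 # (2N, 2N) similarity logits

    n = z1.shape[0]
    # Mask self-similarity so a sample is never its own negative.
    np.fill_diagonal(sim, -np.inf)
    # The positive for row i is its other view: i+n (first half), i-n (second).
    pos = np.concatenate([np.arange(n) + n, np.arange(n)])

    # Cross-entropy over each row's logits with the positive as the target.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

In practice the embeddings come from the audio encoder under comparison; the study's finding is that with this same objective, transfer quality still varies substantially with the encoder architecture and its normalization scheme.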

Luyu Wang, Pauline Luc, Yan Wu, Adria Recasens, Lucas Smaira, Andrew Brock, Andrew Jaegle, Jean-Baptiste Alayrac, Sander Dieleman, Joao Carreira, Aaron van den Oord • 2021

Related benchmarks

Task                              | Dataset       | Metric   | Result | Rank
Audio Classification              | ESC-50        | Accuracy | 91.1   | 325
Audio Classification              | SPC V2        | Accuracy | 93     | 65
Audio Classification              | GTZAN         | Accuracy | 78.2   | 54
Speech Classification             | VF            | Accuracy | 95.4   | 47
Speaker Identification            | VC1           | Accuracy | 64.9   | 33
Sound Event Tagging               | FSD50K (test) | mAP      | 54.3   | 26
Keyword Spotting                  | SPC V2        | Accuracy | 93     | 19
Musical Instrument Classification | NSynth (test) | Accuracy | 78.2   | 17
Audio Classification              | VC1           | Accuracy | 64.9   | 17
Speaker Identification            | VOX1 (test)   | Accuracy | 0.649  | 14

(10 of 11 rows shown)
