
Unsupervised Learning of Semantic Audio Representations

About

Even in the absence of any explicit semantic annotation, vast collections of audio recordings provide valuable information for learning the categorical structure of sounds. We consider several class-agnostic semantic constraints that apply to unlabeled nonspeech audio: (i) noise and translations in time do not change the underlying sound category, (ii) a mixture of two sound events inherits the categories of the constituents, and (iii) the categories of events in close temporal proximity are likely to be the same or related. Without labels to ground them, these constraints are incompatible with classification loss functions. However, they may still be leveraged to identify geometric inequalities needed for triplet loss-based training of convolutional neural networks. The result is low-dimensional embeddings of the input spectrograms that recover 41% and 84% of the performance of their fully-supervised counterparts when applied to downstream query-by-example sound retrieval and sound event classification tasks, respectively. Moreover, in limited-supervision settings, our unsupervised embeddings double the state-of-the-art classification performance.
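The abstract's core training signal is a triplet loss over (anchor, positive, negative) spectrogram embeddings, where the class-agnostic constraints supply the triplets (e.g., constraint (i): a noise-perturbed or time-shifted copy of a clip is a positive, and an unrelated clip is a negative). The sketch below is a minimal, hedged illustration of that hinge-style triplet loss, not the authors' implementation; the array shapes and margin value are illustrative assumptions.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.1):
    """Hinge triplet loss: squared distance to the positive should be
    smaller than to the negative by at least `margin`."""
    d_pos = np.sum((anchor - positive) ** 2, axis=-1)
    d_neg = np.sum((anchor - negative) ** 2, axis=-1)
    return np.maximum(0.0, d_pos - d_neg + margin)

# Illustrative triplets for constraint (i): positives are lightly
# perturbed copies of the anchors, negatives are unrelated clips.
rng = np.random.default_rng(0)
anchor = rng.normal(size=(4, 128))                     # 4 clip embeddings
positive = anchor + 0.01 * rng.normal(size=(4, 128))   # perturbed copies
negative = rng.normal(size=(4, 128))                   # unrelated clips

losses = triplet_loss(anchor, positive, negative)      # one loss per triplet
```

The same loss accommodates constraints (ii) and (iii) by changing only how triplets are sampled (mixtures as positives of their constituents, temporally adjacent segments as positives of each other), which is what lets a single geometric objective absorb all three unlabeled constraints.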

Aren Jansen, Manoj Plakal, Ratheet Pandya, Daniel P. W. Ellis, Shawn Hershey, Jiayang Liu, R. Channing Moore, Rif A. Saurous · 2017

Related benchmarks

Task                          Dataset                  Result      Rank
Classification                AudioSet (test)          mAP 24.4    57
Sound classification          AudioSet (evaluation)    mAP 25.9    39
Audio classification          AudioSet Full (test)     mAP 25.9    23
Query-by-example retrieval    AudioSet (evaluation)    mAP 57.5    7
