Neural Audio Fingerprint for High-specific Audio Retrieval based on Contrastive Learning

About

Most existing audio fingerprinting systems are of limited use for high-specific audio retrieval at scale. In this work, we generate a low-dimensional representation from a short unit segment of audio and couple this fingerprint with a fast maximum inner-product search. To this end, we present a contrastive learning framework derived from the segment-level search objective. Each training update uses a batch consisting of a set of pseudo-labels, randomly selected original samples, and their augmented replicas. The replicas simulate degradations of the original audio signals by applying small time offsets and various types of distortion, such as background noise and room/microphone impulse responses. On the segment-level search task, where conventional audio fingerprinting systems have typically failed, our system shows promising results while using 10x smaller storage. Our code and dataset are available at https://mimbres.github.io/neural-audio-fp/.
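The batch construction described above can be illustrated with a minimal NumPy sketch. This is an assumed NT-Xent-style formulation, not the authors' exact implementation: N original segments and their N augmented replicas yield paired fingerprints, each original/replica pair is a positive, and all other batch samples serve as negatives. L2-normalizing the fingerprints makes the inner product a cosine similarity, matching the maximum inner-product search used at retrieval time. The temperature `tau` is a hypothetical value.

```python
import numpy as np

def normalize(x):
    """L2-normalize fingerprints so the inner product equals cosine
    similarity, consistent with maximum inner-product search."""
    return x / np.linalg.norm(x, axis=1, keepdims=True)

def contrastive_loss(originals, replicas, tau=0.05):
    """NT-Xent-style loss over a batch of (original, replica) pairs.

    originals, replicas: (N, d) fingerprint matrices; row i of each
    forms a positive pair. tau is a temperature (assumed value).
    """
    a = normalize(originals)     # (N, d)
    b = normalize(replicas)      # (N, d)
    sim = a @ b.T / tau          # (N, N) pairwise inner-product scores

    def xent(logits):
        # Cross-entropy with the diagonal (matching pairs) as targets.
        logits = logits - logits.max(axis=1, keepdims=True)
        logp = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Symmetrize over both retrieval directions (original->replica
    # and replica->original).
    return 0.5 * (xent(sim) + xent(sim.T))
```

In this formulation, matching pairs (high diagonal similarity) drive the loss toward zero, while unrelated batches stay near the uniform-guessing baseline of log N.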

Sungkyun Chang, Donmoon Lee, Jeongsoo Park, Hyungui Lim, Kyogu Lee, Karam Ko, Yoonchang Han• 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio Identification | FMA (Free Music Archive) derived | Top-1 Exact Hit Rate | 99.7 | 40 |
| Dummy-Target Retrieval | FMA | Top-1 Hit Rate | 99.15 | 36 |
| Commercial-Broadcast Retrieval | AudioSet | Precision | 55.95 | 6 |
| Audio Fingerprinting | BAF | Precision | 39.62 | 6 |
| Commercial-Broadcast Retrieval | FMA | Precision | 75.84 | 6 |
| Commercial-Broadcast Retrieval | LibriSpeech | Precision | 44.74 | 6 |
| Audio Fingerprinting | FMA | Params (M) | 16.9 | 6 |
| Audio Fingerprinting | FMA CBR Commercial | Segment Count | 1.50e+4 | 6 |
| Audio Fingerprinting | FMA CBR Broadcast | Segment Count | 2.89e+5 | 6 |
| Audio Fingerprinting | FMA DTR (Dummy) | Segment Count | 5.81e+5 | 6 |

(Showing 10 of 11 rows)
