
AVES: Animal Vocalization Encoder based on Self-Supervision

About

The lack of annotated training data in bioacoustics hinders the use of large-scale neural network models trained in a supervised way. To leverage the large amount of unannotated audio data available, we propose AVES (Animal Vocalization Encoder based on Self-Supervision), a self-supervised, transformer-based audio representation model for encoding animal vocalizations. We pretrain AVES on a diverse set of unannotated audio datasets and fine-tune it for downstream bioacoustics tasks. Comprehensive experiments on a suite of classification and detection tasks show that AVES outperforms all the strong baselines and even the supervised "topline" models trained on annotated audio classification datasets. The results also suggest that curating a small training subset related to the downstream tasks is an efficient way to train high-quality audio representation models. We open-source our models at https://github.com/earthspecies/aves.
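The pretrain-then-fine-tune recipe above boils down to: run audio through a frozen pretrained encoder, pool the frame-level embeddings into a clip-level vector, and train a lightweight classifier on top for the downstream task. The sketch below illustrates that pipeline with NumPy only; the encoder here is a hypothetical stand-in (not the actual AVES model, which is available at the repository above), and the dataset is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

def frozen_encoder(waveform: np.ndarray, dim: int = 16) -> np.ndarray:
    """Hypothetical stand-in for a pretrained encoder such as AVES:
    maps a raw waveform to a sequence of frame-level embeddings."""
    n_frames = max(1, len(waveform) // 160)  # ~10 ms hop at 16 kHz
    frames = waveform[: n_frames * 160].reshape(n_frames, 160)
    proj = np.linspace(-1.0, 1.0, dim)       # fixed toy projection
    return frames.mean(axis=1, keepdims=True) * proj  # (n_frames, dim)

def pool(frames: np.ndarray) -> np.ndarray:
    """Mean-pool frame embeddings into one clip-level vector."""
    return frames.mean(axis=0)

# Synthetic 2-class "vocalization" dataset: classes differ in DC offset.
X = np.stack([pool(frozen_encoder(rng.normal(loc=m, size=3200)))
              for m in ([0.0] * 20 + [1.0] * 20)])
y = np.array([0] * 20 + [1] * 20)

# Linear probe on the frozen embeddings, trained by gradient descent
# on the logistic loss (the encoder itself is never updated).
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"train accuracy: {acc:.2f}")
```

In practice one would swap the stand-in encoder for the released AVES checkpoints and either keep it frozen (linear probing, as here) or unfreeze it for full fine-tuning on the downstream labels.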

Masato Hagiwara • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Bioacoustic Analysis | BEANS | wtkn | 87.9 | 20
Bioacoustic Monitoring | BEANS Acoustic Beehive Monitoring | ROC-AUC (BSTS) | 90.48 | 17
Bioacoustic Classification | CBI | Accuracy | 59.8 | 10
Classification | BEANS | Accuracy (bats) | 73.9 | 7
Detection | BEANS | dcase | 0.409 | 7
Vowel Classification | Sperm whale coda dataset 2025 (test) | Accuracy | 91.8 | 6
Rhythm Classification | Sperm whale coda dataset (test) | Accuracy | 90.4 | 6
Social Unit Classification | Sperm whale coda dataset (test) | Accuracy | 92.0 | 6
Detection | Sperm whale coda dataset (test) | Accuracy | 92.8 | 6
