
The CMU-AIST submission for the ICME 2025 Audio Encoder Challenge

About

This technical report describes our submission to the ICME 2025 audio encoder challenge. Our submitted system is built on BEATs, an audio encoder based on masked speech token prediction. We extend the BEATs model using 74,000 hours of data drawn from various speech, music, and sound corpora, and scale its architecture up to 300 million parameters. We experiment with speech-heavy and balanced pre-training mixtures to study the impact of different domains on final performance. Our submitted system is an ensemble of the Dasheng 1.2B model with two custom scaled-up BEATs models trained on the aforementioned pre-training mixtures. We also propose a simple ensembling technique that retains the best capabilities of the constituent models and surpasses both the baseline and Dasheng 1.2B. In the spirit of open science, we publicly release our trained checkpoints on Hugging Face at https://huggingface.co/shikhar7ssu/OpenBEATs-ICME-SOUND and https://huggingface.co/shikhar7ssu/OpenBEATs-ICME.

Shikhar Bharadwaj, Samuele Cornell, Kwanghee Choi, Hye-jin Shim, Soham Deshmukh, Satoru Fukayama, Shinji Watanabe• 2026
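The report does not spell out the ensembling mechanism, but a common way to combine frozen audio encoders for downstream probing is to run each encoder on the same input and concatenate their frame-level embeddings along the feature axis. The sketch below illustrates that idea only; the `fake_encoder` stand-ins and all dimensions are assumptions for illustration, not the actual BEATs or Dasheng models.

```python
import numpy as np

def fake_encoder(dim_out, rng):
    """Toy stand-in for a frozen audio encoder: maps
    frame features (T, 64) -> embeddings (T, dim_out)."""
    w = rng.standard_normal((64, dim_out))
    return lambda frames: frames @ w

def ensemble_embed(frames, encoders):
    """Concatenate every encoder's embeddings along the feature axis,
    so a downstream probe sees all representations at once."""
    return np.concatenate([enc(frames) for enc in encoders], axis=-1)

rng = np.random.default_rng(0)
# Hypothetical dims: one large encoder plus two mid-size ones.
encoders = [fake_encoder(d, rng) for d in (1024, 768, 768)]
frames = rng.standard_normal((100, 64))  # 100 frames of 64-dim features
emb = ensemble_embed(frames, encoders)
print(emb.shape)  # (100, 2560)
```

A linear probe trained on the concatenated embedding can then exploit whichever constituent encoder is strongest for a given task, which is consistent with the abstract's claim that the ensemble retains the best capabilities of its members.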

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Speech Emotion Recognition | RAVDESS | -- | -- | 43 |
| Music Genre Classification | GTZAN | Accuracy | 89.8 | 39 |
| Speaker Counting | LibriCount | Score | 74.7 | 26 |
| Speaker Identification | LibriSpeech MF | Score | 98.6 | 26 |
| Language Identification | VoxLingua33 | Accuracy | 81.7 | 26 |
| Vocal Sound Classification | VocalSound | Accuracy | 90.9 | 21 |
| Sound Event Detection | DESED | Score | 0.566 | 17 |
| Sound Classification | FSD Kaggle 18 | Score | 76.4 | 17 |
| Speech Emotion Recognition | CREMA-D | Weighted Accuracy | 81.5 | 12 |
| Audio Classification | FSD50K | Score | 46.3 | 6 |

Showing 10 of 17 rows.
