
Audio Mamba: Pretrained Audio State Space Model For Audio Tagging

About

Audio tagging is the important task of mapping audio samples to their corresponding categories. Recent endeavours that exploit transformer models in this field have achieved great success. However, the quadratic cost of self-attention limits the scaling of audio transformer models and further constrains the development of more universal audio models. In this paper, we attempt to solve this problem by proposing Audio Mamba, a self-attention-free approach that captures long-range audio spectrogram dependencies with state space models. Our experimental results on two audio-tagging datasets demonstrate the parameter efficiency of Audio Mamba: it achieves results comparable to SOTA audio spectrogram transformers with one third of the parameters.
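The contrast the abstract draws is between self-attention, whose cost grows quadratically with sequence length, and a state space model, whose recurrence is linear in sequence length. The following is a minimal sketch of a generic linear SSM scan illustrating that idea; the shapes, matrices, and values are illustrative assumptions, not the paper's actual Audio Mamba architecture.

```python
# Minimal sketch of a linear state space model (SSM) recurrence, the core
# idea behind Mamba-style layers:
#   h_t = A @ h_{t-1} + B @ x_t,   y_t = C @ h_t
# One pass over a length-T sequence costs O(T), versus O(T^2) for
# self-attention. All dimensions below are illustrative, not the paper's.
import numpy as np

def ssm_scan(x, A, B, C):
    """Apply the SSM recurrence over a (T, d_in) input, step by step."""
    d_state = A.shape[0]
    h = np.zeros(d_state)
    ys = []
    for t in range(x.shape[0]):
        h = A @ h + B @ x[t]   # state update: one matrix-vector op per step
        ys.append(C @ h)       # readout at step t
    return np.stack(ys)        # (T, d_out)

rng = np.random.default_rng(0)
T, d_in, d_state, d_out = 8, 4, 16, 4
x = rng.standard_normal((T, d_in))
A = 0.9 * np.eye(d_state)                        # stable state transition
B = 0.1 * rng.standard_normal((d_state, d_in))   # input projection
C = 0.1 * rng.standard_normal((d_out, d_state))  # output readout
y = ssm_scan(x, A, B, C)
print(y.shape)  # (8, 4)
```

In practice Mamba-style models make the SSM parameters input-dependent and compute the scan with a hardware-efficient parallel algorithm, but the linear-in-length cost shown here is what removes the quadratic bottleneck.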

Jiaju Lin, Haoxuan Hu• 2024

Related benchmarks

Task                  Dataset                  Result    Rank
Audio Event Tagging   AudioSet AS-2M (full)    mAP: 44   33
