
Prefix tuning for automated audio captioning

About

Audio captioning aims to generate text descriptions from environmental sounds. One challenge of audio captioning is the difficulty of generalization due to the lack of audio-text paired training data. In this work, we propose a simple yet effective method for dealing with small-scale datasets by leveraging a pre-trained language model. We keep the language model frozen to maintain its expressivity for text generation, and we only learn to extract global and temporal features from the input audio. To bridge the modality gap between the audio features and the language model, we employ mapping networks that translate the audio features into continuous vectors the language model can understand, called prefixes. We evaluate our proposed method on the Clotho and AudioCaps datasets and show that it outperforms prior art in diverse experimental settings.
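The mapping-network idea in the abstract can be sketched as follows. This is a minimal illustration, not the authors' actual code: the module names, dimensions, and MLP design are assumptions. An MLP maps a pooled audio feature to a fixed number of "prefix" vectors in the frozen language model's embedding space, which are then prepended to the caption token embeddings.

```python
import torch
import torch.nn as nn

class AudioPrefixMapper(nn.Module):
    """Hypothetical mapping network: pooled audio feature -> k prefix
    vectors living in the language model's embedding space."""

    def __init__(self, audio_dim=512, lm_dim=768, prefix_len=10):
        super().__init__()
        self.prefix_len = prefix_len
        self.lm_dim = lm_dim
        self.mlp = nn.Sequential(
            nn.Linear(audio_dim, lm_dim * prefix_len),
            nn.Tanh(),
            nn.Linear(lm_dim * prefix_len, lm_dim * prefix_len),
        )

    def forward(self, audio_feat):
        # audio_feat: (batch, audio_dim) pooled audio embedding
        prefixes = self.mlp(audio_feat)
        return prefixes.view(-1, self.prefix_len, self.lm_dim)

mapper = AudioPrefixMapper()
audio_feat = torch.randn(2, 512)      # e.g. a pooled audio-encoder feature
prefix = mapper(audio_feat)           # (2, 10, 768)

# The language model stays frozen; only the mapper (and audio feature
# extractor) would be trained. The prefixes are concatenated in front of
# the caption token embeddings before the LM forward pass.
token_emb = torch.randn(2, 20, 768)   # stand-in for caption token embeddings
lm_input = torch.cat([prefix, token_emb], dim=1)  # (2, 30, 768)
```

In practice the concatenated sequence would be fed to the frozen language model via its embedding-level input interface rather than token IDs.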

Minkyu Kim, Kim Sung-Bin, Tae-Hyun Oh • 2023

Related benchmarks

Task                     Dataset            Metric  Result  Rank
Audio Captioning         AudioCaps (test)   CIDEr   73.3    140
Text-to-Audio Retrieval  Clotho (test)      R@1     0.076   62
Audio Captioning         Clotho             CIDEr   39.2    60
Audio Captioning         AudioCaps          CIDEr   73.3    47
Audio Captioning         Clotho 2.1 (test)  CIDEr   0.392   31
Audio Captioning         Clotho (test)      METEOR  0.159   21
Audio Understanding      Clotho V2          CIDEr   19.2    6

Other info

Code
