
Towards Weakly Supervised Text-to-Audio Grounding

About

The text-to-audio grounding (TAG) task aims to predict the onsets and offsets of sound events described by natural language. This task can facilitate applications such as multimodal information retrieval. This paper focuses on weakly-supervised text-to-audio grounding (WSTAG), where frame-level annotations of sound events are unavailable and only the caption of a whole audio clip can be utilized for training. WSTAG is superior to strongly-supervised approaches in its scalability to large audio-text datasets. Two WSTAG frameworks are studied in this paper: sentence-level and phrase-level. First, we analyze the limitations of mean pooling used in the previous WSTAG approach and investigate the effects of different pooling strategies. We then propose phrase-level WSTAG, which uses matching labels between audio clips and phrases for training. Advanced negative sampling strategies and self-supervision are proposed to enhance the accuracy of the weak labels and provide pseudo strong labels. Experimental results show that our system significantly outperforms the previous WSTAG SOTA. Finally, we conduct extensive experiments to analyze the effects of several factors on phrase-level WSTAG. The code and models are available at https://github.com/wsntxxn/TextToAudioGrounding.
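The abstract contrasts mean pooling with other pooling strategies for aggregating frame-level scores into a clip-level (weak) prediction. The sketch below is illustrative only: the function name and the specific strategies shown (mean, max, and linear softmax, a common choice in weakly supervised sound event detection) are assumptions, not necessarily the exact strategies compared in the paper.

```python
import numpy as np

def pool_frame_scores(frame_scores, strategy="mean"):
    """Aggregate per-frame event probabilities into one clip-level score.

    frame_scores: 1-D array of frame-level probabilities in [0, 1].
    strategy: pooling rule used to form the weak (clip-level) estimate.
    Hypothetical helper for illustration; not from the paper's codebase.
    """
    s = np.asarray(frame_scores, dtype=float)
    if strategy == "mean":
        # averages over all frames: short events get diluted by silence
        return s.mean()
    if strategy == "max":
        # takes the single most confident frame: ignores event duration
        return s.max()
    if strategy == "linear_softmax":
        # weights each frame by its own score, a middle ground that
        # emphasizes confident frames without discarding the rest
        return (s * s).sum() / s.sum()
    raise ValueError(f"unknown strategy: {strategy}")
```

For a short event that is active in only a few frames (e.g. `[0.9, 0.9, 0.1, 0.1, 0.1, 0.1]`), mean pooling underestimates the clip-level score while max pooling reports only the peak, which is one way the choice of pooling can bias weakly supervised training.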

Xuenan Xu, Ziyang Ma, Mengyue Wu, Kai Yu · 2024

Related benchmarks

Task                     | Dataset         | Result        | Rank
Audio temporal grounding | SpotSound-Bench | R1@0.3: 47    | 10
Audio temporal grounding | UnAV-100 subset | R1@0.3: 53    | 10
Audio temporal grounding | AudioGrounding  | R1@0.3: 72.5  | 10
Audio temporal grounding | Clotho-Moment   | R@0.3: 12.1   | 10
