
Advancing Multi-grained Alignment for Contrastive Language-Audio Pre-training

About

Recent advances in audio-language joint learning, such as CLAP, have shown much success in multi-modal understanding tasks. These models usually aggregate uni-modal local representations, namely frame or word features, into global ones, on which a contrastive loss is employed to reach coarse-grained cross-modal alignment. However, frame-level correspondence with texts may be ignored, hindering explainability and fine-grained tasks and potentially undermining performance on coarse-grained ones as well. In this work, we aim to improve both coarse- and fine-grained audio-language alignment in large-scale contrastive pre-training. To unify the granularity and latent distribution of the two modalities, a shared codebook is adopted to represent multi-modal global features with common bases, and each codeword is regularized to encode modality-shared semantics, bridging the gap between frame and word features. On top of this, a locality-aware block is introduced to purify local patterns, and a hard-negative guided loss is devised to boost alignment. Experiments on eleven zero-shot coarse- and fine-grained tasks show that our model not only surpasses the baseline CLAP significantly but also yields superior or competitive results compared to current SOTA works.
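The coarse-grained alignment the abstract starts from is the standard CLAP-style symmetric contrastive (InfoNCE) loss over pooled global embeddings. The sketch below illustrates that baseline objective only, not the paper's codebook or hard-negative extensions; the function name and the temperature value are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def clap_contrastive_loss(audio_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over global audio/text embeddings.

    audio_emb, text_emb: (batch, dim) arrays where matching rows are
    positive audio-text pairs; all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)

    logits = a @ t.T / temperature   # (batch, batch) similarity matrix
    labels = np.arange(len(a))       # diagonal entries are the positives

    def cross_entropy(lg):
        lg = lg - lg.max(axis=1, keepdims=True)  # numerical stability
        log_probs = lg - np.log(np.exp(lg).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the audio-to-text and text-to-audio directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Because only pooled global vectors enter this loss, frame-word correspondences never receive a direct training signal, which is the gap the paper's fine-grained components target.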

Yiming Li, Zhifang Guo, Xiangdong Wang, Hong Liu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Audio Classification | ESC-50 | Accuracy | 94.9 | 325 |
| Classification | AudioSet (test) | mAP | 23 | 57 |
| Audio Classification | VGG-Sound | Top-1 Accuracy | 31.8 | 50 |
| Classification | FSD50K (test) | mAP | 54.5 | 24 |
| Classification | ESC-50 (test) | Accuracy | 94.9 | 16 |
| Audio Classification | ESC | Top-1 Accuracy | 94.9 | 10 |
| Audio Classification | US8K | Top-1 Accuracy | 83.7 | 8 |
| Events Understanding | SSEU-Bench 1.0 (test) | mAP (10 dB) | 62.09 | 7 |
| Scene Understanding | SSEU-Bench 1.0 (test) | mACC (10 dB) | 48.46 | 7 |
| Classification | US8K | Accuracy | 83.7 | 7 |
