
FLAP: Fast Language-Audio Pre-training

About

We propose Fast Language-Audio Pre-training (FLAP), a self-supervised approach that efficiently and effectively learns aligned audio and language representations through masking, contrastive learning and reconstruction. For efficiency, FLAP randomly drops audio spectrogram tokens, focusing solely on the remaining ones for self-supervision. Through inter-modal contrastive learning, FLAP learns to align paired audio and text representations in a shared latent space. Notably, FLAP leverages multiple augmented views via masking for inter-modal contrast and learns to reconstruct the masked portion of audio tokens. Moreover, FLAP leverages large language models (LLMs) to augment the text inputs, contributing to improved performance. These approaches lead to more robust and informative audio-text representations, enabling FLAP to achieve state-of-the-art (SoTA) performance on audio-text retrieval tasks on AudioCaps (achieving 53.0% R@1) and Clotho (achieving 25.5% R@1).
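The two core ingredients described above — random dropping of audio spectrogram tokens for efficiency, and an inter-modal contrastive objective that pulls paired audio and text embeddings together — can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: the function names (`drop_tokens`, `contrastive_loss`), the keep ratio, and the temperature value are assumptions for the example, and a symmetric InfoNCE loss stands in for FLAP's contrastive objective.

```python
import math
import random

def drop_tokens(tokens, keep_ratio=0.5, seed=0):
    # Efficiency trick: randomly keep only a subset of spectrogram
    # tokens; the encoder then processes just the survivors.
    # keep_ratio and seed are illustrative, not FLAP's actual settings.
    rng = random.Random(seed)
    n_keep = max(1, int(len(tokens) * keep_ratio))
    idx = sorted(rng.sample(range(len(tokens)), n_keep))
    return [tokens[i] for i in idx], idx

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(audio_embs, text_embs, temperature=0.07):
    # Symmetric InfoNCE over a batch of paired audio/text embeddings:
    # each audio clip should match its own caption (and vice versa),
    # while all other pairings in the batch act as negatives.
    n = len(audio_embs)
    sims = [[cosine(a, t) / temperature for t in text_embs]
            for a in audio_embs]
    loss = 0.0
    for i in range(n):
        row = sims[i]                          # audio i -> all texts
        col = [sims[j][i] for j in range(n)]   # text i  -> all audios
        for logits in (row, col):
            # Numerically stable log-softmax cross-entropy at index i.
            m = max(logits)
            log_z = m + math.log(sum(math.exp(x - m) for x in logits))
            loss += log_z - logits[i]
    return loss / (2 * n)
```

As a sanity check, a batch where each audio embedding matches its own caption yields a lower loss than one where the pairings are swapped, which is exactly the gradient signal that aligns the two modalities in the shared latent space.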

Ching-Feng Yeh, Po-Yao Huang, Vasu Sharma, Shang-Wen Li, Gargi Ghosh • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Text-to-Audio Retrieval | AudioCaps (test) | R@1 | 41.5 | 145
Audio-to-Text Retrieval | Clotho (test) | R@1 | 25.5 | 78
Text-to-Audio Retrieval | Clotho (test) | R@1 | 20.3 | 62
Audio-to-Text Retrieval | AudioCaps (test) | R@1 | 53.0 | 62
