EnCLAP: Combining Neural Audio Codec and Audio-Text Joint Embedding for Automated Audio Captioning
About
We propose EnCLAP, a novel framework for automated audio captioning. EnCLAP employs two acoustic representation models, EnCodec and CLAP, along with a pretrained language model, BART. We also introduce a new training objective, masked codec modeling, that improves the acoustic awareness of the pretrained language model. Experimental results on AudioCaps and Clotho demonstrate that our model surpasses the performance of baseline models. Source code will be available at https://github.com/jaeyeonkim99/EnCLAP. An online demo is available at https://huggingface.co/spaces/enclap-team/enclap.
Jaeyeon Kim, Jaeyoon Jung, Jinjoo Lee, Sang Hoon Woo • 2024
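The following is a minimal sketch, not the authors' implementation, of how EnCodec codes and a CLAP embedding could be combined as the encoder input of BART. The tensor shapes, number of codebooks, codebook size, and the randomly generated placeholder inputs are all assumptions for illustration; real inputs would come from the pretrained EnCodec and CLAP encoders, and the actual EnCLAP training additionally uses the masked codec modeling objective, which this sketch omits.

```python
# A minimal sketch (assumed shapes and placeholders, not the authors' code)
# of feeding EnCodec codes plus a CLAP embedding into a pretrained BART.
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration

bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
d_model = bart.config.d_model  # 768 for bart-base

# Placeholder inputs: EnCodec discrete codes (batch, codebooks, frames) and
# one CLAP audio embedding per clip (batch, 512). Codebook count (2) and
# size (1024) are hypothetical values for this sketch.
codes = torch.randint(0, 1024, (1, 2, 150))
clap_emb = torch.randn(1, 512)

# Embed each codebook stream and sum them; project CLAP to the BART width.
code_embed = nn.ModuleList([nn.Embedding(1024, d_model) for _ in range(2)])
clap_proj = nn.Linear(512, d_model)

code_seq = sum(emb(codes[:, i]) for i, emb in enumerate(code_embed))  # (1, 150, d_model)
clap_tok = clap_proj(clap_emb).unsqueeze(1)                           # (1, 1, d_model)

# Prepend the CLAP embedding as a sequence-level token and pass the result
# to BART via `inputs_embeds`, with the caption as the decoder target.
encoder_inputs = torch.cat([clap_tok, code_seq], dim=1)
labels = torch.tensor([[0, 31414, 2]])  # toy caption token ids
out = bart(inputs_embeds=encoder_inputs, labels=labels)
print(out.loss)
```

The intuition behind combining the two, as the abstract suggests, is complementary granularity: EnCodec supplies time-step-level discrete acoustic detail, while CLAP contributes a sequence-level embedding from an audio-text joint space.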
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Audio Captioning | AudioCaps (test) | CIDEr | 80.3 | 140 |
| Audio Captioning | Clotho | CIDEr | 46.4 | 60 |
| Audio Captioning | AudioCaps | CIDEr | 80.3 | 47 |
| Audio Captioning | Clotho 2.1 (test) | CIDEr | 46.4 | 31 |
| Automated Audio Captioning | Clotho 2.1 (evaluation) | SPIDEr | 29.9 | 12 |
| Automated Audio Captioning | Clotho (evaluation) | SPIDEr | 29.9 | 10 |
| Automated Audio Captioning | AudioCaps (evaluation) | SPIDEr | 49.5 | 9 |