
Audiocards: Structured Metadata Improves Audio Language Models For Sound Design

About

Sound designers search for sounds in large sound effects libraries using aspects such as sound class or visual context. However, the metadata needed for such search is often missing or incomplete, and requires significant manual effort to add. Existing solutions automate this task by generating metadata, i.e. captioning, and by searching with learned embeddings, i.e. text-audio retrieval, but they are not trained on metadata with the structure and information pertinent to sound design. To this end, we propose audiocards: structured metadata grounded in acoustic attributes and sonic descriptors, generated by exploiting the world knowledge of LLMs. We show that training on audiocards improves downstream text-audio retrieval, descriptive captioning, and metadata generation on professional sound effects libraries. Moreover, audiocards also improve performance on general audio captioning and retrieval over the baseline single-sentence captioning approach. We release a curated dataset of sound effects audiocards to invite further research in audio language modeling for sound design.
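To make the idea of structured metadata concrete, here is a minimal sketch of what a single audiocard might look like. The field names (sound_class, acoustic_attributes, sonic_descriptors, visual_context) are illustrative assumptions based on the description above, not the paper's actual schema.

```python
import json

# Hypothetical audiocard for one sound effect. All field names below
# are assumptions for illustration, not the paper's published schema.
audiocard = {
    "filename": "door_creak_01.wav",
    "sound_class": "door creak",
    "caption": "A wooden door slowly creaks open with a high-pitched squeal.",
    "acoustic_attributes": {  # assumed attribute set
        "pitch": "high, rising",
        "duration": "short",
        "dynamics": "crescendo",
    },
    "sonic_descriptors": ["creaky", "squeaky", "grating"],
    "visual_context": "old wooden door in a quiet hallway",
}

print(json.dumps(audiocard, indent=2))
```

Grounding captions in explicit attribute and descriptor fields like these is what distinguishes an audiocard from a single free-text caption.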

Sripathi Sridhar, Prem Seetharaman, Oriol Nieto, Mark Cartwright, Justin Salamon • 2026

Related benchmarks

Task                  Dataset                                                 Metric  Result  Rank
Audio captioning      ASFx (eval)                                             SPIDEr  19.36   9
Audio captioning      Clotho (eval)                                           SPIDEr  22.18   9
Text-audio retrieval  Internal professional sound effects dataset ID (eval)   R@10    75.4    3
Text-audio retrieval  Clotho (eval)                                           R@10    52.44   3
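The R@10 figures above are recall at rank 10: the fraction of text queries whose ground-truth audio appears among the ten highest-scoring retrieved items. A minimal sketch of the metric, assuming each query's matching audio sits at the same index as the query (the usual paired-dataset convention):

```python
def recall_at_k(similarity, k=10):
    """Fraction of queries whose ground-truth item (index i for query i)
    appears among the top-k items ranked by similarity score."""
    hits = 0
    for i, scores in enumerate(similarity):
        # Rank item indices by descending similarity for this query.
        ranked = sorted(range(len(scores)), key=lambda j: -scores[j])
        if i in ranked[:k]:
            hits += 1
    return hits / len(similarity)

# Toy example: 3 text queries vs. 5 audio items; the ground-truth
# audio for query i sits at index i (the "diagonal").
sim = [
    [0.9, 0.1, 0.2, 0.0, 0.3],
    [0.2, 0.1, 0.8, 0.0, 0.3],
    [0.1, 0.2, 0.3, 0.0, 0.9],
]
print(recall_at_k(sim, k=1))  # only query 0 ranks its match first
```

SPIDEr, by contrast, is a captioning metric (the mean of SPICE and CIDEr scores against reference captions), so the two rows of each task are not directly comparable.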
