
WavLink: Compact Audio-Text Embeddings with a Global Whisper Token

About

Whisper has become the de facto encoder for extracting general-purpose audio features in large audio-language models, where a 30-second clip is typically represented by 1500 frame features projected into an LLM. In contrast, audio-text embedding models such as CLAP have largely relied on alternative audio encoders (e.g., HTS-AT, PaSST) and have not leveraged Whisper effectively. We present WavLink, a compact audio-text embedding model that augments the Whisper encoder with a learnable global token, trained jointly with a text encoder. Through a systematic study of design choices, including pretrained text encoders, loss functions, training modes, and data mixtures, we identify configurations that yield state-of-the-art retrieval performance. Our two-stage training recipe across three model sizes, combined with Matryoshka-style supervision, improves scalability, enabling 8x smaller embeddings with minimal performance drop. WavLink also achieves competitive results on AIR-Bench MCQs and zero-shot classification.
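The Matryoshka-style supervision mentioned above can be illustrated with a minimal sketch: the contrastive loss is applied not only to the full embedding but also to L2-normalized prefixes of it, so that a short prefix (e.g. 8x smaller than the full vector) remains a usable embedding on its own. The dimensions and function names below are illustrative assumptions, not WavLink's actual configuration.

```python
import numpy as np

def matryoshka_views(embedding, dims=(128, 256, 512, 1024)):
    """Return L2-normalized prefixes of a full embedding at each nested dim.

    During Matryoshka-style training, the retrieval loss is computed at
    every prefix length, so the first 128 dims alone stay informative
    (an 8x reduction from a hypothetical 1024-dim full embedding).
    """
    views = {}
    for d in dims:
        prefix = embedding[:d]
        views[d] = prefix / np.linalg.norm(prefix)  # unit-normalize each prefix
    return views

# Example: one full embedding, four nested views of it.
full = np.random.default_rng(0).standard_normal(1024)
views = matryoshka_views(full)
```

At inference time, a deployment that needs compact embeddings simply keeps the first 128 dimensions and renormalizes, with no retraining.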

Gokul Karthik Kumar, Ludovick Lepauloux, Hakim Hacid • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Audio Classification | ESC-50 | Accuracy | 83 | 366 |
| Audio Classification | VGG-Sound | Top-1 Accuracy | 31.8 | 83 |
| Classification | US8K | Accuracy | 75 | 7 |
| Multiple-choice Question Answering | AirBench Foundational | Total Average Score | 42 | 4 |
