
Learning Multiple Utterance-Level Attribute Representations with a Unified Speech Encoder

About

Speech foundation models trained with self-supervised learning produce generic speech representations that support a wide range of speech processing tasks. When further adapted with supervised learning, these models can achieve strong performance on specific downstream tasks. Recent post-training approaches, such as SAMU-XSLR and SONAR, align speech representations with utterance-level semantic representations, enabling effective multimodal (speech-text) and multilingual applications. While speech foundation models typically learn contextual embeddings at the acoustic frame level, these methods learn representations at the utterance level. In this work, we extend this paradigm to arbitrary utterance-level attributes and propose a unified post-training framework that enables a single speech foundation model to generate multiple types of utterance-level representations. We demonstrate the effectiveness of this approach by jointly learning semantic and speaker representations and evaluating them on multilingual speech retrieval and speaker recognition tasks.
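The core idea, one shared encoder producing several utterance-level representations, each aligned to a different teacher embedding, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pooling, head names, dimensions, and cosine-alignment loss are assumptions chosen to show the general shape of such a post-training objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_pool(frames):
    # Collapse frame-level encoder output (T, D) into one utterance vector (D,)
    return frames.mean(axis=0)

class AttributeHead:
    # Hypothetical linear projection head for one utterance-level attribute
    def __init__(self, d_in, d_out):
        self.W = rng.standard_normal((d_in, d_out)) / np.sqrt(d_in)

    def __call__(self, frames):
        return mean_pool(frames) @ self.W

def cosine_alignment_loss(pred, target):
    # 1 - cosine similarity: approaches 0 as pred aligns with the teacher embedding
    cos = pred @ target / (np.linalg.norm(pred) * np.linalg.norm(target))
    return 1.0 - cos

# Toy stand-in for a shared speech encoder output: T=50 frames, D=256 dims
frames = rng.standard_normal((50, 256))

semantic_head = AttributeHead(256, 128)  # aligned to a semantic/text teacher
speaker_head = AttributeHead(256, 128)   # aligned to a speaker-embedding teacher

sem_target = rng.standard_normal(128)    # stand-ins for teacher embeddings
spk_target = rng.standard_normal(128)

# Joint post-training objective: sum of per-attribute alignment losses
loss = (cosine_alignment_loss(semantic_head(frames), sem_target)
        + cosine_alignment_loss(speaker_head(frames), spk_target))
print(round(loss, 4))
```

Because both heads read the same pooled encoder output, a single forward pass yields both the semantic and the speaker representation, which is the property the evaluation on retrieval and speaker recognition exploits.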

Maryem Bouziane, Salima Mdhaffar, Yannick Estève • 2026

Related benchmarks

Task | Dataset | Result | Rank
Speaker Verification | VoxCeleb1 (Vox1-O) | EER 91 | 105
Speech Translation | FLEURS X→En (test) | -- | 12
Speech-to-speech translation retrieval (EN to Y) | VoxPopuli | EN->FR Retrieval Score 95.96 | 3
Speech-to-speech translation retrieval (X to EN) | VoxPopuli | FR-EN Performance 95.94 | 3
Speech-to-speech translation retrieval (X to Y) | VoxPopuli | FR to DE Retrieval Performance 93.83 | 3
Speech-to-text translation retrieval | MTEDx | Retrieval Score (IT->EN) 90.1 | 3
Speech-to-text translation retrieval | Fleurs | NY-CES Retrieval Score 25.24 | 3
