
Unifying Diarization, Separation, and ASR with Multi-Speaker Encoder

About

This paper presents the unified multi-speaker encoder (UME), a novel architecture that jointly learns representations for speaker diarization (SD), speech separation (SS), and multi-speaker automatic speech recognition (ASR) using a shared foundational speech encoder. We leverage the hidden representations from multiple layers of UME as a residual weighted-sum encoding (RWSE) to effectively use information from different semantic levels, contributing to bottom-up alignment between tasks. This joint training approach captures the inherent interdependencies among the tasks, enhancing overall performance on overlapping speech data. Our evaluations demonstrate that UME substantially improves over single-task baselines dedicated to SD, SS, and multi-speaker ASR on the LibriMix evaluation sets. Notably, for SD, UME outperforms previous studies, achieving diarization error rates of 1.37% and 2.29% on the Libri2Mix and Libri3Mix evaluation sets, respectively.
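The abstract does not spell out the RWSE formulation, but a residual weighted sum over layer outputs is a common pattern: softmax-normalized scalar weights combine the hidden states of all encoder layers, and a residual stream is added back. The sketch below is an illustration under that assumption (function name, fixed rather than learned weights, and the residual choice are all hypothetical, not taken from the paper):

```python
import numpy as np

def residual_weighted_sum(layer_outputs, layer_weights, residual):
    """Combine encoder layer outputs with softmax-normalized scalar weights,
    then add a residual connection.

    layer_outputs: (L, T, D) hidden states from L encoder layers
    layer_weights: (L,) unnormalized scalar weights (learned in practice)
    residual:      (T, D) stream added back after the weighted sum
    """
    w = np.exp(layer_weights - np.max(layer_weights))
    w = w / w.sum()  # softmax over the L layers
    combined = np.tensordot(w, layer_outputs, axes=1)  # (T, D)
    return combined + residual

# Toy example: 4 layers, 10 frames, 8-dim features.
rng = np.random.default_rng(0)
layers = rng.standard_normal((4, 10, 8))
res = rng.standard_normal((10, 8))
out = residual_weighted_sum(layers, np.zeros(4), res)
print(out.shape)  # (10, 8)
```

With all-zero weights the softmax reduces to a uniform average over layers, so each downstream task head can start from a mean-pooled representation and learn to emphasize the semantic levels it needs.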

Muhammad Shakeel, Yui Sudo, Yifan Peng, Chyi-Jiunn Lin, Shinji Watanabe • 2025

Related benchmarks

Task                                      | Dataset                  | Metric | Result | Rank
------------------------------------------|--------------------------|--------|--------|-----
Multi-talker Automatic Speech Recognition | Libri2Mix Noisy (Eval)   | WER    | 19.6   | 22
Multi-talker Automatic Speech Recognition | Libri3Mix Clean (Eval)   | WER    | 15.9   | 20
Multi-talker Automatic Speech Recognition | Libri3Mix Noisy (Eval)   | WER    | 27.1   | 19
Multi-talker Automatic Speech Recognition | Libri2Mix Clean (test)   | WER    | 6.4    | 16
Multi-talker Automatic Speech Recognition | LibriMix 2 Clean (test)  | tcpWER | 6.4    | 9
Multi-talker Automatic Speech Recognition | LibriMix 3 Clean (test)  | tcpWER | 15.9   | 8
Multi-talker Automatic Speech Recognition | LibriMix 2 Both (test)   | tcpWER | 19.6   | 8
