
Spatially Aware Self-Supervised Models for Multi-Channel Neural Speaker Diarization

About

Self-supervised models such as WavLM have demonstrated strong performance for neural speaker diarization. However, these models are typically pre-trained on single-channel recordings, limiting their effectiveness in multi-channel scenarios. Existing diarization systems built on these models often rely on DOVER-Lap to combine outputs from individual channels. Although effective, this approach incurs substantial computational overhead and fails to fully exploit spatial information. In this work, building on DiariZen, a pipeline that combines WavLM-based local end-to-end neural diarization with speaker embedding clustering, we introduce a lightweight approach to make pre-trained WavLM spatially aware by inserting channel communication modules into the early layers. Our method is agnostic to both the number of microphone channels and array topologies, ensuring broad applicability. We further propose to fuse multi-channel speaker embeddings by leveraging spatial attention weights. Evaluations on five public datasets show consistent improvements over single-channel baselines and demonstrate superior performance and efficiency compared with DOVER-Lap. Our source code is publicly available at https://github.com/BUTSpeechFIT/DiariZen.
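The abstract describes two mechanisms: a channel communication module inserted into early WavLM layers, and a fusion of per-channel speaker embeddings weighted by spatial attention. A minimal sketch of both ideas is below, assuming the channel communication module is realized as self-attention across microphone channels at each frame (the class and function names here are hypothetical, not the authors' actual implementation; see the linked repository for the real one):

```python
import torch
import torch.nn as nn


class ChannelCommunication(nn.Module):
    """Sketch of a channel communication module: self-attention applied
    across microphone channels at every time frame, so the layer works
    for any number of channels and any array topology."""

    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, dim) hidden states from an early layer
        b, c, t, d = x.shape
        # Fold time into the batch so attention runs over the channel axis only.
        y = x.permute(0, 2, 1, 3).reshape(b * t, c, d)
        attn_out, _ = self.attn(y, y, y)
        y = self.norm(y + attn_out)  # residual connection keeps single-channel path intact
        return y.reshape(b, t, c, d).permute(0, 2, 1, 3)


def fuse_embeddings(emb: torch.Tensor, spatial_logits: torch.Tensor) -> torch.Tensor:
    """Sketch of spatial-attention fusion: combine per-channel speaker
    embeddings into one embedding using softmax-normalized weights.

    emb: (channels, emb_dim), spatial_logits: (channels,)"""
    w = torch.softmax(spatial_logits, dim=0)
    return (w.unsqueeze(-1) * emb).sum(dim=0)
```

Because the channel axis is handled purely by attention (a set operation), neither component hard-codes the number of microphones, matching the paper's claim of being agnostic to channel count and array geometry.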

Jiangyu Han, Ruoyu Wang, Yoshiki Masuyama, Marc Delcroix, Johan Rohdin, Jun Du, Lukas Burget • 2025

Related benchmarks

Task                | Dataset                                                     | Metric        | Result | Rank
Speaker Diarization | AISHELL-4                                                   | DER (%)       | 8.9    | 20
Speaker Diarization | AMI                                                         | DER (%)       | 12.8   | 15
Speaker Diarization | AMI, AliMeeting, AISHELL-4, NOTSOFAR-1 macro-average (test) | Macro DER (%) | 12     | 14
Speaker Diarization | AliMeeting                                                  | DER (%)       | 12     | 9
Speaker Diarization | NOTSOFAR-1                                                  | DER (%)       | 14.1   | 9
