Spatially Aware Self-Supervised Models for Multi-Channel Neural Speaker Diarization
About
Self-supervised models such as WavLM have demonstrated strong performance for neural speaker diarization. However, these models are typically pre-trained on single-channel recordings, which limits their effectiveness in multi-channel scenarios. Existing diarization systems built on these models often rely on DOVER-Lap to combine outputs from individual channels. Although effective, this approach incurs substantial computational overhead and fails to fully exploit spatial information. In this work, building on DiariZen, a pipeline that combines WavLM-based local end-to-end neural diarization with speaker embedding clustering, we introduce a lightweight approach that makes pre-trained WavLM spatially aware by inserting channel communication modules into its early layers. Our method is agnostic to both the number of microphone channels and the array topology, ensuring broad applicability. We further propose to fuse multi-channel speaker embeddings by leveraging spatial attention weights. Evaluations on five public datasets show consistent improvements over single-channel baselines and demonstrate superior performance and efficiency compared with DOVER-Lap. Our source code is publicly available at https://github.com/BUTSpeechFIT/DiariZen.
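The spatial-attention fusion of per-channel speaker embeddings can be illustrated with a minimal NumPy sketch. This is an assumption-laden illustration, not the repository's actual API: the function names are invented here, and the spatial logits are treated as given inputs (in the real system they would come from a learned spatial-attention head).

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def fuse_channel_embeddings(embeddings, spatial_logits):
    """Fuse per-channel speaker embeddings with spatial attention weights.

    embeddings:     (C, D) array, one speaker embedding per microphone channel.
    spatial_logits: (C,) unnormalized per-channel scores (assumed to come from
                    a learned spatial-attention head; here they are just inputs).
    Returns a single (D,) fused embedding. Works for any channel count C,
    matching the paper's claim of being agnostic to the number of microphones.
    """
    w = softmax(spatial_logits)   # (C,) weights summing to 1
    return w @ embeddings         # attention-weighted average over channels

# Toy example: 4 channels, 8-dimensional embeddings.
rng = np.random.default_rng(0)
emb = rng.standard_normal((4, 8))
logits = np.array([2.0, 0.5, 0.5, -1.0])  # channel 0 judged most reliable
fused = fuse_channel_embeddings(emb, logits)
print(fused.shape)  # (8,)
```

Because the fusion is a convex combination over channels, the output dimensionality is independent of the array geometry, which is what allows the same clustering back-end to be reused across different microphone setups.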
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Speaker Diarization | AISHELL-4 | DER 8.9% | 20 |
| Speaker Diarization | AMI | DER 12.8% | 15 |
| Speaker Diarization | AMI, AliMeeting, AISHELL-4, NOTSOFAR-1 macro-average (test) | Macro DER 12% | 14 |
| Speaker Diarization | AliMeeting | DER 12% | 9 |
| Speaker Diarization | NOTSOFAR-1 | DER 14.1% | 9 |