# Joint Learning Global-Local Speaker Classification to Enhance End-to-End Speaker Diarization and Recognition
## About
Large Audio-Language Models (LALMs) have demonstrated remarkable performance in end-to-end speaker diarization and recognition. However, their speaker discriminability remains limited due to the scarcity of large-scale conversational data and the absence of explicit speaker representation optimization. To address this, we propose GLSC-SDR, a paradigm that jointly trains speaker classification with diarization and recognition. We further introduce a Global-Local Speaker Classification strategy, which uses clustered speakers as global labels and re-encoded intra-cluster speakers as local labels. This hierarchical design enhances fine-grained speaker discrimination while preserving semantic transcription accuracy. Experiments on AliMeeting, AISHELL-4, and AMI-SDM demonstrate that GLSC-SDR achieves competitive or superior performance compared to simulation-based and multi-encoder approaches, without relying on large-scale real conversational data.
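To make the joint objective concrete, below is a minimal PyTorch sketch of how a global-local classification head could be attached to speaker embeddings and combined with the recognition loss. All names (`GlobalLocalSpeakerHead`, `joint_loss`), layer sizes, and loss weights are illustrative assumptions, not taken from the paper's actual implementation.

```python
import torch
import torch.nn as nn


class GlobalLocalSpeakerHead(nn.Module):
    """Hedged sketch of Global-Local Speaker Classification: one classifier
    over global (clustered) speaker labels and one over local (re-encoded
    intra-cluster) labels, sharing the same speaker embedding. Dimensions
    are placeholders."""

    def __init__(self, embed_dim: int, n_global: int, n_local: int):
        super().__init__()
        self.global_head = nn.Linear(embed_dim, n_global)  # cluster-level labels
        self.local_head = nn.Linear(embed_dim, n_local)    # intra-cluster labels
        self.ce = nn.CrossEntropyLoss()

    def forward(self, spk_emb, global_lbl, local_lbl):
        # Two cross-entropy terms computed from the same embedding.
        loss_g = self.ce(self.global_head(spk_emb), global_lbl)
        loss_l = self.ce(self.local_head(spk_emb), local_lbl)
        return loss_g, loss_l


def joint_loss(asr_loss, loss_g, loss_l, lam_g=0.5, lam_l=0.5):
    # Assumed weighted sum; the paper's actual weighting scheme is not
    # reproduced here.
    return asr_loss + lam_g * loss_g + lam_l * loss_l


if __name__ == "__main__":
    head = GlobalLocalSpeakerHead(embed_dim=256, n_global=1000, n_local=50)
    emb = torch.randn(8, 256)             # batch of speaker embeddings
    g = torch.randint(0, 1000, (8,))      # clustered (global) labels
    l = torch.randint(0, 50, (8,))        # re-encoded local labels
    lg, ll = head(emb, g, l)
    total = joint_loss(asr_loss=torch.tensor(1.0), loss_g=lg, loss_l=ll)
    print(total.item())
```

The two-level supervision is the key design point: the global head keeps embeddings separable across the whole clustered speaker inventory, while the local head forces finer discrimination among speakers that the clustering step grouped together.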
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Speaker-attributed Automatic Speech Recognition | AliMeeting | Word Error Rate (WER) | 20.09 | 7 |
| Speaker-attributed Automatic Speech Recognition | AMI SDM | Word Error Rate (WER) | 17.49 | 7 |
| Speaker-attributed Automatic Speech Recognition | AISHELL-4 | Word Error Rate (WER) | 21.36 | 6 |