Enhancing Speaker Verification with w2v-BERT 2.0 and Knowledge Distillation guided Structured Pruning

About

Large-scale self-supervised pre-trained models (PTMs) have brought significant improvements to the speaker verification (SV) task by providing rich feature representations. In this paper, we utilize w2v-BERT 2.0, a model with approximately 600 million parameters trained on 4.5 million hours of unlabeled data across 143 languages, for the SV task. The MFA structure with a Layer Adapter is employed to process the multi-layer feature outputs from the PTM and extract speaker embeddings. Additionally, we incorporate low-rank adaptation (LoRA) for parameter-efficient fine-tuning. Our model achieves state-of-the-art results with 0.12% and 0.55% EER on the Vox1-O and Vox1-H test sets, respectively. Furthermore, we apply knowledge distillation guided structured pruning, reducing the model size by 80% with only a 0.04% EER degradation. Source code and models are released at https://github.com/ZXHY-82/w2v-BERT-2.0_SV.
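To make the abstract's pipeline concrete, below is a minimal PyTorch sketch of the three ingredients it names: per-layer adapters with a learnable weighted sum over the PTM's hidden layers (a simplified stand-in for the paper's MFA structure with Layer Adapter), a LoRA wrapper for parameter-efficient fine-tuning, and an embedding-level distillation loss of the kind that could guide structured pruning. All class names, dimensions, and the cosine form of the distillation objective are illustrative assumptions, not the released implementation; see the linked repository for the actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LoRALinear(nn.Module):
    """Wrap a frozen pre-trained linear layer with a trainable low-rank
    update, so only the small A/B matrices are learned during fine-tuning."""
    def __init__(self, base: nn.Linear, rank: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False          # pre-trained weights stay frozen
        self.lora_a = nn.Linear(base.in_features, rank, bias=False)
        self.lora_b = nn.Linear(rank, base.out_features, bias=False)
        nn.init.zeros_(self.lora_b.weight)   # low-rank update starts at zero
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * self.lora_b(self.lora_a(x))

class LayerAggregation(nn.Module):
    """Per-layer adapters plus a learnable softmax-weighted sum over all PTM
    layers, followed by mean/std statistics pooling into a speaker embedding."""
    def __init__(self, num_layers: int, hidden_dim: int, emb_dim: int = 192):
        super().__init__()
        self.adapters = nn.ModuleList(
            [nn.Linear(hidden_dim, hidden_dim) for _ in range(num_layers)])
        self.layer_weights = nn.Parameter(torch.zeros(num_layers))
        self.proj = nn.Linear(2 * hidden_dim, emb_dim)  # mean + std stats

    def forward(self, hidden_states):        # list of (B, T, D) tensors
        adapted = [a(h) for a, h in zip(self.adapters, hidden_states)]
        stacked = torch.stack(adapted, dim=0)            # (L, B, T, D)
        w = torch.softmax(self.layer_weights, 0).view(-1, 1, 1, 1)
        fused = (w * stacked).sum(dim=0)                 # (B, T, D)
        stats = torch.cat([fused.mean(1), fused.std(1)], dim=-1)
        return self.proj(stats)                          # (B, emb_dim)

def embedding_distillation_loss(student_emb, teacher_emb):
    """Cosine distance between pruned-model (student) and full-model (teacher)
    embeddings: one plausible form of a loss guiding structured pruning."""
    return 1.0 - F.cosine_similarity(student_emb, teacher_emb, dim=-1).mean()

# Toy usage with random stand-ins for the PTM's layer outputs.
hidden = [torch.randn(2, 50, 1024) for _ in range(24)]
emb = LayerAggregation(num_layers=24, hidden_dim=1024)(hidden)  # (2, 192)
```

In the full system, a w2v-BERT 2.0 encoder (available, e.g., via Hugging Face Transformers with hidden-state outputs enabled) would supply `hidden_states`, with `LoRALinear` substituted into its projection layers; the layer count and hidden size above are toy values.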

Ze Li, Ming Cheng, Ming Li • 2025

Related benchmarks

Task                   Dataset              Result      Rank
Speaker Verification   VoxCeleb1 (Vox1-O)   EER 0.14%   105
Speaker Verification   VoxCeleb1 (Vox1-H)   EER 0.55%   70
Speaker Verification   VoxCeleb-E           EER 0.27%   62
Speaker Verification   CN-Celeb (test)      EER 4.67%   4
