
CA-MHFA: A Context-Aware Multi-Head Factorized Attentive Pooling for SSL-Based Speaker Verification

About

Self-supervised learning (SSL) models for speaker verification (SV) have gained significant attention in recent years. However, existing SSL-based SV systems often struggle to capture local temporal dependencies and generalize across different tasks. In this paper, we propose context-aware multi-head factorized attentive pooling (CA-MHFA), a lightweight framework that incorporates contextual information from surrounding frames. CA-MHFA leverages grouped, learnable queries to effectively model contextual dependencies while maintaining efficiency by sharing keys and values across groups. Experimental results on the VoxCeleb dataset show that CA-MHFA achieves EERs of 0.42%, 0.48%, and 0.96% on Vox1-O, Vox1-E, and Vox1-H, respectively, outperforming complex models like WavLM-TDNN with fewer parameters and faster convergence. Additionally, CA-MHFA demonstrates strong generalization across multiple SSL models and tasks, including emotion recognition and anti-spoofing, highlighting its robustness and versatility.
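The pooling mechanism the abstract describes (learnable per-head queries attending over frame-level SSL features, with keys and values shared across query groups and attention scores computed over a window of surrounding frames) can be illustrated with a minimal sketch. This is an assumption-laden toy in NumPy, not the authors' implementation: the function name `ca_mhfa_pool`, the simple mean over a context window, and all dimensions are illustrative choices.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax over the given axis
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def ca_mhfa_pool(frames, queries, w_k, w_v, context=1):
    """Toy context-aware multi-head factorized attentive pooling (sketch).

    frames:  (T, D)   frame-level SSL features for one utterance
    queries: (H, Dk)  learnable queries, one per head; in CA-MHFA, groups of
                      such queries share the key/value projections below
    w_k:     (D, Dk)  key projection, shared across query groups
    w_v:     (D, Dv)  value projection, shared across query groups
    context: half-width of the frame context window (illustrative choice:
             keys are averaged with their +/- context neighbours)
    """
    T = frames.shape[0]
    keys = frames @ w_k          # (T, Dk)
    vals = frames @ w_v          # (T, Dv)
    # context-aware keys: blend each frame's key with its neighbours
    ctx_keys = np.stack([
        keys[max(0, t - context):min(T, t + context + 1)].mean(axis=0)
        for t in range(T)
    ])                           # (T, Dk)
    scores = queries @ ctx_keys.T            # (H, T) per-head frame scores
    attn = softmax(scores, axis=-1)          # per-head attention weights
    pooled = attn @ vals                     # (H, Dv) per-head pooled values
    return pooled.reshape(-1)                # concatenated utterance embedding

# illustrative usage with random features and parameters
rng = np.random.default_rng(0)
frames = rng.standard_normal((50, 16))       # 50 frames, 16-dim features
queries = rng.standard_normal((4, 8))        # 4 heads
w_k = rng.standard_normal((16, 8))
w_v = rng.standard_normal((16, 8))
emb = ca_mhfa_pool(frames, queries, w_k, w_v, context=2)  # shape (32,)
```

Because the keys and values are computed once and reused by every query group, the parameter and compute cost grows only with the number of lightweight queries, which is consistent with the efficiency argument in the abstract.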

Junyi Peng, Ladislav Mošner, Lin Zhang, Oldřich Plchot, Themos Stafylakis, Lukáš Burget, Jan Černocký • 2024

Related benchmarks

Task                   Dataset              Metric  Result  Rank
Speaker Verification   VoxCeleb1 (Vox1-O)   EER     0.42    105
Speaker Verification   VoxCeleb1 (Vox1-H)   EER     0.96    70
Speaker Verification   VoxCeleb-E           EER     0.48    62
