
Frequency and Multi-Scale Selective Kernel Attention for Speaker Verification

About

The majority of recent state-of-the-art speaker verification architectures adopt multi-scale processing and frequency-channel attention mechanisms. Convolutional layers of these models typically have a fixed kernel size, e.g., 3 or 5. In this study, we further contribute to this line of research by utilising a selective kernel attention (SKA) mechanism. The SKA mechanism allows each convolutional layer to adaptively select its kernel size in a data-driven fashion. It is based on an attention mechanism that exploits both the frequency and channel domains. We first apply the existing SKA module to our baseline. We then propose two SKA variants: the first is applied in front of the ECAPA-TDNN model, and the other is combined with the Res2Net backbone block. Through extensive experiments, we demonstrate that our two proposed SKA variants consistently improve performance and are complementary when tested on three different evaluation protocols.
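The core idea of selective kernel attention can be illustrated with a minimal numpy sketch: features pass through parallel branches with different kernel sizes, the branches are fused and pooled, and a softmax over branches produces per-channel weights that softly "select" a kernel size. This is a hedged illustration of the general SK mechanism, not the paper's exact module; the all-ones smoothing kernels, random projection matrices `W`, and 1-D (time-axis) layout are assumptions for brevity.

```python
import numpy as np

def depthwise_conv1d(x, k):
    # x: (C, T); depthwise 'same' convolution with a size-k averaging kernel
    # (a stand-in for learned convolution weights)
    kernel = np.ones(k) / k
    return np.stack([np.convolve(row, kernel, mode="same") for row in x])

def softmax(z, axis=0):
    e = np.exp(z - z.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def selective_kernel(x, kernel_sizes=(3, 5), rng=None):
    # x: (C, T) feature map; returns a (C, T) map where each channel is a
    # soft, data-driven mixture of the different-kernel-size branches
    rng = np.random.default_rng(0) if rng is None else rng
    C, T = x.shape
    branches = np.stack([depthwise_conv1d(x, k) for k in kernel_sizes])  # (B, C, T)
    fused = branches.sum(axis=0)          # element-wise fusion of branches
    s = fused.mean(axis=1)                # (C,) global average pool over time
    # per-branch projections producing selection logits (random here,
    # learned FC layers in a real model)
    W = rng.standard_normal((len(kernel_sizes), C, C)) * 0.1
    logits = np.stack([Wb @ s for Wb in W])          # (B, C)
    attn = softmax(logits, axis=0)                   # softmax across branches
    return (attn[:, :, None] * branches).sum(axis=0)
```

In the paper's setting the attention additionally exploits the frequency axis; this sketch only shows the branch-selection principle over channels.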

Sung Hwan Mun, Jee-weon Jung, Min Hyun Han, Nam Soo Kim • 2022

Related benchmarks

Task: Speaker Verification
Dataset: VoxCeleb1 (Vox1-O)
Result: EER 0.72
Rank: 33

Task: Spoofing-robust Automatic Speaker Verification
Dataset: ASVspoof 2019 logical access, SASV 2022 (test)
Result: SASV EER (%) 21.75
Rank: 5
