
Real Additive Margin Softmax for Speaker Verification

About

The additive margin softmax (AM-Softmax) loss has delivered remarkable performance in speaker verification. A presumed behavior of AM-Softmax is that it shrinks within-class variation by emphasizing target logits, which in turn improves the margin between target and non-target classes. In this paper, we conduct a careful analysis of the behavior of the AM-Softmax loss and show that this loss does not implement real max-margin training. Based on this observation, we present a Real AM-Softmax loss that involves a true margin function in the softmax training. Experiments conducted on VoxCeleb1, SITW and CNCeleb demonstrate that the corrected AM-Softmax loss consistently outperforms the original one. The code has been released at https://gitlab.com/csltstu/sunine.
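As background for the analysis, the standard AM-Softmax loss subtracts a fixed margin m from the target-class cosine before the scaled softmax. Below is a minimal NumPy sketch of that standard formulation (not the paper's corrected Real AM-Softmax); the scale and margin values are illustrative defaults, not the ones used in the paper.

```python
import numpy as np

def am_softmax_loss(cosines, labels, s=30.0, m=0.2):
    """Standard AM-Softmax loss, a minimal NumPy sketch.

    cosines: (batch, n_classes) cosine similarities between L2-normalized
             embeddings and L2-normalized class weight vectors.
    labels:  (batch,) integer target-class indices.
    s, m:    scale factor and additive margin (illustrative values).
    """
    logits = s * cosines  # scaled cosines; multiplication makes a new array
    rows = np.arange(len(labels))
    # subtract the margin from the target logit only: s * (cos(theta_y) - m)
    logits[rows, labels] = s * (cosines[rows, labels] - m)
    # numerically stable log-softmax
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy on the margin-penalized target logit
    return -log_prob[rows, labels].mean()
```

With m > 0 the target logit is reduced, so the loss on the same cosines is strictly larger than plain softmax cross-entropy (the m = 0 case), which is how the margin pressures within-class compactness during training.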

Lantian Li, Ruiqian Nai, Dong Wang · 2021

Related benchmarks

Task                 | Dataset                | Result     | Rank
Speaker Verification | VoxCeleb1 (Vox1-O)     | EER 2.484  | 33
Speaker Verification | CNCeleb (eval)         | EER 10.138 | 12
Speaker Verification | Vox-H VoxCeleb2 (test) | EER 2.258  | 8
Speaker Verification | VoxCeleb2 O (test)     | EER 1.085  | 8
Speaker Verification | Vox-E VoxCeleb2 (test) | EER 1.223  | 8
