Real Additive Margin Softmax for Speaker Verification
About
The additive margin softmax (AM-Softmax) loss has delivered remarkable performance in speaker verification. The claimed behavior of AM-Softmax is that it shrinks within-class variation by emphasizing the target logits, which in turn improves the margin between target and non-target classes. In this paper, we conduct a careful analysis of the behavior of the AM-Softmax loss and show that this loss does not implement real max-margin training. Based on this observation, we present a Real AM-Softmax loss that involves a true margin function in the softmax training. Experiments conducted on VoxCeleb1, SITW and CNCeleb demonstrate that the corrected AM-Softmax loss consistently outperforms the original one. The code has been released at https://gitlab.com/csltstu/sunine.
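For reference, the standard AM-Softmax loss that the paper analyzes subtracts a fixed margin m from the target cosine before scaling. The following is a minimal NumPy sketch of that standard formulation (not the paper's corrected Real AM-Softmax); the function name and the example values are illustrative only.

```python
import numpy as np

def am_softmax_loss(cosines, target, s=30.0, m=0.2):
    """Standard AM-Softmax loss for a single sample.

    cosines: cosine similarities between the embedding and each class
             weight vector (one entry per class).
    target:  index of the true class.
    s, m:    scale factor and additive margin (common defaults shown).
    """
    logits = s * cosines.astype(float)
    logits[target] = s * (cosines[target] - m)  # margin applied to target logit
    # Numerically stable cross-entropy over the margined logits.
    logits -= logits.max()
    log_prob = logits[target] - np.log(np.exp(logits).sum())
    return -log_prob

# Example: a sample whose target cosine is already well separated.
cos = np.array([0.8, 0.1, -0.2])
loss_m0 = am_softmax_loss(cos, target=0, m=0.0)
loss_m2 = am_softmax_loss(cos, target=0, m=0.2)
```

Note that the margin adds penalty (loss_m2 > loss_m0) even for this already well-separated sample, which illustrates why the loss is not a true max-margin objective.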
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Speaker Verification | VoxCeleb1 (Vox1-O) | EER | 2.484 | 33 |
| Speaker Verification | CNCeleb (eval) | EER | 10.138 | 12 |
| Speaker Verification | Vox-H VoxCeleb2 (test) | EER | 2.258 | 8 |
| Speaker Verification | VoxCeleb2 O (test) | EER | 1.085 | 8 |
| Speaker Verification | Vox-E VoxCeleb2 (test) | EER | 1.223 | 8 |