
A Comparative Study on Recent Neural Spoofing Countermeasures for Synthetic Speech Detection

About

A great deal of recent research effort on speech spoofing countermeasures has been invested in back-end neural networks and training criteria. We contribute to this effort with a comparative perspective in this study. Our comparison of countermeasure models on the ASVspoof 2019 logical access task takes into account recently proposed margin-based training criteria, widely used front ends, and common strategies for handling varied-length input trials. We also measured intra-model differences through multiple training-evaluation rounds with random initialization. Our statistical analysis demonstrates that the performance of the same model may differ significantly when only the random initial seed is changed. Thus, we recommend similar analysis, or multiple training-evaluation rounds, for further research on this database. Despite the intra-model differences, we observed a few promising techniques, such as average pooling to process varied-length inputs and a new hyper-parameter-free loss function. These two techniques led to the best single model in our experiment, which achieved an equal error rate of 1.92% and was statistically significantly different from most of the other experimental models.
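The average-pooling strategy mentioned in the abstract can be sketched as follows: frame-level features of any duration are collapsed along the time axis into one fixed-size utterance-level embedding, so trials of different lengths feed the same back-end classifier. This is a minimal illustration, not the paper's implementation; the function name and feature dimensions are assumptions.

```python
import numpy as np

def average_pool(frame_embeddings):
    # Collapse a (num_frames, dim) array of frame-level features into a
    # single dim-sized utterance embedding by averaging over time.
    # The output shape is independent of num_frames, which is how
    # varied-length trials are mapped to a fixed-size representation.
    return np.asarray(frame_embeddings).mean(axis=0)

# Two trials of different lengths (illustrative shapes) produce
# embeddings of the same size.
short_trial = np.random.randn(120, 64)   # 120 frames, 64-dim features
long_trial = np.random.randn(987, 64)    # 987 frames, same feature dim

assert average_pool(short_trial).shape == (64,)
assert average_pool(long_trial).shape == (64,)
```

In practice the pooling sits between a frame-level front end and the utterance-level classification layers; mean (and sometimes mean-plus-standard-deviation) pooling is a common choice for this role.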

Xin Wang, Junichi Yamagishi • 2021

Related benchmarks

Task | Dataset | Result | Rank
Audio Spoofing Detection | ASVspoof 2019 Logical Access (evaluation) | EER 1.92% | 30
Speech Deepfake Detection | ASVspoof 2019 Logical Access (evaluation) | min-tDCF 0.0524 | 21
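The EER (equal error rate) reported above is the operating point where the false rejection rate (bona fide trials scored as spoof) equals the false acceptance rate (spoof trials scored as bona fide). A minimal sketch of the computation, not the official ASVspoof scoring script, with illustrative scores:

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    # Sweep a decision threshold over all observed scores and find the
    # point where false rejection and false acceptance rates cross.
    thresholds = np.sort(np.concatenate([bonafide_scores, spoof_scores]))
    frr = np.array([(bonafide_scores < t).mean() for t in thresholds])
    far = np.array([(spoof_scores >= t).mean() for t in thresholds])
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2

# Toy scores (higher = more likely bona fide); one spoof trial (0.65)
# overlaps the bona fide range, so the EER is 1/4 = 0.25 here.
bona = np.array([0.9, 0.8, 0.7, 0.6])
spoof = np.array([0.5, 0.4, 0.3, 0.65])
print(compute_eer(bona, spoof))  # → 0.25
```

The min-tDCF metric additionally weights the two error types by application-dependent costs and priors defined in the ASVspoof evaluation plan, so it cannot be reduced to a single crossing point like the EER.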
