
Investigating self-supervised front ends for speech spoofing countermeasures

About

Self-supervised speech modeling is a rapidly progressing research topic, and many pre-trained models have been released and used in various downstream tasks. For speech anti-spoofing, most countermeasures (CMs) use signal processing algorithms to extract acoustic features for classification. In this study, we use pre-trained self-supervised speech models as the front end of spoofing CMs. We investigated different back-end architectures to combine with the self-supervised front end, the effectiveness of fine-tuning the front end, and the performance of different pre-trained self-supervised models. Our findings show that, when a good pre-trained front end was fine-tuned with either a shallow or a deep neural-network-based back end on the ASVspoof 2019 logical access (LA) training set, the resulting CM not only achieved a low EER on the 2019 LA test set but also significantly outperformed the baseline on the ASVspoof 2015, 2021 LA, and 2021 deepfake test sets. A sub-band analysis further demonstrated that the CM mainly used information in a specific frequency band to discriminate between bona fide and spoofed trials across the test sets.
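All results below are reported as equal error rate (EER), the operating point where the false rejection rate on bona fide trials equals the false acceptance rate on spoofed trials. A minimal sketch of how EER can be computed from CM scores is shown here; the function name `compute_eer` and the convention that higher scores indicate bona fide speech are illustrative assumptions, not part of the paper.

```python
import numpy as np

def compute_eer(bonafide_scores, spoof_scores):
    """Approximate the equal error rate (EER) from detection scores.

    Assumes higher score = more likely bona fide (illustrative convention).
    """
    scores = np.concatenate([bonafide_scores, spoof_scores])
    labels = np.concatenate([np.ones(len(bonafide_scores)),
                             np.zeros(len(spoof_scores))])
    # Sort scores ascending and sweep a threshold over each position.
    order = np.argsort(scores)
    labels = labels[order]
    n_bona = labels.sum()
    n_spoof = len(labels) - n_bona
    # FRR: fraction of bona fide trials at or below the threshold (rejected).
    frr = np.cumsum(labels) / n_bona
    # FAR: fraction of spoofed trials above the threshold (accepted).
    far = 1.0 - np.cumsum(1 - labels) / n_spoof
    # EER is where the two rates cross; average them at the closest point.
    idx = np.argmin(np.abs(frr - far))
    return (frr[idx] + far[idx]) / 2.0
```

For perfectly separable scores the function returns 0; the crossing-point average is a coarse approximation that is standard for finite evaluation sets.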

Xin Wang, Junichi Yamagishi • 2021

Related benchmarks

Task                      Dataset            Result      Rank
Audio Deepfake Detection  in the wild        EER 25.1    58
Audio Deepfake Detection  ASVspoof 2021      EER 9.4     27
Audio Deepfake Detection  ASVspoof 2019      EER 2.3     25
Audio Deepfake Detection  MLAAD-EN           EER 27.8    18
Audio Deepfake Detection  ASVspoof LA 2019   --          11
