
Transferable Adversarial Attacks against ASR

About

Given the extensive research and real-world applications of automatic speech recognition (ASR), ensuring the robustness of ASR models against minor input perturbations is crucial for maintaining their effectiveness in real-time scenarios. Previous explorations of ASR model robustness have predominantly evaluated accuracy in white-box settings with full access to the ASR model. However, full model details are often unavailable in real-world applications, so evaluating the robustness of black-box ASR models is essential for a comprehensive understanding of ASR model resilience. To this end, we thoroughly study the vulnerability of cutting-edge ASR models to practical black-box attacks, and propose to employ two advanced time-domain transferable attacks alongside our differentiable feature extractor. We also propose a speech-aware gradient optimization approach (SAGO) for ASR, which forces mistranscription with minimal impact on human perceptibility through a voice activity detection rule and a speech-aware gradient-oriented optimizer. Our comprehensive experimental results reveal performance enhancements over baseline approaches across five models on two databases.
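The abstract describes restricting adversarial perturbations to speech-active regions via a voice activity detection (VAD) rule. The paper's exact VAD rule and optimizer are not given here, so the following is only a minimal illustrative sketch: an energy-based VAD mask (a hypothetical stand-in) combined with a sign-gradient step confined to speech regions and bounded in an L-infinity ball, in the spirit of the described approach.

```python
import numpy as np

def energy_vad_mask(wav, frame_len=400, threshold=0.01):
    """Energy-based VAD: mark frames whose RMS energy exceeds a
    threshold as speech-active. This is a hypothetical stand-in for
    the paper's VAD rule, not the authors' implementation."""
    mask = np.zeros_like(wav)
    for start in range(0, len(wav), frame_len):
        frame = wav[start:start + frame_len]
        if frame.size and np.sqrt(np.mean(frame ** 2)) > threshold:
            mask[start:start + frame_len] = 1.0
    return mask

def speech_aware_step(wav, grad, mask, step_size=1e-3, epsilon=0.01):
    """One sign-gradient step applied only inside speech regions,
    with the total perturbation clipped to an L-inf ball of radius
    epsilon around the original waveform."""
    perturbed = wav + step_size * np.sign(grad) * mask
    delta = np.clip(perturbed - wav, -epsilon, epsilon)
    return wav + delta

# Toy usage: silence / sine-tone "speech" / silence.
rng = np.random.default_rng(0)
wav = np.concatenate([np.zeros(400),
                      0.5 * np.sin(np.linspace(0.0, 50.0, 800)),
                      np.zeros(400)])
mask = energy_vad_mask(wav)
grad = rng.standard_normal(len(wav))  # placeholder for a real ASR loss gradient
adv = speech_aware_step(wav, grad, mask)
```

In a real attack, `grad` would come from backpropagating a transcription loss through the target (or surrogate) ASR model; the mask ensures silent regions stay untouched, which limits audible artifacts.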

Xiaoxue Gao, Zexin Li, Yiming Chen, Cong Liu, Haizhou Li · 2024

Related benchmarks

Task                         | Dataset            | Result (WER) | Rank
Speech Recognition           | LibriSpeech (test) | 0.4567       | 59
Automatic Speech Recognition | LJ-Speech          | 21.03        | 35
Speech Recognition           | LJ Speech (test)   | 33.22        | 35
Automatic Speech Recognition | LibriSpeech        | 30.26        | 35
