
Improved Noisy Student Training for Automatic Speech Recognition

About

Recently, a semi-supervised learning method known as "noisy student training" has been shown to significantly improve the image classification performance of deep networks. Noisy student training is an iterative self-training method that leverages augmentation to improve network performance. In this work, we adapt and improve noisy student training for automatic speech recognition, employing (adaptive) SpecAugment as the augmentation method. We find effective methods to filter, balance, and augment the data generated between self-training iterations. By doing so, we are able to obtain word error rates (WERs) of 4.2%/8.6% on the clean/noisy LibriSpeech test sets by using only the clean 100h subset of LibriSpeech as the supervised set and the rest (860h) as the unlabeled set. Furthermore, we are able to achieve WERs of 1.7%/3.4% on the clean/noisy LibriSpeech test sets by using the unlab-60k subset of LibriLight as the unlabeled set for LibriSpeech 960h. We thus improve upon the previous state-of-the-art clean/noisy test WERs achieved on LibriSpeech 100h (4.74%/12.20%) and LibriSpeech (1.9%/4.1%).
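The iterative self-training loop described in the abstract can be sketched as below. This is a minimal illustration, not the paper's implementation: `train`, `augment`, and `score` are hypothetical callables standing in for the actual ASR model training, SpecAugment-style augmentation, and confidence scoring, and the dataset-balancing step is omitted for brevity.

```python
def noisy_student_training(labeled, unlabeled, train, augment, score,
                           num_iterations=3, keep_fraction=0.8):
    """Sketch of noisy student training for ASR.

    labeled:    list of (utterance, transcript) pairs (the supervised set)
    unlabeled:  list of utterances with no transcripts
    train:      callable that fits a model on (utterance, transcript) pairs
                and returns it as a callable utterance -> transcript
    augment:    callable applying augmentation (e.g. SpecAugment) to an utterance
    score:      callable (model, utterance, transcript) -> confidence score
    """
    # Generation-0 teacher: trained on the supervised set only.
    model = train(labeled)
    for _ in range(num_iterations):
        # The teacher transcribes the unlabeled pool (no augmentation here).
        pseudo = [(x, model(x)) for x in unlabeled]
        # Filter: keep only the most confident pseudo-labeled examples.
        pseudo.sort(key=lambda pair: score(model, *pair), reverse=True)
        kept = pseudo[: int(keep_fraction * len(pseudo))]
        # Augment the pseudo-labeled data and train the next-generation student
        # on the union of supervised and pseudo-labeled sets.
        noisy = [(augment(x), y) for x, y in kept]
        model = train(labeled + noisy)
    return model
```

The key design point is that noise (augmentation) is applied only when training the student, while the teacher labels clean, unaugmented inputs; this is what distinguishes noisy student training from plain self-training.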

Daniel S. Park, Yu Zhang, Ye Jia, Wei Han, Chung-Cheng Chiu, Bo Li, Yonghui Wu, Quoc V. Le • 2020

Related benchmarks

| Task                         | Dataset                        | Metric  | Result | Rank |
|------------------------------|--------------------------------|---------|--------|------|
| Automatic Speech Recognition | LibriSpeech (test-other)       | WER     | 3.3    | 966  |
| Automatic Speech Recognition | LibriSpeech clean (test)       | WER     | 1.7    | 833  |
| Automatic Speech Recognition | LibriSpeech (dev-other)        | WER     | 3.1    | 411  |
| Automatic Speech Recognition | LibriSpeech (dev-clean)        | WER (%) | 1.6    | 319  |
| Speech Recognition           | LibriSpeech (test)             | WER     | 0.017  | 59   |
| Automatic Speech Recognition | LibriSpeech 960h (test-clean)  | WER     | 0.017  | 53   |
| Automatic Speech Recognition | LibriSpeech 100h (test-clean)  | WER     | 4.2    | 32   |
| Speech Recognition           | LibriSpeech (dev)              | WER     | 1.6    | 21   |
| Automatic Speech Recognition | LibriSpeech 100h clean (dev)   | WER     | 3.9    | 20   |
