Self-training and Pre-training are Complementary for Speech Recognition

About

Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or if they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary in a variety of labeled data setups. Using just 10 minutes of labeled data from Libri-light as well as 53k hours of unlabeled data from LibriVox achieves WERs of 3.0%/5.2% on the clean and other test sets of Librispeech - rivaling the best published systems trained on 960 hours of labeled data only a year ago. Training on all labeled data of Librispeech achieves WERs of 1.5%/3.1%.

Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, Michael Auli • 2020
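
The recipe described in the abstract has two ingredients: wav2vec 2.0 pre-training on unlabeled audio, and self-training, where a model fine-tuned on the small labeled set transcribes the unlabeled audio and those pseudo-labels become targets for another round of supervised training. The sketch below illustrates only the pseudo-labeling step; it uses a torchaudio wav2vec 2.0 CTC checkpoint and greedy decoding as stand-ins for the authors' fairseq models and language-model beam search, so treat it as a schematic rather than the paper's actual pipeline.

```python
# Minimal pseudo-labeling sketch: a pre-trained, fine-tuned wav2vec 2.0 model
# transcribes unlabeled audio; the transcripts become training targets for a
# second round of supervised training. Checkpoint and paths are illustrative.
import torch
import torchaudio

bundle = torchaudio.pipelines.WAV2VEC2_ASR_BASE_960H   # any fine-tuned CTC model works
model = bundle.get_model().eval()
labels = bundle.get_labels()                            # CTC vocabulary; index 0 is the blank

def pseudo_label(path: str) -> str:
    """Greedily decode one unlabeled utterance into a pseudo-transcript."""
    waveform, sr = torchaudio.load(path)
    waveform = torchaudio.functional.resample(waveform, sr, bundle.sample_rate)
    with torch.inference_mode():
        emissions, _ = model(waveform)                  # (batch, time, vocab) scores
    ids = emissions[0].argmax(dim=-1).tolist()
    # Standard greedy CTC rule: collapse repeated frames, drop blanks.
    out, prev = [], None
    for i in ids:
        if i != prev and i != 0:
            out.append(labels[i])
        prev = i
    return "".join(out).replace("|", " ").strip()       # '|' marks word boundaries

# Pairs like these would be mixed with the labeled data (optionally filtered
# or re-decoded with a language model) to train the final model:
# pairs = [(p, pseudo_label(p)) for p in unlabeled_audio_paths]
```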

Related benchmarks

Task                          | Dataset                        | WER (%) | Rank
------------------------------|--------------------------------|---------|-----
Automatic Speech Recognition  | LibriSpeech clean (test)       | 1.5     | 1156
Automatic Speech Recognition  | LibriSpeech (test-other)       | 3.1     | 1151
Automatic Speech Recognition  | LibriSpeech (dev-other)        | 2.7     | 462
Automatic Speech Recognition  | LibriSpeech (dev-clean)        | 1.1     | 340
Automatic Speech Recognition  | LibriSpeech 960h (test-other)  | 3.1     | 88
Automatic Speech Recognition  | LibriSpeech 960h (dev-other)   | 2.7     | 50
Long-form Transcription       | Earnings-22                    | 28      | 27
Long-form Transcription       | Earnings-21                    | 21.7    | 26
Speech Recognition            | LibriSpeech 960hr (test)       | 1.5     | 26
Speech Recognition            | LibriSpeech 960hr (dev)        | 1.1     | 25

Showing 10 of 15 rows.
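
All results above are word error rates (WER): the word-level edit distance between the system's hypothesis and the reference transcript, divided by the number of reference words and reported as a percentage. A minimal reference implementation, assuming whitespace-tokenized transcripts as in standard LibriSpeech scoring:

```python
# Word error rate: Levenshtein distance over words, normalized by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    d = list(range(len(hyp) + 1))          # rolling row of the edit-distance table
    for i, r in enumerate(ref, 1):
        prev, d[0] = d[0], i
        for j, h in enumerate(hyp, 1):
            cur = min(d[j] + 1,            # deletion
                      d[j - 1] + 1,        # insertion
                      prev + (r != h))     # substitution or match
            prev, d[j] = d[j], cur
    return d[len(hyp)] / len(ref)

print(wer("the cat sat on the mat", "the cat sat on mat"))  # 1 deletion / 6 words ≈ 0.167
```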
