Effectiveness of self-supervised pre-training for speech recognition

About

We compare self-supervised representation learning algorithms which either explicitly quantize the audio data or learn representations without quantization. We find the former to be more accurate, since it builds a good vocabulary of the data through vq-wav2vec [1] that enables learning of effective representations in subsequent BERT training. In contrast to previous work, we directly fine-tune the pre-trained BERT models on transcribed speech using a Connectionist Temporal Classification (CTC) loss instead of feeding the representations into a task-specific model. We also propose a BERT-style model that learns directly from the continuous audio data and compare pre-training on raw audio to pre-training on spectral features. Fine-tuning a BERT model on 10 hours of labeled Librispeech data with a vq-wav2vec vocabulary is almost as good as the best known reported system trained on 100 hours of labeled data on test-clean, while achieving a 25% WER reduction on test-other. When using only 10 minutes of labeled data, WER is 25.2 on test-other and 16.3 on test-clean. This demonstrates that self-supervision can enable speech recognition systems trained on a near-zero amount of transcribed data.
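The fine-tuning objective mentioned above is the CTC loss, which sums the probability of every frame-level alignment that collapses to the target transcript. As an illustration only (this is not the authors' implementation, and real systems compute it in log-space over batched network outputs), here is a minimal forward-algorithm sketch of the CTC probability for one utterance:

```python
def ctc_prob(probs, target, blank=0):
    """Probability that per-frame distributions `probs` (T x vocab,
    rows summing to 1) emit `target` under CTC, summing over all
    alignments with blanks and repeats. Toy version: works in the
    probability domain; production code uses log-space for stability."""
    # Extended label sequence: blank between and around every symbol.
    ext = [blank]
    for y in target:
        ext += [y, blank]
    T, S = len(probs), len(ext)

    # alpha[t][s]: total probability of prefixes of ext[:s+1] at frame t.
    alpha = [[0.0] * S for _ in range(T)]
    alpha[0][0] = probs[0][blank]
    if S > 1:
        alpha[0][1] = probs[0][ext[1]]
    for t in range(1, T):
        for s in range(S):
            a = alpha[t - 1][s]                      # stay on same label
            if s >= 1:
                a += alpha[t - 1][s - 1]             # advance one step
            if s >= 2 and ext[s] != blank and ext[s] != ext[s - 2]:
                a += alpha[t - 1][s - 2]             # skip a blank
            alpha[t][s] = a * probs[t][ext[s]]

    # Valid endings: final symbol or the trailing blank.
    return alpha[T - 1][S - 1] + (alpha[T - 1][S - 2] if S > 1 else 0.0)


# Two frames, vocab {0: blank, 1: 'a'}, uniform 0.5/0.5 per frame:
# the paths a-, -a, aa all collapse to "a", so p = 3 * 0.25 = 0.75.
p = ctc_prob([[0.5, 0.5], [0.5, 0.5]], [1])
```

Fine-tuning then minimizes `-log p` over the transcribed data, with gradients flowing back into the pre-trained model.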

Alexei Baevski, Michael Auli, Abdelrahman Mohamed • 2019

Related benchmarks

Task                         | Dataset                       | Metric | Result | Rank
Automatic Speech Recognition | LibriSpeech (test-other)      | WER    | 12.1   | 966
Automatic Speech Recognition | LibriSpeech clean (test)      | WER    | 4.5    | 833
Automatic Speech Recognition | LibriSpeech (dev-other)       | WER    | 10.9   | 411
Automatic Speech Recognition | LibriSpeech (dev-clean)       | WER    | 4      | 319
Automatic Speech Recognition | LibriSpeech 100h (test-clean) | WER    | 4.5    | 32
Automatic Speech Recognition | LibriSpeech 100h clean (dev)  | WER    | 4      | 20
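All of the benchmark results above are word error rates (WER): the word-level edit distance between reference and hypothesis transcripts, divided by the reference length. A minimal sketch of the metric (illustrative only; scoring toolkits additionally normalize text and aggregate over a whole test set):

```python
def wer(ref, hyp):
    """Word error rate: (substitutions + deletions + insertions) / len(ref),
    computed via Levenshtein distance over word lists."""
    n, m = len(ref), len(hyp)
    # d[i][j]: edit distance between ref[:i] and hyp[:j].
    d = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        d[i][0] = i                      # delete all of ref[:i]
    for j in range(m + 1):
        d[0][j] = j                      # insert all of hyp[:j]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub,           # substitution (or match)
                          d[i - 1][j] + 1,   # deletion
                          d[i][j - 1] + 1)   # insertion
    return d[n][m] / n


# One substitution in three reference words -> WER of 1/3.
rate = wer("the cat sat".split(), "the dog sat".split())
```

A WER of 12.1 on test-other therefore means roughly 12 word errors per 100 reference words.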
