
wav2vec: Unsupervised Pre-training for Speech Recognition

About

We explore unsupervised pre-training for speech recognition by learning representations of raw audio. wav2vec is trained on large amounts of unlabeled audio data, and the resulting representations are then used to improve acoustic model training. We pre-train a simple multi-layer convolutional neural network optimized via a noise contrastive binary classification task. Our experiments on WSJ reduce WER of a strong character-based log-mel filterbank baseline by up to 36% when only a few hours of transcribed data are available. Our approach achieves 2.43% WER on the nov92 test set. This outperforms Deep Speech 2, the best reported character-based system in the literature, while using two orders of magnitude less labeled training data.
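The core idea above, a convolutional encoder mapping raw audio to latent representations, a context network aggregating them, and a noise contrastive binary task that scores true future latents against negative samples, can be sketched in a few lines. The following is a minimal numpy illustration, not the paper's implementation: layer counts, dimensions, strides, and all helper names (`conv1d`, `nce_loss`, `W_k`) are toy assumptions, and wav2vec stacks several encoder and context layers rather than one of each.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w, stride):
    # Valid 1-D convolution with ReLU. x: (T, C_in), w: (K, C_in, C_out).
    K, _, C_out = w.shape
    T_out = (len(x) - K) // stride + 1
    out = np.empty((T_out, C_out))
    for t in range(T_out):
        seg = x[t * stride : t * stride + K]            # (K, C_in) window
        out[t] = np.einsum("kc,kco->o", seg, w)
    return np.maximum(out, 0.0)

# Toy dimensions (illustrative only).
audio = rng.standard_normal((160, 1))                   # raw waveform, (T, 1)
w_enc = rng.standard_normal((10, 1, 16)) * 0.1          # one "encoder" layer
w_ctx = rng.standard_normal((3, 16, 16)) * 0.1          # one "context" layer

z = conv1d(audio, w_enc, stride=5)                      # latents z_t
c = conv1d(z, w_ctx, stride=1)                          # context c_t

def nce_loss(c, z, k, n_neg=4):
    # Binary classification: for each t, the projection of c_t should score
    # the true future latent z_{t+k} high (positive) and latents drawn from
    # other time steps low (negatives/distractors).
    W_k = rng.standard_normal((c.shape[1], z.shape[1])) * 0.1  # step-k projection
    loss, T = 0.0, min(len(c), len(z) - k)
    for t in range(T):
        pred = c[t] @ W_k
        pos = 1.0 / (1.0 + np.exp(-pred @ z[t + k]))    # sigmoid(pred . z+)
        neg_idx = rng.integers(0, len(z), size=n_neg)   # sampled distractors
        neg = 1.0 / (1.0 + np.exp(-z[neg_idx] @ pred))  # sigmoid(pred . z-)
        loss += -np.log(pos + 1e-9) - np.sum(np.log(1.0 - neg + 1e-9))
    return loss / T

print(nce_loss(c, z, k=1))
```

In the paper this loss is summed over several step sizes k, and after pre-training the context representations c_t replace log-mel filterbank features as input to the acoustic model.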

Steffen Schneider, Alexei Baevski, Ronan Collobert, Michael Auli • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Speech Recognition | WSJ nov93 (dev) | WER | 5.1 | 52 |
| Voice Classification | HC/PD/ALS Voice Cohort Cross-Cohort (External) | BalAcc | 39.37 | 52 |
| Voice Classification | HC/PD/ALS Voice Cohort (Internal) | Balanced Accuracy | 0.4477 | 52 |
| Speech Recognition | WSJ nov92 (test) | WER | 2.43 | 34 |
| Emotion Recognition | ER | Accuracy | 59.8 | 33 |
| Phoneme Recognition | TIMIT (test) | PER | 14.7 | 31 |
| Speaker Identification | SID | Accuracy | 56.6 | 30 |
| Speech Recognition | Wall Street Journal open vocabulary (dev93) | WER | 5.1 | 28 |
| Universal Speech Representation Evaluation | SUPERB Benchmark | SID Accuracy | 56.56 | 27 |
| Phoneme Recognition | TIMIT (dev) | PER | 12.9 | 20 |

Showing 10 of 13 rows.

Other info

Code
