
Robust Speech Recognition via Large-Scale Weak Supervision

About

We study the capabilities of speech processing systems trained simply to predict large amounts of transcripts of audio on the internet. When scaled to 680,000 hours of multilingual and multitask supervision, the resulting models generalize well to standard benchmarks and are often competitive with prior fully supervised results but in a zero-shot transfer setting without the need for any fine-tuning. When compared to humans, the models approach their accuracy and robustness. We are releasing models and inference code to serve as a foundation for further work on robust speech processing.

Alec Radford, Jong Wook Kim, Tao Xu, Greg Brockman, Christine McLeavey, Ilya Sutskever • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 1.82 | 1156 |
| Automatic Speech Recognition | LibriSpeech (test-other) | WER | 3.5 | 1151 |
| Automatic Speech Recognition | LibriSpeech (dev-other) | WER | 10.1 | 462 |
| Audio Classification | ESC-50 | Accuracy | 88.84 | 374 |
| Automatic Speech Recognition | LibriSpeech (dev-clean) | WER (%) | 4.4 | 340 |
| Multimodal Sentiment Analysis | CMU-MOSI | -- | -- | 144 |
| Musical Instrument Classification | NSynth | Accuracy | 49.7 | 106 |
| Automatic Speech Recognition | AISHELL-1 (test) | CER | 514 | 97 |
| Automatic Speech Recognition | LibriSpeech Other | WER | 3.55 | 96 |
| Speech Translation Evaluation | MuST-C | Pearson Correlation | 0.9988 | 94 |

Showing 10 of 440 rows
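The WER and CER figures above are edit-distance metrics: the minimum number of word-level (or character-level) insertions, deletions, and substitutions needed to turn the hypothesis into the reference, divided by the reference length. A minimal sketch of the computation (illustrative helper names, not the benchmark scoring code):

```python
def edit_distance(ref, hyp):
    # Levenshtein distance between two token sequences, via dynamic programming.
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[m][n]

def wer(reference, hypothesis):
    # Word error rate: word-level edit distance / reference word count, in %.
    ref_words = reference.split()
    return 100.0 * edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    # Character error rate: the same idea at the character level.
    return 100.0 * edit_distance(list(reference), list(hypothesis)) / len(reference)
```

For example, `wer("the cat sat", "the cat sat down")` counts one insertion against three reference words, giving roughly 33.3%. Note that WER can exceed 100% when the hypothesis contains many spurious words.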
