
Speech Recognition with Deep Recurrent Neural Networks

About

Recurrent neural networks (RNNs) are a powerful model for sequential data. End-to-end training methods such as Connectionist Temporal Classification make it possible to train RNNs for sequence labelling problems where the input-output alignment is unknown. The combination of these methods with the Long Short-term Memory RNN architecture has proved particularly fruitful, delivering state-of-the-art results in cursive handwriting recognition. However, RNN performance in speech recognition has so far been disappointing, with better results returned by deep feedforward networks. This paper investigates deep recurrent neural networks, which combine the multiple levels of representation that have proved so effective in deep networks with the flexible use of long-range context that empowers RNNs. When trained end-to-end with suitable regularisation, we find that deep Long Short-term Memory RNNs achieve a test set error of 17.7% on the TIMIT phoneme recognition benchmark, which to our knowledge is the best recorded score.
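The "deep" in deep RNNs means stacking several recurrent layers so that each layer's hidden-state sequence becomes the input sequence of the next. A minimal NumPy sketch of a stacked LSTM forward pass (illustrative only; weight shapes, gate ordering, and layer sizes here are assumptions, not the paper's exact configuration):

```python
import numpy as np

def lstm_layer(x, W, U, b):
    """Run one LSTM layer over a sequence x of shape (T, d_in).
    W: (4h, d_in), U: (4h, h), b: (4h,). Returns hidden states (T, h)."""
    h_dim = U.shape[1]
    h = np.zeros(h_dim)  # hidden state
    c = np.zeros(h_dim)  # cell state
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    outs = []
    for t in range(x.shape[0]):
        z = W @ x[t] + U @ h + b
        i, f, o, g = np.split(z, 4)          # input, forget, output gates + candidate
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)           # gated cell update
        h = o * np.tanh(c)                   # gated output
        outs.append(h)
    return np.array(outs)

def deep_lstm(x, layers):
    """Stack LSTM layers: each layer's hidden sequence feeds the next."""
    for W, U, b in layers:
        x = lstm_layer(x, W, U, b)
    return x

# Toy dimensions (hypothetical, chosen for illustration)
rng = np.random.default_rng(0)
T, d_in, h = 5, 8, 16
dims = [d_in, h, h, h]  # three stacked recurrent layers
layers = [(0.1 * rng.standard_normal((4 * hi, lo)),
           0.1 * rng.standard_normal((4 * hi, hi)),
           np.zeros(4 * hi))
          for lo, hi in zip(dims[:-1], dims[1:])]

x = rng.standard_normal((T, d_in))
out = deep_lstm(x, layers)
print(out.shape)  # (5, 16): one hidden vector per timestep from the top layer
```

In the full system, the top layer's hidden sequence would feed a softmax over phoneme labels and be trained end-to-end with the CTC loss, which sums over all alignments of the label sequence to the input frames.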

Alex Graves, Abdel-rahman Mohamed, Geoffrey Hinton • 2013

Related benchmarks

Task                        Dataset                   Metric             Result   Rank
Scene Text Recognition      IIIT5K                    Accuracy           64.1     149
Speech Recognition          WSJ (92-eval)             WER                22.7     131
Text Recognition            Street View Text (SVT)    Accuracy           73.2     80
Scene Text Recognition      IC03                      Accuracy           81.8     67
Scene Text Recognition      SVT-Perspective (test)    Accuracy           45.7     56
Phoneme Recognition         TIMIT (test)              PER                17.7     31
Phone Recognition           TIMIT (test)              Frame Error Rate   17.7     23
Phoneme Recognition         TIMIT core (test)         PER                17.7     20
Online Speech Recognition   TIMIT (test)              PER                0.196    6
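The PER (phoneme error rate) figures above are edit-distance-based: the number of substitutions, insertions, and deletions needed to turn the recognised phone sequence into the reference, divided by the reference length. A small self-contained sketch (the phone sequences are made up for illustration):

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two label sequences via dynamic programming."""
    m, n = len(ref), len(hyp)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # deleting all of ref[:i]
    for j in range(n + 1):
        d[0][j] = j  # inserting all of hyp[:j]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # match / substitution
    return d[m][n]

def per(ref, hyp):
    """Phoneme error rate: edit distance normalised by reference length."""
    return edit_distance(ref, hyp) / len(ref)

# Hypothetical example: one substitution (ae -> ah) and one deletion (final sil)
ref = ["sil", "dh", "ax", "k", "ae", "t", "sil"]
hyp = ["sil", "dh", "ax", "k", "ah", "t"]
print(round(per(ref, hyp), 4))  # 2 edits over 7 reference phones -> 0.2857
```

Note that a PER of 17.7 in the table is a percentage, i.e. 0.177 errors per reference phone; the 0.196 reported for online recognition is the same metric expressed as a fraction.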
