
Attention-Based Models for Speech Recognition

About

Recurrent sequence generators conditioned on input data through an attention mechanism have recently shown very good performance on a range of tasks including machine translation, handwriting synthesis and image caption generation. We extend the attention mechanism with features needed for speech recognition. We show that while an adaptation of the model used for machine translation reaches a competitive 18.7% phoneme error rate (PER) on the TIMIT phoneme recognition task, it can only be applied to utterances which are roughly as long as the ones it was trained on. We offer a qualitative explanation of this failure and propose a novel and generic method of adding location-awareness to the attention mechanism to alleviate this issue. The new method yields a model that is robust to long inputs and achieves 18% PER on single utterances and 20% on 10-times longer (repeated) utterances. Finally, we propose a change to the attention mechanism that prevents it from concentrating too much on single frames, which further reduces PER to 17.6%.
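The location-awareness the abstract describes amounts to feeding the previous alignment, filtered by learned 1-D convolutions, back into the attention scoring function, so the model knows where it attended last step. Below is a minimal NumPy sketch of one such decoding step; all variable names, shapes, and the random parameters are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D score vector
    z = np.exp(x - x.max())
    return z / z.sum()

def location_aware_attention(s, H, alpha_prev, W, V, U, F, w):
    """One decoding step of location-aware attention (illustrative shapes).

    s          : decoder state, shape (d_s,)
    H          : encoder states, shape (T, d_h)
    alpha_prev : previous alignment over the T frames, shape (T,)
    W, V, U    : projections, shapes (d_s, d_a), (d_h, d_a), (k, d_a)
    F          : k learned 1-D convolution filters, shape (k, width)
    w          : scoring vector, shape (d_a,)
    """
    # Location features: convolve the previous alignment with each filter -> (T, k)
    f = np.stack([np.convolve(alpha_prev, F[i], mode="same")
                  for i in range(F.shape[0])], axis=1)
    # Scores e_j = w^T tanh(W s + V h_j + U f_j); the U f_j term carries location info
    e = np.tanh(s @ W + H @ V + f @ U) @ w
    alpha = softmax(e)   # new alignment, sums to 1
    c = alpha @ H        # context vector, shape (d_h,)
    return c, alpha
```

Without the `U f_j` term this reduces to plain content-based attention, which is what fails on utterances longer than those seen in training; the convolved alignment gives the scorer a notion of position that generalizes with input length.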

Jan Chorowski, Dzmitry Bahdanau, Dmitriy Serdyuk, Kyunghyun Cho, Yoshua Bengio • 2015

Related benchmarks

Task                    Dataset                  Metric            Result   Rank
Image Classification    CIFAR-100 (test)         Accuracy          89.53    3518
Machine Translation     IWSLT En-De 2014 (test)  BLEU              37.67    92
Phoneme Recognition     TIMIT (test)             PER               18.7     31
Phone Recognition       TIMIT (test)             Frame Error Rate  17.6     23
Phoneme Recognition     TIMIT core (test)        PER               17.6     20
Phoneme Recognition     TIMIT (dev)              PER               15.8     20
Automatic Lip-Reading   LRS3 v1 (dev)            WER               46.69    18
