
Simple Recurrent Units for Highly Parallelizable Recurrence

About

Common recurrent neural architectures scale poorly due to the intrinsic difficulty in parallelizing their state computations. In this work, we propose the Simple Recurrent Unit (SRU), a light recurrent unit that balances model capacity and scalability. SRU is designed to provide expressive recurrence and to enable a highly parallelized implementation, and it comes with careful initialization to facilitate training of deep models. We demonstrate the effectiveness of SRU on multiple NLP tasks. SRU achieves 5--9x speed-up over cuDNN-optimized LSTM on classification and question answering datasets, and delivers stronger results than LSTM and convolutional models. We also obtain an average of 0.7 BLEU improvement over the Transformer model on translation by incorporating SRU into the architecture.
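For a concrete picture of why SRU parallelizes well, below is a minimal, illustrative sketch of an SRU-style layer. It is not the authors' released implementation (see the Code section for that); the function name, tensor shapes, and the gating details are assumptions based on the paper's published formulation. The key idea it shows: all heavy matrix multiplications are batched across the whole sequence, and only cheap elementwise updates remain sequential.

```python
import torch

def sru_layer(x, W, W_f, W_r, v_f, v_r, b_f, b_r):
    """Sketch of one SRU-style layer (illustrative, not the official code).
    x: (seq_len, batch, d); W, W_f, W_r: (d, d); v_f, v_r, b_f, b_r: (d,)."""
    # Parallel part: one batched matmul per projection over every time step.
    u  = x @ W            # candidate values,           (seq_len, batch, d)
    zf = x @ W_f + b_f    # forget-gate pre-activations
    zr = x @ W_r + b_r    # reset-gate pre-activations

    c = torch.zeros_like(x[0])   # initial internal state c_0, (batch, d)
    outputs = []
    # Sequential part: only elementwise operations per time step.
    for t in range(x.size(0)):
        f = torch.sigmoid(zf[t] + v_f * c)   # forget gate
        r = torch.sigmoid(zr[t] + v_r * c)   # reset (highway) gate
        c = f * c + (1.0 - f) * u[t]         # internal state update
        h = r * c + (1.0 - r) * x[t]         # highway connection to the input
        outputs.append(h)
    return torch.stack(outputs), c
```

The elementwise loop is the part the released implementation accelerates with a fused CUDA kernel; the sketch above only makes the split between the parallel projections and the lightweight sequential recurrence explicit.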

Tao Lei, Yu Zhang, Sida I. Wang, Hui Dai, Yoav Artzi • 2017

Related benchmarks

Task                              | Dataset                                   | Metric     | Result | Rank
Machine Translation               | WMT En-De 2014 (test)                     | BLEU       | 28.4   | 379
Question Answering                | SQuAD v1.1 (dev)                          | F1 Score   | 80.2   | 375
Character-level Language Modeling | enwik8 (test)                             | BPC        | 1.19   | 195
Machine Translation               | WMT English-German 2014 (test)            | BLEU       | 28.3   | 136
Subjectivity Classification       | Subj (test)                               | Accuracy   | 93.8   | 125
Question Classification           | TREC (test)                               | Accuracy   | 94.8   | 124
Sentiment Classification          | Stanford Sentiment Treebank SST-2 (test)  | Accuracy   | 89.6   | 99
Text Classification               | MR (test)                                 | Accuracy   | 83.1   | 99
Language Modeling                 | Penn Treebank word-level (test)           | Perplexity | 60.3   | 72
Sentence Classification           | CR (test)                                 | Accuracy   | 86.4   | 33
Showing 10 of 15 rows

Other info

Code
