
Parallelizing Legendre Memory Unit Training

About

Recently, a new recurrent neural network (RNN) named the Legendre Memory Unit (LMU) was proposed and shown to achieve state-of-the-art performance on several benchmark datasets. Here we leverage the linear time-invariant (LTI) memory component of the LMU to construct a simplified variant that can be parallelized during training (and yet executed as an RNN during inference), thus overcoming a well-known limitation of training RNNs on GPUs. We show that this reformulation, which applies generally to any deep network whose recurrent components are linear, makes training up to 200 times faster. Second, to validate its utility, we compare its performance against the original LMU and a variety of published LSTM and transformer networks on seven benchmarks, ranging from psMNIST to sentiment analysis to machine translation. We demonstrate that our models exhibit superior performance on all datasets, often using fewer parameters. For instance, our LMU sets a new state-of-the-art result on psMNIST, and uses half the parameters while outperforming DistilBERT and LSTM models on IMDB sentiment analysis.
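The key idea above is that a linear time-invariant recurrence m_t = A m_{t-1} + B x_t has a closed form (m_t is a sum of A^{t-j} B x_j over j <= t), so all timesteps can be computed at once as one large matrix contraction instead of a sequential loop. The NumPy sketch below illustrates this under simplifying assumptions (scalar input per step, dense small matrices); the function names and shapes are illustrative, not the authors' implementation.

```python
import numpy as np

def sequential_lti(A, B, x):
    """Run the LTI recurrence step by step (inference-style RNN execution)."""
    d = A.shape[0]
    m = np.zeros(d)
    states = []
    for x_t in x:                       # x: (T,) scalar input sequence
        m = A @ m + B.flatten() * x_t   # m_t = A m_{t-1} + B x_t
        states.append(m.copy())
    return np.stack(states)             # (T, d)

def parallel_lti(A, B, x):
    """Compute all states at once via m_t = sum_{j<=t} A^(t-j) B x_j."""
    T, d = len(x), A.shape[0]
    # Impulse response H[k] = A^k B for k = 0..T-1, built once up front.
    H = np.zeros((T, d))
    h = B.flatten().copy()
    for k in range(T):
        H[k] = h
        h = A @ h
    # Lower-triangular kernel K[t, j] = H[t-j] for t >= j, else 0;
    # the contraction over j is one big matmul a GPU evaluates in parallel.
    idx = np.arange(T)
    mask = idx[:, None] >= idx[None, :]
    K = np.where(mask[..., None], H[np.abs(idx[:, None] - idx[None, :])], 0.0)
    return np.einsum('tjd,j->td', K, x)

rng = np.random.default_rng(0)
d, T = 4, 16
A = rng.normal(scale=0.3, size=(d, d))
B = rng.normal(size=(d, 1))
x = rng.normal(size=T)

# Both executions produce the same state trajectory.
assert np.allclose(sequential_lti(A, B, x), parallel_lti(A, B, x))
```

In the actual LMU the A and B matrices are fixed by the Legendre/LTI derivation rather than random, which is precisely what makes this precomputation valid during training.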

Narsimha Chilkuri, Chris Eliasmith • 2021

Related benchmarks

Task                                      Dataset                                    Metric                  Result   Rank
Natural Language Inference                SNLI (test)                                Accuracy                78.85    681
Sentiment Analysis                        IMDB (test)                                Accuracy                93.2     248
Character-level Language Modeling         text8 (test)                               BPC                     1.61     128
Pixel-by-pixel Image Classification       Permuted Sequential MNIST (pMNIST) (test)  Accuracy                98.49    79
Sequential Image Classification           PMNIST (test)                              Accuracy (Test)         98.45    77
Sequential Image Classification           S-MNIST (test)                             Accuracy                98.49    70
Pixel-level 1-D Image Classification      Sequential MNIST (test)                    Accuracy                98.49    53
Paraphrase Detection                      QQP (test)                                 Accuracy                86.95    51
Permuted Sequential Image Classification  PS-MNIST (test)                            Accuracy                98.49    18
Machine Translation                       IWSLT'15 En-Vi TED tst2013 (test)          BLEU (Case Sensitive)   25.5     2
