
Can recurrent neural networks warp time?

About

Successful recurrent models such as long short-term memories (LSTMs) and gated recurrent units (GRUs) use ad hoc gating mechanisms. Empirically these models have been found to improve the learning of medium to long term temporal dependencies and to help with vanishing gradient issues. We prove that learnable gates in a recurrent model formally provide quasi-invariance to general time transformations in the input data. We recover part of the LSTM architecture from a simple axiomatic approach. This result leads to a new way of initializing gate biases in LSTMs and GRUs. Experimentally, this new chrono initialization is shown to greatly improve learning of long term dependencies, with minimal implementation effort.
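The chrono initialization referenced above draws the forget-gate bias as log(U([1, T_max - 1])) and sets the input-gate bias to its negation, where T_max is the longest dependency range expected in the data. Below is a minimal PyTorch sketch of this scheme; the helper name `chrono_init_lstm` and the use of `torch.nn.LSTM` are illustrative choices, not part of the paper.

```python
import torch
import torch.nn as nn

def chrono_init_lstm(lstm: nn.LSTM, t_max: float) -> None:
    """Chrono initialization of LSTM gate biases (illustrative sketch).

    Forget-gate biases are drawn as log(U[1, t_max - 1]) and input-gate
    biases are set to their negation, so the gates start out retaining
    information over time scales roughly spanning [1, t_max].
    """
    hidden = lstm.hidden_size
    for name, bias in lstm.named_parameters():
        if not name.startswith("bias"):
            continue
        with torch.no_grad():
            bias.zero_()
            # PyTorch packs gate biases as [input, forget, cell, output];
            # bias_ih and bias_hh are summed, so only bias_ih is set here.
            if name.startswith("bias_ih"):
                b_f = torch.log(torch.empty(hidden).uniform_(1.0, t_max - 1.0))
                bias[hidden:2 * hidden].copy_(b_f)   # forget gate: b_f
                bias[:hidden].copy_(-b_f)            # input gate: b_i = -b_f

# Example: initialize for dependencies spanning up to ~1000 time steps.
lstm = nn.LSTM(input_size=64, hidden_size=128)
chrono_init_lstm(lstm, t_max=1000.0)
```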

Corentin Tallec, Yann Ollivier · 2018

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy 75.4 | 164 |
| Word-level Language Modeling | WikiText-103 word-level (test) | Perplexity 35.8 | 65 |
| Sequential Image Classification | MNIST ordered pixel-by-pixel 1.0 (test) | Accuracy 94.6 | 32 |
| Sequential Image Recognition | sMNIST | Test Accuracy 98.9 | 16 |
| Heart-rate prediction | PPG data, TSR archive (test) | Test L2 Error 3.31 | 13 |
| Sequential Image Recognition | nCIFAR-10 | Test Accuracy 55.9 | 8 |
| Language Modeling | WikiText-103 word-level (val) | Perplexity 34.3 | 7 |
