
Going Beyond Linear Transformers with Recurrent Fast Weight Programmers

About

Transformers with linearised attention ("linear Transformers") have demonstrated the practical scalability and effectiveness of outer product-based Fast Weight Programmers (FWPs) from the '90s. However, the original FWP formulation is more general than that of linear Transformers: a slow neural network (NN) continually reprograms the weights of a fast NN with arbitrary architecture. In existing linear Transformers, both NNs are feedforward and consist of a single layer. Here we explore new variations by adding recurrence to the slow and fast nets. We evaluate our novel recurrent FWPs (RFWPs) on two synthetic algorithmic tasks (code execution and sequential ListOps), on Wikitext-103 language models, and in the Atari 2600 2D game environment. Our models exhibit properties of both Transformers and RNNs. In the reinforcement learning setting, we report large improvements over LSTM in several Atari games. Our code is public.
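The core mechanism described above, where a slow net emits key/value/query projections and reprograms a fast net via outer-product updates, can be sketched in a few lines. This is a minimal illustrative sketch of the non-recurrent baseline (the linear-Transformer special case, with both nets being single linear layers), not the authors' implementation; all dimensions, parameter names, and the ELU+1 feature map are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_key, d_val = 8, 16, 16

# "Slow" net: a single linear layer producing key, value, and query
# projections from each input (hypothetical parameters for illustration).
W_k = rng.standard_normal((d_key, d_in)) * 0.1
W_v = rng.standard_normal((d_val, d_in)) * 0.1
W_q = rng.standard_normal((d_key, d_in)) * 0.1

def phi(x):
    # A positive feature map (ELU + 1), as commonly used in linear attention.
    return np.where(x > 0, x + 1.0, np.exp(x))

W_fast = np.zeros((d_val, d_key))  # "fast" weights, reprogrammed at every step
outputs = []
for t in range(5):
    x = rng.standard_normal(d_in)
    k, v, q = phi(W_k @ x), W_v @ x, phi(W_q @ x)
    W_fast += np.outer(v, k)       # outer-product fast weight update
    outputs.append(W_fast @ q)     # fast net: a single linear layer

print(np.array(outputs).shape)
```

The RFWP variants in the paper extend this template by making the slow and/or fast net recurrent, i.e. feeding previous outputs or states back into the projections above.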

Kazuki Irie, Imanol Schlag, Róbert Csordás, Jürgen Schmidhuber • 2021

Related benchmarks

Task                     Dataset                                 Result                  Rank
Language Modeling        WikiText-103 small setting (val)        Perplexity 31.8         10
Language Modeling        WikiText-103 small setting (test)       Perplexity 32.8         10
Reinforcement Learning   Atari 2600 (test)                       Alien Score 3.42e+3     10
Sequential ListOps       Sequential ListOps depth 15 (test)      Accuracy 79.2           6
Code Execution           Code Exec 3 variables (test)            Accuracy 92.6           6
Code Execution           Code Exec 5 variables (test)            Accuracy 85.1           6
Reinforcement Learning   Atari 2600 (test)                       Alien 1.22e+4           6
Sequential ListOps       Sequential ListOps depth 10 (test)      Accuracy 0.836          6
