
Linear Transformers Are Secretly Fast Weight Programmers

About

We show the formal equivalence of linearised self-attention mechanisms and fast weight controllers from the early '90s, where a "slow" neural net learns by gradient descent to program the "fast weights" of another net through sequences of elementary programming instructions which are additive outer products of self-invented activation patterns (today called keys and values). Such Fast Weight Programmers (FWPs) learn to manipulate the contents of a finite memory and dynamically interact with it. We infer a memory capacity limitation of recent linearised softmax attention variants, and replace the purely additive outer products by a delta rule-like programming instruction, such that the FWP can more easily learn to correct the current mapping from keys to values. The FWP also learns to compute dynamically changing learning rates. We also propose a new kernel function to linearise attention which balances simplicity and effectiveness. We conduct experiments on synthetic retrieval problems as well as standard machine translation and language modelling tasks which demonstrate the benefits of our methods.
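The two programming instructions described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's implementation: the feature map below is ELU(x) + 1 rather than the DPFP kernel the paper proposes, the attention normaliser is replaced by plain key normalisation, and the learning rate `beta` is a fixed scalar here, whereas in the paper the slow net produces it dynamically. The function names (`write_additive`, `write_delta`, `read`) are illustrative.

```python
import numpy as np

def feat(x):
    # Feature map linearising softmax attention, normalised to unit length.
    # Assumption: ELU(x) + 1 stands in for the paper's DPFP kernel.
    f = np.where(x > 0.0, x + 1.0, np.exp(x))
    return f / np.linalg.norm(f)

def write_additive(W, k, v):
    # Linear-attention step = additive outer-product fast weight update.
    # Writing a second value under the same key ADDS to memory; it does
    # not replace the old value (the capacity limitation discussed above).
    return W + np.outer(v, feat(k))

def write_delta(W, k, v, beta):
    # Delta rule-like programming instruction: retrieve the value
    # currently stored under key k, then interpolate it towards v.
    k_feat = feat(k)
    v_old = W @ k_feat                     # what the fast net currently returns
    return W + beta * np.outer(v - v_old, k_feat)

def read(W, q):
    # Query the fast weight memory with a key-matched feature vector.
    return W @ feat(q)
```

With `beta = 1` the delta-rule write exactly replaces the stored value for a (normalised) key, while the purely additive write superposes old and new values; that difference is what lets the delta-rule FWP correct its key-to-value mapping instead of accumulating interference.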

Imanol Schlag, Kazuki Irie, Jürgen Schmidhuber • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText-103 (test) | Perplexity | 36.659 | 524 |
| Language Modeling | WikiText-103 (val) | PPL | 35.64 | 180 |
| Reinforcement Learning | Atari 2600 (test) | Alien Score | 4.70e+3 | 10 |
| Language Modeling | WikiText-103 small setting (test) | Perplexity | 35.2 | 10 |
| Language Modeling | WikiText-103 small setting (val) | Perplexity | 34.1 | 10 |
| Reinforcement Learning | Atari 2600 (test) | Alien Score | 1.51e+4 | 6 |
| Sequential ListOps | Sequential ListOps depth 10 (test) | Accuracy | 0.857 | 6 |
| Sequential ListOps | Sequential ListOps depth 15 (test) | Accuracy | 77.6 | 6 |
| Code Execution | Code Exec 3 variables (test) | Accuracy | 90.7 | 6 |
| Code Execution | Code Exec 5 variables (test) | Accuracy | 61.4 | 6 |
