Looped Transformers as Programmable Computers
About
We present a framework for using transformer networks as universal computers by programming them with specific weights and placing them in a loop. Our input sequence acts as a punchcard, consisting of instructions and memory for data read/writes. We demonstrate that a constant number of encoder layers can emulate basic computing blocks, including embedding edit operations, non-linear functions, function calls, program counters, and conditional branches. Using these building blocks, we emulate a small instruction-set computer. This allows us to map iterative algorithms to programs that can be executed by a looped, 13-layer transformer. We show how this transformer, instructed by its input, can emulate a basic calculator, a basic linear algebra library, and in-context learning algorithms that employ backpropagation. Our work highlights the versatility of the attention mechanism, and demonstrates that even shallow transformers can execute full-fledged, general-purpose programs.
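To make the looped execution model concrete, here is a minimal PyTorch sketch of the harness described above: a fixed-weight, 13-layer encoder block applied repeatedly to an input sequence that doubles as a punchcard. All dimensions, the random weights, and the step budget below are illustrative placeholders; the paper constructs the weights analytically rather than learning or sampling them.

```python
import torch
import torch.nn as nn

# Illustrative sizes only; the paper's hand-constructed embeddings differ.
d_model, n_heads, n_layers, seq_len = 64, 4, 13, 32

# One 13-layer encoder whose weights stay fixed across iterations.
layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                   batch_first=True)
computer = nn.TransformerEncoder(layer, num_layers=n_layers)

# The input sequence is the "punchcard": some positions encode instructions,
# the rest serve as memory that the program reads from and writes to.
punchcard = torch.randn(1, seq_len, d_model)

# Looping the block turns one forward pass into one machine step; how many
# iterations are needed depends on the program being emulated.
with torch.no_grad():
    state = punchcard
    for _ in range(16):  # placeholder step budget
        state = computer(state)
```

In the paper, the fixed weights are built so that each pass implements one step of the emulated instruction set (program-counter update, memory read/write, conditional branch); the loop above shows only the control flow, not those constructions.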
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Downstream Task Evaluation | Multiple Downstream Datasets (LAMBADA, ARC, WinoGrande, PIQA, HellaSwag, SciQ, RACE) | LAMBADA (OpenAI): 40.4 | 12 |