
Griffin: Mixing Gated Linear Recurrences with Local Attention for Efficient Language Models

About

Recurrent neural networks (RNNs) have fast inference and scale efficiently on long sequences, but they are difficult to train and hard to scale. We propose Hawk, an RNN with gated linear recurrences, and Griffin, a hybrid model that mixes gated linear recurrences with local attention. Hawk exceeds the reported performance of Mamba on downstream tasks, while Griffin matches the performance of Llama-2 despite being trained on over 6 times fewer tokens. We also show that Griffin can extrapolate on sequences significantly longer than those seen during training. Our models match the hardware efficiency of Transformers during training, and during inference they have lower latency and significantly higher throughput. We scale Griffin up to 14B parameters, and explain how to shard our models for efficient distributed training.
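To make the core idea concrete, below is a toy sketch of a diagonal gated linear recurrence. It is a deliberate simplification, not the paper's actual RG-LRU layer: the recurrence here is element-wise, `h_t = a_t * h_{t-1} + (1 - a_t) * x_t`, with per-timestep gates `a_t` in (0, 1). The function name and interpolation form are illustrative assumptions.

```python
import numpy as np

def gated_linear_recurrence(x, gates):
    """Toy diagonal gated linear recurrence (illustrative simplification,
    not the paper's exact RG-LRU):

        h_t = a_t * h_{t-1} + (1 - a_t) * x_t

    x, gates: arrays of shape (seq_len, dim), with gates in (0, 1).
    Returns the hidden state at every timestep.
    """
    h = np.zeros(x.shape[1])
    out = np.empty_like(x)
    for t in range(x.shape[0]):
        a = gates[t]
        # Element-wise update: no matrix-matrix recurrence, which is
        # what makes linear recurrences cheap on long sequences.
        h = a * h + (1.0 - a) * x[t]
        out[t] = h
    return out

# A gate near 0 overwrites the state with the input;
# a gate near 1 carries the state forward almost unchanged.
x = np.ones((4, 2))
out = gated_linear_recurrence(x, np.full((4, 2), 0.5))
```

Because the recurrence is linear and diagonal in the state, it can be evaluated with a parallel scan during training, while inference remains a constant-size state update per token; this is the efficiency property the abstract refers to.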

Soham De, Samuel L. Smith, Anushan Fernando, Aleksandar Botev, George Cristian-Muraru, Albert Gu, Ruba Haroun, Leonard Berrada, Yutian Chen, Srivatsan Srinivasan, Guillaume Desjardins, Arnaud Doucet, David Budden, Yee Whye Teh, Razvan Pascanu, Nando De Freitas, Caglar Gulcehre • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Long-range sequence modeling | Long Range Arena (LRA) | Text Accuracy: 71.75 | 164 |
| Physical Commonsense Reasoning | PIQA (val) | Accuracy: 66.1 | 113 |
| Language Modeling | PG-19 | -- | 96 |
| Commonsense Reasoning | WinoGrande (val) | Accuracy: 52.6 | 87 |
| Question Answering | ARC Challenge (test) | Accuracy: 25.4 | 63 |
| Word Prediction | LAMBADA (test) | Accuracy: 37.6 | 53 |
| Multiple-choice Question Answering | ARC Easy (test) | Accuracy: 48.4 | 50 |
| Hierarchical Reasoning | ListOps Long Range Arena (test) | Accuracy: 32.34 | 26 |
| Commonsense Reasoning | HellaSwag (val) | Accuracy: 38.8 | 25 |
| Language Modeling | Pre-training corpus (train) | -- | 20 |

Showing 10 of 14 rows.
