
Hyena Hierarchy: Towards Larger Convolutional Language Models

About

Recent advances in deep learning have relied heavily on the use of large Transformers due to their ability to learn at scale. However, the core building block of Transformers, the attention operator, exhibits quadratic cost in sequence length, limiting the amount of context accessible. Existing subquadratic methods based on low-rank and sparse approximations need to be combined with dense attention layers to match Transformers, indicating a gap in capability. In this work, we propose Hyena, a subquadratic drop-in replacement for attention constructed by interleaving implicitly parametrized long convolutions and data-controlled gating. In recall and reasoning tasks on sequences of thousands to hundreds of thousands of tokens, Hyena improves accuracy by more than 50 points over operators relying on state-spaces and other implicit and explicit methods, matching attention-based models. We set a new state-of-the-art for dense-attention-free architectures on language modeling in standard datasets (WikiText103 and The Pile), reaching Transformer quality with a 20% reduction in training compute required at sequence length 2K. Hyena operators are twice as fast as highly optimized attention at sequence length 8K, and 100x faster at sequence length 64K.
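The abstract's core recipe is interleaving long convolutions with data-controlled (elementwise) gating. Below is a minimal NumPy sketch of an order-2 operator in that style, not the authors' implementation: the projection matrix `W` stands in for the learned input projections, and `h1`/`h2` stand in for the implicitly parametrized long filters (here just arrays). The convolution is computed via FFT, which is what makes the operator subquadratic, O(L log L) rather than the O(L^2) of attention.

```python
import numpy as np

def causal_conv(h, x):
    """FFT-based causal convolution along the sequence axis, O(L log L)."""
    L = x.shape[0]
    n = 2 * L  # zero-pad so circular convolution becomes linear (causal) convolution
    H = np.fft.rfft(h, n, axis=0)
    X = np.fft.rfft(x, n, axis=0)
    return np.fft.irfft(H * X, n, axis=0)[:L]

def hyena_order2(u, W, h1, h2):
    """Toy order-2 Hyena-style operator on u of shape (L, d).

    Interleaves two long convolutions with two elementwise gates:
        y = x2 * (h2 * (x1 * (h1 * v)))
    where v, x1, x2 are pointwise projections of the input and '*' on
    filters denotes causal convolution.
    """
    v, x1, x2 = np.split(u @ W, 3, axis=-1)  # three projections, each (L, d)
    y = x1 * causal_conv(h1, v)              # gate after first long convolution
    return x2 * causal_conv(h2, y)           # gate after second long convolution
```

Because both the projections and the gates act pointwise in time and the convolution is causal, the whole operator is causal, so it can be used autoregressively like masked attention.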

Michael Poli, Stefano Massaroli, Eric Nguyen, Daniel Y. Fu, Tri Dao, Stephen Baccus, Yoshua Bengio, Stefano Ermon, Christopher Ré • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Modeling | WikiText-103 (test) | Perplexity | 18.5 | 524
Synthetic in-context reasoning | MAD synthetic (test) | Compression Score | 44.8 | 24
Multi-Query Associative Recall (MQAR) | MQAR Shuffle v1 | Accuracy | 22.51 | 14
Autoregressive Language Modeling | WikiText-103 | PPL | 18.5 | 9
Core Promoter Detection | CPD | Score (all) | 0.3695 | 8
Promoter Detection | PD | Score (all) | 47.38 | 8
Splice Site Prediction | SSP | Reconstruction | 0.7267 | 8
Transcription Factor Prediction | TFP | Performance Score 0 | 62.3 | 8
Multi-Query Associative Recall (MQAR) | MQAR K2V2-Robustness v1 | Accuracy | 65.92 | 7
Multi-Query Associative Recall (MQAR) | MQAR K2V2 v1 | Accuracy | 77.62 | 7

(Showing 10 of 15 rows.)
