
Hash Layers For Large Sparse Models

About

We investigate the training of sparse layers that use different parameters for different inputs based on hashing in large Transformer models. Specifically, we modify the feedforward layer to hash to different sets of weights depending on the current token, over all tokens in the sequence. We show that this procedure either outperforms or is competitive with learning-to-route mixture-of-expert methods such as Switch Transformers and BASE Layers, while requiring no routing parameters or extra terms in the objective function such as a load balancing loss, and no sophisticated assignment algorithm. We study the performance of different hashing techniques, hash sizes and input features, and show that balanced and random hashes focused on the most local features work best, compared to either learning clusters or using longer-range context. We show our approach works well both on large language modeling and dialogue tasks, and on downstream fine-tuning tasks.
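The core idea — replacing learned routing with a fixed, balanced random hash from token IDs to expert feedforward blocks — can be sketched as follows. This is an illustrative simplification, not the authors' implementation: the array names, dimensions, and the permutation-based balanced hash are assumptions for the example, and a real model would train the expert weights by backpropagation.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, D_MODEL, D_FF, N_EXPERTS = 100, 8, 16, 4

# Fixed balanced random hash: assign tokens to experts once, before training,
# so each expert receives an equal share of the vocabulary. No routing
# parameters are learned and no load-balancing loss is needed.
perm = rng.permutation(VOCAB)
token_to_expert = np.empty(VOCAB, dtype=int)
token_to_expert[perm] = np.arange(VOCAB) % N_EXPERTS

# One feedforward block (two linear maps with a ReLU) per expert.
W1 = rng.standard_normal((N_EXPERTS, D_MODEL, D_FF)) * 0.02
W2 = rng.standard_normal((N_EXPERTS, D_FF, D_MODEL)) * 0.02

def hash_layer(hidden, token_ids):
    """Route each position's hidden state through the expert FFN
    that its current token ID hashes to."""
    out = np.empty_like(hidden)
    for i, (h, t) in enumerate(zip(hidden, token_ids)):
        e = token_to_expert[t]
        out[i] = np.maximum(h @ W1[e], 0.0) @ W2[e]
    return out

tokens = np.array([3, 17, 3, 42])          # same token -> same expert, always
hidden = rng.standard_normal((len(tokens), D_MODEL))
out = hash_layer(hidden, tokens)
print(out.shape)  # (4, 8)
```

Because the mapping depends only on the current token, routing is deterministic and free: there is no router network, no auxiliary objective, and no assignment algorithm, which is the contrast the paper draws with Switch Transformers and BASE Layers.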

Stephen Roller, Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Commonsense Reasoning | HellaSwag | Accuracy | 29.68 | 1460 |
| Question Answering | ARC Challenge | Accuracy | 19.28 | 749 |
| Commonsense Reasoning | PIQA | Accuracy | 63.06 | 647 |
| Language Modeling | WikiText-103 (test) | Perplexity | 21.63 | 524 |
| Question Answering | ARC-E | Accuracy | 45.45 | 242 |
| Reading Comprehension | BoolQ | Accuracy | 54.95 | 219 |
| Language Modeling | LAMBADA | Accuracy | 31.44 | 183 |
| Language Modeling | WikiText-103 (val) | Perplexity | 32.32 | 180 |
| Reading Comprehension | RACE | Accuracy | 27.66 | 151 |
| Physical Commonsense Reasoning | PIQA (val) | Accuracy | 68.4 | 113 |
Showing 10 of 33 rows
