DenseFormer: Enhancing Information Flow in Transformers via Depth Weighted Averaging

About

The transformer architecture by Vaswani et al. (2017) is now ubiquitous across application domains, from natural language processing to speech processing and image understanding. We propose DenseFormer, a simple modification to the standard architecture that improves the model's perplexity without meaningfully increasing its size -- it adds only a few thousand parameters even for large-scale models in the 100B-parameter range. Our approach relies on an additional averaging step after each transformer block, which computes a weighted average of the current and past representations -- we refer to this operation as Depth-Weighted-Average (DWA). The learned DWA weights exhibit coherent patterns of information flow, revealing strong and structured reuse of activations from distant layers. Experiments demonstrate that DenseFormer is more data efficient, reaching the same perplexity as much deeper transformer models, and that for the same perplexity, these new models outperform transformer baselines in memory efficiency and inference time.
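The DWA step described above can be sketched in a few lines. The following is a minimal, hedged illustration (not the authors' implementation): it assumes each block's output is appended to a history of per-depth representations, and a learned weight vector mixes the whole history into the representation passed to the next block. The function name and the shape of `dwa_weights` are assumptions made for this sketch; in the paper, weights are initialized so that DWA starts as the identity (weight 1 on the newest representation, 0 elsewhere).

```python
import numpy as np

def denseformer_forward(x, blocks, dwa_weights):
    """Sketch of a DenseFormer forward pass with Depth-Weighted-Average.

    x           : input embeddings (any numpy array)
    blocks      : list of callables standing in for transformer blocks
    dwa_weights : dwa_weights[i] has length i + 2 and mixes the initial
                  embedding plus all i + 1 representations produced so far
    """
    history = [x]  # history[0] is the embedding; later entries are DWA outputs
    for i, block in enumerate(blocks):
        y = block(history[-1])          # standard transformer block
        history.append(y)
        # Depth-Weighted-Average over every representation so far,
        # including the block output just computed.
        mixed = sum(w * h for w, h in zip(dwa_weights[i], history))
        history[-1] = mixed             # the DWA output feeds the next block
    return history[-1]

# Toy usage with stand-in "blocks": one block that doubles its input,
# and DWA weights that average the embedding and the block output equally.
x = np.ones(4)
out = denseformer_forward(x, [lambda z: 2.0 * z], [[0.5, 0.5]])
```

With identity-initialized weights (`[0.0, 1.0]` here), the sketch reduces exactly to a plain stack of transformer blocks, which is why the modification adds essentially no parameters: one scalar per (layer, earlier-depth) pair.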

Matteo Pagliardini, Amirkeivan Mohtashami, Francois Fleuret, Martin Jaggi • 2024

Related benchmarks

Task                   | Dataset          | Metric     | Result | Rank
Language Modeling      | LAMBADA          | Accuracy   | 48.2   | 268
Common Sense Reasoning | HellaSwag        | Accuracy   | 36.6   | 213
Common Sense Reasoning | BoolQ            | Accuracy   | 32.1   | 212
Reasoning              | ARC Easy         | Accuracy   | 56.5   | 187
Reasoning              | PIQA             | Accuracy   | 65.6   | 145
Reasoning              | WinoGrande (WG)  | Accuracy   | 49.3   | 135
Reasoning              | OpenBookQA       | Accuracy   | 17.6   | 77
Language Modeling      | Wikitext (test)  | Perplexity | 28     | 62
Reasoning              | ARC Challenge    | Accuracy   | 48.7   | 45
Language Modeling      | PG-19 (val)      | Perplexity | 18.43  | 19

(10 of 12 rows shown)
