
Position-Agnostic Pre-Projection for Transformer Attention: Nonlinear Feature Construction and Content Skip Before Q/K/V

About

We propose two complementary modifications to transformer attention blocks. First, a non-linear pre-projection MLP is inserted between layer norm and Q/K/V projections, constructing richer features in a position-agnostic manner before any positional encoding is applied. Second, a content skip connection routes the pre-projection's features around the attention mechanism, allowing content information to bypass position-aware attention where beneficial. In frozen-probe experiments on Pythia-160M and 410M, the combined approach achieves the strongest results across methods: +40.6% LAMBADA accuracy and -39% perplexity at 160M scale. Learned skip connection weights reveal a consistent pattern across model sizes: later transformer layers activate the content bypass more strongly than earlier layers, suggesting that deeper layers benefit from content information that does not pass through positional attention. All modifications add no K/V cache overhead.
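The two modifications described above can be sketched in a few lines. The following is a minimal single-head NumPy illustration, not the authors' implementation: the weight names (`W_pre1`, `W_pre2`, etc.), the ReLU pre-projection MLP, and the sigmoid gating of the learned skip weight are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_block(x, params, alpha):
    """One attention block with a position-agnostic pre-projection and a
    content skip (illustrative sketch; parameter names are hypothetical).

    x:      (seq, d) layer-normed hidden states, before positional encoding
    params: dict of (d, d) weight matrices
    alpha:  learned per-layer scalar controlling the content skip
    """
    # 1. Non-linear pre-projection MLP inserted between layer norm and
    #    Q/K/V, building richer features with no positional information.
    h = np.maximum(0.0, x @ params["W_pre1"]) @ params["W_pre2"]

    # 2. Standard scaled dot-product attention over the pre-projected
    #    features (positional encoding would enter here in a full model).
    q, k, v = h @ params["W_q"], h @ params["W_k"], h @ params["W_v"]
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1])) @ v

    # 3. Content skip: route the pre-projection's features around the
    #    attention mechanism, gated by a learned weight (sigmoid keeps
    #    the gate in (0, 1); later layers would learn larger alphas).
    gate = 1.0 / (1.0 + np.exp(-alpha))
    return attn @ params["W_o"] + gate * h

# Tiny smoke test with random weights.
rng = np.random.default_rng(0)
d = 8
params = {k: rng.normal(scale=0.1, size=(d, d))
          for k in ["W_pre1", "W_pre2", "W_q", "W_k", "W_v", "W_o"]}
out = attention_block(rng.normal(size=(5, d)), params, alpha=2.0)
```

Note that both additions operate only on the residual-stream input: neither introduces new keys or values, which is consistent with the claim of zero K/V cache overhead.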

Chirag Shinde • 2026

Related benchmarks

Task                  | Dataset      | Metric   | Result | Rank
Question Answering    | ARC Easy     | -        | -      | 597
Commonsense Reasoning | HellaSwag    | Accuracy | 40     | 350
Language Modeling     | WikiText-103 | PPL      | 17     | 189
Language Modeling     | LAMBADA      | Accuracy | 48.4   | 76
