
Optimal Decay Spectra for Linear Recurrences

About

Linear recurrent models offer linear-time sequence processing but often suffer from suboptimal long-range memory. We trace this to the decay spectrum: for $N$ channels, random initialization collapses the minimum spectral gap to $O(N^{-2})$, yielding sub-exponential error $\exp(-\Omega(N/\log N))$; linear spacing avoids collapse but degrades to $\exp(-O(N/\sqrt{T}))$, practically algebraic over long contexts. We introduce Position-Adaptive Spectral Tapering (PoST), an architecture-agnostic framework combining two mechanisms: (1) Spectral Reparameterization, which structurally enforces geometrically spaced log-decay rates, proven minimax optimal at rate $O(\exp(-cN/\log T))$; and (2) Position-Adaptive Scaling, the provably unique mechanism that eliminates the scale mismatch of static spectra (where only $N\log t/\log T$ of $N$ channels are effective at position $t$) by stretching the spectrum to the actual dependency range, sharpening the rate to $O(\exp(-cN/\log t))$. This scaling natively induces fractional invariance: the impulse response becomes scale-free, with channels interpolating between relative and absolute temporal coordinates. PoST integrates into any diagonal linear recurrence without overhead. We instantiate it across Mamba-2, RWKV-7, Gated DeltaNet, Gated Linear Attention, and RetNet. Pre-training at 180M-440M scales shows consistent zero-shot language modeling improvements, significant long-context retrieval gains for Mamba-2 (MQAR and NIAH), and competitive or improved performance across other architectures. Code: https://github.com/SiLifen/PoST.
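The two mechanisms described above can be illustrated with a small sketch. This is not the paper's implementation; the function names, the geometric-timescale construction, and the exponent rescaling formula are illustrative assumptions consistent with the abstract's description (geometrically spaced log-decay rates covering the training horizon $T$, then stretched to the actual position $t$):

```python
import numpy as np

def geometric_log_decays(N, T):
    """Hypothetical Spectral Reparameterization: N log-decay rates whose
    timescales are geometrically spaced from 1 up to the training context
    length T, so successive channels differ by a constant ratio."""
    taus = np.geomspace(1.0, T, N)   # channel timescales 1, ..., T
    return 1.0 / taus                # log-decay rate ~ inverse timescale

def position_adaptive_decays(base_decays, t, T):
    """Hypothetical Position-Adaptive Scaling: with a static spectrum only
    ~ N*log(t)/log(T) channels are effective at position t; rescaling the
    exponents maps the spectrum designed for horizon T onto horizon t,
    so all N channels cover timescales 1, ..., t."""
    return base_decays ** (np.log(t) / np.log(T))

# Static spectrum for a training horizon of T = 1024 steps, N = 8 channels.
d = geometric_log_decays(8, 1024)
# At position t = 32, the stretched spectrum spans timescales 1..32:
d_t = position_adaptive_decays(d, 32, 1024)
```

At `t = T` the adaptive spectrum coincides with the static one; at shorter positions every channel is rescaled to a usable timescale, which is the claimed fix for the scale mismatch of static spectra.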

Yang Cao • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Commonsense Reasoning | WinoGrande | – | 1085 |
| Question Answering | ARC Easy | – | 597 |
| Language Modeling | LAMBADA | Accuracy 28.3 | 268 |
| Question Answering | ARC Challenge | Accuracy (ARC) 26.2 | 142 |
| Question Answering | OpenBookQA | Normalized Accuracy 32.6 | 102 |
| Commonsense Reasoning | PIQA | Accuracy 65.3 | 71 |
| Commonsense Reasoning | HellaSwag | Accuracy (acc_n) 37.5 | 8 |
| Long-context Retrieval | NIAH (Needle-In-A-Haystack) Retrieval Variants | Single-1 Acc (1K) 99.8 | 8 |
