Learning to Remember, Learn, and Forget in Attention-Based Models

About

In-Context Learning (ICL) in transformers acts as an online associative memory and is believed to underpin their high performance on complex sequence processing tasks. However, in gated linear attention models, this memory has a fixed capacity and is prone to interference, especially for long sequences. We propose Palimpsa, a self-attention model that views ICL as a continual learning problem that must address a stability-plasticity dilemma. Palimpsa uses Bayesian metaplasticity, where the plasticity of each attention state is tied to an importance state grounded by a prior distribution that captures accumulated knowledge. We demonstrate that various gated linear attention models emerge as specific architecture choices and posterior approximations, and that Mamba2 is a special case of Palimpsa where forgetting dominates. This theoretical link enables the transformation of any non-metaplastic model into a metaplastic one, significantly expanding its memory capacity. Our experiments show that Palimpsa consistently outperforms baselines on the Multi-Query Associative Recall (MQAR) benchmark and on Commonsense Reasoning tasks.
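The idea of tying each attention state's plasticity to an accumulated importance state can be sketched as follows. This is an illustrative reconstruction from the abstract only, not the paper's actual update rule: the importance proxy, the shared decay gate `alpha`, and all function and variable names are assumptions made for exposition.

```python
import numpy as np

def metaplastic_linear_attention(keys, values, queries, alpha=0.9):
    """Sketch of a gated linear-attention memory S whose per-slot write
    strength ("plasticity") is modulated by an importance state U, in the
    spirit of Bayesian metaplasticity: slots holding important accumulated
    associations are overwritten cautiously (stability), while unused
    slots remain plastic. Not the paper's exact equations."""
    d_k, d_v = keys.shape[1], values.shape[1]
    S = np.zeros((d_v, d_k))   # associative memory (value x key)
    U = np.zeros((d_v, d_k))   # importance state gating plasticity
    outputs = []
    for k, v, q in zip(keys, values, queries):
        plasticity = 1.0 / (1.0 + U)        # high importance -> low plasticity
        delta = np.outer(v, k)              # new association to store
        S = alpha * S + plasticity * delta  # gated, metaplastic write
        U = alpha * U + np.abs(delta)       # accumulate importance (crude proxy)
        outputs.append(S @ q)               # linear-attention read-out
    return np.array(outputs)
```

With `U` held at zero this collapses to an ordinary gated (forgetting-only) linear-attention update, loosely mirroring the abstract's claim that non-metaplastic models such as Mamba2 arise as special cases where forgetting dominates.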

Djohan Bonnet, Jamie Lohoff, Jan Finkbeiner, Elidona Skhikerujah, Emre Neftci • 2026

Related benchmarks

Task                   Dataset        Metric    Result  Rank
Commonsense Reasoning  HellaSwag      Accuracy  51.63   1460
Commonsense Reasoning  WinoGrande     Accuracy  57.06   776
Commonsense Reasoning  PIQA           Accuracy  71.06   647
Language Modeling      WikiText       PPL       19.02   479
Language Modeling      LAMBADA        Accuracy  43.55   183
Commonsense Reasoning  ARC Challenge  Accuracy  34.64   132
Commonsense Reasoning  SocialIQA      Accuracy  41.61   97
Commonsense Reasoning  ARC Easy       Accuracy  67.97   52
