
Language Model Memory and Memory Models for Language

About

The ability of machine learning models to store input information in hidden-layer vector embeddings, analogous to the concept of 'memory', is widely employed but not well characterized. We find that language model embeddings typically contain relatively little input information regardless of data and compute scale during training. In contrast, embeddings from autoencoders trained for input regeneration are capable of nearly perfect memory formation. The substitution of memory embeddings for token sequences leads to substantial computational efficiencies, motivating the introduction of a parallelizable encoder-decoder memory model architecture. Upon causal training these models contain information-poor embeddings incapable of arbitrary information access, but by combining causal and information-retention objective functions they learn to form and decode information-rich memories. Training can be further streamlined by freezing a high-fidelity encoder, followed by a curriculum training approach in which decoders first learn to process memories and then learn to additionally predict next tokens. We introduce the perspective that next-token prediction training alone is poorly suited for accurate memory formation because the objective itself is non-invertible, motivating the use of combined objective functions for models where the entire input is not exposed.
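The abstract's central idea, combining a causal (next-token) objective with an information-retention (reconstruction) objective, can be sketched in a few lines. The following is a minimal numpy illustration, not the paper's implementation: the function names, the MSE choice for the retention term, and the mixing weight `lam` are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def causal_loss(logits, targets):
    # mean next-token cross-entropy over a sequence
    # logits: (seq_len, vocab_size), targets: (seq_len,) integer token ids
    probs = softmax(logits)
    n = targets.shape[0]
    return -np.log(probs[np.arange(n), targets]).mean()

def retention_loss(decoded, inputs):
    # reconstruction error between regenerated and original embeddings
    # (MSE chosen here for illustration; the paper's exact retention
    # objective may differ)
    return np.mean((decoded - inputs) ** 2)

def combined_loss(logits, targets, decoded, inputs, lam=1.0):
    # combined objective: causal term + weighted information-retention term
    return causal_loss(logits, targets) + lam * retention_loss(decoded, inputs)
```

With uniform logits the causal term reduces to log(vocab_size), and a perfect reconstruction zeroes the retention term, so the two components can be sanity-checked independently before training against the combined objective.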

Benjamin L. Badger • 2026

Related benchmarks

Task                                  Dataset                  Metric    Result   Rank
Commonsense Reasoning                 HellaSwag                Accuracy  27.71    1460
Language Understanding                MMLU                     Accuracy  23.09    756
Language Modeling                     WikiText                 --        --       479
Question Answering                    ARC Easy                 Accuracy  44.2     386
Long-context Language Understanding   LongBench                --        --       219
Language Modeling                     Lambada OpenAI           Accuracy  18.65    61
Information Extraction                SWDE                     Accuracy  0.81     12
Causal Language Modeling              CLM Eval                 Hr        0.684    6
Copy Task                             Copy Eval                Hr        94.7     6
Instruction Following                 IFEval strict instance   Accuracy  25.06    2
