
UltraEdit: Training-, Subject-, and Memory-Free Lifelong Editing in Language Models

About

Lifelong learning enables large language models (LLMs) to adapt to evolving information by continually updating their internal knowledge. An ideal system should support efficient, wide-ranging updates while preserving existing capabilities and ensuring reliable deployment. Model editing stands out as a promising solution for this goal, offering a focused and efficient way to revise a model's internal knowledge. Although recent paradigms have made notable progress, they often struggle to meet the demands of practical lifelong adaptation at scale. To bridge this gap, we propose UltraEdit, a training-, subject-, and memory-free approach that is well-suited for ultra-scalable, real-world lifelong model editing. UltraEdit fundamentally differs from traditional paradigms by computing parameter shifts in one step using only a hidden state and its gradient, making the approach simple yet efficient. To improve scalability in lifelong settings, UltraEdit employs a lifelong normalization strategy that continuously updates feature statistics across turns, allowing it to adapt to distributional shifts and maintain consistency over time. UltraEdit achieves editing speeds over 7x faster than the previous state-of-the-art method, which was also the fastest known approach, while using less than 1/4 the VRAM. This makes it the only method currently capable of editing a 7B LLM on a 24GB consumer-grade GPU. Furthermore, we construct UltraEditBench, the largest dataset in the field to date with over 2M editing pairs, and demonstrate that our method supports up to 2M edits while maintaining high accuracy. Comprehensive experiments on five datasets and six models show that UltraEdit consistently achieves superior performance across diverse model editing scenarios, taking a further step towards safe and scalable lifelong learning. Our code is available at: https://github.com/XiaojieGu/UltraEdit
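The two core ideas in the abstract — a one-step parameter shift computed from a hidden state and its gradient, and a "lifelong normalization" that keeps running feature statistics across edit turns — can be illustrated with a minimal sketch. This is not UltraEdit's actual implementation (see the linked repository for that); the class and function names are hypothetical, the normalizer uses a standard Welford-style streaming update, and the shift is shown as a simple rank-1 update for a linear layer:

```python
import math

class LifelongNormalizer:
    """Hypothetical sketch of lifelong normalization: feature statistics
    are updated continually across edit turns (Welford's online algorithm),
    so normalization can track distributional shift over many edits."""

    def __init__(self, dim, eps=1e-6):
        self.count = 0
        self.mean = [0.0] * dim
        self.m2 = [0.0] * dim  # running sum of squared deviations
        self.eps = eps

    def update(self, x):
        """Fold one hidden-state vector into the running statistics."""
        self.count += 1
        for i, xi in enumerate(x):
            delta = xi - self.mean[i]
            self.mean[i] += delta / self.count
            self.m2[i] += delta * (xi - self.mean[i])

    def normalize(self, x):
        """Standardize a vector with the statistics accumulated so far."""
        n = max(self.count - 1, 1)
        return [(xi - m) / math.sqrt(v / n + self.eps)
                for xi, m, v in zip(x, self.mean, self.m2)]

def one_step_shift(hidden, grad, lr=1.0):
    """Hypothetical one-step edit for a linear layer W: a rank-1 update
    delta_W[i][j] = -lr * grad[i] * hidden[j], built directly from a
    hidden state and the gradient of the edit loss w.r.t. the layer
    output -- no per-edit training loop, no stored memory of past edits."""
    return [[-lr * g * h for h in hidden] for g in grad]
```

In this sketch each incoming edit only touches the running statistics and produces one outer-product update, which is why such an approach can stay cheap in both time and memory as the number of edits grows.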

Xiaojie Gu, Ziying Huang, Jia-Chen Gu, Kai Zhang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | GSM8K | Math Score: 73 | 171 |
| Commonsense Reasoning | ARC-C | Accuracy: 46 | 51 |
| Model Editing | WikiBigEdit | MMLU: 69.3 | 34 |
| Model Editing | zsRE | Reliability: 22.7 | 26 |
| Model Editing | CounterFact | Reliability: 18.1 | 26 |
| Model Editing | zsRE | Reliability: 0.201 | 16 |
| Multi-task Language Understanding | MMLU | MMLU Score: 67.9 | 14 |
| Model Editing | ZsRE 3,000 samples (test) | Relational Score: 61.9 | 13 |
| Model Editing | CounterFact 3,000 samples (test) | Reliability: 2.80e+3 | 13 |
| Model Editing | WikiBigEdit 3,000 samples (test) | Reliability: 87.4 | 13 |

Showing 10 of 15 rows.
