
Aging with GRACE: Lifelong Model Editing with Discrete Key-Value Adaptors

About

Deployed language models decay over time due to shifting inputs, changing user needs, or emergent gaps in world knowledge. When such problems are identified, we want to make targeted edits while avoiding expensive retraining. However, current model editors, which modify targeted behaviors of pre-trained models, degrade model performance quickly over multiple, sequential edits. We propose GRACE, a lifelong model editing method that implements spot-fixes on streaming errors of a deployed model, ensuring minimal impact on unrelated inputs. GRACE writes new mappings into a pre-trained model's latent space, creating a discrete, local codebook of edits without altering model weights. This is the first method enabling thousands of sequential edits using only streaming errors. Our experiments on T5, BERT, and GPT models show GRACE's state-of-the-art performance in making and retaining edits, while generalizing to unseen inputs. Our code is available at https://www.github.com/thartvigsen/grace.
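The abstract's core mechanism, a discrete key-value codebook that intercepts a layer's hidden state, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class name, the epsilon-ball ("deferral radius") matching, and the radius-shrinking rule are assumptions based on the description above; keys are cached hidden states for edited inputs and values are replacement activations, so the base model's weights are never changed.

```python
import numpy as np

class GraceStyleAdaptor:
    """Hypothetical sketch of a GRACE-style discrete key-value codebook.

    Wraps one layer of a frozen model: if an incoming hidden state falls
    inside a stored entry's deferral radius, the cached value is returned
    in its place; otherwise the hidden state passes through untouched.
    """

    def __init__(self, init_radius=1.0):
        self.keys = []              # cached hidden states of edited inputs
        self.values = []            # replacement activations
        self.radii = []             # per-entry deferral radii
        self.init_radius = init_radius

    def add_edit(self, key, value):
        """Write a new edit into the codebook (no weight updates)."""
        key = np.asarray(key, dtype=float)
        # Shrink radii of nearby entries so edits stay local and disjoint
        # (an assumed conflict rule, for illustration only).
        for i, k in enumerate(self.keys):
            d = np.linalg.norm(key - k)
            if d < self.radii[i] + self.init_radius:
                self.radii[i] = max(d / 2.0, 1e-6)
        self.keys.append(key)
        self.values.append(np.asarray(value, dtype=float))
        self.radii.append(self.init_radius)

    def __call__(self, hidden):
        """Apply the codebook to one hidden state vector."""
        hidden = np.asarray(hidden, dtype=float)
        for k, v, r in zip(self.keys, self.values, self.radii):
            if np.linalg.norm(hidden - k) <= r:
                return v            # spot-fix: substitute the stored value
        return hidden               # unrelated input: leave untouched
```

Usage: after `adaptor.add_edit([1.0, 0.0], [0.0, 1.0])`, a nearby input such as `[1.1, 0.0]` is mapped to the stored value, while a distant input like `[5.0, 5.0]` passes through unchanged, which is how edits generalize locally without affecting unrelated inputs.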

Thomas Hartvigsen, Swami Sankaranarayanan, Hamid Palangi, Yoon Kim, Marzyeh Ghassemi • 2022

Related benchmarks

Task                                  | Dataset             | Metric     | Result  | Rank
Lifelong Free-text Knowledge Editing  | MRLF-Bench          | BLEU       | 66.82   | 140
Knowledge Editing                     | zsRE                | Generality | 51.3    | 110
Privacy Editing                       | TDE Email           | Leakage    | 0.00e+0 | 56
Privacy Editing                       | TDE URL             | Leakage    | 1       | 50
Vision-Language Model Editing         | FVQA 1.0 (test)     | Accuracy   | 98      | 48
VLM Editing                           | A-OKVQA 2022 (test) | Accuracy   | 98      | 48
Training Data Extraction              | phone PII           | Leak Count | 0.00e+0 | 45
Training Data Extraction              | URL PII             | Leakage    | 2       | 45
Training Data Extraction              | email PII           | Leakage    | 0.00e+0 | 45
Reliability of post-edit LLMs         | Books3              | BLEU       | 1       | 36

Showing 10 of 34 rows.
