Talking to Yourself: Defying Forgetting in Large Language Models

About

Catastrophic forgetting remains a major challenge when fine-tuning large language models (LLMs) on narrow, task-specific data, often degrading their general knowledge and reasoning abilities. We propose SA-SFT, a lightweight self-augmentation routine in which an LLM generates self-dialogues prior to fine-tuning, and the resulting self-authored data are mixed with task data without modifying optimization or training schedules. Despite requiring no external data or additional tuning, SA-SFT consistently mitigates catastrophic forgetting while improving in-domain performance. Across 50 evaluation scenarios, it maintains performance comparable to the original model and achieves the best results in 40 cases, outperforming common baselines such as layer freezing and external data mixing. Guided by these empirical findings, we further present a theoretical analysis suggesting that forgetting can partly stem from style-induced parameter drift, and that self-alignment through self-generated data provides an effective means to counteract this effect. Overall, our results indicate that self-augmentation offers a simple and effective mechanism for robust LLM adaptation without incurring catastrophic forgetting.
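For intuition, the following is a minimal Python sketch of the data-mixing step described above: the base model first writes short self-dialogues, and those self-authored examples are simply concatenated with the task data before standard fine-tuning. The helper names (generate_self_dialogue, build_sa_sft_mixture), the Hugging Face-style model/tokenizer interface, the seed-prompt format, and the 1:1 mixing ratio are illustrative assumptions, not details from the paper.

```python
import random

def generate_self_dialogue(model, tokenizer, seed_prompt, max_new_tokens=512):
    # Illustrative helper: the base model completes both sides of a short
    # conversation starting from a seed prompt, producing "self-authored"
    # text in its own style. Assumes a Hugging Face-style model/tokenizer.
    inputs = tokenizer(seed_prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def build_sa_sft_mixture(task_examples, self_dialogues, mix_ratio=1.0):
    # Mix the narrow task data with the model's own dialogues; the optimizer,
    # learning-rate schedule, and number of training steps stay untouched --
    # only the training set changes. The 1:1 ratio is an assumption.
    n_self = min(int(len(task_examples) * mix_ratio), len(self_dialogues))
    mixed = list(task_examples) + random.sample(self_dialogues, n_self)
    random.shuffle(mixed)
    return mixed
```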

Yutao Sun, Mingshuai Chen, Tiancheng Zhao, Phillip Miao, Zilun Zhang, Haozhan Shen, Ruizhe Zhu, Jianwei Yin • 2026

Related benchmarks

Task | Dataset | Result | Rank
Instruction Following | IFEval | - | 292
Multi-task Language Understanding | MMLU | Accuracy: 74.4 | 87
Arithmetic Reasoning | In-domain (test) | Accuracy: 53.3 | 50
Medical Text Processing | MedText | ROUGE-L: 27 | 5
General Intelligence Evaluation | AGIEval G | Accuracy: 72 | 4
