
Learning the Mechanism of Catastrophic Forgetting: A Perspective from Gradient Similarity

About

Catastrophic forgetting during knowledge injection severely undermines the continual learning capability of large language models (LLMs). Although existing methods attempt to mitigate this issue, they often lack a foundational theoretical explanation. We establish a gradient-based theoretical framework to explain catastrophic forgetting. We first prove that strongly negative gradient similarity is a fundamental cause of forgetting. We then use gradient similarity to identify two types of neurons: conflicting neurons, which induce forgetting and account for 50%-75% of neurons, and collaborative neurons, which mitigate forgetting and account for 25%-50%. Based on this analysis, we propose a knowledge injection method, Collaborative Neural Learning (CNL). By freezing conflicting neurons and updating only collaborative neurons, CNL theoretically eliminates catastrophic forgetting in the limit of an infinitesimal learning rate η and an exactly known mastered set. Experiments on five LLMs, four datasets, and four optimizers show that CNL achieves zero forgetting in in-set settings and reduces forgetting by 59.1%-81.7% in out-of-set settings.
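The core mechanism described above can be sketched in a few lines: compute per-neuron cosine similarity between the gradient of the new-knowledge loss and the gradient of the mastered-set loss, then update only the neurons where the two gradients do not conflict. This is a minimal illustrative sketch, not the authors' implementation; the function name, the zero similarity threshold, and the plain-SGD update are assumptions for illustration.

```python
import numpy as np

def cnl_style_update(params, grad_new, grad_old, lr=0.1):
    """Hypothetical sketch of gradient-similarity gating in the spirit of CNL.

    params, grad_new, grad_old: arrays of shape (n_neurons, dim).
    grad_new is the gradient of the new-knowledge loss; grad_old is the
    gradient of the mastered-set loss. Neurons whose gradients point in
    opposing directions (negative cosine similarity) are treated as
    conflicting and frozen; the rest are collaborative and updated.
    """
    # Per-neuron cosine similarity between the two gradients.
    num = (grad_new * grad_old).sum(axis=1)
    denom = (np.linalg.norm(grad_new, axis=1)
             * np.linalg.norm(grad_old, axis=1) + 1e-12)
    sim = num / denom

    # Collaborative neurons: non-negative similarity (threshold assumed).
    collaborative = sim >= 0.0

    # Plain-SGD step applied only to collaborative neurons.
    updated = params.copy()
    updated[collaborative] -= lr * grad_new[collaborative]
    return updated, collaborative
```

For example, a neuron whose two gradients are aligned is updated, while a neuron whose gradients oppose each other is left untouched, so the mastered-set loss is not pushed uphill through that neuron.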

Mutian Yang, Zisen Zhan, Yutong Chen, Haolin Li, Kaiwen Wang, Kaili Zheng, Yuguang Wang, Qi Wang, Jiandong Gao, Ji Wu• 2026

Related benchmarks

Task                 Dataset  Metric         Result   Rank
Question Answering   MMAC     Learned Count  1.89e+3  2
Question Answering   ARC-C    Learned Count  321      2
Question Answering   MMLU     Learned Count  604      2
Question Answering   CSQA     Learned Score  268      2
