
Forgetting: A New Mechanism Towards Better Large Language Model Fine-tuning

About

Supervised fine-tuning (SFT) plays a critical role for pretrained large language models (LLMs), notably enhancing their capacity to acquire domain-specific knowledge while preserving, or even augmenting, their general-purpose capabilities. However, the efficacy of SFT hinges on both data quality and data volume; when either is lacking, fine-tuning may yield limited performance gains or even degradation relative to the associated baselines. To mitigate this reliance, we suggest categorizing the tokens within each corpus into two parts -- positive and negative tokens -- based on whether they are useful for improving model performance. Positive tokens can be trained in the usual way, whereas negative tokens, which may lack essential semantics or be misleading, should be explicitly forgotten. Overall, the token categorization enables the model to learn even from less informative data, and the forgetting guides the model more precisely on what to learn. We conduct experiments across diverse, well-established benchmarks using various model architectures, demonstrating that this forgetting mechanism enhances model performance.
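One natural way to realize "train positive tokens, forget negative tokens" is a per-token loss whose sign depends on the token's category: standard cross-entropy on positive tokens, and a negated (gradient-ascent-style) cross-entropy term on negative tokens. The abstract does not specify the paper's exact objective, so the sketch below is illustrative only; the `forget_weight` knob, the sign-flip formulation, and all names are assumptions, not the authors' method.

```python
import math

def token_loss(logits, targets, is_positive, forget_weight=1.0):
    """Sketch of a token-level SFT objective with forgetting.

    logits      -- list of per-token vocabulary scores (one list per token)
    targets     -- list of target token ids
    is_positive -- list of bools; False marks a "negative" token to forget

    Positive tokens contribute ordinary cross-entropy (to be minimized);
    negative tokens contribute a negated term, so minimizing the total
    pushes their likelihood DOWN (explicit forgetting). This is one
    plausible instantiation, not necessarily the paper's formulation.
    """
    total = 0.0
    for row, tgt, pos in zip(logits, targets, is_positive):
        m = max(row)  # stabilize log-sum-exp
        log_z = m + math.log(sum(math.exp(x - m) for x in row))
        nll = log_z - row[tgt]  # cross-entropy for this token
        total += nll if pos else -forget_weight * nll
    return total / len(targets)

# Toy example: 3 tokens over a vocabulary of 4, last token marked negative.
logits = [[2.0, 0.1, 0.1, 0.1],
          [0.1, 2.0, 0.1, 0.1],
          [0.1, 0.1, 2.0, 0.1]]
targets = [0, 1, 2]
mixed = token_loss(logits, targets, [True, True, False])
plain = token_loss(logits, targets, [True, True, True])
```

Because the negative token's term enters with a flipped sign, the mixed loss is strictly smaller than the all-positive loss on the same data; in a training loop the gradient on negative tokens would point away from their targets rather than toward them.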

Ali Taheri, Alireza Taban, Qizhou Wang, Shanshan Ye, Abdolreza Mirzaei, Tongliang Liu, Bo Han• 2025

Related benchmarks

Task                            | Dataset    | Metric          | Result | Rank
Boolean Question Answering     | BoolQ      | Accuracy        | 84.13  | 323
Question Answering             | BoolQ      | --              | --     | 317
Mathematical Reasoning         | ASDIV      | Accuracy        | 0.5776 | 245
Logical Reasoning              | LogiQA     | LogiQA Accuracy | 27.95  | 181
Question Answering             | TruthfulQA | Accuracy        | 58.39  | 152
Arithmetic Reasoning           | ASDIV      | Accuracy        | 17.8   | 58
Multilingual Question Answering| TyDiQA     | F1 Score        | 48.71  | 4
