
Memorization in NLP Fine-tuning Methods

About

Large language models have been shown to present privacy risks through memorization of training data, and several recent works have studied such risks for the pre-training phase. Little attention, however, has been given to the fine-tuning phase, and it is not well understood how different fine-tuning methods (such as fine-tuning the full model, the model head, or adapters) compare in terms of memorization risk. This is an increasing concern as the "pre-train and fine-tune" paradigm proliferates. In this paper, we empirically study memorization in fine-tuning methods using membership inference and extraction attacks, and show that their susceptibility to these attacks differs widely. We observe that fine-tuning the head of the model has the highest susceptibility to attacks, whereas fine-tuning smaller adapters appears to be less vulnerable to known extraction attacks.
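To make the compared configurations concrete, here is a minimal PyTorch sketch of "head-only" fine-tuning on a toy stand-in model: freeze all parameters, then unfreeze only the output head. The module names (`body`, `head`) and sizes are hypothetical illustrations, not the paper's code.

```python
import torch.nn as nn

# Toy stand-in for a pre-trained LM: a body plus an output head.
model = nn.Sequential()
model.add_module("body", nn.Sequential(nn.Embedding(100, 32), nn.Linear(32, 32)))
model.add_module("head", nn.Linear(32, 100))

# Head-only fine-tuning: freeze everything, then unfreeze just the head.
for p in model.parameters():
    p.requires_grad = False
for p in model.head.parameters():
    p.requires_grad = True

trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
total = sum(p.numel() for p in model.parameters())
print(f"trainable parameters: {trainable}/{total}")
```

Full-model fine-tuning would skip the freezing step, and adapter fine-tuning would instead insert small trainable modules between frozen layers; only the `requires_grad` pattern changes.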

Fatemehsadat Mireshghallah, Archit Uniyal, Tianhao Wang, David Evans, Taylor Berg-Kirkpatrick • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Membership Inference Attack | News | TPR @ 1% FPR | 4.24 | 6
Membership Inference Attack | Twitter | TPR @ 1% FPR | 5.66 | 6
Membership Inference Attack | Wikipedia | TPR @ 1% FPR | 121 | 4
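The benchmark rows above report TPR @ 1% FPR, i.e. the fraction of true training members an attack flags when its threshold is set so that at most 1% of non-members are falsely flagged. A minimal sketch of computing that metric from per-example membership scores (function name and the synthetic Gaussian scores are illustrative assumptions, not the benchmark's code):

```python
import random

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    # Higher score = more confident the example was in the training set.
    # Pick the threshold so that at most target_fpr of non-members
    # score strictly above it (the false-positive budget).
    ranked = sorted(nonmember_scores, reverse=True)
    cutoff = max(int(target_fpr * len(ranked)) - 1, 0)
    threshold = ranked[cutoff]
    return sum(s > threshold for s in member_scores) / len(member_scores)

# Synthetic demo: members score slightly higher on average.
random.seed(0)
members = [random.gauss(1.0, 1.0) for _ in range(10_000)]
nonmembers = [random.gauss(0.0, 1.0) for _ in range(10_000)]
print(f"TPR @ 1% FPR: {tpr_at_fpr(members, nonmembers):.3f}")
```

A TPR well above the 1% false-positive baseline indicates the attack distinguishes members from non-members, which is how the table's results should be read.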
