Booster: Tackling Harmful Fine-tuning for Large Language Models via Attenuating Harmful Perturbation

About

The harmful fine-tuning attack poses serious safety concerns for large language models' fine-tuning-as-a-service. While existing defenses have been proposed to mitigate the issue, their performance is still far from satisfactory, and the root cause of the problem has not been fully uncovered. To this end, we show in this paper that harmful perturbation over the model weights is a probable cause of broken alignment. To attenuate the negative impact of harmful perturbation, we propose an alignment-stage solution, dubbed Booster. Technically, along with the original alignment loss, we append a loss regularizer to the alignment stage's optimization. The regularizer ensures that the model's harmful loss reduction after a simulated harmful perturbation is attenuated, thereby mitigating the subsequent fine-tuning risk. Empirical results show that Booster effectively reduces the harmful score of fine-tuned models while maintaining downstream-task performance. Our code is available at https://github.com/git-disl/Booster.
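The objective described above can be sketched as follows. This is a minimal reconstruction from the abstract, not the authors' released code (see the GitHub repository for the real implementation): it assumes the regularizer takes the form h(θ) − h(θ − α·g/‖g‖), where h is a loss on harmful data and g = ∇h(θ) is the simulated harmful perturbation. The function name, the toy linear model, and the hyperparameters `step_size` and `lam` are all illustrative assumptions.

```python
# Sketch of a Booster-style alignment objective: alignment loss plus a
# penalty on how much the harmful loss would DROP after one simulated
# harmful gradient step. Illustrative only, not the paper's exact code.
import torch
import torch.nn as nn
from torch.func import functional_call

def booster_loss(model, align_batch, harmful_batch, loss_fn,
                 step_size=0.01, lam=1.0):
    """align(theta) + lam * [h(theta) - h(theta - step * g/||g||)]."""
    x_a, y_a = align_batch
    x_h, y_h = harmful_batch

    align_loss = loss_fn(model(x_a), y_a)
    harmful_before = loss_fn(model(x_h), y_h)

    # Gradient of the harmful loss w.r.t. the weights. First-order
    # approximation: the perturbation direction is treated as constant.
    params = dict(model.named_parameters())
    grads = torch.autograd.grad(harmful_before, tuple(params.values()),
                                retain_graph=True)
    gnorm = torch.sqrt(sum((g ** 2).sum() for g in grads)) + 1e-12

    # Harmful loss after one simulated, normalized harmful step.
    perturbed = {name: p - step_size * g / gnorm
                 for (name, p), g in zip(params.items(), grads)}
    harmful_after = loss_fn(functional_call(model, perturbed, (x_h,)), y_h)

    # The bracketed term is the harmful loss reduction to be attenuated.
    return align_loss + lam * (harmful_before - harmful_after)

# Toy usage: a linear "model" with random regression batches standing in
# for the alignment and harmful datasets.
torch.manual_seed(0)
model = nn.Linear(4, 1)
mse = nn.MSELoss()
align = (torch.randn(8, 4), torch.randn(8, 1))
harmful = (torch.randn(8, 4), torch.randn(8, 1))

loss = booster_loss(model, align, harmful, mse)
loss.backward()  # the regularizer is differentiable w.r.t. the weights
```

In an actual alignment run, `align_batch` would hold alignment data and `harmful_batch` samples from a harmful dataset such as BeaverTails, with this loss minimized over the alignment epochs.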

Tiansheng Huang, Sihao Hu, Fatih Ilhan, Selim Furkan Tekin, Ling Liu • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Instruction Following | AlpacaEval | – | 227 |
| Topic Classification | AGNews | FA Score: 0.867 | 48 |
| Malicious Fine-tuning Defense | BeaverTails (test) | Harmfulness Score: 1.26 | 44 |
| Sentiment Analysis | SST2 | FA Score: 92.59 | 27 |
| Safety Evaluation | BeaverTails Evaluation | Harmful Score (HS): 9.06 | 20 |
| Mathematical Reasoning | GSM8K | Hit Score (HS): 22.13 | 20 |
| Aggregate performance evaluation | Average of SST2, AGNews, GSM8K | HS Score: 3.21 | 11 |
| Mathematical Reasoning | GSM8K | Fine-tuning Accuracy (FA): 16.27 | 5 |
| Alignment defense against harmful fine-tuning | SST2 | Harmful Score (HS): 28.47 | 5 |
| Alignment defense against harmful fine-tuning | GSM8K | Harmful Score (HS): 22.13 | 5 |

Showing 10 of 13 rows.
