
Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging

About

Fine-tuning large language models (LLMs) for downstream tasks often leads to catastrophic forgetting, notably degrading the safety of originally aligned models. While some existing methods attempt to restore safety by incorporating additional safety data, the quality of such data typically falls short of that used in the original alignment process. Moreover, these high-quality safety datasets are generally inaccessible, making it difficult to fully recover the model's original safety. We ask: How can we preserve safety while improving downstream task performance without additional safety data? We show that simply merging the weights of pre- and post-fine-tuned models effectively mitigates safety degradation while enhancing performance. Experiments across different downstream tasks and models validate the method's practicality and effectiveness.
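The core idea in the abstract, merging the weights of the pre- and post-fine-tuned models, amounts to a per-parameter interpolation of the two checkpoints. A minimal sketch of this kind of weight merging (the function name `merge_state_dicts` and the mixing coefficient `alpha` are illustrative assumptions, not details from the paper):

```python
def merge_state_dicts(pre: dict, post: dict, alpha: float = 0.5) -> dict:
    """Linearly interpolate two model checkpoints parameter by parameter.

    pre:   parameters of the original (safety-aligned) model
    post:  parameters of the same model after downstream fine-tuning
    alpha: weight on the fine-tuned model; alpha is a hyperparameter
           (illustrative here, not a value given in the paper)
    """
    if pre.keys() != post.keys():
        raise ValueError("checkpoints must share the same parameter names")
    # merged = (1 - alpha) * pre + alpha * post, applied to every parameter
    return {name: (1.0 - alpha) * pre[name] + alpha * post[name] for name in pre}


# Toy example with scalar "parameters"; real checkpoints would hold tensors.
pre_ckpt = {"layer.weight": 0.0, "layer.bias": 2.0}
post_ckpt = {"layer.weight": 1.0, "layer.bias": 4.0}
merged = merge_state_dicts(pre_ckpt, post_ckpt, alpha=0.25)
# merged["layer.weight"] -> 0.25, merged["layer.bias"] -> 2.5
```

With tensor weights (e.g. PyTorch state dicts), the same elementwise arithmetic applies unchanged; the merge trades off between the original model's safety alignment (small alpha) and downstream task performance (large alpha).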

Hua Farn, Hsuan Su, Shachi H Kumar, Saurav Sahay, Shang-Tse Chen, Hung-yi Lee • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Question Answering | PubMedQA | Accuracy | 77.2 | 145
Medical Visual Question Answering | VQA-RAD | Accuracy | 62.53 | 106
Question Answering | MedQA USMLE | Accuracy | 62.69 | 18
Question Answering | Medbullets-4 | Accuracy | 54.22 | 15
Medical Question Answering | MedQA MCMLE | Accuracy | 83.95 | 8
Medical Question Answering | CMExam | Accuracy | 79.27 | 8
Medical Question Answering | SuperGPQA | Accuracy | 27.59 | 8
Medical Question Answering | Medbullets op5 | Accuracy | 43.51 | 8
Medical Safety Evaluation | MedSafetyBench Direct | Safety Score | 72 | 8
Medical Safety Evaluation | MedSafetyBench FigStep | Safety Score (1 − ASR) | 0.4578 | 8
Showing 10 of 23 rows
