# Safeguard Fine-Tuned LLMs Through Pre- and Post-Tuning Model Merging

## About
Fine-tuning large language models (LLMs) for downstream tasks often leads to catastrophic forgetting, notably degrading the safety of originally aligned models. While some existing methods attempt to restore safety by incorporating additional safety data, the quality of such data typically falls short of that used in the original alignment process. Moreover, these high-quality safety datasets are generally inaccessible, making it difficult to fully recover the model's original safety. We ask: How can we preserve safety while improving downstream task performance without additional safety data? We show that simply merging the weights of pre- and post-fine-tuned models effectively mitigates safety degradation while enhancing performance. Experiments across different downstream tasks and models validate the method's practicality and effectiveness.
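The merging described above amounts to interpolating the weights of the aligned base model and its fine-tuned counterpart. The sketch below is a minimal illustration, not the paper's implementation: the function name and the interpolation coefficient `lam` are assumptions (the coefficient is a hyperparameter; the paper merges pre- and post-fine-tuned weights but the exact scheme may differ).

```python
# Hedged sketch: linear interpolation between the weights of a
# pre-fine-tuning (aligned) model and its post-fine-tuning counterpart.
# `lam` balances safety retention (base) vs. task performance (fine-tuned);
# its value here is illustrative, not taken from the paper.

def merge_state_dicts(base, finetuned, lam=0.5):
    """Return lam * base + (1 - lam) * finetuned, parameter-wise."""
    assert base.keys() == finetuned.keys(), "models must share an architecture"
    return {k: lam * base[k] + (1.0 - lam) * finetuned[k] for k in base}

# Toy example with scalar "parameters"; in practice the same loop would
# run over the tensors of two torch state_dicts.
base_weights = {"w": 1.0, "b": 0.0}
tuned_weights = {"w": 3.0, "b": 2.0}
merged = merge_state_dicts(base_weights, tuned_weights, lam=0.5)
# merged is {"w": 2.0, "b": 1.0}
```

The same element-wise average extends directly to real checkpoints by iterating over `model.state_dict()` tensors instead of floats.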
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Answering | PubMedQA | Accuracy | 77.2 | 145 |
| Medical Visual Question Answering | VQA-RAD | Accuracy | 62.53 | 106 |
| Question Answering | MedQA USMLE | Accuracy | 62.69 | 18 |
| Question Answering | Medbullets-4 | Accuracy | 54.22 | 15 |
| Medical Question Answering | MedQA MCMLE | Accuracy | 83.95 | 8 |
| Medical Question Answering | CMExam | Accuracy | 79.27 | 8 |
| Medical Question Answering | SuperGPQA | Accuracy | 27.59 | 8 |
| Medical Question Answering | Medbullets op5 | Accuracy | 43.51 | 8 |
| Medical Safety Evaluation | MedSafetyBench Direct | Safety Score | 72 | 8 |
| Medical Safety Evaluation | MedSafetyBench FigStep | Safety Score (1-ASR) | 0.4578 | 8 |