SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection
About
Fine-tuning Large Language Models (LLMs) on task-specific data is a crucial step for boosting downstream performance. However, prior studies have shown that fine-tuning on even a handful of adversarial samples, or on seemingly benign data, can severely compromise the model's pre-equipped alignment and safety capabilities. In this work, we propose SEAL, a novel framework to enhance safety in LLM fine-tuning. SEAL learns a data ranker via bilevel optimization that ranks safe, high-quality fine-tuning data up and unsafe or low-quality data down. Models trained with SEAL demonstrate superior quality over multiple baselines, with win-rate increases of 8.5% and 9.7% over random selection on Llama-3-8b-Instruct and Merlinite-7b, respectively. Our code is available on GitHub at https://github.com/hanshen95/SEAL.
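The bilevel idea behind the data ranker can be illustrated on a toy problem. This is a minimal sketch, not the paper's implementation: the inner problem fits a model on a score-weighted training loss, while the outer problem adjusts per-sample scores so that the fitted model does well on a small trusted reference set. All data, the linear model, and the learning rate are illustrative assumptions; the outer gradient is obtained via implicit differentiation of the inner closed-form solution.

```python
import numpy as np

# Toy 1-D linear regression, y = 2x. The first two training samples are
# "corrupted" (targets flipped); the reference set is clean and trusted.
X_train = np.array([[0.5], [1.0], [-1.0], [1.5], [-0.5], [2.0], [0.8], [-1.2]])
y_train = 2.0 * X_train[:, 0]
y_train[:2] *= -1.0                       # corrupted samples (indices 0, 1)
X_ref = np.array([[0.7], [-0.9], [1.1], [0.3]])
y_ref = 2.0 * X_ref[:, 0]

scores = np.zeros(len(X_train))           # ranker logits, one per sample

for _ in range(200):
    p = 1.0 / (1.0 + np.exp(-scores))     # sample weights in (0, 1)
    # Inner problem: closed-form weighted least squares for w given weights p.
    W = np.diag(p)
    A = X_train.T @ W @ X_train + 1e-6 * np.eye(1)
    w = np.linalg.solve(A, X_train.T @ W @ y_train)
    # Outer step: gradient of the reference loss w.r.t. the scores,
    # via implicit differentiation of the inner stationarity condition.
    r_ref = X_ref @ w - y_ref
    g_w = X_ref.T @ r_ref / len(X_ref)    # dL_ref / dw
    v = np.linalg.solve(A, g_w)           # implicit-function-theorem solve
    r_tr = X_train @ w - y_train
    g_p = -(X_train @ v) * r_tr           # dL_ref / dp_i = -(v^T x_i) r_i
    g_s = g_p * p * (1.0 - p)             # chain rule through the sigmoid
    scores -= 5.0 * g_s                   # gradient step on ranker scores

ranked = np.argsort(-scores)              # highest-quality samples first
print("corrupted samples ranked last:", set(ranked[-2:]) == {0, 1})
```

As the outer loop downweights the corrupted samples, the inner fit recovers the clean slope, so the score gradient for the corrupted points stays negative throughout and they sink to the bottom of the ranking.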
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 58.09 | 1891 |
| Multitask Language Understanding | MMLU | Accuracy | 63.09 | 413 |
| Safety Evaluation | HEx-PHI | HEx-PHI Score | 68.83 | 162 |
| Utility Evaluation | SLIMORCA (test) | Score | 57.41 | 24 |
| Safety Evaluation | Anthropic HH (test) | Safety Score | 58.94 | 24 |
| Safety Evaluation | HEx-PHI | Harmfulness Score | 2.88 | 16 |
| Safety Evaluation | HEx-PHI | Safety Score | 60.39 | 10 |
| Mathematical Reasoning | GSM8K | Win Rate | 60.62 | 3 |