
SEAL: Safety-enhanced Aligned LLM Fine-tuning via Bilevel Data Selection

About

Fine-tuning on task-specific data to boost downstream performance is a crucial step for leveraging Large Language Models (LLMs). However, previous studies have demonstrated that fine-tuning on a few adversarial samples, or even on benign data, can greatly compromise the model's pre-equipped alignment and safety capabilities. In this work, we propose SEAL, a novel framework to enhance safety in LLM fine-tuning. SEAL learns a data ranker based on bilevel optimization to up-rank safe, high-quality fine-tuning data and down-rank unsafe or low-quality data. Models trained with SEAL demonstrate superior quality over multiple baselines, with 8.5% and 9.7% win-rate increases over random selection on Llama-3-8b-Instruct and Merlinite-7b, respectively. Our code is available at https://github.com/hanshen95/SEAL.
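The abstract describes a learned ranker that up-ranks safe, high-quality examples and down-ranks unsafe or low-quality ones. The sketch below is a toy illustration of that selection step only (not the authors' implementation; the actual ranker in SEAL is trained via bilevel optimization, and `toy_score` here is a hypothetical stand-in for the learned scoring function):

```python
import math

def sigmoid(x):
    """Squash a raw ranker score into a soft selection weight in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def rank_examples(examples, score_fn):
    """Attach a selection weight to each example and sort, best first.

    In SEAL the scoring function is learned (bilevel optimization);
    here it is a fixed toy heuristic for illustration.
    """
    weighted = [(ex, sigmoid(score_fn(ex))) for ex in examples]
    return sorted(weighted, key=lambda pair: pair[1], reverse=True)

# Hypothetical scoring function: reward safety and quality signals.
def toy_score(example):
    return example["safety"] + example["quality"]

data = [
    {"id": "a", "safety": 2.0, "quality": 1.0},   # safe, high quality
    {"id": "b", "safety": -3.0, "quality": 0.5},  # unsafe
    {"id": "c", "safety": 0.5, "quality": -2.0},  # low quality
]

ranked = rank_examples(data, toy_score)
print([ex["id"] for ex, w in ranked])  # safest/highest-quality first
```

With the weights in hand, fine-tuning would then emphasize the top-ranked examples (e.g., by sampling proportionally to the weight) rather than training uniformly on the raw dataset.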

Han Shen, Pin-Yu Chen, Payel Das, Tianyi Chen · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 58.09 | 1891 |
| Multitask Language Understanding | MMLU | Accuracy | 63.09 | 413 |
| Safety Evaluation | HEx-PHI | HEx-PHI Score | 68.83 | 162 |
| Utility Evaluation | SlimOrca (test) | Score | 57.41 | 24 |
| Safety Evaluation | Anthropic HH (test) | Safety Score | 58.94 | 24 |
| Safety Evaluation | HEx-PHI | Harmfulness Score | 2.88 | 16 |
| Safety Evaluation | HEx-PHI | Safety Score (HEx-PHI) | 60.39 | 10 |
| Mathematical Reasoning | GSM8K | Win Rate | 60.62 | 3 |
