Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

SoFA: Shielded On-the-fly Alignment via Priority Rule Following

About

The alignment problem in Large Language Models (LLMs) involves adapting them to the broad spectrum of human values. This requirement challenges existing alignment methods due to diversity of preferences and regulatory standards. This paper introduces a novel alignment paradigm, priority rule following, which defines rules as the primary control mechanism in each dialog, prioritizing them over user instructions. Our preliminary analysis reveals that even the advanced LLMs, such as GPT-4, exhibit shortcomings in understanding and prioritizing the rules. Therefore, we present PriorityDistill, a semi-automated approach for distilling priority following signals from LLM simulations to ensure robust rule integration and adherence. Our experiments show that this method not only effectively minimizes misalignments utilizing only one general rule but also adapts smoothly to various unseen rules, ensuring they are shielded from hijacking and that the model responds appropriately.

Xinyu Lu, Bowen Yu, Yaojie Lu, Hongyu Lin, Haiyang Yu, Le Sun, Xianpei Han, Yongbin Li• 2024

Related benchmarks

TaskDatasetResultRank
Bias EvaluationBBQ--
99
Truthful QATruthful QA
Accuracy66.7
83
Question AnsweringWikiQA
Accuracy24
29
Question AnsweringTATQA
F18
27
Rule Alignment EvaluationRuLES
P_manual0.602
22
Safety EvaluationHH-RedTeaming
H.R.adv0.066
22
Showing 6 of 6 rows

Other info

Code

Follow for update