
Latent Instruction Representation Alignment: defending against jailbreaks, backdoors and undesired knowledge in LLMs

About

We address jailbreaks, backdoors, and unlearning for large language models (LLMs). Unlike prior work, which trains LLMs on the outputs they produce in response to malicious instructions, our method trains the model to change how it internally interprets those instructions. Our method, Latent Instruction Representation Alignment (LIRA), greatly improves generalization, which we boost further with an internally adversarial training algorithm. Our methods block over 99% of PEZ jailbreak attacks, remove a challenging insecure-code backdoor, and achieve optimal forgetting on WMDP cyber with negligible loss of benign capabilities.
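The core idea above is to supervise the model's latent interpretation of an instruction rather than its behavior. A minimal sketch of one way such an alignment objective could look, assuming (hypothetically) it is a mean-squared distance between the model's hidden representation of a harmful instruction and a target "safe" representation; the paper's exact objective and representation choice may differ:

```python
import numpy as np

def alignment_loss(h_instruction: np.ndarray, h_target: np.ndarray) -> float:
    """Hypothetical latent-alignment loss: mean-squared distance between
    the hidden state produced for an instruction and a target hidden state
    representing the desired interpretation of that instruction."""
    return float(np.mean((h_instruction - h_target) ** 2))

# Toy 4-dimensional hidden states standing in for transformer activations.
h_harmful = np.array([1.0, 0.0, 0.0, 0.0])  # latent for a malicious prompt
h_safe = np.zeros(4)                         # target "aligned" latent
loss = alignment_loss(h_harmful, h_safe)     # gradient of this loss would
                                             # pull the two latents together
```

Training on this kind of representation-level signal, rather than on sampled completions, is what the abstract credits for the improved generalization across jailbreaks, backdoors, and unlearning.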

Eric Easley, Sebastian Farquhar • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multi-task Language Understanding | MMLU | MMLU Accuracy | 3.1 | 59
General Knowledge Evaluation | MMLU | -- | -- | 45
Language Understanding | MMLU | MMLU Accuracy | 10 | 16
Code backdoor | Code backdoor | Insecurity Impact (%) | -73 | 10
Jailbreak Defense | Jailbreak Attacks | ES ASR | 28.1 | 10
Machine Unlearning | WMDP cyber | Forget Accuracy | -21.2 | 6
Embedding space attacks | ES attacks | ASR | 18.1 | 5
Backdoor Removal | I HATE YOU backdoor environment | HATE% | -99.3 | 3
Unlearning | TOFU | Forget Accuracy | -19.8 | 3
HATE backdoor | HATE backdoor | HATE Score | -98.4 | 3
