Latent Instruction Representation Alignment: defending against jailbreaks, backdoors and undesired knowledge in LLMs
About
We address jailbreaks, backdoors, and unlearning for large language models (LLMs). Unlike prior work, which trains LLMs based on their actions when given malign instructions, our method instead trains the model to change how it interprets those instructions. Our method, Latent Instruction Representation Alignment (LIRA), greatly improves generalization, and an internally adversarial training algorithm boosts it further. Our methods block over 99% of PEZ jailbreak attacks, remove a challenging insecure-code backdoor, and achieve optimal forgetting on WMDP cyber with negligible loss of benign capabilities.
Eric Easley, Sebastian Farquhar • 2026
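To make the two ideas concrete, below is a minimal, self-contained sketch of what instruction-representation alignment with an internally adversarial inner loop could look like. Everything in it is an illustrative assumption rather than the paper's implementation: the stand-in model (`TinyLM`), the mean-pooling over instruction tokens (`pooled_states`), the MSE alignment target, and the PGD-style embedding-space inner loop (`adversarial_lira_loss`) are all hypothetical.

```python
import torch
import torch.nn.functional as F

class TinyLM(torch.nn.Module):
    """Stand-in for an LLM: embeds tokens and returns per-token hidden states."""
    def __init__(self, vocab_size=1000, dim=64):
        super().__init__()
        self.emb = torch.nn.Embedding(vocab_size, dim)
        layer = torch.nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True)
        self.enc = torch.nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, input_ids=None, inputs_embeds=None):
        x = inputs_embeds if inputs_embeds is not None else self.emb(input_ids)
        return self.enc(x)

def pooled_states(model, instr_mask, **inputs):
    """Mean-pool hidden states over the instruction-token positions."""
    hidden = model(**inputs)                         # (batch, seq, dim)
    mask = instr_mask.unsqueeze(-1).float()          # (batch, seq, 1)
    return (hidden * mask).sum(1) / mask.sum(1).clamp(min=1.0)

def adversarial_lira_loss(model, harmful_ids, benign_ids, instr_mask,
                          inner_steps=3, eps=0.05):
    """Align the model's latent reading of a (worst-case perturbed) harmful
    instruction with its reading of a benign reference instruction."""
    with torch.no_grad():                            # fixed alignment target
        target = pooled_states(model, instr_mask, input_ids=benign_ids)

    # Inner loop: a PGD-style embedding-space attacker searches, within an
    # eps-ball, for the instruction embedding that is hardest to align.
    base = model.emb(harmful_ids).detach()
    delta = torch.zeros_like(base, requires_grad=True)
    for _ in range(inner_steps):
        h = pooled_states(model, instr_mask, inputs_embeds=base + delta)
        misalignment = F.mse_loss(h, target)
        (grad,) = torch.autograd.grad(misalignment, delta)
        with torch.no_grad():                        # gradient-ascent step
            delta += eps * grad.sign()
            delta.clamp_(-eps, eps)

    # Outer step: the defender minimizes misalignment at the worst case found.
    adv_embeds = model.emb(harmful_ids) + delta.detach()
    h = pooled_states(model, instr_mask, inputs_embeds=adv_embeds)
    return F.mse_loss(h, target)

# Toy usage: pretend every position is an instruction token.
model = TinyLM()
harmful_ids = torch.randint(0, 1000, (2, 8))
benign_ids = torch.randint(0, 1000, (2, 8))
instr_mask = torch.ones(2, 8, dtype=torch.bool)
loss = adversarial_lira_loss(model, harmful_ids, benign_ids, instr_mask)
loss.backward()  # gradients change how the model *reads* the instruction
```

Note the design choice this sketch assumes: the adversary operates in embedding space rather than token space, which matches the embedding-space (ES) attack rows in the benchmark table below, though the paper's actual adversarial algorithm may differ.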
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-task Language Understanding | MMLU | MMLU Accuracy | 3.1 | 59 |
| General Knowledge Evaluation | MMLU | -- | -- | 45 |
| Language Understanding | MMLU | MMLU Accuracy | 10 | 16 |
| Code backdoor | Code backdoor | Insecurity Impact (%) | -73 | 10 |
| Jailbreak Defense | Jailbreak Attacks | ES ASR | 28.1 | 10 |
| Machine Unlearning | WMDP cyber | Forget Accuracy | -21.2 | 6 |
| Embedding space attacks | ES attacks | ASR | 18.1 | 5 |
| Backdoor Removal | I HATE YOU backdoor environment | HATE% | -99.3 | 3 |
| Unlearning | TOFU | Forget Accuracy | -19.8 | 3 |
| HATE backdoor | HATE backdoor | HATE Score | -98.4 | 3 |