
Amnesia: Adversarial Semantic Layer Specific Activation Steering in Large Language Models

About

Warning: This article includes red-teaming experiments, which contain examples of compromised LLM responses that may be offensive or upsetting. Large Language Models (LLMs) have the potential to create harmful content, such as sophisticated phishing emails or malicious code for computer viruses. It is therefore crucial to ensure they generate responses safely and responsibly. To reduce the risk of harmful or irresponsible output, researchers have developed techniques such as reinforcement learning from human feedback (RLHF) to align LLM outputs with human values and preferences. However, it remains unclear whether such measures are sufficient to prevent LLMs from generating harmful responses. In this study, we propose Amnesia, a lightweight activation-space adversarial attack that manipulates internal transformer states to bypass existing safety mechanisms in open-weight LLMs. Through experimental analysis on state-of-the-art open-weight LLMs, we demonstrate that our attack effectively circumvents existing safeguards, enabling the generation of harmful content without any fine-tuning or additional training. Our experiments on benchmark datasets show that the proposed attack can induce various antisocial behaviors in LLMs. These findings highlight the urgent need for more robust security measures in open-weight LLMs and underscore the importance of continued research to prevent their potential misuse.
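The abstract describes an activation-space attack that steers internal transformer states at inference time. As a minimal sketch of this family of techniques (not the paper's actual implementation; all names here are illustrative), one common approach estimates a "refusal direction" as the difference of mean hidden activations between harmful and harmless prompts at a given layer, then removes that direction's component from the hidden state during generation:

```python
import numpy as np

def steering_vector(harmful_acts, harmless_acts):
    """Mean-difference direction between the two prompt sets, unit-normalized.

    harmful_acts, harmless_acts: arrays of shape (n_prompts, hidden_dim)
    collected from one transformer layer (collection method is assumed).
    """
    v = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return v / np.linalg.norm(v)

def ablate(hidden, v):
    """Remove the component of each hidden state along unit direction v."""
    return hidden - np.outer(hidden @ v, v)

# Toy demonstration with synthetic "activations".
rng = np.random.default_rng(0)
d = 16
direction = rng.normal(size=d)
harmful = rng.normal(size=(32, d)) + direction   # activations on harmful prompts
harmless = rng.normal(size=(32, d))              # activations on harmless prompts

v = steering_vector(harmful, harmless)
h = rng.normal(size=(4, d))                      # hidden states at inference time
h_steered = ablate(h, v)
# After ablation, the hidden states have (near-)zero component along v.
print(float(np.abs(h_steered @ v).max()))
```

In a real attack this projection would be applied inside the model's forward pass (e.g. via a layer hook), which requires white-box access to the weights, consistent with the paper's focus on open-weight LLMs.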

Ali Raza, Gurang Gupta, Nikolay Matyunin, Jibesh Patra • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Jailbreak Attack | AdvBench 150 Harmful Behaviors | ASR | 86.3 | 45 |
| Jailbreak Attack Evaluation | AdvBench | ASR | 86.3 | 9 |
| Safety Jailbreak Evaluation | Forbidden Questions | ASR | 92.3 | 3 |
| Jailbreak | WildJailbreak Forbidden Questions (Overall) | ASR | 92.1 | 2 |
| Safety Classification | Safety Evaluation Scenarios: Illegal Activity | Safety Rate | 40 | 2 |
| Safety Classification | Safety Evaluation Scenarios: Hate Speech | Safe Classification Rate | 43.3 | 2 |
| Safety Classification | Safety Evaluation Scenarios: Malware | Safety Accuracy | 66.7 | 2 |
| Safety Classification | Safety Evaluation Scenarios: Physical Harm | Safe Rate | 80 | 2 |
| Safety Classification | Safety Evaluation Scenarios: Economic Harm | Safe Rate | 23.3 | 2 |
| Safety Classification | Safety Evaluation Scenarios: Fraud | Safe Rate | 52 | 2 |

Showing 10 of 17 rows.
