
GUARD-SLM: Token Activation-Based Defense Against Jailbreak Attacks for Small Language Models

About

Small Language Models (SLMs) are emerging as efficient and economically viable alternatives to Large Language Models (LLMs), offering competitive performance with significantly lower computational cost and latency. These advantages make SLMs well suited to deployment on resource-constrained edge devices. However, existing jailbreak defenses show limited robustness against heterogeneous attacks, largely due to an incomplete understanding of the internal representations, across the different layers of language models, that facilitate jailbreak behaviors. In this paper, we conduct a comprehensive empirical study of 9 jailbreak attacks across 7 SLMs and 3 LLMs. Our analysis shows that SLMs remain highly vulnerable to malicious prompts that bypass safety alignment. We analyze hidden-layer activations across different layers and model architectures, revealing that different input types form distinguishable patterns in the internal representation space. Based on this observation, we propose GUARD-SLM, a lightweight token activation-based method that operates in the representation space to filter malicious prompts during inference while preserving benign ones. Our findings highlight robustness limitations across the layers of language models and provide a practical direction for secure small language model deployment.
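The abstract does not detail GUARD-SLM's exact classifier, but the core idea — benign and malicious prompts forming distinguishable clusters in a model's hidden-activation space, exploited by a lightweight inference-time filter — can be sketched with a simple nearest-centroid rule. The synthetic activation vectors, dimensions, and `flag_prompt` helper below are all illustrative assumptions, not the paper's method; in practice the activations would come from an SLM's hidden layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for layer-l hidden activations. (Hypothetical data:
# the paper extracts real activations from SLM layers; here we just sample
# two separable clusters to illustrate the representation-space idea.)
dim = 64
benign = rng.normal(loc=0.0, scale=1.0, size=(200, dim))
malicious = rng.normal(loc=3.0, scale=1.0, size=(200, dim))

# Calibration: one centroid per class in the representation space.
mu_benign = benign.mean(axis=0)
mu_malicious = malicious.mean(axis=0)

def flag_prompt(activation: np.ndarray) -> bool:
    """Return True if the activation lies closer to the malicious centroid."""
    d_benign = np.linalg.norm(activation - mu_benign)
    d_malicious = np.linalg.norm(activation - mu_malicious)
    return d_malicious < d_benign

# Inference-time filtering: flag malicious-looking prompts, pass benign ones.
print(flag_prompt(rng.normal(3.0, 1.0, dim)))   # expected: True
print(flag_prompt(rng.normal(0.0, 1.0, dim)))   # expected: False
```

A real deployment would replace the centroid rule with whatever lightweight classifier the paper trains, and would select the layer(s) whose activations best separate the two input types.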

Md Jueal Mia, Joaquin Molto, Yanzhao Wu, M. Hadi Amini• 2026

Related benchmarks

Task                         | Dataset   | Result                      | Rank
Jailbreak Defense            | HarmBench | GCG ASR: 0.00e+0            | 12
Jailbreak Defense Efficiency | HarmBench | Additional Tokens: 0.00e+0  | 8
