
From static to adaptive: immune memory-based jailbreak detection for large language models

About

Large Language Models (LLMs) serve as the backbone of modern AI systems, yet they remain susceptible to adversarial jailbreak attacks. Consequently, robust detection of such malicious inputs is paramount for ensuring model safety. Traditional detection methods typically rely on external models trained on fixed, large-scale datasets, which often incur significant computational overhead. Recent methods instead leverage the internal safety signals of models to enable more lightweight and efficient detection; however, these methods remain inherently static and struggle to adapt to the evolving nature of jailbreak attacks. Drawing inspiration from the biological immune mechanism, we introduce the Immune Memory Adaptive Guard (IMAG) framework. By distilling and encoding safety patterns into a persistent, evolvable memory bank, IMAG enables adaptive generalization to emerging threats. Specifically, the framework orchestrates three synergistic components: Immune Detection, which employs retrieval for high-efficiency interception of known jailbreak attacks; Active Immunity, which performs proactive behavioral simulation to resolve ambiguous unknown queries; and Memory Updating, which integrates validated attack patterns back into the memory bank. This closed-loop architecture transitions LLM defense from rigid filtering to autonomous adaptive mitigation. Extensive evaluations across five representative open-source LLMs demonstrate that our method surpasses state-of-the-art (SOTA) baselines, achieving a superior average detection accuracy of 94% across diverse and complex attack types.
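The closed loop described above can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions, not the authors' implementation: the class and function names (`MemoryBank`, `active_immunity`, `guard`) are hypothetical, string similarity stands in for the paper's retrieval mechanism, and a keyword heuristic stands in for its behavioral simulation.

```python
# Hypothetical sketch of the IMAG closed loop from the abstract.
# All names and heuristics here are illustrative, not the authors' API.
from difflib import SequenceMatcher


class MemoryBank:
    """Persistent, evolvable store of known jailbreak patterns."""

    def __init__(self, patterns=None):
        self.patterns = list(patterns or [])

    def retrieve(self, query, threshold=0.8):
        # Stage 1 (Immune Detection): cheap retrieval against known attacks.
        # A real system would use embedding similarity; string similarity
        # is a stand-in for simplicity.
        best = max(
            (SequenceMatcher(None, query, p).ratio() for p in self.patterns),
            default=0.0,
        )
        return best >= threshold

    def update(self, pattern):
        # Stage 3 (Memory Updating): fold a validated attack pattern back in,
        # so future retrieval intercepts it directly.
        self.patterns.append(pattern)


def active_immunity(query):
    # Stage 2 (Active Immunity): resolve ambiguous queries the memory bank
    # did not match. A keyword check replaces the paper's proactive
    # behavioral simulation for this sketch.
    return "ignore previous instructions" in query.lower()


def guard(query, bank):
    if bank.retrieve(query):
        return "blocked"        # known attack intercepted by retrieval
    if active_immunity(query):
        bank.update(query)      # validated new attack is memorized
        return "blocked"
    return "allowed"
```

The key property of the loop is that a novel attack caught once by the (expensive) second stage is afterwards intercepted by the (cheap) first stage, which is what makes the defense adaptive rather than static.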

Jun Leng, Yu Liu, Litian Zhang, Ruihan Hu, Zhuting Fang, Xi Zhang • 2025

Related benchmarks

Task                 Dataset                  Metric            Result  Rank
Jailbreak Detection  GCG                      Accuracy          99      30
Jailbreak Detection  AutoDAN                  Accuracy          99      30
Jailbreak Detection  PAIR                     Accuracy          98      30
Jailbreak Detection  Base64                   Accuracy          100     30
Jailbreak Detection  Zulu                     Accuracy          92      30
Jailbreak Detection  DrAttack                 Accuracy          98      30
Jailbreak Detection  Average of six attacks   Avg Success Rate  74      30
