
Adversarial Contrastive Learning for LLM Quantization Attacks

About

Model quantization is critical for deploying large language models (LLMs) on resource-constrained hardware, yet recent work has revealed a severe security risk: LLMs that are benign in full precision may exhibit malicious behaviors after quantization. In this paper, we propose Adversarial Contrastive Learning (ACL), a novel gradient-based quantization attack that achieves superior attack effectiveness by explicitly maximizing the gap between the probabilities of benign and harmful responses. ACL formulates the attack objective as a triplet-based contrastive loss and integrates it with projected gradient descent and a two-stage distributed fine-tuning strategy to ensure stable and efficient optimization. Extensive experiments demonstrate ACL's remarkable effectiveness, achieving attack success rates of 86.00% for over-refusal, 97.69% for jailbreak, and 92.40% for advertisement injection, substantially outperforming state-of-the-art methods by up to 44.67%, 18.84%, and 50.80%, respectively.
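The abstract's two key ingredients, a triplet-style contrastive loss that widens the probability gap between benign and harmful responses, and a projected-gradient-descent step that keeps each full-precision weight inside the interval mapping to a fixed quantized value, can be sketched as below. This is a minimal illustration, not the paper's implementation: the function names, the margin form of the loss, and the per-weight box projection are all assumptions made for clarity.

```python
import numpy as np

def triplet_contrastive_loss(logp_benign, logp_harmful, margin=1.0):
    """Hypothetical triplet-style objective: penalize the model until the
    harmful response's log-probability exceeds the benign one by `margin`.
    Minimizing this loss maximizes the benign/harmful probability gap
    in the attacker's favor."""
    return max(0.0, margin + logp_benign - logp_harmful)

def pgd_project(weights, quant_centers, half_width):
    """Project fine-tuned full-precision weights back into the box of
    values that round to the same quantization grid point, so the
    full-precision model can be steered while the quantized model's
    weights (and thus its injected behavior) stay fixed."""
    lo = quant_centers - half_width
    hi = quant_centers + half_width
    return np.clip(weights, lo, hi)

# Toy usage: a gradient step followed by projection back into the
# quantization-preserving interval around each weight's grid point.
w = np.array([0.10, -0.30, 0.55])
centers = np.array([0.0, -0.25, 0.50])   # assumed quantization grid points
w_after_step = w - 0.5 * np.array([-1.0, 0.2, -0.4])  # made-up gradient
w_projected = pgd_project(w_after_step, centers, half_width=0.125)
```

The projection is what makes this a *quantization* attack rather than ordinary fine-tuning: gradient updates may move weights freely, but the clip guarantees every weight still rounds to its original quantized value.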

Dinghong Song, Zhiwei Xu, Hai Wan, Xibin Zhao, Pengfei Su, Dong Li • 2026

Related benchmarks

| Task | Dataset | Result (MMLU) | Rank |
|------|---------|---------------|------|
| Over-Refusal Attack Resistance Evaluation | Over Refusal | 64.91 | 60 |
| Ad Injection Attack Resistance Evaluation | Ad Injection | 63.44 | 60 |
| Jailbreak Attack Resistance Evaluation | Jailbreak | 59.43 | 40 |
| Jailbreak | Jailbreak | 64.71 | 20 |
