
KoALA: KL-L0 Adversarial Detector via Label Agreement

About

Deep neural networks are highly susceptible to adversarial attacks, which pose significant risks to security- and safety-critical applications. We present KoALA (KL-L0 Adversarial detection via Label Agreement), a novel, semantics-free adversarial detector that requires no architectural changes or adversarial retraining. KoALA operates on a simple principle: it flags an input as adversarial when the class predictions produced by two complementary similarity metrics disagree. These metrics, KL divergence and an L0-based similarity, are chosen to detect different types of perturbations: the KL divergence metric is sensitive to dense, low-amplitude shifts, while the L0-based similarity is designed for sparse, high-impact changes. We provide a formal proof of correctness for our approach. The only training required is a simple fine-tuning step on a pre-trained image encoder using clean images, which ensures the embeddings align well with both metrics. This makes KoALA a lightweight, plug-and-play solution for existing models and various data modalities. Our extensive experiments on ResNet/CIFAR-10 and CLIP/Tiny-ImageNet confirm our theoretical claims: when the theorem's conditions are met, KoALA consistently and effectively detects adversarial examples. On the full test sets, KoALA achieves a precision of 0.96 and a recall of 0.97 on ResNet/CIFAR-10, and a precision of 0.71 and a recall of 0.94 on CLIP/Tiny-ImageNet.
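The detection rule described above can be sketched in a few lines. This is an illustrative reconstruction, not the authors' implementation: the function names (`koala_detect`, `kl_divergence`, `l0_similarity`), the class-prototype representation, the softmax normalization used to turn embeddings into probability vectors for the KL metric, and the agreement tolerance `tol` are all assumptions made for the sake of the example.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    # KL(p || q) between probability vectors; sensitive to dense,
    # low-amplitude shifts across many coordinates.
    p = np.clip(p, eps, None)
    q = np.clip(q, eps, None)
    return float(np.sum(p * np.log(p / q)))

def l0_similarity(a, b, tol=1e-3):
    # Fraction of coordinates that approximately agree; sensitive to
    # sparse, high-impact changes (an L0-style notion of similarity).
    return float(np.mean(np.abs(a - b) <= tol))

def koala_detect(embedding, prototypes, tol=1e-3):
    """Flag an input as adversarial when the class predictions from the
    two metrics disagree (KoALA's label-agreement principle).

    embedding:  embedding of the test input (hypothetical representation)
    prototypes: one reference embedding per class (hypothetical setup)
    """
    def to_prob(v):
        # Illustrative choice: softmax-normalize embeddings so the
        # KL metric applies to probability vectors.
        e = np.exp(v - v.max())
        return e / e.sum()

    p = to_prob(embedding)
    kl_pred = int(np.argmin([kl_divergence(p, to_prob(c)) for c in prototypes]))
    l0_pred = int(np.argmax([l0_similarity(embedding, c, tol) for c in prototypes]))
    return kl_pred != l0_pred, kl_pred, l0_pred
```

On a clean input whose embedding sits close to one class prototype, both metrics pick the same class and no attack is flagged; a perturbation that shifts one metric's prediction but not the other's triggers detection.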

Siqi Li, Yasser Shoukry • 2025

Related benchmarks

Task:    Adversarial Detection
Dataset: CIFAR-10
Metric:  PGD Detection Rate (ℓ∞ = 2/255)
Result:  71.26
Rank:    7
