
Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations

About

In this paper, we propose a novel and practical mechanism that enables a service provider to verify whether a suspect model has been stolen from its victim model via model extraction attacks. Our key insight is that the profile of a DNN model's decision boundary can be uniquely characterized by its Universal Adversarial Perturbations (UAPs). UAPs belong to a low-dimensional subspace, and the subspaces of piracy models are more consistent with the victim model's subspace than those of non-piracy models. Based on this, we propose a UAP-based fingerprinting method for DNN models and train an encoder via contrastive learning that takes fingerprints as inputs and outputs a similarity score. Extensive studies show that our framework can detect model IP breaches with confidence > 99.99% using only 20 fingerprints of the suspect model. It generalizes well across different model architectures and is robust against post-modifications on stolen models.

Zirui Peng, Shaofeng Li, Guoxing Chen, Cheng Zhang, Haojin Zhu, Minhui Xue • 2022
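The sketch below illustrates the two components the abstract describes: crafting UAPs to serve as a model fingerprint, and an encoder whose output similarity acts as the piracy score. It is a minimal illustration, not the authors' implementation; the names (`craft_uap`, `FingerprintEncoder`), the sign-gradient UAP update, and the CIFAR-sized input shapes are assumptions for the example.

```python
# Minimal sketch (not the paper's code) of UAP fingerprinting plus a
# fingerprint encoder. Assumes a CIFAR-style classifier and data loader.
import torch
import torch.nn as nn
import torch.nn.functional as F


def craft_uap(model, loader, eps=8 / 255, steps=200, device="cpu"):
    """Crude sign-gradient UAP: one perturbation that raises the loss on
    many inputs at once. The paper's exact UAP algorithm may differ."""
    model.eval()
    delta = torch.zeros(1, 3, 32, 32, device=device, requires_grad=True)
    data_iter = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(data_iter)
        except StopIteration:
            data_iter = iter(loader)
            x, y = next(data_iter)
        x, y = x.to(device), y.to(device)
        loss = F.cross_entropy(model(x + delta), y)
        loss.backward()
        with torch.no_grad():
            delta += (eps / 10) * delta.grad.sign()  # ascend the loss
            delta.clamp_(-eps, eps)                  # stay inside the eps-ball
        delta.grad.zero_()
    return delta.detach()


class FingerprintEncoder(nn.Module):
    """Maps a stack of UAPs (the fingerprint) to a unit-norm embedding;
    cosine similarity between embeddings plays the role of the score."""

    def __init__(self, n_uaps=20, dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_uaps * 3 * 32 * 32, 512),
            nn.ReLU(),
            nn.Linear(512, dim),
        )

    def forward(self, fingerprint):  # fingerprint: (B, n_uaps, 3, 32, 32)
        return F.normalize(self.net(fingerprint), dim=-1)


def similarity(encoder, fp_suspect, fp_victim):
    """fp_*: (1, n_uaps, 3, 32, 32). A score near 1 suggests the suspect
    model was extracted from the victim."""
    return F.cosine_similarity(encoder(fp_suspect), encoder(fp_victim)).item()
```

In the paper the encoder is trained with contrastive learning, pulling fingerprints of piracy models toward the victim's and pushing those of independently trained models away; that training loop is omitted from this sketch.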

Related benchmarks

Task | Dataset | Result | Rank
Image Classification | CIFAR-100 | Accuracy: 71.1 | 109
Training Data Provenance Verification | CIFAR-10 | Avg AUC: 79.63 | 27
Ownership Verification | Model Extraction Setting (Surrogate Models) | AUC: 79.63 | 24
Image Classification | CIFAR-10 | Accuracy: 91.84 | 24
