Practical Detection of Trojan Neural Networks: Data-Limited and Data-Free Cases
About
When training data are maliciously tampered with, the predictions of the resulting deep neural network (DNN) can be manipulated by an adversary; this is known as the Trojan attack (or poisoning-based backdoor attack). The lack of robustness of DNNs against Trojan attacks could significantly harm real-life machine learning (ML) systems in downstream applications, raising widespread concern about their trustworthiness. In this paper, we study the problem of Trojan network (TrojanNet) detection in the data-scarce regime, where the detector can access only the weights of a trained DNN. We first propose a data-limited TrojanNet detector (TND), applicable when only a few data samples are available for detection. We show that an effective data-limited TND can be established by exploring connections between the Trojan attack and prediction-evasion adversarial attacks, including the per-sample attack and the all-sample universal attack. In addition, we propose a data-free TND, which can detect a TrojanNet without accessing any data samples. We show that such a TND can be built by leveraging the internal responses of hidden neurons, which exhibit the Trojan behavior even for random noise inputs. The effectiveness of our proposals is evaluated by extensive experiments across different model architectures and datasets, including CIFAR-10, GTSRB, and ImageNet.
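To make the data-free idea concrete, here is a minimal, illustrative sketch (not the paper's implementation) of probing hidden neurons without any real data: starting from random noise, we ascend the input toward a single hidden neuron's activation in a toy one-hidden-layer ReLU network, then record which output class each recovered input pushes hardest. In a Trojaned model, such neuron-maximizing inputs tend to concentrate their logit shift on one (target) class; the toy network, its dimensions, and the voting heuristic below are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_hid, n_cls = 16, 8, 4                  # toy sizes, chosen arbitrarily
W1 = rng.normal(size=(d_hid, d_in)) * 0.5      # input -> hidden weights
W2 = rng.normal(size=(n_cls, d_hid)) * 0.5     # hidden -> logits weights

def hidden(x):
    """ReLU activations of the hidden layer."""
    return np.maximum(W1 @ x, 0.0)

def maximize_neuron(k, steps=200, lr=0.1):
    """Gradient ascent on neuron k's pre-activation w.r.t. the input,
    starting from random noise (no training data needed)."""
    x = rng.normal(size=d_in) * 0.01
    for _ in range(steps):
        x += lr * W1[k]                        # d(W1[k] @ x)/dx = W1[k]
        x = np.clip(x, -1.0, 1.0)              # keep input in a valid range
    return x

# Probe every hidden neuron; a strong consensus of these votes on one
# class would be a coarse Trojan signal in this toy setting.
votes = []
for k in range(d_hid):
    x_k = maximize_neuron(k)
    logits = W2 @ hidden(x_k)
    votes.append(int(np.argmax(logits)))
print(votes)
```

The per-sample and universal adversarial attacks used by the data-limited TND replace the per-neuron ascent above with perturbation searches over the few available samples, but the detection logic is analogous: look for a perturbation whose effect concentrates on a single target label.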
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Trojaned Model Detection | CIFAR10 Densenet121 (test) | Accuracy: 59 | 5 |
| Trojaned Model Detection | MNIST LeNet5 (test) | Accuracy: 55 | 5 |
| Trojaned Model Detection | MNIST Resnet18 (test) | Accuracy: 53 | 5 |
| Trojaned Model Detection | CIFAR10 Resnet18 (test) | Accuracy: 51 | 5 |
| Trojan Detection | IARPA/NIST TrojAI DenseNet Round 1 | Accuracy: 49 | 4 |
| Trojan Detection | IARPA/NIST TrojAI ResNet Round 1 | Accuracy: 38 | 4 |