Rethinking Backdoor Attacks on Dataset Distillation: A Kernel Method Perspective

About

Dataset distillation offers a potential means to enhance data efficiency in deep learning. Recent studies have shown its ability to counteract backdoor risks present in original training samples. In this study, we delve into the theoretical aspects of backdoor attacks and dataset distillation based on kernel methods. We introduce two new theory-driven trigger pattern generation methods specialized for dataset distillation. Following a comprehensive set of analyses and experiments, we show that our optimization-based trigger design framework informs effective backdoor attacks on dataset distillation. Notably, datasets poisoned by our designed trigger prove resilient against conventional backdoor attack detection and mitigation methods. Our empirical results validate that the triggers developed using our approaches are proficient at executing resilient backdoor attacks.
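To make the threat model concrete, the sketch below shows the classic patch-trigger poisoning setup that backdoor attacks build on: stamp a small pattern onto a fraction of training images and relabel them to an attacker-chosen target class. This is a minimal illustration only; the function name and parameters are hypothetical, and the paper's contribution is *optimized* trigger patterns tailored to dataset distillation, not the fixed corner patch used here.

```python
import numpy as np

def apply_patch_trigger(images, labels, target_class,
                        patch_value=1.0, patch_size=3, rate=0.1, seed=0):
    """Poison a fraction `rate` of (image, label) pairs with a corner-patch
    trigger and relabel them to `target_class`.

    Illustrative sketch only -- the paper optimizes trigger patterns rather
    than using a fixed patch like this one.
    """
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()  # leave originals intact
    n_poison = int(rate * len(images))
    idx = rng.choice(len(images), size=n_poison, replace=False)
    # Stamp the trigger into the bottom-right corner of each poisoned image.
    images[idx, -patch_size:, -patch_size:] = patch_value
    labels[idx] = target_class
    return images, labels, idx

# Toy usage: 20 blank grayscale 8x8 images, poison 25% toward class 7.
imgs = np.zeros((20, 8, 8))
labs = np.zeros(20, dtype=int)
p_imgs, p_labs, idx = apply_patch_trigger(imgs, labs, target_class=7, rate=0.25)
```

A model trained on such a poisoned set behaves normally on clean inputs but predicts the target class whenever the trigger is present; the paper's optimization-based triggers aim for the same effect while surviving the distillation step.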

Ming-Yu Chung, Sheng-Yen Chou, Chia-Mu Yu, Pin-Yu Chen, Sy-Yen Kuo, Tsung-Yi Ho • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Backdoor Attack | FMNIST | ASR | 100 | 75
Backdoor Attack | CIFAR10 | ASR | 100 | 70
Backdoor Attack in Dataset Condensation | CIFAR10 | CTA | 72.7 | 43
Backdoor Attack in Dataset Condensation | FMNIST | CTA | 88 | 43
Backdoor Attack in Dataset Condensation | Tiny-ImageNet | CTA | 50.9 | 43
Backdoor Attack in Dataset Condensation | SVHN | CTA | 88 | 43
Backdoor Attack Stealthiness Evaluation | CIFAR10 | SSIM | 0.69 | 40
Backdoor Attack in Dataset Condensation | STL10 | CTA | 72.38 | 31
Backdoor Attack | SVHN | ASR | 100 | 27
Backdoor Attack | STL10 | ASR | 100 | 21

ASR = Attack Success Rate; CTA = Clean Trigger Accuracy; SSIM = Structural Similarity Index. Showing 10 of 14 rows.
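The leaderboard metrics above can be computed as simple accuracy-style rates. The sketch below shows the usual definitions: ASR is the fraction of triggered test inputs classified as the attacker's target class, while clean-side accuracy measures behavior on trigger-free inputs. The function names are hypothetical, and the exact protocol behind the page's "Clean Trigger Accuracy" numbers is an assumption on my part; this is a generic illustration, not the benchmark's official evaluation code.

```python
import numpy as np

def attack_success_rate(pred_on_triggered, target_class):
    """ASR (%): fraction of triggered inputs predicted as the target class."""
    pred = np.asarray(pred_on_triggered)
    return 100.0 * np.mean(pred == target_class)

def clean_accuracy(pred_on_clean, true_labels):
    """Accuracy (%) on clean, trigger-free inputs.
    A stealthy backdoor keeps this high while ASR stays near 100."""
    pred = np.asarray(pred_on_clean)
    true = np.asarray(true_labels)
    return 100.0 * np.mean(pred == true)

# Toy usage: 3 of 4 triggered inputs flipped to target class 7.
asr = attack_success_rate([7, 7, 7, 2], target_class=7)   # -> 75.0
acc = clean_accuracy([0, 1, 2, 3], [0, 1, 2, 0])          # -> 75.0
```

An ASR of 100 with high clean accuracy, as in several rows above, indicates the trigger fully controls predictions without degrading normal performance.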
