
Invisible Clean-Label Backdoor Attacks for Generative Data Augmentation

About

With the rapid advancement of image generative models, generative data augmentation has become an effective way to enrich training images, especially when only small-scale datasets are available. At the same time, in practical applications, generative data augmentation can be vulnerable to clean-label backdoor attacks, which aim to bypass human inspection. However, based on theoretical analysis and preliminary experiments, we observe that directly applying existing pixel-level clean-label backdoor attack methods (e.g., COMBAT) to generated images results in low attack success rates. This motivates us to move beyond pixel-level triggers and focus instead on the latent feature level. To this end, we propose InvLBA, an invisible clean-label backdoor attack method for generative data augmentation based on latent perturbation. We theoretically prove generalization guarantees for both the clean accuracy and the attack success rate of InvLBA. Experiments on multiple datasets show that our method improves the attack success rate by 46.43% on average, with almost no reduction in clean accuracy, while remaining highly robust against SOTA defense methods.
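To make the latent-perturbation idea concrete, below is a minimal, hypothetical sketch of how a clean-label, latent-level trigger could be injected into generated augmentation images. The paper does not publish its implementation on this page, so all names here (LatentTrigger, decoder, poison_rate, etc.) are illustrative assumptions, not the actual InvLBA code: the sketch only shows the general pattern of perturbing target-class latents before decoding while leaving their labels untouched.

```python
# Hypothetical sketch of latent-level, clean-label poisoning for generative
# data augmentation. Not the paper's InvLBA implementation; names and
# hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn


class LatentTrigger(nn.Module):
    """A small learnable perturbation applied in the generator's latent space."""

    def __init__(self, latent_dim: int, epsilon: float = 0.05):
        super().__init__()
        self.delta = nn.Parameter(torch.zeros(latent_dim))
        self.epsilon = epsilon  # bound that keeps decoded images visually close to clean ones

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        # Bound the perturbation so the trigger stays invisible after decoding.
        return z + self.epsilon * torch.tanh(self.delta)


def augment_with_backdoor(decoder, trigger, z_batch, labels, target_class, poison_rate=0.1):
    """Decode latents into augmentation images; perturb a fraction of the
    target-class latents with the trigger. Labels are never changed, which is
    what makes the attack clean-label."""
    z_poisoned = z_batch.clone()
    target_idx = (labels == target_class).nonzero(as_tuple=True)[0]
    n_poison = int(poison_rate * len(target_idx))
    chosen = target_idx[torch.randperm(len(target_idx))[:n_poison]]
    z_poisoned[chosen] = trigger(z_batch[chosen])
    images = decoder(z_poisoned)  # generated (partly poisoned) augmentation images
    return images, labels
```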

Ting Xiang, Jinhui Zhao, Changjian Chen, Zhuo Tang • 2026

Related benchmarks

Task            | Dataset       | Result (Clean Attack Drop, CAD) | Rank
Backdoor Attack | Pets          | -1.46                           | 13
Backdoor Attack | CelebA-S      | 0.44                            | 13
Backdoor Attack | ImageNet 10-S | -0.2                            | 13
Backdoor Attack | CIFAR-10-S    | 0.27                            | 13
Backdoor Attack | Cars          | 0.43                            | 13
Backdoor Attack | Caltech-101   | 67                              | 13
