
Generating Natural Language Attacks in a Hard Label Black Box Setting

About

We study the important and challenging task of attacking natural language processing models in a hard label black box setting. We propose a decision-based attack strategy that crafts high quality adversarial examples on text classification and entailment tasks. Our attack strategy leverages a population-based optimization algorithm to craft plausible and semantically similar adversarial examples while observing only the top label predicted by the target model. At each iteration, the optimization procedure allows word replacements that maximize the overall semantic similarity between the original and the adversarial text. Further, our approach does not rely on substitute models or any kind of training data. We demonstrate the efficacy of the proposed approach through extensive experimentation and ablation studies on five state-of-the-art target models across seven benchmark datasets. Compared to attacks proposed in prior literature, we achieve a higher success rate with a lower word perturbation percentage, despite operating in this highly restricted setting.
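The loop described in the abstract — initialize an adversarial candidate by random word replacements, then iteratively allow replacements that keep the label flipped while maximizing similarity to the original text — can be sketched in miniature. Everything below is illustrative, not the authors' implementation: `classify` is a toy stand-in for the black-box target model (only its top label is observed), `SYNONYMS` is a hypothetical substitution table, and `similarity` is a crude word-overlap proxy for semantic similarity.

```python
import random

# Toy hard-label classifier: returns only the top label (1 if positive
# words dominate, else 0). Stands in for the real black-box target model.
POSITIVE = {"good", "great", "fine"}

def classify(words):
    return 1 if sum(w in POSITIVE for w in words) > len(words) / 2 else 0

# Hypothetical synonym table; a real attack would draw candidates from
# counter-fitted word embeddings or a thesaurus.
SYNONYMS = {"good": ["great", "fine", "bad"], "movie": ["film"],
            "bad": ["poor", "good"]}

def similarity(a, b):
    # Crude proxy for semantic similarity: fraction of unchanged words.
    return sum(x == y for x, y in zip(a, b)) / len(a)

def hard_label_attack(words, rounds=200, pop_size=8, seed=0):
    rng = random.Random(seed)
    orig_label = classify(words)

    # 1) Initialization: replace words at random until the label flips.
    def random_adv():
        cand = [rng.choice(SYNONYMS[w]) if w in SYNONYMS else w
                for w in words]
        return cand if classify(cand) != orig_label else None

    population = []
    for _ in range(200):
        cand = random_adv()
        if cand is not None:
            population.append(cand)
        if len(population) >= pop_size:
            break
    if not population:
        return None  # no adversarial starting point found

    # 2) Optimization: mutate population members (often restoring the
    #    original word, which pushes similarity up), keep only candidates
    #    that remain adversarial, and retain the most similar ones.
    for _ in range(rounds):
        child = list(rng.choice(population))
        i = rng.randrange(len(child))
        child[i] = (words[i] if rng.random() < 0.5
                    else rng.choice(SYNONYMS.get(words[i], [words[i]])))
        if classify(child) != orig_label:
            population.append(child)
            population.sort(key=lambda c: similarity(words, c),
                            reverse=True)
            population = population[:pop_size]

    return population[0]  # adversarial text closest to the original
```

Note the two-phase structure: the random initialization needs only label flips (many queries, low quality), while the optimization phase spends its query budget recovering similarity to the original text, which is what keeps the final adversarial example plausible.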

Rishabh Maheshwary, Saket Maheshwary, Vikram Pudi • 2020

Related benchmarks

| Task               | Dataset | ASR (Attack Success Rate) | Rank |
|--------------------|---------|---------------------------|------|
| Adversarial Attack | Yelp    | 11.9                      | 49   |
| Adversarial Attack | AMAZON  | 16.8                      | 22   |
| Adversarial Attack | MR      | 41.3                      | 22   |
| Adversarial Attack | Yahoo   | 39.1                      | 22   |
| Adversarial Attack | SST-2   | 37.3                      | 22   |
| Textual Entailment | SNLI    | 23.6                      | 8    |
| Textual Entailment | MNLI-mm | 46.3                      | 8    |
| Textual Entailment | MNLI-m  | 43.6                      | 8    |
