BAE: BERT-based Adversarial Examples for Text Classification
About
Modern text classification models are susceptible to adversarial examples: perturbed versions of the original text, indiscernible to humans, which get misclassified by the model. Recent works in NLP use rule-based synonym replacement strategies to generate adversarial examples. These strategies can lead to out-of-context and unnaturally complex token replacements, which are easily identifiable by humans. We present BAE, a black-box attack for generating adversarial examples using contextual perturbations from a BERT masked language model. BAE replaces and inserts tokens in the original text by masking a portion of the text and leveraging the BERT-MLM to generate alternatives for the masked tokens. Through automatic and human evaluations, we show that BAE performs a stronger attack, in addition to generating adversarial examples with improved grammaticality and semantic coherence as compared to prior work.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Question Classification | TREC | Accuracy | 97.6 | 205 |
| Counterfactual Generation | SNLI Hypothesis | LFR | 66.5 | 37 |
| Counterfactual Generation | SNLI Premise | LFR | 0.518 | 37 |
| Counterfactual Generation | AG-News | LFR | 0.443 | 37 |
| Counterfactual Generation | IMDB | LFR | 63.7 | 37 |
| Text Classification | Emotion | ASR (%) | 0.3295 | 36 |
| Counterfactual Generation | SST2 (test) | SLFR | 47 | 29 |
| Counterfactual Generation | AG News (test) | SLFR | 19.5 | 29 |
| Question Answering | HotpotQA (train test) | BLEU | 50.99 | 4 |
| Question Answering | TruthfulQA (train test) | BLEU | 0.5167 | 4 |