
SSMBA: Self-Supervised Manifold Based Data Augmentation for Improving Out-of-Domain Robustness

About

Models that perform well on a training domain often fail to generalize to out-of-domain (OOD) examples. Data augmentation is a common method used to prevent overfitting and improve OOD generalization. However, in natural language, it is difficult to generate new examples that stay on the underlying data manifold. We introduce SSMBA, a data augmentation method for generating synthetic training examples by using a pair of corruption and reconstruction functions to move randomly on a data manifold. We investigate the use of SSMBA in the natural language domain, leveraging the manifold assumption to reconstruct corrupted text with masked language models. In experiments on robustness benchmarks across 3 tasks and 9 datasets, SSMBA consistently outperforms existing data augmentation methods and baseline models on both in-domain and OOD data, achieving gains of 0.8% accuracy on OOD Amazon reviews, 1.8% accuracy on OOD MNLI, and 1.4 BLEU on in-domain IWSLT14 German-English.
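The corrupt-and-reconstruct procedure described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the function names (`ssmba_augment`, `reconstruct_fn`, `toy_reconstruct`) and parameters (`mask_prob`, `num_samples`) are assumptions for the example, and a real application would plug in a masked language model such as BERT as the reconstruction function.

```python
import random

def ssmba_augment(tokens, reconstruct_fn, mask_prob=0.15, mask_token="<mask>",
                  num_samples=4, seed=0):
    """SSMBA-style augmentation sketch (hypothetical interface):
    corrupt a sentence by masking random tokens, then reconstruct the
    masks to produce new examples near the original on the data manifold."""
    rng = random.Random(seed)
    augmented = []
    for _ in range(num_samples):
        # Corruption: mask each token independently with prob mask_prob,
        # forcing at least one mask so reconstruction has something to do.
        corrupted = [mask_token if rng.random() < mask_prob else t
                     for t in tokens]
        if mask_token not in corrupted:
            corrupted[rng.randrange(len(corrupted))] = mask_token
        # Reconstruction: in SSMBA this is a masked language model that
        # fills in the masks; here it is any callable with that behavior.
        augmented.append(reconstruct_fn(corrupted))
    return augmented

# Toy stand-in for a real MLM: fills every mask with a fixed word.
def toy_reconstruct(corrupted):
    return [t if t != "<mask>" else "movie" for t in corrupted]

samples = ssmba_augment("the film was great".split(), toy_reconstruct)
```

The generated `samples` would then be added to the training set alongside the original examples; the paper pairs each synthetic input with a label produced by either preserving the original label or pseudo-labeling with a teacher model.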

Nathan Ng, Kyunghyun Cho, Marzyeh Ghassemi • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Machine Translation | WMT En-De 2014 (test) | BLEU | 27.16 | 379 |
| Question Answering | SQuAD v1.1 (dev) | F1 Score | 32.01 | 375 |
| Question Answering | SQuAD v1.1 (test) | F1 Score | 84.19 | 260 |
| Machine Translation | IWSLT De-En 2014 (test) | BLEU | 34.32 | 146 |
| Question Answering | SQuAD (test) | F1 Score | 86.53 | 111 |
| Summarization | XSum | ROUGE-2 | 14.81 | 108 |
| Question Answering | NewsQA (dev) | F1 Score | 60.34 | 101 |
| Machine Translation | WMT Ro-En 2016 (test) | BLEU | 28.48 | 82 |
| Sequence Classification | Huffpost low-resource (test) | Micro F1 | 81.11 | 64 |
| Sequence Classification | MASSIVE | Micro F1 | 77.16 | 64 |

Showing 10 of 35 benchmark results.
