
Don't Take the Easy Way Out: Ensemble Based Methods for Avoiding Known Dataset Biases

About

State-of-the-art models often make use of superficial patterns in the data that do not generalize well to out-of-domain or adversarial settings. For example, textual entailment models often learn that particular key words imply entailment, irrespective of context, and visual question answering models learn to predict prototypical answers, without considering evidence in the image. In this paper, we show that if we have prior knowledge of such biases, we can train a model to be more robust to domain shift. Our method has two stages: we (1) train a naive model that makes predictions exclusively based on dataset biases, and (2) train a robust model as part of an ensemble with the naive one in order to encourage it to focus on other patterns in the data that are more likely to generalize. Experiments on five datasets with out-of-domain test sets show significantly improved robustness in all settings, including a 12 point gain on a changing priors visual question answering dataset and a 9 point gain on an adversarial question answering test set.
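The two-stage recipe above can be sketched as a product-of-experts training loss: the frozen bias-only model's log-probabilities are added to the robust model's log-probabilities before the cross-entropy is applied, so examples the bias model already answers confidently contribute little gradient. This is a minimal NumPy sketch under that assumption; the function names and toy setup are illustrative, not the authors' released code.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the last axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def ensemble_nll(robust_logits, bias_log_probs, labels):
    """Stage-(2) ensemble loss (product-of-experts form).

    Combines the robust model's log-probabilities with the frozen
    bias-only model's log-probabilities, then takes cross-entropy.
    In training, gradients flow only into `robust_logits`, pushing
    the robust model to explain what the bias model cannot.
    """
    combined = log_softmax(log_softmax(robust_logits) + bias_log_probs)
    return -combined[np.arange(len(labels)), labels].mean()
```

At test time the bias term is dropped and the robust model predicts on its own. Note that when the bias model is confident and correct on an example, `ensemble_nll` is already near zero, so the robust model is not rewarded for learning that shortcut.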

Christopher Clark, Mark Yatskar, Luke Zettlemoyer • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet-1k (val) | Top-1 Acc | 94.6 | 706 |
| Image Classification | ImageNet-A (test) | -- | -- | 154 |
| Visual Question Answering | VQA-CP v2 (test) | Overall Accuracy | 52.73 | 109 |
| Visual Question Answering | VQA v2 (val) | Accuracy | 63.26 | 99 |
| Natural Language Inference | MultiNLI matched (test) | Accuracy | 79.5 | 65 |
| Visual Question Answering | VQA (val) | Overall Accuracy | 59.74 | 55 |
| Natural Language Inference | HANS (test) | Accuracy | 69.2 | 54 |
| Natural Language Inference | HANS (val) | Accuracy | 82.8 | 28 |
| Visual Question Answering | VQA-CP v1 (test) | Accuracy (Overall) | 55.27 | 26 |
| Image Classification | ImageNet 9 (val) | Top-1 Accuracy | 64.1 | 26 |

Showing 10 of 33 rows.

Other info

Code
