
Distilling Robustness into Natural Language Inference Models with Domain-Targeted Augmentation

About

Knowledge distillation optimises a smaller student model to behave similarly to a larger teacher model, retaining some of the teacher's performance benefits. While this method can improve results on in-distribution examples, it does not necessarily generalise to out-of-distribution (OOD) settings. We investigate two complementary methods for improving the robustness of the resulting student models on OOD domains. The first approach augments the distillation data with generated unlabelled examples that match the target distribution. The second upsamples the training examples that are most similar to the target distribution. When applied to the task of natural language inference (NLI), our experiments on MNLI show that distillation with these modifications outperforms previous robustness solutions. We also find that these methods improve performance on OOD domains even beyond the target domain.
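The first method relies on the standard distillation objective being applicable to unlabelled data: the student matches the teacher's soft predictions, so generated target-domain examples need no gold labels. A minimal sketch of such an objective (hedged: the temperature, the mixing weight `alpha`, and the exact loss form are illustrative assumptions, not the paper's reported configuration):

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of class logits.
    exps = [math.exp(l / temperature) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, gold_label=None,
                      temperature=2.0, alpha=0.5):
    """Soft-label distillation loss, optionally mixed with gold-label CE.

    For generated unlabelled examples (gold_label=None), only the
    teacher-matching term applies -- this is what lets augmentation
    with unlabelled target-domain data plug into distillation.
    """
    p_teacher = softmax(teacher_logits, temperature)
    p_student = softmax(student_logits, temperature)
    # KL(teacher || student), scaled by T^2 as is conventional so that
    # gradient magnitudes stay comparable across temperatures.
    kl = sum(pt * math.log(pt / ps) for pt, ps in zip(p_teacher, p_student))
    soft_loss = (temperature ** 2) * kl
    if gold_label is None:
        return soft_loss
    # Labelled examples additionally use cross-entropy on the gold label.
    hard_loss = -math.log(softmax(student_logits)[gold_label])
    return alpha * hard_loss + (1 - alpha) * soft_loss
```

When the student already matches the teacher exactly, the unlabelled-example loss is zero; it grows as the student's distribution drifts from the teacher's. The second method (upsampling) would reweight labelled examples in this same objective according to their similarity to the target distribution.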

Joe Stacey, Marek Rei • 2023

Related benchmarks

Task                         Dataset                 Metric    Result  Rank
Natural Language Inference   SNLI (test)             Accuracy  80.51   681
Natural Language Inference   MNLI (matched)          Accuracy  85.77   110
Natural Language Inference   SNLI (dev)              Accuracy  80.16   71
Natural Language Inference   MNLI (mismatched)       Accuracy  86.18   68
Natural Language Inference   SNLI hard 1.0 (test)    Accuracy  66.04   27
Natural Language Inference   HANS                    Accuracy  68.3    23
Natural Language Inference   MNLI (all combined)     Accuracy  85.98   12
Natural Language Inference   MNLI mismatched (val)   Accuracy  59.94   9

Other info

Code
