
HiddenCut: Simple Data Augmentation for Natural Language Understanding with Better Generalization

About

Fine-tuning large pre-trained models with task-specific data has achieved great success in NLP. However, it has been demonstrated that the majority of information within the self-attention networks is redundant and not utilized effectively during the fine-tuning stage. This leads to inferior results when generalizing the obtained models to out-of-domain distributions. To this end, we propose a simple yet effective data augmentation technique, HiddenCut, to better regularize the model and encourage it to learn more generalizable features. Specifically, contiguous spans within the hidden space are dynamically and strategically dropped during training. Experiments show that our HiddenCut method outperforms state-of-the-art augmentation methods on the GLUE benchmark, and consistently exhibits superior generalization performance on out-of-distribution data and challenging counterexamples. We have publicly released our code at https://github.com/GT-SALT/HiddenCut.
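The core operation described above (dropping contiguous spans within the hidden space during training) can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: the function name, the `cut_ratio` parameter, and the uniform choice of span position are assumptions, and the paper additionally explores strategic, attention-informed span selection rather than purely random sampling.

```python
import numpy as np

def hidden_cut(hidden, cut_ratio=0.1, rng=None):
    """Zero out one contiguous span of positions in a hidden-state matrix.

    `hidden` is a (seq_len, dim) array of hidden representations for one
    sequence. A span covering ceil(cut_ratio * seq_len) positions is chosen
    uniformly at random and masked to zero, which is applied only during
    training (like dropout, the mask is not used at inference time).
    """
    rng = rng or np.random.default_rng()
    seq_len = hidden.shape[0]
    span = max(1, int(np.ceil(cut_ratio * seq_len)))
    start = int(rng.integers(0, seq_len - span + 1))
    out = hidden.copy()
    out[start:start + span] = 0.0  # drop the contiguous span
    return out

# Example: a 10-token sequence with 4-dimensional hidden states.
states = np.ones((10, 4))
cut = hidden_cut(states, cut_ratio=0.3)
# Exactly ceil(0.3 * 10) = 3 rows are zeroed; the rest are unchanged.
```

In practice this masking would be applied inside the transformer's forward pass at a chosen layer, with the rest of fine-tuning proceeding as usual.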

Jiaao Chen, Dinghan Shen, Weizhu Chen, Diyi Yang • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Natural Language Understanding | GLUE (dev) | SST-2 (Acc) 95.8 | 504 |
| Natural Language Understanding | GLUE (test dev) | MRPC (Acc) 90.69 | 81 |
| Natural language generation | E2E NLG Challenge | BLEU 69.22 | 58 |
| Natural language generation | WebNLG unseen categories | BLEU 46.11 | 17 |
| Natural language generation | WebNLG all categories (test) | BLEU 55.06 | 13 |
| Natural language generation | WebNLG Seen (test) | BLEU 62.43 | 6 |
