
Mixout: Effective Regularization to Finetune Large-scale Pretrained Language Models

About

In natural language processing, it has been observed recently that generalization could be greatly improved by finetuning a large-scale language model pretrained on a large unlabeled corpus. Despite its recent success and wide adoption, finetuning a large pretrained language model on a downstream task is prone to degenerate performance when there are only a small number of training instances available. In this paper, we introduce a new regularization technique, to which we refer as "mixout", motivated by dropout. Mixout stochastically mixes the parameters of two models. We show that our mixout technique regularizes learning to minimize the deviation from one of the two models and that the strength of regularization adapts along the optimization trajectory. We empirically evaluate the proposed mixout and its variants on finetuning a pretrained language model on downstream tasks. More specifically, we demonstrate that the stability of finetuning and the average accuracy greatly increase when we use the proposed approach to regularize finetuning of BERT on downstream tasks in GLUE.

Cheolhyoung Lee, Kyunghyun Cho, Wanmo Kang • 2019
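
The abstract describes mixout as stochastically mixing the parameters of the model being finetuned with those of the pretrained model. Below is a minimal PyTorch-style sketch of that mixing step, assuming an element-wise Bernoulli mask and an inverted-dropout-style rescaling so the expected output equals the current weights; the function name, tensor shapes, and the probability p = 0.7 are illustrative and not taken from the authors' released code.

```python
import torch

def mixout(weight, target, p=0.1, training=True):
    """Stochastically mix `weight` with pretrained `target` parameters.

    With probability p each element of `weight` is replaced by the
    corresponding element of `target`; the result is then shifted and
    rescaled so that its expectation equals `weight`, analogous to
    inverted dropout (dropout is the special case target = 0).
    """
    if not training or p == 0.0:
        return weight
    mask = torch.empty_like(weight).bernoulli_(p)   # 1 -> take the pretrained value
    mixed = mask * target + (1.0 - mask) * weight   # element-wise mixing of the two models
    return (mixed - p * target) / (1.0 - p)         # rescale so E[output] = weight

# Hypothetical usage: nudge a finetuned layer back toward its pretrained weights.
pretrained_w = torch.randn(768, 768)                     # weights from the pretrained model
finetuned_w = pretrained_w + 0.01 * torch.randn(768, 768)
regularized_w = mixout(finetuned_w, pretrained_w, p=0.7, training=True)
```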

Related benchmarks

Task                           | Dataset                          | Metric        | Result | Rank
Natural Language Inference    | SNLI (test)                      | Accuracy      | 87.1   | 681
Natural Language Inference    | SNLI                             | Accuracy      | 87.1   | 174
Natural Language Understanding| GLUE (val)                       | --            | --     | 170
Text Classification           | IMDB (test)                      | CA            | 79     | 79
Sentiment Classification      | IMDB                             | Accuracy      | 79     | 41
Sentiment Analysis            | IMDB (test)                      | Genetic Score | 75.4   | 10
Natural Language Inference    | SNLI 1000 random examples (test) | Genetic Score | 82.6   | 5
