
BAM! Born-Again Multi-Task Networks for Natural Language Understanding

About

It can be challenging to train multi-task neural networks that outperform or even match their single-task counterparts. To help address this, we propose using knowledge distillation where single-task models teach a multi-task model. We enhance this training with teacher annealing, a novel method that gradually transitions the model from distillation to supervised learning, helping the multi-task model surpass its single-task teachers. We evaluate our approach by multi-task fine-tuning BERT on the GLUE benchmark. Our method consistently improves over standard single-task and multi-task training.
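To make the teacher-annealing idea concrete, here is a minimal PyTorch-style sketch of an annealed distillation loss under the assumptions suggested by the abstract (classification tasks, soft teacher predictions, a linear schedule). The function and variable names are illustrative placeholders, not identifiers from the authors' code.

```python
import torch
import torch.nn.functional as F

def annealed_distillation_loss(student_logits, teacher_probs, gold_labels,
                               step, total_steps):
    """Cross-entropy against a mixture of gold labels and teacher predictions.

    The mixing weight lambda is annealed linearly from 0 (pure distillation
    from the single-task teacher) to 1 (pure supervised learning), so the
    multi-task student gradually stops relying on its teachers.
    """
    lam = step / total_steps  # linear annealing schedule (assumed)
    num_classes = student_logits.size(-1)
    gold_one_hot = F.one_hot(gold_labels, num_classes=num_classes).float()
    targets = lam * gold_one_hot + (1.0 - lam) * teacher_probs
    log_probs = F.log_softmax(student_logits, dim=-1)
    return -(targets * log_probs).sum(dim=-1).mean()
```

In a multi-task fine-tuning loop, each batch would come from one task, with `teacher_probs` produced by that task's frozen single-task model.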

Kevin Clark, Minh-Thang Luong, Urvashi Khandelwal, Christopher D. Manning, Quoc V. Le • 2019

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Understanding | GLUE (dev) | SST-2 Accuracy | 95.9 | 504
Natural Language Understanding | GLUE (test) | SST-2 Accuracy | 94.9 | 416
Question Answering | SQuAD v1.1 (dev) | F1 | 90.9 | 375
Question Answering | SQuAD 2.0 (test) | Exact Match | 80 | 34
