
Sequence-Level Knowledge Distillation

About

Neural machine translation (NMT) offers a novel alternative formulation of translation that is potentially simpler than statistical approaches. However, to reach competitive performance, NMT models need to be exceedingly large. In this paper we consider applying knowledge distillation approaches (Bucila et al., 2006; Hinton et al., 2015) that have proven successful for reducing the size of neural models in other domains to the problem of NMT. We demonstrate that standard knowledge distillation applied to word-level prediction can be effective for NMT, and also introduce two novel sequence-level versions of knowledge distillation that further improve performance and, somewhat surprisingly, seem to eliminate the need for beam search (even when applied on the original teacher model). Our best student model runs 10 times faster than its state-of-the-art teacher with little loss in performance. It is also significantly better than a baseline model trained without knowledge distillation: by 4.2/1.7 BLEU with greedy decoding/beam search. Applying weight pruning on top of knowledge distillation results in a student model that has 13 times fewer parameters than the original teacher model, with a decrease of 0.4 BLEU.
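The word-level distillation the abstract refers to trains the student to match the teacher's full per-position distribution over the vocabulary rather than the one-hot reference word. A minimal pure-Python sketch of that objective is below; the function name and toy shapes are illustrative, not from the paper:

```python
import math

def word_level_kd_loss(teacher_probs, student_logits):
    """Word-level knowledge distillation loss for one target sequence:
    cross-entropy between the teacher's soft distribution q(w) and the
    student's predicted distribution p(w), summed over target positions.

    teacher_probs:  list of per-position teacher distributions over the vocab
    student_logits: list of per-position unnormalized student scores
    """
    loss = 0.0
    for q, logits in zip(teacher_probs, student_logits):
        # Numerically stable log-softmax over the student logits.
        m = max(logits)
        exps = [math.exp(z - m) for z in logits]
        total = sum(exps)
        log_p = [math.log(e / total) for e in exps]
        # -sum_w q(w) * log p(w) at this position.
        loss += -sum(qw * lp for qw, lp in zip(q, log_p))
    return loss
```

Sequence-level distillation, by contrast, replaces the reference translations with the teacher's beam-search outputs and trains the student on those with ordinary cross-entropy, which is why the student no longer needs beam search at test time.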

Yoon Kim, Alexander M. Rush • 2016

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 51.33 | 1460
Mathematical Reasoning | GSM8K | Accuracy | 60.94 | 983
Automatic Speech Recognition | LibriSpeech (test-other) | WER | 17.36 | 966
Automatic Speech Recognition | LibriSpeech clean (test) | WER | 4.23 | 833
Commonsense Reasoning | WinoGrande | Accuracy | 62.19 | 776
Commonsense Reasoning | PIQA | Accuracy | 71.55 | 647
Question Answering | OpenBookQA | Accuracy | 30.4 | 465
Automatic Speech Recognition | LibriSpeech (dev-other) | WER | 17 | 411
Machine Translation | WMT En-De 2014 (test) | BLEU | 28.22 | 379
Multi-turn Dialogue Evaluation | MT-Bench | Overall Score | 6.88 | 331
Showing 10 of 68 rows
