
Pre-trained Language Model Representations for Language Generation

About

Pre-trained language model representations have been successful in a wide range of language understanding tasks. In this paper, we examine different strategies to integrate pre-trained representations into sequence to sequence models and apply them to neural machine translation and abstractive summarization. We find that pre-trained representations are most effective when added to the encoder network, which slows inference by only 14%. Our experiments in machine translation show gains of up to 5.3 BLEU in a simulated resource-poor setup. While returns diminish with more labeled data, we still observe improvements when millions of sentence pairs are available. Finally, on abstractive summarization we achieve a new state of the art on the full-text version of CNN/DailyMail.
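The key architectural choice described above is fusing frozen pre-trained language model features into the encoder of a sequence-to-sequence model. Below is a minimal sketch of that idea, not the authors' implementation: the PretrainedLM placeholder, module names, and dimensions are illustrative assumptions standing in for a real pre-trained LM.

```python
# Minimal sketch (not the paper's code): fuse frozen pre-trained LM features
# into a seq2seq encoder by concatenating them with learned token embeddings.
import torch
import torch.nn as nn


class PretrainedLM(nn.Module):
    """Placeholder for a frozen pre-trained LM returning contextual features."""

    def __init__(self, vocab_size: int, lm_dim: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, lm_dim)
        self.lstm = nn.LSTM(lm_dim, lm_dim, batch_first=True)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        out, _ = self.lstm(self.embed(tokens))
        return out  # (batch, src_len, lm_dim)


class EncoderWithLMFeatures(nn.Module):
    """Seq2seq encoder that combines its own embeddings with frozen LM features."""

    def __init__(self, vocab_size: int, emb_dim: int, lm_dim: int, hid_dim: int):
        super().__init__()
        self.lm = PretrainedLM(vocab_size, lm_dim)
        for p in self.lm.parameters():  # keep the pre-trained model frozen
            p.requires_grad = False
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.proj = nn.Linear(emb_dim + lm_dim, hid_dim)  # fuse by concatenation
        self.rnn = nn.LSTM(hid_dim, hid_dim, batch_first=True)

    def forward(self, src_tokens: torch.Tensor) -> torch.Tensor:
        lm_feats = self.lm(src_tokens)  # frozen contextual features
        fused = torch.cat([self.embed(src_tokens), lm_feats], dim=-1)
        enc_out, _ = self.rnn(torch.tanh(self.proj(fused)))
        return enc_out  # passed on to the decoder's attention


if __name__ == "__main__":
    enc = EncoderWithLMFeatures(vocab_size=1000, emb_dim=64, lm_dim=64, hid_dim=128)
    src = torch.randint(0, 1000, (2, 7))  # dummy source batch: (batch=2, src_len=7)
    print(enc(src).shape)  # torch.Size([2, 7, 128])
```

Because only the encoder side changes and the pre-trained features can be computed once per source sentence, this placement adds relatively little decoding overhead, which is consistent with the reported 14% inference slowdown.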

Sergey Edunov, Alexei Baevski, Michael Auli · 2019

Related benchmarks

Task                            Dataset                 Metric    Result   Rank
Abstractive Text Summarization  CNN/Daily Mail (test)   ROUGE-L   38.47    169
Summarization                   CNN/Daily Mail          ROUGE-1   41.56    67
