
Does Pretraining for Summarization Require Knowledge Transfer?

About

Pretraining techniques leveraging enormous datasets have driven recent advances in text summarization. While folk explanations suggest that knowledge transfer accounts for pretraining's benefits, little is known about why it works or what makes a pretraining task or dataset suitable. In this paper, we challenge the knowledge transfer story, showing that by pretraining on documents consisting of character n-grams selected at random, we can nearly match the performance of models pretrained on real corpora. This work holds the promise of eliminating upstream corpora, which may alleviate some concerns over offensive language, bias, and copyright issues. To see whether the small residual benefit of using real data could be accounted for by the structure of the pretraining task, we design several tasks motivated by a qualitative study of summarization corpora. However, these tasks confer no appreciable benefit, leaving open the possibility of a small role for knowledge transfer.
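The abstract's core idea is pretraining on synthetic documents built from randomly selected character n-grams rather than real text. The paper's exact generation procedure is not given here; the sketch below illustrates one plausible version, where a fixed vocabulary of random n-grams is sampled first and documents are then composed from it (all parameter names and defaults are assumptions for illustration).

```python
import random
import string


def random_ngram_documents(num_docs, doc_len_ngrams=200, n=5,
                           vocab_size=500, seed=0):
    """Sketch: generate synthetic pretraining documents from random
    character n-grams. Illustrative only; not the paper's exact recipe."""
    rng = random.Random(seed)
    # Fixed vocabulary of random n-grams over lowercase letters.
    vocab = ["".join(rng.choices(string.ascii_lowercase, k=n))
             for _ in range(vocab_size)]
    # Each document is a sequence of n-grams sampled uniformly from the vocabulary.
    return [" ".join(rng.choices(vocab, k=doc_len_ngrams))
            for _ in range(num_docs)]


docs = random_ngram_documents(3)
```

Such a corpus contains no real-world knowledge by construction, which is what lets the paper use it to probe whether knowledge transfer is actually needed for pretraining to help summarization.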

Kundan Krishna, Jeffrey Bigham, Zachary C. Lipton • 2021

Related benchmarks

Task                          | Dataset                             | Result               | Rank
Machine Reading Comprehension | SQuAD 1.1 (test)                    | EM 50.4              | 46
Semantic Parsing              | mTOP (test)                         | --                   | 17
Pre-training Evaluation       | Aggregated Downstream Tasks (test)  | Average EM 55.4      | 8
Retrosynthesis                | USPTO Retrosynthesis 50K (test)     | EM 41.1              | 8
Semantic Parsing              | WEBQSP (test)                       | EM 75.2              | 8
Summarization                 | CNNDM 10K (test)                    | ROUGE-1 33.2         | 8
Code Translation              | Code Trans. (test)                  | Exact Match (EM) 59  | 8
