
NLP From Scratch Without Large-Scale Pretraining: A Simple and Efficient Framework

About

Pretrained language models have become the standard approach for many NLP tasks due to strong performance, but they are very expensive to train. We propose a simple and efficient learning framework, TLM, that does not rely on large-scale pretraining. Given some labeled task data and a large general corpus, TLM uses task data as queries to retrieve a tiny subset of the general corpus and jointly optimizes the task objective and the language modeling objective from scratch. On eight classification datasets in four domains, TLM achieves results better than or similar to pretrained language models (e.g., RoBERTa-Large) while reducing the training FLOPs by two orders of magnitude. With high accuracy and efficiency, we hope TLM will contribute to democratizing NLP and expediting its development.
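The two stages described above — retrieving a small task-relevant subset of the general corpus, then jointly optimizing the task and language modeling objectives — can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: it substitutes a simple token-overlap score for the paper's retriever, and `retrieve`, `joint_loss`, and `rho` are hypothetical names.

```python
from collections import Counter

def retrieve(task_texts, general_corpus, k):
    """Score each general-corpus document by token overlap with the
    labeled task data and keep the top-k subset.
    (Stand-in for the sparse retrieval TLM actually uses.)"""
    query_tokens = Counter()
    for text in task_texts:
        query_tokens.update(text.lower().split())

    def score(doc):
        # Sum query-token counts over the document's unique tokens.
        return sum(query_tokens[tok] for tok in set(doc.lower().split()))

    return sorted(general_corpus, key=score, reverse=True)[:k]

def joint_loss(task_loss, lm_loss, rho):
    """Joint from-scratch objective: supervised task loss plus a
    weighted language modeling loss on the retrieved subset
    (rho is an assumed weighting hyperparameter)."""
    return task_loss + rho * lm_loss

# Toy example: task data about markets pulls in the relevant documents.
corpus = [
    "stock markets fell sharply today",
    "the recipe calls for two eggs",
    "tech stocks rally as markets rebound",
]
subset = retrieve(["markets and stocks news"], corpus, k=2)
```

In the toy run, the two finance-related documents outscore the cooking one, so only they enter the joint training stage — mirroring how TLM trains on a tiny retrieved slice of the general corpus rather than all of it.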

Xingcheng Yao, Yanan Zheng, Xiaocong Yang, Zhilin Yang • 2021

Related benchmarks

| Task                             | Dataset       | Metric   | Result | Rank |
|----------------------------------|---------------|----------|--------|------|
| Text Classification              | AGNews        | –        | –      | 119  |
| Sentiment Classification         | IMDB          | –        | –      | 41   |
| Relation Extraction              | ChemProt      | Micro F1 | 83.6   | 40   |
| Relation Extraction              | SciERC        | –        | –      | 28   |
| Text Classification              | HyperPartisan | F1 Score | 94.05  | 19   |
| Abstract Sentence Classification | RCT           | Micro F1 | 87.49  | 13   |
| Citation Intent Classification   | ACL-ARC       | Macro F1 | 74.18  | 13   |
| Text Classification              | Helpfulness   | F1 Score | 71.83  | 13   |
