
Universal Language Model Fine-tuning for Text Classification

About

Inductive transfer learning has greatly impacted computer vision, but existing approaches in NLP still require task-specific modifications and training from scratch. We propose Universal Language Model Fine-tuning (ULMFiT), an effective transfer learning method that can be applied to any task in NLP, and introduce techniques that are key for fine-tuning a language model. Our method significantly outperforms the state-of-the-art on six text classification tasks, reducing the error by 18-24% on the majority of datasets. Furthermore, with only 100 labeled examples, it matches the performance of training from scratch on 100x more data. We open-source our pretrained models and code.
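Among the fine-tuning techniques the paper introduces is the slanted triangular learning rate (STLR) schedule: a short linear warm-up followed by a long linear decay. A minimal sketch of that schedule, following the formulation in the paper (the default hyperparameter values here, `cut_frac=0.1`, `ratio=32`, `lr_max=0.01`, are the ones the paper suggests):

```python
import math

def stlr(t, T, cut_frac=0.1, ratio=32, lr_max=0.01):
    """Slanted triangular learning rate.

    t        -- current training iteration (0-based)
    T        -- total number of training iterations
    cut_frac -- fraction of iterations spent increasing the LR
    ratio    -- how much smaller the lowest LR is than lr_max
    lr_max   -- peak learning rate, reached at iteration cut
    """
    cut = math.floor(T * cut_frac)
    if t < cut:
        # linear warm-up from lr_max/ratio to lr_max
        p = t / cut
    else:
        # linear decay back down over the remaining iterations
        p = 1 - (t - cut) / (cut * (1 / cut_frac - 1))
    return lr_max * (1 + p * (ratio - 1)) / ratio
```

The schedule peaks at `lr_max` once `t` reaches `cut` and decays back to `lr_max / ratio` by the final iteration; in practice it would be plugged into the optimizer's per-step learning-rate update.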

Jeremy Howard, Sebastian Ruder · 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | GSM8K (test) | Accuracy | 63 | 797
Image Classification | DTD | Accuracy | 67.85 | 419
Image Classification | SVHN | Accuracy | 94 | 359
Image Classification | FGVCAircraft | Accuracy | 54.77 | 225
Text Classification | AG News (test) | Accuracy | 84.14 | 210
Text Classification | TREC | Accuracy | 96.4 | 179
Sentiment Classification | IMDB (test) | Error Rate | 4.6 | 144
Text Classification | Yahoo! Answers (test) | Clean Accuracy | 64.27 | 133
Image Classification | VTAB 1k (test) | -- | -- | 121
Text Classification | AGNews | Accuracy | 95 | 119

Showing 10 of 59 rows
