
LAMOL: LAnguage MOdeling for Lifelong Language Learning

About

Most research on lifelong learning applies to images or games, but not language. We present LAMOL, a simple yet effective method for lifelong language learning (LLL) based on language modeling. LAMOL replays pseudo-samples of previous tasks while requiring no extra memory or model capacity. Specifically, LAMOL is a language model that simultaneously learns to solve the tasks and generate training samples. When the model is trained for a new task, it generates pseudo-samples of previous tasks for training alongside data for the new task. The results show that LAMOL prevents catastrophic forgetting without any sign of intransigence and can perform five very different language tasks sequentially with only one model. Overall, LAMOL outperforms previous methods by a considerable margin and is only 2-3% worse than multitasking, which is usually considered the LLL upper bound. The source code is available at https://github.com/jojotenya/LAMOL.
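
As a rough sketch of the scheme described above, the loop below shows LAMOL-style pseudo-sample replay: before training on a new task, the language model is prompted with a generation token to produce pseudo-samples of earlier tasks, which are mixed into the new task's training data. The helper names (`lm_generate`, `lm_train_step`), the token string, and the sampling ratio here are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of LAMOL-style pseudo-sample replay (assumptions noted above).

GEN_TOKEN = "__gen__"   # special token prompting the LM to generate a sample (assumed name)
SAMPLE_RATIO = 0.20     # fraction of pseudo-samples mixed in per task (illustrative value)

def make_batches(examples, batch_size=8):
    """Yield fixed-size batches from a list of training examples."""
    for i in range(0, len(examples), batch_size):
        yield examples[i:i + batch_size]

def train_task(task_data, lm_generate, lm_train_step, is_first_task):
    """Train one task while replaying pseudo-samples of all earlier tasks."""
    # 1) Generate pseudo-samples of previous tasks from the current LM,
    #    using the generation token as the prompt. No stored data or extra
    #    model capacity is needed.
    pseudo = []
    if not is_first_task:
        n_pseudo = int(len(task_data) * SAMPLE_RATIO)
        pseudo = lm_generate(GEN_TOKEN, n_pseudo)

    # 2) Train on the union of real and pseudo data. In LAMOL each example
    #    serves both as a task target and as plain language-modeling text,
    #    so the LM keeps its ability to generate pseudo-samples later.
    for batch in make_batches(task_data + pseudo):
        lm_train_step(batch)
```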

Fan-Keng Sun, Cheng-Hao Ho, Hung-Yi Lee • 2019

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text Classification | Yahoo! Answers (test) | -- | 133 |
| Question Answering | SQuAD (test) | -- | 111 |
| Text Classification | AGNews, Amazon, DBPedia, Yahoo, and Yelp (test) | Exact Match (EM): 78.6 | 55 |
| Text Classification | Yelp (test) | -- | 55 |
| Lifelong Learning | SST, QA-SRL, and WOZ Permuted Sequences GPT-2 models (test) | Accuracy (SRL WOZ SST): 81.2 | 28 |
| Semantic Parsing | WikiSQL (test) | -- | 27 |
| Continual Learning | SelfRC, TweetQA, and SST sequence SQuAD format (test) | Average EM: 76.3 | 16 |
| Text Classification | AGNews, Yelp, Amazon, DBPedia, Yahoo (last epoch of last task) | EM Score: 77.2 | 15 |
| End-to-End Dialogue Modeling | ToDs (test) | Intent Accuracy: 2.68 | 11 |
| Multitask Natural Language Processing | DecaNLP SQuAD 2.0, WikiSQL, SST, QA-SRL, WOZ (test) | Average Score: 74.1 | 11 |
Showing 10 of 21 rows

Other info

Code: https://github.com/jojotenya/LAMOL
