
Recall and Learn: Fine-tuning Deep Pretrained Language Models with Less Forgetting

About

Deep pretrained language models have achieved great success through the paradigm of pretraining first and then fine-tuning. However, this sequential transfer learning paradigm often suffers from catastrophic forgetting and leads to sub-optimal performance. To fine-tune with less forgetting, we propose a recall-and-learn mechanism, which adopts the idea of multi-task learning and jointly learns pretraining tasks and downstream tasks. Specifically, we propose a Pretraining Simulation mechanism to recall the knowledge from pretraining tasks without the pretraining data, and an Objective Shifting mechanism to gradually focus the learning on downstream tasks. Experiments show that our method achieves state-of-the-art performance on the GLUE benchmark. Our method also enables BERT-base to outperform direct fine-tuning of BERT-large. Further, we provide the open-source RecAdam optimizer, which integrates the proposed mechanisms into the Adam optimizer, to facilitate the NLP community.
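The two mechanisms can be sketched as a combined training objective: Pretraining Simulation is a quadratic penalty that pulls the fine-tuned weights back toward the pretrained weights (recalling pretraining knowledge without the pretraining data), and Objective Shifting blends this penalty with the downstream task loss via a sigmoid annealing coefficient. The sketch below is illustrative, not the released RecAdam implementation; the hyperparameter names `gamma`, `k`, and `t0` follow the paper's notation, but the default values here are arbitrary.

```python
import math

def objective_shifting_coef(step, k=0.1, t0=100):
    """Objective Shifting: lambda(t) = sigmoid(k * (t - t0)).

    Near 0 early in training (focus on recalling pretraining knowledge),
    approaches 1 later (focus on learning the downstream task).
    """
    return 1.0 / (1.0 + math.exp(-k * (step - t0)))

def recall_and_learn_loss(task_loss, params, pretrained_params, step,
                          gamma=1.0, k=0.1, t0=100):
    """Blend the downstream task loss with a Pretraining Simulation penalty."""
    # Pretraining Simulation: quadratic penalty keeping weights near
    # their pretrained values, standing in for the pretraining objective.
    penalty = sum((p - p0) ** 2 for p, p0 in zip(params, pretrained_params))
    lam = objective_shifting_coef(step, k, t0)
    # Objective Shifting: gradually move weight from recalling to learning.
    return lam * task_loss + (1.0 - lam) * (gamma / 2.0) * penalty
```

In practice RecAdam folds the annealed quadratic penalty into the Adam update itself rather than into the loss, but the annealing schedule is the same: the coefficient is 0.5 at `step == t0` and rises toward 1 as training proceeds.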

Sanyuan Chen, Yutai Hou, Yiming Cui, Wanxiang Che, Ting Liu, Xiangzhan Yu • 2020

Related benchmarks

Task                           Dataset                           Result                       Rank
Multimodal Knowledge Editing   MMQAKE Rephrased Image            M-Acc 5.71                   18
Multimodal Knowledge Editing   MMQAKE Original Image             M-Acc 4.69                   18
Temporal Knowledge Probing     TemporalWiki TWiki-Probes-1112    Accuracy (Unchanged) 9.579   11
Temporal Knowledge Probing     TemporalWiki TWiki-Probes-0910    Score (Unchanged) 9.514      11
Temporal Knowledge Probing     TemporalWiki TWiki-Probes-1011    Accuracy (Unchanged) 8.992   11
Continual Knowledge Learning   LAMA-CKL Llama2-7B based (test)   Top Accuracy 10              6
Continual Knowledge Learning   LAMA-CKL (test)                   Top Acc 6                    6
