
Overcoming Catastrophic Forgetting by Exemplar Selection in Task-oriented Dialogue System

About

Intelligent task-oriented dialogue systems (ToDs) are expected to continuously acquire new knowledge, a setting known as Continual Learning (CL), which is crucial for meeting ever-changing user needs. However, catastrophic forgetting dramatically degrades model performance over a long stream of tasks. In this paper, we aim to overcome the forgetting problem in ToDs and propose HESIT, a method with a hyper-gradient-based exemplar strategy that samples influential exemplars for periodic retraining. Instead of unilaterally observing data or models, HESIT adopts a principled exemplar selection strategy that considers the general performance of the trained model when selecting exemplars for each task domain. Specifically, HESIT analyzes the influence of training data by tracing their hyper-gradients during optimization. Furthermore, HESIT avoids estimating the Hessian, making it compatible with ToDs built on large pre-trained models. Experimental results show that HESIT effectively alleviates catastrophic forgetting through exemplar selection and achieves state-of-the-art performance on the largest CL benchmark for ToDs across all metrics.
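To make the core idea concrete, the following is a minimal illustrative sketch (not the paper's exact HESIT algorithm) of Hessian-free, gradient-tracing influence estimation for exemplar selection: during SGD, each training example's influence is scored by how much its update step moves the parameters along the direction that reduces a held-out validation loss, and the top-scoring examples are kept as exemplars. The toy least-squares model, learning rate, and selection size are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch (not the exact HESIT algorithm): a Hessian-free,
# gradient-tracing influence estimate for exemplar selection, in the
# spirit of tracing each example's hyper-gradient during optimization.
# Toy model: least-squares regression y ≈ x @ w, trained with plain SGD.

rng = np.random.default_rng(0)
n, d = 32, 4
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=n)

# Held-out probe set, standing in for the "general performance of the
# trained model" that the abstract says guides exemplar selection.
Xv = rng.normal(size=(8, d))
yv = Xv @ w_true

def grad(w, x, t):
    # Gradient of the per-example loss 0.5 * (x·w - t)^2 w.r.t. w.
    return (x @ w - t) * x

w = np.zeros(d)
lr = 0.05
influence = np.zeros(n)  # accumulated influence score per training example

for epoch in range(20):
    for i in rng.permutation(n):
        g_i = grad(w, X[i], y[i])
        # Average validation gradient at the current parameters.
        g_val = np.mean([grad(w, Xv[j], yv[j]) for j in range(len(yv))], axis=0)
        # First-order (Hessian-free) influence: how far this SGD step
        # moves the parameters along the validation-loss-reducing direction.
        influence[i] += lr * (g_i @ g_val)
        w -= lr * g_i  # SGD update

# Keep the top-k most influential examples as exemplars for retraining.
k = 5
exemplars = np.argsort(-influence)[:k]
print("exemplar indices:", sorted(exemplars.tolist()))
```

The key property mirrored here is that no second-order information is needed: influence is accumulated from first-order gradient inner products along the training trajectory, which is what keeps this style of selection tractable for large pre-trained backbones.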

Chen Chen, Ruizhe Li, Yuchen Hu, Yuanyuan Chen, Chengwei Qin, Qiang Zhang• 2024

Related benchmarks

Task                         | Dataset                               | Result                  | Rank
Dialogue State Tracking      | ToDs benchmark, GPT-2 backbone (test) | JGA: 40.04              | 11
End-to-End Dialogue Modeling | ToDs (test)                           | Intent Accuracy: 83.46  | 11
Intent Classification        | ToDs benchmark, GPT-2 backbone (test) | Accuracy: 0.8271        | 11
Natural Language Generation  | ToDs benchmark, GPT-2 backbone (test) | EER: 5.11               | 11
