
Sequence-Level Knowledge Distillation for Class-Incremental End-to-End Spoken Language Understanding

About

The ability to learn new concepts sequentially is a major weakness of modern neural networks, which hinders their use in non-stationary environments. Their propensity to fit the current data distribution at the expense of previously acquired knowledge leads to the catastrophic forgetting issue. In this work we tackle the problem of Spoken Language Understanding (SLU) in a continual learning setting. We first define a class-incremental scenario for the SLURP dataset. Then, we propose three knowledge distillation (KD) approaches to mitigate forgetting for a sequence-to-sequence transformer model: the first KD method is applied to the encoder output (audio-KD), and the other two work on the decoder output, either directly on the token-level (tok-KD) or on the sequence-level (seq-KD) distributions. We show that seq-KD substantially improves all the performance metrics, and its combination with audio-KD further decreases the average WER and enhances the entity prediction metric.
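As a rough illustration of the token-level distillation idea described above, the sketch below computes a temperature-scaled KL divergence between the token distributions of a frozen previous-task model (teacher) and the current model (student). This is a minimal NumPy sketch of the general tok-KD recipe, not the paper's actual implementation; the function names, temperature value, and the Hinton-style T^2 scaling are assumptions.

```python
import numpy as np

def softmax(logits, T=1.0):
    """Temperature-scaled softmax over the last axis."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def token_kd_loss(teacher_logits, student_logits, T=2.0):
    """Token-level KD: KL(teacher || student), averaged over token positions.

    teacher_logits, student_logits: arrays of shape (num_tokens, vocab_size).
    The T*T factor follows the common distillation convention (assumption).
    """
    p = softmax(teacher_logits, T)               # previous-model distribution
    log_q = np.log(softmax(student_logits, T))   # current-model log-probs
    log_p = np.log(p)
    kl = (p * (log_p - log_q)).sum(axis=-1)      # per-token KL divergence
    return float(kl.mean()) * T * T

# Toy example: 3 decoder positions, vocabulary of 5 tokens.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(3, 5))
student = rng.normal(size=(3, 5))
print(token_kd_loss(teacher, teacher))  # identical logits -> 0
print(token_kd_loss(teacher, student))  # mismatched logits -> positive
```

In the paper's setting the teacher is the model from the previous incremental step, so minimizing this term alongside the usual cross-entropy loss discourages the decoder from drifting away from distributions it produced for old classes.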

Umberto Cappellazzo, Muqiao Yang, Daniele Falavigna, Alessio Brutti • 2023

Related benchmarks

Task | Dataset | Result | Rank
Spoken Language Understanding | FSC (test) | Intent Accuracy: 73.65 | 16
Spoken Language Understanding | SLURP 3 tasks | Average Accuracy: 74.28 | 9
Spoken Language Understanding | FSC 3 tasks | Average Accuracy: 84.79 | 9
Spoken Language Understanding | SLURP 6 tasks | Average Accuracy: 69.91 | 9
