
Fine-tuning Pre-trained Language Models for Few-shot Intent Detection: Supervised Pre-training and Isotropization

About

It is challenging to train a good intent classifier for a task-oriented dialogue system with only a few annotations. Recent studies have shown that fine-tuning pre-trained language models with a small set of labeled utterances from public benchmarks in a supervised manner is extremely helpful. However, we find that supervised pre-training yields an anisotropic feature space, which may suppress the expressive power of the semantic representations. Inspired by recent research on isotropization, we propose to improve supervised pre-training by regularizing the feature space towards isotropy. We propose two regularizers, based on contrastive learning and the correlation matrix respectively, and demonstrate their effectiveness through extensive experiments. Our main finding is that regularizing supervised pre-training with isotropization is a promising way to further improve the performance of few-shot intent detection. The source code can be found at https://github.com/fanolabs/isoIntentBert-main.
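To make the idea concrete, below is a minimal PyTorch sketch of the two kinds of isotropy regularizers the abstract describes: one that pushes the batch correlation matrix of utterance embeddings toward the identity, and one contrastive term in the SimCSE style, where two dropout views of the same utterance serve as a positive pair. The function names (correlation_regularizer, contrastive_regularizer), the weighting factor lam, and the exact loss forms are illustrative assumptions, not the authors' verbatim implementation; see the linked repository for their code.

import torch
import torch.nn.functional as F

def correlation_regularizer(features: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between the feature correlation matrix and identity.

    features: (batch_size, hidden_dim) utterance embeddings, e.g. the
    [CLS] outputs of a BERT encoder during supervised pre-training.
    """
    # Center and scale each dimension so the Gram matrix becomes a
    # Pearson correlation matrix.
    z = features - features.mean(dim=0, keepdim=True)
    z = z / (z.std(dim=0, keepdim=True) + 1e-8)
    corr = (z.T @ z) / (z.size(0) - 1)          # (d, d) correlation matrix
    eye = torch.eye(corr.size(0), device=corr.device)
    # Mean squared deviation from perfect isotropy (identity correlations).
    return ((corr - eye) ** 2).mean()

def contrastive_regularizer(z1: torch.Tensor, z2: torch.Tensor,
                            temperature: float = 0.05) -> torch.Tensor:
    """InfoNCE over two dropout views of the same batch of utterances.

    z1, z2: (batch_size, hidden_dim) embeddings from two stochastic
    forward passes; row i of z1 and row i of z2 form a positive pair.
    """
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = (z1 @ z2.T) / temperature          # (batch, batch) similarities
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

# Joint pre-training objective, with lam a tunable trade-off weight:
# loss = ce_loss + lam * correlation_regularizer(cls_embeddings)

Either regularizer is simply added to the supervised cross-entropy loss during pre-training, so the encoder learns discriminative intent features while the feature space is kept close to isotropic.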

Haode Zhang, Haowen Liang, Yuwei Zhang, Liming Zhan, Xiaolei Lu, Albert Y.S. Lam, Xiao-Ming Wu • 2022

Related benchmarks

Task                    Dataset                     Accuracy (%)   Rank
Intent Classification   MCID 10-shot                83.2           23
Intent Classification   HINT3 10-shot               69.23          23
Intent Classification   HINT3 5-shot                60.33          23
Intent Classification   BANKING77 10-shot           84.49          20
Intent Classification   HWU64 10-shot               84.15          20
Intent Classification   BANKING77 5-shot (test)     71.78          20
Intent Classification   HWU64 5-shot (test)         78.26          12
Intent Classification   HWU64 10-shot (test)        83.7           12
Intent Classification   BANKING77 10-shot (test)    81.3           12
Intent Classification   MCID 5-shot (test)          78.28          12

Showing 10 of 13 rows.
