
BERT for Joint Intent Classification and Slot Filling

About

Intent classification and slot filling are two essential tasks for natural language understanding. They often suffer from small-scale human-labeled training data, resulting in poor generalization capability, especially for rare words. Recently, a new language representation model, BERT (Bidirectional Encoder Representations from Transformers), has enabled pre-training deep bidirectional representations on large-scale unlabeled corpora, yielding state-of-the-art models for a wide variety of natural language processing tasks after simple fine-tuning. However, there has been little effort to explore BERT for natural language understanding. In this work, we propose a joint intent classification and slot filling model based on BERT. Experimental results demonstrate that our proposed model achieves significant improvements in intent classification accuracy, slot filling F1, and sentence-level semantic frame accuracy on several public benchmark datasets, compared to attention-based recurrent neural network models and slot-gated models.
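The joint setup described above can be sketched numerically: a shared encoder produces one vector per token, the first ([CLS]-style) vector feeds an intent classifier, the remaining token vectors feed a per-token slot classifier, and the training objective sums both losses. The snippet below is a minimal numpy sketch under assumed toy dimensions; the random matrix `H` stands in for BERT's contextual outputs, and the label indices are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- stand-ins for BERT's hidden size and label sets.
seq_len, hidden = 6, 8        # tokens in the utterance, encoder width
n_intents, n_slots = 3, 5     # intent labels, slot (BIO) labels

# Mock encoder output: in the real model this comes from BERT;
# H[0] plays the role of the [CLS] vector, H[1:] the per-token vectors.
H = rng.normal(size=(seq_len, hidden))

# Two classification heads share the same encoder (the "joint" part).
W_intent = rng.normal(size=(hidden, n_intents))
W_slot = rng.normal(size=(hidden, n_slots))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

intent_probs = softmax(H[0] @ W_intent)   # one distribution per utterance
slot_probs = softmax(H[1:] @ W_slot)      # one distribution per token

# Joint objective: intent NLL plus the sum of per-token slot NLLs.
intent_gold = 1                           # illustrative gold labels
slot_gold = [0, 2, 2, 4, 0]
loss = -np.log(intent_probs[intent_gold])
loss -= sum(np.log(slot_probs[t, s]) for t, s in enumerate(slot_gold))
print(float(loss) > 0.0)
```

Because both heads backpropagate into the same encoder, fine-tuning optimizes the two tasks jointly rather than in isolation, which is the source of the sentence-level semantic frame accuracy gains reported below.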

Qian Chen, Zhu Zhuo, Wen Wang • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Joint Multiple Intent Detection and Slot Filling | MixSNIPS (test) | Slot F1 | 95.9 | 57 |
| Joint Multiple Intent Detection and Slot Filling | MixATIS (test) | F1 Score (Slot) | 86.3 | 42 |
| Intent Classification | Snips (test) | Accuracy | 98.6 | 40 |
| Natural Language Understanding | Snips (test) | Intent Acc | 98.6 | 27 |
| Intent Detection | ATIS | ID Accuracy | 97.5 | 27 |
| Slot Filling | Snips (test) | F1 Score | 0.97 | 25 |
| Spoken Language Understanding | ATIS (test) | Slot F1 | 96.1 | 18 |
| Spoken Language Understanding | SNIPS | Slot F1 | 97 | 15 |
| Slot Filling | ATIS | F1 Score | 96.1 | 14 |
| Natural Language Understanding | ATIS (test) | Intent Accuracy | 97.9 | 12 |
Showing 10 of 11 benchmark rows.
