
CM-Net: A Novel Collaborative Memory Network for Spoken Language Understanding

About

Spoken Language Understanding (SLU) mainly involves two tasks, intent detection and slot filling, which are generally modeled jointly in existing work. However, most existing models fail to fully exploit the co-occurrence relations between slots and intents, which restricts their potential performance. To address this issue, we propose a novel Collaborative Memory Network (CM-Net) built from a well-designed block, named the CM-block. The CM-block first captures slot-specific and intent-specific features from memories in a collaborative manner, and then uses these enriched features to enhance local context representations, based on which the sequential information flow yields more specific (slot and intent) global utterance representations. By stacking multiple CM-blocks, CM-Net alternately exchanges information among the specific memories, local contexts and the global utterance, so that each is incrementally enriched. We evaluate CM-Net on two standard benchmarks (ATIS and SNIPS) and a self-collected corpus (CAIS). Experimental results show that CM-Net achieves state-of-the-art results on ATIS and SNIPS on most criteria, and significantly outperforms the baseline models on CAIS. Additionally, we make the CAIS dataset publicly available to the research community.
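The core idea of the CM-block described above can be illustrated with a toy numpy sketch: a token's hidden state queries a slot memory and an intent memory via attention, and the retrieved features are fused back into the local representation; stacking such blocks alternates retrieval and enrichment. This is a simplified illustration under assumptions, not the paper's actual architecture: the function names (`attend`, `cm_block`), the dot-product attention, and the summation fusion are assumptions made here for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend(query, memory):
    """Retrieve a feature vector from a memory via dot-product attention.
    query: (dim,), memory: (num_entries, dim) -> returns (dim,)."""
    scores = softmax(memory @ query)   # attention weights over memory entries
    return scores @ memory             # weighted sum of memory entries

def cm_block(hidden, slot_memory, intent_memory):
    """One simplified CM-block step for a single token representation:
    collaboratively retrieve slot- and intent-specific features from the
    two memories, then enrich the local hidden state with both
    (fusion by summation is an assumption of this sketch)."""
    slot_feat = attend(hidden, slot_memory)
    intent_feat = attend(hidden, intent_memory)
    return hidden + slot_feat + intent_feat

# Toy setup: 4 slot-memory entries, 3 intent-memory entries, dim 8.
rng = np.random.default_rng(0)
dim = 8
slot_memory = rng.standard_normal((4, dim))
intent_memory = rng.standard_normal((3, dim))
hidden = rng.standard_normal(dim)

# Stacking blocks alternately exchanges information, as the abstract describes.
out = hidden
for _ in range(2):
    out = cm_block(out, slot_memory, intent_memory)
print(out.shape)  # (8,)
```

In the paper, the retrieved features further feed into slot- and intent-specific global utterance representations; this sketch only shows the memory-retrieval and local-enrichment step.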

Yijin Liu, Fandong Meng, Jinchao Zhang, Jie Zhou, Yufeng Chen, Jinan Xu • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Intent Classification | Snips (test) | Accuracy | 99.32 | 40 |
| Natural Language Understanding | Snips (test) | Intent Acc | 99.29 | 27 |
| Slot Filling | Snips (test) | F1 Score | 0.9731 | 25 |
| Spoken Language Understanding | ATIS (test) | Slot F1 | 96.2 | 18 |
| Intent Detection | CAIS | Accuracy | 94.56 | 3 |
| Slot Filling | CAIS | F1 Score | 86.16 | 3 |
