
A Two-Stage Prediction-Aware Contrastive Learning Framework for Multi-Intent NLU

About

Multi-intent natural language understanding (NLU) presents a formidable challenge due to the model confusion arising from multiple intents within a single utterance. While previous works train the model contrastively to increase the margin between different multi-intent labels, they are less suited to the nuances of multi-intent NLU: they ignore the rich information carried by shared intents, which is beneficial for constructing a better embedding space, especially in low-data scenarios. We introduce a two-stage Prediction-Aware Contrastive Learning (PACL) framework for multi-intent NLU to harness this valuable knowledge. Our approach capitalizes on shared intent information by integrating word-level pre-training and prediction-aware contrastive fine-tuning. We construct a pre-training dataset using a word-level data augmentation strategy. Subsequently, our framework dynamically assigns roles to instances during contrastive fine-tuning and introduces a prediction-aware contrastive loss to maximize the impact of contrastive learning. We present experimental results and empirical analysis on three widely used datasets, demonstrating that our method surpasses three prominent baselines in both low-data and full-data scenarios.
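The abstract does not give the exact form of the prediction-aware contrastive loss, but the core idea it describes, exploiting overlap between shared intents rather than treating only exact label matches as positives, can be illustrated with a minimal sketch. Everything below (the Jaccard overlap weighting, the function names, the temperature value) is a hypothetical illustration, not the paper's actual implementation:

```python
import math

def intent_overlap(a, b):
    # Hypothetical positive-pair weight: Jaccard overlap between
    # two multi-intent label sets (1.0 = identical, 0.0 = disjoint).
    union = len(a | b)
    return len(a & b) / union if union else 0.0

def cosine(u, v):
    dot = sum(x * y for x, y in zip(u, v))
    nu = math.sqrt(sum(x * x for x in u))
    nv = math.sqrt(sum(x * x for x in v))
    return dot / (nu * nv)

def shared_intent_contrastive_loss(embs, labels, temp=0.1):
    """Sketch of a contrastive loss weighted by shared-intent overlap.

    Unlike a standard supervised contrastive loss, which only pulls
    together instances with identical label sets, each pair's
    attraction is scaled by how many intents the two utterances share.
    """
    n = len(embs)
    total, count = 0.0, 0
    for i in range(n):
        # Softmax denominator over all other instances in the batch.
        denom = sum(math.exp(cosine(embs[i], embs[j]) / temp)
                    for j in range(n) if j != i)
        for j in range(n):
            if j == i:
                continue
            w = intent_overlap(labels[i], labels[j])
            if w > 0:  # any shared intent makes the pair a soft positive
                num = math.exp(cosine(embs[i], embs[j]) / temp)
                total += -w * math.log(num / denom)
                count += 1
    return total / max(count, 1)
```

In this sketch, two utterances sharing one of two intents contribute with weight 0.5 rather than being treated as pure negatives, which is one plausible way to realize the "rich information between the shared intents" the abstract refers to.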

Guanhua Chen, Yutong Yao, Derek F. Wong, Lidia S. Chao • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Joint Multiple Intent Detection and Slot Filling | MixSNIPS (test) | Slot F1 | 96.8 | 57
Multi-intent Natural Language Understanding | MixATIS (test) | Overall Accuracy | 50.4 | 16
Multi-intent Natural Language Understanding | StanfordLU (test) | IC Acc | 89.1 | 8
