
FutureTOD: Teaching Future Knowledge to Pre-trained Language Model for Task-Oriented Dialogue

About

Pre-trained language models based on general text have achieved huge success in NLP. However, the intrinsic difference in linguistic patterns between general text and task-oriented dialogues makes existing pre-trained language models less useful in practice. Current dialogue pre-training methods rely on a contrastive framework and face the challenge of selecting both true positives and hard negatives. In this paper, we propose a novel dialogue pre-training model, FutureTOD, which distills future knowledge into the representation of the previous dialogue context using a self-training framework. Our intuition is that a good dialogue representation should both capture local context information and predict future information. Extensive experiments on diverse downstream dialogue tasks demonstrate the effectiveness of our model, especially its generalization, robustness, and ability to learn discriminative dialogue representations.
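To make the distillation idea concrete, here is a minimal sketch of one training step: a student encoder sees only the dialogue history, a frozen teacher encoder sees the history plus the next turn, and the student is pulled toward the teacher's future-aware representation. The choice of bert-base-uncased, the [CLS] pooling, the MSE loss, and the example dialogue are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch of future-knowledge distillation for dialogue pre-training
# (assumed details: BERT backbone, [CLS] pooling, MSE distillation loss).
import copy
import torch
import torch.nn.functional as F
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
student = AutoModel.from_pretrained("bert-base-uncased")
teacher = copy.deepcopy(student)   # teacher starts as a copy of the student
for p in teacher.parameters():     # teacher only provides targets
    p.requires_grad = False

def encode(model, text):
    """Return the [CLS] representation of `text`."""
    batch = tokenizer(text, return_tensors="pt", truncation=True)
    return model(**batch).last_hidden_state[:, 0]

# Hypothetical dialogue: the student sees the history, the teacher also
# sees the future turn.
history = ("user: I need a cheap hotel in the centre. [SEP] "
           "system: Alexander B&B is cheap and central.")
future = "user: Great, book it for two nights."

student_repr = encode(student, history)  # context only
with torch.no_grad():
    teacher_repr = encode(teacher, history + " [SEP] " + future)

# Distillation loss: pull the context-only representation toward the
# future-aware one, so the student learns to anticipate what comes next.
loss = F.mse_loss(student_repr, teacher_repr)
loss.backward()
```

In a full self-training loop, the teacher would be refreshed from the latest student at intervals so its future-aware targets keep improving; that schedule is omitted here for brevity.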

Weihao Zeng, Keqing He, Yejie Wang, Chen Zeng, Jingang Wang, Yunsen Xian, Weiran Xu • 2023

Related benchmarks

Task                     | Dataset             | Result                 | Rank
Intent Recognition       | OOS (test)          | Overall Accuracy: 87.2 | 19
Response Selection       | MWOZ 2.1            | Accuracy (1/100): 68.5 | 17
Dialogue State Tracking  | MultiWOZ 2.1 (5%)   | Joint Goal Acc: 29.1   | 11
Dialogue State Tracking  | MultiWOZ 2.1 (1%)   | Joint Goal Acc: 9.9    | 10
Dialogue Act Prediction  | MWOZ (Full Data)    | Micro-F1: 92           | 7
Dialogue Act Prediction  | DSTC2               | Micro-F1: 94.6         | 7
Dialogue Act Prediction  | DSTC2 (1% Data)     | Micro-F1: 83.7         | 6
Dialogue Act Prediction  | MWOZ 2.1 (10% Data) | Micro-F1: 91           | 6
Dialogue Act Prediction  | DSTC2 (10% Data)    | Micro-F1: 93.6         | 6
Dialogue Act Prediction  | MWOZ (1% Data)      | Micro-F1: 87.9         | 6
(10 of 13 benchmark rows shown.)

Other info

Code
