TOD-BERT: Pre-trained Natural Language Understanding for Task-Oriented Dialogue

About

The underlying difference in linguistic patterns between general text and task-oriented dialogue makes existing pre-trained language models less useful in practice. In this work, we unify nine human-human, multi-turn task-oriented dialogue datasets for language modeling. To better model dialogue behavior during pre-training, we incorporate user and system tokens into the masked language modeling. We propose a contrastive objective function to simulate the response selection task. Our pre-trained task-oriented dialogue BERT (TOD-BERT) outperforms strong baselines like BERT on four downstream task-oriented dialogue applications: intent recognition, dialogue state tracking, dialogue act prediction, and response selection. We also show that TOD-BERT has stronger few-shot ability, which can mitigate the data scarcity problem in task-oriented dialogue.

Chien-Sheng Wu, Steven Hoi, Richard Socher, Caiming Xiong • 2020
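
As a usage sketch, the speaker-aware input format from the abstract can be reproduced by prefixing each turn with the [SYS] and [USR] special tokens the paper introduces. The checkpoint identifier below is the jointly trained (MLM + contrastive) model the authors released on the Hugging Face hub; treat the exact name as an assumption to verify against the official repository.

```python
# Minimal sketch: encode a dialogue with TOD-BERT, assuming the authors'
# public Hugging Face checkpoint "TODBERT/TOD-BERT-JNT-V1".
import torch
from transformers import AutoTokenizer, AutoModel

MODEL = "TODBERT/TOD-BERT-JNT-V1"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModel.from_pretrained(MODEL)

# Turns are flattened into one sequence; [SYS] and [USR] mark speaker
# roles, mirroring the masked-language-modeling input used in pre-training.
dialogue = ("[CLS] [SYS] Hello, how can I help you today? "
            "[USR] Find me a cheap restaurant in the north of town.")

inputs = tokenizer(dialogue, add_special_tokens=False, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# The first-token ([CLS]) hidden state serves as the dialogue representation
# for downstream tasks such as intent recognition or response selection.
dialogue_vec = outputs.last_hidden_state[:, 0, :]  # shape: (1, hidden_size)
```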

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Dialog State Tracking | MultiWOZ 2.1 (test) | Joint Goal Accuracy | 48 | 88 |
| Intent Classification | HINT3 10-shot | Accuracy | 66.42 | 23 |
| Intent Classification | MCID 10-shot | Accuracy | 74.66 | 23 |
| Intent Classification | HINT3 5-shot | Accuracy | 56.33 | 23 |
| Intent Classification | BANKING77 5-shot (test) | Accuracy | 67.69 | 20 |
| Intent Recognition | OOS (test) | Overall Accuracy | 86.6 | 19 |
| Response Selection | MWOZ 2.1 | Accuracy (1/100) | 65.8 | 17 |
| Intent Classification | BANKING77 10-shot (test) | Accuracy | 79.71 | 12 |
| Intent Classification | HWU64 10-shot (test) | Accuracy | 82.15 | 12 |
| Intent Classification | HWU64 5-shot (test) | Accuracy | 74.83 | 12 |

Showing 10 of 22 rows.
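
The Response Selection row above is scored by ranking the gold response among 100 candidates ("Accuracy (1/100)"), which is exactly what the contrastive objective from the abstract trains for. Below is a minimal sketch of that objective with in-batch negatives; the function name and shapes are illustrative, not the authors' code.

```python
# Sketch of a contrastive response-selection objective with in-batch
# negatives: each context's gold response sits on the diagonal of the
# context-response similarity matrix, and all other rows act as negatives.
import torch
import torch.nn.functional as F

def contrastive_response_loss(ctx_vecs: torch.Tensor,
                              resp_vecs: torch.Tensor) -> torch.Tensor:
    """ctx_vecs, resp_vecs: (batch, hidden) [CLS] encodings of dialogue
    contexts and their gold next responses, paired by batch index."""
    logits = ctx_vecs @ resp_vecs.t()                          # (batch, batch)
    labels = torch.arange(logits.size(0), device=logits.device)  # diagonal = gold
    return F.cross_entropy(logits, labels)

# Example: with a batch of 32 context/response vectors from the encoder,
#   loss = contrastive_response_loss(ctx, resp); loss.backward()
# At test time, candidates are scored the same way and the gold response
# must rank first among the 100 candidates.
```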
