
Hello, It's GPT-2 -- How Can I Help You? Towards the Use of Pretrained Language Models for Task-Oriented Dialogue Systems

About

Data scarcity is a long-standing and crucial challenge that hinders quick development of task-oriented dialogue systems across multiple domains: task-oriented dialogue models are expected to learn grammar, syntax, dialogue reasoning, decision making, and language generation from absurdly small amounts of task-specific data. In this paper, we demonstrate that recent progress in language modeling pre-training and transfer learning shows promise to overcome this problem. We propose a task-oriented dialogue model that operates solely on text input: it effectively bypasses explicit policy and language generation modules. Building on top of the TransferTransfo framework (Wolf et al., 2019) and generative model pre-training (Radford et al., 2019), we validate the approach on complex multi-domain task-oriented dialogues from the MultiWOZ dataset. Our automatic and human evaluations show that the proposed model is on par with a strong task-specific neural baseline. In the long run, our approach holds promise to mitigate the data scarcity problem, and to support the construction of more engaging and more eloquent task-oriented conversational agents.
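The key design choice described above is that the whole dialogue pipeline stays in token space: dialogue history, belief state, and database results are flattened into one text sequence, and a pretrained causal LM (e.g. GPT-2) simply continues it with the system response, with no separate policy or NLG modules. A minimal sketch of such a serialization step is shown below; the field names and special tokens are illustrative assumptions, not the authors' exact format.

```python
# Illustrative sketch (not the paper's code): flatten a task-oriented
# dialogue turn into one plain-text sequence, so a pretrained causal LM
# can generate the system response directly from text input alone.

def serialize_turn(history, belief_state, db_result):
    """Render dialogue context, belief state, and DB result as one string."""
    parts = []
    for speaker, utterance in history:
        parts.append(f"<{speaker}> {utterance}")
    # Belief state and DB results are rendered as text too, keeping the
    # entire pipeline in token space with no structured policy module.
    slots = ", ".join(f"{k}={v}" for k, v in belief_state.items())
    parts.append(f"<belief> {slots}")
    parts.append(f"<db> {db_result}")
    parts.append("<system>")  # the language model continues from here
    return " ".join(parts)

context = serialize_turn(
    history=[("user", "I need a cheap hotel in the north.")],
    belief_state={"hotel-pricerange": "cheap", "hotel-area": "north"},
    db_result="3 matches",
)
print(context)
```

In the actual system, `context` would be tokenized and fed to a fine-tuned GPT-2, whose sampled continuation after the `<system>` marker serves as the agent's reply.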

Paweł Budzianowski, Ivan Vulić • 2019

Related benchmarks

Task                              | Dataset                               | Metric            | Result | Rank
End-to-end task-oriented dialogue | MultiWOZ (test)                       | Task Success Rate | 61.36  | 68
Task-oriented dialogue            | CamRest676 end-to-end modeling (test) | Task Success Rate | 86.2   | 18
