
Latent Intention Dialogue Models

About

Developing a dialogue agent that is capable of making autonomous decisions and communicating in natural language is one of the long-term goals of machine learning research. Traditional approaches either rely on hand-crafting a small state-action set for applying reinforcement learning, which does not scale, or on constructing deterministic models for learning dialogue sentences, which fail to capture natural conversational variability. In this paper, we propose a Latent Intention Dialogue Model (LIDM) that employs a discrete latent variable to learn underlying dialogue intentions in the framework of neural variational inference. In a goal-oriented dialogue scenario, these latent intentions can be interpreted as actions guiding the generation of machine responses, which can be further refined autonomously by reinforcement learning. The experimental evaluation of LIDM shows that the model outperforms published benchmarks on both corpus-based and human evaluation, demonstrating the effectiveness of discrete latent variable models for learning goal-oriented dialogues.
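The core mechanism the abstract describes, sampling a discrete latent intention and refining it with reinforcement learning, can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the intention count `K`, the fixed `logits` standing in for an inference-network output, and the reward/baseline values are all illustrative assumptions; it only shows how a categorical intention variable is sampled and how a REINFORCE-style gradient over its distribution would be formed.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Hypothetical output of an inference network: logits over K discrete
# dialogue intentions, given the current dialogue state (fixed here
# purely for illustration).
K = 4
logits = np.array([0.5, 1.2, -0.3, 0.1])
pi = softmax(logits)                 # q(z | dialogue state)

# Sample a discrete latent intention z ~ Categorical(pi); in the model
# this intention would then condition the response generator.
z = rng.choice(K, p=pi)

# REINFORCE-style refinement: gradient of expected reward w.r.t. the
# logits is (reward - baseline) * d log pi[z] / d logits, which for a
# softmax parameterisation is (one_hot(z) - pi) scaled by the advantage.
reward, baseline = 1.0, 0.5          # illustrative values
one_hot = np.eye(K)[z]
grad_logits = (reward - baseline) * (one_hot - pi)
```

A positive advantage pushes probability mass toward the sampled intention; the gradient components sum to zero because the softmax probabilities are constrained to sum to one.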

Tsung-Hsien Wen, Yishu Miao, Phil Blunsom, Steve Young • 2017

Related benchmarks

Task                | Dataset    | Result                   | Rank
Belief Tracking     | CamRest676 | Joint Goal Accuracy 84.2 | 6
Response Generation | CamRest676 | Match Acc 91.2           | 6
Response Generation | In-Car     | Match Score 0.721        | 5
