Building End-To-End Dialogue Systems Using Generative Hierarchical Neural Network Models
About
We investigate the task of building open-domain conversational dialogue systems from large dialogue corpora using generative models. Generative models produce system responses that are generated autonomously, word by word, opening up the possibility of realistic, flexible interactions. In support of this goal, we extend the recently proposed hierarchical recurrent encoder-decoder neural network to the dialogue domain, and demonstrate that this model is competitive with state-of-the-art neural language models and back-off n-gram models. We investigate the limitations of this and similar approaches, and show how performance can be improved by bootstrapping the learning from a larger question-answer pair corpus and from pretrained word embeddings.
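The hierarchical recurrent encoder-decoder (HRED) described above stacks two levels of recurrence: an utterance-level RNN encodes each turn into a fixed vector, a context-level RNN summarizes the sequence of turn vectors, and a decoder RNN generates the response word by word conditioned on the context state. A minimal sketch of this structure, assuming PyTorch; the layer sizes, GRU cells, and class/variable names here are illustrative choices, not the paper's exact configuration:

```python
# Minimal HRED sketch (illustrative hyperparameters, not the paper's).
import torch
import torch.nn as nn

class HRED(nn.Module):
    def __init__(self, vocab_size=1000, emb=64, utt_hid=128, ctx_hid=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb)
        # Utterance-level encoder: runs over the tokens of each turn.
        self.utt_enc = nn.GRU(emb, utt_hid, batch_first=True)
        # Context-level encoder: runs over the per-turn summary vectors.
        self.ctx_enc = nn.GRU(utt_hid, ctx_hid, batch_first=True)
        # Decoder: generates the response conditioned on the context state.
        self.dec = nn.GRU(emb, ctx_hid, batch_first=True)
        self.out = nn.Linear(ctx_hid, vocab_size)

    def forward(self, dialogue, target):
        # dialogue: (batch, n_turns, turn_len) token ids
        # target:   (batch, tgt_len) token ids of the response
        b, n, t = dialogue.shape
        # Encode each turn independently by flattening the turn dimension.
        flat = self.embed(dialogue.view(b * n, t))
        _, h = self.utt_enc(flat)                  # (1, b*n, utt_hid)
        turn_states = h.squeeze(0).view(b, n, -1)  # (b, n, utt_hid)
        # Summarize the dialogue so far at the turn level.
        _, ctx = self.ctx_enc(turn_states)         # (1, b, ctx_hid)
        # Decode word by word, initialized with the context state.
        dec_out, _ = self.dec(self.embed(target), ctx)
        return self.out(dec_out)                   # (b, tgt_len, vocab)

model = HRED()
dialogue = torch.randint(0, 1000, (2, 3, 5))  # 2 dialogues, 3 turns, 5 tokens
target = torch.randint(0, 1000, (2, 4))       # 4-token responses
logits = model(dialogue, target)
print(logits.shape)  # torch.Size([2, 4, 1000])
```

The key design point is that the context RNN carries state across turns, so the decoder sees a summary of the whole conversation rather than only the last utterance.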
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Dialogue Generation | Reddit Conversation Corpus (test) | PPL | 84.02 | 13 |
| Multi-turn Dialogue Response Generation | KdConv Music domain (test) | BLEU-1 | 29.92 | 6 |
| Multi-turn Dialogue Response Generation | KdConv Film domain (test) | BLEU-1 | 27.03 | 6 |
| Dialogue Response Generation | Ubuntu Dialogue Corpus | Activity Precision | 5.93 | 6 |
| Multi-turn Dialogue Response Generation | KdConv Travel domain (test) | BLEU-1 | 30.92 | 6 |
| Dialog Response Generation | | Noun Precision | 0.31 | 4 |
| Dialogue Systems | Personal Chat | Distinct-1 | 0.22 | 3 |