A Copy-Augmented Sequence-to-Sequence Architecture Gives Good Performance on Task-Oriented Dialogue
About
Task-oriented dialogue focuses on conversational agents that participate in user-initiated dialogues on domain-specific topics. In contrast to chatbots, which simply seek to sustain open-ended meaningful discourse, existing task-oriented agents usually model user intent and belief states explicitly. This paper examines bypassing such an explicit representation by relying on a latent neural embedding of state, learning selective attention over the dialogue history together with a copy mechanism that incorporates relevant prior context. We complement recent work by showing the effectiveness of simple sequence-to-sequence neural architectures with a copy mechanism. Our model outperforms more complex memory-augmented models by 7% in per-response generation and is on par with the current state-of-the-art on DSTC2.
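The core idea, mixing a generation distribution over the vocabulary with a copy distribution induced by attention over dialogue-history tokens, can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' implementation; the function name, the toy inputs, and the scalar mixing weight `p_gen` are all illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def copy_augmented_distribution(vocab_logits, attn_scores,
                                src_token_ids, p_gen, vocab_size):
    """Blend a generation distribution over the vocabulary with a copy
    distribution over source (dialogue-history) tokens.

    p_gen in [0, 1] weights generating from the vocabulary against
    copying; attention weights over the source are scattered into
    vocabulary space by token id (repeated tokens accumulate mass).
    """
    gen_dist = softmax(vocab_logits)      # distribution over vocab types
    attn = softmax(attn_scores)           # attention over source positions
    copy_dist = np.zeros(vocab_size)
    for tok, a in zip(src_token_ids, attn):
        copy_dist[tok] += a               # accumulate repeated tokens
    return p_gen * gen_dist + (1.0 - p_gen) * copy_dist

# Toy example: 6-type vocabulary, 4-token source utterance.
vocab_size = 6
vocab_logits = np.array([0.1, 2.0, 0.3, 0.0, -1.0, 0.5])
attn_scores = np.array([1.0, 0.2, 0.2, 1.0])
src_token_ids = [4, 1, 2, 4]              # token 4 appears twice
dist = copy_augmented_distribution(vocab_logits, attn_scores,
                                   src_token_ids, p_gen=0.6,
                                   vocab_size=vocab_size)
assert abs(dist.sum() - 1.0) < 1e-9       # still a valid distribution
```

Copying lets the decoder emit rare entities (addresses, restaurant names) that appear in the dialogue history or knowledge base even when they are poorly modeled by the vocabulary softmax.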
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Task-oriented Dialogue | Stanford Multi-Domain Dialogue (SMD) (test) | BLEU | 11 | 29 |
| Dialogue Generation | DSTC2 (test) | Accuracy (Response) | 47.3 | 10 |
| Task-oriented Dialogue Response Generation | Stanford Multi-turn Multi-domain Task-oriented Dialogue Dataset Navigation (test) | BLEU | 8.7 | 4 |
| Task-oriented Dialogue Response Generation | Stanford Multi-turn Multi-domain Task-oriented Dialogue Dataset Weather SMD (test) | BLEU | 17.5 | 4 |
| Task-oriented Dialogue | In-car personal assistant dataset, real-time dialogues | Fluency | 2.33 | 4 |
| Dialogue Generation | Navigation (test) | Correctness | 3.52 | 3 |
| Task-oriented Dialogue | In-car personal assistant dialogue dataset (test) | Correctness | 3.52 | 3 |