Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings
About
We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation, with separate sentence planning and surface realization stages, against a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup performs better, surpassing the state of the art in n-gram-based scores while producing more relevant outputs.
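To make the contrast concrete, here is a minimal toy sketch (not the authors' code) of the two setups: the two-step pipeline maps a dialogue act to a deep syntax tree and then realizes it as a string, while the joint setup maps the dialogue act to a string directly. The dialogue-act syntax, the tuple-based tree encoding, and all templates below are hypothetical illustrations, not the paper's actual data format or model.

```python
# Illustrative sketch only: the DA format, tree encoding, and templates
# are hypothetical, and rule-based stand-ins replace the neural models.

def parse_da(da: str):
    """Parse a toy dialogue act like 'inform(name=Bar Central, food=French)'."""
    act, _, rest = da.partition("(")
    slots = dict(pair.split("=") for pair in rest.rstrip(")").split(", "))
    return act, slots

# Two-step setup: sentence planning (DA -> deep syntax tree), then
# surface realization (tree -> string).
def sentence_plan(da: str):
    _, slots = parse_da(da)
    # Stand-in for a deep syntax (t-)tree: (lemma, formeme, children) tuples.
    return ("be", "v:fin",
            [(slots["name"], "n:subj", []),
             ("restaurant", "n:obj", [(slots["food"], "adj:attr", [])])])

def realize(tree):
    _, _, children = tree
    subj, obj = children
    attr = obj[2][0][0]  # the attribute child of the object node
    return f"{subj[0]} is a {attr} restaurant."

# Joint setup: a single step, DA -> string directly.
def generate_joint(da: str):
    _, slots = parse_da(da)
    return f"{slots['name']} is a {slots['food']} restaurant."

da = "inform(name=Bar Central, food=French)"
print(realize(sentence_plan(da)))  # two-step output
print(generate_joint(da))          # joint output, same string here
```

In the paper, both stages (and the joint model) are learned sequence-to-sequence transductions rather than the hand-written rules above; the sketch only shows where the intermediate deep-syntax representation sits in the two-step pipeline and why the joint setup can skip it.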
Ondřej Dušek, Filip Jurčíček • 2016
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Natural language generation | E2E (test) | -- | 79 |
| Table-to-text generation | E2ENLG (test) | -- | 37 |
| Data-to-text generation | E2E | ROUGE-L 0.685 | 36 |
| Data-to-text generation | E2E (test) | BLEU 65.93 | 33 |
| Generation from meaning representations | E2E (test) | BLEU 0.6593 | 6 |
| Data-to-text generation | Hotel | BLEU 0.5059 | 4 |
| Data-to-text generation | Restaurant | BLEU 0.4074 | 4 |
| Data-to-text generation | E2E+ | BLEU 0.6292 | 3 |