
Sequence-to-Sequence Generation for Spoken Dialogue via Deep Syntax Trees and Strings

About

We present a natural language generator based on the sequence-to-sequence approach that can be trained to produce natural language strings as well as deep syntax dependency trees from input dialogue acts, and we use it to directly compare two-step generation, with separate sentence planning and surface realization stages, to a joint, one-step approach. We were able to train both setups successfully using very little training data. The joint setup offers better performance, surpassing the state of the art in n-gram-based scores while providing more relevant outputs.
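To make the contrast concrete, here is a minimal, purely illustrative sketch of the two setups compared in the abstract. All function names are hypothetical; in the actual system, trained sequence-to-sequence models play the roles that these plain Python functions stand in for, and the intermediate plan is a deep syntax dependency tree rather than a slot list.

```python
# Hedged sketch (hypothetical names): two-step generation
# (sentence planning -> surface realization) versus a joint
# one-step mapping from a dialogue act to a text string.

def parse_dialogue_act(da):
    """Parse 'inform(name=X,food=Y)' into (act_type, {slot: value})."""
    act_type, _, rest = da.partition("(")
    slots = {}
    for pair in rest.rstrip(")").split(","):
        if pair:
            key, _, value = pair.partition("=")
            slots[key] = value
    return act_type, slots

def sentence_plan(act_type, slots):
    """Step 1 of the two-step setup: build an intermediate plan
    (here just an ordered list of (slot, value) tuples; in the
    paper this is a deep syntax dependency tree)."""
    return [(k, slots[k]) for k in sorted(slots)]

def surface_realize(plan):
    """Step 2 of the two-step setup: linearize the plan into text."""
    parts = [f"its {k} is {v}" for k, v in plan]
    return "The restaurant's details: " + ", ".join(parts) + "."

def generate_two_step(da):
    act_type, slots = parse_dialogue_act(da)
    return surface_realize(sentence_plan(act_type, slots))

def generate_joint(da):
    """Joint one-step setup: map the dialogue act directly to a string,
    with no explicit intermediate plan."""
    act_type, slots = parse_dialogue_act(da)
    parts = [f"its {k} is {v}" for k, v in sorted(slots.items())]
    return "The restaurant's details: " + ", ".join(parts) + "."

print(generate_joint("inform(food=Chinese,name=Golden Palace)"))
```

The design point the sketch highlights is that the two-step setup exposes an inspectable intermediate representation, while the joint setup collapses planning and realization into a single learned mapping.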

Ondřej Dušek, Filip Jurčíček • 2016

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural language generation | E2E (test) | - | - | 79
Table-to-text generation | E2ENLG (test) | - | - | 37
Data-to-text generation | E2E | ROUGE-L | 0.685 | 36
Data-to-text generation | E2E (test) | BLEU | 65.93 | 33
Generation from meaning representations | E2E (test) | BLEU | 0.6593 | 6
Data-to-text generation | Hotel | BLEU | 0.5059 | 4
Data-to-text generation | Restaurant | BLEU | 0.4074 | 4
Data-to-text generation | E2E+ | BLEU | 0.6292 | 3
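Most of the results above are reported in BLEU, an n-gram-overlap score between generated and reference texts. As a reference point, a minimal single-reference, sentence-level BLEU (unsmoothed, so any missing n-gram order zeroes the score) can be sketched as follows; this is a simplification of the corpus-level BLEU used in the benchmarks.

```python
# Minimal unsmoothed sentence-level BLEU sketch (single reference).
# Simplified for illustration; benchmark scores use corpus-level BLEU.
import math
from collections import Counter

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=4):
    cand, ref = candidate.split(), reference.split()
    log_prec = 0.0
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(cand, n))
        ref_counts = Counter(ngrams(ref, n))
        # Clipped n-gram overlap between candidate and reference.
        overlap = sum(min(c, ref_counts[g]) for g, c in cand_counts.items())
        total = max(sum(cand_counts.values()), 1)
        if overlap == 0:
            return 0.0  # no smoothing: a missing order zeroes the score
        log_prec += math.log(overlap / total) / max_n
    # Brevity penalty: penalize candidates shorter than the reference.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(log_prec)

print(bleu("the restaurant serves cheap chinese food",
           "the restaurant serves cheap chinese food downtown"))
```

Note that a perfect match scores 1.0, while a candidate shorter than the reference is discounted by the brevity penalty even when all of its n-grams appear in the reference.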
