
Semantic Noise Matters for Neural Natural Language Generation

About

Neural natural language generation (NNLG) systems are known for their pathological outputs, i.e., generating text that is unrelated to the input specification. In this paper, we show the impact of semantic noise on state-of-the-art NNLG models that implement different semantic control mechanisms. We find that cleaned data can improve semantic correctness by up to 97%, while maintaining fluency. We also find that the most common error is omitting information, rather than hallucination.
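The omission-vs-hallucination distinction in the abstract can be made concrete with a slot-level check on E2E-style meaning representations (slot[value] pairs). The sketch below is illustrative only: the function and parsing logic are assumptions for this example, not the paper's released evaluation code, and exact string matching is only a crude proxy for real per-slot lexicalisation rules.

```python
# Minimal sketch of slot-level omission checking for data-to-text outputs,
# assuming E2E-style MRs such as "name[Alimentum], area[city centre]".
# All function names here are hypothetical, not from the paper's code.

def parse_mr(mr: str) -> dict:
    """Parse an E2E-style MR string into a slot -> value dict."""
    slots = {}
    for part in mr.split("], "):
        key, _, value = part.partition("[")
        slots[key.strip()] = value.rstrip("]").strip()
    return slots

def omitted_slots(mr: str, text: str) -> list:
    """Return MR slots whose values never surface in the generated text.

    A crude string-match proxy for the omission errors discussed above;
    a real evaluation would need per-slot realisation patterns
    (e.g. familyFriendly[no] -> "not family-friendly").
    """
    slots = parse_mr(mr)
    lowered = text.lower()
    return [k for k, v in slots.items() if v.lower() not in lowered]

mr = "name[Alimentum], area[city centre], familyFriendly[no]"
text = "Alimentum is located in the city centre."
print(omitted_slots(mr, text))  # → ['familyFriendly']
```

Under this (simplistic) check, the generated sentence realises the name and area slots but drops familyFriendly, i.e. an omission error rather than a hallucination.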

Ondřej Dušek, David M. Howcroft, Verena Rieser • 2019

Related benchmarks

Task | Dataset | Result | Rank
Data-to-text generation | E2E (test) | BLEU: 66.41 | 33
Summarization | NYT Summarization (test) | Hallucination Rate: 7.14 | 10
Data-to-text generation | Cleaned E2E (test) | BLEU: 40.5 | 9
Factual Correctness | E2E Original (test) | Add: 0.14 | 5
Data-to-text generation | E2E Cleaned | Fluency: 5.23 | 5

Other info

Code
