
Text-to-Text Pre-Training for Data-to-Text Tasks

About

We study the pre-train + fine-tune strategy for data-to-text tasks. Our experiments indicate that text-to-text pre-training in the form of T5 enables simple, end-to-end transformer-based models to outperform pipelined neural architectures tailored for data-to-text generation, as well as alternative language-model-based pre-training techniques such as BERT and GPT-2. Importantly, T5 pre-training leads to better generalization, as evidenced by large improvements on out-of-domain test sets. We hope our work serves as a useful baseline for future research, as transfer learning becomes ever more prevalent for data-to-text tasks.

Mihir Kale, Abhinav Rastogi • 2020
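
As a concrete illustration of the recipe above, the sketch below fine-tunes a pre-trained T5 checkpoint on a single linearized data-to-text example using the Hugging Face transformers library. This is not the authors' code: the checkpoint name ("t5-base"), the linearization format, and the toy WebNLG-style triple are assumptions made for illustration only.

    # Minimal sketch: fine-tune a pre-trained T5 checkpoint end to end on a
    # linearized data-to-text pair. Model name, linearization format, and the
    # toy example are illustrative assumptions, not the paper's exact setup.
    import torch
    from transformers import T5ForConditionalGeneration, T5Tokenizer

    tokenizer = T5Tokenizer.from_pretrained("t5-base")
    model = T5ForConditionalGeneration.from_pretrained("t5-base")
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    # Hypothetical WebNLG-style triple, flattened into a plain string.
    source = "subject: Alan Bean | relation: occupation | object: astronaut"
    target = "Alan Bean worked as an astronaut."

    inputs = tokenizer(source, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids

    # One fine-tuning step with the standard sequence-to-sequence cross-entropy loss.
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

    # Inference is a single decoding pass; no separate content selection,
    # planning, or surface-realization stages are involved.
    generated = model.generate(**inputs, max_length=64)
    print(tokenizer.decode(generated[0], skip_special_tokens=True))

In practice this loop would run over the full training set for several epochs; the point is simply that the pre-trained text-to-text model replaces a task-specific pipeline end to end.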

Related benchmarks

Task                                        | Dataset                       | Metric | Result | Rank
Data-to-text generation                     | WebNLG (test)                 | BLEU   | 57.1   | 39
Response generation                         | MultiWOZ (test)               | BLEU   | 35.1   | 27
Data-to-text generation                     | ToTTo                         | BLEU   | 49.5   | 18
Table-to-text generation                    | Logic2Text (test)             | BLEURT | -1.079 | 18
Table-to-text generation                    | ToTTo (test)                  | BLEURT | 0.23   | 15
Table-to-text generation                    | ToTTo All (dev)               | BLEURT | 0.233  | 15
Table-to-text generation                    | ToTTo Non (test)              | BLEURT | 0.106  | 15
Table-to-text generation                    | ToTTo Over (test)             | BLEURT | 0.354  | 15
Loosely controlled table-to-text generation | ToTTo Logic2Text-style (test) | BLEU   | 29.4   | 15
Graph-to-text generation                    | WebNLG seen v1.0 (test)       | BLEU   | 63.9   | 12
Showing 10 of 36 rows
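
Most of the results above are reported in BLEU (n-gram overlap with reference texts, on a 0-100 scale) or BLEURT (a learned evaluation metric on a different scale). As a rough illustration of how a corpus-level BLEU score is computed, here is a small sketch using the sacrebleu library; the hypothesis and reference sentences are made up, and the leaderboard numbers were not necessarily produced with this exact tooling.

    # Minimal sketch of corpus-level BLEU scoring with sacrebleu.
    # The hypotheses and references below are invented examples, not from the paper.
    import sacrebleu

    # System outputs, one generated sentence per input.
    hypotheses = [
        "Alan Bean worked as an astronaut.",
        "The Eiffel Tower is located in Paris.",
    ]

    # One reference stream, aligned position-by-position with the hypotheses.
    references = [
        [
            "Alan Bean served as an astronaut.",
            "The Eiffel Tower stands in Paris.",
        ]
    ]

    bleu = sacrebleu.corpus_bleu(hypotheses, references)
    print(f"BLEU = {bleu.score:.1f}")  # corpus-level score on a 0-100 scale

BLEURT scores such as those in the ToTTo rows come from a separate learned model and are not on the same 0-100 scale as BLEU.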
