Improving Compositional Generalization with Self-Training for Data-to-Text Generation

About

Data-to-text generation focuses on generating fluent natural language responses from structured meaning representations (MRs). Such representations are compositional, and it is costly to collect responses for all possible combinations of atomic meaning schemata, thereby necessitating few-shot generalization to novel MRs. In this work, we systematically study the compositional generalization of state-of-the-art T5 models in few-shot data-to-text tasks. We show that T5 models fail to generalize to unseen MRs, and we propose a template-based input representation that considerably improves the model's generalization capability. To further improve the model's performance, we propose an approach based on self-training using fine-tuned BLEURT for pseudo-response selection. On the commonly used SGD and Weather benchmarks, the proposed self-training approach improves tree accuracy by 46%+ and reduces slot error rates by 73%+ over the strong T5 baselines in few-shot settings.
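To make the two ideas in the abstract concrete, here is a minimal Python sketch of (1) a template-based input representation that flattens a structured MR into natural-language text before it is fed to T5, and (2) one round of self-training in which pseudo-responses are kept only if a learned quality model scores them highly. This is not the authors' released code: the toy MR format, `render_template`, `quality_score`, and `SCORE_THRESHOLD` are all illustrative assumptions, and the paper's fine-tuned BLEURT selection procedure is abstracted behind the `quality_score` callable.

```python
# Sketch of template-based MR inputs + BLEURT-filtered self-training.
# Illustrative only; names and the MR format are assumptions, not the paper's code.
from transformers import T5ForConditionalGeneration, T5TokenizerFast

def render_template(mr: dict) -> str:
    """Flatten a structured MR into a template-like natural-language string.

    e.g. {"act": "inform", "slots": {"condition": "sunny", "city": "Palo Alto"}}
    ->   "inform: the condition is sunny, the city is Palo Alto"
    """
    slot_text = ", ".join(f"the {k} is {v}" for k, v in mr["slots"].items())
    return f'{mr["act"]}: {slot_text}'

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

def generate_response(mr: dict) -> str:
    """Generate a response from the template-rendered MR with a T5 model."""
    inputs = tokenizer(render_template(mr), return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

# Assumed selection threshold; the paper tunes pseudo-response selection
# with a fine-tuned BLEURT model rather than a fixed cutoff.
SCORE_THRESHOLD = 0.9

def self_training_round(unlabeled_mrs, quality_score):
    """One self-training round.

    quality_score(reference, candidate) -> float is a stand-in for the
    fine-tuned BLEURT scorer, here assumed to use the template rendering
    as the reference side. Selected (input, response) pairs would be
    appended to the fine-tuning pool for the next round.
    """
    selected = []
    for mr in unlabeled_mrs:
        candidate = generate_response(mr)
        if quality_score(render_template(mr), candidate) >= SCORE_THRESHOLD:
            selected.append((render_template(mr), candidate))
    return selected
```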

Sanket Vaibhav Mehta, Jinfeng Rao, Yi Tay, Mihir Kale, Ankur P. Parikh, Emma Strubell • 2021

Related benchmarks

Task                    | Dataset                                 | Metric         | Result | Rank
Task-oriented Dialogue  | FewShotWeather seen structures (test)   | BLEU           | 73.82  | 13
Task-oriented Dialogue  | FewShotWeather unseen structures (test) | BLEU           | 57.11  | 13
Task-oriented Dialogue  | FewShotSGD seen schemata (test)         | BLEU           | 27.48  | 13
Task-oriented Dialogue  | FewShotSGD unseen schemata (test)       | BLEU           | 27.53  | 13
Data-to-text generation | FewShotWeather 1-shot-250 (seen)        | Grammaticality | 2.66   | 3
Data-to-text generation | FewShotWeather 1-shot-250 (unseen)      | Grammaticality | 2.50   | 3
Data-to-text generation | SGD 5-shot (seen)                       | Grammaticality | 2.69   | 3
Data-to-text generation | SGD 5-shot (unseen)                     | Grammaticality | 2.67   | 3

Other info

Code
