
ReGen: Reinforcement Learning for Text and Knowledge Base Generation using Pretrained Language Models

About

Automatic construction of relevant Knowledge Bases (KBs) from text, and generation of semantically meaningful text from KBs, are both long-standing goals in Machine Learning. In this paper, we present ReGen, a bidirectional approach to text and graph generation that leverages Reinforcement Learning (RL) to improve performance. Graph linearization enables us to reframe both tasks as sequence-to-sequence generation problems regardless of the generative direction, which in turn allows the use of RL for sequence training, where the model is employed as its own critic, leading to Self-Critical Sequence Training (SCST). We present an extensive investigation demonstrating that the use of RL via SCST benefits graph and text generation on the WebNLG+ 2020 and TekGen datasets. Our system provides state-of-the-art results on WebNLG+ 2020, significantly improving upon published results from the WebNLG+ 2020 Challenge for both text-to-graph and graph-to-text generation tasks.
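Graph linearization, as described in the abstract, turns a set of (subject, predicate, object) triples into a flat token sequence so that a standard sequence-to-sequence model can consume or emit it in either direction. A minimal sketch is below; the `<S>`/`<P>`/`<O>` marker tokens and function names are illustrative assumptions, not necessarily the exact scheme used by ReGen.

```python
import re

def linearize(triples):
    # Flatten each (subject, predicate, object) triple into a marked-up
    # token sequence. The <S>/<P>/<O> markers are an assumed convention.
    return " ".join(f"<S> {s} <P> {p} <O> {o}" for s, p, o in triples)

def delinearize(seq):
    # Inverse operation: recover triples from a linearized sequence,
    # splitting on the same marker tokens.
    pattern = r"<S> (.*?) <P> (.*?) <O> (.*?)(?= <S> |$)"
    return [tuple(m) for m in re.findall(pattern, seq)]
```

With this round-trip in place, text-to-graph generation reduces to producing the linearized string, and graph-to-text to consuming it, so one seq2seq architecture covers both directions.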

Pierre L. Dognin, Inkit Padhi, Igor Melnyk, Payel Das • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Triple Extraction | WebNLG+ 2020 (test) | F1 Score | 76.7 | 27
Text-to-Graph | WebNLG+ 2020 v3.0 (test) | F1 Score | 80.7 | 16
Knowledge Graph Generation | TEKGEN (test) | F1 Score | 62.3 | 13
Relational Triplet Extraction | WebNLG | Partial F1 | 76.7 | 11
Relational Fact Extraction | WebNLG (test) | Partial Precision | 75.5 | 11
RDF-to-Text Generation | WebNLG+ 2020 3.0 (testB) | BLEU | 0.563 | 10
Knowledge Graph Generation | NYT (test) | F1 Score | 83.4 | 9
