Thank you BART! Rewarding Pre-Trained Models Improves Formality Style Transfer

About

The scarcity of parallel data means that formality style transfer models often fail to preserve content. We show that fine-tuning pre-trained language (GPT-2) and sequence-to-sequence (BART) models boosts content preservation, and that this is possible even with limited amounts of parallel data. By augmenting these models with rewards that target style and content, the two core aspects of the task, we achieve a new state of the art.
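As a rough illustration of the fine-tuning step described above (not the authors' released code), the sketch below adapts a pre-trained BART checkpoint to informal-to-formal rewriting with the Hugging Face transformers library. The checkpoint name, example sentences, and hyperparameters are assumptions for illustration only.

```python
# Minimal sketch: fine-tuning a pre-trained seq2seq model (BART) on
# informal -> formal parallel pairs. Checkpoint, data, and learning rate
# are illustrative assumptions, not the paper's exact configuration.
import torch
from transformers import BartTokenizer, BartForConditionalGeneration

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

# Hypothetical parallel pairs: (informal source, formal reference).
pairs = [
    ("u gotta see this movie, its awesome", "You should see this movie; it is excellent."),
    ("gonna be late, sry", "I am going to be late; my apologies."),
]

optimizer = torch.optim.AdamW(model.parameters(), lr=3e-5)
model.train()
for informal, formal in pairs:
    inputs = tokenizer(informal, return_tensors="pt")
    labels = tokenizer(formal, return_tensors="pt").input_ids
    # Standard cross-entropy loss against the formal reference.
    loss = model(**inputs, labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

# Inference: rewrite a new informal sentence.
model.eval()
generated = model.generate(
    **tokenizer("this paper is rly good imo", return_tensors="pt"),
    max_length=40, num_beams=4,
)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```

The paper then augments this supervised objective with rewards targeting style and content; that reward stage is omitted from the sketch.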

Huiyuan Lai, Antonio Toral, Malvina Nissim • 2021

Related benchmarks

Task                       Dataset                                    Metric   Result   Rank
Formality Style Transfer   GYAFC Entertainment & Music 1.0 (test)     BLEURT   0.274    15
Formality Style Transfer   GYAFC Family & Relationships 1.0 (test)    BLEU     0.793    15
Formality Style Transfer   GYAFC Entertainment & Music (test)         BLEU     76.5     10
Formality Style Transfer   GYAFC Family & Relationships (test)        BLEU     79.25    10
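The BLEU figures above are corpus-level scores against human references on the GYAFC test sets. As a hedged sketch of how such a score is typically computed (this is not the leaderboard's own evaluation script), one might use the sacrebleu library; the file names below are placeholders.

```python
# Minimal sketch of corpus-level BLEU, assuming the sacrebleu library and that
# system outputs and references are available as line-aligned text files.
import sacrebleu

with open("outputs.txt") as f:
    hypotheses = [line.strip() for line in f]
with open("references.txt") as f:
    references = [line.strip() for line in f]

# corpus_bleu takes a list of reference streams; additional human reference
# streams (GYAFC test sets provide several) can be appended to the list.
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")  # sacrebleu reports BLEU on a 0-100 scale
```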

Other info

Code
