
Factorising Meaning and Form for Intent-Preserving Paraphrasing

About

We propose a method for generating paraphrases of English questions that retain the original intent but use a different surface form. Our model combines a careful choice of training objective with a principled information bottleneck to induce a latent encoding space that disentangles meaning and form. We train an encoder-decoder model to reconstruct a question from a paraphrase with the same meaning and an exemplar with the same surface form, leading to separated encoding spaces. We use a Vector-Quantized Variational Autoencoder (VQ-VAE) to represent the surface form as a set of discrete latent variables, allowing us to use a classifier to select a different surface form at test time. Crucially, our method does not require access to an external source of target exemplars. Extensive experiments and a human evaluation show that we are able to generate paraphrases with a better trade-off between semantic preservation and syntactic novelty compared to previous methods.
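The key mechanism the abstract mentions is vector quantisation: continuous encoder outputs are snapped to their nearest entries in a learned codebook, yielding discrete latent codes for surface form. The sketch below shows only that nearest-neighbour lookup step in isolation; the codebook size, embedding dimension, and function names are illustrative assumptions, not details of the paper's actual model.

```python
import numpy as np

# Hypothetical codebook: in a VQ-VAE this is learned jointly with the
# encoder and decoder; here it is random, purely for illustration.
rng = np.random.default_rng(0)
num_codes, dim = 8, 4
codebook = rng.normal(size=(num_codes, dim))

def quantize(z):
    """Map each continuous encoding to its nearest codebook entry.

    z: array of shape (batch, dim), continuous encoder outputs.
    Returns (indices, quantised vectors): the indices are the discrete
    latent variables that stand in for surface form.
    """
    # Squared Euclidean distance from each encoding to every code.
    dists = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    idx = dists.argmin(axis=1)
    return idx, codebook[idx]

z = rng.normal(size=(3, dim))
idx, z_q = quantize(z)
```

Because the resulting codes are discrete, a separate classifier can predict a *different* code combination at test time to produce a paraphrase with a new surface form, which is the behaviour the abstract describes.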

Tom Hosking, Mirella Lapata • 2021

Related benchmarks

Task                    Dataset         Metric      Result   Rank
Paraphrase Generation   QQP (test)      -           -        22
Paraphrase Generation   MSCOCO (test)   Self-BLEU   12.76    14
Paraphrase Generation   Paralex (test)  BLEU        36.36    11
Paraphrase Generation   Paralex         iBLEU       21.67    4
Paraphrase Generation   QQP             iBLEU       13.63    4
Paraphrase Generation   MSCOCO          iBLEU       13.77    4

Other info

Code
