
On Variational Learning of Controllable Representations for Text without Supervision

About

The variational autoencoder (VAE) can learn the manifold of natural images on certain datasets, as evidenced by meaningful interpolation or extrapolation in the continuous latent space. However, on discrete data such as text, it is unclear whether unsupervised learning can discover a similar latent space that allows controllable manipulation. In this work, we find that sequence VAEs trained on text fail to decode properly when the latent codes are manipulated, because the modified codes often land in holes or vacant regions of the aggregated posterior latent space, where the decoding network fails to generalize. Both as a validation of this explanation and as a fix to the problem, we propose to constrain the posterior mean to a learned probability simplex and to perform manipulation within this simplex. Our proposed method mitigates the latent vacancy problem and achieves the first success in unsupervised learning of controllable representations for text. Empirically, our method outperforms unsupervised baselines and strong supervised approaches on text style transfer, and is capable of more flexible fine-grained control over text generation than existing methods.
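The core mechanism, constraining the posterior mean to a learned probability simplex, can be sketched roughly as below. This is a minimal PyTorch illustration under assumptions, not the authors' released implementation: the module name `SimplexPosterior`, the parameter `num_bases`, and the mapping from encoder output to mixture weights are hypothetical choices made for clarity; the paper's actual architecture and training objectives may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplexPosterior(nn.Module):
    """Sketch: posterior mean restricted to the convex hull (probability
    simplex) of K learned basis vectors, so manipulated codes stay inside
    the aggregated posterior instead of landing in vacant regions.
    Names and structure are illustrative assumptions, not the paper's code."""

    def __init__(self, enc_dim: int, latent_dim: int, num_bases: int = 10):
        super().__init__()
        # K learned basis vectors spanning the simplex (assumed initialization).
        self.bases = nn.Parameter(torch.randn(num_bases, latent_dim))
        self.to_logits = nn.Linear(enc_dim, num_bases)
        self.to_logvar = nn.Linear(enc_dim, latent_dim)

    def forward(self, h: torch.Tensor):
        # Softmax places the mixture weights on the probability simplex.
        weights = F.softmax(self.to_logits(h), dim=-1)   # (B, K)
        # Posterior mean is a convex combination of the learned bases.
        mu = weights @ self.bases                        # (B, latent_dim)
        logvar = self.to_logvar(h)
        # Standard VAE reparameterization trick.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return z, mu, logvar, weights
```

In this sketch, manipulation amounts to editing the mixture weights, for example pushing `weights` toward a one-hot vector for a chosen basis, so the resulting mean moves within the simplex rather than into a hole in the aggregated posterior.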

Peng Xu, Jackie Chi Kit Cheung, Yanshuai Cao • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Semantics preservation | Yelp | CTC Score | 46.3 | 7 |
| Style Transfer | Yahoo | Accuracy | 43.92 | 7 |
| Semantics preservation | IMDB | CTC Score | 46.2 | 7 |
| Semantics preservation | AMAZON | CTC Score | 0.461 | 7 |
| Style Transfer | Yelp | Accuracy | 14.5 | 7 |
| Semantics preservation | Yahoo | CTC Score | 44.3 | 7 |
| Style Transfer | IMDB | Accuracy | 20.15 | 7 |
| Style Transfer | AMAZON | Accuracy | 32.6 | 7 |
| Sentiment Transfer | Amazon, IMDB, Yelp, and Yahoo (test) | Style Score | 1.3 | 3 |
