Heavy-tailed Representations, Text Polarity Classification & Data Augmentation
About
The dominant approaches to text representation in natural language processing rely on embeddings learned on massive corpora, which have convenient properties such as compositionality and distance preservation. In this paper, we develop a novel method to learn a heavy-tailed embedding with desirable regularity properties in the distributional tails, which allows points far from the bulk of the distribution to be analyzed within the framework of multivariate extreme value theory. In particular, we obtain a classifier dedicated to the tails of the proposed embedding whose performance outperforms the baseline. This classifier exhibits a scale-invariance property, which we leverage to introduce a novel text generation method for label-preserving dataset augmentation. Numerical experiments on synthetic and real text data demonstrate the relevance of the proposed framework and confirm that the method generates meaningful sentences with a controllable attribute, e.g. positive or negative sentiment.
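The scale-invariance idea can be illustrated with a toy sketch: if a classifier only looks at the angular component of an embedding (its projection onto the unit sphere), its prediction is unchanged when the embedding's norm is rescaled, which is what makes norm rescaling a label-preserving transformation. The code below is a minimal illustration, not the paper's architecture; the linear angular classifier and the synthetic heavy-tailed data are assumptions made for the example.

```python
import numpy as np

def angular(v):
    """Project embeddings onto the unit sphere (the 'angular' component).

    In multivariate extreme value theory, the label of an extreme point
    is modeled as depending on this angle only, not on the norm.
    """
    return v / np.linalg.norm(v, axis=-1, keepdims=True)

class TailClassifier:
    """Toy scale-invariant classifier: a linear model fit on angles only.

    Illustrative placeholder for any classifier trained on angular
    components; the embedding and training procedure are assumptions.
    """
    def fit(self, Z, y):
        # least-squares separator on the sphere, targets in {-1, +1}
        self.w, *_ = np.linalg.lstsq(angular(Z), 2.0 * y - 1.0, rcond=None)
        return self

    def predict(self, Z):
        return (angular(Z) @ self.w > 0).astype(int)

rng = np.random.default_rng(0)
# synthetic heavy-tailed embeddings: the direction carries the label,
# the norm is Pareto-distributed (heavy-tailed)
y = rng.integers(0, 2, size=200)
dirs = np.where(y[:, None] == 1, [1.0, 0.2], [0.2, 1.0])
radii = rng.pareto(2.0, size=(200, 1)) + 1.0
Z = radii * (dirs + 0.05 * rng.normal(size=(200, 2)))

clf = TailClassifier().fit(Z, y)
# scale invariance: rescaling every embedding leaves predictions unchanged
assert (clf.predict(Z) == clf.predict(2.0 * Z)).all()
```

Because the decision depends only on the angle, rescaling the norm of an input's embedding (as in the augmentation scheme) cannot flip the predicted label.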
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sentiment Classification | Yelp (test) | -- | -- | 46 |
| Qualitative Evaluation of Generated Sequences | Yelp | Sentiment Preservation | 85.7 | 4 |
| Sentiment Classification | Amazon Extreme (test) | Loss | 0.08 | 4 |
| Sentiment Classification | Amazon Overall (test) | Classification Loss | 0.084 | 4 |
| Sentiment Classification | Yelp Bulk (test) | Classification Loss | 0.097 | 4 |
| Sentiment Classification | Yelp Extreme (test) | Loss | 0.1205 | 4 |
| Text Classification | Amazon Medium | F1 Score | 86.3 | 4 |
| Text Classification | Amazon Large | F1 Score | 94 | 4 |
| Text Classification | Yelp Medium | F1 Score | 88.4 | 4 |
| Sentiment Classification | Amazon Bulk (test) | Loss | 0.085 | 4 |