Differentiable Data Augmentation for Contrastive Sentence Representation Learning
About
Fine-tuning a pre-trained language model via the contrastive learning framework, using large amounts of unlabeled sentences or labeled sentence pairs, is a common way to obtain high-quality sentence representations. Although the contrastive learning framework has shown its superiority over previous sentence representation learning methods, its potential remains under-explored because of the simple methods used to construct positive pairs. Motivated by this, we propose a method that makes hard positives from the original training examples. A pivotal ingredient of our approach is the use of a prefix attached to a pre-trained language model, which allows for differentiable data augmentation during contrastive learning. Our method can be summarized in two steps: supervised prefix-tuning followed by joint contrastive fine-tuning with unlabeled or labeled examples. Our experiments confirm the effectiveness of our data augmentation approach: the proposed method yields significant improvements over existing methods under both semi-supervised and supervised settings. Our experiments in a low-labeled-data setting also show that our method is more label-efficient than state-of-the-art contrastive learning methods.
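The paper's exact objective is not reproduced here, but the contrastive fine-tuning step typically relies on an InfoNCE-style loss in which each sentence embedding is pulled toward its augmented positive and pushed away from the other in-batch examples. The sketch below illustrates that objective in plain NumPy; the prefix-generated "hard positive" view is simulated by a perturbed copy of each embedding, and the `temperature` value is a common default, not one taken from the paper.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.05):
    """InfoNCE contrastive loss: each z1[i] should match z2[i]
    (its positive view) against all other z2[j] in the batch."""
    # L2-normalize so dot products become cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature               # (batch, batch) similarity matrix
    sim = sim - sim.max(axis=1, keepdims=True)  # subtract row max for numerical stability
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    # positives sit on the diagonal of the similarity matrix
    return -np.mean(np.diag(log_prob))

# Toy batch: 4 sentence embeddings and simulated prefix-augmented positives.
rng = np.random.default_rng(0)
z = rng.normal(size=(4, 8))
z_aug = z + 0.1 * rng.normal(size=(4, 8))  # stand-in for the prefix-generated view
loss = info_nce_loss(z, z_aug)
print(loss)
```

In the actual method the augmented view would come from the prefix-tuned language model, so gradients of this loss can flow back through the augmentation itself, which is what makes the augmentation differentiable.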
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Textual Similarity | STS (Semantic Textual Similarity) 2012-2016 (test) | STS-12 Score | 76.92 | 57 |
| Semantic Textual Similarity | CDSC-R (val) | Spearman Correlation | 62.47 | 22 |
| Semantic Textual Similarity | CDSC-R (test) | Spearman Correlation | 0.6465 | 22 |
| Semantic Textual Similarity | BIOSSES | Spearman Correlation | 40.12 | 22 |
| Binary Classification | QQP, QNLI, MRPC Average | Average AUC | 78.05 | 16 |
| Reranking | MTEB Reranking (test) | MAP (AU) | 51.1 | 11 |