
Generating Datasets with Pretrained Language Models

About

To obtain high-quality sentence embeddings from pretrained language models (PLMs), they must either be augmented with additional pretraining objectives or finetuned on a large set of labeled text pairs. While the latter approach typically outperforms the former, it requires great human effort to generate suitable datasets of sufficient size. In this paper, we show how PLMs can be leveraged to obtain high-quality sentence embeddings without the need for labeled data, finetuning or modifications to the pretraining objective: We utilize the generative abilities of large and high-performing PLMs to generate entire datasets of labeled text pairs from scratch, which we then use for finetuning much smaller and more efficient models. Our fully unsupervised approach outperforms strong baselines on several semantic textual similarity datasets.
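To make the recipe concrete, here is a minimal sketch of the generation step, assuming the Hugging Face transformers library. The model choice, prompt wording, sampling settings, and helper names below are illustrative assumptions, not the authors' exact DINO setup.

```python
# Sketch of the paper's core idea (illustrative, not the authors' exact
# implementation): prompt a large generative PLM with an instruction,
# and let the instruction's implied similarity label serve as the label
# of the generated sentence pair. Assumes the Hugging Face
# `transformers` library; model, prompts, and sampling settings are
# assumptions.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2-large")

# One instruction per target similarity label (wording is hypothetical).
PROMPTS = {
    1.0: 'Task: Write two sentences that mean the same thing.\n'
         'Sentence 1: "{s}"\nSentence 2: "',
    0.0: 'Task: Write two sentences that are on completely different topics.\n'
         'Sentence 1: "{s}"\nSentence 2: "',
}

def generate_pair(sentence: str, label: float):
    """Return one (sentence1, sentence2, label) training example."""
    prompt = PROMPTS[label].format(s=sentence)
    out = generator(prompt, max_new_tokens=40, do_sample=True,
                    top_p=0.9)[0]["generated_text"]
    # Keep only the freshly generated text, up to the closing quote.
    sentence2 = out[len(prompt):].split('"')[0].strip()
    return sentence, sentence2, label

dataset = [generate_pair("A man is playing a guitar.", lbl) for lbl in (1.0, 0.0)]
```

The resulting labeled pairs can then serve as supervision for finetuning a much smaller bi-encoder (for instance with the sentence-transformers library) on a regression or contrastive objective, which is where the efficiency gain comes from.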

Timo Schick, Hinrich Schütze • 2021

Related benchmarks

Task (all rows): Semantic Textual Similarity

Dataset                                                       Metric                 Result  Rank
STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R)  STS12 Score            73.94   195
STS-B                                                         Spearman's rho (x100)  77.82   70
STS12                                                         Spearman's rho         0.7027  23
STS13 (test)                                                  Spearman's rho (x100)  81.26   12
STS15 (test)                                                  Spearman's rho         0.8049  12
STS16 (test)                                                  Spearman's rho (x100)  77.18   12
STS14 (test)                                                  Spearman's rho         0.7125  12
SICK (test)                                                   Spearman's rho         0.7426  12
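As background on the metric above: each row reports Spearman's rank correlation between the similarities predicted from the model's sentence embeddings and human-annotated gold scores, sometimes scaled by 100. A minimal sketch of that computation, assuming scipy and numpy; embed() here is a hypothetical placeholder encoder, not part of the paper's code.

```python
# How an STS Spearman score is computed (illustrative): rank-correlate
# cosine similarities of predicted sentence embeddings with gold human
# similarity scores. Assumes scipy and numpy; `embed` is a hypothetical
# placeholder encoder.
import numpy as np
from scipy.stats import spearmanr

def embed(sentence: str) -> np.ndarray:
    """Placeholder encoder: pseudo-random vector per sentence
    (stable within a run; swap in a real sentence encoder)."""
    rng = np.random.default_rng(abs(hash(sentence)) % 2**32)
    return rng.normal(size=128)

def sts_spearman(pairs, gold):
    """Spearman's rho (x100) between cosine similarities and gold scores."""
    sims = []
    for s1, s2 in pairs:
        e1, e2 = embed(s1), embed(s2)
        sims.append(float(e1 @ e2 / (np.linalg.norm(e1) * np.linalg.norm(e2))))
    return spearmanr(sims, gold)[0] * 100
```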

Other info

Code
