PromptBERT: Improving BERT Sentence Embeddings with Prompts
About
We propose PromptBERT, a novel contrastive learning method for learning better sentence representations. We first analyze the drawbacks of sentence embeddings from the original BERT and find that they are mainly due to static token embedding bias and ineffective BERT layers. We then propose the first prompt-based sentence embedding method and discuss two prompt representation methods and three prompt searching methods that help BERT produce better sentence embeddings. Moreover, we propose a novel unsupervised training objective based on template denoising, which substantially narrows the performance gap between the supervised and unsupervised settings. Extensive experiments show the effectiveness of our method. Compared to SimCSE, PromptBERT achieves improvements of 2.29 and 2.58 points on BERT and RoBERTa, respectively, in the unsupervised setting.
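The core idea above can be sketched in a few lines: wrap each sentence in a prompt template, take the representation at the [MASK] position as the sentence embedding, denoise it by subtracting the template's own bias, and train with a contrastive (InfoNCE) objective. The sketch below is illustrative, not the authors' released code: the template string, the toy stand-in for BERT's [MASK] hidden state, and all function names are assumptions.

```python
import zlib
import numpy as np

# Illustrative template in the style the paper describes (assumed wording).
TEMPLATE = 'This sentence : "{sentence}" means [MASK] .'


def fill_template(sentence: str) -> str:
    """Wrap a sentence in the prompt template."""
    return TEMPLATE.format(sentence=sentence)


def toy_mask_embedding(text: str, dim: int = 16) -> np.ndarray:
    """Stand-in for the encoder's hidden state at the [MASK] token:
    a deterministic pseudo-random vector keyed on the text (a real
    implementation would run BERT and select the [MASK] position)."""
    seed = zlib.crc32(text.encode("utf-8"))
    return np.random.default_rng(seed).standard_normal(dim)


def denoised_embedding(sentence: str) -> np.ndarray:
    """Template denoising (sketch): subtract the embedding of the empty
    template so the template's own bias is removed."""
    return (toy_mask_embedding(fill_template(sentence))
            - toy_mask_embedding(fill_template("")))


def info_nce(views_a: np.ndarray, views_b: np.ndarray, temp: float = 0.05) -> float:
    """Contrastive InfoNCE loss: views_a[i] and views_b[i] are two prompt
    views of sentence i; the other rows in the batch act as negatives."""
    a = views_a / np.linalg.norm(views_a, axis=1, keepdims=True)
    b = views_b / np.linalg.norm(views_b, axis=1, keepdims=True)
    logits = (a @ b.T) / temp
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))


if __name__ == "__main__":
    sentences = ["A man is playing guitar.", "The weather is nice today."]
    # In training, two different templates would supply the two views;
    # here the same toy embedding is reused for both views.
    emb = np.stack([denoised_embedding(s) for s in sentences])
    print(fill_template(sentences[0]))
    print("loss:", info_nce(emb, emb))
```

With real templates, the two views of a sentence come from two different prompts, so the positive pair differs only in template wording while other sentences in the batch serve as in-batch negatives.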
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R), various (test) | STS12 Score: 76.75 | 393 |
| Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R) | STS12 Score: 76.41 | 195 |
| Sentence Classification Transfer Tasks | SentEval transfer tasks | Average Accuracy: 0.89 | 99 |
| Sentence Classification | SentEval transfer tasks (test) | MR: 82.88 | 73 |
| Semantic Textual Similarity | English STS | Average Score: 79.15 | 68 |
| Semantic Textual Similarity | STS (Semantic Textual Similarity) 2012-2016 (test) | STS-12 Score: 60.96 | 57 |
| Sentence Embedding Evaluation | SentEval | Average Score (Avg): 89.11 | 44 |