
Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks

About

BERT (Devlin et al., 2018) and RoBERTa (Liu et al., 2019) have set a new state-of-the-art performance on sentence-pair regression tasks like semantic textual similarity (STS). However, they require that both sentences are fed into the network, which causes a massive computational overhead: finding the most similar pair in a collection of 10,000 sentences requires about 50 million inference computations (~65 hours) with BERT. The construction of BERT makes it unsuitable for semantic similarity search as well as for unsupervised tasks like clustering. In this publication, we present Sentence-BERT (SBERT), a modification of the pretrained BERT network that uses siamese and triplet network structures to derive semantically meaningful sentence embeddings that can be compared using cosine similarity. This reduces the effort for finding the most similar pair from 65 hours with BERT / RoBERTa to about 5 seconds with SBERT, while maintaining the accuracy of BERT. We evaluate SBERT and SRoBERTa on common STS tasks and transfer learning tasks, where they outperform other state-of-the-art sentence embedding methods.

Nils Reimers, Iryna Gurevych • 2019
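The abstract's cost comparison follows from simple combinatorics: a cross-encoder must score every unordered pair, i.e. n(n-1)/2 ≈ 50 million forward passes for n = 10,000 sentences, whereas with precomputed per-sentence embeddings the search reduces to cheap cosine-similarity comparisons. A minimal sketch of that idea, using toy hand-made vectors rather than real SBERT embeddings:

```python
import math

def pair_count(n):
    # Number of unordered sentence pairs a cross-encoder like BERT
    # must score to find the most similar pair among n sentences.
    return n * (n - 1) // 2

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def most_similar_pair(embeddings):
    # With fixed SBERT-style embeddings, each comparison is a dot
    # product instead of a full transformer forward pass.
    best_pair, best_score = None, -1.0
    for i in range(len(embeddings)):
        for j in range(i + 1, len(embeddings)):
            s = cosine(embeddings[i], embeddings[j])
            if s > best_score:
                best_pair, best_score = (i, j), s
    return best_pair, best_score

# ~50 million BERT inference computations for 10,000 sentences,
# matching the figure quoted in the abstract.
print(pair_count(10_000))  # 49995000

# Toy 2-d "embeddings": vectors 0 and 1 point in nearly the same direction.
toy = [[1.0, 0.0], [0.9, 0.1], [0.0, 1.0]]
print(most_similar_pair(toy)[0])  # (0, 1)
```

In practice the pairwise loop would itself be replaced by a single matrix multiplication over normalized embeddings, but the sketch shows why moving the expensive network call from per-pair to per-sentence changes the search from hours to seconds.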

Related benchmarks

Task | Dataset | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy 75.2 | 1460
Natural Language Inference | SNLI (test) | Accuracy 77 | 681
Reasoning | BBH | Accuracy 63.05 | 507
Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R), various (test) | STS12 Score 74.53 | 393
Natural Language Inference | RTE | Accuracy 60.2 | 367
Question Answering | OBQA | Accuracy 47.2 | 276
Subjectivity Classification | Subj | Accuracy 94.5 | 266
Question Answering | ARC-E | Accuracy 62.9 | 242
Reading Comprehension | BoolQ | Accuracy 73.6 | 219
Sentiment Classification | SST2 (test) | Accuracy 87.65 | 214
Showing 10 of 248 rows
