
Improving Sentence Embeddings with Automatic Generation of Training Data Using Few-shot Examples

About

Decoder-based large language models (LLMs) have shown high performance on many natural language processing tasks. This also holds for sentence embedding learning, where a decoder-based model, PromptEOL, has achieved the best performance on semantic textual similarity (STS) tasks. However, PromptEOL requires a manually annotated natural language inference (NLI) dataset for fine-tuning. We aim to improve sentence embeddings without large manually annotated datasets by automatically generating an NLI dataset with an LLM and using it to fine-tune PromptEOL. To achieve this, we explore methods of data generation suitable for sentence embedding learning. Specifically, we focus on automatic dataset generation through few-shot learning and explore appropriate ways to leverage few-shot examples. Experimental results on the STS tasks demonstrate that our approach outperforms existing models in settings without large manually annotated datasets.
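The core idea above is to prompt an LLM with a handful of annotated NLI triplets so it completes entailment/contradiction hypotheses for new premises, yielding synthetic fine-tuning data. The sketch below illustrates one plausible way to assemble such a few-shot prompt; the example triplets, prompt wording, and the overall format are illustrative assumptions, not the authors' actual prompts or pipeline.

```python
# Hypothetical few-shot prompt construction for generating NLI triplets
# (premise, entailment, contradiction) with an LLM. All examples and the
# template wording are assumptions for illustration only.

FEW_SHOT_EXAMPLES = [
    {
        "premise": "A man is playing a guitar on stage.",
        "entailment": "A person is performing music.",
        "contradiction": "The stage is completely empty.",
    },
    {
        "premise": "Two dogs are running through a field.",
        "entailment": "Animals are moving outdoors.",
        "contradiction": "The dogs are sleeping indoors.",
    },
]

def build_prompt(premise: str) -> str:
    """Format the few-shot examples followed by a new premise, so that an
    LLM can complete the entailment/contradiction pair for the last block."""
    blocks = []
    for ex in FEW_SHOT_EXAMPLES:
        blocks.append(
            f"Premise: {ex['premise']}\n"
            f"Entailment: {ex['entailment']}\n"
            f"Contradiction: {ex['contradiction']}"
        )
    # The LLM is expected to continue generating from "Entailment:".
    blocks.append(f"Premise: {premise}\nEntailment:")
    return "\n\n".join(blocks)

prompt = build_prompt("A chef is chopping vegetables in a kitchen.")
print(prompt)
```

In a full pipeline, the completion returned by the LLM would be parsed back into (premise, entailment, contradiction) triplets and used as synthetic NLI data for contrastive fine-tuning of the sentence encoder.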

Soma Sato, Hayato Tsukagoshi, Ryohei Sasano, Koichi Takeda • 2024

Related benchmarks

Task                        | Dataset                                                                       | Result             | Rank
Semantic Textual Similarity | STS tasks (STS12, STS13, STS14, STS15, STS16, STS-B, SICK-R), various (test) | STS12 Score: 78.75 | 393
Sentence Classification     | SentEval Transfer tasks (test)                                                | MR: 90.53          | 73

Other info

Code
