
Enhancing Lexicon-Based Text Embeddings with Large Language Models

About

Recent large language models (LLMs) have demonstrated exceptional performance on general-purpose text embedding tasks. While dense embeddings have dominated related research, we introduce the first Lexicon-based EmbeddiNgS (LENS) leveraging LLMs that achieve competitive performance on these tasks. To address the inherent tokenization redundancy and the unidirectional-attention limitation of traditional causal LLMs, LENS consolidates the vocabulary space through token embedding clustering and investigates bidirectional attention and various pooling strategies. Specifically, LENS simplifies lexicon matching by assigning each dimension to a specific token cluster, where semantically similar tokens are grouped together, and unlocks the full potential of LLMs through bidirectional attention. Extensive experiments demonstrate that LENS outperforms dense embeddings on the Massive Text Embedding Benchmark (MTEB), delivering compact feature representations that match the sizes of their dense counterparts. Notably, combining LENS with dense embeddings achieves state-of-the-art performance on the retrieval subset of MTEB (i.e., BEIR).
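The abstract describes two mechanical ingredients: clustering the vocabulary's token embeddings so that each embedding dimension corresponds to one cluster of semantically similar tokens, and pooling vocabulary-level activations into one score per cluster. The sketch below (not the authors' implementation) shows one way these pieces could fit together; the helper names, the use of k-means, and the max-pooling choice are illustrative assumptions.

```python
# Minimal sketch of the cluster-based lexicon embedding idea (assumptions,
# not the LENS authors' code): cluster the model's input token embeddings
# with k-means, then pool per-token output logits into one score per
# cluster, so each embedding dimension maps to a token cluster.

import torch
from sklearn.cluster import KMeans


def build_cluster_map(token_embeddings: torch.Tensor, n_clusters: int) -> torch.Tensor:
    """Assign every vocabulary token to one of `n_clusters` clusters.

    token_embeddings: (vocab_size, hidden_dim) input embedding matrix.
    Returns a (vocab_size,) tensor of cluster ids.
    """
    km = KMeans(n_clusters=n_clusters, n_init="auto", random_state=0)
    labels = km.fit_predict(token_embeddings.cpu().numpy())
    return torch.as_tensor(labels, dtype=torch.long)


def lexicon_embedding(logits: torch.Tensor, cluster_ids: torch.Tensor,
                      n_clusters: int) -> torch.Tensor:
    """Collapse vocabulary-level logits into a cluster-level embedding.

    logits: (vocab_size,) pooled output logits for one input text.
    Each output dimension is the max activation over the tokens that
    belong to that cluster (max pooling is one of several options).
    """
    emb = torch.full((n_clusters,), float("-inf"))
    emb.scatter_reduce_(0, cluster_ids, logits, reduce="amax")
    return torch.relu(emb)  # keep only positive lexicon activations


# Toy usage with random tensors standing in for a real model:
vocab_size, hidden_dim, n_clusters = 1000, 64, 32
emb_matrix = torch.randn(vocab_size, hidden_dim)
cluster_ids = build_cluster_map(emb_matrix, n_clusters)
pooled_logits = torch.randn(vocab_size)  # e.g., mean-pooled over positions
vec = lexicon_embedding(pooled_logits, cluster_ids, n_clusters)
print(vec.shape)  # torch.Size([32]) -- one dimension per token cluster
```

Because the output has one dimension per cluster rather than per vocabulary entry, the representation stays compact, which is how a lexicon-style embedding can match the size of a dense one.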

Yibin Lei, Tao Shen, Yu Cao, Andrew Yates • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Information Retrieval | BEIR | – | 59 |
| Sentence Embedding Evaluation | MTEB (test) | Re-Rank Score: 60.91 | 48 |
| Retrieval | AIR-Bench English 24.04 | Wiki Score: 65.5 | 10 |
