
On the Challenges and Opportunities of Learned Sparse Retrieval for Code

About

Retrieval over large codebases is a key component of modern LLM-based software engineering systems. Existing approaches predominantly rely on dense embedding models, while learned sparse retrieval (LSR) remains largely unexplored for code. However, applying sparse retrieval to code is challenging due to subword fragmentation, semantic gaps between natural-language queries and code, diversity of programming languages and sub-tasks, and the length of code documents, which can harm sparsity and latency. We introduce SPLADE-Code, the first large-scale family of learned sparse retrieval models specialized for code retrieval (600M-8B parameters). Despite a lightweight one-stage training pipeline, SPLADE-Code achieves state-of-the-art performance among retrievers under 1B parameters (75.4 on MTEB Code) and competitive results at larger scales (79.0 with 8B). We show that learned expansion tokens are critical to bridge lexical and semantic matching, and provide a latency analysis showing that LSR enables sub-millisecond retrieval on a 1M-passage collection with little effectiveness loss.
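To make the core idea concrete: a learned sparse retriever like SPLADE represents queries and documents as sparse vocabulary-weight vectors, so relevance is a dot product over the few terms both sides activate, which is what makes inverted-index lookup fast. The snippet below is a toy sketch, not the paper's implementation; the term weights and the expansion term are invented for illustration.

```python
def sparse_score(query_vec: dict, doc_vec: dict) -> float:
    """Dot product over shared terms only (inverted-index friendly)."""
    return sum(w * doc_vec[t] for t, w in query_vec.items() if t in doc_vec)

# Hypothetical learned sparse vectors: term -> weight.
# "order" is an expansion term the model added to the query, letting it
# match a document that never uses the query's surface words.
query = {"sort": 1.2, "array": 0.9, "order": 0.4}
doc = {"sorted": 0.8, "list": 0.7, "order": 0.5}

score = sparse_score(query, doc)  # only "order" overlaps: 0.4 * 0.5 = 0.2
```

Without the learned expansion term "order", the query and document above would share no terms and score zero, which illustrates why expansion is what bridges lexical and semantic matching in LSR.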

Simon Lupart, Maxime Louis, Thibault Formal, Hervé Déjean, Stéphane Clinchant • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Code Retrieval | MTEB Code (test) | Apps Score | 86.7 | 12 |
| Retrieval | CodeRAG-Bench | HumanEval Score | 100 | 11 |
| Retrieval | CPRet | T2C | 70.9 | 10 |
