
Visual Words Meet BM25: Sparse Auto-Encoder Visual Word Scoring for Image Retrieval

About

Dense image retrieval is accurate but offers limited interpretability and attribution, and it can be compute-intensive at scale. We present BM25-V, which applies Okapi BM25 scoring to sparse visual-word activations produced by a Sparse Auto-Encoder (SAE) on Vision Transformer patch features. Across a large gallery, visual-word document frequencies are highly imbalanced and follow a Zipfian-like distribution, making BM25's inverse document frequency (IDF) weighting well suited to suppressing ubiquitous, low-information words and emphasizing rare, discriminative ones. BM25-V retrieves high-recall candidates via sparse inverted-index operations and serves as an efficient first-stage retriever for dense reranking. Across seven benchmarks, BM25-V achieves Recall@200 ≥ 0.993, enabling a two-stage pipeline that reranks only K = 200 candidates per query and recovers near-dense accuracy to within 0.2% on average. An SAE trained once on ImageNet-1K transfers zero-shot to seven fine-grained benchmarks without fine-tuning, and BM25-V's retrieval decisions are attributable to specific visual words with quantified IDF contributions.
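To make the scoring concrete, here is a minimal sketch of Okapi BM25 over bags of visual words, treating each gallery image as a "document" of SAE visual-word activations. This is an illustration under simplifying assumptions (integer term frequencies, standard k1/b defaults), not the authors' implementation; the function names and toy visual-word ids are hypothetical.

```python
import math
from collections import Counter, defaultdict


def bm25_index(gallery, ):
    """Build an inverted index, IDF table, and length stats.

    gallery: list of Counters mapping visual-word id -> activation count.
    """
    N = len(gallery)
    df = Counter()                        # document frequency per word
    inverted = defaultdict(list)          # word -> [(image_id, tf), ...]
    for i, words in enumerate(gallery):
        for w, tf in words.items():
            df[w] += 1
            inverted[w].append((i, tf))
    # Okapi IDF: rare words get large weights, ubiquitous ones near zero.
    idf = {w: math.log((N - n + 0.5) / (n + 0.5) + 1.0) for w, n in df.items()}
    doclen = [sum(d.values()) for d in gallery]
    avgdl = sum(doclen) / N
    return inverted, idf, doclen, avgdl


def bm25_score(query_words, inverted, idf, doclen, avgdl, k1=1.2, b=0.75):
    """Score gallery images against a query's visual words.

    Only images sharing at least one word with the query are touched,
    which is what makes the sparse inverted-index pass cheap.
    """
    scores = defaultdict(float)
    for w in query_words:
        if w not in inverted:
            continue
        for i, tf in inverted[w]:
            norm = k1 * (1.0 - b + b * doclen[i] / avgdl)
            scores[i] += idf[w] * tf * (k1 + 1.0) / (tf + norm)
    # Descending score order; the top K candidates would go to dense reranking.
    return sorted(scores.items(), key=lambda kv: -kv[1])


# Toy usage: "sky" appears everywhere (low IDF), "wing" is rare (high IDF).
gallery = [
    Counter({"sky": 5, "wing": 3}),
    Counter({"sky": 4}),
    Counter({"sky": 6, "fur": 2}),
]
inverted, idf, doclen, avgdl = bm25_index(gallery)
ranking = bm25_score({"wing", "sky"}, inverted, idf, doclen, avgdl)
```

In this toy example the rare word "wing" dominates the score, so image 0 ranks first even though every image matches "sky", mirroring how IDF suppresses ubiquitous visual words.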

Donghoon Han, Eunhwan Park, Seunghyeon Seo • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Retrieval | Aircraft (test) | Recall@1 | 70.4 | 11 |
| Fine-grained retrieval | Pets (test) | Recall@1 | 0.916 | 6 |
| Fine-grained retrieval | Flowers-102 (test) | Recall@1 | 99.1 | 6 |
| Fine-grained retrieval | DTD (test) | Recall@1 | 76.9 | 6 |
| Fine-grained retrieval | CUB-200 (test) | Recall@1 | 0.755 | 6 |
| Fine-grained retrieval | CARS 196 (test) | Recall@1 | 91.8 | 6 |
| Fine-grained retrieval | Food-101 (test) | Recall@1 | 95 | 6 |
