Visual Words Meet BM25: Sparse Auto-Encoder Visual Word Scoring for Image Retrieval
About
Dense image retrieval is accurate but offers limited interpretability and attribution, and it can be compute-intensive at scale. We present **BM25-V**, which applies Okapi BM25 scoring to sparse visual-word activations from a Sparse Auto-Encoder (SAE) on Vision Transformer patch features. Across a large gallery, visual-word document frequencies are highly imbalanced and follow a Zipfian-like distribution, making BM25's inverse document frequency (IDF) weighting well suited for suppressing ubiquitous, low-information words and emphasizing rare, discriminative ones. BM25-V retrieves high-recall candidates via sparse inverted-index operations and serves as an efficient first-stage retriever for dense reranking. Across seven benchmarks, BM25-V achieves Recall@200 ≥ 0.993, enabling a two-stage pipeline that reranks only K=200 candidates per query and recovers near-dense accuracy within 0.2% on average. An SAE trained once on ImageNet-1K transfers zero-shot to seven fine-grained benchmarks without fine-tuning, and BM25-V retrieval decisions are attributable to specific visual words with quantified IDF contributions.
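The scoring idea above can be sketched as standard Okapi BM25 over bags of visual words served from an inverted index. This is a minimal illustrative sketch, not the paper's implementation: it assumes each image has already been reduced to a dict of visual-word IDs with activation-derived term frequencies (e.g. top-activating SAE units per patch), and the toy three-image gallery and parameter values (`k1`, `b`) are placeholders.

```python
import math
from collections import defaultdict


def build_index(gallery):
    """gallery: list of {word_id: term_freq} dicts, one per image.
    Returns an inverted index (word_id -> postings) and per-image lengths."""
    index = defaultdict(list)  # word_id -> [(image_id, tf), ...]
    doc_len = []
    for image_id, words in enumerate(gallery):
        doc_len.append(sum(words.values()))
        for w, tf in words.items():
            index[w].append((image_id, tf))
    return index, doc_len


def bm25_scores(query, index, doc_len, k1=1.2, b=0.75):
    """query: {word_id: term_freq}. Returns {image_id: BM25 score}.
    Only images sharing a visual word with the query are ever touched."""
    n = len(doc_len)
    avg_len = sum(doc_len) / n
    scores = defaultdict(float)
    for w in query:
        postings = index.get(w, [])
        df = len(postings)
        if df == 0:
            continue
        # Okapi IDF: rare visual words contribute more, ubiquitous ones are damped.
        idf = math.log((n - df + 0.5) / (df + 0.5) + 1.0)
        for image_id, tf in postings:
            norm = k1 * (1 - b + b * doc_len[image_id] / avg_len)
            scores[image_id] += idf * tf * (k1 + 1) / (tf + norm)
    return scores


# Toy gallery: visual word 3 is rare (only image 1), word 1 is ubiquitous.
gallery = [
    {1: 2.0, 2: 1.0},          # image 0
    {1: 1.0, 3: 3.0},          # image 1
    {1: 1.0, 2: 1.0, 4: 2.0},  # image 2
]
index, doc_len = build_index(gallery)
scores = bm25_scores({3: 1.0}, index, doc_len)
ranked = sorted(scores, key=scores.get, reverse=True)
print(ranked)  # only image 1 contains the rare query word
```

In a first-stage retriever, the top-K images by this score (K=200 in the paper's pipeline) would then be passed to a dense reranker.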
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Retrieval | Aircraft (test) | Recall@1 70.4 | 11 |
| Fine-grained retrieval | Pets (test) | Recall@1 0.916 | 6 |
| Fine-grained retrieval | Flowers-102 (test) | Recall@1 99.1 | 6 |
| Fine-grained retrieval | DTD (test) | Recall@1 76.9 | 6 |
| Fine-grained retrieval | CUB-200 (test) | Recall@1 0.755 | 6 |
| Fine-grained retrieval | CARS 196 (test) | Recall@1 91.8 | 6 |
| Fine-grained retrieval | Food-101 (test) | Recall@1 95 | 6 |