
VisualSparta: An Embarrassingly Simple Approach to Large-scale Text-to-Image Search with Weighted Bag-of-words

About

Text-to-image retrieval is an essential task in cross-modal information retrieval, i.e., retrieving relevant images from a large and unlabelled dataset given textual queries. In this paper, we propose VisualSparta, a novel Visual-text Sparse Transformer Matching model that shows significant improvement in terms of both accuracy and efficiency. VisualSparta outperforms previous state-of-the-art scalable methods on MSCOCO and Flickr30K. It also achieves substantial retrieval speed advantages: for a 1 million image index, VisualSparta running on CPU obtains a ~391X speedup over CPU vector search and a ~5.4X speedup over GPU-accelerated vector search. Experiments show that this speed advantage grows even larger for bigger datasets, because VisualSparta can be efficiently implemented as an inverted index. To the best of our knowledge, VisualSparta is the first transformer-based text-to-image retrieval model that achieves real-time search over large-scale datasets, with significant accuracy improvement compared to previous state-of-the-art methods.
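The inverted-index idea above can be sketched in a few lines: each image is stored as a sparse bag of term weights, and a text query is scored by summing the weights of its terms over the postings lists. This is a minimal illustration, not the paper's implementation; the image IDs and weights below are made up (in VisualSparta the weights would come from the transformer's fragment-level matching scores).

```python
from collections import defaultdict


class SparseInvertedIndex:
    """Toy weighted bag-of-words retrieval via an inverted index."""

    def __init__(self):
        # term -> list of (image_id, precomputed weight)
        self.postings = defaultdict(list)

    def add_image(self, image_id, term_weights):
        """Index one image given its sparse term->weight map."""
        for term, weight in term_weights.items():
            self.postings[term].append((image_id, weight))

    def search(self, query_terms, top_k=2):
        """Score images by summing the weights of the query's terms."""
        scores = defaultdict(float)
        for term in query_terms:
            for image_id, weight in self.postings.get(term, []):
                scores[image_id] += weight
        return sorted(scores.items(), key=lambda kv: -kv[1])[:top_k]


# Hypothetical index of three images with made-up term weights.
index = SparseInvertedIndex()
index.add_image("img_dog", {"dog": 2.1, "grass": 0.8, "ball": 0.5})
index.add_image("img_cat", {"cat": 2.4, "sofa": 1.1})
index.add_image("img_park", {"grass": 1.3, "ball": 1.0, "dog": 0.4})

print(index.search(["dog", "ball"]))
```

Because only the postings lists of the query's terms are touched, query cost scales with query length rather than collection size, which is why this layout stays fast as the index grows.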

Xiaopeng Lu, Tiancheng Zhao, Kyusong Lee • 2021

Related benchmarks

Task                     | Dataset             | Metric               | Result | Rank
Text-to-Image Retrieval  | Flickr30K           | R@1                  | 57.1   | 460
Text-to-Image Retrieval  | Flickr30k (test)    | Recall@1             | 45.4   | 423
Text-to-Image Retrieval  | MSCOCO 5K (test)    | R@1                  | 45.1   | 286
Text-to-Image Retrieval  | MSCOCO (1K test)    | R@1                  | 68.7   | 104
Text-to-Image Retrieval  | MSCOCO 113K (test)  | Throughput (Query/s) | 275.5  | 4
Text-to-Image Retrieval  | MSCOCO 1M (test)    | Throughput (Query/s) | 117.3  | 4
