
Beyond Pixels: A Training-Free, Text-to-Text Framework for Remote Sensing Image Retrieval

About

Semantic retrieval of remote sensing (RS) images is a critical task fundamentally challenged by the "semantic gap": the discrepancy between a model's low-level visual features and high-level human concepts. While large Vision-Language Models (VLMs) offer a promising path to bridge this gap, existing methods often rely on costly, domain-specific training, and there is a lack of benchmarks to evaluate the practical utility of VLM-generated text in a zero-shot retrieval context. To address this research gap, we introduce the Remote Sensing Rich Text (RSRT) dataset, a new benchmark featuring multiple structured captions per image. Based on this dataset, we propose a fully training-free, text-only retrieval framework called TRSLLaVA. Our methodology reformulates cross-modal retrieval as a text-to-text (T2T) matching problem, leveraging rich text descriptions as queries against a database of VLM-generated captions within a unified textual embedding space. This approach bypasses model training and fine-tuning entirely. Experiments on the RSITMD and RSICD benchmarks show our training-free method is highly competitive with state-of-the-art supervised models. For instance, on RSITMD, our method achieves a mean Recall of 42.62%, nearly doubling the 23.86% of the standard zero-shot CLIP baseline and surpassing several top supervised models. This validates that high-quality semantic representation through structured text provides a powerful and cost-effective paradigm for remote sensing image retrieval.
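
The page does not include code; the following is a minimal sketch of the text-to-text matching idea described in the abstract. It assumes a generic off-the-shelf sentence-embedding model (here sentence-transformers with an illustrative model name) standing in for the unified textual embedding space; the captions, query, and all names are hypothetical, not the authors' actual pipeline.

```python
# Minimal text-to-text (T2T) retrieval sketch: rank images by comparing a
# rich text query against VLM-generated captions in a shared text embedding
# space. No training or fine-tuning is involved anywhere in this loop.
import numpy as np
from sentence_transformers import SentenceTransformer

# Hypothetical stand-in encoder; any sentence-embedding model could be used.
encoder = SentenceTransformer("all-MiniLM-L6-v2")

# Database side: one VLM-generated caption per remote sensing image (toy examples).
captions = [
    "A dense residential area with rows of small buildings and narrow streets.",
    "An airport with two parallel runways and several parked aircraft.",
    "A harbor where cargo ships are docked along long rectangular piers.",
]

# Query side: a rich text description supplied by the user.
query = "satellite view of an airfield with runways and planes"

# Embed captions and query in the same textual space (L2-normalised vectors).
cap_emb = encoder.encode(captions, normalize_embeddings=True)
qry_emb = encoder.encode([query], normalize_embeddings=True)

# Cosine similarity reduces to a dot product on normalised vectors.
scores = cap_emb @ qry_emb[0]

# Rank candidate images by the similarity of their captions to the query.
for idx in np.argsort(-scores):
    print(f"{scores[idx]:.3f}  {captions[idx]}")
```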

J. Xiao, Y. Guo, X. Zi, K. Thiyagarajan, C. Moreira, M. Prasad • 2025

Related benchmarks

Task                      Dataset   Metric        Result   Rank
Image-Text Retrieval      RSICD     Mean Recall   31.33    26
Image-to-Text Retrieval   RSITMD    Mean Recall   42.62    19
Text-to-Image Retrieval   RSITMD    Mean Recall   42.62    19
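
For reference, mean Recall (mR) on RSICD/RSITMD is conventionally reported as the average of Recall@1, Recall@5, and Recall@10 over the two retrieval directions. A minimal sketch of that convention follows; the recall values in the example are placeholders, not numbers from this paper.

```python
# Mean Recall (mR) as commonly reported on RSICD/RSITMD: the average of
# R@1, R@5, and R@10 over image-to-text and text-to-image retrieval.
def mean_recall(i2t, t2i):
    """i2t, t2i: (R@1, R@5, R@10) tuples in percent for each direction."""
    return sum(i2t + t2i) / 6.0

# Placeholder recall values purely for illustration.
print(mean_recall((20.0, 45.0, 60.0), (18.0, 42.0, 58.0)))  # -> 40.5
```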
