
Robust Remote Sensing Image-Text Retrieval with Noisy Correspondence

About

As a pivotal task bridging remote sensing imagery and linguistic understanding, Remote Sensing Image-Text Retrieval (RSITR) has attracted considerable research interest in recent years. However, almost all RSITR methods implicitly assume that image-text pairs are perfectly matched. In practice, acquiring a large set of well-aligned data pairs is often prohibitively expensive or even infeasible. Moreover, we observe that existing remote sensing datasets (e.g., RSITMD) do contain inaccurate or mismatched image-text descriptions. Based on these observations, we reveal an important but unexplored problem in RSITR, i.e., Noisy Correspondence (NC). To overcome these challenges, we propose a novel Robust Remote Sensing Image-Text Retrieval (RRSITR) paradigm that employs a self-paced learning strategy to mimic human cognitive learning patterns, thereby learning from easy to hard on multi-modal data with NC. Specifically, we first divide all training sample pairs into three categories based on the loss magnitude of each pair, i.e., clean sample pairs, ambiguous sample pairs, and noisy sample pairs. Then, we estimate the reliability of each training pair by assigning it a weight based on its loss value. Further, we design a new multi-modal self-paced function to dynamically regulate the training order and weights of the samples, thus establishing a progressive learning process. Finally, for noisy sample pairs, we present a robust triplet loss that dynamically adjusts a soft margin based on semantic similarity, thereby enhancing robustness against noise. Extensive experiments on three popular benchmark datasets demonstrate that the proposed RRSITR significantly outperforms state-of-the-art methods, especially at high noise rates. The code is available at: https://github.com/MSFLabX/RRSITR
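The two core ideas of the abstract, the loss-based three-way partition of training pairs and the robust triplet loss with a similarity-scaled soft margin, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, quantile thresholds, and the exact form of the soft margin are assumptions.

```python
import numpy as np

def partition_by_loss(losses, clean_q=0.5, noisy_q=0.9):
    """Split training pairs into clean / ambiguous / noisy buckets by
    per-pair loss magnitude. The quantile thresholds here are
    illustrative; the paper's partition criterion may differ."""
    lo, hi = np.quantile(losses, [clean_q, noisy_q])
    return np.where(losses <= lo, "clean",
                    np.where(losses <= hi, "ambiguous", "noisy"))

def robust_triplet_loss(sim_pos, sim_neg, base_margin=0.2):
    """Triplet loss whose margin is scaled by the positive pair's
    semantic similarity (a hypothetical form of the paper's soft
    margin). Suspected noisy pairs (low sim_pos) get a smaller
    margin, shrinking their influence on the gradient."""
    soft_margin = base_margin * sim_pos
    return np.maximum(0.0, soft_margin + sim_neg - sim_pos)
```

With this scaling, a confidently matched pair (e.g., `sim_pos=0.9`, `sim_neg=0.1`) incurs zero loss, while a low-similarity pair still contributes only a damped penalty rather than being pushed apart by the full fixed margin.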

Qiya Song, Yiqiang Xie, Yuan Sun, Renwei Dian, Xudong Kang• 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image-Text Retrieval | RSICD | Mean Recall | 32.7 | 119 |
| Image-to-Text Retrieval | NWPU (test) | R@1 | 22.84 | 44 |
| Text-to-Image Retrieval | NWPU (test) | R@1 | 13.62 | 44 |
| Image-to-Text Retrieval | RSITMD (20% noise ratio) | R@1 | 24.6 | 11 |
| Image-to-Text Retrieval | RSITMD (40% noise ratio) | R@1 | 23.23 | 11 |
| Image-to-Text Retrieval | RSITMD (60% noise ratio) | R@1 | 22.03 | 11 |
| Image-to-Text Retrieval | RSITMD (80% noise ratio) | R@1 | 16.9 | 11 |
| Text-to-Image Retrieval | RSITMD (20% noise ratio) | R@1 | 20.19 | 11 |
| Text-to-Image Retrieval | RSITMD (40% noise ratio) | R@1 | 19.27 | 11 |
| Text-to-Image Retrieval | RSITMD (60% noise ratio) | R@1 | 17.72 | 11 |

Showing 10 of 11 rows.
