# Composed Image Retrieval for Remote Sensing

## About
This work introduces composed image retrieval to remote sensing. It allows a large image archive to be queried with an image example altered by a textual description, enriching the descriptive power over unimodal queries, whether visual or textual. The textual part can modify various attributes, such as shape, color, or context. A novel method fusing image-to-image and text-to-image similarity is introduced. We demonstrate that a vision-language model possesses sufficient descriptive power and that no further learning step or training data is necessary. We also present a new evaluation benchmark covering color, context, density, existence, quantity, and shape modifications. Our work not only sets the state of the art for this task, but also serves as a foundational step toward addressing a gap in the field of remote sensing image retrieval. Code at: https://github.com/billpsomas/rscir
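The core idea of fusing image-to-image and text-to-image similarity can be sketched as follows. This is a minimal illustration, not the paper's exact method: it assumes precomputed vision-language embeddings (e.g., from CLIP) and uses a simple convex combination of the two cosine-similarity scores, with a hypothetical weight `lam` balancing the textual modification against the visual query.

```python
import numpy as np

def retrieve(query_img_emb, query_txt_emb, db_embs, lam=0.5):
    """Rank database images by a convex combination of
    image-to-image and text-to-image cosine similarity.

    lam=0 reduces to pure image-to-image retrieval,
    lam=1 to pure text-to-image retrieval.
    """
    # L2-normalize so that dot products are cosine similarities
    def norm(x):
        return x / np.linalg.norm(x, axis=-1, keepdims=True)

    q_i, q_t, db = norm(query_img_emb), norm(query_txt_emb), norm(db_embs)
    sim = (1 - lam) * (db @ q_i) + lam * (db @ q_t)
    return np.argsort(-sim)  # database indices, most similar first

# Toy example: random vectors stand in for real CLIP features
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 512))          # archive of 100 embeddings
q_img = rng.normal(size=512)              # query image embedding
q_txt = rng.normal(size=512)              # textual modification embedding
ranking = retrieve(q_img, q_txt, db)
```

Because both similarity terms operate on embeddings from the same pretrained vision-language model, no additional training is required; only the fusion weight needs to be chosen.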
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Domain Conversion Retrieval | ImageNet-R | Recall@10 | 12.17 | 24 |
| Composed Image Retrieval | ImageNet-R (test) | Cartoon R@10 | 11.61 | 19 |
| Domain Conversion | LTLL | mAP (Today) | 24.56 | 10 |
| Domain Conversion | ImageNet-R | mAP (Cartoon) | 10.07 | 10 |
| Domain Conversion | NICO++ | AUT | 8.58 | 10 |
| Domain Conversion | miniDomainNet | CLIP Similarity | 7.52 | 10 |