
Bi-directional Training for Composed Image Retrieval via Text Prompt Learning

About

Composed image retrieval searches for a target image based on a multi-modal user query comprising a reference image and modification text describing the desired changes. Existing approaches to this challenging task learn a mapping from the (reference image, modification text) pair to an image embedding that is then matched against a large image corpus. One direction that has not yet been explored is the reverse, which asks: what reference image, when modified as described by the text, would produce the given target image? In this work we propose a bi-directional training scheme that leverages such reversed queries and can be applied to existing composed image retrieval architectures with minimal changes, improving model performance. To encode the bi-directional query, we prepend a learnable token to the modification text that designates the direction of the query, and then finetune the parameters of the text embedding module. We make no other changes to the network architecture. Experiments on two standard datasets show that our novel approach achieves improved performance over a baseline BLIP-based model that itself already achieves competitive performance. Our code is released at https://github.com/Cuberick-Orion/Bi-Blip4CIR.
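The abstract's core mechanism — prepending a learnable direction token to the modification text before it enters the text embedding module — can be sketched as follows. This is a minimal illustrative toy, not the authors' Bi-Blip4CIR implementation: the class name, vocabulary size, embedding dimension, and the use of a plain embedding table are all assumptions for demonstration; in the paper the prompt would be trained jointly with a BLIP text encoder.

```python
import numpy as np

rng = np.random.default_rng(0)

class BiDirectionalTextEncoder:
    """Toy text embedding module with two learnable direction-prompt
    vectors: index 0 for forward queries (reference -> target) and
    index 1 for reversed queries (target -> reference).
    NOTE: hypothetical sketch; not the actual Bi-Blip4CIR code."""

    def __init__(self, vocab_size=100, dim=16):
        # Token embedding table (would be a pretrained text encoder's
        # embeddings in practice).
        self.token_emb = rng.standard_normal((vocab_size, dim))
        # Two learnable prompt tokens, one per query direction; these
        # would be optimized during finetuning.
        self.direction_tokens = rng.standard_normal((2, dim))

    def encode(self, token_ids, reversed_query=False):
        # Look up embeddings for the modification-text tokens.
        x = self.token_emb[token_ids]                    # (seq_len, dim)
        # Prepend the direction token for this query's direction.
        d = self.direction_tokens[1 if reversed_query else 0]
        return np.vstack([d[None, :], x])                # (seq_len + 1, dim)

enc = BiDirectionalTextEncoder()
ids = np.array([5, 17, 42])          # token ids of the modification text
fwd = enc.encode(ids)                      # forward query embedding
rev = enc.encode(ids, reversed_query=True) # reversed query embedding
print(fwd.shape)  # (4, 16) -- one extra prompt token prepended
```

The rest of the architecture is unchanged: both forward and reversed query embeddings pass through the same network, so the single prepended token is what tells the model which retrieval direction is being asked for.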

Zheyuan Liu, Weixuan Sun, Yicong Hong, Damien Teney, Stephen Gould • 2023

Related benchmarks

Task                            Dataset              Metric              Result   Rank
Composed Image Retrieval        CIRR (test)          Recall@1            40.17    481
Composed Image Retrieval        FashionIQ (val)      Shirt Recall@10     41.76    455
Compositional Image Retrieval   FashionIQ 1.0 (val)  Average Recall@10   43.49    42
