
Beyond Text: Aligning Vision and Language for Multimodal E-Commerce Retrieval

About

Modern e-commerce search is inherently multimodal: customers make purchase decisions by jointly considering product text and visual information. However, most industrial retrieval and ranking systems rely primarily on textual information, underutilizing the rich visual signals available in product images. In this work, we study unified text-image fusion for two-tower retrieval models in the e-commerce domain. We demonstrate that domain-specific fine-tuning and two-stage alignment between the query and the product text and image modalities are both crucial for effective multimodal retrieval. Building on these insights, we propose a novel modality fusion network that fuses image and text information and captures cross-modal complementary information. Experiments on large-scale e-commerce datasets validate the effectiveness of the proposed approach.
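The two-tower retrieval setup with modality fusion described above can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the embedding dimension, the gated fusion rule, and all names (`fuse`, `w_gate`, etc.) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 64  # assumed embedding dimension

def l2_normalize(x):
    # Unit-normalize so a dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def fuse(text_emb, image_emb, w_gate):
    # Toy gated fusion: a per-item scalar gate decides how much
    # complementary image signal is added to the text embedding.
    gate_logit = np.concatenate([text_emb, image_emb], axis=-1) @ w_gate
    gate = 1.0 / (1.0 + np.exp(-gate_logit))  # sigmoid, shape (n, 1)
    return l2_normalize(text_emb + gate * image_emb)

# Product tower: pre-computed text and image embeddings for 3 toy products.
text_emb = rng.normal(size=(3, DIM))
image_emb = rng.normal(size=(3, DIM))
w_gate = rng.normal(size=(2 * DIM, 1))
product_emb = fuse(text_emb, image_emb, w_gate)

# Query tower: a single query embedding.
query_emb = l2_normalize(rng.normal(size=(1, DIM)))

# Retrieval: rank products by cosine similarity to the query.
scores = (query_emb @ product_emb.T).ravel()
ranking = np.argsort(-scores)
print(ranking)
```

In a production system the towers would be learned encoders and retrieval would use an approximate nearest-neighbor index over the fused product embeddings; the gating here only stands in for the paper's fusion network.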

Qujiaheng Zhang, Guagnyue Xu, Fengjie Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Product Retrieval | E-commerce Product Dataset Desirability | nDCG@1 | 84.1 | 2 |
| Product Retrieval | E-commerce Product Dataset Relevance | nDCG@1 | 91.1 | 2 |
