
PinCLIP: Large-scale Foundational Multimodal Representation at Pinterest

About

While multimodal Vision-Language Models (VLMs) have demonstrated significant success across many domains, integrating them into recommendation and retrieval systems remains challenging due to training-objective discrepancies and serving-efficiency bottlenecks. This paper introduces PinCLIP, a large-scale visual representation learning approach developed to enhance retrieval and ranking models at Pinterest by leveraging VLMs to learn image-text alignment. We propose a novel hybrid Vision Transformer architecture that uses a VLM backbone and a hybrid fusion mechanism to capture multimodal content representations at varying granularities. Beyond standard image-to-text alignment objectives, we introduce a neighbor alignment objective that models the cross-fusion of multimodal representations within the Pinterest Pin-Board graph. Offline evaluations show that PinCLIP outperforms state-of-the-art baselines such as Qwen by 20% on multimodal retrieval tasks. Online A/B testing demonstrates significant business impact, including substantial engagement gains across all major Pinterest surfaces. Notably, PinCLIP substantially mitigates the "cold-start" problem, improving fresh-content distribution with a 15% Repin increase for organic content and an 8.7% click-through increase for new ads.
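The abstract pairs a standard CLIP-style image-text contrastive objective with a neighbor alignment objective over the Pin-Board graph. A minimal numpy sketch of how such a combined loss could be structured, assuming a symmetric InfoNCE formulation; the `info_nce` helper, the `alpha` weighting, and the use of board-neighbor embeddings as positives are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def info_nce(a, b, temperature=0.07):
    """Symmetric InfoNCE loss: row i of `a` and row i of `b` are positives,
    all other rows in the batch serve as in-batch negatives."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    logits = (a @ b.T) / temperature

    def ce_diag(l):
        # Cross-entropy with the diagonal as the target class per row.
        l = l - l.max(axis=1, keepdims=True)          # numerical stability
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(logp))

    # Average over image-to-text and text-to-image directions.
    return 0.5 * (ce_diag(logits) + ce_diag(logits.T))

def pinclip_loss(img_emb, txt_emb, neighbor_emb, alpha=0.5):
    """Hypothetical combined objective: image-text alignment plus a
    neighbor-alignment term pulling a Pin's embedding toward embeddings
    of Pins saved to the same board (graph neighbors)."""
    return info_nce(img_emb, txt_emb) + alpha * info_nce(img_emb, neighbor_emb)
```

In this sketch, perfectly aligned pairs drive the loss toward zero while mismatched pairs are penalized by the in-batch negatives; the neighbor term injects Pin-Board graph structure that a pure image-text objective would miss.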

Josh Beal, Eric Kim, Jinfeng Rao, Rex Wu, Dmitry Kislyuk, Charles Rosenberg · 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Ads Ranking | Pinterest Ads (online experiment) | CTR | 5.02 | 1 |
| Candidate Retrieval | Pinterest Related Pins | Sitewide Repins | 36 | 1 |
| Image Retrieval | Pinterest Search | Fulfillment Rate | 34 | 1 |
| Ranking | Pinterest Homefeed | Surface Repins | 91 | 1 |
| Ranking | Pinterest Related Pins | Surface Repins | 1.84 | 1 |
| Ranking | Pinterest Search | Surface Repins | 96 | 1 |
