Multi-View Graph Convolutional Network for Multimedia Recommendation

About

Multimedia recommendation has received much attention in recent years. It models user preferences based on both behavior information and item multimodal information. Though current GCN-based methods achieve notable success, they suffer from two limitations: (1) modality noise contaminates the item representations. Existing methods often mix modality features and behavior features in a single view (e.g., the user-item view) for propagation, so noise in the modality features may be amplified and coupled with the behavior features, which ultimately degrades feature discriminability; (2) user preference modeling is incomplete because all modality features are treated equally. Users often exhibit distinct modality preferences when purchasing different items, and fusing modality features uniformly ignores the relative importance of the different modalities, leading to suboptimal user preference modeling. To tackle these issues, we propose a novel Multi-View Graph Convolutional Network for multimedia recommendation. Specifically, to avoid modality noise contamination, the modality features are first purified with the aid of item behavior information. The purified modality features and the behavior features are then enriched in separate views, namely the user-item view and the item-item view, which enhances the distinguishability of the features. Meanwhile, a behavior-aware fuser is designed to model user preferences comprehensively by adaptively learning the relative importance of different modality features. Furthermore, we equip the fuser with a self-supervised auxiliary task that maximizes the mutual information between the fused multimodal features and the behavior features, so as to capture complementary and supplementary preference information simultaneously. Extensive experiments on three public datasets demonstrate the effectiveness of our method.
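The two key components of the abstract (a fuser that adaptively weights modality features conditioned on behavior features, and an auxiliary objective that pulls fused multimodal features toward behavior features) can be illustrated with a minimal numpy sketch. This is not the paper's architecture: the dot-product scoring in `behavior_aware_fuse` and the InfoNCE-style loss in `info_nce` are common stand-ins chosen here for brevity, and all function names, shapes, and the temperature `tau` are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def behavior_aware_fuse(behavior, modalities):
    """Fuse per-item modality features with weights conditioned on behavior.

    behavior:   (n_items, d) behavior (ID-based) features
    modalities: list of M arrays, each (n_items, d), e.g. [visual, textual]
    Returns fused features (n_items, d) and per-item modality weights (n_items, M).
    """
    # score each modality by its agreement with the item's behavior feature
    scores = np.stack([(behavior * m).sum(axis=1) for m in modalities], axis=1)
    weights = softmax(scores, axis=1)          # adaptive modality importance
    fused = sum(weights[:, i:i + 1] * m for i, m in enumerate(modalities))
    return fused, weights

def info_nce(fused, behavior, tau=0.2):
    """InfoNCE-style lower bound on mutual information between fused
    multimodal features and behavior features (positives on the diagonal)."""
    f = fused / np.linalg.norm(fused, axis=1, keepdims=True)
    b = behavior / np.linalg.norm(behavior, axis=1, keepdims=True)
    logits = f @ b.T / tau                      # cosine similarities, scaled
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))          # maximize agreement of pairs

# toy usage with two modalities
rng = np.random.default_rng(0)
behavior = rng.normal(size=(4, 8))
visual, textual = rng.normal(size=(4, 8)), rng.normal(size=(4, 8))
fused, w = behavior_aware_fuse(behavior, [visual, textual])
loss = info_nce(fused, behavior)
```

Minimizing `info_nce` maximizes a lower bound on the mutual information between the fused and behavior features, which matches the abstract's stated goal for the auxiliary task; the actual scoring network and loss in the paper may differ.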

Penghang Yu, Zhiyi Tan, Guanming Lu, Bing-Kun Bao • 2023

Related benchmarks

Task                      | Dataset                           | Metric      | Result | Rank
Recommendation            | Amazon Sports (test)              | Recall@10   | 4.76   | 57
Recommendation            | Amazon Baby (test)                | Recall@10   | 0.042  | 42
Multimodal Recommendation | Sports Amazon (test)              | Recall@10   | 7.29   | 39
Multimodal Recommendation | Amazon Baby (test)                | Recall@10   | 6.2    | 39
Multimodal Recommendation | Baby                              | Recall@10   | 6.13   | 38
Multimodal Recommendation | Electronics                       | Recall@10   | 0.0442 | 19
Recommendation            | Amazon Electronics (Elec) (test)  | R@10        | 0.033  | 15
Recommendation            | Baby                              | HR@10       | 4.14   | 15
Recommendation            | Elec                              | Hit Rate@10 | 0.0332 | 15
Video Recommendation      | MicroLens-50K raw-video (test)    | HR@10       | 7.08   | 11

Showing 10 of 11 rows
