
Multi-Modal Recommendation Unlearning for Legal, Licensing, and Modality Constraints

About

User data spread across multiple modalities has popularized multi-modal recommender systems (MMRS). These systems recommend diverse content such as products, social media posts, and short videos based on a user-item interaction graph. With rising data privacy demands, recent methods propose unlearning private user data from uni-modal recommender systems (RS). However, methods for unlearning item data related to outdated user preferences, revoked licenses, and legally mandated removals remain largely unexplored. Previous RS unlearning methods are unsuitable for MMRS because their matrix-based representation is incompatible with the multi-modal user-item interaction graph. Moreover, their data partitioning step degrades performance on each shard due to poor data heterogeneity and requires costly performance aggregation across shards. This paper introduces MMRecUn, to our knowledge the first approach for unlearning in MMRS and for unlearning item data. Given a trained RS model, MMRecUn employs a novel Reverse Bayesian Personalized Ranking (BPR) objective to enable the model to forget marked data: the reverse BPR term attenuates the influence of user-item interactions in the forget set, while the forward BPR term reinforces the significance of user-item interactions in the retain set. Our experiments demonstrate that MMRecUn outperforms baseline methods across various unlearning requests when evaluated on benchmark MMRS datasets. MMRecUn achieves recall improvements of up to 49.85% over baseline methods and is up to 1.3x faster than the Gold model, which is trained on the retain set from scratch. MMRecUn offers significant advantages, including superior removal of target interactions, preservation of retained interactions, and zero overhead cost compared to previous methods. Code: https://github.com/MachineUnlearn/MMRecUN Extended version: arXiv:2405.15328
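The combined objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function names, the `alpha` weight, and the exact form of the reverse term (a sign flip of the standard BPR preference difference) are assumptions for exposition.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def bpr_loss(pos_score, neg_score):
    # Forward BPR: push the score of an observed (retained) interaction
    # above that of a sampled negative item.
    return -math.log(sigmoid(pos_score - neg_score))

def reverse_bpr_loss(pos_score, neg_score):
    # Reverse BPR (assumed sign flip): push the score of a forget-set
    # interaction below that of a sampled negative, attenuating its
    # influence on the model.
    return -math.log(sigmoid(neg_score - pos_score))

def unlearning_loss(retain_pairs, forget_pairs, alpha=1.0):
    # Combined unlearning objective: forward BPR on the retain set plus
    # reverse BPR on the forget set; `alpha` is a hypothetical weight
    # balancing forgetting against retention.
    loss = sum(bpr_loss(p, n) for p, n in retain_pairs)
    loss += alpha * sum(reverse_bpr_loss(p, n) for p, n in forget_pairs)
    return loss
```

A high retain-pair score keeps the forward loss small, while a high forget-pair score makes the reverse loss large, so gradient descent on this sum preserves retained interactions and suppresses forgotten ones.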

Yash Sinha, Murari Mandal, Mohan Kankanhalli • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Recommendation Unlearning | Sports (user-level) | Recall@20 (R) | 10.74 | 7 |
| Multimodal Recommendation Unlearning | Clothing (user-level) | Recall@20 (R) | 9.09 | 7 |
| Multimodal Recommendation Unlearning | Amazon Baby Forget (test) | Recall@20 (User) | 7.75 | 6 |
| Multimodal Recommendation Unlearning | Amazon Clothing Retain (test) | Recall@20 (User) | 0.0909 | 6 |
| Multimodal Recommendation Unlearning | Amazon Clothing Forget (test) | Recall@20 (User) | 52.34 | 6 |
| Multimodal Recommendation Unlearning | Amazon Sports Retain (test) | Recall@20 (User) | 10.74 | 6 |
| Multimodal Recommendation Unlearning | Amazon Sports Forget (test) | Recall@20 (User) | 4.57 | 6 |
| Multimodal Recommendation Unlearning | Clothing | Balanced Accuracy | 94.3 | 6 |
| Multimodal Recommendation Unlearning | Sports | Balanced Accuracy | 56.26 | 6 |
| Multimodal Recommendation Unlearning | Amazon Baby Retain (test) | Recall@20 (User) | 8.83 | 6 |
