Aligning Multimodal Sequential Recommendations via Robust Direct Preference Optimization with Sparse MoE

About

Preference-based alignment objectives have been widely adopted, from RLHF-style pairwise learning in large language models to emerging applications in recommender systems. Yet existing work rarely examines how Direct Preference Optimization (DPO) behaves under implicit feedback, where unobserved items are not reliable negatives. We conduct systematic experiments on multimodal sequential recommendation to compare common negative-selection strategies and their interaction with DPO training. Our central finding is that a simple modification, replacing deterministic hard negatives with stochastic sampling from a dynamic top-K candidate pool, consistently improves ranking performance. We attribute its effectiveness to two factors: (1) it reduces the erroneous suppressive gradients caused by false negatives, and (2) it retains informative hard signals while smoothing optimization through controlled stochasticity. Combined with an optional sparse Mixture-of-Experts encoder for efficient capacity scaling, the resulting method, RoDPO, achieves up to 5.25% NDCG@5 on three Amazon benchmarks at nearly unchanged inference cost.

Hejin Huang, Jusheng Zhang, Kaitong Cai, Jian Wang, Rong Pan • 2026
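
To make the negative-selection idea concrete, here is a minimal PyTorch sketch of how stochastic sampling from a dynamic top-K candidate pool could replace a deterministic hard negative during DPO training. This is our own illustration under stated assumptions, not the paper's code; the names sample_topk_negative, dpo_pair_loss, and the score/mask tensors are all hypothetical.

```python
import torch
import torch.nn.functional as F


def sample_topk_negative(scores: torch.Tensor,
                         interacted: torch.Tensor,
                         k: int = 50) -> torch.Tensor:
    """Draw one negative per user from the model's current top-K candidates.

    scores:     (batch, num_items) current model scores over all items.
    interacted: (batch, num_items) bool mask of observed positives; these are
                never valid negatives under implicit feedback.
    """
    masked = scores.masked_fill(interacted, float("-inf"))  # exclude positives
    _, topk_idx = masked.topk(k, dim=-1)                    # dynamic top-K pool
    # Uniform draw inside the pool: hard negatives remain likely picks, but no
    # single (possibly false) negative is suppressed on every update.
    choice = torch.randint(0, k, (scores.size(0), 1), device=scores.device)
    return topk_idx.gather(-1, choice).squeeze(-1)


def dpo_pair_loss(pos_logit: torch.Tensor, neg_logit: torch.Tensor,
                  ref_pos_logit: torch.Tensor, ref_neg_logit: torch.Tensor,
                  beta: float = 0.1) -> torch.Tensor:
    """Standard DPO objective on (positive, sampled-negative) pairs, with the
    reference model's logits acting as the usual KL anchor."""
    margin = beta * ((pos_logit - ref_pos_logit) - (neg_logit - ref_neg_logit))
    return -F.logsigmoid(margin).mean()
```

The pool is "dynamic" in the sense that it is recomputed from the current model scores at each step, so the hardest candidates shift as training progresses; the uniform draw inside the pool is what prevents any one top-ranked item from absorbing a suppressive gradient every step.

Similarly, a toy top-1 gated sparse Mixture-of-Experts layer illustrates why encoder capacity can scale with the number of experts while inference cost stays nearly flat: each input row activates exactly one expert. Again, this is a generic sketch, not the paper's architecture.

```python
import torch
import torch.nn as nn


class SparseMoE(nn.Module):
    """Top-1 gated mixture of expert MLPs: parameters grow with num_experts,
    but each input row runs through exactly one expert."""

    def __init__(self, dim: int, num_experts: int = 4):
        super().__init__()
        self.gate = nn.Linear(dim, num_experts)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(),
                          nn.Linear(4 * dim, dim))
            for _ in range(num_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, dim). Route each row to its highest-probability expert.
        weights, expert_idx = self.gate(x).softmax(-1).max(-1)
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = expert_idx == e
            if mask.any():  # only run experts that received at least one row
                out[mask] = weights[mask].unsqueeze(-1) * expert(x[mask])
        return out
```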

Related benchmarks

Task                      | Dataset                  | Metric  | Result | Rank
--------------------------|--------------------------|---------|--------|-----
Sequential Recommendation | Amazon Beauty            | NDCG@10 | 5.08   | 84
Sequential Recommendation | Amazon Toys and Games    | NDCG@5  | 5.21   | 24
Sequential Recommendation | Amazon Home and Kitchen  | NDCG@5  | 0.0165 | 15
