
Curr-RLCER: Curriculum Reinforcement Learning for Coherent Explainable Recommendation

About

Explainable recommender systems (RSs) are designed to explicitly expose the rationale behind each recommendation, thereby enhancing the transparency and credibility of RSs. Previous methods often jointly predict ratings and generate explanations, but overlook the incoherence between these two objectives. To address this issue, we propose Curr-RLCER, a reinforcement learning framework for explanation-coherent recommendation with dynamic rating alignment. It employs curriculum learning, transitioning from basic prediction tasks (i.e., click-through rate (CTR) and selection-based rating prediction) to open-ended generation of recommendation explanations. In particular, the reward of each stage is designed to progressively enhance the stability of the RS. Furthermore, a coherence-driven reward mechanism is proposed to enforce coherence between generated explanations and predicted ratings, supported by a specifically designed evaluation scheme. Extensive experimental results on three explainable recommendation datasets demonstrate the effectiveness of the proposed framework. Code and datasets are available at https://github.com/pxcstart/Curr-RLCER.
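To make the coherence-driven reward concrete: the paper's actual mechanism relies on a specifically designed (LLM-based) evaluation scheme, which is not reproduced here. The following is a minimal toy sketch of the underlying idea — rewarding agreement between the polarity of a generated explanation and the predicted rating — using a hypothetical hand-rolled sentiment lexicon as a stand-in for the real evaluator. All names and the lexicon are illustrative assumptions, not the authors' implementation.

```python
# Toy sketch of a coherence-style reward: the explanation's sentiment
# (here, a crude lexicon-based proxy) should match the predicted rating.
# The lexicon and function names are hypothetical, for illustration only.

POSITIVE = {"great", "love", "comfortable", "durable", "perfect"}
NEGATIVE = {"broke", "poor", "uncomfortable", "cheap", "disappointing"}

def explanation_sentiment(explanation: str) -> float:
    """Map an explanation to a polarity score in [0, 1] (0.5 = neutral)."""
    words = [w.strip(".,!?") for w in explanation.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    if pos + neg == 0:
        return 0.5  # no sentiment-bearing words found
    return pos / (pos + neg)

def coherence_reward(predicted_rating: float, explanation: str) -> float:
    """Reward in [0, 1]: high when explanation polarity matches the rating.

    The predicted rating (assumed 1-5 scale) is normalized to [0, 1] and
    compared with the explanation's polarity; the reward is 1 minus the gap.
    """
    rating_norm = (predicted_rating - 1.0) / 4.0
    return 1.0 - abs(rating_norm - explanation_sentiment(explanation))
```

For example, a 5-star rating paired with "it broke" would score near 0, while the same rating paired with "great, love it" would score near 1 — the kind of signal a policy-gradient update could then maximize alongside the rating-prediction objective.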

Xiangchen Pan, Wei Wei • 2026

Related benchmarks

Task                                     Dataset   Metric               Result   Rank
Rating Prediction                        Baby      RMSE                 0.6899   6
Rating Prediction                        Sports    RMSE                 0.6943   6
Rating Prediction                        Clothing  RMSE                 0.6099   6
Explanation-rating coherence evaluation  Baby      GPT Score            83.34    4
Explanation-rating coherence evaluation  Sports    GPT Coherence Score  0.8516   4
Explanation-rating coherence evaluation  Clothing  GPT Score            0.8679   4
