
Decoding Matters: Addressing Amplification Bias and Homogeneity Issue for LLM-based Recommendation

About

Adapting Large Language Models (LLMs) for recommendation requires careful consideration of the decoding process, given the inherent differences between generating items and natural language. Existing approaches often directly apply LLMs' original decoding methods. However, we find these methods encounter significant challenges: 1) amplification bias -- where standard length normalization inflates scores for items containing tokens with generation probabilities close to 1 (termed ghost tokens), and 2) homogeneity issue -- generating multiple similar or repetitive items for a user. To tackle these challenges, we introduce a new decoding approach named Debiasing-Diversifying Decoding (D3). D3 disables length normalization for ghost tokens to alleviate amplification bias, and it incorporates a text-free assistant model to encourage tokens less frequently generated by LLMs for counteracting recommendation homogeneity. Extensive experiments on real-world datasets demonstrate the method's effectiveness in enhancing accuracy and diversity. The code is available at https://github.com/SAI990323/DecodingMatters.
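The amplification-bias fix can be illustrated with a small sketch. This is not the authors' implementation; the ghost-token threshold, the per-token log-probability inputs, and the exact normalization are assumptions based on the abstract's description (excluding near-deterministic "ghost" tokens from the length used for normalization):

```python
import math

def d3_score(token_logprobs, ghost_threshold=0.99):
    """Debiased length-normalized score for a candidate item (sketch).

    token_logprobs: per-token log-probabilities from the LLM for the
    generated item. Standard length normalization divides the summed
    log-probability by the full token count; following the paper's idea,
    we exclude "ghost" tokens (probability close to 1) from that count so
    near-deterministic tokens cannot inflate an item's score.
    The 0.99 threshold is an illustrative assumption.
    """
    total = sum(token_logprobs)
    # Count only tokens that carry real generation uncertainty.
    effective_len = sum(1 for lp in token_logprobs
                        if math.exp(lp) < ghost_threshold)
    effective_len = max(effective_len, 1)  # guard: all-ghost sequence
    return total / effective_len

# Two items with the same informative tokens; item_b is additionally
# padded with ghost tokens (probability ~0.999, log-prob ~0).
item_a = [math.log(0.4), math.log(0.5)]
item_b = [math.log(0.4), math.log(0.5),
          math.log(0.999), math.log(0.999)]

# Standard length normalization inflates item_b's score...
std_a = sum(item_a) / len(item_a)
std_b = sum(item_b) / len(item_b)
assert std_b > std_a

# ...while the debiased normalization scores the two nearly equally.
d3_a = d3_score(item_a)
d3_b = d3_score(item_b)
assert abs(d3_a - d3_b) < 0.01
```

The homogeneity side of D3 (the text-free assistant model that boosts tokens the LLM generates infrequently) would enter as an additional adjustment to the per-token scores before this normalization; its exact form is not specified in the abstract.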

Keqin Bao, Jizhi Zhang, Yang Zhang, Xinyue Huo, Chong Chen, Fuli Feng• 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Generative Recommendation | ML OOD 10M | Hit Rate @10: 50 | 18 |
| Generative Recommendation | Book-Crossing OOD | Hit Rate @10: 50 | 9 |
| Generative Recommendation | Generative Recommendation Popularity and Noisy Shifts | H@10: 0.45 | 9 |
| Generative Recommendation | Yelp OOD 2018 | H@10: 0.63 | 9 |
| Generative Recommendation | Book-Crossing (test) | H@10: 1.48 | 9 |
