
Rethinking Personalization in Large Language Models at the Token Level

About

With large language models (LLMs) now performing strongly across diverse tasks, there is growing demand for them to personalize outputs for individual users. Personalization is typically framed as an additional layer on top of a base NLP task, requiring model responses to meet user-specific needs while still accomplishing the underlying task. From a token-level perspective, different tokens in a response contribute to personalization to varying degrees. Tokens with higher personalization relevance should therefore receive greater emphasis when developing personalized LLMs. However, accurately estimating such personalization degrees remains challenging. To address this challenge, we propose PerContrast, a self-contrast method that estimates each output token's dependence on user-specific information through causal intervention. Building on this mechanism, we develop the PerCE loss, which adaptively upweights tokens with higher estimated personalization degrees during training via a bootstrap procedure that alternates between estimating personalization degrees and optimizing the corresponding tokens. Experiments on multiple LLMs demonstrate that PerCE substantially improves personalization performance with minimal additional cost, achieving average gains of over 10% and up to 68.04% on the LongLaMP dataset, along with strong cross-task and cross-scenario transferability. These results highlight the importance of token-level personalization modeling and establish token-aware training as a simple yet effective paradigm for advancing personalized LLMs.
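The general idea behind a self-contrast, token-weighted loss can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names (`per_contrast_weights`, `weighted_ce`) and the specific weighting scheme (contrasting target-token log-probabilities computed with and without the user profile in context, then upweighting tokens the profile boosts) are assumptions made for the example.

```python
import numpy as np

def log_softmax(logits):
    """Numerically stable log-softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def per_contrast_weights(logits_with_profile, logits_without_profile, targets, alpha=1.0):
    """Hypothetical per-token personalization weights.

    Contrasts each target token's log-probability under two contexts
    (with vs. without the user profile); tokens whose probability rises
    when the profile is present are treated as more personalization-relevant.
    """
    lp_with = np.take_along_axis(
        log_softmax(logits_with_profile), targets[:, None], axis=-1)[:, 0]
    lp_without = np.take_along_axis(
        log_softmax(logits_without_profile), targets[:, None], axis=-1)[:, 0]
    score = np.maximum(lp_with - lp_without, 0.0)     # profile-dependent tokens
    w = 1.0 + alpha * score / (score.mean() + 1e-8)   # upweight high-score tokens
    return w / w.mean()                               # keep the average weight at 1

def weighted_ce(logits, targets, weights):
    """Cross-entropy with per-token weights (plain CE when weights are all 1)."""
    lp = np.take_along_axis(log_softmax(logits), targets[:, None], axis=-1)[:, 0]
    return float(-(weights * lp).mean())

# Toy usage: 5 target tokens over a vocabulary of 7.
rng = np.random.default_rng(0)
logits_w = rng.normal(size=(5, 7))   # with user profile in context
logits_o = rng.normal(size=(5, 7))   # without user profile
targets = rng.integers(0, 7, size=5)
weights = per_contrast_weights(logits_w, logits_o, targets)
loss = weighted_ce(logits_w, targets, weights)
```

In a bootstrap-style training loop, one would alternate between recomputing these weights with the current model and taking gradient steps on the weighted loss; normalizing the weights to mean 1 keeps the loss scale comparable to ordinary cross-entropy.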

Chenheng Zhang, Yijun Lu, Lizhe Fang, Chunyuan Zheng, Jiajun Chai, Xiaohan Wang, Guojun Yin, Wei Lin, Yisen Wang, Zhouchen Lin • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Personalized Dialogue Evaluation | ALOE | Score (k=6) | 3.08 | 12
Personalized Review Writing | LongLaMP | ROUGE-L | 27.84 | 12
Personalized Topic Writing | LongLaMP | ROUGE-L | 23.07 | 12
Personalized Abstract Generation | LongLaMP | ROUGE-L | 37.85 | 12
Question Answering | DROP | Score | 0.66 | 10
Personalized Text Generation | LongLaMP PRW Qwen3-4B (test) | ROUGE-L | 26.68 | 6
Personalized Text Generation | LongLaMP PTW Qwen3-4B (test) | ROUGE-L | 21.02 | 6
Post Review Writing | LongLaMP PRW (test) | ROUGE-L | 27.71 | 6
Product Title Writing | LongLaMP PTW (test) | ROUGE-L | 22.18 | 6
Profile Attribute Generation | LongLaMP PAG (test) | ROUGE-L | 37.56 | 6
Showing 10 of 18 rows
