Rethinking Personalization in Large Language Models at the Token Level
About
With large language models (LLMs) now performing strongly across diverse tasks, there is growing demand for them to personalize outputs for individual users. Personalization is typically framed as an additional layer on top of a base NLP task, requiring model responses to meet user-specific needs while still accomplishing the underlying task. From a token-level perspective, different tokens in a response contribute to personalization to varying degrees. Tokens with higher personalization relevance should therefore receive greater emphasis when developing personalized LLMs. However, accurately estimating such personalization degrees remains challenging. To address this challenge, we propose PerContrast, a self-contrast method that estimates each output token's dependence on user-specific information through causal intervention. Building on this mechanism, we develop the PerCE loss, which adaptively upweights tokens with higher estimated personalization degrees during training via a bootstrap procedure, enabling the model to alternate between estimating token personalization degrees and optimizing the corresponding tokens. Experiments on multiple LLMs demonstrate that PerCE substantially improves personalization performance with minimal additional cost, achieving average gains of over 10% and up to 68.04% on the LongLaMP dataset, along with strong cross-task and cross-scenario transferability. These results highlight the importance of token-level personalization modeling and establish token-aware training as a simple yet effective paradigm for advancing personalized LLMs.
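The abstract's core mechanism can be sketched as follows. This is a minimal, hypothetical illustration (the paper's exact formulation is not given here): it assumes PerContrast-style weights come from contrasting the model's per-token log-probabilities with and without the user profile in the prompt, and that the PerCE loss scales each token's cross-entropy by its estimated personalization degree. The function names `percontrast_weights` and `perce_loss`, and the scaling form `1 + alpha * weight`, are illustrative assumptions.

```python
import math

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def percontrast_weights(logits_with, logits_without, targets):
    """Estimate each target token's personalization degree by contrasting
    the model's next-token probabilities with vs. without the user profile
    (an intervention-style ablation). A larger log-probability gap means
    the token depends more on user-specific information."""
    weights = []
    for lw, lwo, t in zip(logits_with, logits_without, targets):
        gap = math.log(softmax(lw)[t]) - math.log(softmax(lwo)[t])
        weights.append(max(0.0, gap))  # keep only tokens the profile helps
    return weights

def perce_loss(logits_with, targets, weights, alpha=1.0):
    """Token-weighted cross-entropy: each token's CE term is scaled by
    1 + alpha * (its estimated personalization degree), so personalization-
    relevant tokens receive greater emphasis during training."""
    total = 0.0
    for lw, t, w in zip(logits_with, targets, weights):
        total += -(1.0 + alpha * w) * math.log(softmax(lw)[t])
    return total / len(targets)

# Toy example: vocabulary of 3 tokens, 2-token target sequence [0, 2].
# The profile changes nothing for the first step but sharpens the second.
logits_with = [[2.0, 0.1, 0.1], [0.1, 0.1, 3.0]]
logits_without = [[2.0, 0.1, 0.1], [1.0, 1.0, 1.0]]
targets = [0, 2]

w = percontrast_weights(logits_with, logits_without, targets)
loss = perce_loss(logits_with, targets, w)
```

In this sketch the bootstrap alternation would amount to periodically recomputing the weights with the current model before continuing training; that scheduling detail is omitted.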
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Personalized Dialogue Evaluation | ALOE | Score (k=6) | 3.08 | 12 |
| Personalized Review Writing | LongLaMP | ROUGE-L | 27.84 | 12 |
| Personalized Topic Writing | LongLaMP | ROUGE-L | 23.07 | 12 |
| Personalized Abstract Generation | LongLaMP | ROUGE-L | 37.85 | 12 |
| Question Answering | DROP | Score | 0.66 | 10 |
| Personalized Text Generation | LongLaMP PRW Qwen3-4B (test) | ROUGE-L | 26.68 | 6 |
| Personalized Text Generation | LongLaMP PTW Qwen3-4B (test) | ROUGE-L | 21.02 | 6 |
| Post Review Writing | LongLaMP PRW (test) | ROUGE-L | 27.71 | 6 |
| Product Title Writing | LongLaMP PTW (test) | ROUGE-L | 22.18 | 6 |
| Profile Attribute Generation | LongLaMP PAG (test) | ROUGE-L | 37.56 | 6 |