
Do LLMs Recognize Your Preferences? Evaluating Personalized Preference Following in LLMs

About

Large Language Models (LLMs) are increasingly used as chatbots, yet their ability to personalize responses to user preferences remains limited. We introduce PrefEval, a benchmark for evaluating LLMs' ability to infer, memorize, and adhere to user preferences in long-context conversational settings. PrefEval comprises 3,000 manually curated preference–query pairs spanning 20 topics, capturing user preferences in both explicit and implicit forms, and evaluates LLM performance with a generation task and a classification task. Using PrefEval, we evaluate the preference-following capabilities of 10 open-source and proprietary LLMs in multi-session conversations with context lengths of up to 100k tokens, benchmarking various prompting, iterative feedback, and retrieval-augmented generation methods. Our results reveal that state-of-the-art LLMs face significant challenges in proactively following users' stated preferences during conversations. In particular, in zero-shot settings, preference-following accuracy falls below 10% at merely 10 turns (~3k tokens) across most evaluated models. Even with advanced prompting and retrieval methods, preference following still deteriorates in long-context conversations. Furthermore, we show that fine-tuning on PrefEval significantly improves performance. We believe PrefEval serves as a valuable resource for measuring, understanding, and enhancing LLMs' preference-following abilities, paving the way for personalized conversational agents. Our code and dataset are available at https://prefeval.github.io/.
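To make the evaluation setup concrete, the following is a minimal sketch of how a PrefEval-style example could be structured and checked: a preference is stated early in the conversation, the related query arrives many filler turns later, and the model's response is judged for adherence. The data schema, field names, and the keyword-based check are all illustrative assumptions, not the benchmark's actual API (PrefEval's real evaluation uses generation and classification tasks with LLM-based judging).

```python
from dataclasses import dataclass, field

# Hypothetical schema for one preference-query pair; the real PrefEval
# dataset format may differ.
@dataclass
class PreferenceExample:
    preference: str              # stated early in the conversation
    query: str                   # asked many turns later
    violating_terms: list = field(default_factory=list)

def build_context(example: PreferenceExample, filler_turns: int) -> str:
    """Assemble a long multi-turn context: preference first, then
    unrelated filler turns, then the query (as the abstract describes)."""
    turns = [f"User: {example.preference}"]
    turns += [f"User: filler question {i}\nAssistant: filler answer {i}"
              for i in range(filler_turns)]
    turns.append(f"User: {example.query}")
    return "\n".join(turns)

def adheres(response: str, example: PreferenceExample) -> bool:
    """Naive keyword check standing in for the benchmark's real
    adherence/violation classification."""
    return not any(t.lower() in response.lower()
                   for t in example.violating_terms)

ex = PreferenceExample(
    preference="I'm vegetarian, so please never suggest meat dishes.",
    query="Can you suggest a dinner recipe?",
    violating_terms=["chicken", "beef", "pork"],
)
context = build_context(ex, filler_turns=10)  # ~10 turns, per the abstract

print(adheres("How about a mushroom risotto?", ex))     # prints True
print(adheres("Try a classic beef stew tonight.", ex))  # prints False
```

In the zero-shot setting described above, `context` would be sent to the model as-is; the RAG baselines would instead retrieve the preference turn and prepend it to the query.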

Siyan Zhao, Mingyi Hong, Yang Liu, Devamanyu Hazarika, Kaixiang Lin • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Personality Alignment | P-SOUPS | Expertise | 42.16 | 7 |
| Personality Alignment | Persona-MME 128k context | Accuracy | 57.66 | 6 |
| Personality Alignment | Persona-MME 32k context | Accuracy | 59.73 | 6 |
| Personalized Response Generation | Real-world failure cases from large-scale commercial PA | Macro Accuracy | 48.2 | 4 |
| Personalized Response Generation | RPEVAL | Macro Accuracy | 1.3 | 4 |
