
An Analysis of Large Language Models for Simulating User Responses in Surveys

About

Using Large Language Models (LLMs) to simulate user opinions has received growing attention. Yet LLMs, especially those trained with reinforcement learning from human feedback (RLHF), are known to exhibit biases toward dominant viewpoints, raising concerns about their ability to represent users from diverse demographic and cultural backgrounds. In this work, we examine the extent to which LLMs can simulate human responses to cross-domain survey questions through direct prompting and chain-of-thought prompting. We further propose CLAIMSIM, a claim diversification method that elicits viewpoints from LLM parametric knowledge and supplies them as contextual input. Experiments on the survey question answering task indicate that, while CLAIMSIM produces more diverse responses, both approaches struggle to accurately simulate users. Further analysis reveals two key limitations: (1) LLMs tend to maintain fixed viewpoints across varying demographic features and generate single-perspective claims; and (2) when presented with conflicting claims, LLMs struggle to reason over nuanced differences among demographic features, limiting their ability to adapt responses to specific user profiles.
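As an illustration of the direct-prompting setup described above, here is a minimal sketch of how a demographic profile might be turned into a persona-conditioned survey prompt for an LLM. The field names and template wording are hypothetical, not the paper's actual prompts:

```python
def build_persona_prompt(profile: dict, question: str, options: list) -> str:
    """Compose a survey-answering prompt that conditions the model on a
    user's demographic profile (hypothetical template, for illustration)."""
    persona = ", ".join(f"{k}: {v}" for k, v in profile.items())
    numbered = "\n".join(f"{i + 1}. {opt}" for i, opt in enumerate(options))
    return (
        f"You are answering a survey as a person with this profile: {persona}.\n"
        f"Question: {question}\n"
        f"Options:\n{numbered}\n"
        "Reply with the number of the option this person would most likely choose."
    )

# Hypothetical WVS-style religion question.
prompt = build_persona_prompt(
    {"age": "34", "country": "Brazil", "religion": "Catholic"},
    "How important is religion in your life?",
    ["Very important", "Rather important",
     "Not very important", "Not at all important"],
)
print(prompt)
```

The prompt string would then be sent to the LLM under study; chain-of-thought prompting would additionally ask the model to reason about the profile before choosing an option.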

Ziyun Yu, Yiru Zhou, Chen Zhao, Hongyi Wen • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| User Opinion Simulation | WVS Religion | Wasserstein Distance | 0.4 | 9 |
| User Opinion Simulation | WVS Gender | Wasserstein Distance | 0.47 | 9 |
| User Opinion Simulation | WVS Politics | Wasserstein Distance | 0.53 | 9 |
| Opinion Simulation | World Values Survey (WVS) (test) | Gender Accuracy | 40 | 9 |
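The benchmark rows above report Wasserstein distance between simulated and actual response distributions. For ordinal survey answers on a common scale, the 1-D Wasserstein distance reduces to the sum of absolute CDF differences; a minimal sketch with hypothetical response proportions:

```python
def wasserstein_1d(p, q):
    """1-D Wasserstein (earth mover's) distance between two discrete
    distributions over the same ordered support with unit spacing:
    the sum of absolute differences between their CDFs."""
    assert len(p) == len(q)
    dist, cdf_p, cdf_q = 0.0, 0.0, 0.0
    for pi, qi in zip(p, q):
        cdf_p += pi
        cdf_q += qi
        dist += abs(cdf_p - cdf_q)
    return dist  # final term is ~0 since both CDFs end at 1

# Hypothetical example: human vs. LLM-simulated answer proportions
# on a 4-point importance scale (each distribution sums to 1).
human = [0.30, 0.30, 0.25, 0.15]
llm   = [0.10, 0.55, 0.25, 0.10]
print(round(wasserstein_1d(human, llm), 3))
```

A lower value means the simulated distribution tracks the human one more closely; the distances of 0.4 to 0.53 in the table are on this scale.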
