
Understanding Generalization in Role-Playing Models via Information Theory

About

Role-playing models (RPMs) are widely used in real-world applications but underperform when deployed in the wild. This degradation can be attributed to distribution shifts, including user, character, and dialogue compositional shifts. Existing methods like LLM-as-a-judge fall short in providing a fine-grained diagnosis of how these shifts affect RPM generalization, and thus no formal framework exists to characterize RPM generalization behaviors. To bridge these gaps, we introduce an information-theoretic metric, named reasoning-based effective mutual information difference (R-EMID), to measure RPM performance degradation in an interpretable way. We also derive an upper bound on R-EMID to predict the worst-case generalization performance of RPMs and theoretically reveal how various shifts contribute to RPM performance degradation. Moreover, we propose a co-evolving reinforcement learning framework to adaptively model the connection among user, character, and dialogue context, thereby enhancing the estimation of the dialogue response generation probability, which is critical for calculating R-EMID. Finally, we evaluate the generalization performance of various RPMs using R-EMID, finding that user shift poses the highest risk among all shifts and that reinforcement learning is the most effective approach for enhancing RPM generalization.
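To make the metric concrete, here is a minimal, illustrative sketch, not the paper's implementation: it assumes the effective mutual information between context and response can be Monte-Carlo estimated from model log-probabilities as the average of log p(y|x) − log p(y), and takes R-EMID to be the gap between the in-distribution estimate and the estimate under shift. The function names, the log-probability stubs, and the toy data are all hypothetical.

```python
def effective_mutual_info(pairs, cond_logprob, marg_logprob):
    """Monte-Carlo estimate of I(context; response):
    average of log p(y | x) - log p(y) over (context, response) pairs."""
    return sum(cond_logprob(x, y) - marg_logprob(y) for x, y in pairs) / len(pairs)

def r_emid(in_dist_pairs, shifted_pairs, cond_logprob, marg_logprob):
    """Degradation under shift: in-distribution EMI minus shifted EMI.
    A larger positive value means the model loses more of the
    context-response dependence when the distribution shifts."""
    return (effective_mutual_info(in_dist_pairs, cond_logprob, marg_logprob)
            - effective_mutual_info(shifted_pairs, cond_logprob, marg_logprob))

# Hypothetical log-prob stubs: the model explains responses better in-distribution.
cond = lambda x, y: -1.0 if x == "in" else -2.5   # log p(y | x)
marg = lambda y: -3.0                              # log p(y)
in_dist = [("in", "r1"), ("in", "r2")]
shifted = [("out", "r1"), ("out", "r2")]
print(r_emid(in_dist, shifted, cond, marg))  # prints 1.5
```

In practice the conditional log-probability would come from the RPM itself (this is where the paper's co-evolving reinforcement learning framework improves the estimate), and the marginal from a context-free baseline.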

Yongqi Li, Hao Lang, Fei Huang, Tieyun Qian, Yongbin Li • 2025

Related benchmarks

Task | Dataset | Result | Rank
Role-playing | RPGBench User Shift Generalization | RP Score (German): -0.016 | 18
Role-playing | RPGBench Aggregate (Overall) | Avg Score: -0.026 | 18
Role-playing | RPGBench In-distribution | R-EMI: -0.04 | 18
Role-playing | RPGBench Dialogue Shift (Generalization) | Turn Composition: -0.066 | 18
Role-playing | RPGBench Character Shift (Generalization) | Deviation Score (Literature): -0.049 | 18
